What Does it Mean to Measure Mind Perception toward Robots? A Critical Review of the Main Self-Report Instruments

  • Review
  • Published in: International Journal of Social Robotics

Abstract

Although most studies that measure participants’ judgments and attitudes about whether humanoid robots possess (or appear to possess) a mind or mental capacities rely on verbal measures, there is as yet no standard psychometric instrument for this purpose. Using a COSMIN approach, this critical review summarizes the most valid and reliable self-report instruments that aim to measure mental state attribution to humanoid robots. We reviewed 501 papers and included only 11, finding that: (1) the instruments usually measure mental state attribution toward robots not as a phenomenon in its own right but as a factor associated with the tendency to anthropomorphize non-human entities; (2) there is no consensus on a definition of mental state attribution or on the psychometric dimensions that underlie it; and (3) the tendency to anthropomorphize does not by itself imply the attribution of mind to robots. In the discussion, we examine the general problem of mind perception/attribution and speculate on a possible theoretical basis for a multifactorial model that measures mind perception as part of a broader phenomenon we term “psycheidolia.”
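
Because the review’s inclusion criteria hinge on evidence of validity and reliability for each instrument, it may help to see what a basic reliability estimate looks like in practice. The sketch below, in Python, computes Cronbach’s alpha for a hypothetical set of Likert-type mind-attribution items; the item wordings, ratings, and scale are illustrative assumptions and are not drawn from any instrument discussed in this review.

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Internal-consistency estimate for a Likert-type scale.

    item_scores: 2-D array of shape (n_respondents, n_items).
    """
    item_scores = np.asarray(item_scores, dtype=float)
    n_items = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = item_scores.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 1-5 ratings of four mind-attribution items
# (e.g., "This robot can feel emotions") from six respondents.
ratings = np.array([
    [4, 5, 4, 3],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
    [4, 4, 5, 4],
])
print(f"Cronbach's alpha = {cronbach_alpha(ratings):.2f}")
```

Values around 0.70 or higher are conventionally read as acceptable internal consistency, although a COSMIN appraisal also weighs measurement properties such as content and structural validity rather than reliability alone.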

Data availability statement

The authors hereby declare that, since no new data were created or analyzed in this study, data sharing is not applicable to this article.

Acknowledgements

We would like to thank Santiago Fernandez-Ballina for his editorial help in writing this article.

Author information

Corresponding author

Correspondence to Victor Galvez.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Consent for publication

The authors hereby declare that this study included no participants whose consent for publication is required.

Supplementary information

Not applicable.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Galvez, V., Hanono, E. What Does it Mean to Measure Mind Perception toward Robots? A Critical Review of the Main Self-Report Instruments. Int J of Soc Robotics 16, 501–511 (2024). https://doi.org/10.1007/s12369-024-01113-5
