How Do We Want to Interact with Social Robots?

By Elisabeth Hildt 

Over the past decades, numerous studies have investigated how humans interact with robots (see, for example, Mavridis 2015; Saunderson & Nejat 2019). Social robots have been deployed in various contexts, for instance to provide assistance in fields such as healthcare and elderly care, or to engage and support children with autism spectrum disorder (Pretz 2019; Pepito et al. 2020). 

More recently, research on human-robot interaction has increasingly drawn on neuroscience and the behavioral sciences. Research on social cognition and human-robot interaction investigates how humans perceive, interact with, and respond to robots in social contexts. These studies use methods from cognitive and social neuroscience and from psychology to examine how humans interact with physically embodied social robots (Cross et al. 2019; Henschel et al. 2020). While one of the aims of this research field is to use neuroscience-related results to inform the development of social robots with which humans can intuitively interact, it is striking that the question of how humans would like to interact with social robots is rarely addressed. In this article, I argue that for human-robot interaction to be user-friendly, the broader implications of engaging with social robots need to be addressed. 

Recent research on social cognition has provided interesting insights. For example, Ciardo et al. (2020) found that the sense of agency (i.e., the perceived control participants feel over the outcome of an action) was reduced in a shared-control setting of human-robot interaction, much as it is in human-human interaction. The authors suggest that this effect results from humans representing the robot as an intentional agent. Based on these results, they discuss possible negative consequences of a reduced sense of agency while interacting with social robots, such as diffusion of responsibility, for example in robot-assisted care in hospitals. This could result in responsibility gaps if doctors or nurses rely too heavily on the technology. 

There is also a spectrum of studies analyzing how robots influence human behavior (Saunderson & Nejat 2019). A recent example is a study published by Connolly et al. (2020). The study team investigated whether a Cozmo robot that displays a sad face and shuts down for 10 seconds after being attacked by a human prompts prosocial intervention by other study participants, and whether this effect increases when two emotionally expressive bystander robots express sadness after their “fellow robot” has been attacked. The two Cozmo bystander robots reacted by turning towards the attacked robot and expressing sadness through anthropomorphic animations involving audio and facial displays. The results indicated that the bystander robots’ expression of disapproval increased the probability that research participants would intervene to support the attacked robot. Prosocial interventions included actions taken to safeguard the robot or to show empathy for it, as well as verbal interventions such as requests to stop the attack or expressions of disapproval. Reflections on the influencing and persuasive power of robots in human-robot interaction, and on blame-laden moral rebukes, i.e., a robot communicating its disapproval of others’ norm violations (see Zhu et al. 2020), may provide a fruitful framework for discussing this and similar research. 

All of this is fascinating, albeit preliminary, research in a fledgling field. Undoubtedly, more research is needed to build on these results and to provide a more conclusive description of the relevant factors in human-robot interaction. 

Overall, these studies indicate that the ways humans and the human brain react to and interact with social or humanoid robots resemble the ways they react to and interact with other human beings. Thus, it might seem that it does not make sense to ask: “How do we want to interact with social or humanoid robots?” The above studies seem to indicate that we do not have an option at all. Our brains will just react. 

One possible response is to say that this research merely reveals the ways humans perceive, react to, and interact with robots, and that it is best to get used to, and adjust to, the new reality of embodied AI technology and social robots. 

Such a position is not helpful, however. It is defeatist and fatalistic, and it omits a crucial factor: it is human beings who design technology, who decide how future robots will be built, how they will function, and what they will look like. 

Researchers in the fields of social cognition and human-robot interaction consider one of their aims to be providing insights that help develop social robots with which humans can intuitively interact. Cross and colleagues write in their editorial “From social brains to social robots” (Cross et al. 2019, p. 2): 

“This burgeoning research area stands to not only construct a more sophisticated understanding of the neurocognitive mechanisms and consequences of human artificial agent interactions, but also promises to inform the development of increasingly socially sophisticated robots.” 

The basic assumption often is that if humans interact with robots in a similar way to how they interact with other human beings, then this is a good thing, as easy and intuitive interaction with robots is considered an indication of user-friendliness (Saunderson & Nejat 2019; Pepito et al. 2020; Wykowska 2020). However, this assumption is far from proven. 

Instead, it is worthwhile to consider a broader conception of user-friendliness: one that does not just focus on the immediate human-robot interaction and treat intuitive technology use as the primary goal, but that also takes the possible individual and societal implications of interactions with social robots into account. 

Most crucially, such an approach reflects on the implications of human-robot interaction that, although it somewhat resembles human-human interaction, involves robots whose capabilities are not comparable to human capabilities. Current social or humanoid robots do not exhibit anything similar to human emotions, human agency, or a human mind. To what extent does interacting with robots involve deception and the inadequate attribution of presumed capabilities? To what extent can human-robot interaction that simulates human-human interaction be a meaningful substitute for interpersonal interaction? 

The answers to these questions and the ethical implications are far from clear. Factors to be considered include the implications of unilateral relationships, subliminal interference, emotional dependence, reductions in perceived autonomy and sense of agency, and diffusion of responsibility. 

Seen from such a broader perspective, user-friendliness might be interpreted quite differently. It may include aspects like preventing humans from inadequately ascribing presumed capabilities to robots, preventing users from developing unilateral emotional bonds, or preventing users from experiencing negative influences on their autonomy due to robots exhibiting emotionally expressive or persuasive behaviors. 


References

  1. Ciardo, F., Beyer, F., De Tommaso, D., Wykowska, A. (2020): Attribution of intentional agency towards robots reduces one’s own sense of agency, Cognition 194: 104109. https://doi.org/10.1016/j.cognition.2019.104109
  2. Connolly, J., Mocz, V., Salomons, N., Valdez, J., Tsoi, N., Scassellati, B., Vázquez, M. (2020): Prompting Prosocial Human Interventions in Response to Robot Mistreatment. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’20), March 23–26, 2020, Cambridge, United Kingdom. ACM, New York, NY, USA. https://doi.org/10.1145/3319502.3374781
  3. Cross, E.S., Hortensius, R., Wykowska, A. (2019): From social brains to social robots: applying neurocognitive insights to human–robot interaction, Phil. Trans. R. Soc. B 374: 20180024. https://doi.org/10.1098/rstb.2018.0024
  4. Henschel, A., Hortensius, R., Cross, E.S. (2020): Social Cognition in the Age of Human-Robot Interaction, Trends in Neurosciences 43(6): 373–384.
  5. Mavridis, N. (2015): A review of verbal and non-verbal human-robot interactive communication, Robotics and Autonomous Systems 63: 22–35.
  6. Pepito, J.A., Ito, H., Betriana, F., Tanioka, T., Locsin, R.C. (2020): Intelligent humanoid robots expressing artificial humanlike empathy in nursing situations, Nursing Philosophy 21(4): e12318. https://doi.org/10.1111/nup.12318
  7. Pretz, K. (2019): Humanoid Robots Teach Coping Skills to Children with Autism, IEEE Spectrum, 10 July 2019. https://spectrum.ieee.org/the-institute/ieee-member-news/humanoid-robots-teach-coping-skills-to-children-with-autism
  8. Saunderson, S., Nejat, G. (2019): How Robots Influence Humans: A Survey of Nonverbal Communication in Social Human–Robot Interaction, International Journal of Social Robotics 11: 575–608.
  9. Wiese, E., Metta, G., Wykowska, A. (2017): Robots As Intentional Agents: Using Neuroscientific Methods to Make Robots Appear More Social, Front. Psychol. 8: 1663. https://doi.org/10.3389/fpsyg.2017.01663
  10. Wykowska, A. (2020): Where I Work, Nature 583: 652.
  11. Zhu, Q., Williams, T., Jackson, B., Wen, R. (2020): Blame-Laden Moral Rebukes and the Morally Competent Robot: A Confucian Ethical Perspective, Science and Engineering Ethics 26: 2511–2526.

______________


Elisabeth Hildt is Professor of Philosophy and Director of the Center for the Study of Ethics in the Professions at the Illinois Institute of Technology in Chicago. Her research focus is on bioethics, neuroethics, and ethics of technology. Before moving to Chicago, she was the head of the Research Group on Neuroethics/Neurophilosophy at the Department of Philosophy at the University of Mainz, Germany. 




Want to cite this post?

Hildt, E. (2021). How Do We Want to Interact with Social Robots? The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2021/04/how-do-we-want-to-interact-with-social.html

