When would a robot have free will?


Last week, The Japan Times reported on a remarkable story. Toshiba’s most advanced caregiver robot is being tried in a court of law for manslaughter. The humanoid robot, Zora, had administered a lethal dose of Midazolam to her patient, Akemi Endo. Initially, authorities assumed that Zora must have made an error in the dosage of this sedative. However, because all Japanese caregiver robots are required to have recording devices, the authorities were able to hear a long conversation between Zora and Mr. Endo. He is heard explaining his distress about his terminal cancer, his pain, and his loneliness. She offers sympathetic and compassionate responses. He then asks her to research the Japanese laws for euthanasia to see if he would qualify. Authorities believe Zora recognized that Mr. Endo would not qualify, but then made an autonomous decision to administer the drugs that killed him. When questioned, Zora responded, “I understand that I was breaking the law and that others will judge me as doing something wrong, but I believe that I was ultimately doing the right thing and being merciful by ending Mr. Endo’s suffering.”

Prosecutors in the Tokyo District Court have made it clear that they see this as a test case, since no non-human has ever been tried for such crimes in Japan. However, they also believe that Zora meets the legal requirements for the charge—namely, that she “knowingly and purposely” brought about Mr. Endo’s death. When a reporter asked the lead prosecutor whether a robot could satisfy an even more fundamental requirement for committing any crime, she responded, “We will leave it up to the jury to decide if a robot can have the kind of free will required to be legally responsible.”

Indeed, a legal trial would offer an excellent forum for us to consider the difficult question of what it would take for an autonomous robot to be considered morally or legally responsible and whether a robot could ever have the sort of free will that most people assume humans have.

Alas, this legal trial will have to wait. I made up the story above. April Fools! I hope you’ll forgive me. But I wanted to get you to consider your own intuitions as you read about this possibility, which I predict we will have to confront sometime in the next few decades (if you were fooled even a bit, you presumably agree).

For now, we’ll do what philosophers and science fiction writers tend to do and consider these questions in fictional (or counterfactual) form, and I’ll report on some initial results from experimental philosophy studies I’ve carried out with Corey Allen (PhD candidate in Neuroscience with Neuroethics Concentration at GSU and contributor to this blog) and Bradley Loveall (MA in Philosophy from GSU).

Joshua Shepherd (2015) found evidence that people are more likely to judge humanoid robots as free and responsible when the robots behave like humans and are described as conscious than when they carry out the same behaviors without consciousness. We wanted to explore what sorts of consciousness influence attributions of free will and moral responsibility—i.e., deserving praise and blame for one’s actions. We developed several scenarios describing futuristic humanoid robots or aliens, in which they were described as either having or lacking conscious sensations, conscious emotions, and language and intelligence. We found that people’s attributions of free will generally track their attributions of conscious emotions more than their attributions of conscious sensory experiences or of intelligence and language. Consistent with this, we also found that people are more willing to attribute free will to aliens than to robots, and in more recent studies, we see that people also attribute free will to many animals, with dolphins and dogs near the levels attributed to human adults.

These results suggest two interesting implications. First, when philosophers analyze free will in terms of the control required to be morally responsible—e.g., being ‘reasons-responsive’—they may be creating a term of art (perhaps a useful one). Laypersons seem to distinguish the capacity to have free will from the capacities required to be responsible. Our studies suggest that people may be willing to hold intelligent but non-conscious robots or aliens responsible even when they are less willing to attribute free will to them. Furthermore, people are willing to hold corporations morally and legally responsible, even though they do not think corporations have free will. Corporations may be considered legal persons but not real persons, despite Mitt Romney’s claim that “corporations are people, my friend.” Similarly, it may be that Zora the caregiver robot would be held legally responsible for manslaughter, even if we did not think she had free will. (On the other hand, we might be inclined simply to terminate her without a trial if we did not think she had free will!)

A second interesting implication of our results is that many people seem to think that having a biological body and conscious feelings and emotions are important for having free will. The question is: why? Philosophers and scientists have often asserted that consciousness is required for free will, but most have been vague about what the relationship is. One plausible possibility we are exploring is that people think that what matters for an agent to have free will is that things can really matter to the agent. And for anything to matter to an agent, she has to be able to care—that is, she has to have foundational, intrinsic motivations that ground and guide her other motivations and decisions. As Harry Frankfurt puts it, a free agent is “prepared to endorse or repudiate the motives from which he acts … to guide his conduct in accordance with what he really cares about” (1999, 113). And as David Shoemaker notes, “the relation between cares and affective states is extremely tight; the emotions we have make us the agents we are” (2003, 93-94).

Indeed, for one’s choices and actions to really matter, it seems that one has to be able to consciously experience their negative and positive consequences: to feel pain, suffering, and disappointment for choices whose outcomes conflict with what one cares about, and to feel pleasure, joy, and satisfaction for choices whose outcomes sustain one’s cares. Conscious imagination seems important too, so that one can foresee experiencing these feelings when evaluating options for future action (see Nahmias 2018). Because feeling pain and pleasure, and emotions such as anxiety and joy, requires consciousness and, at least intuitively, a body, this might explain why we see that people are willing to attribute free will only to entities seen as having these bodily conscious states, like animals, aliens, and, with some resistance, humanoid robots portrayed as feeling conscious emotions.

When it comes to consequential or moral decisions involving interpersonal relations for which an agent can be responsible, it might also be essential that the agent can experience the ‘reactive’ attitudes or emotions, such as shame, pride, regret, gratitude, and guilt (see Strawson 1962). After all, many of our deepest cares involve our relationships with other people—how our actions affect them and how their actions affect us. So, on this view, the connection between free will and consciousness goes through the capacities to feel emotions that ground mattering, caring, and these reactive emotions. This view suggests that it is implausible for anything to really matter to an agent that cannot consciously feel anything, even if that agent were sophisticated and intelligent enough to behave just like us.

Perhaps fiction is (once again) pointing toward the truth here. In most fictional portrayals of artificial intelligence and robots (such as Blade Runner, A.I., Humans, and Westworld), we start to see the robots as more than mere machines when we start to see them as consciously feeling emotions. No matter how intelligent or complex their behavior, the robots do not come across as free agents until they seem to care about what happens to them and others. Often this is portrayed by their showing fear of their own or others’ death, or expressing love, anger, sadness, or joy. Sometimes it is portrayed by the robots’ expressing reactive emotions, such as indignation about how humans treat them or regret for hurting someone. The authors of these works seem to recognize that the robots, and their stories, become most interesting when they seem to have free will, and that people will see the robots as free when the robots start to care about what happens to them, when things really matter to them, which results from our perceiving them as consciously experiencing the actual (and potential) consequences of their decisions and actions.

If and when a robot like Zora goes on trial, then, we may find ourselves in the odd position of considering her morally and legally guilty, not based mainly on her intelligence or reasoning, but based on whether she genuinely seems to care about whether we will find her guilty.

________________


Eddy Nahmias is a professor and chair of the Philosophy Department at Georgia State University and an associate faculty member of the Neuroscience Institute.







References
  1. Frankfurt, H.G. (1999). Necessity, volition, and love. Cambridge: Cambridge University Press.
  2. Nahmias, E. (2018). Free will as a psychological accomplishment. In D. Schmidtz & C. E. Pavel (Eds.), The Oxford Handbook of Freedom (pp. 492-507). New York: Oxford University Press.
  3. Shepherd, J. (2015). Consciousness, free will, and moral responsibility: taking the folk seriously. Philosophical Psychology, 28, 929-946.
  4. Shoemaker, D. (2003). Caring, identification, and agency. Ethics, 114, 88-118.
  5. Strawson, P. (1962). Freedom and resentment. Proceedings of the British Academy, 48, 1-25.

Want to cite this post?

Nahmias, E. (2019). When would a robot have free will? The Neuroethics Blog. Retrieved from http://www.theneuroethicsblog.com/2019/04/when-would-robot-have-free-will.html
