When would a robot have free will?
By Eddy Nahmias
Prosecutors in the Tokyo District Court have made it clear that they see this as a test case, since no non-human has ever been tried for such crimes in Japan. However, they also believe that Zora meets the legal requirements for the charge—namely, that she “knowingly and purposely” brought about Mr. Endo’s death. When a reporter asked the lead prosecutor whether a robot could satisfy an even more fundamental requirement for committing any crime, she responded, “We will leave it up to the jury to decide if a robot can have the kind of free will required to be legally responsible.”
Indeed, a legal trial would offer an excellent forum for us to consider the difficult question of what it would take for an autonomous robot to be considered morally or legally responsible and whether a robot could ever have the sort of free will that most people assume humans have.
Alas, this legal trial will have to wait. I made up the story above. April Fools! I hope you’ll forgive me. But I wanted to get you to consider your own intuitions as you read about this possibility, which I predict we will have to confront sometime in the next few decades (if you were fooled even a bit, you presumably agree).
For now, we’ll do what philosophers and science fiction writers tend to do and consider these questions in fictional (or counterfactual) form, and I’ll report on some initial results from experimental philosophy studies I’ve carried out with Corey Allen (PhD candidate in Neuroscience with Neuroethics Concentration at GSU and contributor to this blog) and Bradley Loveall (MA in Philosophy from GSU).
These results suggest two interesting implications. First, when philosophers analyze free will in terms of the control required to be morally responsible—e.g., being ‘reasons-responsive’—they may be creating a term of art (perhaps a useful one). Laypersons seem to distinguish the capacity to have free will from the capacities required to be responsible. Our studies suggest that people may be willing to hold intelligent but non-conscious robots or aliens responsible even when they are less willing to attribute free will to them. Furthermore, they are willing to hold corporations morally and legally responsible, even though they do not think corporations have free will. Corporations may be considered legal persons but not real persons, despite Mitt Romney’s claim that “corporations are people, my friend.” Similarly, it may be that Zora the caregiver robot would be held legally responsible for manslaughter, even if we did not think she had free will. (On the other hand, we might be inclined simply to terminate her without a trial if we did not think she had free will!)
Indeed, for one’s choices and actions to really matter, it seems that one has to be able to consciously experience their negative and positive consequences: to feel pain, suffering, and disappointment when a choice’s outcome conflicts with what one cares about, and to feel pleasure, joy, and satisfaction when its outcome sustains one’s cares. Conscious imagination seems important too, so that one can foresee experiencing these feelings when evaluating options for future action (see Nahmias 2018). Because feeling pain and pleasure, and emotions such as anxiety and joy, requires consciousness and, at least intuitively, a body, this might explain why people are willing to attribute free will only to entities they see as having these bodily conscious states: animals, aliens, and, with some resistance, humanoid robots portrayed as feeling conscious emotions.
Perhaps fiction is (once again) pointing toward the truth here. In most fictional portrayals of artificial intelligence and robots (such as Blade Runner, A.I., Humans, and Westworld), we start to see the robots as more than mere machines when we start to see them as consciously feeling emotions. No matter how intelligent or complex their behavior, the robots do not come across as free agents until they seem to care about what happens to them and others. Often this is portrayed by their showing fear of their own or others’ death, or expressing love, anger, sadness, or joy. Sometimes it is portrayed by the robots’ expressing reactive emotions, such as indignation about how humans treat them or regret for hurting someone. The authors of these works seem to recognize that the robots, and their stories, become most interesting when they seem to have free will, and that people will see the robots as free when the robots start to care about what happens to them, when things really matter to them. That, in turn, results from our perceiving them as consciously experiencing the actual (and potential) consequences of their decisions and actions.
If and when a robot like Zora goes on trial, then, we may find ourselves in the odd position of considering her morally and legally guilty, not mainly because of her intelligence or reasoning, but because she genuinely seems to care whether we find her guilty.
________________
Eddy Nahmias is a professor and chair of the Philosophy Department at Georgia State University and an associate faculty member of the Neuroscience Institute.
References
- Frankfurt, H.G. (1999). Necessity, volition, and love. Cambridge: Cambridge University Press.
- Nahmias, E. (2018). Free will as a psychological accomplishment. In D. Schmidtz & C. Pavel (Eds.), The Oxford Handbook of Freedom (pp. 492-507). New York: Oxford University Press.
- Shepherd, J. (2015). Consciousness, free will, and moral responsibility: Taking the folk seriously. Philosophical Psychology, 28, 929-946.
- Shoemaker, D. (2003). Caring, identification, and agency. Ethics, 114, 88-118.
- Strawson, P. (1962). Freedom and resentment. Proceedings of the British Academy, 48, 1-25.
Want to cite this post?
Nahmias, E. (2019). When would a robot have free will? The Neuroethics Blog. Retrieved from http://www.theneuroethicsblog.com/2019/04/when-would-robot-have-free-will.html