Should AI Have Moral Status? The Importance of Gauging Public Opinion

By Meghan Hurley

Image by Gerd Altmann via PublicDomainPictures.net

Artificial intelligence’s rapidly progressing ability to plan actions and to integrate and process information in a manner similar to humans, coupled with an increasingly anthropomorphic conceptualization of AI’s underlying mechanisms, has led experts in a variety of fields to discuss the likelihood of AI developing some form of consciousness or sentience. Some researchers believe that it is less a question of whether AI can achieve consciousness and more a question of when it will (Long & Kelley, 2010). Still, the ambiguous and often debated definition of consciousness prevents experts from agreeing on what consciousness might look like in AI, or how closely it would resemble current human models of consciousness. The criteria for determining which beings and entities deserve moral status are equally unclear. 

For some academics, moral status depends largely on “[whether an entity’s interests] morally matter to some degree for the entity’s own sake” (Jaworska & Tannenbaum, 2021), but other frequently suggested criteria include consciousness, self-awareness, physical substrate, and sentience. As of now, our conception of moral status appears fairly binary, deeming entities either worthy or unworthy of moral status, with criteria that remain ambiguous and far from universally accepted. This binary conception makes it even more difficult to predict how conscious AI with human-like abilities might function in society, what moral and legal status they might be given, and how they would be treated. Regardless of these uncertainties, the potential future in which AI obtains consciousness warrants discussion of the ethical, legal, and social implications of this development. 

While most of the decision-making regarding the moral, legal, and social status of AI will likely be conducted by academics and experts in the near future, it is equally pertinent to understand the public’s opinion and acceptance of this potential future. After all, changes that integrate AI into our legal and social systems will require the public to interact with AI far more often, including in interactions that may appear or feel human-like or intimate in nature. A recent study by Conrad et al. highlighted the importance of public opinion on bioethical issues, stating that “results of [their survey could] inform potential [policy] in multiple ways” (Conrad, Humphries, & Chatterjee, 2019). In the case of AI, it may be possible to inform and influence future policy regarding the moral status and legal rights of AI by better understanding the public’s comfort with and acceptance of proposed policy changes.  

The creation of a robust assessment tool 

Image via pxfuel

As of early 2021, little research had been done to gather data on public opinion of AI in general. In January of this year, however, an article was published that assessed public perception of an AI-mediated future with autonomous systems like smart offices, autonomous vehicles, and domotics (home automation) (Kassens-Noor et al., 2021). My ongoing work at Emory University, which follows a similar methodology, focuses on gathering these crucial public opinions regarding the moral status of AI by developing, evaluating, and administering an assessment tool to various stakeholders: not only members of the public, but also experts in related fields, including lawyers, computer scientists, neuroscientists, and ethicists. 

The interview process began in March and allowed me to engage with individuals with varying levels of understanding of and familiarity with AI. This not only led to modifications of the written assessment tool for clarity and effectiveness, but also provided interesting insight into how members of the public differently conceptualize AI and themes such as personhood, moral status, and moral responsibility. The four interviewees, each with different levels of formal education and familiarity with AI, were taken through a set of scenarios with AI as the actor, whether interacting with humans, performing a task and making some kind of mistake, or providing humans with information. Participants then rated their agreement or disagreement with various questions or statements, followed by a discussion in which they could further articulate their personal conceptions of moral status, personhood, and the abilities of AI. From these responses and discussions, similar conceptualizations of AI began to emerge across participants. 

While these conceptualizations cannot be extrapolated or generalized to the entire public, they raised some important questions after the interviews about what factors influence society’s perception of AI: 

  • What specific abilities make an entity deserving of the title “human-like”? What criteria make an entity worthy of identification as a “person” or worthy of “personhood” under the law? To what extent do these concepts overlap? How “human-like” does an entity need to be in order to be worthy of this identification?
  • To what extent do we as a society correlate worthiness of rights, protections, and welfare with “humanness”?
  • What abilities or characteristics must an entity have in order for us to care about their wellbeing?
  • How does the public understand and conceptualize moral responsibility? Can this conceptualization be applied to AI?
  • Should AI be blamed for their mistakes? What forms should this blame take (acknowledgement of mistake, legal action, etc.)?

One theme in particular that prompted fascinating discussions during the interview process was infallibility. Surprisingly, when participants discussed infallibility and AI making mistakes, they often compared AI to humans. While participants across different stakeholder groups were reluctant to attribute “humanness” to AI when it came to abilities such as feeling, consciousness, and self-awareness, they were more than happy to compare AI to humans in their mistake-making, with one participant even remarking that AI “[almost thinks like a] human because a human is not infallible either” (Hurley, 2021). Multiple interviewees echoed similar sentiments, feeling that AI should not be blamed for actions that are ultimately mistakes. This willingness to excuse or forgive AI for their mistakes suggests that people are beginning to conceptualize AI in a way that attributes to them at least some small amount of humanness. Although these participants had previously described AI as “just” pieces of technology, they did not treat them as such, instead giving them the benefit of the doubt when they made a mistake. 

Image by Chris 73 via Wikimedia Commons

It will be important in future interviews to better understand which stakeholders are willing to be so forgiving of AI and why, as well as to explore whether certain factors may reduce or prevent this willingness. If, for example, AI is referred to only as an “algorithm,” will participants feel some sort of “algorithm aversion” (Dietvorst, Simmons, & Massey, 2018), in which they feel negatively toward or want to avoid the AI because of its mistake? Moreover, does the physical appearance of an AI play a role in this forgiveness? Current research on trust in robots certainly points to this possibility; according to Kok and Soh (2020), the physical appearance and embodiment of a robot “is a key consideration in the formation of trust [in human-robot interactions].” If so, should we deploy more AIs with well-defined bodies in order to build human trust? Would an AI with a “body” make it easier for participants to attribute blame? Or would this “body” allow participants to better relate to the AI, making them even more willing to forgive and forget? Could this relationship between physical form and willingness to attribute blame or forgiveness affect the future prevalence of AIs with a physical “body” versus those that are “just algorithms”?

The answers to these questions and their ethical and social implications are uncertain but will ideally be explored through further interviews with various stakeholders. With more opinions and insight from interviewees, this project will strengthen the assessment tool so that, once modified, it can be administered to participants for quantitative data collection. With these data, we aim to develop a more complete view of public opinion on the moral status of AI. 

With regard to moral status in general, its lack of a universally accepted definition gives us the opportunity to shift our current understanding toward one that accommodates entities like AI that fit some, but not all, of the defining criteria of moral status. Perhaps we should adopt a conception of moral status that allows for degrees or levels based on which criteria are met, rather than restricting it to a binary (DeGrazia, 2008). Moreover, embracing AI as beings with moral status may force us to abandon notions about the “inferiority” of non-human beings and may lead to increased protection and welfare for these entities. There is, of course, a large difference between attributing “humanness” or “human-like” qualities to AI or other entities and attributing to them the full “personhood” that is currently the legal standard. Even so, the future may bring a reevaluation of which abilities and traits warrant protections for entities, possibly even a reconceptualization of what it means to be a “person” or to have “personhood,” making it crucial to understand AI’s moral status and place in our society.  


References

  1. Conrad, E. C., Humphries, S., & Chatterjee, A. (2019). Attitudes Toward Cognitive Enhancement: The Role of Metaphor and Context. AJOB Neuroscience, 10(1), 35-47. doi:10.1080/21507740.2019.1595771
  2. DeGrazia, D. (2008). Moral Status As a Matter of Degree? The Southern Journal of Philosophy, 46(2), 181-198. doi:10.1111/j.2041-6962.2008.tb00075.x
  3. Dietvorst, B. J., Simmons, J. P., & Massey, C. (2018). Overcoming Algorithm Aversion: People Will Use Imperfect Algorithms If They Can (Even Slightly) Modify Them. Management Science, 64(3), 1155-1170. doi:10.1287/mnsc.2016.2643
  4. Hurley, M. (2021). An assessment tool for the public opinion of the moral status of Artificial Intelligence. [Unpublished manuscript]. Emory University.
  5. Kassens-Noor, E., Wilson, M., Kotval-Karamchandani, Z., Cai, M., & Decaminada, T. (2021). Living with Autonomy: Public Perceptions of an AI-Mediated Future. Journal of Planning Education and Research. doi:10.1177/0739456x20984529
  6. Kok, B., & Soh, H. (2020). Trust in Robots: Challenges and Opportunities. Current Robotics Reports, 1(4), 297-309. doi:10.1007/s43154-020-00029-y
  7. Long, L. N., & Kelley, T. D. (2010). Review of Consciousness and the Possibility of Conscious Robots. Journal of Aerospace Computing, Information, and Communication, 7(2), 68-84. doi:10.2514/1.46188

______________


Meghan is a graduate student at Emory University completing a master’s degree in Bioethics. She conducts research under Dr. Gillian Hue at the Just Neuroethics Lab, with her work focusing on emergent neurotechnologies and the application of concepts such as moral status, moral agency, and legal status to artificial intelligence. Post-grad, she plans to pursue a JD/PhD and hopes to utilize this legal expertise to better analyze the legal implications of developing neurotechnologies and AI. 



Want to cite this post?

Hurley, M. (2021). Should AI Have Moral Status? The Importance of Gauging Public Opinion. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2021/08/should-ai-have-moral-status-importance.html

