Should AI Have Moral Status? The Importance of Gauging Public Opinion
By Meghan Hurley
[Image by Gerd Altmann via PublicDomainPictures.net]
For some academics, moral status depends largely on "[whether an entity's interests] morally matter to some degree for the entity's own sake" (Jaworska & Tannenbaum, 2021), but other frequently suggested criteria include consciousness, self-awareness, physical substrate, and sentience. As of now, our conception of moral status is fairly binary: entities are deemed either worthy or unworthy of moral status, according to criteria that remain ambiguous and far from universally accepted. This binary conception makes it even more difficult to predict how conscious AI with human-like abilities might function in society, what moral and legal status they might be granted, and how they would be treated. Regardless of these uncertainties, the potential future in which AI attains consciousness warrants discussion of the ethical, legal, and social implications of such a development.
While most of the decision-making regarding the moral, legal, and social status of AI will likely be conducted by academics and experts in the near future, it is equally pertinent to understand the public's opinion of and openness to this potential future. After all, changes that integrate AI into our legal and social systems will require the public to interact with AI far more often, including in interactions that may appear or feel human-like or intimate in nature. A recent study by Conrad et al. highlighted the importance of public opinion on bioethical issues, stating that "results of [their survey could] inform potential [policy] in multiple ways" (Conrad, Humphries, & Chatterjee, 2019). In the case of AI, better understanding the public's comfort with and acceptance of proposed policy changes may likewise help inform future policy on AI's moral status and legal rights.
The creation of a robust assessment tool
[Image via pxfuel]
The interview process began in March and allowed me to engage with individuals with varying levels of understanding of and familiarity with AI. These interviews not only led to adjustments that made the written assessment tool clearer and more effective, but also provided interesting insight into how differently members of the public conceptualize AI and themes such as personhood, moral status, and moral responsibility. The four interviewees, who varied in formal education and familiarity with AI, were taken through a set of scenarios with AI as the actor: interacting with humans, performing a task and making some kind of mistake, or providing humans with information. Participants then rated their agreement or disagreement with various statements, followed by a discussion in which they could further articulate their personal conceptions of moral status, personhood, and the abilities of AI. From these responses and discussions, similar conceptualizations of AI began to emerge across participants.
While these conceptualizations cannot be extrapolated or generalized to the entire public, they raised some important questions about what factors influence society's perception of AI:
- What specific abilities make an entity deserving of the title “human-like”? What criteria make an entity worthy of identification as a “person” or worthy of “personhood” under the law? To what extent do these concepts overlap? How “human-like” does an entity need to be in order to be worthy of this identification?
- To what extent do we as a society correlate worthiness of rights, protections, and welfare with “humanness”?
- What abilities or characteristics must an entity have in order for us to care about their wellbeing?
- How does the public understand and conceptualize moral responsibility? Can this conceptualization be applied to AI?
- Should AI be blamed for their mistakes? What forms should this blame take (acknowledgement of mistake, legal action, etc.)?
One theme in particular that prompted fascinating discussion during the interviews was infallibility. Surprisingly, when participants discussed infallibility and AI making mistakes, they often compared AI to humans. While participants across stakeholder groups were reluctant to attribute "humanness" to AI when it came to abilities such as feeling, consciousness, and self-awareness, they were more than happy to compare AI to humans in their mistake-making, with one participant even remarking that AI "[almost thinks like a] human because a human is not infallible either" (Hurley, 2021). Multiple interviewees echoed similar sentiments, feeling that AI should not be blamed for actions that are ultimately mistakes. This willingness to excuse or forgive AI for their mistakes suggests that people are beginning to conceptualize AI in a way that attributes to them at least some small amount of humanness. Although these participants had previously described AI as "just" pieces of technology, they did not treat them as such, instead giving them the benefit of the doubt when they made a mistake.
[Image by Chris 73 via Wikimedia Commons]
The answers to these questions, and their ethical and social implications, remain uncertain but will ideally be explored through further interviews with various stakeholders. With more opinions and insights from interviewees, this project will continue refining the assessment tool until it is robust enough to be administered to participants for quantitative data collection. With these data, we aim to develop a more complete picture of public opinion on the moral status of AI.
With regard to moral status in general, the lack of a universally accepted definition gives us the opportunity to shift our current understanding to one that accommodates entities like AI, which meet some of the defining criteria of moral status but not all. Perhaps we should adopt a conception of moral status that allows for degrees or levels based on which criteria are met, rather than restricting it to a binary (DeGrazia, 2008). Moreover, embracing AI as beings with moral status may force us to abandon notions about the "inferiority" of non-human beings and may lead to increased protection and welfare for these entities. There is, of course, a large difference between attributing "humanness" or "human-like" qualities to AI and granting them the full "personhood" that is currently the legal standard. Still, the future may bring a reevaluation of which abilities and traits warrant protections for entities, possibly even a reconceptualization of what it means to be a "person" or have "personhood," making it crucial to understand AI's moral status and place in our society.
References
- Conrad, E. C., Humphries, S., & Chatterjee, A. (2019). Attitudes Toward Cognitive Enhancement: The Role of Metaphor and Context. AJOB Neuroscience, 10(1), 35-47. doi:10.1080/21507740.2019.1595771
- DeGrazia, D. (2008). Moral Status as a Matter of Degree? The Southern Journal of Philosophy, 46(2), 181-198. doi:10.1111/j.2041-6962.2008.tb00075.x
- Dietvorst, B. J., Simmons, J. P., & Massey, C. (2018). Overcoming Algorithm Aversion: People Will Use Imperfect Algorithms If They Can (Even Slightly) Modify Them. Management Science, 64(3), 1155-1170. doi:10.1287/mnsc.2016.2643
- Hurley, M. (2021). An assessment tool for the public opinion of the moral status of Artificial Intelligence. [Unpublished manuscript]. Emory University.
- Jaworska, A., & Tannenbaum, J. (2021). The Grounds of Moral Status. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Spring 2021 ed.). https://plato.stanford.edu/entries/grounds-moral-status/
- Kassens-Noor, E., Wilson, M., Kotval-Karamchandani, Z., Cai, M., & Decaminada, T. (2021). Living with Autonomy: Public Perceptions of an AI-Mediated Future. Journal of Planning Education and Research. doi:10.1177/0739456x20984529
- Kok, B., & Soh, H. (2020). Trust in Robots: Challenges and Opportunities. Current Robotics Reports, 1(4), 297-309. doi:10.1007/s43154-020-00029-y
- Long, L. N., & Kelley, T. D. (2010). Review of Consciousness and the Possibility of Conscious Robots. Journal of Aerospace Computing, Information, and Communication, 7(2), 68-84. doi:10.2514/1.46188
______________
Want to cite this post?
Hurley, M. (2021). Should AI Have Moral Status? The Importance of Gauging Public Opinion. The Neuroethics Blog. Retrieved on [date], from http://www.theneuroethicsblog.com/2021/08/should-ai-have-moral-status-importance.html