
R. L. Rabb Symposium on Embedding AI in Society Summary

By Erin Morrow and Veljko Dubljević

Image courtesy of Pixabay

On February 17-18, the R.L. Rabb Symposium on Embedding AI in Society convened to share novel research and discourse surrounding the rising social role and influence of artificial intelligence. This symposium, organized by the AI in Society Group at North Carolina State University, included keynote speakers Sylvester Johnson (Virginia Tech), Joanna Bryson (Hertie School), Frank Pasquale (Brooklyn Law School), and Benjamin Kuipers (University of Michigan). The event attracted over 350 (neuro)ethicists, social scientists, and computer scientists from all over the world. It addressed four main themes, which served as topics for concurrent sessions throughout: Integrating Ethics into AI Decision Making, Safety with and from AI, Providing Transparency and Respecting Data Privacy in Data Analysis, and Future of Employment and Other Issues in the Age of AI. The organizers interpreted these themes broadly to include many types of applications of AI across multiple domains, including but not limited to autonomous vehicles, healthcare robots, policing algorithms, and AI personal assistants.

Joanna Bryson commenced with a talk entitled “Bias, Trust, and Doing Good: Scientific Explorations of Topics in AI Ethics.” A significant portion of the presentation explored how trust is operationalized, with Bryson arguing that trust is necessary for interactions in which the outcome is not certain. In particular, she highlighted experiments exploring cross-cultural differences in social decision-making when faced with uncertainty. Cultural contexts appeared to impact how inclined a given population was to make prosocial or antisocial choices—one insight being that, in an economic game of trust, participants from Boston in the United States preferred to “[exploit] advantages of the present system” rather than take economic punishments into their own hands. In contrast, participants from cities such as Muscat, Oman, and Athens, Greece, preferred to intervene, opting to use a mix of prosocial and antisocial strategies (i.e., punishing free-riders, who contributed less to group wealth than the punisher, and punishing those who contributed as much as or more than the punisher, respectively). This variability of social tendencies, then, impacts how people interact with entities (presumably including AI) in encounters that require trust. Different regions and peoples will likely experience different relationships with AI, as we do with each other.

Frank Pasquale followed with a discussion centered on “Renewing the Political Economy of Automation.” He took on the popularized notion that—e.g., if the diagnostic accuracy of medical AI continues to improve—AI will overtake the contribution of physicians in the practice of medicine. He acknowledged that, while AI will likely help provide more accurate diagnoses, the doctor’s role implicates qualities beyond mere pattern-driven model execution. Take the bias present in dermatology datasets that disproportionately lack representation of dark skin: if diagnosis were left exclusively to AI, cases of skin cancer in these populations might go undetected. Pasquale instead suggested a ‘melting pot’ scenario in which AI serves as a complement to physicians rather than a substitute. More broadly, he predicted that certain human labor will actually become more valued as lower-paying, more dangerous, and more tedious jobs are taken over by AI. Therefore, he argued, societal and legal discussion must take place regarding what the public values most, as well as what role AI should therefore serve in the workforce.

Image courtesy of Future Atlas on Flickr

Sylvester Johnson began the next morning with “Race, Cyborgs, and Weaponized AI: How Will Algorithmic Security Impact Multiracial Democracy?” Here, Johnson explored the intersection of AI and race, first by contextualizing the increased militarization of AI in society. Although some AI systems are constructed for benign purposes (see IBM’s Project Debater), Johnson raised the rippling effect of a 2016 event in Dallas: the killing of Micah Johnson by an AI robot directed by police. This robot, which was loaded with explosives and sent to locate Johnson within a city building, was the first to fatally harm a person in the United States (see here for further details and perspective). The targeting of Johnson, a Black man, demonstrates the implications of assault-ready AI in a climate of disproportionately anti-Black police brutality and racism. In this case, humans were definitively responsible for Johnson’s death, as police personnel made the decision to pursue—yet for devices such as autonomous missiles, the technology itself makes the ultimate choice as to who and/or what to target. What might the implications be for this reallocation of responsibility within a context of racial justice?

Offering more of an anthropological perspective, Benjamin Kuipers spoke as the symposium neared its conclusion. His presentation on “Hunting for Unknown Unknowns: AI and Ethics in Society” found parallels in Bryson’s earlier talk as he addressed what he claimed to be the fundamental pillar of civilization: trust. He posited that trust is an essential contributor to cooperation and social norms, with the evolutionary benefit of increasing the resources available to society. However, Kuipers suggested that AI has the potential to threaten trust. All AI systems use models, which are inherent simplifications of reality. This leads to a failure in accounting for “unknown unknowns,” or factors which cannot be predicted from the model. In particular, the common model of utility maximization can damage trust when not used thoughtfully (i.e., when this maximization takes advantage of vulnerability and thus dampens cooperative actions) and/or when the oft-tricky utility measure turns out to be inappropriate (e.g., a function that merely raises financial value may not result in desired outcomes). Until the influence of AI on trust can be more precisely evaluated, Kuipers argued, society has work to do.

Image courtesy of Brian J. Matis on Flickr

By far, the most popular of the concurrent sessions—which followed the keynotes—were those of the Integrating Ethics into AI Decision Making theme. Recorded presentations of 10-12 minutes, which can be accessed here, addressed topics from decolonizing AI in the Global South, to questioning moral testimony in natural language processors, to determining where accountability lies in the AI pipeline. In these sessions, attendees learned of efforts to democratize AI—particularly through a vision to expand African datasets and African AI research conducted by Africans—of reasons to be skeptical of second-hand, data-driven moral judgments made by AI, and of different forms of accountability at the various stages of AI creation, including design, data, legislation, and dissemination. A diverse range of subjects was discussed within this framework, serving as stepping-stones for further conversation in the symposium’s virtual lobby.

This programming, along with an abstract spotlight each morning, offered a multitude of disciplinary perspectives on the social situation of AI. Prominent themes included AI’s potential to influence or even jeopardize interpersonal and societal trust, different forms of AI bias and how they might be addressed, and synergistic relationships with AI in the economy and job market. Interwoven in these discussions were questions of responsibility, accountability and oversight, and human-AI dynamics. The topics featured also shared themes in neuroscience, touching on the ethical implications of intelligent systems and devices designed to be ‘brain-like’ (e.g., artificial neural networks). All in all, the symposium served as a vessel for important dialogues about our future with AI, which look to be extended in a special issue on Embedding AI in Society in the journal AI & Society: Journal of Knowledge, Culture and Communication.

The AI in Society Group at NC State is actively seeking contributions: abstracts and manuscripts, as well as any inquiries, can be submitted to [email protected].


Erin Morrow is an undergraduate student at Emory University majoring in Neuroscience and Behavioral Biology. She has a particular interest in the ethical ramifications of altering and accessing memories and in the impact of neurotechnology, and she is the lead research assistant at the Hamann Cognitive Neuroscience Laboratory, where she conducts memory research and neuroimaging analysis. She hopes to integrate her pursuits in neuroethics with her engagement in volunteerism and her future academic aspirations in research.

Dr. Veljko Dubljević is an Assistant Professor of Philosophy and Science, Technology & Society (STS), and leads the NeuroComputational Ethics Research Group at NC State University. Before arriving in Raleigh, he spent three years as a postdoctoral fellow at the Neuroethics Research Unit at IRCM and McGill University in Montreal, Canada. He studied philosophy and economics, and obtained a PhD in Political Science (University of Belgrade, Serbia). After that, he joined the Research Training Group ‘Bioethics’ at the University of Tuebingen (Germany), and after studying philosophy, bioethics, and neuroscience there, obtained a doctorate in philosophy (University of Stuttgart, Germany). He has published one monograph, two edited volumes, and over 70 peer-reviewed articles in neuroethics, practical philosophy, and the ethics of AI. His most recent work on neurocomputational models of moral decision making is funded by a CAREER grant from the NSF.

Want to cite this post?

Morrow, E., & Dubljević, V. (2021). R. L. Rabb Symposium on Embedding AI in Society Summary. The Neuroethics Blog.
