
Smart AI

By Jonathan D. Moreno


Experiments that could enhance rodent intelligence are closely watched, and long-term worries about super-intelligent machines are everywhere. Yet while we guard against smart mice, no one is talking about industry standards for the almost daily steps toward computers with at least near-human intelligence. Why not? 

Computers are far more likely to achieve human or near-human intelligence than lab mice are, however remote the odds for either. The prospects for making rodents smarter with implanted human neurons have dimmed even as the potential for a smart computer continues to grow. For example, a recent paper reported that human neurons implanted in mice did not make them smarter maze-runners. By contrast, in 2016 a computer program called AlphaGo defeated a professional human Go player. Machine-learning algorithms continue to teach themselves new, human-like skills, such as facial recognition, except of course that they are better at it than the typical human. 

There’s been no lack of focus on the unsettling prospect of “pretty smart” mice. In 2000, Stanford’s Irving Weissman proposed to transplant human neural cells into fetal mice. The purpose was to study human neurons in vivo, but the proposal quickly became a flash point for debate about the ethical issues raised by human/non-human chimeras. Then-Senator Brownback introduced legislation to criminalize attempts to make such creatures, called the “Human Chimera Prohibition Act,” though most of its definitions seemed to deal with hybrids. President George W. Bush called for a ban on “human-animal hybrids” in his 2006 State of the Union address.

Despite these presidential-level worries and much forehead-furrowing about smart mice among us bioethicists, an intelligence enhancement event is far more likely to take place in the AI realm than in animal studies. Although both are far-fetched, the very criteria for AI intelligence enhancement are still very much at issue, whereas physical limitations like the size of a rodent skull appear to be decisive limiting factors for the effects of any humanized implant. 


This is not a matter of the “existential risk” that Skynet-style computers will align against us and turn all the atoms of the universe into whatever their basic programming instructs them to do, come what may. Nor is it the challenge of deciding whether it’s wrong to be cruel to self-aware machines a la Westworld. Instead, there’s the very practical and immediate problem of whether the next AI improvement risks machine awareness and how to prepare for it. Should some anticipatory legal arrangements be made? Should ethical standards for the treatment of this new kind of intelligent creature be made ready? Perhaps most fundamental, who will have the authority to communicate with this intelligent device or to decide its fate? 

To make matters more confusing, when that moment supposedly arrives not everyone will agree that the AI in question is truly conscious, just as there is disagreement about whether non-human animals, or even certain profoundly disabled human beings, are conscious. That debate will itself be contentious. 

Industry standards have been applied to such efforts as putting human neural stem cells into non-human animals. Despite skepticism about scientists self-regulating, those standards have worked. A similar process should be undertaken by the major players in AI. They could promulgate a process for the periodic evaluation of progress toward smart and potentially self-aware AI. This wouldn’t be regulation, but self-governance. In fact, earnest steps along these lines might help convince lawmakers that they don’t need to step in. 


Even without considering doomsday scenarios or ethical quandaries, the creation of a pretty smart AI with the consciousness that goes with it would be a world-changing event, not unlike discovering intelligent aliens. Indeed, in at least one respect that would be more significant because humans would be the ones to create this new kind of person, not the contingencies of extra-terrestrial evolution. 

Despite the ongoing anxieties expressed in many quarters about AI intelligence enhancement, groups formed to address the policy implications (e.g., Stanford’s AI 100 group) have not taken up the nearer-term question of how we should react to and treat conscious AI, for example as possessors of “rights.” That debate has already begun in Europe and is only likely to intensify. Rather, these groups are focused on far more likely disruptions, such as those of markets, even if AI is never conscious in any meaningful sense.

Of course we should worry about the big-picture changes that stand to be triggered by better and better AI, but we also need to keep an eye on the more subtle first signals of what we might call consciousness or self-awareness. That moment could come sooner than expected, and it would certainly have more impact on humanity and society than robots stealing jobs or becoming our sex partners.

There is a gap between the most extreme concerns about intelligent AI and the monitoring of the incremental advances that might lead to those results. Even normally anti-regulatory entrepreneurs like Elon Musk have suggested that this is one area in which they would advocate some form of regulation. But if the major companies working on enhancing AI established an industry standard of prior review, that extreme and possibly counter-productive outcome could be avoided.

Pretty smart AI is coming. It’s in the industry’s interest to be a step ahead.


Jonathan D. Moreno is the David and Lyn Silfen University Professor at the University of Pennsylvania where he is a Penn Integrates Knowledge (PIK) professor. At Penn he is also Professor of Medical Ethics and Health Policy, of History and Sociology of Science, and of Philosophy.  His latest book is Impromptu Man: J.L. Moreno and the Origins of Psychodrama, Encounter Culture, and the Social Network (2014), which Amazon called a “#1 hot new release.”  Among his previous books are The Body Politic, which was named a Best Book of 2011 by Kirkus Reviews, Mind Wars (2012), and Undue Risk (2000).   

Want to cite this post?

Moreno, J. D. (2018). Smart AI. The Neuroethics Blog.

