[Image courtesy of Flickr]
Experiments that could enhance rodent intelligence are closely watched, and long-term worries about super-intelligent machines are everywhere. Yet unlike the case of smart mice, no one is talking about industry standards for the almost daily steps toward computers that possess at least near-human intelligence. Why not?
There’s been no lack of focus on the unsettling prospect of “pretty smart” mice. In 2000, Stanford’s Irving Weissman proposed to transplant human neural cells into fetal mice. The purpose was to study human neurons in vivo, but the proposal quickly turned into a flash point for debate about the ethical issues involved in human/non-human chimeras. Then-Senator Brownback introduced legislation to criminalize attempts to make such creatures, called the “Human Chimera Prohibition Act,” though most of its definitions seemed to deal with hybrids. President George W. Bush called for a ban on “human-animal hybrids” in his 2006 State of the Union address.
[Image courtesy of pngimg.com]
This is not a matter of the “existential risk” that Skynet-style computers will align against us and turn all the atoms of the universe into whatever their basic programming instructs them to do, come what may. Nor is it the challenge of deciding whether it’s wrong to be cruel to self-aware machines à la Westworld. Instead, there’s the very practical and immediate problem of whether the next AI improvement risks machine awareness and how to prepare for it. Should some anticipatory legal arrangements be made? Should ethical standards for the treatment of this new kind of intelligent creature be made ready? Perhaps most fundamentally, who will have the authority to communicate with this intelligent device or to decide its fate?
[Image courtesy of Pixabay]
Even without considering doomsday scenarios or ethical quandaries, the creation of a pretty smart AI, with the consciousness that goes with it, would be a world-changing event, not unlike discovering intelligent aliens. Indeed, in at least one respect it would be more significant, because humans, not the contingencies of extra-terrestrial evolution, would be the ones to create this new kind of person.
Of course, we should worry about the big-picture changes that ever-better AI stands to trigger, but we also need to keep an eye out for more subtle first signals of what we might call consciousness or self-awareness. Those signals could come sooner, and they would certainly have more impact on humanity and society than robots stealing jobs or becoming our sex partners.
There is a gap between the most extreme concerns about intelligent AI and the monitoring of the incremental advances that might lead to those results. Even normally anti-regulatory entrepreneurs like Elon Musk have suggested that this is one area in which they would advocate some form of regulation. But if the major companies working on enhancing AI established an industry standard of prior review, that extreme and possibly counter-productive step could be avoided.
Pretty smart AI is coming. It’s in the industry’s interest to be a step ahead.
Want to cite this post?
Moreno, J. D. (2018). Smart AI. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/08/smart-ai.html