Tuesday, December 13, 2016

Meet Tomorrow's World: A Meeting on the Ethics of Emerging Technologies

By Marcello Ienca

Marcello Ienca, M.Sc., M.A., is a PhD candidate and research assistant at the Institute for Biomedical Ethics, University of Basel, Switzerland. His current projects include the assessment of intelligent assistive technologies for people with dementia and other neurocognitive disabilities, the regulation of pervasive neurotechnology, and the neurosecurity of human-machine interfaces. He is the chair of the Student/Postdoc Committee of the International Neuroethics Society and the current coordinator of the Swiss Network for Neuroscience, Ethics and Law.

Technology is rapidly reshaping the world we live in. In the past few decades, humankind has not changed significantly in biological terms, but human societies have undergone continuous and unprecedented development through technological innovation. Today, most human activities—from messaging to geolocation, from financial transactions to medical therapies—are computer-mediated. In the coming decades, the quantity and variety of activities mediated by digital technology are bound to increase exponentially. In parallel, with advances in artificial intelligence (AI), robotics, and microcomputing, the friction between human and machine is set to vanish and the boundaries at the human-machine interface are bound to blur. In an attempt to anticipate our technological futures, as well as their impact on our societies and our systems of values, the International Neuroethics Society (jointly with the Temporal Dynamics of Learning Center, the Science Collaboratory of the University of California, San Diego, and the National Science Foundation) sponsored a public event on the Ethics of Emerging Technologies as part of the 2016 annual INS meeting in San Diego, California. The event was organized by INS President Judy Illes, INS Executive Director Karen Graham, Dr. Rachel Wurzman of the INS Public Session Program Committee, and Prof. Andrea Chiba, Dr. Roger Bingham, and Prof. Deborah Forster of UCSD. A panel of international experts in various areas of science and ethics gathered in San Diego on November 9 to discuss critical issues emerging at the human-machine interface, with potentially disruptive implications for ethics and society. The first perspective came from Dr. William D. Casebeer, a career intelligence analyst and Lieutenant Colonel in the US Air Force. His short talk drew an intriguing analogy between pervasive technology and the art of storytelling to show how technology could actually be used, in the near future, to raise empathy, deliver personalized experiences, and facilitate human interaction.

The second talk, delivered by Dr. Kate Darling, research specialist at the MIT Media Lab, focused on the near-term effects of robotic technology, with a particular interest in its legal, social, and ethical issues. Her analysis started from the common observation that people often treat robots as if they were alive, despite consciously knowing that they are not living beings in any significant sense. A paradigmatic example is the confirmed report of soldiers holding funerals for fallen robots. The reason for this, Darling argued, presumably stems from the fact that robots embody physicality and movement, two qualities essential to animate beings. This phenomenon shows that robots can also be used effectively to support empathic activities such as therapy, assistance, and social interaction. However, Dr. Darling also anticipated possible risks associated with robotics-assisted activities, in particular the risk that such engaging technologies may become manipulative technologies. This risk appears particularly relevant in the context of using anthropomorphic, or at least zoomorphic, assistive robots for people with neurocognitive disabilities, such as Paro and iCat. Due to their cognitive impairments, users of these technologies may lack the capacity to discern the robotic nature of these devices, and hence fail to draw the line between the biophysical and in-silico worlds.

Similar risks of physical-digital conflation were addressed at a deeper neurobiological level by Prof. Mayank Mehta, lab head at the W. M. Keck Center for Neurophysics at the University of California, Los Angeles. His research results showed that a brain region involved in spatial learning produces a pattern of activity when it processes virtual reality (VR) that is completely different from the pattern it produces when processing the real world. In particular, more than half of all neurons in the space-mapping brain region shut down when processing VR, and the remaining maps are scrambled. In addition to space mapping, this part of the brain is crucial for many forms of learning and is directly involved in several neurological and psychiatric conditions, including Alzheimer's disease, epilepsy, post-traumatic stress disorder (PTSD), and depression. Mehta's research has also shown that this part of the brain is highly plastic, even in adults and seniors, suggesting that abnormal activity patterns in VR could potentially rewire it. As the use of VR rapidly becomes pervasive for communication and entertainment purposes, Mehta emphasized the urgent need to understand the long-term consequences of VR use on this important and delicate part of the brain.

VR and assistive robotics not only risk eroding our awareness of the physical-digital distinction; they raise privacy and identity concerns, too. In fact, emerging technologies can also be used to collect personal data and even to influence a person's behavior. These privacy risks were well exemplified by Jay Giedd, Professor of Psychiatry at the University of California, San Diego. He introduced the notion of "penetration technology" to refer to the phenomenon where technology advances faster than society can adapt in terms of governance and regulation. For example, he reported that the number of events in which people participate on Facebook has been shown to be a good predictor of depression. However, he observed that conducting big-data research on large volumes of publicly available information on social media opens up fundamental research ethics dilemmas. In particular, there is an ethical issue of privacy and confidentiality: is a person's informational privacy respected if researchers mine data from his or her social media profile without explicit authorization? On the one hand, the answer seems to be yes, because the information was posted publicly by the user after accepting the platform's terms and conditions. On the other hand, however, the researchers never explicitly sought the user's informed consent. In parallel, neuroscientist and entrepreneur Vivienne Ming emphasized the positive dimension of such big-data trends: although collecting large volumes of data for research purposes may be questionable from a privacy perspective, it can also be extremely beneficial for science and society. Returning to Giedd's example, applying predictive analytics to large volumes of social media data might predict depression or a manic episode, possibly saving lives that could not be saved otherwise.

These critical ethical questions were further analyzed in depth by Dr. Hannah Maslen and Prof. Julian Savulescu of the University of Oxford. Dr. Maslen called for an open debate on the principles that should govern the use of health technology in educational contexts. She introduced the idea of a child's "right to future openness" to emphasize the duty of parents to keep a sufficient number of options open for their children. Moving from the educational context to a larger societal perspective, Prof. Savulescu underlined the role of human morality in enabling social interaction within groups. From his perspective, technology can be co-opted as an efficient tool to achieve these prosocial goals, particularly if it is used to provide cognitive and moral enhancement at both the individual and collective level. Such enhancement, however, may cause structural transformations in critical aspects of our modern societies, including the political dimension.

On the whole, the panelists were optimistic about future advances in technological innovation and dissemination. However, they called for a cooperative effort to anticipate the ethical and social implications of emerging technologies and to open a public debate on these issues. That debate was successfully initiated at the Meeting on the Ethics of Emerging Technologies. It is now up to all of us to shape the technological future we want to live in.

Want to cite this post?

Ienca, M. (2016). Meet Tomorrow's World: A Meeting on the Ethics of Emerging Technologies. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2016/12/meet-tomorrows-worlda-meeting-on-ethics.html
