
Neuroethics Meets Artificial Intelligence

By Marcello Ienca

Image courtesy of Pixabay
The history of Artificial Intelligence (AI) is inextricably intertwined with the history of neuroscience.
Since the early days of AI, scientists have turned to the human brain as a source of guidance for the
development of intelligent machines (Ullman 2019). Unsurprisingly, many pioneers of AI, such as
Warren McCulloch, were trained in the sciences of the brain. Modern AI borrowed most of its
vocabulary from neurology and psychology. For instance, computational models consisting of
networks of interconnected units (one of the most common approaches to AI) are called
Artificial Neural Networks (ANNs), and each unit is called an “artificial neuron.” Several areas of AI research
are labelled with neuropsychological categories, such as computer vision, machine learning, and
natural language processing. It is not just a matter of terminology: ANNs, for example, are
directly inspired by and modeled on the functioning of the biological neural networks that constitute animal nervous
systems.
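To make the analogy concrete, here is a minimal sketch of a single “artificial neuron” in Python. The weights, bias, and input values are invented purely for illustration; like its biological counterpart, the unit aggregates incoming signals and fires according to a nonlinear activation.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """A single 'artificial neuron': a weighted sum of inputs passed
    through a nonlinear activation function (here, the logistic sigmoid),
    loosely analogous to a biological neuron integrating synaptic inputs."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # output squashed into (0, 1)

# Hand-picked illustrative values: two inputs, two weights, one bias.
output = artificial_neuron([0.5, -1.0], [0.8, 0.2], bias=0.1)
print(round(output, 3))  # prints 0.574
```

An ANN is simply many such units wired together in layers, with the weights adjusted during training rather than set by hand.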


In spite of this intimate link between AI and neuroscience, ethical reflections on these two disciplines
have developed quite independently of each other and with little interaction between the two
research communities. On the one hand, the AI ethics community has focused primarily on issues
such as robot rights, algorithmic transparency, biases in AI systems, and autonomous weapons. On the
other hand, the neuroethics community has primarily focused on issues such as pharmacological
enhancement, brain interventions, neuroimaging, and free will.

Image courtesy of Ars Electronica, Flickr
However, in the face of recent technological developments, these two communities can no longer
afford to operate in silos. Several ethically sensitive developments are occurring at the interface
between neuroscience and AI.

One is the testing of AI algorithms in clinical neuroscience research for predictive and diagnostic
purposes. Machine learning algorithms, for instance, have been successfully trained to detect early
signs of Alzheimer’s disease and mental illness from brain scans (Ding et al. 2018, Kalmady et al.
2019). These findings hold great potential for improving current diagnostic protocols and enabling
earlier, patient-tailored therapy. At the same time, the prospect of automated diagnostics challenges
established models of the doctor-patient relationship, opens the risk of algorithmic discrimination
against certain patient groups (e.g. those underrepresented in the datasets used to train the
algorithms), and raises privacy concerns, especially if relevant information regarding these
indicators of illness falls into the hands of employers and health insurance providers.

Another interesting area of research at the neuroscience-AI interface is brain simulation, i.e. the
attempt to use AI models to create virtual simulations or functional representations of brain activity.
Although still in the initial stages of development, this area of research promises to yield scientific
and ethical benefits. If successful, brain simulations will, in the long run, provide neuroscientists
with extra tools to explore how the brain works. For example, functioning computer models of the
human brain (or parts of it) could allow researchers to conduct some types of research on mental
illness and behavior without some of the typical methodological and ethical challenges of animal
models and human subjects research.

Furthermore, brains and machines are getting closer and closer through so-called brain-computer
interfaces (BCIs). These systems establish a direct connection pathway between a brain and an
external computer system such as a robotic limb, a personal computer, a speech synthesizer, or an
electronic wheelchair. AI software is essential to enable this direct brain-computer communication.
BCIs might be a game-changer for several classes of neurological patients; in certain cases, they
already are. In parallel, they present our community with novel ethical challenges, such as the
implications of human-AI coupling for personal identity and psychological continuity, the privacy
and security of the data collected by and feeding the BCI, and the prospect of using BCIs not only to
restore function in neurological patients but also as an alternative method for cognitive enhancement
of healthy people (Drew 2019).

Finally, the weaponization of AI is extending to neurotechnology, BCIs in particular. As BCIs and
other neurotechnologies are increasingly tested in military research, the ethical issues of dual use
and neurosecurity become more pressing (Ienca, Jotterand and Elger 2018).

Image courtesy of Pixabay

Given this relationship between neuroscience and AI, the neuroethics and the AI-ethics communities
must no longer operate in silos but pursue greater mutual and cross-disciplinary exchange. Creating a
common ethical discourse at the brain-AI interface will likely yield benefits for both fields. There are
already some positive examples. For instance, the International Neuroethics Society (INS) has often
featured panel discussions on AI and other emerging technologies. In November 2018, researchers
gathered in Mexico to discuss the “Neuroethical Considerations of Artificial Intelligence”. In May 2019,
researchers at LMU Munich organised a conference titled “(Clinical) Neurotechnology meets Artificial Intelligence” focusing on the ethical, legal and social implications of the two fields.
Researchers involved in international brain initiatives, such as the Human Brain Project and the US
Brain Initiative, have been discussing the value of computational models in neuroscience (Ramos et al.
2019) and the impact of artificial systems on the study of consciousness (Salles et al. 2019). Meanwhile,
several publications have tackled ethical issues at the intersection of the two domains (Ienca 2018,
Kellmeyer et al. 2016, Yuste et al. 2017).


That being said, developing a unified framework for the ethics of brains and machines is no easy task.
Ethical disagreement is frequent within each community, let alone across them. A forthcoming
review of international AI ethics guidelines revealed substantial divergence on interpretation and
practical implementation of ethical principles (Jobin, Ienca and Vayena 2019). Meanwhile,
neuroethicists strongly disagree on the need for regulation of nonclinical uses of neurotechnology
(Ienca, Haselager and Emanuel 2018, Wexler 2019). However, the aim of creating a common ethical
discourse at the brain-AI interface should not be moral agreement. On the contrary, disagreement
can foster intellectual progress. The optimal response of the ethics community to developments in
neuroscience and AI is not consensus but rather mutual awareness, information exchange, and,
possibly, collaboration.
______________

 

Marcello Ienca is a Research Fellow at the Department of Health Sciences and Technology at ETH Zurich, Switzerland. He is the PI of the SAMW/KZS-funded project “Digitalizing Elderly Care in Switzerland”. His research focuses on the ELSI of neurotechnology and artificial intelligence, big data trends in neuroscience and biomedicine, digital health and cognitive assistance for people with intellectual disabilities etc. Ienca has received several awards for social responsibility in science and technology such as the Prize Pato de Carvalho, the Vontobel Award for Ageing Research, and the Paul Schotsmans Prize from the European Association of Centres of Medical Ethics (EACME). Ienca is the current coordinator of the Swiss Network of Neuroscience, Ethics and Law (SNNEL), the Chair of the Student-Postdoc Committee of the International Neuroethics Society (INS) and a member of the Steering Group on Neurotechnology and Society of the Organisation for Economic Co-operation and Development (OECD).  


References
  1. Ding, Y., Sohn, J.H., Kawczynski, M.G., Trivedi, H., Harnish, R., Jenkins, N.W., Lituiev, D.,
    Copeland, T.P., Aboian, M.S., Mari Aparici, C., Behr, S.C., Flavell, R.R., Huang, S.-Y., Zalocusky,
    K.A., Nardo, L., Seo, Y., Hawkins, R.A., Hernandez Pampaloni, M., Hadley, D., & Franc, B.L. (2018). A Deep Learning Model to Predict a Diagnosis of Alzheimer Disease by Using 18F-FDG
    PET of the Brain. Radiology, 290, 2, 456-464. 
  2. Ienca, M. (2018). Democratizing cognitive technology: a proactive approach. Ethics and
    Information Technology.
  3. Ienca, M., Haselager, P., & Emanuel, E.J. (2018). Brain leaks and consumer neurotechnology.
    Nature biotechnology, 36, 9, 805-810. 
  4. Ienca, M., Jotterand, F., & Elger, B.S. (2018). From Healthcare to Warfare and Reverse: How
    Should We Regulate Dual-Use Neurotechnology? Neuron, 97, 2, 269-274. 
  5. Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics
    guidelines. Nature Machine Intelligence, 1-11. 
  6. Kalmady, S.V., Greiner, R., Agrawal, R., Shivakumar, V., Narayanaswamy, J.C., Brown, M.R.G.,
    Greenshaw, A.J., Dursun, S.M., & Venkatasubramanian, G. (2019). Towards artificial
    intelligence in mental health by improving schizophrenia prediction with multiple brain
    parcellation ensemble-learning. npj Schizophrenia, 5, 1, 2.
  7. Kellmeyer, P., Cochrane, T., Müller, O., Mitchell, C., Ball, T., Fins, J.J., & Biller-Andorno, N.
    (2016). The Effects of Closed-Loop Medical Devices on the Autonomy and Accountability of
    Persons and Systems. Cambridge Quarterly of Healthcare Ethics, 25, 4, 623-633. 
  8. Ramos, K.M., Grady, C., Greely, H.T., Chiong, W., Eberwine, J., Farahany, N.A., Johnson, L.S.M.,
    Hyman, B.T., Hyman, S.E., Rommelfanger, K.S., Serrano, E.E., Churchill, J.D., Gordon, J.A., &
    Koroshetz, W.J. (2019). The NIH BRAIN Initiative: Integrating Neuroethics and Neuroscience.
    Neuron, 101, 3, 394-398. 
  9. Salles, A., Bjaalie, J.G., Evers, K., Farisco, M., Fothergill, B.T., Guerrero, M., Maslen, H., Muller, J.,
    Prescott, T., Stahl, B.C., Walter, H., Zilles, K., & Amunts, K. (2019). The Human Brain Project:
    Responsible Brain Research for the Benefit of Society. Neuron, 101, 3, 380-384. 
  10. Ullman, S. (2019). Using neuroscience to develop artificial intelligence. Science, 363, 6428,
    692. 
  11. Wexler, A. (2019). Separating neuroethics from neurohype. Nature Biotechnology.
  12. Yuste, R., Goering, S., Bi, G., Carmena, J.M., Carter, A., Fins, J.J., Friesen, P., Gallant, J., Huggins,
    J.E., & Illes, J. (2017). Four ethical priorities for neurotechnologies and AI. Nature News, 551,
    7679, 159.

Want to cite this post?

Ienca, M. (2019). Neuroethics meets Artificial Intelligence. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2019/10/neuroethics-meets-artificial.html
