Tuesday, July 10, 2018

Solitary Confinement: Isolating the Neuroethical Dilemma

By Kristie Garza
Eastern State Penitentiary
Image courtesy of Wikimedia Commons
In 1842, Charles Dickens visited Eastern State Penitentiary in Philadelphia to examine what was being called a revolutionary form of rehabilitation. After his visit, he summarized his observations in an essay in which he stated, “I am only the more convinced that there is a depth of terrible endurance in it which none but the sufferers themselves can fathom, and which no man has a right to inflict upon his fellow-creature. I hold this slow and daily tampering with the mysteries of the brain, to be immeasurably worse than any torture of the body” (1). Dickens’ words describe solitary confinement. While there is no single standard for solitary confinement conditions, it usually involves an individual being placed in complete sensory and social isolation for 23 hours a day. What Dickens observed in 1842 is not unlike current solitary confinement conditions.

Tuesday, July 3, 2018

Neuroethics: the importance of a conceptual approach

By Arleen Salles, Kathinka Evers, and Michele Farisco

Image courtesy of Wikimedia Commons.
What is neuroethics? While there is by now a considerable bibliography devoted to examining the philosophical, scientific, ethical, social, and regulatory issues raised by neuroscientific research and related technological applications (and a growing number of people in the world claim to take part in the neuroethical debate), less has been said about how to interpret the field that carries out such examination. And yet, this calls for discussion, particularly considering that the default understanding of neuroethics is one that sees the field as just another type of applied ethics, and, in particular, one dominated by a Western bioethical paradigm. The now-classic interpretation of neuroethics as the “neuroscience of ethics” and the “ethics of neuroscience” covers more ground, but still fails to exhaust the field (1).

As we have argued elsewhere, neuroethics is a complex field characterized by three main methodological approaches (2-4). “Neurobioethics” is a normative approach that applies ethical theory and reasoning to the ethical and social issues raised by neuroscience. This version of neuroethics, which generally mirrors bioethical methodology and goals, is predominant in healthcare, in regulatory contexts, and in the neuroscientific research setting.

Tuesday, June 26, 2018

Facial recognition, values, and the human brain

By Elisabeth Hildt

Image courtesy of Pixabay.
Research is not an isolated activity. It takes place in a social context, sometimes influenced by value assumptions and sometimes accompanied by social and ethical implications. A recent example of this complex interplay is an article, “Deep neural networks can detect sexual orientation from faces” by Yilun Wang and Michal Kosinski, accepted in 2017 for publication in the Journal of Personality and Social Psychology.

In this study on face recognition, the researchers used deep neural networks to classify the sexual orientation of persons depicted in facial images uploaded to a dating website. While the discriminatory power of the system was limited, the algorithm was reported to have achieved higher accuracy in this setting than human judges. The study can be seen in the context of the “prenatal hormone theory of sexual orientation,” which claims that gay men and women tend to have gender-atypical facial morphology.

Tuesday, June 19, 2018

Disrupting diagnosis: speech patterns, AI, and ethical issues of digital phenotyping

By Ryan Purcell, PhD

Jim Schwoebel, presenter at April's The Future Now: Neuroscience and Emerging Ethical Dilemmas (NEEDs) seminar.
Diagnosing schizophrenia can be complex, time-consuming, and expensive. The April seminar in Emory's The Future Now: Neuroscience and Emerging Ethical Dilemmas (NEEDs) series focused on one innovative effort to improve this process in the flourishing field of digital phenotyping. Presenter and NeuroLex founder and CEO Jim Schwoebel had watched his brother struggle for several years with frequent headaches and anxiety, and saw him accrue nearly $15,000 in medical expenses before his first psychotic break. From there it took many more years and additional psychotic episodes before Jim’s brother began responding to medication and his condition stabilized. Unfortunately, this experience is not uncommon; a recent study found that the median period from the onset of psychotic symptoms until treatment is 74 weeks.

Naturally, Schwoebel thought deeply about how this had happened and what clues might have been seen earlier. “I had been sensing that something was off about my brother’s speech, so after he was officially diagnosed, I looked more closely at his text messages before his psychotic break and saw noticeable abnormalities,” Schwoebel told Psychiatric News. For Schwoebel, a Georgia Tech alum and co-founder of the neuroscience startup accelerator NeuroLaunch, this was the spark of an idea. Looking into the academic literature, he found a 2015 study led by researchers from Columbia University who applied machine learning to speech samples from participants at high risk for psychosis. The model correctly predicted which individuals would go on to transition to psychosis over the next several years.
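One feature the Columbia researchers reportedly quantified is the semantic coherence between consecutive sentences, which tends to degrade before a psychotic break. As a deliberately toy illustration of that idea (not the study's actual pipeline, which used richer semantic representations such as latent semantic analysis), adjacent-sentence coherence can be approximated with a bag-of-words cosine similarity:

```python
from collections import Counter
import math
import re

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def coherence(text: str) -> float:
    """Mean similarity between consecutive sentences: a crude stand-in for
    the 'semantic coherence' features used in speech-based psychosis research."""
    sentences = [s for s in re.split(r"[.!?]+", text.lower()) if s.strip()]
    vectors = [Counter(re.findall(r"[a-z']+", s)) for s in sentences]
    if len(vectors) < 2:
        return 1.0
    sims = [cosine(u, v) for u, v in zip(vectors, vectors[1:])]
    return sum(sims) / len(sims)

coherent = "The dog chased the ball. The dog caught the ball. Then the dog slept."
tangential = "The dog chased the ball. Clocks measure nothing real. I prefer winter trains."
print(coherence(coherent) > coherence(tangential))  # a connected narrative scores higher
```

A real system would, of course, need far more than word overlap, but the sketch shows why text messages and transcribed speech are tractable raw material for this kind of screening.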

Tuesday, June 12, 2018

Ethical Concerns Surrounding Psychiatric Treatments: Do Academics Agree with the Public?

By Laura Y. Cabrera, Rachel McKenzie, Robyn Bluhm

Image courtesy of the U.S. Air Force Special Operations Command.
Treatments for psychiatric disorders raise unique ethical issues because they aim to change behaviors, beliefs, and affective responses that are central to an individual’s sense of who they are. For example, interventions for depression aim to change feelings of guilt and worthlessness (as well as depressed mood), while treatments for obsessive-compulsive disorder try to diminish both problematic obsessive beliefs and compulsive behaviors. In addition to the specific mental states that are the target of intervention, these treatments can also affect non-pathological values, beliefs, and affective responses. The bioethics and neuroethics communities have been discussing the ethical concerns that these changes pose for individual identity [1,2], personality [3,4], responsibility [5], autonomy [6,7], authenticity [8], and agency [9,10]. 

Tuesday, June 5, 2018

Participatory Neuroscience: Something to Strive For?

By Phoebe Friesen

Image courtesy of Pixabay.
In the last few decades, there has been an increasing push to make science more participatory by engaging those who are part of, or invested in, the community a given study will impact in the research process itself: from determining the questions worth asking, to contributing to experimental design, to communicating findings to the public. Some of this push stems from the recognition that research is always value-laden and that the values guiding science have long been those of an elite and unrepresentative few (Longino, 1990). It also has roots in feminist standpoint theory, which recognizes the way in which marginalized individuals may have an epistemic advantage when it comes to identifying problematic assumptions within a relevant knowledge project (Wylie, 2003). Additionally, many have noted how including the voices of those likely to be impacted by research can support the research process itself (e.g. by identifying meaningful outcome measures) (Dickert & Sugarman, 2005). As a result, participatory research is becoming widely recognized as having both ethical and epistemic advantages. The field of neuroscience, however, which takes the brain as its primary target of investigation, has been slow to take up such insights. Here, I outline five stages of participatory research and neuroscience's uptake of each, discuss the challenges and benefits of engaging in such research, and suggest that the field has an obligation, particularly in some cases, to shift towards more participatory research.

Tuesday, May 29, 2018

Ethical Implications of fMRI In Utero

By Molly Ann Kluck

Image courtesy of Wikimedia Commons.
When my neuroethics mentor approached me with a publication from Trends in Cognitive Sciences called “Functional Connectivity of the Human Brain in Utero” (1) in hand, I was immediately delighted by the idea of performing an ethical analysis on the use of functional Magnetic Resonance Imaging (fMRI) on fetuses in utero. As of right now, I’m still conducting this ethical analysis.

Using fMRI to look at human brains as they develop in utero is groundbreaking for a couple of reasons. For one, there is a vast difference between the fMRI method currently used to investigate developing brains and previous methods used to examine fetal brain development. Research on developing brains had previously utilized preterm neonates, or babies born prematurely. While these data are valuable, there are issues with validity associated with this method: early exposure to an abnormal environment (e.g. being in the intensive care unit, where many preterm babies go after birth, or being in an MRI machine), incomplete exposure to the essential nutrients and protection offered by the womb, and the plasticity of the fetal brain can all cause preterm neonates to experience differences in brain development (2). A map of the brain as it typically develops will not be truly accurate if it is produced solely from preterm neonates. However, surveying a developing brain while it is still in utero, as can be done with fMRI, is a different matter altogether. The chances of this research providing a more accurate picture of the developing brain increase due to the uninterrupted development of the fetus in utero.

Tuesday, May 22, 2018

Should you trust mental health apps?

By Stephen Schueller

Image courtesy of Pixabay.
If you were to search the Google Play or Apple iTunes store for an app to help support your mental health, you’d find a bewildering range of options. This includes nearly 1000 apps focused on depression, nearly 600 focused on bipolar disorder, and 900 focused on suicide (Larsen, Nicholas, & Christensen, 2016). But how much faith should you have that these apps are actually helpful? Or, to take an even grimmer position, might some apps actually be harmful? Evidence suggests the latter might be true. In one study, researchers who examined the content in publicly available bipolar apps found one app, iBipolar, that instructed people to drink hard liquor during a bipolar episode to help them sleep (Nicholas, Larsen, Proudfoot, & Christensen, 2015). Thus, people should definitely approach app stores cautiously when searching for an app to promote their mental health.

Thursday, May 17, 2018

Presenting... The Neuroethics Blog Reader: Black Mirror Edition!

It is our pleasure to present you with The Neuroethics Blog Reader: Black Mirror Edition!

This reader features the seven contributions from the blog's Black Mirror series, in which six different student writers explored the technology and neuroethical considerations presented in various episodes of the British science fiction anthology television series.

As Dr. Karen Rommelfanger puts it: 

This reader "... features critical reflections on the intriguing, exciting and sometimes frightful imagined futures for neurotechnology. Every day, in real life, we move closer to unraveling the secrets of the brain and in so doing become closer to understanding how to intervene with the brain in ways previously unimaginable. Neuroscience findings and the accompanying neurotechnologies created from these findings promise to transform the landscape of every aspect of our lives. As neuroethicists, we facilitate discussions on the aspirations of neuroscience and what neuroscience discoveries will mean for society. Sometimes this means dismantling overhyped neuroscience and staving off possible dystopian futures, but ultimately neuroethics aims to make sure that the neuroscience of today and of the future advance human flourishing."

The Neuroethics Blog, now in its 7th year of weekly publication, runs thanks in large part to our amazing blog editorial team. A special thank you to: Sunidhi Ramesh (Volume Editor of the reader and outgoing Assistant Managing Editor), Carlie Hoffman (Managing Editor), Nathan Ahlgrim (incoming Assistant Managing Editor), Kristie Garza (Supporting Editor and blog contributor), and Jonah Queen (Supporting Editor and blog contributor). We would also like to thank the authors of the pieces featured in the reader; you can read more about them on the last page of the publication.

Want to read more? Check out a digital copy of the reader below.

Tuesday, May 15, 2018

Regulating Minds: A Conceptual Typology

By Michael N. Tennison 

Image courtesy of Wikimedia Commons.
Bioethicists and neuroethicists distinguish therapy from enhancement to differentiate the clusters of ethical issues that arise based on the way a drug or device is used. Taking a stimulant to treat a diagnosed condition, such as ADHD, raises different and perhaps fewer ethical issues than taking it to perform better on a test. Using a drug or device to enhance performance—whether in the workplace, the classroom, the football field, or the battlefield—grants the user a positional advantage over one’s competitors. Positional enhancement raises issues of fairness, equality, autonomy, safety, and authenticity in ways that do not arise in therapy; accordingly, distinguishing enhancement from therapy makes sense as a heuristic to flag these ethical issues. 

Tuesday, May 8, 2018

Trust in the Privacy Concerns of Brain Recordings

By Ian Stevens

Ian is a 4th year undergraduate student at Northern Arizona University. He is majoring in Biomedical Sciences with minors in Psychological Sciences and Philosophy to pursue interdisciplinary research on how medicine, neuroscience, and philosophy connect. 


Brain recording technologies (BRTs), such as brain-computer interfaces (BCIs) that collect various types of brain signals from on and around the brain, could be creating privacy vulnerabilities for their users.1,2 These privacy concerns have been discussed in the marketplace as BCIs move from medical and research uses to novel consumer purposes.3,4 Privacy concerns are grounded in the fact that brain signals can currently be decoded to interpret mental states such as emotions,5 moral attitudes,6 and intentions.7 However, what can be interpreted from these brain signals in the future is ambiguous.
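To make "decoding" concrete: at its core, a decoder is a classifier that maps features extracted from recorded brain signals to a label for a mental state. The sketch below is purely illustrative (synthetic two-dimensional "features," hypothetical "calm"/"stressed" labels, and a minimal nearest-centroid rule; real BCI decoders involve far richer signal processing and models):

```python
import math
from collections import defaultdict

def train_centroids(samples):
    """Average the feature vectors observed for each labeled mental state."""
    sums, counts = defaultdict(lambda: None), defaultdict(int)
    for features, label in samples:
        if sums[label] is None:
            sums[label] = [0.0] * len(features)
        sums[label] = [s + f for s, f in zip(sums[label], features)]
        counts[label] += 1
    return {lbl: [s / counts[lbl] for s in vec] for lbl, vec in sums.items()}

def decode(centroids, features):
    """Assign the state whose centroid is nearest (Euclidean distance)."""
    return min(centroids, key=lambda lbl: math.dist(centroids[lbl], features))

# Synthetic training data: (feature vector, labeled state) pairs.
training = [
    ([0.9, 0.1], "calm"), ([1.1, 0.2], "calm"),
    ([0.1, 0.9], "stressed"), ([0.2, 1.1], "stressed"),
]
centroids = train_centroids(training)
print(decode(centroids, [1.0, 0.0]))  # nearest to the "calm" centroid
```

The privacy worry follows directly from this structure: whoever holds the recorded feature vectors can keep re-running new decoders over them, so what is inferable later is bounded only by future models, not by what the device was sold to do.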

Tuesday, May 1, 2018

The Promise of Brain-Machine Interfaces: Recap of March's The Future Now: NEEDs Seminar

Image courtesy of Wikimedia Commons.
By Nathan Ahlgrim

If we want to – to paraphrase the classic Six Million Dollar Man – rebuild people, rebuild them to be better, stronger, faster, we need more than fancy motors and titanium bones. Robot muscles cannot help a paralyzed person stand, and robot voices cannot restore communication to the voiceless, without some way for the person to control them. Methods of control need not be cutting-edge. The late Dr. Stephen Hawking’s instantly recognizable voice synthesizer was controlled by a single cheek movement, which seems shockingly analog in today’s world. Brain-machine interfaces (BMIs) are the emerging technology that promises to bypass all external input and allow robotic devices to communicate directly with the brain. Dr. Chethan Pandarinath, assistant professor of biomedical engineering at Georgia Tech and Emory University, discussed the good and bad of this technology in March’s The Future Now: NEEDs seminar, "To Be Implanted and Wireless". He shared his experience and perspective, agreeing that these invasive technologies hold incredible promise. Keeping that promise both realistic and equitable, though, is an ongoing challenge.

Tuesday, April 24, 2018

The Effects of Neuroscientific Framing on Legal Decision Making

By Corey H. Allen

Corey Allen is a graduate research fellow in the Georgia State University Neuroscience and Philosophy departments with a concentration in Neuroethics. He is a member of the Cooperation, Conflict, and Cognition Lab, and his research investigates (1) the ethical and legal implications of neuropredictive models of high-risk behavior, (2) the role of consciousness in attributions of moral agency, and (3) the impact of neurobiological explanations in legal and moral decision making.

More than ever, an extraordinary number of up-and-coming companies are jumping to attach the prefix “neuro” to their products. In many cases, this “neurobabble” is inadequate and irrelevant, serving only to take advantage of the public’s preconceptions about the term. This hasty neuroscientific framing doesn’t stop with marketing but instead creeps into public and legal discourse surrounding action and responsibility. This leads to the question: does the framing of an issue as “neuroscientific” change perceptions of and reactions to that issue? This question, especially in the realm of legal decision making, is the focus of ongoing research by Eyal Aharoni, Jennifer Blumenthal-Barby, Gidon Felsen, Karina Vold, and myself, with the support of Duke University and the John Templeton Foundation. With backgrounds ranging from psychology and philosophy to neuroscience and neuroethics, our team employs a multi-disciplinary approach to probe the effects of neuroscientific framing on public perceptions of legal evidence, as well as the ethical issues surrounding such effects.

Tuesday, April 17, 2018

The Fake News Effect in Biomedicine

By Robert T. Thibault

Robert Thibault is interested in expediting scientific discoveries through efficient research practices. Throughout his PhD in the Integrated Program in Neuroscience at McGill University, he has established himself as a leading critical voice in the field of neurofeedback and published on the topic in Lancet Psychiatry, Brain, American Psychologist, and NeuroImage among other journals. He is currently finalizing an edited volume with Dr. Amir Raz, tentatively entitled “Casting light on the Dark Side of Brain Imaging,” slated for release through Academic Press in early 2019. 

We all hate being deceived. That feeling when we realize the “health specialist” who took our money was nothing more than a smooth-talking quack. When that politician we voted for never really planned to implement their platform. Or when that caller who took our bank information turned out to be a fraud. 

These deceptions share a common theme—the deceiver is easy to identify and even easier to resent. Once we understand what happened and who to blame, we’re unlikely to be misled by such chicanery again. 

But what if the perpetrator is more difficult to identify? What if they are someone we have a particular affection for? Can we maintain the same objectivity? 

What if the deceiver is you? 

Tuesday, April 10, 2018

Global Neuroethics and Cultural Diversity: Some Challenges to Consider

By Karen Herrera-Ferrá, Arleen Salles and Laura Cabrera

Karen Herrera-Ferrá, MD, MA lives in Mexico City and founded the Mexican Association of Neuroethics. She completed a post-doctorate in neuroethics (Neuroethics Studies Program at the Pellegrino Center for Clinical Bioethics (PCCB) at Georgetown University) and holds an MA in Clinical Psychology and an MD. She also holds certificates in Cognitive Behavioral Therapy and in the History of Religions, and completed one-year fellowships on psychosis and on OCD. She is currently a PhD candidate in Bioethics. In May 2016 she launched a national project to formally introduce and develop neuroethics in her country; its main focus is to identify and engage national leaders in mental health who are interested in neuroethics, so as to inform scholars and society about the discipline. She also works as a mental health clinician in a private hospital, lectures at hospitals and universities across Mexico, and is an Affiliated Scholar of the Neuroethics Studies Program at the PCCB at Georgetown University. Her interests and research focus on two main topics: recurrent violent behavior and the globalization of neuroethics in Latin America.

Arleen Salles, Senior Researcher, Centre for Research Ethics and Bioethics, Uppsala University, Sweden, Task leader and research collaborator in the Ethics and Society subproject (SP12) of the EU-flagship Human Brain Project, Director of the Neuroethics Program at CIF (Centro de Investigaciones Filosoficas)  in Buenos Aires, Argentina.

Dr. Laura Cabrera is Assistant Professor of Neuroethics at the Center for Ethics and Humanities in the Life Sciences. She is also Faculty Affiliate at the National Core for Neuroethics, University of British Columbia. Laura Cabrera's interests focus on the ethical and societal implications of neurotechnology, in particular when used for enhancement purposes as well as for treatments in psychiatry. She has been working on projects at the interface of conceptual and empirical methods, exploring the attitudes of professionals and the public toward pharmacological and brain stimulation interventions, as well as their normative implications. Her current work also focuses on the ethical and social implications of environmental changes for brain and mental health. She received a BSc in Electrical and Communication Engineering from the Instituto Tecnológico de Estudios Superiores de Monterrey (ITESM) in Mexico City, an MA in Applied Ethics from Linköping University in Sweden, and a PhD in Applied Ethics from Charles Sturt University in Australia. Her career goal is to pursue interdisciplinary neuroethics scholarship, provide active leadership, and train and mentor future leaders in the field.

The impact of scientific brain research and the effects of neurotechnology on human beings as biological and moral beings is increasingly felt in medicine and the humanities around the world. Neuroethics attempts to offer a collective response to the ethical issues raised by rapidly developing science and to find new answers to age-old philosophical questions. A growing number of publications show that the field has disseminated to many countries, including developing countries (1-3). Mindful that ethical issues are typically shaped by the interplay of science and society, there has been a recent emphasis on the need for a more culturally and socially sensitive field and a call for a wider and more inclusive neuroethics: a “cross-cultural,” “global,” or “international” neuroethics (4). While the sentiment is good, what exactly a more inclusive neuroethics entails is not necessarily clear. Does it entail just recognizing the need for the field to be more aware of existing disparities in brain and mental health issues and their treatment in different regions? Does it entail recognizing the global scope of neuroethical problems? Or possibly working towards a common, unified approach to neuroethical issues that incorporates different viewpoints and methods?

Tuesday, April 3, 2018

The Seven Principles for Ethical Consumer Neurotechnologies: How to Develop Consumer Neurotechnologies that Contribute to Human Flourishing

By Karola Kreitmair 

Karola Kreitmair, PhD, is a Clinical Ethics Fellow at the Stanford Center for Biomedical Ethics. She received her PhD in philosophy from Stanford University in 2013 and was a postdoctoral fellow in Stanford’s Thinking Matters program from 2013-2016. Her research interests include neuroethics, especially new technologies, deep brain stimulation, and the minimally-conscious state, as well as ethical issues associated with wearable technology and citizen science.  

Brain-computer interfaces, neurostimulation devices, virtual reality systems, wearables, and smart phone apps are increasingly available as consumer technologies intended to promote health and wellness, entertainment, productivity, enhancement, communication, and education. At the same time, a growing body of literature addresses ethical considerations with respect to these neurotechnologies (Wexler 2016; Ienca & Adorno 2017; Kreitmair & Cho 2017). The ultimate goal of ethical consumer products is to contribute to human flourishing. As such, there are seven principles which developers must respect if they are to develop ethical consumer neurotechnologies. I take these considerations to be necessary for the development of ethical consumer neurotechnologies, i.e. technologies that contribute to human flourishing, but I am not committed to claiming they are also jointly sufficient. 

The seven principles are: 
1. Safety 
2. Veracity 
3. Privacy 
4. Epistemic appropriateness 
5. Existential authenticity 
6. Just distribution 
7. Oversight 

Tuesday, March 27, 2018

Neuroprosthetics for Speech and Challenges in Informed Consent

By Hannah Maslen

Hannah Maslen is the Deputy Director of the Oxford Uehiro Centre for Practical Ethics, University of Oxford. She works on a wide range of topics in applied philosophy and ethics, from neuroethics to moral emotions and criminal justice. Hannah is Co-PI on BrainCom, a 5-year European project working towards the development of neural speech prostheses. Here, she leads the work package on ‘Ethics, Implants and Society’.

Scientists across Europe are combining their expertise to work towards the development of neuroprosthetic devices that will restore or substitute speech in patients with severe communication impairments. The most ambitious application will be in patients with locked-in syndrome who have completely lost the ability to speak. Locked-in syndrome is a condition in which the patient is awake and retains mental capacity but cannot express himself or herself due to the paralysis of afferent motor pathways, preventing speech and limb movements (except for some form of voluntary eye movement, usually up and down) (1).

Tuesday, March 20, 2018

Downloading Happiness

By Sorab Arora

Sorab Arora is currently a Master’s in Public Health student at Emory University, specializing in Healthcare Management and Policy. He has researched health technology design and strategy focused on behavioral medicine, most recently at Northwestern University’s Center for Behavioral Intervention Technologies. Arora is a graduate of both the University of Chicago (Summer Business Scholar – 2017) and Grinnell College (2016), where he has bridged social entrepreneurship with mobile technologies and medical innovation. 

With median adult smartphone ownership rising to nearly 70% in advanced markets, individuals ranging from wealthy millennials to homeless youth have unprecedented access to mobile technologies (Poushter, 2016; Ben-Zeev et al., 2013). From “swiping” potential soulmates to ordering prescription glasses to one’s door, the proliferation of opportunities for immediate gratification through mobile applications only continues to grow. In what economists have now termed the “Fourth Industrial Revolution,” this period of integrated consumer technologies focuses on human-centered design and improved efficiency across global sectors (Schwab, 2017). In healthcare especially, mobile health (mHealth) platforms offer an innovative new element to how medicine can be conceptualized, delivered, and implemented. 

Tuesday, March 13, 2018

The Brain In Context

By Sarah W. Denton

Sarah W. Denton is a research assistant with the Science and Technology Innovation Program at the Wilson Center. Denton is also a research assistant with the Institute for Philosophy and Public Policy at George Mason University. Her research primarily focuses on ethical and governance implications for emerging technologies such as artificial intelligence, neurotechnology, gene-editing technology, and pharmaceuticals. 

Tim Brown, University of Washington PhD student and research assistant with the Center for Sensorimotor Neural Engineering’s (CSNE) Neuroethics Thrust, introduced the session titled “The Brain in Context” at the International Neuroethics Society’s 2017 Annual Meeting, which was moderated by Husseini Manji, Janssen Global Therapeutic Neuroscience Area Head. This session provided a multidisciplinary view of the challenges we face today in understanding the context of lived experiences and how our brains and environments shape each other. Getting at the heart of the context in which our brains develop and grow may help us reduce stigma by increasing our understanding of how our environments impact our brains in myriad ways.

Tuesday, March 6, 2018

Practical and Ethical Considerations in Consciousness Restoration

By Tabitha Moses

Tabitha Moses is a second-year MD/PhD (Translational Neuroscience) Candidate at Wayne State University School of Medicine. She earned a BA in Cognitive Science and Philosophy and an MS in Biotechnology from The Johns Hopkins University. Her research focuses on substance use, mental illness, and emerging neurotechnologies. Her current interests in neuroethics include the concepts of treatment and enhancement and how these relate to our use of new technologies as well as how we define disability.

What does it mean to be conscious? In Arthur Caplan’s plenary session at the 2017 International Neuroethics Society annual meeting (Neuromodulation of the Dead, Persistent Vegetative State, and Minimally Conscious), he explored this question and how the answers may impact research and medicine. 

Thursday, March 1, 2018

Black Mirror in the Rear-View Mirror: An Interview with the Authors

Image courtesy of Wikimedia Commons.
The Neuroethics Blog hosted a special series on Black Mirror over the past year, originally coinciding with the release of its third season on Netflix. Black Mirror is noted for its telling of profoundly human stories in worlds shaped by current or future technologies. Somnath Das, now a medical student at Thomas Jefferson University, founded the Blog’s series on Black Mirror. Previous posts covered "Be Right Back", "The Entire History of You", "Playtest", "San Junipero", "Men Against Fire", "White Bear", and "White Christmas". With Season 4 released at the end of December 2017, Somnath reconvened with contributing authors Nathan Ahlgrim, Sunidhi Ramesh, Hale Soloff, and Yunmiao Wang to review the new episodes and discuss the common neuroethical threads that pervade Black Mirror.
The discussion has been edited for clarity and conciseness. 

*SPOILER ALERT* - The following contains plot spoilers for the Netflix television series Black Mirror.

Tuesday, February 27, 2018

The Ethical Design of Intelligent Robots

By Sunidhi Ramesh

The main dome of the Massachusetts
Institute of Technology (MIT).
(Image courtesy of Wikimedia.)
The morning of February 1, 2018, MIT President L. Rafael Reif sent an email addressed to the entire institute community. In it was an announcement introducing the world to a new era of innovation—the MIT Intelligence Quest, or MIT IQ.

Formulated to “advance the science and engineering of both human and machine intelligence,” the project aims “to discover the foundations of human intelligence and drive the development of technological tools that can positively influence virtually every aspect of society.” The kicker? MIT IQ not only exists to develop these futuristic technologies, but it also seeks to “investigate the social and ethical implications of advanced analytical and predictive tools.”

In other words, one of the most famous and highly ranked universities in the world has dedicated itself to preemptively considering the consequences of future technology while simultaneously developing that same technology in hopes of making a “better world.”

Tuesday, February 20, 2018

One Track Moral Enhancement

By Nada Gligorov

Nada Gligorov is an associate professor in the Bioethics Program of the Icahn School of Medicine at Mount Sinai. She is also faculty for the Clarkson University-Icahn School of Medicine Bioethics Masters Program. The primary focus of Nada’s scholarly work is the examination of the interaction between commonsense and scientific theories. Most recently, she authored a monograph titled Neuroethics and the Scientific Revision of Common Sense (Studies in Brain and Mind, Springer). In 2014, Nada founded the Working Papers in Ethics and Moral Psychology speaker series, a working group where speakers are invited to present well-developed, as yet unpublished work.

Within the debate on neuroenhancement, cognitive and moral enhancements have been discussed as two different kinds of improvements achievable by different biomedical means. Pharmacological means that improve memory, attention, decision-making, or wakefulness have been accorded the status of “cognitive enhancers,” while attempts to improve empathy or diminish aggression have been categorized as “moral enhancements.” According to Ingmar Persson and Julian Savulescu (2008; 2012), cognitive enhancement could outstrip our natural abilities to improve commonsense morality. The view of commonsense morality as static motivates Persson and Savulescu (2008) to establish two distinct tracks of enhancement and to argue that cognitive enhancement needs to be coupled with moral enhancement to prevent the negative impact of the rapid scientific progress that might be precipitated by the use of cognitive enhancers. To argue that cognitive enhancement might lead to improvements both in science and in commonsense morality, I will propose that commonsense morality is a folk theory with features similar to those of a scientific theory.

Tuesday, February 13, 2018

International Neuroethics Society Annual Meeting Summary: Ethics of Neuroscience and Neurotechnology

By Ian Stevens

Ian is a 4th year undergraduate student at Northern Arizona University. He is majoring in Biomedical Sciences with minors in Psychological Sciences and Philosophy to pursue interdisciplinary research on how medicine, neuroscience, and philosophy connect. 

At the 2017 International Neuroethics Society Annual Meeting, an array of neuroscientists, physicians, philosophers, and lawyers gathered to discuss the ethical implications of neuroscientific research in addiction, neurotechnology, and the judicial system. A panel consisting of Dr. Frederic Gilbert of the University of Washington, Dr. Merlin Bittlinger of the Universitätsmedizin Berlin – Charité, and Dr. Anna Wexler of the University of Pennsylvania presented their research on the ethics of neurotechnologies.

Tuesday, February 6, 2018

The Anniversary of the First Neuroethics Conference (No, Not That One)

By Jonathan D. Moreno

Jonathan D. Moreno is the David and Lyn Silfen University Professor at the University of Pennsylvania where he is a Penn Integrates Knowledge (PIK) professor. At Penn he is also Professor of Medical Ethics and Health Policy, of History and Sociology of Science, and of Philosophy.  His latest book is Impromptu Man: J.L. Moreno and the Origins of Psychodrama, Encounter Culture, and the Social Network (2014), which Amazon called a “#1 hot new release.”  Among his previous books are The Body Politic, which was named a Best Book of 2011 by Kirkus Reviews, Mind Wars (2012), and Undue Risk (2000).

The 15th anniversary of what is widely viewed as the first neuroethics conference, “Neuroethics: Mapping the Field,” was celebrated in 2017. The meeting was held in San Francisco, organized by the University of California and Stanford, and sponsored by the Dana Foundation. Cerebrum, the journal published by the foundation, celebrated the anniversary by publishing short memoirs by some of the speakers, including my own. The feature was dubbed “The First Neuroethics Meeting.”

Except that it wasn’t. The first conference that was recognizably about neuroethics was held in Washington, D.C. under the auspices of a conservative think tank, and its 20th anniversary is in 2018. 

Tuesday, January 30, 2018

The International Roots of Future Neuroethics

By Denis Larrivee 

Denis Larrivee is a Visiting Scholar at the Neiswanger Bioethics Institute, Loyola University Chicago, and a member of the International Neuroethics Society communication committee. He also serves on the editorial board for the journal Neurology and Neurological Sciences, where he is the section head for neuroscience. He is currently the editor of a text on Brain Computer Interfacing and Brain Dynamics.

The reappearance in 2017 of the Ambassador Session at the International Neuroethics Society’s annual meeting underlines both the rapid upswing of global investment in neuroscience and the internationally perceived need for ethical deliberation about its interpretive significance, its distinctive cultural manifestations, and the evolution of complementary policy and juridical structures that best serve global versus regional interests. The 2017 session juxtaposed the more mature organizational approaches of the American and European neuroethics programs against recent undertakings in Asia, a juxtaposition that helped to clarify how progress in neuroethics is conditioned by local neuroscience research priorities and how more established programs assist in cross-cultural transmission to shape budding national efforts.

Tuesday, January 23, 2018

Neuroethics Women to Watch

By Judy Illes, CM, PHD,
Immediate Past President, International Neuroethics Society (INS)

Dr. Illes is Professor of Neurology and Canada Research Chair in Neuroethics at the University of British Columbia. Her research, teaching, and service focus on ethical, legal, social, and policy challenges specifically at the intersection of the brain sciences and biomedical ethics. Her latest book, Neuroethics: Anticipating the Future (Oxford University Press), was released in July 2017. Dr. Illes holds many prestigious awards for her work both in neuroethics and on behalf of women in science. She was appointed to the Order of Canada, the country’s highest civilian award, in December 2017.

During the two years that I was President of the INS, and really since 2002, when we first set the modern neuroethics vision in motion, one of my greatest joys has been working with outstanding people in our field. I have relentlessly sought to create opportunities for leadership, especially among early career neuroethicists who seek to contribute, sometimes in the footsteps of more senior people and sometimes along a completely separate path of their own. My focus has been on the women and men of our field alike and, during my term as President specifically, these opportunities unfolded in different forms. Working with remarkable staff led by Karen Graham (INS Executive Director, since the birth of the INS) and Elaine Snell (Chief Operating Officer), and with the INS Board, I created an Emerging Issues Task Force, for example, a Rising Star Lecture (Kreitmair, 2017), and many podium opportunities at our annual meetings.

Tuesday, January 16, 2018

Neurodevelopmental Disability on TV: Neuroethics and Season 1 of ABC’s Speechless

By John Aspler and Ariel Cascio

John Aspler, a doctoral candidate in Neuroscience at McGill University and the Neuroethics Research Unit, focuses on the experiences of key stakeholders affected by fetal alcohol spectrum disorder, the way they are represented and discussed in Canadian media, and the potential stigmatization they face given related disability stereotypes. 

Ariel Cascio, a postdoctoral researcher at the Neuroethics Research Unit of the Institut de recherches cliniques de Montréal, focuses primarily on autism spectrum conditions, identity, subjectivity, and biopolitics. 


Television can be an important medium through which to explore cultural conceptions of complex topics like disability – a topic tackled by Speechless, a single-camera family sitcom. Speechless tells the story of JJ DiMeo, a young man with cerebral palsy (CP) portrayed by Micah Fowler, who himself has CP. The show focuses on JJ’s daily life as well as the experiences of his parents and siblings. JJ’s aide, an African-American man named Kenneth, voices for JJ, as the latter uses a head-mounted laser pointer to indicate words and letters on a communication board (explaining the show’s title).

Tuesday, January 9, 2018

Dog Days: Has neuroscience revealed the inner lives of animals?

By Ryan Purcell

Image courtesy of Pexels.
On a sunny, late fall day with the semester winding down, Emory neuroscientist Dr. Gregory Berns gave a seminar in the Neuroethics and Neuroscience in the News series on campus. Berns has become relatively famous for his ambitious and fascinating work on what he calls “the dog project,” an eminently relatable and intriguing study that has taken aim at uncovering how the canine mind works using functional imaging technology.

The seminar was based on some of the ideas in his latest book, What It’s Like to Be a Dog (and other adventures in Animal Neuroscience). In it, Berns responds to philosopher Thomas Nagel’s influential anti-reductionist essay “What Is It Like to Be a Bat?” and recounts his journey to perform the world’s first functional magnetic resonance imaging (fMRI) session on an awake, unrestrained dog. Like so many seemingly impossible tasks, when broken down into many small, discrete steps, getting a dog to step into an fMRI machine and remain still during scanning became achievable (see training video here).