Tuesday, May 22, 2018

Should you trust mental health apps?

By Stephen Schueller

Image courtesy of Pixabay.
If you were to search the Google Play or Apple iTunes store for an app to help support your mental health, you’d find a bewildering range of options. These include nearly 1000 apps focused on depression, nearly 600 focused on bipolar disorder, and 900 focused on suicide (Larsen, Nicholas, & Christensen, 2016). But how much faith should you have that these apps are actually helpful? Or, to take a grimmer view, might some apps actually be harmful? Evidence suggests the latter might be true. In one study, researchers who examined the content of publicly available bipolar apps found one app, iBipolar, that instructed people to drink hard liquor during a bipolar episode to help them sleep (Nicholas, Larsen, Proudfoot, & Christensen, 2015). People should therefore approach app stores cautiously when searching for an app to promote their mental health.

Thursday, May 17, 2018

Presenting... The Neuroethics Blog Reader: Black Mirror Edition!

It is our pleasure to present you with The Neuroethics Blog Reader: Black Mirror Edition!


This reader features the seven contributions from the blog's Black Mirror series, in which six different student writers explored the technology and neuroethical considerations presented in various episodes of the British science fiction anthology television series.

As Dr. Karen Rommelfanger puts it: 

This reader "... features critical reflections on the intriguing, exciting and sometimes frightful imagined futures for neurotechnology. Every day, in real life, we move closer to unraveling the secrets of the brain and in so doing become closer to understanding how to intervene with the brain in ways previously unimaginable. Neuroscience findings and the accompanying neurotechnologies created from these findings promise to transform the landscape of every aspect of our lives. As neuroethicists, we facilitate discussions on the aspirations of neuroscience and what neuroscience discoveries will mean for society. Sometimes this means dismantling overhyped neuroscience and staving off possible dystopian futures, but ultimately neuroethics aims to make sure that the neuroscience of today and of the future advance human flourishing."

The Neuroethics Blog, now in its 7th year of weekly publication, runs thanks in large part to our amazing blog editorial team. A special thank you to: Sunidhi Ramesh (Volume Editor of the reader and outgoing Assistant Managing Editor), Carlie Hoffman (Managing Editor), Nathan Ahlgrim (incoming Assistant Managing Editor), Kristie Garza (Supporting Editor and blog contributor), and Jonah Queen (Supporting Editor and blog contributor). We would also like to thank the authors of the pieces featured in the reader; you can read more about them on the last page of the publication.

Want to read more? Check out a digital copy of the reader below.



Tuesday, May 15, 2018

Regulating Minds: A Conceptual Typology

By Michael N. Tennison 

Image courtesy of Wikimedia Commons.
Bioethicists and neuroethicists distinguish therapy from enhancement to differentiate the clusters of ethical issues that arise based on the way a drug or device is used. Taking a stimulant to treat a diagnosed condition, such as ADHD, raises different and perhaps fewer ethical issues than taking it to perform better on a test. Using a drug or device to enhance performance—whether in the workplace, the classroom, the football field, or the battlefield—grants the user a positional advantage over their competitors. Positional enhancement raises issues of fairness, equality, autonomy, safety, and authenticity in ways that do not arise in therapy; accordingly, distinguishing enhancement from therapy makes sense as a heuristic to flag these ethical issues.

Tuesday, May 8, 2018

Trust in the Privacy Concerns of Brain Recordings

By Ian Stevens

Ian is a 4th year undergraduate student at Northern Arizona University. He is majoring in Biomedical Sciences with minors in Psychological Sciences and Philosophy to pursue interdisciplinary research on how medicine, neuroscience, and philosophy connect. 

Introduction

Brain recording technologies (BRTs), such as brain-computer interfaces (BCIs) that collect various types of brain signals from on and around the brain, could be creating privacy vulnerabilities in their users.1,2 These privacy concerns have been discussed in the marketplace as BCIs move from medical and research uses to novel consumer purposes.3,4 Privacy concerns are grounded in the fact that brain signals can currently be decoded to interpret mental states such as emotions,5 moral attitudes,6 and intentions.7 However, what can be interpreted from these brain signals in the future remains an open question.

Tuesday, May 1, 2018

The Promise of Brain-Machine Interfaces: Recap of March's The Future Now: NEEDs Seminar

Image courtesy of Wikimedia Commons.
By Nathan Ahlgrim

If we want to – to paraphrase the classic Six Million Dollar Man – rebuild people, rebuild them to be better, stronger, faster, we need more than fancy motors and titanium bones. Robot muscles cannot help a paralyzed person stand, and robot voices cannot restore communication to the voiceless, without some way for the person to control them. Methods of control need not be cutting-edge. The late Dr. Stephen Hawking’s instantly recognizable voice synthesizer was controlled by a single cheek movement, which seems shockingly analog in today’s world. Brain-machine interfaces (BMIs) are the emerging technology that promises to bypass all external input and allow robotic devices to communicate directly with the brain. Dr. Chethan Pandarinath, assistant professor of biomedical engineering at Georgia Tech and Emory University, discussed the good and bad of this technology in March’s The Future Now: NEEDs seminar, "To Be Implanted and Wireless". He shared his experience and perspective, agreeing that these invasive technologies hold incredible promise. Keeping that promise both realistic and equitable, though, is an ongoing challenge.

Tuesday, April 24, 2018

The Effects of Neuroscientific Framing on Legal Decision Making

By Corey H. Allen

Corey Allen is a graduate research fellow in the Georgia State University Neuroscience and Philosophy departments with a concentration in Neuroethics. He is a member of the Cooperation, Conflict, and Cognition Lab, and his research investigates (1) the ethical and legal implications of neuropredictive models of high-risk behavior, (2) the role of consciousness in attributions of moral agency, and (3) the impact of neurobiological explanations in legal and moral decision making.

More than ever, an extraordinary number of up-and-coming companies are jumping to attach the prefix “neuro” to their products. In many cases, this “neurobabble” is inadequate and irrelevant, serving only to take advantage of the public’s preconceptions about the term. This hasty neuroscientific framing doesn’t stop with marketing but instead creeps into public and legal discourse surrounding action and responsibility. This leads to the question: does the framing of an issue as “neuroscientific” change the perceptions of and reactions to that issue? This question, especially in the realm of legal decision making, is the focus of ongoing research by Eyal Aharoni, Jennifer Blumenthal-Barby, Gidon Felsen, Karina Vold, and myself, with the support of Duke University and the John Templeton Foundation. With backgrounds ranging from psychology and philosophy to neuroscience and neuroethics, our team employs a multi-disciplinary approach to probe the effects of neuroscientific framing on public perceptions of legal evidence, as well as the ethical issues surrounding such effects.

Tuesday, April 17, 2018

The Fake News Effect in Biomedicine

By Robert T. Thibault

Robert Thibault is interested in expediting scientific discoveries through efficient research practices. Throughout his PhD in the Integrated Program in Neuroscience at McGill University, he has established himself as a leading critical voice in the field of neurofeedback and published on the topic in Lancet Psychiatry, Brain, American Psychologist, and NeuroImage among other journals. He is currently finalizing an edited volume with Dr. Amir Raz, tentatively entitled “Casting light on the Dark Side of Brain Imaging,” slated for release through Academic Press in early 2019. 

We all hate being deceived. That feeling when we realize the “health specialist” who took our money was nothing more than a smooth-talking quack. When that politician we voted for never really planned to implement their platform. Or when that caller who took our bank information turned out to be a fraud. 

These deceptions share a common theme—the deceiver is easy to identify and even easier to resent. Once we understand what happened and who to blame, we’re unlikely to be misled by such chicanery again. 

But what if the perpetrator is more difficult to identify? What if they are someone we have a particular affection for? Can we maintain the same objectivity? 

What if the deceiver is you? 

Tuesday, April 10, 2018

Global Neuroethics and Cultural Diversity: Some Challenges to Consider

By Karen Herrera-Ferrá, Arleen Salles and Laura Cabrera

Karen Herrera-Ferrá, MD, MA lives in Mexico City and founded the Mexican Association of Neuroethics. She completed a post-doctorate in neuroethics (Neuroethics Studies Program at the Pellegrino Center for Clinical Bioethics (PCCB) at Georgetown University) and holds an MA in Clinical Psychology and an MD. She also has a certificate in Cognitive Behavioral Therapy and another in History of Religions, as well as one-year fellowships in psychosis and in OCD, and she is currently a PhD candidate in bioethics. In May 2016 she developed a national project to formally introduce and develop neuroethics in her country. The main focus of this project is to identify and include national leaders in mental health who are interested in neuroethics, so as to inform scholars and society about the discipline. She also works as a mental health clinician in a private hospital, lectures in different hospitals and universities in Mexico, and is an Affiliated Scholar of the Neuroethics Studies Program at the PCCB at Georgetown University. Her interests and research focus on two main topics: recurrent violent behavior and the globalization of neuroethics in Latin America.

Arleen Salles is a Senior Researcher at the Centre for Research Ethics and Bioethics, Uppsala University, Sweden; a task leader and research collaborator in the Ethics and Society subproject (SP12) of the EU-flagship Human Brain Project; and Director of the Neuroethics Program at CIF (Centro de Investigaciones Filosoficas) in Buenos Aires, Argentina.

Dr. Laura Cabrera is Assistant Professor of Neuroethics at the Center for Ethics and Humanities in the Life Sciences. She is also a Faculty Affiliate at the National Core for Neuroethics, University of British Columbia. Her interests focus on the ethical and societal implications of neurotechnology, in particular when used for enhancement purposes as well as for treatments in psychiatry. She has been working on projects at the interface of conceptual and empirical methods, exploring the attitudes of professionals and the public toward pharmacological and brain stimulation interventions, as well as their normative implications. Her current work also focuses on the ethical and social implications of environmental changes for brain and mental health. She received a BSc in Electrical and Communication Engineering from the Instituto Tecnológico de Estudios Superiores de Monterrey (ITESM) in Mexico City, an MA in Applied Ethics from Linköping University in Sweden, and a PhD in Applied Ethics from Charles Sturt University in Australia. Her career goal is to pursue interdisciplinary neuroethics scholarship, provide active leadership, and train and mentor future leaders in the field.

The impact of scientific brain research and the effects of neurotechnology on human beings as biological and moral beings are increasingly felt in medicine and the humanities around the world. Neuroethics attempts to offer a collective response to the ethical issues raised by rapidly developing science and to find new answers to age-old philosophical questions. A growing number of publications show that the field has disseminated to many countries, including developing countries (1-3). Mindful that ethical issues are typically shaped by the interplay of science and society, there has been a recent emphasis on the need for a more culturally and socially sensitive field and a call for a wider and more inclusive neuroethics: a “cross-cultural,” “global,” or “international” neuroethics (4). While the sentiment is good, what exactly a more inclusive neuroethics entails is not necessarily clear. Does it entail just recognizing the need for the field to be more aware of existing disparities in brain and mental health issues and their treatment in different regions? Does it entail recognizing the global scope of neuroethical problems? Or possibly working towards a common, unified approach to neuroethical issues that incorporates different viewpoints and methods?

Tuesday, April 3, 2018

The Seven Principles for Ethical Consumer Neurotechnologies: How to Develop Consumer Neurotechnologies that Contribute to Human Flourishing

By Karola Kreitmair 

Karola Kreitmair, PhD, is a Clinical Ethics Fellow at the Stanford Center for Biomedical Ethics. She received her PhD in philosophy from Stanford University in 2013 and was a postdoctoral fellow in Stanford’s Thinking Matters program from 2013-2016. Her research interests include neuroethics, especially new technologies, deep brain stimulation, and the minimally-conscious state, as well as ethical issues associated with wearable technology and citizen science.  

Brain-computer interfaces, neurostimulation devices, virtual reality systems, wearables, and smartphone apps are increasingly available as consumer technologies intended to promote health and wellness, entertainment, productivity, enhancement, communication, and education. At the same time, a growing body of literature addresses ethical considerations with respect to these neurotechnologies (Wexler 2016; Ienca & Adorno 2017; Kreitmair & Cho 2017). The ultimate goal of ethical consumer products is to contribute to human flourishing. As such, there are seven principles which developers must respect if they are to develop ethical consumer neurotechnologies. I take these considerations to be necessary for the development of ethical consumer neurotechnologies, i.e. technologies that contribute to human flourishing, but I am not committed to claiming they are also jointly sufficient.

The seven principles are: 
1. Safety 
2. Veracity 
3. Privacy 
4. Epistemic appropriateness 
5. Existential authenticity 
6. Just distribution 
7. Oversight 

Tuesday, March 27, 2018

Neuroprosthetics for Speech and Challenges in Informed Consent

By Hannah Maslen

Hannah Maslen is the Deputy Director of the Oxford Uehiro Centre for Practical Ethics, University of Oxford. She works on a wide range of topics in applied philosophy and ethics, from neuroethics to moral emotions and criminal justice. Hannah is Co-PI on BrainCom, a 5-year European project working towards the development of neural speech prostheses. Here, she leads the work package on ‘Ethics, Implants and Society’.  

Scientists across Europe are combining their expertise to work towards the development of neuroprosthetic devices that will restore or substitute speech in patients with severe communication impairments. The most ambitious application will be in patients with locked-in syndrome who have completely lost the ability to speak. Locked-in syndrome is a condition in which the patient is awake and retains mental capacity but cannot express himself or herself due to the paralysis of efferent motor pathways, preventing speech and limb movements (except for some form of voluntary eye movement, usually up and down) (1).