Tuesday, January 17, 2017

The Medicalization of Mental Illness in Gun Violence

By Carolyn C. Meltzer, MD

Dr. Meltzer serves as the William P. Timmie Professor and Chair of the Department of Radiology and Imaging Sciences and as the Associate Dean for Research at the Emory University School of Medicine. Her work focuses on applying novel advanced imaging strategies to better understand brain structure-function relationships in normal aging, late-life depression, and Alzheimer’s disease. She is also involved in oncologic imaging research and, while at the University of Pittsburgh, oversaw the clinical evaluation of the world’s first combined PET/CT scanner. She established the Emory Center for Systems Imaging to broadly support the advance of imaging technologies in basic and translational research, including beta testing of the first human combined MRI/PET scanner. Dr. Meltzer has also served as the Chair of the Neuroradiology Commission and Chair of the Research Commission on the American College of Radiology’s Board of Chancellors, President of the Academy of Radiology Research, Trustee of the Radiological Society of North America Foundation, and President of the American Society of Neuroradiology.

On January 6, 2017, a young man retrieved a semiautomatic handgun from his checked baggage and shot and killed five people in the Fort Lauderdale airport. In the days following the incident, reports emerged of his erratic behavior and prior involvement in incidents of domestic abuse.

Tuesday, January 10, 2017

A CRISPR View of Life

By Shweta Sahu

Image courtesy of Wikimedia Commons
We now live in a society where many are trying to get a leg up where they can, whether through pharmacological neuroenhancement (like Ritalin and Adderall) or other neurotechnologies (like transcranial direct current stimulation). Technology also allows us to exert an even earlier influence on neurodevelopmental disorders through prenatal genetic testing of fetuses. Such technologies include amniocentesis and chorionic villus sampling, which screen for Down’s, Edwards’, and Patau’s syndromes and give parents the chance to decide whether to terminate or continue the pregnancy. One article even claims that 53% of all pregnancies were aborted following a prenatal diagnosis of Down’s syndrome, though there is still much dispute over the exact numbers.

Tuesday, January 3, 2017

Future (Brain) Identities, Lost in Translation

On December 6-7, 2016, the 92nd Street Y and the Future Today Institute successfully convened leading "research scientists, technologists, ethicists, policy makers, authors, elected officials, academics and artists to take stock of where we are—and where we are going." 

On December 7, Emory's own Neuroethics Program Director, Dr. Karen Rommelfanger, gave the closing keynote for the Future. Today Summit at the 92Y in New York. The topic of her talk was "Future (Brain) Identities, Lost in Translation."

A preview of her talk can be found below.

Tuesday, December 27, 2016

Is memory enhancement right around the corner?

By Ryan Purcell

“Everyone has had the experience of struggling to remember long lists of items or complicated directions to get somewhere,” Dr. Justin Sanchez of DARPA said in a recent press release. “Today we are discovering how implantable neurotechnologies can facilitate the brain’s performance of these functions.” The US Department of Defense is interested in how the brain forms memories because hundreds of thousands of soldiers – or “warfighters” as they are now called – have suffered from traumatic brain injury (TBI) and some have severe memory problems. Beyond the military, TBI is a major public health concern that affects millions of Americans as patients and caregivers and is incredibly expensive. A breakthrough treatment is needed and for that, ambitious research is required.

But does this research agenda end at treating disease, or could these findings also be applied to memory enhancement goals?

Tuesday, December 13, 2016

Meet Tomorrow's World: A Meeting on the Ethics of Emerging Technologies

By Marcello Ienca

Marcello Ienca, M.Sc., M.A., is a PhD candidate and research assistant at the Institute for Biomedical Ethics, University of Basel, Switzerland. His current projects include the assessment of intelligent assistive technologies for people with dementia and other neurocognitive disabilities, the regulation of pervasive neurotechnology, and the neurosecurity of human-machine interfaces. He is the chair of the Student/Postdoc Committee of the International Neuroethics Society and the current coordinator of the Swiss Network for Neuroscience, Ethics and Law.

Technology is rapidly reshaping the world we live in. In the past few decades, humankind has not changed significantly in biological terms, but human societies have undergone continuous and unprecedented development through technological innovation. Today, most human activities, from messaging to geolocation and from financial transactions to medical therapies, are computer-mediated. In the coming decades, the quantity and variety of activities mediated by digital technology are bound to increase exponentially. In parallel, with advances in artificial intelligence (AI), robotics, and microcomputing, the friction between human and machine is set to vanish and the boundaries at the human-machine interface are bound to blur.

In an attempt to anticipate our technological futures as well as their impact on our societies and systems of values, the International Neuroethics Society (jointly with the Temporal Dynamics of Learning Center, the Science Collaboratory of the University of California, San Diego, and the National Science Foundation) sponsored a public event on the Ethics of Emerging Technologies as part of the 2016 annual INS meeting in San Diego, California. The event was organized by INS President Judy Illes, INS Executive Director Karen Graham, Dr. Rachel Wurzman of the INS Public Session Program Committee, and Prof. Andrea Chiba, Dr. Roger Bingham, and Prof. Deborah Forster of UCSD.

A panel of international experts in various areas of science and ethics gathered in San Diego on November 9 to discuss critical issues emerging at the human-machine interface, with possibly disruptive implications for ethics and society. The first perspective was provided by Dr. William D. Casebeer, a career intelligence analyst and Lieutenant Colonel in the US Air Force. His short talk proposed an interesting analogy between pervasive technology and the art of storytelling to show how technology could actually be used, in the near future, to raise empathy, deliver personalized experiences, and facilitate human interaction.

Tuesday, December 6, 2016

"Inflammation might be causing depression": Stigma of mental illness, reductionism, and (mis-)representations of science

by Katie Givens Kime

Image courtesy of Flickr
“Is depression a kind of allergic reaction?” Provocative headlines like these appear throughout popular media. Besides misrepresenting scientific findings, such journalistic coverage impacts perceptions of mental illness, as well as the expectations of those seeking treatment. In last month’s Neuroethics in the News talk, Dr. Jennifer Felger, from Emory’s Department of Psychiatry and Behavioral Sciences, shared her experiences and insights on the translation (and mistranslation) of research by journalists. In relating the story of her own interactions with the media, Felger emphasized the complex and varying transactional relationships between journalists and scientists. The impact of such coverage carries notable neuroethical dimensions, potentially affecting the capacity for agency and/or aspects of a sense of self for a person experiencing mental illness.

Tuesday, November 29, 2016

"American Horror Story" in Real Life: Understanding Racialized Views of Mental Illness and Stigma

By Sunidhi Ramesh

Racial and ethnic discrimination have taken various forms in the
United States since its formation as a nation. The sign in the image
reads: "Deport all Iranians. Get the hell out of my country."
Image courtesy of Wikipedia.
From 245 years of slavery to indirect racism in police sanctioning and use of force, minority belittlement has remained rampant in American society (1). There is no doubt that this history has left minorities in the United States with a different understanding of what it means to be American and, more importantly, what it means to be an individual within the larger human community.

Generally, our day-to-day experiences shape the values, beliefs, and attitudes that allow us to navigate the real world (2). For minorities, then, consistent exposure to experiences of belittlement and discrimination can shape subjective perceptions that, in turn, mold larger perspectives and viewpoints.

Last spring, I conducted a project for a class to examine the reception (3) among white and non-white, or persons of color (POC), students of part of an episode from American Horror Story: Freak Show. The video I asked them to watch portrays a mentally incapacitated woman, Pepper, who is wrongfully framed for the murder of her sister’s child. The character’s blatant scapegoating is shocking not only for the lack of humanity it portrays but also for the reality it captures: being a human being in society while not being viewed as human.

Although the episode is something of an exaggeration, the opinions of the interview respondents in my project ultimately suggested that there exists a racial basis for perceiving Pepper’s mental disabilities, one that may indeed be deeply rooted in the racial history of the United States.

Tuesday, November 22, 2016

Debating the Replication Crisis - Why Neuroethics Needs to Pay Attention

By Ben Wills

Ben Wills studied Cognitive Science at Vassar College, where his thesis examined cognitive neuroscience research on the self. He is currently a legal assistant at a Portland, Oregon law firm, where he continues to hone his interests at the intersections of brain, law, and society.

In 2010 Dana Carney, Amy Cuddy, and Andy Yap published a study showing that assuming an expansive posture, or “power pose,” leads to increased testosterone levels, task performance, and self-confidence. The popular media and public swooned at the idea that something as simple as standing like Wonder Woman could boost performance and confidence. A 2012 TED talk that author Amy Cuddy gave on her research has become the site’s second-most-watched video, with over 37 million views. Over the past year or so, however, the power pose effect has gradually fallen out of favor in experimental psychology. A 2015 large-sample replication study by Ranehill et al. concluded that power posing affects only self-reported feelings of power, not hormone levels or performance. This past September, reflecting mounting evidence that power pose effects are overblown, co-author Dana Carney disavowed the construct, stating, “I do not believe that ‘power pose’ effects are real.”

What happened?
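Part of the answer lies in the statistics of small samples. A toy simulation (purely illustrative, with no connection to the actual power pose data) shows one mechanism behind such reversals: among many small studies of a null effect, chance alone yields a steady trickle of "statistically significant" results, and those results necessarily overstate the effect size.

```python
# Illustrative simulation: many small two-group studies of a TRUE null
# effect. Some cross a crude significance threshold by chance, and the
# "significant" ones report inflated effect sizes.
import random
import statistics

random.seed(0)

def one_study(n=20, true_effect=0.0):
    """Simulate a two-group study; return the observed mean difference."""
    control = [random.gauss(0, 1) for _ in range(n)]
    treated = [random.gauss(true_effect, 1) for _ in range(n)]
    return statistics.mean(treated) - statistics.mean(control)

# Run 1,000 small studies with no real effect.
diffs = [one_study() for _ in range(1000)]

# Crude "significance" screen: |difference| beyond ~2 standard errors.
se = (2 / 20) ** 0.5
significant = [d for d in diffs if abs(d) > 2 * se]

print(f"false positives: {len(significant)} / 1000")
print(f"mean |effect| among 'significant' studies: "
      f"{statistics.mean(abs(d) for d in significant):.2f}")
```

Roughly 5% of these null studies clear the threshold, and every one of them reports an effect larger than two standard errors — a selection process that a single failed replication can then expose.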

Tuesday, November 15, 2016

The 2016 Kavli Futures Symposium: Ethical foundations of Novel Neurotechnologies: Identity, Agency and Normality

By Sean Batir (1), Rafael Yuste (1), Sara Goering (2), and Laura Specker Sullivan (2)

Image from Kavli Futures Symposium
(1) Neurotechnology Center, Kavli Institute of Brain Science, Department of Biological Sciences, Columbia University, New York, NY 10027

(2) Department of Philosophy, and Center for Sensorimotor Neural Engineering, University of Washington, Seattle, WA 98195

Detailed biographies for each author are located at the end of this post.

Few would deny the divide, often described as the “two cultures,” between the humanities and the sciences. This divide must be broken down if humanistic progress is to be made in the future of transformative technologies. The 2016 Kavli Futures Symposium, held by Dr. Rafael Yuste and Dr. Sara Goering at the Neurotechnology Center of Columbia University, addressed this divide by curating an interdisciplinary dialogue between leading neuroscientists, neural engineers, and bioethicists across three broad topics: identity and mind reading, agency and brain stimulation, and definitions of normality in the context of brain enhancement. The message of such an event is clear: dialogue between neurotechnology and ethics is necessary because novel neurotechnologies are poised to generate a profound transformation in our society.

Tuesday, November 8, 2016

On the ethics of machine learning applications in clinical neuroscience

By Philipp Kellmeyer

Dr. med. Philipp Kellmeyer, M.D., M.Phil. (Cantab) is a board-certified neurologist working as a postdoctoral researcher in the Intracranial EEG and Brain Imaging group at the University of Freiburg Medical Center, Germany. His current projects include the preparation of a clinical trial for using a wireless brain-computer interface to restore communication in severely paralyzed patients. In neuroethics, he works on ethical issues of emerging neurotechnologies. He is a member of the Rapid Action Task Force of the International Neuroethics Society and the Advisory Committee of the Neuroethics Network.

What is machine learning, you ask? 
As a brief working definition up front: machine learning refers to software that can learn from experience and is thus particularly good at extracting knowledge from data and at generating predictions [1]. Recently, one particularly powerful variant called deep learning has become the staple of much recent progress (and hype) in applied machine learning. Deep learning uses biologically inspired artificial neural networks with many processing stages (hence the word "deep"). These deep networks, together with ever-growing computing power and ever-larger datasets for learning, now deliver groundbreaking performance on many tasks. For example, Google’s AlphaGo program, which comprehensively beat a Go champion in January 2016, uses deep learning algorithms for reinforcement learning (analyzing 30 million Go moves and playing against itself). Despite these spectacular (and media-friendly) successes, however, the interaction between humans and algorithms may also go badly awry.
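The working definition above can be made concrete with a minimal from-scratch sketch: a single artificial neuron "learns from experience" (labeled examples) by repeatedly adjusting its weights, then generates predictions for inputs it has never seen. The data are toy values invented for illustration; real deep networks simply stack many such units across many layers.

```python
# A single logistic neuron trained by gradient descent on toy data.
# Labels follow a simple rule: 1 if x1 + x2 > 1, else 0.
import math

train = [((0.0, 0.2), 0), ((0.9, 0.8), 1), ((0.1, 0.4), 0),
         ((1.0, 0.9), 1), ((0.3, 0.1), 0), ((0.7, 0.9), 1)]

w = [0.0, 0.0]   # weights, updated during learning
b = 0.0          # bias term

def predict_prob(x):
    """Sigmoid of a weighted sum: the neuron's predicted probability."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))

# "Learning from experience": nudge weights to shrink prediction error.
for epoch in range(2000):
    for x, y in train:
        err = predict_prob(x) - y
        w[0] -= 0.5 * err * x[0]
        w[1] -= 0.5 * err * x[1]
        b -= 0.5 * err

# Generate predictions for cases the model never saw during training.
for x in [(0.95, 0.85), (0.05, 0.1)]:
    print(x, round(predict_prob(x)))
```

The same learn-then-predict loop, scaled up to millions of parameters and clinical-sized datasets, underlies the applications discussed in this post — which is exactly why the quality and provenance of the training data become an ethical question.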