Thursday, July 24, 2014

The New Normal: How the definition of disease impacts enhancement

We’ve all been there. It’s exam week of your junior year of college with two papers due the day after a final. You’re a new faculty member with a semester of lectures to prepare and a lab to get started. You’re a tax accountant and it’s early April. There is simply too much to do and not enough hours in the day to get it all done while sleeping enough to keep your brain working like you need it to. In that situation, where do you stand on cognitive enhancement drugs? Most of us wouldn’t hesitate to grab a cup of coffee but what about a caffeine pill, or a friend’s Adderall? Many discussions about cognitive enhancement eventually come down to this question: where do we draw the line? Currently most of the cognitive enhancers that create unease for ethicists and the general public alike are prescription drugs that were originally meant to treat conditions recognized as out of the realm of “normal” such as diseases or deficits. Therefore, a key step in deciding where we should stand on the acceptability of cognitive enhancement is to determine what is normal and what needs to be medically treated. I’ll argue that one reason there is so much gray area in the enhancement debate is that delineating normal from diseased – particularly in the brain – is hardly a black-and-white matter.

Why does the definition of disease matter? Enhancement is typically defined relative to normal abilities. Anjan Chatterjee of the University of Pennsylvania suggested that “Therapy is treating disease, whereas enhancement is improving ‘normal’ abilities. Most people would probably agree that therapy is desirable. By contrast, enhancing normal abilities gives pause to many.”1 However, many neuroethicists have wrestled with clearly defining enhancement2,3. Paul Root Wolpe, director of Emory’s Center for Ethics, argued in 2002 that the enhancement debate centers on the ability of substances or therapeutics to directly affect the brain in ways that are not necessary to restore health. Certainly, to date, the cognitive enhancement debate has focused primarily on pharmaceuticals, many of which are approved to treat disorders but can have effects on healthy individuals as well. Perhaps the best examples are methylphenidate (Ritalin) and modafinil (Provigil), which are prescribed for attention deficit hyperactivity disorder (ADHD) and narcolepsy, respectively, but are increasingly being used by students and professionals to boost cognitive performance at school and in the workplace3-5.


Tuesday, July 15, 2014

Intellectual Property from Clinical Research on Neuropsychiatric Disorders: What Constitutes Informed Consent?

By Elaine F. Walker, Ph.D. & Arthur T. Ryan, M.A.


Elaine Walker is a Professor of Psychology and Neuroscience in the Department of Psychology at Emory University and is the Director of the Development and Mental Health Research Program, which is supported by the National Institute of Mental Health. Her research is focused on child and adolescent development and the brain changes that are associated with adolescence. She is also a member of the AJOB Neuroscience editorial board.

The pace of advances in biomedical research has accelerated in conjunction with new technologies for studying cellular processes. While this progress holds promise for relieving human suffering from a range of illnesses, it also poses significant and thorny questions about the ownership of new knowledge. In June of 2013, the Supreme Court issued a unanimous ruling in Association for Molecular Pathology v. Myriad Genetics, Inc.; all justices agreed that naturally occurring DNA sequences cannot be patented1. This ruling was precipitated by a patent owned by Myriad Genetics on the DNA sequences for the human BRCA1 and BRCA2 genes, which are associated with human variation in susceptibility to cancer. The ruling concluded that genes are products of nature and, therefore, cannot be claimed as the intellectual property (IP) of any individual or commercial entity. Within hours after this ruling, other companies announced that they would offer genetic testing for BRCA1 and BRCA2 at a significantly lower cost than Myriad had been charging for years.

While the Supreme Court's ruling on the patentability of naturally occurring human genetic sequences had broad and immediate implications, it represents only the tip of the iceberg with respect to the contentious issues that will confront intellectual property (IP) rights for future biomedical advances. We can anticipate more ethical and legal debates regarding commercialization in the fields of proteomics (the study of protein structure and function), epigenetics (heritable changes in gene expression that occur without changes to the DNA sequence itself), stem cells, and the study of the human connectome (the map of neural connections in the brain). The implications of the pursuit of patents in these areas will extend to all fields of medicine, but they present some particularly complex problems with regard to the brain disorders that are the province of neurology and psychiatry.

Tuesday, July 8, 2014

Early Intervention and The Schizophrenia Prodrome

On May 7th the Emory University Graduate Students in Psychology and Neuroscience (GSPN) hosted a colloquium talk given by Vijay Mittal, Assistant Professor of Psychology and Neuroscience at the University of Colorado at Boulder. In the talk, titled “Translational Clinical Science in the Psychosis Prodrome: From Biomarkers to Early Identification and Intervention,” Dr. Mittal, who received his Ph.D. from Emory, discussed some of his research on the prodrome for schizophrenia.1

Dr. Vijay Mittal
The prodrome for schizophrenia is a collection of neurological and psychological symptoms that can indicate risk for developing schizophrenia (as has been discussed previously on this blog) prior to the development of clinically relevant symptoms. Research on the prodrome is gaining much attention and funding because it could lead to a better understanding of how schizophrenia develops and better ways to intervene prior to its onset.

Mittal began his talk with a background on the schizophrenia prodrome. He explained that, though schizophrenia usually manifests itself during late adolescence, people who develop schizophrenia exhibit atypical characteristics from a young age, during the premorbid and prodromal stages. In the premorbid stage (which occurs during childhood) some minor cognitive and social impairments are present, though they are hard to differentiate from typical development. In the prodromal stage (which starts during puberty) those traits worsen and new ones develop that are similar to (though less frequent and severe than) the main symptoms of schizophrenia, both positive and negative. Common symptoms of the prodrome include perceptual aberration, paranoia, mild delusions (which can be distinguished from reality2), depression, anhedonia, cognitive decline, and social withdrawal.

The positive, negative, and cognitive symptoms of schizophrenia.
Via dasmaninstitute.org.

Tuesday, July 1, 2014

“Pass-thoughts” and non-deliberate physiological computing: When passwords and keyboards become obsolete

Imagine opening your email on your computer not by typing a number code, a password, or even by scanning a finger, but instead by simply thinking of a password. Physical keys and garage door openers could also become artifacts of the past once they are replaced with what could be referred to as pass-thoughts. Just last year, researchers at UC Berkeley used EEG signals recorded from subjects as biomarker identifiers to allow access to a computer. The entire system – the headset, the Bluetooth device, and the computer – had an error rate of less than 1%.1 While wearing EEG headsets to open our devices may seem futuristic, this type of scenario could become more prevalent in the future due to advances in physiological computing (PC). Physiological computing is a unique form of human-computer interaction because the input device for a computer is any form of real-time physiological data, such as a heart rate or EEG signal. This is in stark contrast to the peripheral devices that we are familiar with today, such as a keyboard, remote, or mouse.2
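The core idea behind such a system can be sketched in a few lines of code. The sketch below is purely illustrative and is not how the Berkeley system was implemented: it assumes that a user's EEG has already been reduced to a small feature vector (for example, hypothetical band-power values per channel) and simply compares a new recording against an enrolled template, accepting the login only if the two are similar enough.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def authenticate(enrolled, attempt, threshold=0.95):
    """Accept the attempt only if it closely matches the enrolled template."""
    return cosine_similarity(enrolled, attempt) >= threshold

# Hypothetical EEG-derived features (illustrative numbers only)
enrolled = [0.42, 0.91, 0.33, 0.78]
genuine  = [0.44, 0.89, 0.35, 0.80]   # same user, slightly noisy re-recording
imposter = [0.90, 0.10, 0.85, 0.12]   # a different user's features

print(authenticate(enrolled, genuine))   # True: within the similarity threshold
print(authenticate(enrolled, imposter))  # False: too dissimilar
```

Real systems use trained classifiers and careful signal preprocessing rather than a fixed threshold, and the choice of threshold is exactly what trades off false acceptances against false rejections, the source of the sub-1% error rate reported above.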

The field of physiological computing is still quite new, but research has suggested that different physiological computers require varying degrees of intentionality from the human user, and that the devices can be placed on a spectrum.3

Via physiologicalcomputing.net

On one end of the spectrum are technologies where users can deliberately interact with input devices based on voluntary muscle movement, such as electrooculography (EOG) to direct the movement of a cursor (shown in 2 on the spectrum).4 In contrast, brain-computer interfaces (BCI), such as the exoskeleton showcased at the recent first kick of the 2014 World Cup, bypass this step, since BCIs are often developed for those with diminished movement capacities and disabilities. However, in both cases the general principle is the same: the interface is ultimately translating a neural signal that the user has specifically and deliberately directed to complete a task.5

Tuesday, June 24, 2014

Should you read more because a neuroscientist said so?

By Lindsey Grubbs

Lindsey Grubbs is a PhD student in the English Department at Emory University, where she is also working on a certificate in bioethics. She holds a master’s degree in English and gender studies from the University of Wyoming. She is interested in the relationship between literature and science, and works with American literature from the nineteenth century until today to interrogate and complicate the boundaries between health and illness, normalcy and aberrance, and physical and mental complaints.

As neuroscientists begin to approach topics usually falling under the purview of other specialties, how can they ethically incorporate various forms of knowledge rather than provide simplified metrics that will, in a data-hungry society, be easier for most to latch onto?

In 2013, we saw the publication of at least two high-profile studies claiming neuroscientific proof for the potential moral benefits of reading fiction. Greg Berns and his associates published “Short- and Long-Term Effects of a Novel on Connectivity in the Brain” in Brain Connectivity (Berns, Blaine, Prietula, & Pye, 2013), and David Comer Kidd and Emanuele Castano published “Reading Literary Fiction Improves Theory of Mind” in Science (Kidd & Castano, 2013). The Berns article makes a relatively modest claim: the day after an evening session reading a novel, test subjects had short-term increased brain connectivity in areas of the brain associated with taking perspectives and understanding narratives, as well as longer-term connectivity, lasting several days, in the bilateral somatosensory cortex. The authors suggest this could help explain the mechanism of “embodied semantics,” the idea that there is somatosensory involvement in the processing of language, as when tactile metaphors like “I had a rough day” activate the somatosensory cortex (Lacey, Stilla, & Sathian, 2012). As suggested by its title, the Kidd and Castano piece makes a more dramatic claim: the authors conducted five experiments and write that reading award-winning literary fiction improves subjects’ theory of mind, both alone and in comparison to nonfiction or popular bestselling fiction. The reaction to these studies in the press reflects a broader mania for neuroscientific evidence and colorful images of the brain1. Why is it necessary, though, to grant scientific authority more weight as evidence than other forms of knowledge?


Tuesday, June 17, 2014

Predicting Alzheimer's Disease: Potential Ethical, Legal, and Social Consequences

By Henry T. Greely, J.D.


Henry T. (Hank) Greely is the Deane F. and Kate Edelman Johnson Professor of Law and Professor, by courtesy, of Genetics at Stanford University. He directs the Stanford Center for Law and the Biosciences and the new Stanford Program in Neuroscience and Society (SPINS). He is also a member of the AJOB Neuroscience Editorial Board.

Would you want to know the date and time of your death? Life-Line, the first published fiction by Robert A. Heinlein, one of the giants of 20th century science fiction, explored that question. The story’s protagonist, Hugo Pinero, had invented a machine that could tell precisely when individuals would die, but, as Pinero found to his distress, he could not intervene to change their fates.

Would you want to know whether you would be diagnosed with Alzheimer disease (AD)? This question is rapidly leaving the realm of science fiction; indeed, it already has for some unlucky people. Our ability to predict who will suffer from this evil (and I chose that word carefully) condition is proceeding on several fronts and may already be coming into clinical use.

This post will briefly note the ways in which AD prediction is advancing and what some of the ethical, legal, and social implications of such an ability would be, before asking “should we care?”


Friday, June 6, 2014

June 9th and 10th: Presidential Commission for the Study of Bioethical Issues at Emory University

The Presidential Commission for the Study of Bioethical Issues is an advisory panel that counsels the President on bioethical issues in light of scientific and medical advances. Most recently, the panel published Gray Matters: Integrative Approaches for Neuroscience, Ethics, and Society as a part of the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative. This document touched on relevant ethical issues related to neuroscience and made recommendations for integrating ethics into various facets of neuroscience research, education, and policy making.

On June 9–10, 2014, a public meeting of the Presidential Commission for the Study of Bioethical Issues will take place at Emory University in the Rollins School of Public Health Building. The complete agenda is listed here; among other topics, the Commission will discuss the BRAIN Initiative and current work taking place in the field of neuroscience. Watch the live webcast and follow AJOB Neuroscience on Twitter if you are unable to attend!



Tuesday, June 3, 2014

Brain Imaging and Neurofeedback: Has Fiction Become Reality?

By Carolyn C. Meltzer, MD

Dr. Carolyn C. Meltzer is a professor at the Emory University School of Medicine Departments of Radiology and Imaging Sciences, Psychiatry and Behavioral Sciences, and Neurology. She is also a member of the AJOB Neuroscience Editorial Board.

“Power is in tearing human minds to pieces and putting them together again in new shapes of your own choosing.”
George Orwell, 1984


In the iconic geopolitical thriller “The Manchurian Candidate,” advanced mind-control techniques are used on a Korean War prisoner to turn him into an assassin. As we move into an era in which functional neuroimaging may be applied in ways akin to “mind reading,” such as lie detection and the study of economic choices, this fictional work more closely mimics reality.

Functional neuroimaging tools have helped us to tease out neuronal networks and to better understand how we think and act in health and disease. With the exception of a few specific instances of validated clinical use (such as mapping of eloquent cerebral cortex prior to resecting a nearby tumor), most behavioral functional imaging studies require group, rather than individual, data.

New research has focused on exploiting brain-computer interfaces that address therapeutic approaches to neurological and psychiatric conditions in individualized care settings. Recording brain activity and using it to modulate behavior or motor activity, or to seek a specific therapeutic outcome, has spawned the field of neurofeedback. Initial applications have used invasive approaches, such as deep brain stimulation in movement disorders and medically intractable depression. More recently, emphasis has turned to non-invasive approaches. Florin and colleagues (2014) demonstrate how real-time magnetoencephalography (MEG) source imaging may be used to modulate the activity of specific targeted brain regions, reinforced by visual feedback to the subject.

Tuesday, May 27, 2014

A review of The Future of the Mind: The Scientific Quest to Understand, Enhance, and Empower the Mind

The Future of the Mind, authored by physicist Dr. Michio Kaku, explores how neuroscience might inform questions that philosophers have been debating for centuries: Do we have a soul? What happens after we die? Do we even have to die? And what would it take to produce a robot with human consciousness or emotions? To explore these questions, Dr. Kaku interviewed hundreds of scientists who are actively conducting groundbreaking work in labs around the world, and from these conversations he made predictions on how these scientific findings would shape our future. The work that Dr. Kaku discusses, such as the latest advances in brain-computer interfaces (BCI) for the disabled,1 recording dream images with MRI machines,2 or implanting memories in mice,3,4 makes for a fascinating and engrossing read from start to finish. The Future of the Mind is at its best when taking readers through these areas of research and explaining their long-term significance; however, many of the neurophilosophical questions posed are largely left to the readers’ imaginations for resolution.

The Future of the Mind is divided into three parts or books, and each book delves progressively deeper into the technology of the future and the type of society that will exist decades and centuries from now. Book I sets the stage for how important physics is for neuroscience; revolutionary technologies such as MRI, PET, and DBS have used basic physics knowledge, as Dr. Kaku notes, to promote the explosion of advances in the field of neuroscience. The state of these technologies in current research is introduced, along with how to conceptualize consciousness, and in Book II, he discusses how these technologies will enable us to conduct acts similar to telepathy and telekinesis, manipulate thoughts and memories, and enhance intelligence. Book III revisits the idea of consciousness and explores the possibilities related to mind-altering technologies, and suggests we reframe our understanding of consciousness beyond a single type (e.g., dreaming, drug-induced states, and mental illnesses). He also suggests that future understandings of consciousness may move beyond humans to include robots and aliens. Book III also explores ideas straight out of science fiction, such as the idea that one day our physical bodies will be too cumbersome for travel to other galaxies through deep space, so we’ll simply leave them behind.


Tuesday, May 20, 2014

Translating Preclinical Test Results into “Real World” Consequences

By Jalayne J. Arias, JD, MA

Jalayne J. Arias is the Associate Director of the NeuroEthics Program and Assistant Professional Staff in the Department of Bioethics at the Cleveland Clinic. Ms. Arias’ work incorporates empirical and conceptual projects addressing critical legal and ethical issues inherent in diagnosing, treating, and researching Alzheimer’s disease and other neurodegenerative conditions. Most recently, she served as the principal investigator for the study Stakeholders’ Perspectives on Preclinical Alzheimer’s Diagnosis: Patients, Families and Care Givers. Her recent publication, Confidentiality in preclinical Alzheimer disease studies (Neurology), addresses confidentiality concerns relevant to biomarker testing in Alzheimer’s.

In 2007, Dr. Dubois and co-authors introduced the concept of prodromal Alzheimer’s disease in their Lancet article revising diagnostic criteria. In 2011, the National Institute on Aging and the Alzheimer’s Association supported a series of papers introducing a new paradigm for diagnostic criteria, including Mild Cognitive Impairment and preclinical Alzheimer’s disease. Both papers and new definitions of Alzheimer’s disease incorporate the discovery of amyloid beta, a biomarker that purports to indicate disease pathology. Because these biomarkers are detectable years before a patient begins experiencing symptoms, they raise the possibility of offering preclinical testing in the clinical context. Yet, as researchers continue to validate biomarkers, little is known about how preclinical test results may affect patients and their families.