Wednesday, November 14, 2018

Me, Myself, and my Social Constructs

By Ashley Oldshue

“He began to search among the infinite series of impressions which time had laid down, leaf upon leaf, fold upon fold softly, incessantly upon his brain”
--- Virginia Woolf, To the Lighthouse

Image courtesy of Tomas Castelazo, Wikimedia Commons
Identity is a motif that runs central to our lives; it is woven into our language, our learning, and our literature. Virginia Woolf, in her novel To the Lighthouse, describes identity as a flipbook of images (Woolf, 1981, p. 169). She asserts that when we look at someone, we do not hold a single, uniform concept of them. Instead, we see a series of images and interactions running like a flipbook in our heads. It is to this idea of who they are that we add pages, letting it evolve over time. However, no one deed can erase all the rest. Everybody is made up of good and bad, and these inconsistencies together form an identity. But what if someone changed so drastically that it was like reading a whole new book?

Tuesday, November 13, 2018

Neuralink: Concerns of Brain-Machine Interfaces

By Oscar Gao

Introduction 
Image courtesy of Nicolas Ferrando and Lois Lammerhuber, Flickr
When Elon Musk starts a company developing brain-machine interfaces, you know it has the potential to be the next big thing. He has claimed that for people to be competitive in the artificial intelligence age, we will have to become cyborgs, a “merger of biological intelligence and machine intelligence” (Marsh, 2018; Solon, 2017). He started the company Neuralink, which aims to build “ultra high bandwidth brain-machine interfaces to connect humans and computers.” At the moment, the company is hiring computer scientists and engineers who have “exceptional ability and a track record of building things that work” (“NEURALINK”, n.d.). As its website also specifies, applicants do not need experience in neuroscience. The company does, however, need to work with neuroscientists and neuroethicists to discuss the ethical implications of and guidelines for its projects.

Tuesday, November 6, 2018

Medicine & Neuroethics: Perspectives from the Intersection

By Somnath Das

Image courtesy of publicdomainpictures.net.
The first year of medical school is infamously rigorous – it both challenges and changes virtually anyone who dares to undertake it. My experience with this trial was certainly not unique. Despite the knowledge I have gained (on paper, at least), I greatly missed learning about a passion of mine: neuroethics. June marked the two-year anniversary of my attending the Neuroethics in Paris study abroad course hosted by Emory University, which served as the foundation of my exposure to this field. I additionally had the pleasure of taking a graduate neuroethics course offered by the Emory Center for Ethics Masters of Bioethics Program during my time at Emory, which was a more rigorous, yet very essential and fulfilling, dive into the field. Given my previous exposure, it felt odd to begin medical school with little opportunity to formally engage in the field of neuroethics. While my experience with the first year of medical school did not include formal content in neuroethics, I couldn’t help but notice multiple parallels between the two fields, which I will briefly discuss in this blog post. Ultimately, it is my belief that physicians must pay attention to, study, and engage in the field of neuroethics. In this post, I illustrate the reasons for holding this belief by highlighting some of the critical discussions present in both fields; it is my hope that these debates balloon to involve many doctors and patients in the near future.

Tuesday, October 30, 2018

Phenomenology of the Locked-in Syndrome: Time to Move Forward

By Fernando Vidal

Image courtesy of Wikimedia Commons.
The main features of the locked-in syndrome (LIS) explain its name: persons in LIS are tetraplegic and cannot speak, but have normal visual perception, consciousness, cognitive functions and bodily sensations. They are “locked in” an almost entirely motionless body. A condition of extremely low prevalence identified and named in 1966, LIS most frequently results from a brainstem stroke or develops in the advanced stage of a neurodegenerative disease such as amyotrophic lateral sclerosis (ALS), which affects the motor neuron system and leads to paralysis. LIS takes three forms. In total or complete LIS (CLIS), patients lack all mobility; in classic LIS, blinking or vertical eye movements are preserved; in incomplete LIS, other voluntary motion is possible. Mortality is high in the early phase of LIS of vascular origin, but around 80% of patients who become stable live ten years, and 40% live twenty years, after entering the locked-in state. Persons who are locked in as a consequence of stroke or traumatic injury sometimes evolve from classic to incomplete LIS. They can usually communicate via blinking or vertical eye movement, by choosing letters from an alphabet spell board. When additional movements are regained, they facilitate the use of a computer. It is hoped that brain-computer interfaces (BCIs) will enable CLIS patients to communicate too.

Tuesday, October 23, 2018

Normalization of Enhancement: Recap of September’s The Future Now: NEEDs

By Nathan Ahlgrim

As I sit down to write this post, I have just consumed my first Nerv shot. It actually tastes quite nice, the penetrating citrus sensation gone in a couple of gulps. The taste, however, is secondary; it’s marketed as “Liquid Zen.” At September’s The Future Now: Neuroscience and Emerging Ethical Dilemmas Series (NEEDs), Dr. Michael Jiang presented his motivation for co-founding and developing Nerv. His presentation began just how his company did, with a simple question: “Who here drinks coffee?”

Tuesday, October 16, 2018

What can neuroscience tell us about ethics?

By Adina L. Roskies

Image courtesy of Bill Sanderson, Wellcome Collection
What can neuroscience tell us about ethics? Some say nothing – ethics is a normative discipline that concerns the way the world should be, while neuroscience is normatively insignificant: it is a descriptive science which tells us about the way the world is. This seems in line with what is sometimes called “Hume’s Law”, the claim that one cannot derive an ought from an is (Cohon, 2018). This claim is contentious and its scope unclear, but it certainly does seem true of demonstrative arguments, at the least. Neuroethics, by its name, however, seems to suggest that neuroscience is relevant for ethical thought, and indeed some have taken it to be a fact that neuroscience has delivered ethical consequences. It seems to me that there is some confusion about this issue, and so here I’d like to clarify the ways in which I think neuroscience can be relevant to ethics.

Wednesday, October 10, 2018

Ethical Considerations for Emergent Neuroprosthetic Technology

By Emily Sanborn

Image courtesy of Wikimedia Commons
In the 21st century, there is a push towards producing neurotechnology that will make our lives easier. One category of these technologies is neuroprosthetics: devices that can supplement or supplant the input or output of the nervous system to achieve normal function (Leuthardt, Roland, and Ray, 2014). The emergence of these technologies presents ethical issues and raises a question: are we fixing what is not broken? (Moses, 2016).

A recent article from Smithsonian magazine reported on a technology that could allow humans to develop a “sixth sense” (Keller, 2018). David Eagleman, an adjunct professor in Stanford University’s department of Psychiatry and Behavioral Science, invented a sensory augmentation device called the Versatile Extra-Sensory Transducer (VEST), a vest covered with vibratory motors that is worn on the body. VEST works by receiving auditory signals from speech and the surrounding environment and translating those signals, via Bluetooth, into vibrations. The vibrations are transmitted to the vest in dynamic patterns that correlate to specific speech and auditory signals. The user is then able to feel the sonic world. In time, they may be able to use this new touch sensation to understand spoken words (Eagleman, 2015).
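For readers curious how sound might become touch, here is a minimal Python sketch of the general frequency-to-vibration principle. This is an illustration only, not Eagleman’s actual algorithm; the frame size, motor count, and log-compression scheme are assumptions chosen for clarity.

```python
import numpy as np

def frame_to_motor_intensities(frame, n_motors=32):
    """Map one short frame of audio onto vibration intensities
    for an array of motors (toy frequency-band mapping)."""
    # Magnitude spectrum of the windowed frame.
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    # One contiguous frequency band per motor; mean energy per band.
    energies = np.array([band.mean() for band in np.array_split(spectrum, n_motors)])
    # Log-compress (perceived loudness is roughly logarithmic) and
    # normalize to 8-bit duty cycles for the vibration motors.
    compressed = np.log1p(energies)
    peak = compressed.max()
    if peak == 0:
        return np.zeros(n_motors, dtype=np.uint8)
    return (compressed / peak * 255).astype(np.uint8)

# Example: one 32 ms frame of stand-in audio at 16 kHz (512 samples).
print(frame_to_motor_intensities(np.random.randn(512)))
```

Run continuously over successive frames, a mapping like this produces the “dynamic patterns” of vibration the article describes, with each motor standing in for a slice of the audible spectrum.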

Tuesday, October 9, 2018

An injection of RNA may transfer memories?

By Gabriella Caceres

Figure 1. Image by Bédécarrats et al. 2018
Imagine a future in which you could tell your spouse about your day by simply transferring the memory to them, or one in which you could pass your memories on even after your death. These scenarios may seem far off in the future, but steps are definitely being taken towards this development. To combat our natural memory inaccuracy and its decline due to old age or Alzheimer’s disease, which has been found in 1 out of every 10 people over 65 years old (WHO, 2017), scientists are beginning to investigate the biology of memory and the ways in which the process of making memories can be improved. A recent and controversial article published by Science News reported that RNA may be used to transfer memories from one sea slug to another. Bédécarrats et al. (2018) claimed that they were able to transfer memories between sea slugs (Aplysia californica) by first sensitizing donor slugs with shocks until they had a long-lasting withdrawal response to touch. Then, the researchers extracted the RNA from the sensory neurons of the shocked slugs and injected that RNA into the sensory neurons of non-sensitized sea slugs (Figure 1). The authors postulated that the sensitization occurred because the donor sea slug underwent epigenetic changes, in which a methyl group attaches to the DNA and modulates gene expression (D’Urso et al. 2014). This whole process resulted in a transfer of sensitization (a form of implicit, or unconscious, memory) to the recipient slug, as it experienced the same long-lasting response to touch that the donor slug did.

Tuesday, October 2, 2018

How to be Opportunistic, Not Manipulative

By Nathan Ahlgrim

Opportunistic Research
Government data is often used to answer key research questions. Image courtesy of the U.S. Census Bureau

Opportunistic research has a long and prosperous history across the sciences. Research is classified as opportunistic when researchers take advantage of a special situation. Quasi-experiments enabled by government programs, unique or isolated populations, and once-in-a-lifetime events can all trigger opportunistic research where no experiments were initially planned. Opportunistic research is not categorically problematic. If anything, it is categorically efficient. Many a study could not be ethically, financially, or logistically performed in the context of a randomized controlled trial.

Biomedical research is certainly not the only field that utilizes opportunistic research, but it does present additional ethical challenges. In contrast, many questions in social science research can only be ethically tested via opportunistic research, since funding agencies are wary of explicitly withholding resources from a ‘control’ population (Resch et al., 2014). We, as scientists, are indebted to patients who choose to donate their time and bodies to participate in scientific research while inside an inpatient ward; their volunteerism is the only way to perform some types of research.

Almost all information we have about human neurons comes from generous patients. For example, patients with treatment-resistant epilepsy can have tiny wires lowered into their brains, a technique known as intracranial microelectrode recording, enabling physicians to listen in on the neuronal chatter at a resolution normally restricted to animal models (Inman et al., 2017; Chiong et al., 2018). Seizures, caused by runaway excitation of the brain, are best detected by recording electrical signals throughout the brain. By having such fine spatial resolution inside a patient’s brain, surgeons can be incredibly precise in locating the site of the seizure and treating the patient. It’s what else those wires are used for that introduces thorny research ethics.

Wednesday, September 26, 2018

Caveats in Quantifying Consciousness

This piece belongs to a series of student posts written during the Neuroscience and Behavioral Biology Paris study abroad program taught by Dr. Karen Rommelfanger in June 2018.

By Ankita Moss

Image courtesy of Flickr user Mike MacKenzie.
As I was listening to a presentation during the 2018 Neuroethics Network Conference in Paris, a particular phrase resonated with me: we must now contemplate the existence of “the minds of those that never lived.”

Dr. John Harris, a professor at the University of Manchester, discussed both the philosophical and practical considerations of emerging artificial intelligence technologies and their relationship to human notions of the theory of mind, or the ability to interpret the mental states of both oneself and others and use this to predict behavior.

Upon hearing this phrase and relating it to theory of mind, I immediately began to question my notions of “the self” and consciousness. To UC Berkeley philosopher Dr. Alva Noe, one manifests consciousness by building relationships with others, acting deliberately on the external environment in some capacity. Conversely, a group of Harvard scientists claim they have found the mechanistic origin of consciousness, a connection between the brainstem region responsible for arousal and regions of the brain that contribute to awareness.

Tuesday, September 25, 2018

Artificial Emotional Intelligence

This piece belongs to a series of student posts written during the Neuroscience and Behavioral Biology Paris study abroad program taught by Dr. Karen Rommelfanger in June 2018.

By Ruhee Patel

Image courtesy of Pexels user Mohamed Hassan
In the race for more effective marketing strategies, an enormous step forward came with artificial emotional intelligence (emotion AI). Companies have developed software that can track someone’s emotions over a given period of time. Affectiva is a company that develops emotion AI for companies to facilitate more directed marketing for consumers. Media companies and product brands can use this information to show consumers more of what they want to see based on products that made them feel positive emotions in the past.

Emotion tracking is accomplished by recording slight changes in facial expression and movement. The technology relies on algorithms that can be trained to recognize features of specific expressions (1). Companies such as Unilever are already using Affectiva software for online focus groups to judge reactions to advertisements. Hershey is also partnering with Affectiva to develop an in-store device that prompts shoppers to smile in exchange for a treat (2). Facial emotion recognition usually works through either machine learning or a geometric feature-based approach. The machine learning approach involves feature extraction, feature selection, training of the machine learning algorithms, and data classification. In contrast, the geometric feature-based approach standardizes the images before facial component detection and the decision function. Some investigators have reached over 90% emotion recognition accuracy (3). Emotion AI can even measure heart rate by monitoring slight fluctuations in the color of a person’s face. Affectiva has developed software that would work through web cameras in stores or, in the case of online shopping, in computers. Affectiva also created Affdex for Market Research, which provides companies with calculations based on the Affectiva database, so companies have points of comparison when making marketing decisions.
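To make the machine-learning approach concrete, here is a minimal Python sketch that trains a classifier on landmark-derived facial features. The features and labels below are random stand-ins, not Affectiva’s data or algorithms; a real system would extract geometric features (e.g., distances between mouth corners and eyebrows) from large corpora of labeled facial images.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in data: 600 faces, each described by 20 landmark-derived
# geometric features, labeled 0=neutral, 1=happy, 2=surprised.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 20))
y = rng.integers(0, 3, size=600)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Feature scaling followed by a support-vector classifier: one common
# recipe for expression recognition once features are extracted.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")  # ~chance on random data
```

The reported 90%-plus accuracies come from pipelines of this general shape trained on real expression datasets, often with deep networks in place of the SVC.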

Tuesday, September 18, 2018

NeuroTechX and Future Considerations for Neurotechnology

By Maria Marano

Image courtesy of Wikimedia Commons
As society has seen bursts of activity in the technology sector, we are continually discovering ways to harness these new advances. While some fields, such as artificial intelligence and machine learning, have already been massively exploited by industry, neurotechnology hasn’t fully broken into consumer markets1. Generally, neurotechnology refers to any technology associated with the brain. Consumer products that use brain activity to modulate behaviour, such as the Muse headband, do exist, but neurotech remains predominantly in the hands of researchers and the science community1. As neurotechnological advances begin to take centre stage and become a part of the 21st-century zeitgeist, the ethical implications of these technologies must be fully appreciated and addressed2. One area of concern is the fear that limited access to neurotech will create further discrepancies between regions with regards to quality of life.

Ultimately, developers expect neurotechnology to be utilized for clinical purposes1. Brain-computer interface products are currently used to enhance meditation3 and attention4, but the primary goal is to use neurotechnology for therapeutics5. Prominent present-day examples of neurotech in the healthcare industry include virtual reality therapies for stroke rehabilitation6, phobias7, and autism spectrum disorders8. Unfortunately, as more of these fields develop and prosper, the improvements to health and wellness will be restricted to those who can access neurotechnologies. Furthermore, as Elon Musk, Bryan Johnson, and others work towards “cognitive enhancement” devices, “enhanced” individuals could easily gain an advantage over the unenhanced9. As is so often the case, these advantages will likely be conferred on those in developed nations and, more specifically, wealthier individuals first. This distribution has the potential to exacerbate existing socio-economic differences; therefore, it is essential that as a society we democratically monitor progress and dictate guidelines as the neurotechnology industry advances.

Wednesday, September 12, 2018

Ethical Implications of the Neurotechnology Touchpoints

This piece belongs to a series of student posts written during the Neuroscience and Behavioral Biology Paris study abroad program taught by Dr. Karen Rommelfanger in June 2018.

By Janet Guo

The TouchPoint Solution™ (commonly referred to as TouchPoints™) is a noninvasive neurotechnology device that one can wear on any part of the body. The device can be accessorized (a detachable wristband comes in each pack), so it can be worn like a watch or placed inside a pocket or sock. The founders of TouchPoints™, Dr. Amy Serin and entrepreneur Vicki Mayo, consider it to be a neuroscientific device because of the bilateral alternating stimulation tactile (BLAST) action it allows the user’s brain to undergo. This is a device that can affect people in good health or those who suffer from a neurologic disease, and it is therefore classifiable as a neuroscientific device by the broad scientific definition proposed by Illes & Lombera (2009). The website even claims that the device helps the brain “create new neural pathways that are net positive” and has a “lasting effect on your brain.” In many of the TouchPoints™ advertisements (many of which can be found on the official TouchPoints™ YouTube channel), the devices are claimed to relieve stress by 70% in under 30 seconds.

TouchPoints™ was originally launched in late 2015 with the mission of bringing relief to people who have high levels of stress and anxiety. The technology has been through several developments, and newer, cheaper versions have been released since its initial launch. Its presence in news media has also been increasing: Huffington Post (Wolfson, 2017), Mashable (Mashable staff, 2017), and The Washington Times (Szadkowski, 2017) are only a few of the popular news and opinion websites that have published pieces about TouchPoints™. An investigation of the science and ethics behind this device is warranted as sales increase with the company’s international expansion, an expansion highlighted by founder Dr. Amy Serin at the 2017 SharpBrains Virtual Summit: Brain Health & Enhancement in the Digital Age (SharpBrains, 2018).

Tuesday, September 11, 2018

The future of an AI artist

This piece belongs to a series of student posts written during the Neuroscience and Behavioral Biology Paris study abroad program taught by Dr. Karen Rommelfanger in June 2018.

By Coco Cao

An example of AI-generated art
Image courtesy of Flickr
An article published in New Scientist entitled “Artificially intelligent painters invent new styles of art” caught my attention. The article discussed a recent study by Elgammal et al. (2017), who developed a computational creative system for art generation, the Creative Adversarial Network (CAN), based on the Generative Adversarial Network (GAN), which has the ability to generate novel images simulating a given distribution. A GAN consists of two neural networks, a generator and a discriminator. To create the CAN, the researchers trained the discriminator with 75,753 artworks from 25 art styles so that it learned to categorize artworks by style. The discriminator also learned to distinguish between art and non-art pieces, based on the learned art styles. The discriminator’s feedback then corrects the generator, the network that generates art pieces. The generator eventually learns to produce art pieces that are indistinguishable from human-produced ones. While ensuring the art piece is still aesthetically pleasing, CAN generates abstract art that enhances creativity by maximizing deviation from established art styles.
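To make the generator/discriminator interplay concrete, below is a heavily simplified PyTorch sketch of the CAN training idea. Vectors stand in for images, the tiny networks and hyperparameters are placeholders, and the loss terms follow my reading of Elgammal et al.’s description: the discriminator learns real-vs-fake plus style classification, while the generator is rewarded for output that looks like art but leaves the style classifier maximally uncertain.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT, IMG, N_STYLES, BATCH = 64, 256, 25, 32

# Placeholder networks: flat vectors stand in for images.
G = nn.Sequential(nn.Linear(LATENT, 128), nn.ReLU(), nn.Linear(128, IMG), nn.Tanh())
D_body = nn.Sequential(nn.Linear(IMG, 128), nn.ReLU())
D_real = nn.Linear(128, 1)          # head: real art vs. generated
D_style = nn.Linear(128, N_STYLES)  # head: which of the 25 styles (the CAN addition)

bce = nn.BCEWithLogitsLoss()
opt_d = torch.optim.Adam(list(D_body.parameters()) + list(D_real.parameters())
                         + list(D_style.parameters()), lr=2e-4)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)

for step in range(3):  # a few illustrative steps; real training runs far longer
    real = torch.rand(BATCH, IMG) * 2 - 1          # stand-in batch of artworks
    styles = torch.randint(0, N_STYLES, (BATCH,))  # their style labels
    fake = G(torch.randn(BATCH, LATENT))

    # Discriminator: separate real from fake AND classify real art's style.
    h_real, h_fake = D_body(real), D_body(fake.detach())
    d_loss = (bce(D_real(h_real), torch.ones(BATCH, 1))
              + bce(D_real(h_fake), torch.zeros(BATCH, 1))
              + F.cross_entropy(D_style(h_real), styles))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: look like art, but push the style head toward a uniform
    # distribution over styles ("style ambiguity" = deviating from known styles).
    h_fake = D_body(fake)
    log_probs = F.log_softmax(D_style(h_fake), dim=1)
    style_ambiguity = -log_probs.mean(dim=1).mean()  # cross-entropy vs. uniform
    g_loss = bce(D_real(h_fake), torch.ones(BATCH, 1)) + style_ambiguity
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The key design choice is that opposing pressure: the real-vs-fake loss keeps the output recognizable as art, while the style-ambiguity loss pushes it away from every style the discriminator knows.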

After learning about AI’s ability to be “creative” and generate art pieces, I was frightened. Unlike AI’s application in a scientific context, AI in an art context elicits human feelings. Is it possible that AI artists could replace human artists in the future? Considering the importance of an author’s creativity and originality in art, the critical ethical concern regards the individuality of AI artists. Can we consider art pieces generated by AI to be expressions of the AI itself?

Tuesday, September 4, 2018

Organoids, Chimeras, Ex Vivo Brains – Oh My!

By Henry T. Greely

Image courtesy of Wikimedia Commons
At about the time of the birth of modern neuroethics, Adina Roskies usefully divided the field into two parts: the neuroscience of ethics, what neuroscience can tell us about ethics, and the ethics of neuroscience, what ethical issues neuroscience will bring us (1). At some point, in my own work, I broke her second point into the ethics of neuroscience research and the ethical (and social and legal) implications of neuroscience for the non-research world. (I have no clue now whether that was original with me.)

The second part of Roskies’ division of neuroethics, the ethics of neuroscience research, has always had a special place in my heart because early work in it really helped mold the field we have today. In the early ‘00s, groups that mixed scientists, physicians, and ethicists, largely through the efforts of Judy Illes, explored what to do about abnormal brain scans taken from otherwise healthy volunteers. (See, e.g., 2, 3) It had become clear that, in the computer-generated imagery of a brain MRI, more than 20 percent of “the usual subjects” (college undergraduates, usually psychology majors) and about half of “mature” subjects had something “odd” in their brains. These oddities ranged from variations of no clinical significance, such as “silent” blockages or benign tumors, to potentially very serious problems, such as malignant tumors or large “unpopped” aneurysms. Happily, only small fractions of those oddities held clinical significance, but this still posed hard questions for researchers, many of whom were not themselves clinicians. What, if anything, should they tell, and to whom? And so, working together, scientists, clinicians, and ethicists talked with each other, learned from each other, and came up with useful answers, usually involving both changes to the consent process and a procedure for expert review of some worrisome scans.

Tuesday, August 28, 2018

Smart AI

By Jonathan D. Moreno

Image courtesy of Flickr
Experiments that could enhance rodent intelligence are closely watched, and long-term worries about super-intelligent machines are everywhere. But unlike with smart mice, we’re not talking about industry standards for the almost daily steps toward computers that possess at least near-human intelligence. Why not?

Computers are far more likely to achieve human or near-human intelligence than lab mice, however remote the odds for either. The prospects for making rodents smarter with implanted human neurons have dimmed as the potential for a smart computer continues to grow. For example, a recent paper on systems of human neurons implanted in mice didn’t make them smarter maze-runners. By contrast, in 2016 a computer program called AlphaGo showed it could defeat a professional human Go player. Such machine-learning algorithms continue to teach themselves new, human-like skills, like facial recognition (except, of course, that they are better at it than the typical human).

Tuesday, August 21, 2018

Worrisome Implications of Lack of Diversity in Silicon Valley

By Carolyn C. Meltzer, MD

Image courtesy of Wikimedia Commons
The term “artificial intelligence” (AI) was first used in 1955 by John McCarthy of Dartmouth College to describe complex information processing (McCarthy 1955). While the field has progressed slowly since that time, recent advancements in computational power, deep learning and neural network systems, and access to large datasets have set the stage for the rapid acceleration of AI.  While there is much painstaking work ahead before transformational uses of AI catch up with the hype (Kinsella 2017), substantial impact in nearly all aspects of human life is envisioned. 

AI is being integrated in fields as diverse as medicine, finance, journalism, transportation, and law enforcement. AI aims to mimic human cognitive processes, as imperfect as they may be. Our human tendencies to generalize common associations, avoid ambiguity, and more tightly identify with others who are more like ourselves may help us navigate our world efficiently, yet how they may translate into our design of AI systems remains unclear. As is typically the case, technology is racing ahead of our ability to consider the societal and ethical consequences of its implementation (Horvitz 2017).

Tuesday, August 14, 2018

The Stem Cell Debate: Is it Over?

By Katherine Bassil

Image courtesy of Flickr
In 2006, Yamanaka revolutionized the use of stem cells in research by revealing that adult mature cells can be reprogrammed to their precursor pluripotent state (Takahashi & Yamanaka, 2006). A pluripotent stem cell is a cell characterized by the ability to differentiate into each and every cell of our body (Gage, 2000). This discovery not only opened up new doors to regenerative and personalized medicine (Chun, Byun, & Lee, 2011; Hirschi, Li, & Roy, 2014), but it also overcame the numerous controversies that accompanied the use of embryonic stem (ES) cells for research purposes. For instance, one of the concerns raised by the public and scholars was that human life, at every stage of development, has dignity and as such requires rights and protections (Marwick, 2001). Thus, the use of biological material from embryos violates these rights, and the research findings gathered from this practice do not overrule basic human dignity. With a decline in the use of ES cells in research, the use of induced pluripotent stem (iPS) cells opened up avenues for developing both two- and three-dimensional (2D and 3D, respectively) cultures that model human tissues and organs for both fundamental and translational research (Huch & Koo, 2015). While the developments in this field are still in an early phase, they are expected to grow significantly in the near future, thereby triggering a series of ethical questions of their own.

Tuesday, August 7, 2018

Is the concept of “will” useful in explaining addictive behaviour?

By Claudia Barned and Eric Racine

Image courtesy of Flickr
The effects of substance use and misuse have been key topics of discussion given the impact on healthcare costs, public safety, crime, and productivity (Gowing et al., 2015). The alarming global prevalence rates of substance use disorder and subthreshold “issues” associated with alcohol and other drugs have also been a cause for concern. For example, in the United States, with a population of over 318 million people (Statista, 2018), 21.5 million people were classified with a substance use disorder in 2014; 2.6 million had issues with alcohol and drugs, 4.5 million with drugs but not alcohol, and 14.4 million had issues with alcohol only (SAMHSA, 2018). Similarly, in Canada, with a population of over 35 million people (Statistics Canada, 2018), a total of 6 million met the criteria for substance use disorders in 2013, with the highest rates among youth aged 18–24 (Statistics Canada, 2013). Concerns about addiction are particularly evident in widespread media alarm about the current fentanyl crisis affecting the U.S., Canada, Australia, and the U.K., and the climbing rates of fentanyl-related deaths globally (NIDA, 2017; UNDC, 2017).

Tuesday, July 31, 2018

The Missing Subject in Schizophrenia

By Anna K. Swartz

Image drawn by Anna Swartz
Since this is, in many ways, a post about narratives, I have decided I should begin with mine. 

Every morning I take an oblong green and white pill; every night I take another of the same oblong green and white pill. I also take circle and oval pills. This helps in keeping me tethered to reality, functioning with fewer hallucinations and delusions. My official diagnosis is schizoaffective, bipolar type 1. Schizoaffective disorder is closely allied to schizophrenia but is rarer, striking about 0.3 percent of the population. It’s also by many accounts “worse” in that it incorporates the severe depression and psychosis that is characteristic of bipolar disorder, as well as the loss of touch with reality wrought by schizophrenia. I find it easier to admit to being bipolar than to being schizophrenic. I have found a much more positive reception to bipolar disorder. It’s a disease often associated with creative individuals who are highly intelligent and have traits that many see as advantageous, even coveted. That is, there is something romantic about the disease even as it wreaks havoc in a person’s life. It’s also much easier to talk about depression and mania because the chances are overwhelming that during the span of a normal lifetime, we will come face-to-face with some manifestation of mania or depression, either in ourselves or someone close to us. It’s familiar and understandable. That is less the case when it comes to hallucinations and delusions. Everyone has an inner voice that they can talk to sometimes in their thoughts. But hearing voices is not like that. Auditory hallucinations sound like they are coming from outside your head. Have you ever tried to write or read while people are having a loud conversation around you? Now imagine them screaming at you. This is how I feel most days. The voices are almost always caustic and denigrating, telling me that I would be better off dead. Delusions are also hard to explain. With a head fizzing with mad thoughts, I’ve stared up at ceilings with blue and brown swirling irises like cars in the center of a volcano. More often, I will see objects sitting on surfaces and watch them tip over or fall out of the corner of my eye only to blink and have them be static. I also experience paranoid delusions, which commonly manifest as thoughts that others are plotting against me, following me, watching me, or talking about me.

Tuesday, July 24, 2018

Exploring the Risks of Digital Health Research: Towards a Pragmatic Framework

By Dr. John Torous

Image courtesy of Flickr user Integrated Change
We often hear much about the potential of digital health to revolutionize medicine and transform care – but less about the risks and harms associated with the same technology-based monitoring and care. “It’s a smartphone app … how much harm can it really cause?” is a common thought today, but also the starting point for a deeper conversation. That conversation is increasingly happening at Institutional Review Boards (IRBs) as they are faced with an expanding number of research protocols featuring digital- and smartphone-based technologies.

In our article, “Assessment of Risk Associated with Digital and Smartphone Health Research: a New Challenge for IRBs,” published in the Journal of Technology and Behavioral Science [1], we explore the evolving ethical challenges in evaluating digital health risk, and here expand on them. While risk and harm in our 21st century digital era are themselves evolving topics that change with both technology and societal norms, how do we quantify them to help IRBs make safe and ethical decisions regarding clinical research?

Tuesday, July 17, 2018

The interplay between social and scientific accounts of intergroup difference

By Cliodhna O’Connor

Image courtesy of Wikimedia Commons
The investigation of intergroup difference is a ubiquitous dimension of biological and behavioural research involving human subjects. Understanding almost any aspect of human variation involves the comparison of a group of people, who are defined by some common attribute, with a reference group which does not share that attribute. This is an inescapable corollary of applying the scientific method to study human minds, bodies and societies. However, this scientific practice can have unanticipated – and undesirable – social consequences. As my own research has shown in the contexts of psychiatric diagnosis (O’Connor, Kadianaki, Maunder, & McNicholas, in press), gender (O’Connor & Joffe, 2014) and sexual orientation (O’Connor, 2017), scientific accounts of intergroup differences can often function to reinforce long-established stereotypes, exaggerate the homogeneity of social groups, and impose overly sharp divisions between social categories.

Without disputing the scientific legitimacy of intergroup comparisons in research, it is important to acknowledge that the definitions and distinctions that determine which populations are compared are given by culture, not by nature. For one thing, there are relatively few discrete categories underlying human variability ‘in the wild:’ even for variables seen as the most obvious examples of natural kinds, such as sex, the boundaries between categories are much fuzzier than is typically acknowledged (Fausto-Sterling, 2000). The pragmatic demands of experimental design encourage scientists to carve the social world at joints that it may not naturally possess. Secondly, the choice of intergroup comparison is not value-neutral: the priorities of governments, industries, funding agencies, universities and individual scientists dictate which comparisons are deemed sufficiently interesting or important to investigate. Therefore, even within the scientific sphere, how questions are asked and answered is influenced by a priori understandings of social categories. These understandings are absorbed into all stages of the scientific process, from research design right through the collection, analysis and interpretation of data.

Tuesday, July 10, 2018

Solitary Confinement: Isolating the Neuroethical Dilemma

By Kristie Garza
 
Eastern State Penitentiary
Image courtesy of Wikimedia Commons
In 1842, Charles Dickens visited the Eastern State Penitentiary in Philadelphia to examine what was being called a revolutionary form of rehabilitation. After his visit, he summarized his observations in an essay, stating, “I am only the more convinced that there is a depth of terrible endurance in it which none but the sufferers themselves can fathom, and which no man has a right to inflict upon his fellow-creature. I hold this slow and daily tampering with the mysteries of the brain, to be immeasurably worse than any torture of the body” (1). Dickens’ words describe solitary confinement. While there is no one standard for solitary confinement conditions, it usually involves an individual being placed in complete sensory and social isolation for 23 hours a day. What Dickens observed in 1842 is not unlike current solitary confinement conditions.

Tuesday, July 3, 2018

Neuroethics: the importance of a conceptual approach

By Arleen Salles, Kathinka Evers, and Michele Farisco

Image courtesy of Wikimedia Commons.
What is neuroethics? While there is by now a considerable bibliography devoted to examining the philosophical, scientific, ethical, social, and regulatory issues raised by neuroscientific research and related technological applications (and a growing number of people in the world claim to take part in the neuroethical debate), less has been said about how to interpret the field that carries out such examination. And yet, this calls for discussion, particularly considering that the default understanding of neuroethics is one that sees the field as just another type of applied ethics, and, in particular, one dominated by a Western bioethical paradigm. The now-classic interpretation of neuroethics as the “neuroscience of ethics” and the “ethics of neuroscience” covers more ground, but still fails to exhaust the field (1).

As we have argued elsewhere, neuroethics is a complex field characterized by three main methodological approaches (2-4). “Neurobioethics” is a normative approach that applies ethical theory and reasoning to the ethical and social issues raised by neuroscience. This version of neuroethics, which generally mirrors bioethical methodology and goals, is predominant in healthcare, in regulatory contexts, and in the neuroscientific research setting.

Tuesday, June 26, 2018

Facial recognition, values, and the human brain

By Elisabeth Hildt

Image courtesy of Pixabay.
Research is not an isolated activity. It takes place in a social context, sometimes influenced by value assumptions and sometimes accompanied by social and ethical implications. A recent example of this complex interplay is an article, “Deep neural networks can detect sexual orientation from faces” by Yilun Wang and Michal Kosinski, accepted in 2017 for publication in the Journal of Personality and Social Psychology.

In this study on face recognition, the researchers used deep neural networks to classify the sexual orientations of persons depicted in facial images uploaded to a dating website. While the discriminatory power of the system was limited, the algorithm was reported to have achieved higher accuracy in this setting than human subjects did. The study can be seen in the context of the “prenatal hormone theory of sexual orientation,” which claims that gay men and women tend to have gender-atypical facial morphology.

Tuesday, June 19, 2018

Disrupting diagnosis: speech patterns, AI, and ethical issues of digital phenotyping

By Ryan Purcell, PhD

Jim Schwoebel, presenter at April’s The Future Now: NEEDs seminar
Diagnosing schizophrenia can be complex, time-consuming, and expensive. The April seminar of The Future Now: Neuroscience and Emerging Ethical Dilemmas (NEEDs) series at Emory focused on one innovative effort to improve this process in the flourishing field of digital phenotyping. Presenter and NeuroLex founder and CEO Jim Schwoebel had witnessed his brother struggle for several years with frequent headaches and anxiety, and saw him accrue nearly $15,000 in medical expenses before his first psychotic break. From there it took many more years and additional psychotic episodes before Jim’s brother began responding to medication and his condition stabilized. Unfortunately, this experience is not uncommon; a recent study found that the median period from the onset of psychotic symptoms until treatment is 74 weeks. Naturally, Schwoebel thought deeply about how this had happened and what clues might have been seen earlier. “I had been sensing that something was off about my brother’s speech, so after he was officially diagnosed, I looked more closely at his text messages before his psychotic break and saw noticeable abnormalities,” Schwoebel told Psychiatric News. For Schwoebel, a Georgia Tech alum and co-founder of the neuroscience startup accelerator NeuroLaunch, this was the spark of an idea. Looking into the academic literature, he found a 2015 study led by researchers from Columbia University who applied machine learning to speech from a sample of participants at high risk for psychosis. They found that the artificial intelligence correctly predicted which individuals would transition to psychosis over the next several years.
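One feature that work of this kind examines is how semantically coherent consecutive sentences are in a person’s speech. As a rough illustration of that idea, here is a short Python sketch; the Columbia study itself used latent semantic analysis over transcribed interviews, not the toy TF-IDF proxy below.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def semantic_coherence(sentences):
    """Mean cosine similarity between consecutive sentences; lower
    values flag more disjointed, 'derailed' speech."""
    vectors = TfidfVectorizer().fit_transform(sentences)
    sims = [cosine_similarity(vectors[i], vectors[i + 1])[0, 0]
            for i in range(vectors.shape[0] - 1)]
    return float(np.mean(sims))

transcript = [
    "I went to the store this morning.",
    "The store was out of coffee, so I bought tea instead.",
    "Tea always reminds me of visiting my grandmother.",
]
print(f"coherence: {semantic_coherence(transcript):.3f}")
```

In the published work, coherence scores like this, combined with syntactic features such as phrase length, fed a classifier that predicted later transition to psychosis.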

Tuesday, June 12, 2018

Ethical Concerns Surrounding Psychiatric Treatments: Do Academics Agree with the Public?

By Laura Y. Cabrera, Rachel McKenzie, Robyn Bluhm

Image courtesy of the U.S. Air Force Special Operations Command.
Treatments for psychiatric disorders raise unique ethical issues because they aim to change behaviors, beliefs, and affective responses that are central to an individual’s sense of who they are. For example, interventions for depression aim to change feelings of guilt and worthlessness (as well as depressed mood), while treatments for obsessive-compulsive disorder try to diminish both problematic obsessive beliefs and compulsive behaviors. In addition to the specific mental states that are the target of intervention, these treatments can also affect non-pathological values, beliefs, and affective responses. The bioethics and neuroethics communities have been discussing the ethical concerns that these changes pose for individual identity [1,2], personality [3,4], responsibility [5], autonomy [6,7], authenticity [8], and agency [9,10]. 

Tuesday, June 5, 2018

Participatory Neuroscience: Something to Strive For?

By Phoebe Friesen

Image courtesy of Pixabay.
In the last few decades, there has been an increasing push towards making science more participatory by engaging those who are part of or invested in the community that will be impacted by the research in the actual research process, from determining the questions that are worth asking, to contributing to experimental design, to communicating findings to the public. Some of this push stems from the recognition that research is always value-laden and that the values guiding science have long been those of an elite and unrepresentative few (Longino, 1990). This push also has roots in feminist standpoint theory, which recognizes the way in which marginalized individuals may have an epistemic advantage when it comes to identifying problematic assumptions within a relevant knowledge project (Wylie, 2003). Additionally, many have noted how including the voices of those likely to be impacted by research can support the process itself (e.g. by identifying meaningful outcome measures) (Dickert & Sugarman, 2005). As a result, participatory research is becoming widely recognized as having both ethical and epistemic advantages. The field of neuroscience, however, which takes the brain as its primary target of investigation, has been slow to take up such insights. Here, I outline five stages of participatory research and the uptake of neuroscientific research in each, discuss the challenges and benefits of engaging in such research, and suggest that the field has an obligation, particularly in some cases, to shift towards more participatory research.

Tuesday, May 29, 2018

Ethical Implications of fMRI In Utero

By Molly Ann Kluck

Image courtesy of Wikimedia Commons.
When my neuroethics mentor approached me with a publication from Trends in Cognitive Sciences called “Functional Connectivity of the Human Brain in Utero” (1) in hand, I was immediately delighted by the idea of performing an ethical analysis of the use of functional Magnetic Resonance Imaging (fMRI) on fetuses in utero. As of right now, I’m still conducting this ethical analysis.

Using fMRI to look at human brains as they develop in utero is groundbreaking for a couple of reasons. For one, there is a vast difference between the fMRI method currently used to investigate developing brains and previous methods used to examine fetal brain development. Research on developing brains has typically utilized preterm neonates, or babies born prematurely. While these data are valuable, there are validity issues associated with this method: early exposure to an environment that is abnormal for a fetal brain (e.g., being in the intensive care unit, where many preterm babies go after birth, or being in an MRI machine), incomplete exposure to the essential nutrients and protection offered by the womb, and the plasticity of the fetal brain can all cause preterm neonates to experience differences in brain development (2). An accurate map of the brain as it typically develops will not be truly accurate if it is produced solely using preterm neonates. However, surveying a developing brain while it is still in utero, as can be done with fMRI, is a different matter altogether. The chances of this research providing a more accurate picture of the developing brain increase due to the uninterrupted development of the fetus in utero.

Tuesday, May 22, 2018

Should you trust mental health apps?

By Stephen Schueller

Image courtesy of Pixabay.
If you were to search the Google Play or Apple iTunes store for an app to help support your mental health you’d find a bewildering range of options. This includes nearly 1000 apps focused on depression, nearly 600 focused on bipolar disorder, and 900 focused on suicide (Larsen, Nicholas, & Christensen, 2016). But how much faith should you have that these apps are actually helpful? Or to take an even more grim position, might some apps actually be harmful? Evidence suggests the latter might be true. In one study, researchers who examined the content in publicly available bipolar apps actually found one app, iBipolar, that instructed people to drink hard liquor during a bipolar episode to help them sleep (Nicholas, Larsen, Proudfoot, & Christensen, 2015). Thus, people should definitely approach app stores cautiously when searching for an app to promote their mental health.

Thursday, May 17, 2018

Presenting... The Neuroethics Blog Reader: Black Mirror Edition!

It is our pleasure to present you with The Neuroethics Blog Reader: Black Mirror Edition!


This reader features the seven contributions from the blog's Black Mirror series, in which six different student writers explored the technology and neuroethical considerations presented in various episodes of the British science fiction anthology television series.

As Dr. Karen Rommelfanger puts it: 

This reader "... features critical reflections on the intriguing, exciting and sometimes frightful imagined futures for neurotechnology. Every day, in real life, we move closer to unraveling the secrets of the brain and in so doing become closer to understanding how to intervene with the brain in ways previously unimaginable. Neuroscience findings and the accompanying neurotechnologies created from these findings promise to transform the landscape of every aspect of our lives. As neuroethicists, we facilitate discussions on the aspirations of neuroscience and what neuroscience discoveries will mean for society. Sometimes this means dismantling overhyped neuroscience and staving of possible dystopian futures, but ultimately neuroethics aims to make sure that the neuroscience of today and of the future advance human flourishing."

The Neuroethics Blog, now in its 7th year of weekly publications, runs thanks in large part to our amazing blog editorial team. A special thank you to: Sunidhi Ramesh (Volume Editor of the reader and outgoing Assistant Managing Editor), Carlie Hoffman (Managing Editor), Nathan Ahlgrim (incoming Assistant Managing Editor), Kristie Garza (Supporting Editor and blog contributor), and Jonah Queen (Supporting Editor and blog contributor). We would also like to thank the authors of the pieces featured in the reader; you can read more about them on the last page of the publication.

Want to read more? Check out a digital copy of the reader below.



Tuesday, May 15, 2018

Regulating Minds: A Conceptual Typology

By Michael N. Tennison 

Image courtesy of Wikimedia Commons.
Bioethicists and neuroethicists distinguish therapy from enhancement to differentiate the clusters of ethical issues that arise based on the way a drug or device is used. Taking a stimulant to treat a diagnosed condition, such as ADHD, raises different and perhaps fewer ethical issues than taking it to perform better on a test. Using a drug or device to enhance performance—whether in the workplace, the classroom, the football field, or the battlefield—grants the user a positional advantage over one’s competitors. Positional enhancement raises issues of fairness, equality, autonomy, safety, and authenticity in ways that do not arise in therapy; accordingly, distinguishing enhancement from therapy makes sense as a heuristic to flag these ethical issues. 

Tuesday, May 8, 2018

Trust in the Privacy Concerns of Brain Recordings

By Ian Stevens

Ian is a 4th year undergraduate student at Northern Arizona University. He is majoring in Biomedical Sciences with minors in Psychological Sciences and Philosophy to pursue interdisciplinary research on how medicine, neuroscience, and philosophy connect. 

Introduction

Brain recording technologies (BRTs), such as brain-computer interfaces (BCIs) that collect various types of brain signals from on and around the brain, could be creating privacy vulnerabilities in their users.1,2 These privacy concerns have been discussed in the marketplace as BCIs move from medical and research uses to novel consumer purposes.3,4 Privacy concerns are grounded in the fact that brain signals can currently be decoded to interpret mental states such as emotions,5 moral attitudes,6 and intentions.7 However, what can be interpreted from these brain signals in the future is ambiguous.

Tuesday, May 1, 2018

The Promise of Brain-Machine Interfaces: Recap of March's The Future Now: NEEDs Seminar

Image courtesy of Wikimedia Commons.
By Nathan Ahlgrim

If we want to – to paraphrase the classic Six Million Dollar Man – rebuild people, rebuild them to be better, stronger, faster, we need more than fancy motors and titanium bones. Robot muscles cannot help a paralyzed person stand, and robot voices cannot restore communication to the voiceless, without some way for the person to control them. Methods of control need not be cutting-edge. The late Dr. Stephen Hawking’s instantly recognizable voice synthesizer was controlled by a single cheek movement, which seems shockingly analog in today’s world. Brain-machine interfaces (BMIs) are the emerging technology that promise to bypass all external input and allow robotic devices to communicate directly with the brain. Dr. Chethan Pandarinath, assistant professor of biomedical engineering at Georgia Tech and Emory University, discussed the good and bad of this technology in March’s The Future Now NEEDs seminar: "To Be Implanted and Wireless". He shared his experience and perspective, agreeing that these invasive technologies hold incredible promise. Keeping that promise both realistic and equitable, though, is an ongoing challenge.

Tuesday, April 24, 2018

The Effects of Neuroscientific Framing on Legal Decision Making

By Corey H. Allen

Corey Allen is a graduate research fellow in the Georgia State University Neuroscience and Philosophy departments with a concentration in Neuroethics. He is a member of the Cooperation, Conflict, and Cognition Lab, and his research investigates (1) the ethical and legal implications of neuropredictive models of high-risk behavior, (2) the role of consciousness in attributions of moral agency, and (3) the impact of neurobiological explanations in legal and moral decision making.

More than ever, an extraordinary number of up-and-coming companies are jumping to attach the prefix “neuro” to their products. In many cases, this “neurobabble” is inadequate and irrelevant, serving only to take advantage of the public’s preconceptions about the term. This hasty neuroscientific framing doesn’t stop with marketing but instead creeps into public and legal discourse surrounding action and responsibility. This leads to the question: does the framing of an issue as “neuroscientific” change the perceptions of and reactions to that issue? This question, especially in the realm of legal decision making, is the focus of ongoing research by Eyal Aharoni, Jennifer Blumenthal-Barby, Gidon Felsen, Karina Vold, and myself, with the support of Duke University and the John Templeton Foundation. With backgrounds ranging across psychology, philosophy, neuroscience, and neuroethics, our team employs a multi-disciplinary approach to probe the effects of neuroscientific framing on public perceptions of legal evidence, as well as the ethical issues surrounding such effects.

Tuesday, April 17, 2018

The Fake News Effect in Biomedicine

By Robert T. Thibault

Robert Thibault is interested in expediting scientific discoveries through efficient research practices. Throughout his PhD in the Integrated Program in Neuroscience at McGill University, he has established himself as a leading critical voice in the field of neurofeedback and published on the topic in Lancet Psychiatry, Brain, American Psychologist, and NeuroImage among other journals. He is currently finalizing an edited volume with Dr. Amir Raz, tentatively entitled “Casting light on the Dark Side of Brain Imaging,” slated for release through Academic Press in early 2019. 

We all hate being deceived. That feeling when we realize the “health specialist” who took our money was nothing more than a smooth-talking quack. When that politician we voted for never really planned to implement their platform. Or when that caller who took our bank information turned out to be a fraud. 

These deceptions share a common theme—the deceiver is easy to identify and even easier to resent. Once we understand what happened and who to blame, we’re unlikely to be misled by such chicanery again. 

But what if the perpetrator is more difficult to identify? What if they are someone we have a particular affection for? Can we maintain the same objectivity? 

What if the deceiver is you?