Tuesday, October 16, 2018

What can neuroscience tell us about ethics?

By Adina L. Roskies

Image courtesy of Bill Sanderson, Wellcome Collection
What can neuroscience tell us about ethics? Some say nothing – ethics is a normative discipline that concerns the way the world should be, while neuroscience is normatively insignificant: it is a descriptive science which tells us about the way the world is. This seems in line with what is sometimes called “Hume’s Law”, the claim that one cannot derive an ought from an is (Cohon, 2018). This claim is contentious and its scope unclear, but it certainly does seem true of demonstrative arguments, at the least. Neuroethics, by its name, however, seems to suggest that neuroscience is relevant for ethical thought, and indeed some have taken it to be a fact that neuroscience has delivered ethical consequences. It seems to me that there is some confusion about this issue, and so here I’d like to clarify the ways in which I think neuroscience can be relevant to ethics.

Wednesday, October 10, 2018

Ethical Considerations for Emergent Neuroprosthetic Technology

By Emily Sanborn

Image courtesy of Wikimedia Commons
In the 21st century, there is a push towards producing neurotechnology that will make our lives easier. One category of these technologies is neuroprosthetics: devices that supplement or supplant the input or output of the nervous system to restore normal function (Leuthardt, Roland, and Ray, 2014). These emerging technologies present ethical issues and raise a question: are we fixing what is not broken? (Moses, 2016).

A recent article from Smithsonian magazine reported on a technology that may allow humans to develop a “sixth sense” (Keller, 2018). David Eagleman, an adjunct professor in Stanford University’s Department of Psychiatry and Behavioral Sciences, invented a sensory augmentation device called the Versatile Extra-Sensory Transducer (VEST), a vest covered with vibratory motors that is worn on the body. VEST works by receiving auditory signals from speech and the surrounding environment and translating those signals, via Bluetooth, into vibrations. The vibrations are transmitted to the vest in dynamic patterns that correlate with specific speech and auditory signals, allowing the user to feel the sonic world. In time, users may be able to use this new touch sensation to understand spoken words (Eagleman, 2015).
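The general idea of such a device, taking a sound's frequency content and spreading it across an array of motors, can be sketched in a few lines. The exact mapping VEST uses is not public, so the frame length, band count, and log scaling below are assumptions for illustration only:

```python
import numpy as np

def audio_to_motor_pattern(samples, n_motors=32):
    """Map one short audio frame onto drive levels for n_motors
    vibratory motors. The band count and log scaling here are
    assumptions; the real VEST mapping is not public."""
    # Frequency decomposition of the windowed frame.
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    # One frequency band per motor; take each band's summed energy.
    bands = np.array_split(spectrum, n_motors)
    energies = np.array([band.sum() for band in bands])
    # Compress dynamic range, then normalize to [0, 1] intensities.
    levels = np.log1p(energies)
    if levels.max() > 0:
        levels = levels / levels.max()
    return levels

# A pure 440 Hz tone concentrates its energy in a few bands.
t = np.linspace(0, 0.05, 400, endpoint=False)  # 50 ms at 8 kHz
pattern = audio_to_motor_pattern(np.sin(2 * np.pi * 440 * t))
```

A real device would run this loop continuously over overlapping frames, so that the wearer feels the vibration pattern shift as the soundscape changes.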

Tuesday, October 9, 2018

An injection of RNA may transfer memories?

By Gabriella Caceres

Figure 1. Image by Bédécarrats et al. 2018
Imagine a future in which you could tell your spouse about your day by simply transferring the memory to them, or one in which you could pass your memories on even after your death. These scenarios may seem far off, but steps are definitely being taken in that direction. To combat our natural memory inaccuracy and the decline that comes with old age or Alzheimer’s disease, which affects 1 out of every 10 people over 65 years old (WHO, 2017), scientists are beginning to investigate the biology of memory and the ways in which the process of making memories can be improved. A recent and controversial article published by Science News reported that RNA may be used to transfer memories from one sea slug to another. Bédécarrats et al. (2018) claimed that they were able to transfer memories between sea slugs (Aplysia californica) by first sensitizing donor slugs with shocks until they had a long-lasting withdrawal response to touch. The researchers then extracted RNA from the sensory neurons of the shocked slugs and injected it into the sensory neurons of non-sensitized sea slugs (Figure 1). The authors postulated that the sensitization occurred because the donor sea slug underwent epigenetic changes, in which a methyl group attaches to DNA and modulates gene expression (D’Urso et al. 2014). This process resulted in a transfer of sensitization (a form of implicit, or unconscious, memory) to the recipient slug, which showed the same long-lasting response to touch that the donor slug did.

Tuesday, October 2, 2018

How to be Opportunistic, Not Manipulative

By Nathan Ahlgrim

Opportunistic Research
Government data is often used to
answer key research questions.
Image courtesy of the U.S. Census Bureau

Opportunistic research has a long and prosperous history across the sciences. Research is classified as
opportunistic when researchers take advantage of a special situation. Quasi-experiments enabled by government programs, unique or isolated populations, and once-in-a-lifetime events can all trigger opportunistic research where no experiments were initially planned. Opportunistic research is not categorically problematic. If anything, it is categorically efficient. Many a study could not be ethically, financially, or logistically performed in the context of a randomized controlled trial.

Biomedical research is certainly not the only field that utilizes opportunistic research, but it does present additional ethical challenges. Indeed, many questions in social science research can only be ethically tested via opportunistic research, since funding agencies are wary of explicitly withholding resources from a ‘control’ population (Resch et al., 2014). We, as scientists, are indebted to patients who choose to donate their time and bodies to participate in scientific research while inside an inpatient ward; their volunteerism is the only way to perform some types of research.

Almost all information we have about human neurons comes from generous patients. For example, patients with treatment-resistant epilepsy can have tiny wires lowered into their brains, a technique known as intracranial microelectrode recording, enabling physicians to listen in on the neuronal chatter at a resolution normally restricted to animal models (Inman et al., 2017; Chiong et al., 2018). Seizures, caused by runaway excitation of the brain, are best detected by recording electrical signals throughout the brain. By having such fine spatial resolution inside a patient’s brain, surgeons can be incredibly precise in locating the site of the seizure and treating the patient. It’s what else those wires are used for that introduces thorny research ethics.

Wednesday, September 26, 2018

Caveats in Quantifying Consciousness

This piece belongs to a series of student posts written during the Neuroscience and Behavioral Biology Paris study abroad program taught by Dr. Karen Rommelfanger in June 2018.

By Ankita Moss

Image courtesy of Flickr user, Mike MacKenzie.
As I was listening to a presentation during the 2018 Neuroethics Network Conference in Paris, a particular phrase resonated with me: we must now contemplate the existence of “the minds of those that never lived.”

Dr. John Harris, a professor at the University of Manchester, discussed both the philosophical and practical considerations of emerging artificial intelligence technologies and their relationship to human notions of the theory of mind, or the ability to interpret the mental states of both oneself and others and use this to predict behavior.

Upon hearing this phrase and relating it to theory of mind, I immediately began to question my notions of “the self” and consciousness. To UC Berkeley philosopher Dr. Alva Noë, one manifests consciousness by building relationships with others and acting deliberately on the external environment in some capacity. Conversely, a group of Harvard scientists claim they have found the mechanistic origin of consciousness: a connection between the brainstem region responsible for arousal and regions of the brain that contribute to awareness.

Tuesday, September 25, 2018

Artificial Emotional Intelligence

This piece belongs to a series of student posts written during the Neuroscience and Behavioral Biology Paris study abroad program taught by Dr. Karen Rommelfanger in June 2018.

By Ruhee Patel

Image courtesy of Pexels user, Mohamed Hassan
In the race for more effective marketing strategies, an enormous step forward came with artificial emotional intelligence (emotion AI). Companies have developed software that can track someone’s emotions over a given period of time. Affectiva is a company that develops emotion AI for companies to facilitate more directed marketing for consumers. Media companies and product brands can use this information to show consumers more of what they want to see based on products that made them feel positive emotions in the past.

Emotion tracking is accomplished by recording slight changes in facial expression and movement. The technology relies on algorithms that can be trained to recognize features of specific expressions (1). Companies such as Unilever are already using Affectiva software for online focus groups to judge reactions to advertisements, and Hershey is partnering with Affectiva to develop an in-store device that prompts users to smile in exchange for a treat (2). Facial emotion recognition usually works through either machine learning or the geometric feature-based approach. The machine learning approach involves extracting facial features, selecting the most informative ones, and using them to train and apply a classification algorithm. In contrast, the geometric feature-based approach standardizes the images before facial component detection and the decision function. Some investigators have reached over 90% emotion recognition accuracy (3). Emotion AI can even measure heart rate by monitoring slight fluctuations in the color of a person’s face. Affectiva has developed software that works through web cameras in stores or, in the case of online shopping, in computers. Affectiva also created Affdex for Market Research, which provides companies with calculations based on the Affectiva database, so companies have points of comparison when making marketing decisions.
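As a toy illustration of the geometric feature-based idea, the sketch below reduces a set of facial landmarks to a few distances and classifies them with a nearest-centroid rule. Everything here is an assumption for illustration: the landmark indices, the synthetic training faces, and the classifier are not Affectiva's actual pipeline, which is proprietary.

```python
import numpy as np

rng = np.random.default_rng(0)

def geometric_features(landmarks):
    """Reduce five hypothetical (x, y) facial landmarks to a few
    distances, e.g. mouth width and eyebrow-to-eye spacing. The
    landmark indexing here is invented for this sketch."""
    mouth_w = np.linalg.norm(landmarks[3] - landmarks[4])
    brow_h  = np.linalg.norm(landmarks[0] - landmarks[1])
    eye_gap = np.linalg.norm(landmarks[1] - landmarks[2])
    return np.array([mouth_w, brow_h, eye_gap])

def sample_face(label):
    """Synthetic face: label 1 ('smiling') widens the mouth landmarks."""
    base = rng.normal(0, 0.02, (5, 2))
    base[3] = [-0.5 - 0.4 * label, 0]
    base[4] = [0.5 + 0.4 * label, 0]
    return base

# Training data: 50 neutral and 50 smiling synthetic faces.
X = np.array([geometric_features(sample_face(l)) for l in (0, 1) for _ in range(50)])
y = np.repeat([0, 1], 50)

# Nearest-centroid rule: the simplest possible "feature classification".
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(landmarks):
    f = geometric_features(landmarks)
    return int(np.argmin(np.linalg.norm(centroids - f, axis=1)))
```

The machine learning approach mentioned above follows the same extract-then-classify shape, but learns the features themselves from labeled images instead of hand-picking distances.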

Tuesday, September 18, 2018

NeuroTechX and Future Considerations for Neurotechnology

By Maria Marano

Image courtesy of Wikimedia Commons
As society has seen bursts of activity in the technology sector, we are continually discovering ways to harness these new advances. While some fields, such as artificial intelligence and machine learning, have already been massively exploited by industry, neurotechnology hasn’t fully broken into consumer markets1. Generally, neurotechnology refers to any technology associated with the brain. Consumer products that use brain activity to modulate behaviour, such as the Muse headband, do exist, but neurotech remains predominantly in the hands of researchers and the science community1. As neurotechnological advances begin to take centre stage and become a part of the 21st-century zeitgeist, the ethical implications of these technologies must be fully appreciated and addressed2. One area of concern is the fear that limited access to neurotech will create further discrepancies between regions with regard to quality of life.

Ultimately, developers expect neurotechnology to be utilized for clinical purposes1. Brain-computer interface products are currently used to enhance meditation3 and attention4, but the primary goal is to use neurotechnology for therapeutics5. Prominent present-day examples of neurotech in the healthcare industry include virtual reality therapies for stroke rehabilitation6, phobias7, and autism spectrum disorders8. Unfortunately, as more of these fields develop and prosper, the improvements to health and wellness will be restricted to those who can access neurotechnologies. Furthermore, as Elon Musk, Bryan Johnson, and others work towards “cognitive enhancement” devices, “enhanced” individuals could easily gain an advantage over the unenhanced9. As is so often the case, these advantages will likely be conferred first on those in developed nations and, more specifically, on wealthier individuals. This distribution has the potential to exacerbate existing socio-economic differences; therefore, it is essential that as a society we democratically monitor progress and dictate guidelines as the neurotechnology industry advances.

Wednesday, September 12, 2018

Ethical Implications of the Neurotechnology Touchpoints

This piece belongs to a series of student posts written during the Neuroscience and Behavioral Biology Paris study abroad program taught by Dr. Karen Rommelfanger in June 2018.

By Janet Guo

The TouchPoint Solution™ (commonly referred to as TouchPoints™) is a noninvasive neurotechnology device that one can wear on any part of the body. The device can be accessorized (a detachable wristband is included in each pack), so it can be worn like a watch or placed inside a pocket or sock. The founders of TouchPoints™, Dr. Amy Serin and entrepreneur Vicki Mayo, consider it a neuroscientific device because of the bilateral alternating stimulation tactile (BLAST) action it allows the user’s brain to undergo. Because the device can affect people in good health as well as those who suffer from a neurologic disease, it is classifiable as a neuroscientific device under the broad scientific definition proposed by Illes & Lombera (2009). The website even claims that the brain can “create new neural pathways that are net positive” and that the device has a “lasting effect on your brain”. In many TouchPoints™ advertisements (many of which can be found on the official TouchPoints™ YouTube channel), the devices are claimed to relieve stress by 70% in under 30 seconds.

TouchPoints™ was originally launched in late 2015 with the mission of bringing relief to people who have high levels of stress and anxiety. The technology has been through several developments, and newer, cheaper versions have been released since its initial launch. Its presence in news media has been increasing: Huffington Post (Wolfson, 2017), Mashable (Mashable staff, 2017), and The Washington Times (Szadkowski, 2017) are only a few of the popular news and opinion websites that have published pieces about TouchPoints™. An investigation of the science and ethics behind this device is warranted, as sales are increasing greatly with the company's expansion to the international level, an expansion that founder Dr. Amy Serin highlighted at the 2017 SharpBrains Virtual Summit: Brain Health & Enhancement in the Digital Age (SharpBrains, 2018).

Tuesday, September 11, 2018

The future of an AI artist

This piece belongs to a series of student posts written during the Neuroscience and Behavioral Biology Paris study abroad program taught by Dr. Karen Rommelfanger in June 2018.

By Coco Cao

An example of AI-generated art
Image courtesy of Flickr
An article published in New Scientist entitled “Artificially intelligent painters invent new styles of art” captured my attention. The article discussed a recent study by Elgammal et al. (2017), who developed a computational creative system for art generation, the Creative Adversarial Network (CAN), based on the Generative Adversarial Network (GAN), which can generate novel images simulating a given distribution. A GAN consists of two neural networks, a generator and a discriminator. To create CAN, the scientists trained the discriminator on 75,753 artworks from 25 art styles so that it learned to categorize artworks by style; it also learned to distinguish art from non-art pieces based on those learned styles. The discriminator’s feedback then corrects the generator, the network that produces art pieces, until the generator’s output is indistinguishable from human-produced art. While ensuring that each piece is still aesthetically pleasing, CAN generates abstract art that expresses creativity by maximizing deviation from established art styles.
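The two pressures described above, "look like art" and "fit no known style", can be sketched as a generator objective. This is an illustrative approximation, not the exact formulation or weighting used by Elgammal et al. (2017): the style-ambiguity term is written here as cross-entropy between the style classifier's output and the uniform distribution over the 25 styles.

```python
import numpy as np

N_STYLES = 25  # styles the discriminator was trained to recognize

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def can_generator_loss(art_logit, style_logits):
    """Sketch of a CAN-style generator objective (after Elgammal et
    al., 2017; details simplified). Term 1: the piece should look
    like art to the art/non-art discriminator. Term 2: the style
    classifier's output should be close to uniform, so the piece
    deviates from every established style."""
    # Adversarial term: -log D(x is art).
    p_art = 1.0 / (1.0 + np.exp(-art_logit))
    adversarial = -np.log(p_art + 1e-12)
    # Style-ambiguity term: cross-entropy with the uniform target.
    p_style = softmax(style_logits)
    uniform = np.full(N_STYLES, 1.0 / N_STYLES)
    style_ambiguity = -np.sum(uniform * np.log(p_style + 1e-12))
    return adversarial + style_ambiguity

# A piece confidently assigned to one known style is penalized more
# than one whose style probabilities are spread out.
peaked = can_generator_loss(2.0, np.array([5.0] + [0.0] * 24))
spread = can_generator_loss(2.0, np.array([0.0] * 25))
```

Minimizing this loss pushes the generator toward images the discriminator accepts as art but cannot file under any single style it knows.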

After learning about AI’s ability to be “creative” and generate art pieces, I was frightened. Unlike AI’s application in a scientific context, AI in an art context elicits human feelings. Is it possible that AI artists could replace human artists in the future? Considering the importance of the author’s creativity and originality in art, the critical ethical concern regards the individualism of AI artists. Can we consider the art pieces generated from AI as expressions of themselves? 

Tuesday, September 4, 2018

Organoids, Chimeras, Ex Vivo Brains – Oh My!

By Henry T. Greely

Image courtesy of Wikimedia Commons
At about the time of the birth of modern neuroethics, Adina Roskies usefully divided the field into two parts: the neuroscience of ethics, what neuroscience can tell us about ethics, and the ethics of neuroscience, what ethical issues neuroscience will bring us (1). At some point in my own work, I broke her second part into the ethics of neuroscience research and the ethical (and social and legal) implications of neuroscience for the non-research world. (I have no clue now whether that was original with me.)

The second part of Roskies’ division of neuroethics, the ethics of neuroscience research, has always had a special place in my heart because early work in it really helped mold the field we have today. In the early ‘00s, groups that mixed scientists, physicians, and ethicists, largely through the efforts of Judy Illes, explored what to do about abnormal brain scans taken from otherwise healthy volunteers. (See, e.g., 2, 3) It had become clear that, in the computer-generated imagery of a brain MRI, more than 20 percent of “the usual subjects” (college undergraduates, usually psychology majors) and about half of “mature” subjects had something “odd” in their brains. These oddities could range from variations of no clinical significance, such as “silent” blockages or benign tumors, to potentially very serious problems, such as malignant tumors or large “unpopped” aneurysms. Happily, only a small fraction of the oddities held clinical significance, but this still posed hard questions for researchers, many of whom were not themselves clinicians. What, if anything, should they tell, and to whom? And so, working together, scientists, clinicians, and ethicists talked with each other, learned from each other, and came up with useful answers, usually involving both changes to the consent process and a procedure for expert review of some worrisome scans.