Wednesday, September 26, 2018

Caveats in Quantifying Consciousness

This piece belongs to a series of student posts written during the Neuroscience and Behavioral Biology Paris study abroad program taught by Dr. Karen Rommelfanger in June 2018.

By Ankita Moss

Image courtesy of Flickr user Mike MacKenzie.
As I was listening to a presentation during the 2018 Neuroethics Network Conference in Paris, a particular phrase resonated with me: we must now contemplate the existence of “the minds of those that never lived.”

Dr. John Harris, a professor at the University of Manchester, discussed both the philosophical and practical considerations of emerging artificial intelligence technologies and their relationship to theory of mind: the ability to interpret the mental states of oneself and others and to use those interpretations to predict behavior.

Upon hearing this phrase and relating it to theory of mind, I immediately began to question my notions of “the self” and consciousness. According to UC Berkeley philosopher Dr. Alva Noë, one manifests consciousness by building relationships with others and acting deliberately on the external environment. Conversely, a group of Harvard scientists claims to have found the mechanistic origin of consciousness: a connection between the brainstem region responsible for arousal and regions of the brain that contribute to awareness.

Tuesday, September 25, 2018

Artificial Emotional Intelligence

This piece belongs to a series of student posts written during the Neuroscience and Behavioral Biology Paris study abroad program taught by Dr. Karen Rommelfanger in June 2018.

By Ruhee Patel

Image courtesy of Pexels user Mohamed Hassan.
In the race for more effective marketing strategies, artificial emotional intelligence (emotion AI) represents an enormous step forward. Companies have developed software that can track a person’s emotions over a given period of time. Affectiva, for example, develops emotion AI that lets businesses direct their marketing more precisely at consumers. Media companies and product brands can use this information to show consumers more of what they want to see, based on products that evoked positive emotions in the past.

Emotion tracking is accomplished by recording slight changes in facial expression and movement. The technology relies on algorithms that can be trained to recognize the features of specific expressions (1). Companies such as Unilever are already using Affectiva software in online focus groups to judge reactions to advertisements, and Hershey is partnering with Affectiva to develop an in-store device that invites shoppers to smile in exchange for a treat (2).

Facial emotion recognition usually works through either machine learning or a geometric feature-based approach. The machine-learning approach extracts features from the face, selects the most informative ones to train the algorithm, and then classifies the result into an emotion category. In contrast, the geometric feature-based approach standardizes the images first, then detects facial components and applies a decision function. Some investigators have reached over 90% emotion recognition accuracy (3). Emotion AI can even measure heart rate by monitoring slight fluctuations in the color of a person’s face. Affectiva has developed software that works through web cameras in stores or, in the case of online shopping, in personal computers. Affectiva also created Affdex for Market Research, which benchmarks results against the Affectiva database so that companies have points of comparison when making marketing decisions.
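To make the machine-learning pipeline described above concrete, here is a minimal sketch in Python using scikit-learn. Everything in it is a stand-in: the random vectors play the role of extracted facial features, and the four-emotion label set is hypothetical. It illustrates the general feature-selection-plus-classification shape of such systems, not Affectiva’s actual software.

```python
# Minimal sketch of a facial-emotion classification pipeline.
# NOTE: the features and labels below are randomly generated stand-ins;
# a real system would extract features (e.g., landmark distances,
# texture descriptors) from detected faces in labeled video data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

EMOTIONS = ["happy", "sad", "surprised", "neutral"]  # hypothetical label set

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))                # 500 "faces" x 64 features
y = rng.integers(len(EMOTIONS), size=500)     # stand-in emotion labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipeline = Pipeline([
    ("scale", StandardScaler()),                 # normalize feature ranges
    ("select", SelectKBest(f_classif, k=16)),    # feature selection step
    ("classify", SVC(kernel="rbf")),             # features -> emotion label
])
pipeline.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, pipeline.predict(X_test)))
```

With random inputs the accuracy hovers near chance, of course; the point is only the structure: extraction, selection, and classification as distinct, trainable stages.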

Tuesday, September 18, 2018

NeuroTechX and Future Considerations for Neurotechnology

By Maria Marano

Image courtesy of Wikimedia Commons
As society has seen bursts of activity in the technology sector, we are continually discovering ways to harness these new advances. While some fields, such as artificial intelligence and machine learning, have already been massively exploited by industry, neurotechnology hasn’t fully broken into consumer markets (1). Generally, neurotechnology refers to any technology associated with the brain. Consumer products that use brain activity to modulate behaviour, such as the Muse headband, do exist, but neurotech remains predominantly in the hands of researchers and the science community (1). As neurotechnological advances begin to take centre stage and become part of the 21st-century zeitgeist, the ethical implications of these technologies must be fully appreciated and addressed (2). One area of concern is the fear that limited access to neurotech will deepen quality-of-life disparities between regions.

Ultimately, developers expect neurotechnology to be utilized for clinical purposes (1). Brain-computer interface products are currently used to enhance meditation (3) and attention (4), but the primary goal is to use neurotechnology for therapeutics (5). Prominent present-day examples of neurotech in the healthcare industry include virtual reality therapies for stroke rehabilitation (6), phobias (7), and autism spectrum disorders (8). Unfortunately, as these fields develop and prosper, the improvements to health and wellness will be restricted to those who can access neurotechnologies. Furthermore, as Elon Musk, Bryan Johnson, and others work towards “cognitive enhancement” devices, “enhanced” individuals could easily gain an advantage over the unenhanced (9). As is so often the case, these advantages will likely be conferred first on those in developed nations and, more specifically, on wealthier individuals. This distribution has the potential to exacerbate existing socio-economic differences; therefore, it is essential that as a society we democratically monitor progress and set guidelines as the neurotechnology industry advances.

Wednesday, September 12, 2018

Ethical Implications of the Neurotechnology TouchPoints™

This piece belongs to a series of student posts written during the Neuroscience and Behavioral Biology Paris study abroad program taught by Dr. Karen Rommelfanger in June 2018.

By Janet Guo

The TouchPoint Solution™ (commonly referred to as TouchPoints™) is a noninvasive neurotechnology device that can be worn on any part of the body. The device can be accessorized (a detachable wristband comes in each pack), so it can be worn like a watch or placed inside a pocket or sock. The founders of TouchPoints™, Dr. Amy Serin and entrepreneur Vicki Mayo, consider it a neuroscientific device because of the bilateral alternating stimulation tactile (BLAST) it delivers to the user’s brain. Because the device can affect people in good health as well as those who suffer from a neurologic disease, it is classifiable as a neuroscientific device under the broad scientific definition proposed by Illes & Lombera (2009). The website even claims that the brain can “create new neural pathways that are net positive” and that the device has a “lasting effect on your brain.” In many TouchPoints™ advertisements (many of which can be found on the official TouchPoints™ YouTube channel), the devices are claimed to relieve stress by 70% in under 30 seconds.

TouchPoints™ was originally launched in late 2015 with the mission of bringing relief to people with high levels of stress and anxiety. The technology has been through several developments, and newer, cheaper versions have been released since the initial launch. Its presence in the news media has been increasing: Huffington Post (Wolfson, 2017), Mashable (Mashable staff, 2017), and The Washington Times (Szadkowski, 2017) are only a few of the popular news and opinion websites that have published pieces about TouchPoints™. An investigation of the science and ethics behind this device is warranted, as sales are rising sharply with the company’s expansion to the international level. This expansion was highlighted by founder Dr. Amy Serin at the 2017 SharpBrains Virtual Summit: Brain Health & Enhancement in the Digital Age (SharpBrains, 2018).

Tuesday, September 11, 2018

The Future of an AI Artist

This piece belongs to a series of student posts written during the Neuroscience and Behavioral Biology Paris study abroad program taught by Dr. Karen Rommelfanger in June 2018.

By Coco Cao

An example of AI-generated art
Image courtesy of Flickr
An article published in New Scientist titled “Artificially intelligent painters invent new styles of art” caught my attention. The article discussed a recent study by Elgammal et al. (2017), who developed a computational creative system for art generation, the Creative Adversarial Network (CAN), based on the Generative Adversarial Network (GAN), a model able to generate novel images simulating a given distribution. A GAN consists of two neural networks, a generator and a discriminator. To create CAN, the scientists trained the discriminator on 75,753 artworks from 25 art styles so that it learned to categorize artworks by style; it also learned to distinguish art from non-art based on those learned styles. The discriminator then corrects the generator, the network that produces the art, and the generator eventually learns to produce pieces that are indistinguishable from human-produced art. While ensuring each piece is still aesthetically pleasing, CAN generates abstract art whose creativity comes from maximizing deviation from established art styles.
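For readers curious about how “maximizing deviation from established styles” can be written down, here is a compact, hypothetical sketch in Python with PyTorch. It is not the authors’ code: the networks are toy multilayer perceptrons, the data are random tensors, and the four-style label set stands in for the paper’s 25 styles. It shows only how a CAN-style generator loss combines a standard “look like art” term with a style-ambiguity term.

```python
# Toy sketch of the CAN generator objective, under the assumptions above.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_STYLES, Z_DIM, IMG_DIM = 4, 16, 64  # toy sizes; the paper used 25 styles

# Generator: maps random noise to a flattened "image".
G = nn.Sequential(nn.Linear(Z_DIM, 128), nn.ReLU(), nn.Linear(128, IMG_DIM))

class Discriminator(nn.Module):
    """Shared body with two heads: art-vs-non-art, and style classification."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(IMG_DIM, 128), nn.ReLU())
        self.real_head = nn.Linear(128, 1)          # art vs. non-art logit
        self.style_head = nn.Linear(128, N_STYLES)  # style-class logits
    def forward(self, x):
        h = self.body(x)
        return self.real_head(h), self.style_head(h)

disc = Discriminator()
z = torch.randn(32, Z_DIM)
fake = G(z)
real_logit, style_logits = disc(fake)

# Term 1: standard GAN term -- the generated piece should look like art.
adv_loss = F.binary_cross_entropy_with_logits(
    real_logit, torch.ones_like(real_logit))

# Term 2: style ambiguity -- push the style head's predicted distribution
# toward uniform (cross-entropy against a uniform target is minimized
# when the prediction is uniform, i.e., no recognizable style).
uniform = torch.full((32, N_STYLES), 1.0 / N_STYLES)
ambiguity_loss = -(uniform * F.log_softmax(style_logits, dim=1)).sum(1).mean()

g_loss = adv_loss + ambiguity_loss
g_loss.backward()  # in training, only the generator is updated on this loss
```

The key design choice is the second term: by rewarding predictions the style classifier cannot pin down, the generator is steered toward work that is recognizably art yet fits none of the learned styles.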

After learning about AI’s ability to be “creative” and generate art, I was frightened. Unlike AI applied in a scientific context, AI in an artistic context elicits human feelings. Is it possible that AI artists could replace human artists in the future? Given the importance of an author’s creativity and originality in art, the critical ethical concern is the individuality of AI artists: can we consider the art pieces an AI generates to be expressions of itself?

Tuesday, September 4, 2018

Organoids, Chimeras, Ex Vivo Brains – Oh My!

By Henry T. Greely

Image courtesy of Wikimedia Commons
At about the time of the birth of modern neuroethics, Adina Roskies usefully divided the field into two parts: the neuroscience of ethics, what neuroscience can tell us about ethics, and the ethics of neuroscience, what ethical issues neuroscience will bring us (1). At some point in my own work, I broke her second part into the ethics of neuroscience research and the ethical (and social and legal) implications of neuroscience for the non-research world. (I have no clue now whether that was original with me.)

The second part of Roskies’ division of neuroethics, the ethics of neuroscience research, has always had a special place in my heart because early work in it really helped mold the field we have today. In the early ‘00s, groups that mixed scientists, physicians, and ethicists, largely through the efforts of Judy Illes, explored what to do about abnormal brain scans taken from otherwise healthy volunteers. (See, e.g., 2, 3) It had become clear that, in the computer-generated imagery of a brain MRI, more than 20 percent of “the usual subjects” (college undergraduates, usually psychology majors) and about half of “mature” subjects had something “odd” in their brains. These could be variations of no clinical significance, such as “silent” blockages or benign tumor to potentially very serious problems, such as malignant tumors or large “unpopped” aneurysms. Happily, only small fractions of those oddities held clinical significance, but this still posed hard questions for researchers, many of whom were not themselves clinicians. What, if anything, should they tell, and to whom? And so, working together, scientists, clinicians, and ethicists talked with each other, learned from each other, and came up with useful answers, usually involving both changes to the consent process and a procedure for expert review of some worrisome scans.