Tuesday, September 25, 2018

Artificial Emotional Intelligence

This piece belongs to a series of student posts written during the Neuroscience and Behavioral Biology Paris study abroad program taught by Dr. Karen Rommelfanger in June 2018.

By Ruhee Patel

Image courtesy of Pexels user Mohamed Hassan
In the race for more effective marketing strategies, an enormous step forward came with artificial emotional intelligence (emotion AI). Companies have developed software that can track someone's emotions over a given period of time. Affectiva, for example, develops emotion AI that lets brands direct their marketing more precisely at consumers. Media companies and product brands can use this information to show consumers more of what they want to see, based on products that evoked positive emotions in the past.

Emotion tracking is accomplished by recording slight changes in facial expression and movement. The technology relies on algorithms that can be trained to recognize the features of specific expressions (1). Companies such as Unilever are already using Affectiva software in online focus groups to judge reactions to advertisements, and Hershey is partnering with Affectiva on an in-store device that offers users a treat in exchange for a smile (2). Facial emotion recognition usually works through either machine learning or a geometric feature-based approach. The machine learning approach involves extracting features from face images, selecting the most informative ones for training the algorithms, and then classifying new data. In contrast, the geometric feature-based approach standardizes the images before detecting facial components and applying a decision function. Some investigators have reached over 90% emotion recognition accuracy (3). Emotion AI can even measure heart rate by monitoring slight fluctuations in the color of a person's face. Affectiva has developed software that works through web cameras in stores or, in the case of online shopping, in personal computers. Affectiva also created Affdex for Market Research, which provides companies with calculations based on the Affectiva database, so companies have points of comparison when making marketing decisions.
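The machine-learning pipeline described above can be made concrete with a toy sketch. Everything below is illustrative: the landmark names, numbers, and emotion labels are hypothetical, and a real system such as Affectiva's would use far richer features and trained deep models rather than this minimal nearest-centroid classifier.

```python
# Illustrative sketch of the pipeline: feature extraction -> training -> classification.
# All landmark names, coordinates, and labels here are invented for demonstration.

import math

def extract_features(landmarks):
    """Feature extraction: turn (x, y) facial landmark points into a feature vector,
    e.g. mouth width and brow-to-eye height."""
    mouth_w = math.dist(landmarks["mouth_left"], landmarks["mouth_right"])
    brow_h = landmarks["brow"][1] - landmarks["eye"][1]
    return [mouth_w, brow_h]

def train(examples):
    """Training: compute one centroid feature vector per emotion label."""
    centroids = {}
    for label, vecs in examples.items():
        n = len(vecs)
        centroids[label] = [sum(v[i] for v in vecs) / n for i in range(len(vecs[0]))]
    return centroids

def classify(centroids, vec):
    """Classification: assign the label whose centroid is nearest to the vector."""
    return min(centroids, key=lambda lbl: math.dist(centroids[lbl], vec))

# Synthetic training data: wider mouths tend toward "happy".
examples = {
    "happy":   [[6.0, 1.0], [5.5, 1.2]],
    "neutral": [[3.0, 1.0], [3.2, 0.9]],
}
model = train(examples)
smile = extract_features({
    "mouth_left": (0, 0), "mouth_right": (5.8, 0),
    "brow": (2, 3.0), "eye": (2, 2.0),
})
print(classify(model, smile))  # → happy
```

A production system would swap each stage for a learned component (a face detector, a convolutional feature extractor, a trained classifier), but the flow from pixels to features to an emotion label is the same.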

Tuesday, September 18, 2018

NeuroTechX and Future Considerations for Neurotechnology

By Maria Marano

Image courtesy of Wikimedia Commons
As society has seen bursts of activity in the technology sector, we are continually discovering ways to harness these new advances. While some fields, such as artificial intelligence and machine learning, have already been massively exploited by industry, neurotechnology hasn't fully broken into consumer markets1. Generally, neurotechnology refers to any technology associated with the brain. Consumer products that use brain activity to modulate behaviour, such as the Muse headband, do exist, but neurotech remains predominantly in the hands of researchers and the science community1. As neurotechnological advances begin to take centre stage and become part of the 21st-century zeitgeist, the ethical implications of these technologies must be fully appreciated and addressed2. One area of concern is that limited access to neurotech will create further discrepancies between regions with regard to quality of life.

Ultimately, developers expect neurotechnology to be utilized for clinical purposes1. Brain-computer interface products are currently used to enhance meditation3 and attention4, but the primary goal is to use neurotechnology for therapeutics5. Prominent present-day examples of neurotech in the healthcare industry include virtual reality therapies for stroke rehabilitation6, phobias7, and autism spectrum disorders8. Unfortunately, as more of these fields develop and prosper, the improvements to health and wellness will be restricted to those who can access neurotechnologies. Furthermore, as Elon Musk, Bryan Johnson, and others work towards "cognitive enhancement" devices, "enhanced" individuals could easily gain an advantage over the unenhanced9. As is so often the case, these advantages will likely be conferred first on those in developed nations and, more specifically, on wealthier individuals. This distribution has the potential to exacerbate existing socio-economic differences; therefore, it is essential that as a society we democratically monitor progress and dictate guidelines as the neurotechnology industry advances.

Wednesday, September 12, 2018

Ethical Implications of the Neurotechnology Touchpoints

This piece belongs to a series of student posts written during the Neuroscience and Behavioral Biology Paris study abroad program taught by Dr. Karen Rommelfanger in June 2018.

By Janet Guo

The TouchPoint Solution™ (commonly referred to as TouchPoints™) is a noninvasive neurotechnology device that can be worn on any part of the body. The device can be accessorized (a detachable wristband is available in each pack), so it can be worn like a watch or placed inside a pocket or sock. The founders of TouchPoints™, Dr. Amy Serin and entrepreneur Vicki Mayo, consider it a neuroscientific device because of the bilateral alternating stimulation tactile (BLAST) action it allows the user's brain to undergo. Because the device can affect both people in good health and those who suffer from a neurologic disease, it is classifiable as a neuroscientific device by the broad scientific definition proposed by Illes & Lombera (2009). The website even claims that the brain can "create new neural pathways that are net positive" and that the device has a "lasting effect on your brain". In many TouchPoints™ advertisements (several of which can be found on the official TouchPoints™ YouTube channel), the devices are claimed to relieve stress by 70% in under 30 seconds.

TouchPoints™ was originally launched in late 2015 with the mission of bringing relief to people with high levels of stress and anxiety. The technology has been through several rounds of development, and newer, cheaper versions have been released since the initial launch. Its presence in news media has also been growing: Huffington Post (Wolfson, 2017), Mashable (Mashable staff, 2017), and The Washington Times (Szadkowski, 2017) are only a few of the popular news and opinion websites that have published pieces about TouchPoints™. An investigation of the science and ethics behind this device is warranted as sales climb with the company's expansion to the international level, an expansion highlighted by founder Dr. Amy Serin at the 2017 SharpBrains Virtual Summit: Brain Health & Enhancement in the Digital Age (SharpBrains, 2018).

Tuesday, September 11, 2018

The future of an AI artist

This piece belongs to a series of student posts written during the Neuroscience and Behavioral Biology Paris study abroad program taught by Dr. Karen Rommelfanger in June 2018.

By Coco Cao

An example of AI-generated art
Image courtesy of Flickr
An article published in New Scientist entitled "Artificially intelligent painters invent new styles of art" caught my attention. The article discussed a recent study by Elgammal et al. (2017), who developed a computational creative system for art generation, the Creative Adversarial Network (CAN), based on the Generative Adversarial Network (GAN), which can generate novel images simulating a given distribution. A GAN consists of two neural networks, a generator and a discriminator. To create the CAN, the researchers trained the discriminator on 75,753 artworks from 25 art styles so that it learned to categorize artworks by style and, on that basis, to distinguish art from non-art. The discriminator's feedback then corrects the generator, the network that produces the images, until the generator's output is indistinguishable from human-made art. While ensuring the pieces remain aesthetically pleasing, CAN generates abstract art that signals creativity by maximizing deviation from established art styles.
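The twist that separates CAN from a plain GAN can be made concrete. The sketch below is my own illustration of the "maximize deviation from established styles" idea, not the authors' code: one way to encode style ambiguity is as the cross-entropy between the discriminator's style posterior and a uniform distribution over the K learned styles, which is smallest exactly when no single style can be pinned down.

```python
# Toy illustration of a "style ambiguity" objective in the spirit of CAN
# (Elgammal et al., 2017). The probabilities below are invented examples.

import math

def style_ambiguity_loss(style_probs):
    """Cross-entropy between the discriminator's style posterior and the
    uniform distribution over K styles. The generator pushes this DOWN,
    and it is minimized (at log K) exactly when every style is equally
    likely, i.e. when the image fits no single established style."""
    k = len(style_probs)
    return -sum((1.0 / k) * math.log(p) for p in style_probs)

# An image the discriminator confidently assigns to one style is penalized
# more than a maximally style-ambiguous one:
confident = [0.97, 0.01, 0.01, 0.01]
ambiguous = [0.25, 0.25, 0.25, 0.25]
print(style_ambiguity_loss(confident) > style_ambiguity_loss(ambiguous))  # → True
```

In training, a term like this would be combined with the standard GAN art/non-art signal, so the generator is pulled toward images that read as art yet fit no single known style.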

After learning about AI's ability to be "creative" and generate art pieces, I was frightened. Unlike AI's application in a scientific context, AI in an art context elicits human feelings. Is it possible that AI artists could replace human artists in the future? Given the importance of an author's creativity and originality in art, the critical ethical concern regards the individuality of AI artists. Can we consider art pieces generated by AI to be expressions of the AI itself?

Tuesday, September 4, 2018

Organoids, Chimeras, Ex Vivo Brains – Oh My!

By Henry T. Greely

Image courtesy of Wikimedia Commons
At about the time of the birth of modern neuroethics, Adina Roskies usefully divided the field into two parts: the neuroscience of ethics, what neuroscience can tell us about ethics, and the ethics of neuroscience, what ethical issues neuroscience will bring us (1). At some point, in my own work, I broke her second point into the ethics of neuroscience research and the ethical (and social and legal) implications of neuroscience for the non-research world. (I have no clue now whether that was original with me.)

The second part of Roskies' division of neuroethics, the ethics of neuroscience research, has always had a special place in my heart because early work in it really helped mold the field we have today. In the early '00s, groups that mixed scientists, physicians, and ethicists, largely through the efforts of Judy Illes, explored what to do about abnormal brain scans taken from otherwise healthy volunteers. (See, e.g., 2, 3) It had become clear that, in the computer-generated imagery of a brain MRI, more than 20 percent of "the usual subjects" (college undergraduates, usually psychology majors) and about half of "mature" subjects had something "odd" in their brains. These oddities ranged from variations of no clinical significance, such as "silent" blockages or benign tumors, to potentially very serious problems, such as malignant tumors or large "unpopped" aneurysms. Happily, only small fractions of those oddities held clinical significance, but this still posed hard questions for researchers, many of whom were not themselves clinicians. What, if anything, should they tell, and to whom? And so, working together, scientists, clinicians, and ethicists talked with each other, learned from each other, and came up with useful answers, usually involving both changes to the consent process and a procedure for expert review of some worrisome scans.

Tuesday, August 28, 2018

Smart AI

By Jonathan D. Moreno

Image courtesy of Flickr
Experiments that could enhance rodent intelligence are closely watched, and long-term worries about super-intelligent machines are everywhere. But unlike with smart mice, no one is talking about industry standards for the almost daily steps toward computers that possess at least near-human intelligence. Why not?

Computers are far more likely to achieve human or near-human intelligence than lab mice, however remote the odds for either. The prospects for making rodents smarter with implanted human neurons have dimmed as the potential for a smart computer continues to grow. For example, a recent paper reported that systems of human neurons implanted in mice didn't make them smarter maze-runners. By contrast, in 2016 a computer program called AlphaGo showed it could defeat a professional human Go player. Such machine-learning algorithms continue to teach themselves new, human-like skills, like facial recognition, except of course that they are better at it than the typical human.

Tuesday, August 21, 2018

Worrisome Implications of Lack of Diversity in Silicon Valley

By Carolyn C. Meltzer, MD

Image courtesy of Wikimedia Commons
The term “artificial intelligence” (AI) was first used in 1955 by John McCarthy of Dartmouth College to describe complex information processing (McCarthy 1955). While the field has progressed slowly since that time, recent advancements in computational power, deep learning and neural network systems, and access to large datasets have set the stage for the rapid acceleration of AI.  While there is much painstaking work ahead before transformational uses of AI catch up with the hype (Kinsella 2017), substantial impact in nearly all aspects of human life is envisioned. 

AI is being integrated in fields as diverse as medicine, finance, journalism, transportation, and law enforcement. AI aims to mimic human cognitive processes, as imperfect as they may be. Our human tendencies to generalize common associations, avoid ambiguity, and identify more closely with those who are like ourselves may help us navigate our world efficiently, yet how they translate into our design of AI systems is as yet unclear. As is typically the case, technology is racing ahead of our ability to consider the societal and ethical consequences of its implementation (Horvitz 2017).

Tuesday, August 14, 2018

The Stem Cell Debate: Is it Over?

By Katherine Bassil

Image courtesy of Flickr
In 2006, Yamanaka revolutionized the use of stem cells in research by revealing that adult mature cells can be reprogrammed to their precursor pluripotent state (Takahashi & Yamanaka, 2006). A pluripotent stem cell is characterized by the ability to differentiate into each and every cell of our body (Gage, 2000). This discovery not only opened new doors to regenerative and personalized medicine (Chun, Byun, & Lee, 2011; Hirschi, Li, & Roy, 2014), but it also circumvented many of the controversies that accompanied the use of embryonic stem (ES) cells for research purposes. For instance, one of the objections raised by the public and scholars was that human life, at every stage of development, has dignity and as such requires rights and protections (Marwick, 2001). On this view, the use of biological material from embryos violates these rights, and the research findings gathered from this practice do not override basic human dignity. With a decline in the use of ES cells in research, the use of induced-pluripotent stem (iPS) cells opened avenues for developing both two- and three-dimensional (2D and 3D, respectively) cultures that model human tissues and organs for both fundamental and translational research (Huch & Koo, 2015). While developments in this field are still at an early phase, they are expected to grow significantly in the near future, thereby triggering a series of ethical questions of their own.

Tuesday, August 7, 2018

Is the concept of “will” useful in explaining addictive behaviour?

By Claudia Barned and Eric Racine

Image courtesy of Flickr
The effects of substance use and misuse have been key topics of discussion given their impact on healthcare costs, public safety, crime, and productivity (Gowing et al., 2015). The alarming global prevalence rates of substance use disorder and of subthreshold "issues" associated with alcohol and other drugs have also been a cause for concern. For example, in the United States, with a population of over 318 million people (Statista, 2018), 21.5 million people were classified with a substance use disorder in 2014: 2.6 million had issues with both alcohol and drugs, 4.5 million with drugs but not alcohol, and 14.4 million with alcohol only (SAMHSA, 2018). Similarly, in Canada, with a population of over 35 million people (Statistics Canada, 2018), a total of 6 million met the criteria for substance use disorders in 2013, with the highest rates among youth aged 18–24 (Statistics Canada, 2013). Concerns about addiction are particularly evident in widespread media alarm about the current fentanyl crisis affecting the U.S., Canada, Australia, and the U.K., and the climbing rates of fentanyl-related deaths globally (NIDA, 2017; UNDC, 2017).

Tuesday, July 31, 2018

The Missing Subject in Schizophrenia

By Anna K. Swartz

Image drawn by Anna Swartz
Since this is, in many ways, a post about narratives, I have decided I should begin with mine. 

Every morning I take an oblong green and white pill, and every night I take another of the same oblong green and white pill. I also take circle and oval pills. This helps keep me tethered to reality, functioning with fewer hallucinations and delusions. My official diagnosis is schizoaffective disorder, bipolar type 1. Schizoaffective disorder is closely allied to schizophrenia but is rarer, striking about 0.3 percent of the population. It's also by many accounts "worse" in that it incorporates the severe depression and psychosis characteristic of bipolar disorder, as well as the loss of touch with reality wrought by schizophrenia. I find it easier to admit to being bipolar than to being schizophrenic. I have found a much more positive reception to bipolar disorder. It's a disease often associated with creative individuals who are highly intelligent and have traits that many see as advantageous, even coveted. That is, there is something romantic about the disease even as it wreaks havoc in a person's life. It's also much easier to talk about depression and mania because the chances are overwhelming that during the span of a normal lifetime, we will come face-to-face with some manifestation of mania or depression, either in ourselves or in someone close to us. It's familiar and understandable. That is less the case when it comes to hallucinations and delusions. Everyone has an inner voice that they can talk to sometimes in their thoughts. But hearing voices is not like that. Auditory hallucinations sound like they are coming from outside your head. Have you ever tried to write or read while people are having a loud conversation around you? Now imagine them screaming at you. This is how I feel most days. The voices are almost always caustic and denigrating, telling me that I would be better off dead. Delusions are also hard to explain. With a head fizzing with mad thoughts, I've stared up at ceilings with blue and brown swirling irises like cars in the center of a volcano.
More often, I will see objects sitting on surfaces and watch them tip over or fall out of the corner of my eye only to blink and have them be static. I also experience paranoid delusions which are commonly manifested as thoughts that others are plotting against me, following me, watching me, or talking about me.