Tuesday, November 29, 2016

“American Horror Story” in Real Life: Understanding Racialized Views of Mental Illness and Stigma

By Sunidhi Ramesh

Image courtesy of Wikipedia. The sign in the image reads: "Deport all Iranians. Get the hell out of my country."

Racial and ethnic discrimination have taken various forms in the United States since its formation as a nation. From 245 years of slavery to indirect racism in police sanctioning and force, minority belittlement has remained rampant in American society (1). There is no doubt that this history has left minorities in the United States with a differential understanding of what it means to be American and, more importantly, what it means to be an individual in a larger humankind.

Generally, our day-to-day experiences shape the values, beliefs, and attitudes that allow us to navigate the real world (2). For minorities, then, consistent exposure to experiences of belittlement and discrimination can shape subjective perceptions that, in turn, mold larger perspectives and worldviews.

Last spring, I conducted a project for a class to examine how white and non-white, or persons of color (POC), students received (3) part of an episode from American Horror Story: Freak Show. The video I asked them to watch portrays a mentally incapacitated woman, Pepper, who is wrongfully framed for the murder of her sister’s child. The character’s blatant scapegoating is shocking not only for the lack of humanity it portrays but also for the reality it reflects: living as a human being in society while not being viewed as human.

Although the episode is somewhat of an exaggeration, the opinions of the interview respondents in my project ultimately suggested a racial basis for perceiving Pepper's mental disabilities, one that may indeed be deeply rooted in the racial history of the United States.

Tuesday, November 22, 2016

Debating the Replication Crisis – Why Neuroethics Needs to Pay Attention

By Ben Wills

Ben Wills studied Cognitive Science at Vassar College, where his thesis examined cognitive neuroscience research on the self. He is currently a legal assistant at a Portland, Oregon law firm, where he continues to hone his interests at the intersections of brain, law, and society.

In 2010 Dana Carney, Amy Cuddy, and Andy Yap published a study reporting that assuming an expansive posture, or “power pose,” leads to increased testosterone levels, task performance, and self-confidence. The popular media and public swooned at the idea that something as simple as standing like Wonder Woman could boost performance and confidence. A 2012 TED talk that co-author Amy Cuddy gave on her research has become the site’s second-most watched video, with over 37 million views. Over the past year and change, however, the power pose effect has gradually fallen out of favor in experimental psychology. A 2015 large-scale replication of power pose studies by Ranehill et al. concluded that power posing affects only self-reported feelings of power, not hormone levels or performance. This past September, reflecting mounting evidence that power pose effects are overblown, co-author Dana Carney denounced the construct, stating, “I do not believe that ‘power pose’ effects are real.”

What happened?

Tuesday, November 15, 2016

The 2016 Kavli Futures Symposium: Ethical Foundations of Novel Neurotechnologies: Identity, Agency, and Normality

By Sean Batir (1), Rafael Yuste (1), Sara Goering (2), and Laura Specker Sullivan (2)

Image from Kavli Futures Symposium
(1) Neurotechnology Center, Kavli Institute of Brain Science, Department of Biological Sciences, Columbia University, New York, NY 10027

(2) Department of Philosophy, and Center for Sensorimotor Neural Engineering, University of Washington, Seattle, WA 98195

Detailed biographies for each author are located at the end of this post

Few would deny the divide, often described as the “two cultures,” between the humanities and the sciences. This divide must be broken down if humanistic progress is to be made in the future of transformative technologies. The 2016 Kavli Futures Symposium, held by Dr. Rafael Yuste and Dr. Sara Goering at the Neurotechnology Center of Columbia University, addressed this divide by curating an interdisciplinary dialogue between leading neuroscientists, neural engineers, and bioethicists across three broad topics: identity and mind reading, agency and brain stimulation, and definitions of normality in the context of brain enhancement. The message of the event is clear: dialogue between neurotechnology and ethics is necessary because novel neurotechnologies are poised to generate a profound transformation in our society.

Tuesday, November 8, 2016

On the ethics of machine learning applications in clinical neuroscience

By Philipp Kellmeyer

Dr. med. Philipp Kellmeyer, M.D., M.Phil. (Cantab) is a board-certified neurologist working as a postdoctoral researcher in the Intracranial EEG and Brain Imaging group at the University of Freiburg Medical Center, Germany. His current projects include the preparation of a clinical trial for using a wireless brain-computer interface to restore communication in severely paralyzed patients. In neuroethics, he works on ethical issues of emerging neurotechnologies. He is a member of the Rapid Action Task Force of the International Neuroethics Society and the Advisory Committee of the Neuroethics Network.

What is machine learning, you ask? 
As a brief working definition up front: machine learning refers to software that can learn from experience and is thus particularly good at extracting knowledge from data and generating predictions [1]. Recently, one particularly powerful variant called deep learning has become the staple of much of the recent progress (and hype) in applied machine learning. Deep learning uses biologically inspired artificial neural networks with many processing stages (hence the word "deep"). These deep networks, together with ever-growing computing power and larger datasets for learning, now deliver groundbreaking performance on many tasks. For example, Google’s AlphaGo program, which comprehensively beat a Go champion in January 2016, uses deep learning algorithms for reinforcement learning (analyzing 30 million Go moves and playing against itself). Despite these spectacular (and media-friendly) successes, however, the interaction between humans and algorithms may also go badly awry.
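To make the working definition concrete, the core idea of "learning from experience" can be sketched with a single artificial neuron (a perceptron), the basic building block that deep networks stack into many layers. This is an illustrative toy, not the architecture of AlphaGo or any clinical system discussed here; the function names and the tiny logical-OR dataset are invented for the example.

```python
# Minimal sketch of learning from labeled examples: a single perceptron
# adjusts its weights whenever it predicts wrongly, then generalizes its
# learned rule to make predictions. Deep learning stacks many such units.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights w and bias b from (input, label) experience."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred          # zero when the prediction is correct
            w[0] += lr * err * x[0]  # nudge weights toward the right answer
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# The "experience": the four input/output pairs of logical OR.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 1, 1, 1]
w, b = train_perceptron(X, y)
print([predict(w, b, x) for x in X])  # → [0, 1, 1, 1]
```

The point of the sketch is that nothing about OR is programmed in: the rule is extracted from the data, which is exactly why such systems inherit whatever biases or gaps their training data contain.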

Tuesday, November 1, 2016

A Good Death: Towards Alternative Dementia Personhoods

By Melissa Liu

Melissa is a Medical Anthropology PhD student at the University of Washington, Seattle. Her nascent research centers on the intersection of neuroscience, dementia, and design. Melissa is also a Neuroethics Fellow with the Center for Sensorimotor Neural Engineering, an NSF ERC.

Something is amiss. Why is there a neighborhood of houses within this assisted living facility? Why do all the houses in the neighborhood have the same 1950s design? Am I standing on carpet? It looks like a garden path. The ceiling feels like a sunset in real time. [1] Where am I? When is this?

The questions above are inspired by Lantern, one of several memory care facilities in Ohio based on a patent-pending memory care program created by Jean Makesh, where rehabilitation is the goal [2] [3]. However, many more models around the world are based on Reminiscence therapy, a type of therapy which technically has “[no] single definition” but generally “[involves] the recalling of early life events and interaction between individuals” [4]. Research shows that “Reminiscence therapy is used extensively in dementia care and evidence shows when used effectively it helps individuals retain a sense of self-worth, identity and individuality” [4].