Tuesday, October 28, 2014

What is uniquely human? A report from The Social Brain Conference

Photo credit: Anders Gade
By James Burkett

James Burkett is a 2014 recipient of the Emory Center for Ethics Neuroethics Travel Award. He is a graduate student in Emory's Neuroscience program, conducting research on social attachment and empathy in Dr. Larry Young's lab.


From October 5th through the 8th, I had the pleasure of attending the Federation of European Neuroscience Societies’ (FENS) biennial Brain Conference, held in Copenhagen, Denmark. FENS represents 42 neuroscience societies across 32 countries and is the primary organization for neuroscience in Europe. The conference, titled “The Social Brain,” focused on how the brain produces and is affected by social behaviors in humans and in animals. Chaired by eminent scientists Sarah-Jayne Blakemore (Director of University College London’s Institute of Cognitive Neuroscience), Frans de Waal (world-famous primatologist at Emory University), and Giacomo Rizzolatti (discoverer of mirror neurons at the University of Parma, Italy), the conference brought together a wide array of human and animal researchers at the top of their fields. Throughout the conference, this two-sided grouping was repeatedly drawn to the same question: what is it that makes humans different from animals? What is uniquely human? As with a sculpture, the conference seemed to answer this question by chipping away at the monolith of things commonly thought of as unique to the human species.

For a long time, humans were thought to be unique for their tool use [1,2]. However, many surprising examples of tool use have now been observed in animals. Chimpanzees are known to fashion weapons for use in hunting and to use tools for nut cracking and termite retrieval, and they will sometimes carry favorite tools over great distances [1]. Even this behavior is not unique to apes, however: New Caledonian crows also craft and use tools for grub retrieval, and they even have local tool-making traditions that they pass on to the next generation [2]. There are now many internet videos showing crows solving extremely complex tasks with available tools.

Several speakers showed that the human species is not unique in its ability to cooperate and to understand cooperative relationships [1,3,4]. Chimpanzees, for instance, are perfectly capable of learning cooperative tasks without training, and they even spontaneously develop individual styles, preferred partners, reputations, and feedback between partners on their choices [1]. They may do this through the use of specialized “mirror neurons,” which are present in motor planning and emotional areas of the brain and fire both when an action or emotion is being experienced and when it is being observed in others [3,4]. These mirror neurons were first discovered in rhesus macaques, but have since been found in humans and chimpanzees. Elephants readily learn cooperative tasks as well, even waiting for a partner to arrive when a task is presented that cannot be performed successfully alone [1]. A more striking example, from species far more distant from humans, was inter-species cooperative hunting between groupers and moray eels, in which groupers show signs of shared intentionality and referential gesturing in order to get moray eels to help them catch fish [5]. Tiny five-gram cleaner wrasses, which have more than 2,000 inter-species social interactions a day while cleaning parasites off of other fish, show signs of cooperative strategies, individual recognition, social prestige, audience effects, tactical deception, and reconciliation.

Tuesday, October 21, 2014

Burden of proof: does neuroscience have the upper hand?

As an undergraduate, I took several introductory-level philosophy classes while majoring in neuroscience. Some of it I could appreciate, and most of it went over my head, but a thought that kept nagging me was, “haven’t neuroscientists solved all of these issues by now?” It was only after I had worked in neuroscience laboratories for a few years that I began to realize just how qualified all of our statements had to be, given the plethora of limitations that accompany any result. I began to wince anytime I heard someone use the word “proof” (only salesmen use the term “clinically proven,” but don’t get me started on that…). It seems clear to me now that, for the most part, natural scientists, social scientists, and humanities scholars are all working toward the same goal, just in different, albeit complementary, ways. At the first “Neuroscience, ethics and the news” journal club of the semester, Lindsey Grubbs, a PhD student in Emory University’s English Department, facilitated our discussion of a topic she has previously written about for this site. The main focus was on what role neuroscience can and should play in answering questions that have long been in the realm of the humanities, and how these results should be communicated to the general public.

From the Daily Mail Online

Tuesday, October 14, 2014

Ambivalence in the Cognitive Enhancement Debate

By Neil Levy, PhD

Neil Levy is the Deputy Director of the Oxford Centre for Neuroethics, Head of Neuroethics at Florey Neuroscience Institutes, University of Melbourne, and a member of the AJOB Neuroscience Editorial Board. His research examines moral responsibility and free will.

The most hotly debated topic in neuroethics surely concerns the ethics of cognitive enhancement. Is it permissible, or advisable, for human beings already functioning within the normal range to further enhance their capacities? Some people see in self-enhancement the exciting prospect of becoming more than human; others see it as threatening our humanity, so that we become something less than we were.

In an insightful article, Erik Parens (2005) has argued that, in truth, we are all on both sides of this debate. We are at once attracted and repulsed by the prospect that we might become something more than we already are. We are deeply attached both to a gratitude framework and to a more Promethean framework. Parens thinks both frameworks are deeply rooted in Western culture and history; perhaps they are universal themes. Hence we find ourselves torn with regard to self-transformation.

When someone feels torn in this kind of way about how they should think about or respond to something, they are ambivalent. Parens thinks that ambivalence is in fact the right response to cognitive enhancement: we ought to recognize that we are torn in both directions and acknowledge and respect this fact. We should not seek to resolve the ambivalence; we ought to embrace it. While I think that Parens highlights something of great importance when he argues that we are torn, I think he is wrong that we ought to attempt to respect both frameworks.

Tuesday, October 7, 2014

Can neuroscience discuss religion?

In a previous post, Kim Lang presented the views of several prominent neuroscientists and neurologists on spirituality and religion. Knowing that atheism is prevalent in the scientific community, she wondered how some neuroscientists are nevertheless able to integrate their religious and scientific beliefs. One of the neuroscientists whose standpoint she surveyed was Michael Graziano, a Professor of Neuroscience at the Princeton University Neuroscience Institute. Dr. Graziano believes that current research on the neurological basis of consciousness proves not only that spirituality is a natural tendency of humans, but also that its foundations are visible in the very structure of the brain [1].

Several questions arise from Dr. Graziano’s statement, and I will try to shed some light on each.

To start with, is neurotheology actually studying spirituality, religion, or both? What is the difference between the two? The conceptual boundary between the two terms is blurry. In an interview for Big Think, American Buddhist writer and academic Robert Thurman says that spirituality is “love and compassion,” is “going into a deeper area of your mind where you are asserting your free will,” where “you let go of your self-protective and defensive controls, and what you tap into is the nature of the universe, the flow of energy interconnecting things.” In contrast, Thurman believes religion is built upon spirituality but has taken on a secondary role as a tool of social and state organizations. Rituals and rules specific to each religion end up regulating access to the spiritual and become, in Thurman’s words, a control rather than a regulating mechanism. Neuroscientist and philosopher Sam Harris, who recently authored a book called Waking Up: A Guide to Spirituality Without Religion, seems to hold similar views on the issue. He explains:

"Although the claim seems to annoy believers and atheists equally, separating religion from spirituality is a perfectly reasonable thing to do. It is to assert two important truths simultaneously: Our world is dangerously riven by religious doctrines that all educated people should condemn, and yet there is more to understanding the human condition than science and secular culture generally admit."

Thursday, October 2, 2014

Neuroethics and the Emory Tibet Science Initiative

I recently interviewed fellow Neuroethics Blog contributors Dr. Julia Haas and Dr. Gillian Hue about courses they taught during the summer at Tibetan Buddhist monasteries in South India as part of the Emory-Tibet Science Initiative (ETSI). Julia (who is a Postdoctoral Research Fellow in the Philosophy-Neuroscience-Psychology Program at Washington University) taught philosophy of science at Drepung Loseling Monastery in Karnataka, and Gillian (who is the senior program coordinator for the Emory University Initiative to Maximize Student Development, program associate in the Emory Neuroethics Program, and the Managing Editor for AJOB Neuroscience) taught neuroscience at the Sera Jey Monastery in Bylakuppe.

A neuroscience class at Sera Jey Monastery (Photo credit: Gillian Hue)

The ETSI (part of the Emory-Tibet Partnership) works to introduce science programs into the education of Tibetan Buddhist monks and nuns. Julia explained that “for the past six years they had a pilot program in the north of India [in Dharamsala], and now it is gradually being rolled out as part of the curriculum for some of the higher monastic degrees.” Such degrees are offered at Drepung Loseling and Sera Jey, which function as monastic universities.

Tuesday, September 23, 2014

Neuroethics in Theory and in Practice: A First-hand Look into the Presidential Commission for the Study of Bioethical Issues

Anyone who turned on CNN this past summer probably remembers the most popular news stories, ranging from Obama’s recent efforts to quell political violence in Iraq to Emory University’s admission of two Ebola patients. What was missing from several (if not all) of these newscasts, however, was any mention of the continuation of President Obama’s BRAIN Initiative right here at Emory University. Specifically, this past June, the Atlanta university welcomed the prestigious Presidential Commission for the Study of Bioethical Issues to confer over many pertinent ethical and neuroscientific issues. I was lucky enough to attend the session and catch a glimpse of the commission in action. When the session concluded, I found myself excited yet simultaneously confused about the group’s overall mission, and I decided it would be worthwhile to investigate the Bioethics Commission further. In particular, I hoped to understand why Obama created a group to investigate several important issues discussed on this blog every week.

From Center for Genetics and Society

Tuesday, September 16, 2014

Teaching Tactics - Neuroethics in the Curriculum

I made and abandoned several attempts at an opening sentence for this post and all of them included a deft way to bury the lead. The lead is this: I’m thrilled for two of my students whose video, Empathy and your Brain, was selected as one of the top submissions to the 2014 Brain Awareness Video Contest. Their video now has the chance to be selected as the fan favorite in the People’s Choice contest.


But why was I burying it?

Tuesday, September 9, 2014

Big data and privacy on the Web: how should human research be conducted on the Internet?

“They said, ‘You can’t mess with my emotions. It’s like messing with me. It’s mind control.'” That’s what Cornell communication and information science professor Jeffrey T. Hancock reported in a recent New York Times article about the public outcry over the now infamous Facebook emotional manipulation study (read on for details). Hancock was surprised and dismayed by the response. He sees the advent of massive-scale sociology and psychology research on the Internet as a “new era,” and he has a point. The days of mostly relying on college students as research subjects may be coming to an end. But how should research be conducted in this new online setting? Is it even appropriate to use data from web sites as it is collected now, with little, if any, user knowledge, and with informed consent existing only in the form of privacy policies that nobody reads?1 In this post I argue that the Internet is not the Wild West, and therefore internet-based research should not be allowed to sidestep established practices of informed consent. Furthermore, significant changes must be made so that these new research opportunities are pursued in the best way possible.

Via Linkedin

Earlier this year Facebook, the social network with well over 1 billion users, found itself in hot water after publishing a study in collaboration with academic researchers (including Hancock, above) that sought to measure “emotional contagion” online.2 In January 2012, Facebook researchers altered the News Feed content of nearly 700,000 users without their knowledge or explicit consent. Users in a control group had random posts withheld from their News Feeds, irrespective of emotional content, whereas other users had either some posts with a positive valence removed or some with a more negative tone hidden. The researchers found that, overall, omitting News Feed content – either negative or positive – seemed to affect the emotional valence of subsequent posts. And all those Facebook users who saw fewer emotion-tinged posts subsequently posted fewer updates. In effect, Facebook researchers were able to manipulate the emotions of hundreds of thousands of users simply by altering their News Feed content. The researchers were able to study a phenomenon – emotional contagion – that is very difficult to study in person due to a variety of potential experimental confounds, and they were also able to achieve extraordinary statistical power. But by not explicitly asking for consent, many users felt, well, used.

Monday, September 1, 2014

The sound of silence: Fetal alcohol spectrum disorder

By Emily Bell, PhD

Dr. Emily Bell is Researcher at the Neuroethics Research Unit, Institut de recherches cliniques de Montréal (IRCM). Dr. Bell’s MSc and PhD research in Psychiatry at the University of Alberta focused on investigating brain activity in mood and anxiety disorders using functional magnetic resonance imaging (fMRI). Her postdoctoral work shifted her into the field of neuroethics, where she examined ethical and social challenges associated with deep brain stimulation in psychiatric disorders. As an investigator of the Neuroethics Core of NeuroDevNet, a Canadian Network of Centres of Excellence, Dr. Bell has been involved in a wide range of network activities and research in the area of pediatric ethics. This includes recent work on the implications of stigma for public health policies and practices in fetal alcohol spectrum disorder and ethical concerns associated with the transition of youth with neurodevelopmental disorders to adult health services. Dr. Bell has been awarded support from the Social Sciences and Humanities Research Council (SSHRC), the Fonds de la recherche en santé du Québec (FRSQ), the Canadian Institutes of Health Research (CIHR), and the Killam Trust. She is currently lead co-investigator on two CIHR grants, including one in the area of vulnerability and mental health research ethics.

Let me start by saying that I never planned to have a research or action agenda in ethics and fetal alcohol spectrum disorder. Fetal alcohol spectrum disorder (FASD), as an umbrella term, describes a range of adverse developmental outcomes resulting from exposure to alcohol during pregnancy. Sure, my graduate degrees in psychiatry were an entry to understanding the challenges experienced by vulnerable patients, the social and relational aspects of health, and the deep and enduring force of stigma. All this prepared me to some degree for the complex web of ethical and social tensions in the study of FASD. These tensions cut across disciplines and domains and touch on alcohol policy, public health initiatives, concepts of motherhood, maternal/fetal rights, and specialized care for persons with disability throughout the life course. I was fortunate: the sheer complexity of the issues at stake also gave rise to a network of colleagues (NeuroDevNet, a Canadian Network of Centres of Excellence) who were themselves jacks of all trades – scientists and physicians with a sincere interest in, and powerful grasp of, the ethical issues faced by policy makers, pregnant women, and those affected by prenatal alcohol exposure.

From neurodevnet.ca

FASD is a leading cause of developmental disability and a significant public health issue accompanied by substantial lifelong burden, especially through secondary disabilities (i.e., difficulties at school, trouble with the law, challenges living independently). One of the key challenges in developing a comprehensive strategy for managing and preventing FASD is the need for a coordinated approach across a variety of social systems (i.e., the foster care system, the criminal justice system, and health systems). Moreover, despite the fact that prenatal alcohol exposure continues to be a prevalent cause of developmental disability, it receives far less attention than some other neurodevelopmental conditions, such as autism. It’s probably not hard to imagine why the field as a whole might suffer from a lack of concerted attention. The stigma associated with drinking during pregnancy is well known and can dissuade women who drink during pregnancy from seeking treatment or disclosing their drinking habits (Eggertson, 2013). We anticipate that this stigma also filters down to the child or the individual with FASD (Bell et al., under review). The moral elements potentially influencing the construction of fetal alcohol syndrome have been well characterized. Armstrong (1998) has described how the diagnosis risks becoming just another way to label women and children who are “beyond hope and destined to be societal problems.”

Tuesday, August 26, 2014

“Lifelogging” and neurophysiological computing: Will we forget how to forget?

One of the most famous examples of reminiscence involves a madeleine dipped in tea, which led to almost 3,000 pages of recollection by the narrator at the beginning of Marcel Proust's novel In Search of Lost Time, and we have all experienced such sensory triggers of a particular memory. Remembering the past helps us to re-examine our lives, make choices, and share personal accomplishments. We often use external devices to help us remember events big and small, and with advances in technology, we often record and make plans using a variety of digital devices such as iPhones, Microsoft’s Outlook, and even smart watches. We have the capability to store a lifetime of data with these advanced technologies, and with the advent of Facebook, Twitter, “selfies,” and blogs, it has become routine for many people to document their lives on a daily basis in digital form, a practice that has been referred to as “lifelogging.” The outcome of documenting activities digitally is human digital memories (HDM), which have been defined as “a combination of many types of media, audio, video, images, and many texts of textual content” [1].

The concept of recording documents and later having the ability to review them was first proposed by Dr. Vannevar Bush in 1945, when he described the “Memex” (a combination of “memory” and “index”) in an issue of The Atlantic Monthly [2]. As described in the article, a Memex was “a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility. It is an enlarged intimate supplement to his memory.” The device would look like a desk in which documents were recorded via either microfilm or photography.

From u-tx.net

Since that time, many similar devices have been developed, but a revolutionary advance came with Microsoft’s SenseCam, a wearable camera with a wide-angle lens and multiple sensors, including an infrared sensor to detect the presence of other people. The camera takes a photo every 30 seconds, resulting in up to 2,500 photos a day, and is capable of storing 30,000 images in total. Photos can then be uploaded to a computer and viewed later using a Microsoft application [3]. SenseCam was developed as a memory aid, and more than 50 research institutions have used the device in a variety of studies involving memory and behavior [4]. Notably, the SenseCam has shown promising results in studies where it was used as a memory aid for a child with anterograde amnesia [5] and for adult patients who were suffering from amnesia [3]. Aside from the medical purposes that a camera such as SenseCam could potentially serve, “lifelogging” has become more socially acceptable as we live in a digital age where Facebook posts and Twitter feeds are consumed constantly, and “selfies” are a regular occurrence at most events.

The SenseCam. From microsoft.com