Tuesday, February 23, 2016

A plea for “slow science” and philosophical patience in neuroethics

By Richard Ashcroft, PhD

Professor Richard Ashcroft, an AJOB Neuroscience Editorial Board Member, teaches medical law and ethics at both the undergraduate and postgraduate level in the Department of Law at Queen Mary University of London.

Readers of AJOB Neuroscience will be very familiar with the range and pace of innovation in applications of neurosciences to problems in mental health and wellbeing, education, criminology and criminal justice, defense, and love and sexuality – to name but a few areas of human concern. However, there is a skeptical tendency that pushes back against such innovation and claims. This skepticism takes a number of forms. One form is philosophical: some claims made about neurosciences and their applications simply make no sense, because they rest on conceptual mistakes or logical fallacies. This kind of attack has been made most persuasively by neuroscientist M.R. Bennett and philosopher P.M.S. Hacker in their Philosophical Foundations of Neuroscience (2003). Another form is empirical: some claims are advanced on the basis of weak or flawed evidence, or go well beyond what that evidence could support, even when the data themselves are robust and obtained in methodologically sound ways. A typical instance is the way newspapers regularly report neuroimaging studies that purport to describe “the autistic brain,” when at best they describe some differences in one subset of autistic people carrying out one experimental task, compared with a small control group of putatively neurotypical people. Another form is ethical: some technologies raise significant ethical challenges. And obviously some challenges are political, bearing on the interests of particular social groups or on competing visions of the society we want to live in. The standard examples here are drawn from debates about neuro- or psychopharmacological enhancement.

Tuesday, February 16, 2016

Our Lazy Brain Democracy: Are We Doomed?

By John Banja, PhD

Lately, I’ve been thinking about the Martin Shkreli embarrassment in connection with System 1 and 2 reasoning [1].  Popularized by thinkers like Nobel Laureate Daniel Kahneman, System 1 thinking refers to the fast, intuitive, reflexive, usually highly reliable cognition that humans deploy perhaps 95 percent of the time in navigating and making sense of their environments. System 2 thinking, on the other hand, is slow, effortful, plodding, analytical, and data dependent—in short, an activity that most humans don’t particularly gravitate towards, perhaps because our brains, at least according to Kahneman, are inherently lazy [1]. Shkreli, you’ll recall, is a former pharmaceutical CEO who found himself at the top of everyone’s hate list when he announced that his company was going to increase the cost of its drug Daraprim by more than 5,000 percent.  (Daraprim is used to treat toxoplasmosis, a parasitic infection that is especially dangerous for patients with HIV/AIDS.) The public’s System 1, gut-level outrage predictably kicked in and, within weeks, Shkreli found himself without a job and battling criminal charges for securities fraud he allegedly committed with a previous company.
Martin Shkreli arrest, image courtesy of YouTube 

Tuesday, February 9, 2016

AI and the Rise of Babybots: Book Review of Louisa Hall’s Speak

By Katie Strong, PhD

“Why should I be punished for the direction of our planet’s spin? With or without my intervention, we were headed towards robots,” writes Stephen Chinn, a main character in the novel Speak by Louisa Hall. Stephen has been imprisoned for his creation of robots deemed illegally lifelike, and in a brief moment of recrimination when writing his memoir from prison, he continues, “You blame me for the fact that your daughters found their mechanical dolls more human than you, but is it my fault, for making a doll too human? Or your fault, for being too mechanical?” 

The dolls that resemble humans are referred to as “babybots,” robots whose minds deviate only 10% from human thought and which can process sensory information. Speak tells the story of how the babybots come into being and then describes the aftermath once they have been deemed harmful and removed from society. The book moves among characters’ stories set in four different time periods, from the 16th century to 2040, and the plot unfolds through letters, court transcripts, and diary selections from five main characters. Through these various first-person views, the pieces of the story behind the babybots and the rise of artificial intelligence become clear.

Tuesday, February 2, 2016

Emotions without Emotion: A Challenge for the Neurophilosophy and Neuroscience of Emotion

By Louis Charland

Louis C. Charland is Professor in the Departments of Philosophy, Psychiatry, and the School of Health Studies, at Western University in London, Canada. He is also an International Partner Investigator with the Australian Research Council Centre of Excellence for the History of Emotions, based at the University of Western Australia, in Perth, Australia.

Many scholars of the affective domain now consider “emotion” to be the leading keyword of the philosophy of emotion and the affective sciences. Indeed, many major journals and books in the area refer directly to “emotion” in their titles: for example, Emotion Review, Cognition and Emotion, The Emotional Brain (LeDoux 1996), Cognitive Neuroscience of Emotion (Lane & Nadel 2002), and The Emotional Life of Your Brain (Davidson & Begley 2012). At times, “feeling,” “mood,” “affect,” and “sentiment” are argued to be close contenders, but such challenges are normally formulated by contrasting their explanatory promise, and their theoretical status, with “emotion.” Historically, debates about the nature of affective terms and posits used to revolve, in conceptual orbit, around the term “passion” and its many variants (Dixon 2003). In our new emotion-centric universe, everything seems to revolve around “emotion” and its many variants.