Tuesday, July 31, 2018

The Missing Subject in Schizophrenia

By Anna K. Swartz

Image drawn by Anna Swartz
Since this is, in many ways, a post about narratives, I have decided I should begin with mine. 

Every morning I take an oblong green and white pill, and every night I take another of the same oblong green and white pill. I also take round and oval pills. They help keep me tethered to reality, functioning with fewer hallucinations and delusions. My official diagnosis is schizoaffective disorder, bipolar type 1. Schizoaffective disorder is closely allied to schizophrenia but is rarer, striking about 0.3 percent of the population. It's also by many accounts "worse," in that it incorporates the severe depression and mania that are characteristic of bipolar disorder as well as the loss of touch with reality wrought by schizophrenia.

I find it easier to admit to being bipolar than to being schizophrenic. I have found a much more positive reception to bipolar disorder. It's a disease often associated with creative individuals who are highly intelligent and have traits that many see as advantageous, even coveted. That is, there is something romantic about the disease even as it wreaks havoc in a person's life. It's also much easier to talk about depression and mania because the chances are overwhelming that during the span of a normal lifetime, we will come face-to-face with some manifestation of mania or depression, either in ourselves or in someone close to us. It's familiar and understandable.

That is less the case when it comes to hallucinations and delusions. Everyone has an inner voice that they can talk to sometimes in their thoughts. But hearing voices is not like that. Auditory hallucinations sound like they are coming from outside your head. Have you ever tried to write or read while people are having a loud conversation around you? Now imagine them screaming at you. This is how I feel most days. The voices are almost always caustic and denigrating, telling me that I would be better off dead. Delusions are also hard to explain. With a head fizzing with mad thoughts, I've stared up at ceilings with blue and brown swirling irises like cars in the center of a volcano. More often, I will see objects sitting on surfaces and watch them tip over or fall out of the corner of my eye, only to blink and have them be static. I also experience paranoid delusions, which commonly manifest as thoughts that others are plotting against me, following me, watching me, or talking about me.

Tuesday, July 24, 2018

Exploring the Risks of Digital Health Research: Towards a Pragmatic Framework

By Dr. John Torous

Image courtesy of Flickr user Integrated Change
We hear much about the potential of digital health to revolutionize medicine and transform care, but less about the risks and harms associated with the same technology-based monitoring and care. "It's a smartphone app … how much harm can it really cause?" is a common thought today, but it is also the starting point for a deeper conversation. That conversation is increasingly happening at Institutional Review Boards (IRBs) as they are faced with an expanding number of research protocols featuring digital- and smartphone-based technologies.

In our article, "Assessment of Risk Associated with Digital and Smartphone Health Research: A New Challenge for IRBs," published in the Journal of Technology and Behavioral Science [1], we explore the evolving ethical challenges in evaluating digital health risk, and here we expand on them. Risk and harm in our 21st-century digital era are themselves evolving concepts that shift with both technology and societal norms; how, then, do we quantify them to help IRBs make safe and ethical decisions regarding clinical research?

Tuesday, July 17, 2018

The interplay between social and scientific accounts of intergroup difference

By Cliodhna O’Connor

Image courtesy of Wikimedia Commons
The investigation of intergroup difference is a ubiquitous dimension of biological and behavioural research involving human subjects. Understanding almost any aspect of human variation involves the comparison of a group of people, who are defined by some common attribute, with a reference group which does not share that attribute. This is an inescapable corollary of applying the scientific method to study human minds, bodies and societies. However, this scientific practice can have unanticipated – and undesirable – social consequences. As my own research has shown in the contexts of psychiatric diagnosis (O’Connor, Kadianaki, Maunder, & McNicholas, in press), gender (O’Connor & Joffe, 2014) and sexual orientation (O’Connor, 2017), scientific accounts of intergroup differences can often function to reinforce long-established stereotypes, exaggerate the homogeneity of social groups, and impose overly sharp divisions between social categories.

Without disputing the scientific legitimacy of intergroup comparisons in research, it is important to acknowledge that the definitions and distinctions that determine which populations are compared are given by culture, not by nature. For one thing, there are relatively few discrete categories underlying human variability 'in the wild:' even for variables seen as the most obvious examples of natural kinds, such as sex, the boundaries between categories are much fuzzier than is typically acknowledged (Fausto-Sterling, 2000). The pragmatic demands of experimental design encourage scientists to carve the social world at joints that it may not naturally possess. For another, the choice of intergroup comparison is not value-neutral: the priorities of governments, industries, funding agencies, universities and individual scientists dictate which comparisons are deemed sufficiently interesting or important to investigate. Therefore, even within the scientific sphere, how questions are asked and answered is influenced by a priori understandings of social categories. These understandings are absorbed into all stages of the scientific process, from research design right through the collection, analysis and interpretation of data.

Tuesday, July 10, 2018

Solitary Confinement: Isolating the Neuroethical Dilemma

By Kristie Garza
 
Eastern State Penitentiary
Image courtesy of Wikimedia Commons
In 1842, Charles Dickens visited Eastern State Penitentiary in Philadelphia to examine what was being called a revolutionary form of rehabilitation. After his visit, he summarized his observations in an essay in which he stated, "I am only the more convinced that there is a depth of terrible endurance in it which none but the sufferers themselves can fathom, and which no man has a right to inflict upon his fellow-creature. I hold this slow and daily tampering with the mysteries of the brain, to be immeasurably worse than any torture of the body" (1). Dickens' words describe solitary confinement. While there is no single standard for solitary confinement conditions, it usually involves an individual being placed in complete sensory and social isolation for 23 hours a day. What Dickens observed in 1842 is not unlike current solitary confinement conditions.

Tuesday, July 3, 2018

Neuroethics: the importance of a conceptual approach

By Arleen Salles, Kathinka Evers, and Michele Farisco

Image courtesy of Wikimedia Commons.
What is neuroethics? While there is by now a considerable bibliography devoted to examining the philosophical, scientific, ethical, social, and regulatory issues raised by neuroscientific research and related technological applications (and a growing number of people around the world claim to take part in the neuroethical debate), less has been said about how to interpret the field that carries out such examination. Yet this question calls for discussion, particularly because the default understanding of neuroethics sees the field as just another type of applied ethics, and, in particular, one dominated by a Western bioethical paradigm. The now-classic interpretation of neuroethics as the "neuroscience of ethics" and the "ethics of neuroscience" covers more ground, but still fails to exhaust the field (1).

As we have argued elsewhere, neuroethics is a complex field characterized by three main methodological approaches (2-4). “Neurobioethics” is a normative approach that applies ethical theory and reasoning to the ethical and social issues raised by neuroscience. This version of neuroethics, which generally mirrors bioethical methodology and goals, is predominant in healthcare, in regulatory contexts, and in the neuroscientific research setting.