Self-Regulation and the Boundaries of the Self: A Proposal for Determining Which DBS Applications Affect Autonomy in TRD Patients

By Abel Wajnerman Paz 

Relational autonomy and clDBS

Image courtesy of Wikimedia Commons
The adaptive BCI known as ‘closed-loop deep brain stimulation’ (clDBS) is a device that stimulates the brain in order to prevent or modulate pathological neural activity patterns. What makes this kind of DBS an adaptive neurotechnology is that it automatically adjusts stimulation levels based on computational algorithms that detect or predict those pathological processes.

One of the prominent ethical concerns raised by clDBS is that it may take subjects “out of the decisional loop.” By inhibiting or modulating undesirable neural states ‘automatically,’ i.e., without any control or supervision by the subject, the device potentially undermines her autonomy. 

Some argue that this problem can be solved or minimized simply by thinking through key ethical concepts, like ‘autonomy.’ For instance, it has been suggested that autonomy is not threatened by clDBS if we understand it in a relational way. According to the notion of relational autonomy (e.g., Mackenzie & Stoljar 2000, Stoljar 2018), acting autonomously does not depend only on internal mental/neural processes or states which may be affected by pathologies, cognitive impairments, or direct coercion. Autonomy also depends on external (e.g., social or cultural) factors. For instance, a social environment that fosters self-esteem and critical thinking and promotes emotional and intellectual development may be necessary for autonomy to emerge. Goering et al. (2017) propose to extend the relational account of agency and autonomy to include human-computer interaction and suggest that the mechanisms of clDBS, even if external to the brain and exerting some degree of control over the individual, may support a patient’s autonomy. 

I agree that relational autonomy is useful for conceptualizing the threats (or lack thereof) associated with some clDBS applications. However, DBS applications are substantially different from one another, each involving a specific neurological or psychiatric condition, neural target, mechanism, and symptom(s), and therefore at least some of them may not fit into this analysis. In this entry, I will focus on clDBS for treatment-resistant depression (TRD) and show that, even within this single condition, applications differ regarding how they affect autonomy1. I will contrast the application of clDBS to Brodmann area 25 in the subgenual cingulate gyrus (Cg25), which inhibits structures that are hyperactive during sadness, with ventral capsule/ventral striatum (VC/VS) DBS, which re-enables the prefrontal circuits in charge of cognitive control, and argue that while the latter can be considered a case of relational autonomy, the former plausibly cannot. The upshot of the present analysis is that clDBS can enable autonomy in TRD patients unless it is used to 'substitute' (instead of support) some of the cognitive processes underlying it. 

Substitutive applications of clDBS 

From a relational view, external factors could either enable, undermine, or have no impact on autonomy. How can we determine whether clDBS’s external control over the brain is actually one of the enabling factors for autonomy? Beyond relational aspects, we can likely agree that autonomous action requires some degree of ‘self-government.’ Self-government occurs if and only if an agent is motivated to act as she does only because this motivation coheres with some mental state(s) that can be characterized as her genuine point of view on the action. In other words, the autonomous agent (mostly) acts on motives she approves or identifies with (Buss & Westlund 2018). This condition is satisfied only if she can suppress or inhibit the motives that do not match her point of view. Autonomous action requires the ability to regulate or modulate our motives in order to match them to our point of view. Thus, an external factor can support autonomy by contributing to or enabling these regulatory cognitive processes. 

Taking an example from Goering et al. (2017), suppose that “your new love interest turns out to be controlling and isolating, but you can’t see it well—he’s charismatic and focused on you, and you see his actions only as evidence of his love—” (Goering et al. 2017, p. 66). In this case, recovering autonomy may require the ability to evaluate your attachment to this person as negative (i.e., as a feeling you should not identify with, or a feeling not matching your point of view regarding what a good partner is) and then modulate it accordingly. However, you may not be able to perform this evaluation and modulation by yourself; you may need your friends and family “to help make you aware of how this new relationship is changing you for the worse” (ibid.). In this case, the cognitive processes that constitute autonomy are only possible through the intervention of others.  

Unlike other DBS applications, DBS for TRD directly or indirectly affects motivation. Hence, it is reasonable to think that it may also have an impact on autonomy. For instance, while DBS for Parkinson’s Disease (PD) provides relief of specific motor symptoms by disrupting neural activity related to movement inhibition, DBS for TRD can produce changes in positive and negative affect and (in the long term) improvements in interest and activity (Mayberg et al. 2005, Herrington et al. 2016). 

How are these changes produced? In some cases, clDBS for TRD works in a way that is analogous to how friends and family intervene in the 'love interest' example, that is, by supporting the cognitive processes associated with autonomy. clDBS can help restore a function associated with autonomy by modulating or influencing the circuits that perform that function in healthy subjects. For instance, DBS can influence the prefrontal neural mechanisms implementing cognitive control, i.e., the flexible adjustment of mental states and processes (including motivation) in the face of changing environmental demands. The inflexible behavior characteristic of some mental disorders (e.g., the automatic negative thoughts of patients with major depression) is often explained as a control deficit, and therefore flexibility (acting opposite to a habit) is considered critical to clinical recovery. It has been shown that VC/VS clDBS in both major depression and obsessive-compulsive disorder can support or re-enable the prefrontal neural circuits in charge of cognitive control (e.g., Widge et al. 2019; see also Crocker et al. 2020). In this case, clDBS may contribute to restoring the patient's ability to regulate her motives, and therefore this can be considered a case of relational autonomy. 

By contrast, in other applications of clDBS for TRD, the device helps to recover functionality not by modulating or influencing the neural circuits that perform functions associated with autonomy, but rather by substituting for them. In these cases, the clDBS device itself performs a key cognitive function that partly constitutes autonomy in healthy subjects. clDBS registers first-order motivational states (such as emotions) and suppresses or modulates them, or leaves them intact, depending on how they are 'evaluated' by the device's algorithm (e.g., as pathological or healthy). The device is then triggered to act depending on whether the recorded brain state matches its 'point of view' on whether those states warrant 'correction.' Specifically, some of the circuits that are monitored, analyzed, and controlled or modulated by clDBS in TRD patients are related to moods that influence behavior in both psychiatric and healthy subjects. For instance, one of the three circuits targeted by clDBS for TRD, Cg25, was identified in imaging studies of sad/negative mood and depressive illness. This area is hyperactive during sadness in both depressed patients and healthy volunteers undergoing induced sadness (Mayberg 2009, Widge et al. 2018). Therefore, clDBS's analysis and modulation of neural activity in these areas is equivalent to the direct regulation of motivation (as opposed to merely supporting the neural mechanisms that regulate motivation). This means that in this case clDBS substitutes for the neural circuits that control motivational states in autonomous agents. 
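The substitutive loop just described (record a motivational state, 'evaluate' it against the device's fixed criterion, then suppress it or leave it intact) can be made vivid with a toy sketch. The following Python fragment is purely illustrative: the biomarker values, the threshold, and all names are my assumptions, not any actual clinical algorithm. The point is only structural: the decision rule belongs to the device, not to the patient.

```python
# Toy sketch of a substitutive closed-loop control cycle.
# All names and values are illustrative assumptions, not a clinical algorithm.

def classify_state(biomarker: float, threshold: float = 0.8) -> bool:
    """The device's 'evaluation': flag a recorded state as pathological
    whenever its biomarker exceeds a fixed, device-side threshold."""
    return biomarker > threshold

def control_cycle(readings):
    """For each recorded neural state, the device (not the patient)
    decides whether the state warrants 'correction'."""
    actions = []
    for biomarker in readings:
        if classify_state(biomarker):
            actions.append("stimulate")   # suppress/modulate the state
        else:
            actions.append("no_action")   # state matches the 'healthy' profile
    return actions

# E.g., a strong negative-mood signature triggers stimulation, a weak one does not:
print(control_cycle([0.9, 0.3]))  # → ['stimulate', 'no_action']
```

Nothing in this loop consults the patient's own evaluation of her states; that asymmetry is what distinguishes substitution from support.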

In this particular case, prosthetic function substitution does not enable autonomy. If autonomy requires self-government, then action must result from the subject's (instead of the clDBS algorithm's) evaluation and regulation of her motivational states. As mentioned, the modulation and influence of such self-government by external forces may, on some occasions, support autonomy (this is the key insight of relational approaches to autonomy). Nevertheless, giving another (e.g., the clDBS device) the power to regulate our motives (as opposed to merely letting others influence our own exercise of that power) is incompatible with self-government. Self-government requires the self and, at least prima facie, a prosthetic device is not part of it. 

Extended self-government?

A straightforward strategy for arguing that substitutive applications of clDBS do support autonomy is to show that the device can be considered as part of ourselves, in some sense, and therefore its influence over our motivational states constitutes some form of self-government. 

Image courtesy of Ryan Somma on Flickr
The idea that some neurotechnological devices are part of ourselves has been defended through 'situated' approaches to cognition. For instance, in a widely discussed proposal, Fenton & Alpert (2008) argued that the BCIs that restore the communication capacities of patients with locked-in syndrome extend the patients' minds and their selves beyond their bodies (for a discussion, see Walter 2010, Heersmink 2013, Kyselo 2013, and Hibbert 2016). The 'extended mind' hypothesis posits that when external devices are functionally isomorphic to and/or complementary with a brain mechanism, they can become ontologically on a par with neural tissue; that is, they become parts of the very make-up of a cognitive agent. Therefore, if the function of a clDBS device is sufficiently and relevantly similar to and/or complementary with a brain function of a subject (e.g., cognitive control), then the device can be considered part of the subject's very make-up.

The ethical implications of mind extension have been captured by the so-called 'ethical parity argument.' The main idea is that if an external device is ontologically on a par with neural tissue (e.g., it is part of what a subject is), then it should be treated as ethically on a par with it. This entails that alterations of this external device are ethically on a par with alterations of the subject's brain (Heinrichs 2018, Heersmink 2017). For instance, if non-consented alterations of her brain by an external agent are considered violations of her psychological integrity, then the same applies to alterations of the device. 

We can draw a second and different implication from ethical parity, one which is crucial for our discussion: if an external device is ontologically on a par with the neural mechanisms of a given brain, then alterations of that brain by its own neural mechanisms should be seen as ethically on a par with alterations of that brain by the external device. Thus, if the former do not violate the subject's autonomy, neither do the latter. Therefore, the innate neural mechanisms' and the external device's control over our brain can both be considered instances of self-government. 

Experience as a moral boundary

If successful, this application of the ethical parity argument entails that substitutive uses of clDBS support autonomy. However, there are reasons to believe that, even if some devices extend cognitive processes and capacities beyond the brain and the body, they may not extend the person beyond this boundary. If this is the case for clDBS, the extended mind hypothesis cannot be used to address the concern about clDBS's threat to autonomy.

For instance, Buller (2013) argues that the moral sense of 'person' (that is, the sense that is relevant for agency, autonomy, responsibility, etc.) is closely related to the notion of a person as a subject of experiences, and this, in turn, is related to the sensory and somatosensory aspects of the body. According to this view, the cognitive mechanisms that constitute a person are only those underlying her subjective experience of herself and the world, that is, her body's somatosensory, proprioceptive, and sensory mechanisms. Similarly, Herring & Wall (2017) argue that the inviolability of our body (underpinned by the right to bodily integrity) relies on the fact that it is the point of implementation or realization of morally relevant aspects of our subjectivity and experience. For instance, states of well-being, pain and pleasure, states of flourishing, communing and relating are "all states that are located somewhere in the chain of physiological systems of our bodies" (p. 13). A person has exclusive use of, and control over, their body on the basis that the body is the site of their subjectivity and experience. 

Thus, under this view, external devices can extend the inviolable boundaries of a person only if they are part of the mechanistic basis of her subjectivity and experience. For instance, this condition would be satisfied by devices, such as cochlear implants, that receive and/or process perceptual inputs, bypass damaged sensory channels, and stimulate sensory areas underlying perceptual experience. 

Does clDBS satisfy this condition? Unlike open-loop DBS, a clDBS device 'senses' neural activity. It detects and processes a kind of proprioceptive or metacognitive input; e.g., it can detect and process information about our emotional states. However, this information is not fed into the patient's input systems (e.g., her sensory, somatosensory, or proprioceptive neural mechanisms). The device sends information directly to its own response systems in charge of modulating the target neural states or processes, thereby bypassing the patient's sensory, proprioceptive, and somatosensory systems altogether. Thus, given that clDBS is not part of the mechanistic basis of the patient's subjectivity, it cannot be considered part of her as a person or self. This means that substitutive applications of clDBS do not contribute to recovering the self-government required by autonomy. 

An artificial surrogate? 

Image courtesy of Wikimedia Commons
In brief, although the two clDBS applications for TRD I described modulate motivation, they do not affect autonomy in the same way. VC/VS DBS seems to change motivation indirectly, by re-enabling neural circuits that regulate mental states and therefore supporting the patient’s self-control necessary for autonomy. By contrast, Cg25 DBS changes motivation directly (e.g., by inhibiting the states evaluated as pathological by the device), thus substituting the neural mechanisms that control motivation. Given that these mechanisms constitute the patient’s self-government and that the device is not part of the self, this application does not re-enable autonomy.  

Of course, the proposed analysis does not entail that substitutive applications of clDBS are to be avoided. First, although these applications do not re-enable self-government, they plausibly do not undermine it either. As we saw, in these applications the device performs an autonomy-related function that the patient is often not able to perform by herself (e.g., the inhibition of a negative mood). However, this inability is not caused by the device; rather, it is part of the pre-existing condition (e.g., TRD) that the device is supposed to treat. 

More importantly, despite not re-enabling autonomy, substitutive applications do produce an autonomy-related improvement of the patient’s situation. Consenting to these applications is analogous to assigning a surrogate decision maker. A surrogate has the power to make decisions on behalf of a patient when (and only as long as) she lacks the capacity to do so and, crucially, the deferred decision-making process is constrained by information about the patient’s wishes or goals (Moye et al. 2013).  Thus, assigning a surrogate does not re-enable the decision-making capacity of a patient, but she can be better off by letting others she trusts decide on her behalf. 

Similarly, although substitutive clDBS applications for TRD do not re-enable the patient’s autonomy, the device can do what she would if she were fully autonomous. The device becomes some kind of artificial surrogate for her ability to modulate her motives and match them to her point of view. This interpretation of substitutive applications not only shows that they can be ethically acceptable (as surrogates are), but also suggests that consent in these situations should be focused on assessing whether the specific way in which the device evaluates and modulates motivational states matches the patient’s point of view on those states (e.g., whether the states inhibited by the device are negatively evaluated by the patient), so that the device can “act on her behalf.” 

1I thank Karen Rommelfanger for pointing out that my analysis should be focused on TRD and the DBS-mediated modulation of motivation. 


References

  1. Buller, T. (2013). Neurotechnology, invasiveness and the extended mind. Neuroethics, 6(3), 593-605.
  2. Buss, S., & Westlund, A. (2018). "Personal Autonomy", The Stanford Encyclopedia of Philosophy (Spring 2018 Edition), Edward N. Zalta (ed.). https://plato.stanford.edu/archives/spr2018/entries/personal-autonomy/.
  3. Crocker, B., Paulk, A. C., Peled, N., Ellard, K. K., Weisholtz, D. S., ... & Widge, A. S. (2020). Closed loop enhancement and neural decoding of human cognitive control. bioRxiv.
  4. Gilbert, F., O'Brien, T., & Cook, M. (2018). The effects of closed-loop brain implants on autonomy and deliberation: what are the risks of being kept in the loop?. Cambridge Quarterly of Healthcare Ethics, 27(2), 316-325.
  5. Goering, S., Klein, E., Dougherty, D. D., & Widge, A. S. (2017). Staying in the loop: Relational agency and identity in next-generation DBS for psychiatry. AJOB Neuroscience, 8(2), 59-70.
  6. Heersmink, R. (2013). Embodied tools, cognitive tools and brain-computer interfaces. Neuroethics, 6(1), 207-219.
  7. Heersmink, R. (2017). Distributed selves: Personal identity and extended memory systems. Synthese, 194(8), 3135-3151.
  8. Heinrichs, J. H. (2018). Neuroethics, Cognitive Technologies and the Extended Mind Perspective. Neuroethics, 1-14.
  9. Herring, J., & Wall, J. (2017). The nature and significance of the right to bodily integrity. The Cambridge Law Journal, 76(3), 566-588.
  10. Herrington, T. M., Cheng, J. J., & Eskandar, E. N. (2016). Mechanisms of deep brain stimulation. Journal of neurophysiology, 115(1), 19-38.
  11. Hibbert, R. (2016). LIS and BCIs: a local, pluralist, and pragmatist approach to 4E cognition. Neuroethics, 9(2), 187-198.
  12. Kyselo, M. (2013). Locked-in syndrome and BCI-towards an enactive approach to the self. Neuroethics, 6(3), 579-591.
  13. Mackenzie, C., & Stoljar, N. (Eds.). (2000). Relational autonomy: Feminist perspectives on autonomy, agency, and the social self. Oxford University Press. 
  14. Mayberg, H. S., Lozano, A. M., Voon, V., McNeely, H. E., Seminowicz, D., Hamani, C., ... & Kennedy, S. H. (2005). Deep brain stimulation for treatment-resistant depression. Neuron, 45(5), 651-660.
  15. Mayberg, H. S. (2009). Targeted electrode-based modulation of neural circuits for depression. The Journal of clinical investigation, 119(4), 717-725.
  16. Millán, J. D. R., Rupp, R., Müller-Putz, G., Murray-Smith, R., Giugliemma, C., Tangermann, M., ... & Neuper, C. (2010). Combining brain–computer interfaces and assistive technologies: state-of-the-art and challenges. Frontiers in neuroscience, 4, 161.
  17. Moye, J., Sabatino, C. P., & Brendel, R. W. (2013). Evaluation of the capacity to appoint a healthcare proxy. The American Journal of Geriatric Psychiatry, 21(4), 326-336.
  18. Stoljar, N., "Feminist Perspectives on Autonomy", The Stanford Encyclopedia of Philosophy (Winter 2018 Edition), Edward N. Zalta (ed.). https://plato.stanford.edu/archives/win2018/entries/feminism-autonomy/.
  19. Walter, S. (2010). Locked-in syndrome, BCI, and a confusion about embodied, embedded, extended, and enacted cognition. Neuroethics, 3(1), 61-72.
  20. Widge, A. S., Malone Jr, D. A., & Dougherty, D. D. (2018). Closing the loop on deep brain stimulation for treatment-resistant depression. Frontiers in neuroscience, 12, 175.
  21. Widge, A. S., Zorowitz, S., Basu, I., Paulk, A. C., Cash, S. S., Eskandar, E. N., ... & Dougherty, D. D. (2019). Deep brain stimulation of the internal capsule enhances human cognitive control and prefrontal cortex function. Nature communications, 10(1), 1-11.

______________


Abel Wajnerman Paz is an Assistant Professor in the Department of Philosophy at the Alberto Hurtado University, Santiago de Chile. He obtained his PhD in Philosophy at the University of Buenos Aires (2015), and was a CONICET (2015-2017) and a FONDECYT Postdoctoral Fellow (2018-2021). His main areas of interest are the Philosophy of Cognitive Neuroscience and Neuroethics. He focuses on epistemic and conceptual issues related to neural coding, computation and information processing, their relation to mental capacities (specifically, perception, thought and consciousness) and their neuroethical implications regarding mental privacy, psychological integrity, and autonomy. 


Want to cite this post?

Wajnerman Paz, A. (2021). Self-Regulation and the Boundaries of the Self: A Proposal for Determining Which DBS Applications Affect Autonomy in TRD Patients. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2021/04/self-regulation-and-boundaries-of-self.html
