Hannah Maslen is a Research Fellow in Ethics at the Oxford Martin School and the Oxford Uehiro Centre for Practical Ethics. She currently works on the Oxford Martin Programme on Mind and Machine, where she examines the ethical, legal, and social implications of various brain intervention and interface technologies, from brain stimulation devices to virtual reality.
This post is part of a series that recaps and offers perspectives on the conversations and debates that took place at the recent 2015 International Neuroethics Society meeting.
In its Gray Matters report, the United States Presidential Commission for the Study of Bioethical Issues underscored the importance of integrating ethics and neuroscience early and throughout the research endeavor. In particular, the Commission declared:
"As we anticipate personal and societal implications of using such technologies, ethical considerations must be further deliberated.
Executed well, ethics integration is an iterative and reflective process that enhances both scientific and ethical rigor."
What is required to execute ethics integration well? How can philosophers make sure that their work has a constructive role to play in shaping research and policy-making?
In a recent talk at the International Neuroethics Society Annual Meeting, I reflected on this, and on the proper place of anticipation in the work that philosophers and neuroethicists do in relation to technological advance. Anticipating, speculating and keeping ahead of the technological curve are all laudable aims. It is crucial that likely problems and potential solutions are identified ahead of time, to minimize harm and avoid knee-jerk policy reactions. Keeping a step ahead inevitably requires all involved to make predictions about the way a technology will develop and about its likely mechanisms and effects. Indeed, philosophers will sometimes take leave from discussion of an actual emerging or prototype technology and extrapolate to consider the ethical challenges that its hypothetical near-future versions might present to society. Key features of the technology are identified, distilled and carefully subjected to analysis.
|Gray Matters report|
Cognitive enhancement technologies – a topic discussed in depth in the second volume of the Gray Matters report – have received this sort of treatment. There has been a substantial amount of work dedicated to examining things like whether the use of cognitive enhancement drugs by students constitutes cheating, or whether professionals in high-risk jobs such as surgery or aviation should be required to take them. Some of this work appears to involve greater or lesser degrees of speculation. For example, a philosopher might present herself with the following sort of questions:
Imagine that cognitive enhancer X improves a student’s performance to a level that would be achieved through having extra private tutorials. Does her use of cognitive enhancer X constitute cheating?
Imagine that cognitive enhancer Y is completely safe, and effective at remedying fatigue-related impairment. Should the surgeon be required to take cognitive enhancer Y?
Working through these sorts of examples can generate conclusions of great conceptual interest. In relation to the first, we might get clearer on what cheating precisely amounts to, and perhaps which sorts of advantages are and are not unfair in an educational setting. In relation to the second, we might come to interesting conclusions about the limits of professional obligations, or perhaps about the relationship between cognitive capacities and responsibility.
However, working at this level of abstraction – as valuable as it is from a philosophical perspective – cannot give us what we need to determine, for example, whether Duke University should uphold its policy on the use of off-label stimulants as a species of academic dishonesty, or whether the Royal College of Surgeons should recommend the use of Modafinil by surgeons as good practice. Abstracted work undeniably has its place, and is hugely interesting, but it does not integrate well with concrete discussions about scientific research directions and policy. Why is this?
To some extent, conducting the sort of thought experiments involving cognitive enhancers X and Y requires that we strip away the messiness of the details of the technologies. This allows us to carefully isolate and vary the features we think will be morally relevant to see how they affect our intuitions and reasoning. We want the principal consideration in the surgeon case to be the fact that the drug remedies fatigue and reduces error. It also makes the case sufficiently abstract to be generalizable to a whole category of cognitive enhancers – there may be different drugs with a variety of properties that all share the impairment-reducing effect. The example might also extrapolate to near-future possible pharmaceuticals – we might not have such a drug now, but what if we did?
However – and this is the crucial point – many of the details that are stripped away to enable the philosophical question to be carefully defined and delineated are hugely relevant to determining what we should do; but we cannot add all this detail back in after reaching our conclusions and expect them to remain the same.
|Pharmaceuticals image courtesy of Flickr user Waleed Alzuhair|
In relation to a university’s policy on enhancers, the reality is that different drugs affect different people differently; they may simultaneously enhance one cognitive capacity whilst impairing another; some drugs might have their principal effects on working memory, whilst others enhance wakefulness and task enjoyment. All these features and many others are relevant to the question of fairness and what our policy for particular drugs should be. Importantly, the specific features of different drugs might lead to different conclusions.
In relation to professional duties, it is going to matter that a drug like modafinil is not without side effects; that it can cause gastrointestinal upset; that individuals can perceive themselves as functioning better than they in fact are, and so on. These features bear, amongst other things, on effectiveness, permissibility of professional coercion, and also on whether reasonable policy options might sit somewhere between a blanket requirement and a blanket ban.
It’s important that the reader does not take me to be saying that we should give up theoretical work on neurotechnologies. In fact, it is precisely through careful construction of the possible features of technologies that we can learn more about the socially important dimensions for which they have significance. If we want to get clearer on the boundaries of what we can and cannot require a surgeon to do, we need to consider many possibilities sitting just before and beyond the boundary: at some point, perhaps, a requirement would encroach too much into his life beyond his professional role to be justifiable. The degree of encroachment would have to be varied very slightly (almost certainly artificially) until we get to the point somewhere along the line from hand-washing to heroics where we identify the boundary.
Rather, my suggestion is that we need to be clear when we start an ethical analysis about whether we are doing something more conceptual or whether we want to make a statement about what should be done in a particular situation. When we want to do the latter, we have to make sure that we work with as much of the scientific detail as possible. This requires philosophers and ethicists to read scientific papers – perhaps at the level of detail found in review articles – to make sure they retain the detail necessary to offer a practical recommendation. Ideally, such work would be completed in collaboration with scientists, or at least subjected to their scrutiny.
Of course, there’s a difference between speculating because you are not an expert on that technology and speculating because the information is not yet available. There should be none of the former, and the latter should be carefully managed so that recommendations do not far outstrip the limited information base: there’s a lot more we need to know about incorporating computer chips into brains, for example, before we can even start to say anything practical about what should and shouldn’t be done.
Scientific black boxes are to some extent inevitable when speculating about neurotechnological advances. The task for practical ethicists is to open as many as they can and to be mindful of the potential ethical significance of those they cannot. They also need to be careful to determine when they want to conduct theoretical analysis, using real and imagined technologies to illuminate conceptual truths, and when they want to argue for a course of action in relation to a particular neuroscientific application or technology, the details of which are crucial in order for ethical integration to be well executed.
Maslen, H. (2015). Shrewder speculation: the challenge of doing anticipatory ethics well. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2015/11/shrewder-speculation-challenge-of-doing.html