The Neuroethics of Decisional Enhancement

Peter B. Reiner, Saskia K. Nagel, and Imre Bárd 

Humans would benefit from better decision-making. They spend more than they earn, eat more than they should, and overuse shared resources (Hardin, 1968; Schor, 1999; WHO, 2015). The dominant theory is that we make imperfect decisions because our cognitive resources are insufficient for the task (Simon, 1972; Kahneman, 2011; Gigerenzer et al., 2011). The implications for health and well-being are substantial: not only does the cognitive load of poverty diminish the quality of decision-making (Mullainathan and Shafir, 2013), but bad decisions themselves have been suggested to be a leading cause of death (Keeney, 2008). Thus, improving human decision-making - decisional enhancement - is an important objective for society. 

A range of decisional enhancement strategies already exist. Best known is the nudge, a change to the environment that makes ‘good’ decisions more likely (Thaler and Sunstein, 2009). Nudging has entered the policy arena, having been implemented by over 200 governments across the world (OECD, 2019). One of the most well-established nudges is “Save More Tomorrow”, developed by the behavioral economists Richard Thaler and Shlomo Benartzi (Thaler and Benartzi, 2004). Designed to make saving for retirement as easy and painless as possible, the basic premise of Save More Tomorrow is to ask people to commit to saving more in the future. Moreover, savings rates are linked to future pay raises, and people remain in the program unless they opt out. Consistent with the principles of behavioral economics, each of these manipulations helps to overcome one or more of the cognitive biases that we all exhibit. Save More Tomorrow was subsequently incorporated into the Pension Protection Act of 2006. 

Other decisional enhancements involve humans collaborating to solve problems, a concept known as shared decision-making. Best studied in the context of the doctor-patient relationship, shared decision-making is the hallmark of patient-centred care (Barry and Edgman-Levitan, 2012). Other examples include family decision-making (Rettig, 1993) and decisions rendered by juries (Devine et al., 2001). An entirely different avenue of decisional enhancement comprises tools implemented not by humans alone but by artificial intelligence (AI). Much ink has been spilled over the possibility that AIs will carry out tasks that were once the exclusive province of humans (Russell and Norvig, 2009), but there are myriad opportunities for AIs to work in concert with humans to produce better decisions. We call such tools AI-enabled decisional enhancements (AIDEs). 

Influencing human decision-making is a delicate enterprise, and ethical implications abound. A range of objections to nudges have surfaced (Bovens, 2009; Selinger and Whyte, 2011; Blumenthal-Barby and Burroughs, 2012), and in empirical studies, we and others have shown that people prefer nudges that are unconcealed (Felsen et al., 2013; Sunstein, 2016). We have suggested that with appropriate attention to the ethical outcomes, decisional enhancements can be developed to support autonomy (Nagel and Reiner, 2013). In a complementary vein, it has been suggested that we ought to revamp nudges as ‘boosts’, enhancements that do not impinge upon the decision itself but rather expand decision-makers’ competency (Grüne-Yanoff and Hertwig, 2016; Hertwig and Grüne-Yanoff, 2017). For example, rather than make one decision or another more likely (the explicit objective of a nudge), a boost might improve understanding of our cognitive biases, thereby empowering people to make better decisions for themselves. In this way, boosts fully support autonomy of decision-making in a manner that is often disregarded with nudges. 

Ethical consideration of decisional enhancement represents a logical evolution of debates about cognitive enhancement, a foundational issue in neuroethics (Parens, 1998; Greely, 2008; Fitz and Reiner, 2014). Key steps were Douglas’ (2008) introduction of moral enhancement and Danaher’s (2015) proposal of epistemic enhancement, both emphasizing the use of bioenhancement tools to improve the ability of humans to acquire and process knowledge. We subsequently suggested that algorithmic devices such as smartphones might be a more effective way to enhance our brains than biological tools (Fitz and Reiner, 2016), and amongst the domains that might be enhanced by algorithmic devices is decision-making (Reiner and Nagel, 2017). The neuroethical issue that arises most regularly when considering the propriety of decisional enhancement appears to be the autonomy of decision-making, irrespective of whether the influence derives from humans or an AI (Felsen et al., 2013; Reiner and Nagel, 2017; Niker et al., 2018). 

Autonomy is traditionally thought of as the ability to make decisions about ourselves by ourselves. Decisions are considered autonomous when they meet specific internal and external conditions (Christman, 2010). For example, one commonly cited stipulation is that decisions ought to be free of undue influences (Frankfurt, 1971; Dworkin, 1981; Buss and Westlund, 2018). This hyper-individualized view has been called into question by scholars who have emphasized the social embeddedness of human nature. From this has emerged a new construct known as relational autonomy (Mackenzie and Stoljar, 2000), which accepts the inevitability of external influences not only upon our decision-making but indeed upon our lived experience as humans. Reconciling the tension between personal and relational autonomy is challenging. In the medical realm, personal autonomy reigns supreme, invoked as a bulwark against physician paternalism (Beauchamp and Childress, 2001). In day-to-day life, the situation is complicated by cultural values; some accept the notion of shared decision-making more readily than others (Chirkov et al., 2010). 

A partial solution is provided by the concept of autonomy support - an external input to decision-making that aims to support individuals without contravening their autonomy (Nagel and Reiner, 2013; Nagel, 2015). Examples include instances when the decision maker has explicitly requested help with a decision, or when an external influence is delivered in a manner that is sensitive to the bounds of autonomy, both highly relevant for decision-making by patients in a medical context. One can also envision applying the principles of autonomy support to AIDEs: as they become ever more capable of predicting attributes of individuals on the basis of limited information (Kosinski et al., 2013), they may be able to tailor efforts to improve decision-making to the individual, in much the same way that kith and kin are able to understand the needs and desires of their friends and families when confronted with important decisions. Yet the line between autonomy support and unwelcome persuasion, whether by an AIDE (Matz et al., 2017) or by a human (Cialdini, 1993), is thin. It is specifically to ensure that we understand how best to strike this delicate balance that there exists a need to explore the neuroethics of decisional enhancement. 


References 
  1. Barry MJ, Edgman-Levitan S. (2012). Shared Decision Making — The Pinnacle of Patient-Centered Care. N Engl J Med, 366:780-781. 
  2. Beauchamp TL, Childress JF. (2001). Principles of biomedical ethics. New York: Oxford University Press.
  3. Blumenthal-Barby JS, Burroughs H. (2012). Seeking Better Health Care Outcomes: The Ethics of Using the “Nudge.” Am J Bioeth, 12(2):1–10. 
  4. Bovens L. (2009). The Ethics of Nudge. In: Preference Change. New York: Springer, Dordrecht, pp. 207–19. 
  5. Buss S, Westlund AC. (2018). Personal Autonomy. Stanford University [Internet], 1–56. Available from: https://plato.stanford.edu/entries/personal-autonomy/ 
  6. Chirkov VI, Ryan R, Sheldon KM. (2010). Human Autonomy in Cross-Cultural Context. Chirkov VI, Ryan RM, Sheldon KM, editors. Vol. 1. Dordrecht: Springer Science and Business Media. 
  7. Christman J. (2010). The politics of persons: individual autonomy and socio-historical selves. Cambridge: Cambridge University Press. 
  8. Cialdini RB. (1993). Influence: The psychology of persuasion. NY: William Morrow. 
  9. Danaher J. (2015). On the Need for Epistemic Enhancement: Democratic Legitimacy and the Enhancement Project. Law, Innovation and Technology, 5(1):85–112. 
  10. Devine DJ, Clayton LD, Dunford BB, Seying R, Pryce J. (2001). Jury decision-making: 45 years of empirical research on deliberating groups. Psychology, Public Policy, and Law, 7(3):622–727. 
  11. Douglas T. (2008). Moral Enhancement. Journal of Applied Philosophy, 25(3):228–45. 
  12. Dworkin G. (1981). The concept of autonomy. Grazer Philosophische Studien, 12:203–13. 
  13. Felsen G, Castelo N, Reiner PB. (2013). Decisional enhancement and autonomy: public attitudes towards overt and covert nudges. Judgment and Decision Making, 8:202–13. 
  14. Fitz NS, Nadler R, Manogaran P, Chong EWJ, Reiner PB. (2014). Public Attitudes Toward Cognitive Enhancement. Neuroethics, 7:173–188. 
  15. Frankfurt HG. (1971). Freedom of the Will and the Concept of a Person. The Journal of Philosophy, 68:5–20. 
  16. Gigerenzer G, Hertwig R, Pachur T. (Eds.). (2011). Heuristics: The foundations of adaptive behavior. Oxford, UK: Oxford University Press. 
  17. Greely HT, Sahakian B, Harris J, Kessler RC, Gazzaniga MS, Campbell P, Farah MJ. (2008). Towards responsible use of cognitive-enhancing drugs by the healthy. Nature, 456(7223):702–705. 
  18. Grüne-Yanoff T, Hertwig R. (2016). Nudge Versus Boost: How Coherent are Policy and Theory? Minds and Machines, 26(1-2):149–83. 
  19. Hardin G. (1968). The tragedy of the commons. The population problem has no technical solution; it requires a fundamental extension in morality. Science,162(3859):1243–1248. 
  20. Hertwig R, Grüne-Yanoff T. (2017). Nudging and Boosting: Steering or Empowering Good Decisions. Perspectives on Psychological Science, 12(6):973–86. 
  21. Kahneman D. (2011). Thinking, Fast and Slow. NY:Farrar Straus Giroux. 
  22. Keeney RL. (2008). Personal Decisions Are the Leading Cause of Death. Operations Research, 56(6):1335–47. 
  23. Kosinski M, Stillwell D, Graepel T. (2013). Private traits and attributes are predictable from digital records of human behavior. Proc Natl Acad Sci USA, 110(15):5802–5805. 
  24. Mackenzie C, Stoljar N. (2000). Relational Autonomy. Oxford: Oxford University Press. 
  25. Matz SC, Kosinski M, Nave G, Stillwell DJ. (2017). Psychological targeting as an effective approach to digital mass persuasion. Proc Natl Acad Sci USA, 114:12714–12719. 
  26. Mullainathan S, Shafir E. (2013). Scarcity: The New Science of Having Less and How It Defines Our Lives. New York: Henry Holt. 
  27. Nagel SK, Reiner PB. (2013). Autonomy support to foster individuals' flourishing. Am J Bioeth, 13(6):36–7. 
  28. Nagel SK. (2015). When Aid Is a Good Thing: Trusting Relationships as Autonomy Support in Health Care Settings. Am J Bioeth, 15(10):49–51. 
  29. Niker F, Reiner PB, Felsen G. (2018). Perceptions of Undue Influence Shed Light on the Folk Conception of Autonomy. Front Psychology, 9:57–11. 
  30. OECD. (2019). Delivering better policies through behavioural insights: New approaches. Paris: OECD. 
  31. Parens E. (1998). Enhancing Human Traits: Ethical and Social Implications. Washington, DC: Georgetown Univ Press. 
  32. Reiner PB, Nagel SK. (2017). Technologies of the Extended Mind: Defining the Issues. In: Illes J, editor. Neuroethics: Anticipating the Future. Oxford University Press, pp. 111–26. 
  33. Russell S, Norvig P. (2009). Artificial Intelligence: A modern approach. New Jersey: Prentice Hall. 
  34. Schor JB. (1999). The Overspent American. NY: HarperCollins. 
  35. Selinger E, Whyte K. (2011). Is there a right way to nudge? The practice and ethics of choice architecture. Sociology Compass. 5(10):923–35. 
  36. Simon H. (1972). Theories of Bounded Rationality. In: McGuire CB, Radner R, editors. Decision and Organization. Amsterdam: North Holland. 
  37. Sunstein CR. (2016). People Prefer System 2 Nudges (Kind of). Duke Law Journal, 66:121–68. 
  38. Thaler RH, Sunstein CR. (2009). Nudge. München: Penguin. 
  39. Thaler RH, Benartzi S. (2004). “Save More Tomorrow™: Using Behavioral Economics to Increase Employee Saving.” Journal of Political Economy, 112:S164–S187. 
  40. World Health Organization. (2015). Obesity and Overweight. WHO, Fact Sheet no. 311.
____________

Peter Reiner is Professor of Neuroethics in the Department of Psychiatry at the University of British Columbia, a member of the Centre for Artificial Intelligence Decision-making and Action, and founder of the Neuroethics Collective, a virtual think tank of scholars who share an interest in issues of neuroethical import. 



Saskia Nagel is Full Professor and chair of the group Applied Ethics with a Focus on Ethics of Technology at the Department for Society, Technology, and Human Factors at RWTH Aachen University.  




Imre Bárd is a PhD candidate in Social Research Methodology at the London School of Economics and Political Science. 






Want to cite this post?

Reiner, P. B., Nagel, S. K., and Bárd, I. (2020). The Neuroethics of Decisional Enhancement. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2020/11/the-neuroethics-of-decisional.html

