Privacy and Consent with Innovative Neurotech and Neuroinformation

By Ankita Moss

Image courtesy of Pixabay.

You open your front door and find a package you ordered from a new neuroscience startup. You’re excited because you finally get to try the brain-computer interface (BCI) that you and your friends have been talking about.

With this BCI you’ll be able to “communicate”, exclusively via brain signals, with friends who own and use the same interface. Inside the package is a privacy disclaimer stating that all data and communications collected from your device will be logged in the company’s neuro-data bank. This leaves you weighing which matters more: protecting your data or chatting with your friends via BCI.

Dr. Emily Postan is an Early Career Fellow in Bioethics at the University of Edinburgh with a particular interest in the management of neuroinformation. I asked Dr. Postan whether she predicted that, through large-scale data collection and digital phenotyping efforts, neuroscience startups could one day change the norms of individual privacy and create a non-binary determination of consent (a gray form of consent in which “yes” or “no” are not sufficient answers).

Dr. Postan explained that the public will need reliable and approachable sources of explanation and guidance about how neuroscience startups generate and use neuroinformation. This will help people understand how such innovation might impinge upon their privacy in new or unpredictable ways, or have wider impacts on, for example, the definition of the “self.” We need greater public awareness of the ethical issues at stake in startup neurotechnology.

New Companies and Consent

Wearable technology is a popular startup sector perfectly positioned to collect large volumes of neural data. The prospect of real-time insight and performance enhancement has already been commercialized and sensationalized in the public sphere. For example, Halo Neuroscience and Neurable, two current neuroscience ventures developing next-generation wearables, are disrupting the area of human enhancement. To enhance human performance, however, we first need a baseline, and establishing that baseline and improving the technology over time requires collecting, analyzing, and averaging a wide sample of human neural data. Endeavors such as human enhancement cannot be undertaken without data collection and phenotyping, as categorization helps to pinpoint trends in the information.

Companies such as Halo Neuroscience and Neurable have already established thorough privacy policies that hint at a future built on mass neural data. Halo Neuroscience, in its privacy policy, discloses that “data [collected] includes information on the amount or type of Neuropriming delivered, impedance and other device performance information” (Halo Neuroscience). This statement leaves room for the collection of far larger amounts of data in the future, as “data recorded on [the] Halo Device about your activity is [already] transferred from [the] Halo Device to [Halo’s] servers in the US” (Halo Neuroscience). Neurable, in an effort to use aggregated data to improve its device, requires that its customers “obtain all third party consents and permissions needed” and acknowledge that “Application Data [may be used] to create and compile aggregated data and statistics…to provide to others.” Neurable also claims “that such aggregated data and statistics will not enable you or any living individual to be identified,” a claim that clears the way for expanding its data collection efforts.
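Claims like Neurable’s rest on the assumption that aggregation alone prevents re-identification, yet aggregates computed over small groups can still expose individuals. The Python sketch below illustrates one common safeguard, a minimum-cohort threshold; the record fields, values, and threshold are hypothetical and are not drawn from any company’s actual schema or policy.

```python
from statistics import mean

# Hypothetical per-user device records; field names are illustrative,
# not drawn from any company's actual schema.
records = [
    {"user_id": "u1", "session_minutes": 22, "impedance_kohm": 5.1},
    {"user_id": "u2", "session_minutes": 35, "impedance_kohm": 4.8},
    {"user_id": "u3", "session_minutes": 18, "impedance_kohm": 5.6},
]

MIN_COHORT = 3  # refuse to report statistics over groups too small to hide anyone

def aggregate(rows, field, min_cohort=MIN_COHORT):
    """Return the mean of a field only if enough users contribute to it."""
    values = [row[field] for row in rows]
    if len(values) < min_cohort:
        return None  # suppress: a tiny cohort could re-identify an individual
    return mean(values)

print(aggregate(records, "session_minutes"))     # 25 (three users: safe to report)
print(aggregate(records[:2], "impedance_kohm"))  # None (cohort of two: suppressed)
```

Even a cohort threshold is a blunt instrument: overlapping aggregates released over time can still leak information about individuals, which is one reason such “anonymized statistics” claims deserve scrutiny.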

Image courtesy of Needpix.

In this way, “wearable tech data” provides an avenue for “digital phenotyping” more broadly. This technology not only opens the door to digital phenotyping but also challenges the conditions under which such data is collected. With companies aiming for real-time data collection and performance enhancement, the data may be used to explore not only clinical states, but also pre-disease states, to predict the onset of disease. According to Dr. Postan, the progress of neuroscience startups and data collection in these pre-clinical states will soon change the concepts of privacy and consent.
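To make the idea concrete, digital phenotyping of a pre-disease state can be as simple as flagging a reading that drifts far from a user’s own baseline. This is a minimal sketch under assumed data; the signal values and the three-standard-deviation cutoff are invented for illustration, not a clinical standard.

```python
from statistics import mean, stdev

# Hypothetical daily readings of one resting-state signal feature for a
# single user (arbitrary units); values invented for illustration.
baseline = [0.92, 1.01, 0.97, 1.05, 0.99, 0.95, 1.02]
today = 1.45

# Flag a reading that drifts far from the user's own baseline.
mu, sigma = mean(baseline), stdev(baseline)
z_score = (today - mu) / sigma
if abs(z_score) > 3:
    print(f"Pre-clinical flag: today's reading is {z_score:.1f} SDs from baseline")
```

The privacy stakes follow directly: a longitudinal baseline precise enough to flag drift is also a detailed behavioral record of one identifiable person.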

One must consider whether and how consent to use and manipulate neural data is given to a neuro-company simply by using its neurotechnology product. When adopting a new neurotechnology, one might need to consider that real people, not just computers, stand behind the data collection efforts and use private neural data to advance the technology. One might also consider that the decision to give consent to the company, through mere use of the neurotechnology, is not necessarily binary. In the case of diagnostic neurotechnology, for example, the pre-clinical insight might outweigh the cost of donating neural data to a company. After weighing this trade-off, one should ensure the company in question practices ethical data sharing and uses neural data only for the stated purpose (for example, pre-clinical insight) (Zook et al.). Dr. Postan suggests that “neuroethics, like bioethics, is going to recognize that consent is too small a tool” to manage all pertinent ethical issues related to privacy and products developed by neuroscience startups.
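One way to picture purpose limitation in practice is a consent record that enumerates the purposes a user agreed to and refuses every other use. The sketch below is hypothetical; the ConsentRecord type and purpose names are invented for illustration and do not describe any real company’s consent model.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    """Hypothetical purpose-limited consent; not any real company's model."""
    user_id: str
    permitted_purposes: set = field(default_factory=set)

    def allows(self, purpose: str) -> bool:
        return purpose in self.permitted_purposes

def use_neural_data(consent: ConsentRecord, purpose: str) -> None:
    """Refuse any use of the data outside the purposes the user agreed to."""
    if not consent.allows(purpose):
        raise PermissionError(f"No consent on record for purpose: {purpose}")
    print(f"Processing data from {consent.user_id} for: {purpose}")

# The user agreed to pre-clinical insight and device improvement, nothing more.
consent = ConsentRecord("u1", {"pre_clinical_insight", "device_improvement"})
use_neural_data(consent, "pre_clinical_insight")   # permitted
# use_neural_data(consent, "third_party_sharing")  # would raise PermissionError
```

A structure like this makes consent more granular than a single yes-or-no checkbox, which is precisely the non-binary consent the gray areas above call for.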

Image courtesy of Pixabay.

Access to information on neuro-startups will, in turn, enable individuals to raise their own concerns about privacy and brain phenotyping before startups advance further. Dr. Postan stated that “robust ethical governance and the ethical training of [product developers and engineers]” will be needed in the near future to support them in thinking seriously about the ways that neuroinnovation may infringe on privacy and change how we view ourselves. To navigate such neuroinnovation, efforts such as the OECD’s (the Organisation for Economic Co-operation and Development) recent Workshop on Minding Neurotechnology: Delivering Responsible Innovation for Health and Well-Being have come to fruition, signaling that the integration of business, data, technology, and neuroscience has already become a pressing issue.

Such efforts provide concrete examples of “early detection screening,” if you will, of neuro-innovation and its future footprint on big data and privacy. To create progressive and impactful neurotechnology, we must also consider the ethical implications of these technologies and anticipate issues such as privacy and consent.

________________

Ankita Moss is an undergraduate student at Emory University majoring in Neuroscience and Behavioral Biology. Ankita has had a strong interest in neuroethics since high school and hopes to contribute to the field professionally in the future. Aside from neuroscience and neuroethics, she is also passionate about startups and entrepreneurship and founded the Catalyst biotechnology think tank at Emory Entrepreneurship and Venture Management. She hopes one day to navigate the ethical implications of neurotechnology startups and their impact on issues of identity and personhood.


References
  1. Zook, M., Barocas, S., Boyd, D., Crawford, K., Keller, E., Gangadharan, S. P., et al. (2017). Ten simple rules for responsible big data research. PLoS Comput Biol, 13(3), e1005399. https://doi.org/10.1371/journal.pcbi.1005399

Want to cite this post?

Moss, A. (2019). Privacy and Consent with Innovative Neurotech and Neuroinformation. The Neuroethics Blog. Retrieved from http://www.theneuroethicsblog.com/2019/07/privacy-and-consent-with-innovative.html
