Exploring the Risks of Digital Health Research: Towards a Pragmatic Framework
By Dr. John Torous
*Image courtesy of Flickr user Integrated Change*
In our article, "Assessment of Risk Associated with Digital and Smartphone Health Research: a New Challenge for IRBs," published in the Journal of Technology in Behavioral Science [1], we explore the evolving ethical challenges in evaluating digital health risk, and here we expand on them. Risk and harm in our 21st-century digital era are themselves evolving concepts that shift with both technology and societal norms; how do we quantify them to help IRBs make safe and ethical decisions regarding clinical research?
A first step is to consider the baseline risk of any online or connected technology. Take privacy, for example. In countries like the United States, internet service providers can now legally collect and sell users' web browsing histories without consent [2]. Popular websites such as Facebook may at times track users even when they are logged out, or sometimes without them ever having signed up [3]. The uses of this digital data run the gamut from targeted advertising to police subpoenas of Fitbit and smartphone data for criminal prosecution [4]. With so much personal data already being collected in everyday life as the price of admission for today's online services, what qualifies as high- or low-risk digital data collection in a clinical study, or even in everyday life? In Europe, this question led to the General Data Protection Regulation (GDPR), which took effect on May 25, 2018 [5] and sets strict, enforceable standards for online privacy, the right to be forgotten, data portability, data access, and breach notifications. Whether other countries will follow with legislation similar to the GDPR remains to be seen, but until then the question of assessing risks around privacy remains challenging for IRBs, researchers, and the public.
*Image courtesy of Pixabay*
Further considerations more unique to digital health studies include assessing technology literacy and bystander risk. While words like 'GPS,' 'anonymized data,' and 'hashing' frequently appear in informed consent documents for smartphone studies, it is important to ensure that those signing informed consent actually understand what these words mean. Do you know the difference between de-identified and anonymized data? Some research suggests that those with lower health literacy may be vulnerable to assuming that health technologies like smartphone apps are safer and more secure than they actually are [6]. This raises the prospect of a new digital divide, based not on access to technology but on understanding risks and equitable utilization. Yet another risk to consider, one that rarely arises in classical clinical research but frequently does in digital technology studies, is bystander risk. Voice recordings may capture the voices of others nearby, Bluetooth monitoring will record information about nearby smartphones, and cameras may capture an entire scene with others in it.
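The difference between de-identified and anonymized data is easy to state but subtle in practice: replacing a direct identifier with a hash is de-identification, not anonymization, because a small identifier space can often be reversed. A minimal sketch (using hypothetical phone numbers, not any study's actual protocol) shows why:

```python
import hashlib

def pseudonymize(phone: str) -> str:
    """Replace a phone number with a hashed pseudonym (de-identification)."""
    return hashlib.sha256(phone.encode()).hexdigest()

# A study dataset might store only the hash, never the raw number.
record_id = pseudonymize("555-0142")

# But because the space of possible inputs is small, anyone holding the
# hashed data can rebuild the mapping by hashing every candidate number,
# re-identifying the participant. True anonymization requires that no such
# linkage back to an individual remains possible.
candidates = {pseudonymize(f"555-{n:04d}"): f"555-{n:04d}" for n in range(10000)}
reidentified = candidates[record_id]
```

This is one reason consent language that simply promises "hashed" or "de-identified" data may overstate the protection a participant actually receives; mitigations such as salting with a secret key, or aggregation, are what move data closer to anonymous.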
*Image courtesy of Wikimedia Commons*
Recognizing risks in digital health studies is not an exercise in hindering research, but rather the pathway to mitigating risk and ensuring safer, better studies. For example, the largest privacy risks are often the easiest to mitigate with appropriate encryption and security protocols. Ensuring that informed consent language is appropriate for those who are less technology-literate can help them better understand the study and participate more meaningfully. Communities like Connected and Open Research Ethics (CORE) offer free and easy access to support and online forums where researchers, IRBs, and anyone else can ask questions and receive answers on digital health ethics. The Neuroethics Blog you are reading right now also offers a wealth of relevant posts to help guide ethical decision making in this digital era. But perhaps the best resource of all remains an open mind, willing to explore not only the benefits of digital technology but also to ponder its risks, bringing both sides together for more informed decision making.
_______________
References
1. Torous J, Roberts LW. Assessment of Risk Associated with Digital and Smartphone Health Research: a New Challenge for Institutional Review Boards. Journal of Technology in Behavioral Science. 2018:1-5.
2. FCC releases proposed rules to protect broadband consumer privacy (2016). Federal Communications Commission. https://www.fcc.gov/document/fcc-releases-proposed-rules-protect-broadband-consumer-privacy
3. Kantrowitz A. Here's how Facebook tracks you when you're not on Facebook. BuzzFeed. https://www.buzzfeed.com/alexkantrowitz/heres-how-facebook-tracks-you-when-youre-not-on-facebook
4. Commit a crime? Your Fitbit, key fob or pacemaker could snitch on you. The Washington Post. https://www.washingtonpost.com/local/public-safety/commit-a-crime-your-fitbit-key-fob-or-pacemaker-could-snitch-on-you/2017/10/09/f35a4f30-8f50-11e7-8df5-c2e5cf46c1e2_story.html
5. EU General Data Protection Regulation (GDPR) Portal. https://www.eugdpr.org/
6. Mackert M, Mabry-Flynn A, Champlin S, Donovan EE, Pounders K. Health literacy and health information technology adoption: the potential for a new digital divide. Journal of Medical Internet Research. 2016;18(10).
Torous, J. (2018). Exploring the Risks of Digital Health Research: Towards a Pragmatic Framework. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/07/exploring-risks-of-digital-health.html