How Social Media Can Revolutionize Mental Health: Recap of December’s The Future Now: NEEDS
By Kristie Garza
Image courtesy of Flickr.
Dr. De Choudhury’s talk focused on four key areas of her research. Figure 1 recreates a diagram similar to one Dr. De Choudhury referenced throughout her talk as she described how her research contributes to each quadrant formed by these key areas. She hopes to use social media as both a sensor and an intervention tool, for diagnosis as well as treatment.
Figure 1. Recreation of a figure used in Dr. De Choudhury's Future Now NEEDS talk.
Models like the social media depression index show that social media has powerful potential as a diagnostic tool, but real humans have to build the models that allow platforms to identify at-risk individuals. Building a model involves reading countless social media posts that lay bare individuals’ feelings, and closely reading and analyzing such strong emotional circumstances can cause second-hand trauma. Furthermore, this task is usually undertaken by graduate students, who are already at heightened risk for mental health issues. When conducting research of this type, it is important to address the needs of the researchers just as much as those of the patients they are trying to help.
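To make concrete what "building the model" involves, here is a deliberately toy sketch of scoring a user's posts against a word list. This is an illustrative assumption on my part, not Dr. De Choudhury's actual method: real models like the social media depression index draw on many linguistic, behavioral, and social-network features, and the function name and word list below are invented for the example.

```python
from collections import Counter

# Hypothetical, heavily simplified lexicon -- real diagnostic models
# are learned from data, not hand-picked word lists.
DEPRESSION_LEXICON = {"hopeless", "alone", "tired", "worthless", "empty"}

def risk_score(posts):
    """Return the fraction of words across a user's posts that match the lexicon."""
    words = [w.strip(".,!?").lower() for post in posts for w in post.split()]
    if not words:
        return 0.0
    counts = Counter(words)
    hits = sum(counts[w] for w in DEPRESSION_LEXICON)
    return hits / len(words)

posts = ["I feel so alone and tired lately", "Everything seems hopeless"]
print(round(risk_score(posts), 2))  # 3 lexicon hits out of 10 words -> 0.3
```

Even this toy version makes the human cost visible: to choose the lexicon, label training data, or validate outputs, someone has to read the raw posts.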
Image courtesy of Wikimedia Commons.
Image courtesy of Pixabay.
Usually, Institutional Review Boards are primarily responsible for identifying risks in studies and ensuring that research is conducted ethically. In this case, because of the novelty of Dr. De Choudhury’s research, there is no existing ethical precedent. Dr. De Choudhury commented on the divergent and inconsistent methods for handling privacy and ethics in her type of research. For example, what does “consent” mean in this context? Does checking “yes” when opening a Facebook account give an entire research team consent to analyze an individual’s social media life? This question about consent is similar to the situation of a participant entering a research study in which they will receive a brain scan. In both cases, a “healthy” individual consents to an activity that seems unrelated to their health, yet both have the potential to reveal medical illness. In neuroimaging research, if researchers find a tumor while scanning a participant, they face an ethical issue: should they tell the individual about the finding and potentially save his/her life, or should they withhold the information because the individual did not consent to a medical examination? Further, if the finding is disclosed, how does the individual receive professional help, and who is responsible for ensuring that they do? Do researchers in this case have a “duty to rescue,” and if so, how should they interpret this duty? More research on these kinds of “incidental findings” may inform consent procedures for social media as a diagnostic tool.
Further, these algorithms are based solely on social media users and exclude individuals who do not frequent social media, introducing bias. Could future diagnoses be biased against certain groups of individuals? Different age groups also often use social media differently. For example, a 13-year-old who has had social media present their whole life may interact with platforms differently than a 70-year-old who was introduced to the technology as an adult. Bias can also arise from how people present themselves on social media: individuals can use these platforms to portray an idealized version of their lives. In fact, a recent study showed that social media use negatively correlates with individual well-being. How “happy” an account appears on social media may not correlate directly, or even positively, with how the individual actually feels in his/her life.
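The sampling problem described above can be illustrated with a small simulation. The numbers here are invented assumptions for illustration only, not real usage statistics: suppose social media use differs by age, and a model can only learn from users. The training sample then over-represents the heavier-using group even when the underlying population is evenly split.

```python
import random

random.seed(0)  # fixed seed so the toy simulation is reproducible

# Invented rates for illustration: 60% of young people and 20% of
# older people use social media, in a population split 50/50 by age.
population = []
for _ in range(10_000):
    age_group = random.choice(["young", "older"])
    uses_social_media = random.random() < (0.6 if age_group == "young" else 0.2)
    population.append((age_group, uses_social_media))

# An algorithm trained on posts only ever "sees" the users.
sampled = [p for p in population if p[1]]
young_share_pop = sum(a == "young" for a, _ in population) / len(population)
young_share_sample = sum(a == "young" for a, _ in sampled) / len(sampled)
print(f"young share in population: {young_share_pop:.0%}, "
      f"in training sample: {young_share_sample:.0%}")
```

Under these made-up rates, young people make up about half of the population but roughly three quarters of the training sample, so anything the model learns is skewed toward how young people write and behave online.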
Image courtesy of Pexels.
Dr. De Choudhury’s work offers the unique advantage of using an existing platform to fill current clinical gaps, but with its novelty comes uncharted ethical territory. Acknowledging this, Dr. De Choudhury continuously considers the implications of her work. In fact, her team recently published an article outlining the key ethical issues in this type of work. The paper identifies three key areas of ethical tension: 1) Participants and Research Oversight, 2) Validity, Interpretability, and Methods, and 3) Implications for Stakeholders. Many of the topics presented in this post, such as consent, the lack of formal regulatory agencies for this type of research, and the population bias that can exist in algorithms, are also discussed in the paper. Another issue it raises is vulnerable populations; for example, an individual with schizophrenia may fear mass surveillance, and having researchers analyze their online behavior may exacerbate that fear. Her team also asks how this type of data should be shared while still protecting individual participants’ privacy. The paper concludes with calls to action that may help alleviate some of the ethical issues inherent in this type of research: research teams should 1) include all key stakeholders in the research process, 2) appropriately disclose study design and methods, and 3) hold ethics conversations continuously throughout the research process.
As our reliance on technology grows, it only makes sense that algorithms will become further ingrained in our daily lives. As discussed in this post, these novel applications of technology raise never-before-asked questions about how we want to interact with technology and how we want it to interact with us. While it is unrealistic to expect algorithms to independently diagnose or treat mental illness, the advancement of these technologies has the potential to benefit public health. In the same way physicians use technological advancements such as MRI to aid in the diagnosis and treatment of patients, social media algorithms may give mental health professionals another tool to more accurately diagnose a larger proportion of society. The potential for these technologies is great, but it is crucial to follow the calls to action that Dr. De Choudhury describes. Now, more than ever, is the time to set precedents for how this technology will coexist with society.
To read more about some of the issues brought up in this post and the ethical tensions of using social media to infer changes in mental health, delve deeper into Chancellor et al., 2019!
________________

Want to cite this post?
Garza, K. (2019). How Social Media Can Revolutionize Mental Health: Recap of December’s The Future Now: NEEDS. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2019/02/how-social-media-can-revolutionize.html