
How Social Media Can Revolutionize Mental Health: Recap of December’s The Future Now: NEEDs

By Kristie Garza 

Image courtesy of Flickr.

The past decade has seen a rise in the use of social media, allowing people to chronicle daily interactions and major life events on platforms such as Facebook and Twitter. As such, these platforms act as “archives of our lives,” as Dr. Munmun De Choudhury stated in her The Future Now: Neuroscience and Emerging Ethical Dilemmas Series (NEEDs) talk in December. Dr. De Choudhury is an assistant professor in the School of Interactive Computing at the Georgia Institute of Technology, where she studies computational social science. At Tech, she leads the Social Dynamics and Wellbeing Lab, where members of her lab can regularly be found browsing social media websites such as Reddit. However, unlike many graduate students who frequent social media websites during work hours, Dr. De Choudhury’s students are actually working. Dr. De Choudhury and her team search for patterns of activity on social media platforms that may shed light on psychological and behavioral states.

Dr. De Choudhury’s talk focused on four key areas of her research. Figure 1 is a recreation of a diagram similar to one Dr. De Choudhury referenced throughout her talk as she addressed how her research contributes to each quadrant formed by these key areas. She hopes to use social media as both a sensor and an intervention tool, as well as a diagnosis and a treatment tool.

Figure 1. Recreation of figure used in Dr. De Choudhury’s Future Now NEEDs talk.

Dr. De Choudhury first addressed the potential uses of social media for diagnosis, both as a sensor and an intervention tool (Figure 1, top row). For example, she outlined a study in which her team assessed language on Twitter to predict the onset of major depressive disorder. After obtaining consent, her team analyzed the Twitter language of individuals previously diagnosed with clinical depression. Using this data, they developed a model to identify signs of depression, which allowed for the creation of the first “social media depression index.” The model was successful: it correlated highly with the prevalence of depression observed by the Centers for Disease Control and Prevention (CDC) and identified similar trends, such as gender differences and circadian patterns related to depression diagnosis.
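The general idea of turning a user’s language into a single index can be illustrated with a toy sketch. To be clear, this is not Dr. De Choudhury’s actual model, which was trained on consented Twitter data with far richer features; the lexicon, function names, and scoring scheme below are entirely hypothetical:

```python
# Hypothetical sketch, not the published model: a toy lexicon-based
# scorer that aggregates per-post language features into a single
# per-user "index." Real models use many more signals (timing,
# engagement, linguistic style), not a hand-picked word list.
DEPRESSION_LEXICON = {"hopeless", "alone", "tired", "worthless", "numb"}

def post_score(post: str) -> float:
    """Fraction of a post's words found in the toy lexicon."""
    words = post.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in DEPRESSION_LEXICON)
    return hits / len(words)

def user_index(posts: list[str]) -> float:
    """Average per-post score across a user's timeline."""
    if not posts:
        return 0.0
    return sum(post_score(p) for p in posts) / len(posts)

posts = ["feeling hopeless and alone tonight", "great day at the park!"]
print(round(user_index(posts), 3))
```

Even this toy version makes the article’s point concrete: someone (or some team) must decide which language “counts,” and those decisions are exactly where the human labor and the ethical weight of model-building live.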

Models like the social media depression index show that social media has powerful potential as a diagnostic tool, but real humans have to build the model before platforms can identify at-risk individuals. Building the model involves reading countless social media posts that highlight individuals’ feelings. Closely reading and analyzing strongly emotional circumstances has the potential to cause second-hand trauma. Furthermore, this task is usually undertaken by graduate students, who are already at heightened risk for mental health issues. When conducting research of this type, it is important to address the needs of researchers just as much as those of the patients they are trying to help.

Image courtesy of Wikimedia Commons

Social media can also be used as an intervention tool, to diagnose and treat individuals (Figure 1, column 2). Dr. De Choudhury’s team analyzed Reddit’s Suicide subreddit to search for identifying features of conversations associated with suicidal ideation. These platforms can potentially provide an avenue for proactive intervention; using social media as a diagnostic tool could offer intervention to millions more people than currently have access to intervention techniques. Still, Dr. De Choudhury acknowledges that ethical challenges must be addressed in her research. She commented on how a personalized risk assessment, individually tailored to each user’s needs, can build trust between an individual and the online support he/she is receiving. This type of assessment is unlike current sources of social media suicide support, which flag posts potentially indicative of suicidal thoughts and connect the user with suicide prevention links. More personalized support from social media could be beneficial, but the long-term implications of placing this trust in an inorganic entity, such as a social media platform, are unknown. As a society, are we willing to trust a computer program to know when to intervene to save an individual’s life based solely on online comments? To answer this question, it is important to consider the implications of faults in the program, as computer detection programs rarely operate with 100% accuracy. For example, if the algorithm wrongly suggested a social media user was suicidal and automatically took intervention steps, it would be nudging a healthy individual to change their online behavior. A situation like this would manipulate innate human behavior and has the potential to negatively alter a healthy individual’s emotions.
A less than 100% accurate program could also be detrimental if we solely defer to such algorithms for clinical diagnosis and intervention and they fail to detect a suicidal individual. Cases like this pose the question: What is society’s responsibility to social media and, conversely, what does social media owe society? 
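The cost of imperfect accuracy can be made concrete with simple base-rate arithmetic. The numbers below are hypothetical and not from the talk; they only illustrate why even a seemingly accurate detector, screening for a rare condition at platform scale, flags far more healthy users than at-risk ones:

```python
# Illustrative base-rate arithmetic (hypothetical numbers): a detector
# that is "95% accurate" in both directions still mislabels tens of
# thousands of healthy users when the condition it screens for is rare.
users = 1_000_000        # users screened by the platform
prevalence = 0.01        # assume 1% are genuinely at risk
sensitivity = 0.95       # true positive rate of the detector
specificity = 0.95       # true negative rate of the detector

at_risk = users * prevalence
healthy = users - at_risk

true_positives = at_risk * sensitivity          # at-risk and flagged
false_positives = healthy * (1 - specificity)   # healthy but flagged

precision = true_positives / (true_positives + false_positives)
print(f"healthy users flagged: {false_positives:,.0f}")
print(f"chance a flagged user is actually at risk: {precision:.0%}")
```

Under these assumptions, roughly 49,500 healthy users are flagged against about 9,500 genuinely at-risk users, so most flagged individuals are false positives — exactly the scenario in which automatic intervention nudges healthy people, and deferring solely to the algorithm misses the rest.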

Image courtesy of Pixabay.

Dr. De Choudhury then referred to her recent study in which she identified that some individuals turn to social media as their main source of support, potentially due to the stigma around seeking clinical support. She found that certain types of internet support, such as emotional support, are more helpful than other, more informative types of support. On social media, support often originates from other social media users: from strangers attempting to help each other heal. It is important to note that most individuals offering support through social media are not properly trained to do so. While their support may be motivated by a desire to help, this lack of formal clinical training could potentially cause more harm than good. These types of issues are raised in a New York Times article, which mentions Dr. De Choudhury’s research, about the risks of using social media for these purposes.

Usually, Institutional Review Boards are primarily responsible for identifying risks in studies and ensuring the ethical nature of research. In this case, due to the novelty of Dr. De Choudhury’s research, there is no existing ethical precedent. Dr. De Choudhury commented on the divergent and inconsistent approaches to privacy and ethics in her type of research. For example, what does “consent” mean in this type of research? Does checking “yes” when opening a Facebook account constitute consent for a whole research team to analyze an individual’s social media life? This question of consent is similar to when a participant enters a research study in which they will receive a brain scan. In both cases, a “healthy” individual is consenting to participate in an activity that seems unrelated to their physical health, yet both have the potential to reveal medical illness. In neuroimaging research, if researchers find a tumor while scanning a participant, they face an ethical issue: should they tell the individual about this finding and potentially save his/her life, or should the researchers not disclose the information because the individual did not consent to a medical examination? Further, if the observation is revealed, how does the individual receive professional help, and who is responsible for ensuring it? Do researchers in this case have a “duty to rescue,” and if so, how do they interpret this duty? More research on these types of “incidental findings” may inform consent procedures for social media as a diagnostic tool.

Further, these algorithms are based solely on social media users and exclude individuals who do not frequent social media, adding a bias to the algorithm. Would future diagnoses be biased against certain groups of individuals? Different age groups also often use social media differently. For example, a 13-year-old who has had social media present their whole life may interact with platforms differently than a 70-year-old who was introduced to technology as an adult. There could also be bias in how people interact with social media platforms: individuals can use social media to portray an idealized version of their lives. In fact, a recent study showed that the use of social media negatively correlates with individual well-being. How “happy” an account appears on social media may not be positively or directly correlated with how the individual actually feels in his/her life.

Image courtesy of Pexels.

The talk concluded with an audience member’s question asking whether publishing this kind of work may have negative consequences. In science, publishing work with a complete description of the methodologies and techniques used is encouraged. Dr. De Choudhury commented on the importance of different groups validating methods and increasing reproducibility in science; however, in this specific field, that can be challenging, as a detailed description of her algorithms could be exploited. As an example, she described how her team had to cease a collaboration with an insurance company when she realized the potential for unethical applications of her work in the insurance field. In cases like this, life insurance companies could deny coverage or raise premiums if a customer has been flagged as at risk for suicide, and this information would have originated from seemingly harmless social media activity.

Dr. De Choudhury’s work provides the unique advantage of using an existing platform to fill current clinical gaps, but with its novelty comes uncharted ethical territory. Acknowledging this, Dr. De Choudhury continuously considers the implications of her work. In fact, her team recently published an article outlining the key ethical issues in her type of work. This paper outlines three key areas of ethical tension: 1) Participants and Research Oversight, 2) Validity, Interpretability, and Methods, and 3) Implications for Stakeholders. Many of the topics presented in this post, such as consent, the lack of formal regulatory agencies for this type of research, and the population bias that can exist in algorithms, are also discussed in this paper. Another issue discussed in the paper is vulnerable populations; for example, an individual with schizophrenia may fear mass surveillance, and having researchers analyze his/her online behavior may exacerbate that fear. Her team also asks how this type of data should be shared while still protecting each participant’s privacy. Dr. De Choudhury’s paper concludes with calls to action that may help alleviate some of the ethical issues inherent in this type of research: research teams should 1) include all key stakeholders in the research process, 2) appropriately disclose study design and methods, and 3) hold ethics conversations continuously throughout the research process.

As our reliance on technology grows, it only makes sense that algorithms will become further ingrained in our daily lives. As discussed in this post, these novel applications of technology raise never-before-asked questions about how we want to interact with technology and how we want technology to interact with us. While we are unlikely to rely on algorithms to independently diagnose or treat mental illness, the advancement of these types of technologies has the potential to benefit public health. In the same way physicians use technological advancements, such as MRI, to aid in the diagnosis and treatment of patients, social media algorithms may provide another tool for mental health professionals to more accurately diagnose a larger proportion of society. The potential for these technologies is great, but it is crucial to follow the calls to action that Dr. De Choudhury describes. Now, more than ever, is the time to set precedents for how this technology will coexist with society.

To read more about some of the issues brought up in this post and the ethical tensions of using social media to infer changes in mental health, delve deeper into Chancellor et al., 2019!

Kristie is a 4th year graduate student in the Neuroscience PhD program at Emory University. Her research focuses on understanding how exogenously driving different brain rhythms can impact the brain’s immune environment. Kristie is also an editorial intern for The American Journal for Bioethics Neuroscience and a supporting editor of The Neuroethics Blog.

Want to cite this post?

Garza, K. (2019). How Social Media Can Revolutionize Mental Health: Recap of December’s The Future Now: NEEDS. The Neuroethics Blog. Retrieved on , from

