Social Media as a Neuroprosthetic for Mind and Emotion

By Yunmiao Wang

Roger McNamee, author of the recent book Zucked and an early mentor to Mark Zuckerberg, condemns Facebook for the catastrophic damage it has done to society. This accusation is not new. It has long been an open secret that users of big tech companies such as Facebook, Twitter, and Google are not these companies’ main clients. Instead, they are the mine that produces the coal fueling the flourishing data economy. While public attention centers on users’ privacy and the vendors who profit from selling their data, the experiments that researchers conduct on social media to study the human mind and behavior have not been given enough scrutiny.

In 2012, 689,003 Facebook users participated in an experiment that tested Facebook’s ability to induce emotional contagion by manipulating their News Feeds. Emotional contagion is an interpersonal influence in which one person’s emotions or behaviors trigger similar emotions or responses in others. Throughout the week of the experiment, some people were shown more positive emotional posts from their friends in their News Feeds, while others were shown more negative emotional content. The researchers then analyzed the emotional states of these users through their subsequent posts on the social network. The study was later published in Proceedings of the National Academy of Sciences of the United States of America (PNAS), concluding that emotional contagion can occur through verbal expression alone, outside of in-person interaction and without nonverbal cues (Kramer, Guillory, & Hancock, 2014). The results were not surprising, as emotional contagion has long been demonstrated in traditional laboratory settings (Hatfield, Cacioppo, & Rapson, 1993). There is one caveat to this massive-scale experiment: unlike in traditional experiments, the subjects entered the study unknowingly.
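
For a concrete picture of the design, here is a minimal, purely illustrative Python sketch of the manipulation and its outcome measure. The word lists stand in for the word-count classification (LIWC) the study used; the lexicon, the omit_prob parameter, and every function name here are hypothetical, not Facebook’s actual code.

```python
import random

# Toy lexicon standing in for LIWC-style word counting; the words and
# the scoring rule are illustrative assumptions, not the study's classifier.
POSITIVE = {"happy", "great", "love", "wonderful"}
NEGATIVE = {"sad", "awful", "hate", "terrible"}

def post_sentiment(text):
    """Crude lexicon score: +1 per positive word, -1 per negative word."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def filtered_feed(posts, condition, omit_prob=0.5):
    """Return a News-Feed-like list with some emotional posts withheld.

    Mirrors (loosely) the study's design of probabilistically omitting
    emotional posts from the feed rather than injecting new ones.
    """
    kept = []
    for post in posts:
        score = post_sentiment(post)
        if condition == "reduce_negative" and score < 0 and random.random() < omit_prob:
            continue
        if condition == "reduce_positive" and score > 0 and random.random() < omit_prob:
            continue
        kept.append(post)
    return kept

def mean_sentiment(posts):
    """Outcome measure: average sentiment of what a user later writes."""
    return sum(map(post_sentiment, posts)) / max(len(posts), 1)
```

A shift in the mean sentiment of users’ subsequent posts, in the same direction as the filtering of their feeds, is the signature of contagion the study reported.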

The fact that Facebook conducted a psychological study on its users without gaining the approval of an Institutional Review Board (IRB) led to controversy and intense ethical discussion. Is Facebook legally allowed to study its users’ data? Technically, yes. According to its Data Policy, which all users have “read” and agreed to when signing up for an account, the company has the right to use this information and conduct research to improve its products. However, should Facebook or other organizations be allowed to psychologically manipulate people and potentially alter their mental states and behaviors without their explicit consent?

Facebook might have been the first to conduct psychological experiments on such a large scale, but it certainly has not been the last company to take advantage of the loose ethical guidelines for online platforms to study the human mind and behavior. Twitter bots, essentially automated accounts, have been another popular vehicle for online experiments. For instance, a Twitter botnet, a network of connected Twitter bots with a large number of followers, was created by another group of scientists to study the complex contagion of information on social media (Mønsted, Sapiezynski, Ferrara, & Lehmann, 2017). The bots were designed to be human-like, to post popular content, and to have a higher follower-to-following ratio than a typical bot. They also shared follower information with one another to increase the likelihood that several bots would be followed by the same users. With the botnet established, the scientists created novel hashtags for the bots to tweet and retweet, fabricating the popularity of certain topics. During the experiments, the bots tracked and recorded the targeted users’ interactions with the bots as well as with other real users. The study provided evidence that information originating from the botnet was more likely to spread when users were exposed to multiple sources (a botnet) rather than a single source (a single bot).
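
The study’s core claim, that several distinct sources are more persuasive than one repeated source, is easy to see in a toy simulation. The sketch below is not Mønsted et al.’s actual model; the adoption probabilities and their dose-response form are invented purely for illustration.

```python
import random

def adoption_prob(n_sources, base=0.05, boost=0.08):
    """Hypothetical dose-response curve: each additional *distinct* source
    raises the chance that a user adopts the hashtag (complex contagion).
    The functional form and the rates are assumptions, not measured values."""
    if n_sources <= 0:
        return 0.0
    return min(1.0, base + boost * (n_sources - 1))

def simulate(n_users=10_000, n_bots=3, single_source=False, seed=1):
    """Compare a botnet (several distinct bot accounts) against a single bot
    tweeting just as often. Under complex contagion, distinct sources matter
    more than repeated messages from one account."""
    rng = random.Random(seed)
    adopted = 0
    for _ in range(n_users):
        sources = 1 if single_source else n_bots
        if rng.random() < adoption_prob(sources):
            adopted += 1
    return adopted / n_users

print("botnet, 3 distinct sources:", simulate(single_source=False))
print("single bot, repeated tweets:", simulate(single_source=True))
```

Under these made-up rates, the botnet condition yields roughly four times the adoption of the single-bot condition, which is the qualitative pattern the experiment was designed to detect.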

This way of diffusing information on social media could have wide applications, such as more effective marketing and defenses against the spread of malicious content. While the results could be valuable for understanding human communication, it is undeniable that bots have an unfair advantage over traditional experiments: they influence people’s behavior without their knowledge, not to mention their consent. Unlike in the Facebook experiment discussed earlier, there was no prior agreement of any kind between the researchers and the targeted users in this Twitter study.

Even though the topics that the bots promoted were neutral or positive, the impact of these tweets on people could still be unpredictable. And of course, not everyone who employs Twitter bots does so with good intentions. It has long been suggested that the 2016 US election was influenced by Russian hackers and trolls. According to data collected by the U.S. House Intelligence Committee on the Internet Research Agency (IRA), the notorious Russian troll farm, over 11 million Facebook users were exposed to advertisements purchased by the IRA. The report found more than 36,000 automated Russian-linked accounts and roughly 288 million impressions of Russian bot tweets on Twitter alone between September 1 and November 15, 2016. Several case studies on social bots have already shown that bots help spread digital misinformation and promote violent content (Shao et al., 2018; Stella, Ferrara, & De Domenico, 2018). One could easily use a large number of bots to spread fake news, interfere with democracy, and destabilize society.

Although intentionality matters, I would argue that the fundamental problem lies in the lack of regulation and guidance for tools this powerful, tools able to alter human behavior effectively and on a massive scale. Sean Parker, the ex-president of Facebook, has said that social networks such as Facebook and Instagram prey on “a vulnerability in human psychology” to keep users from leaving their sites. The vulnerability he refers to has a biological basis. Research has indicated that Facebook “addiction” shares neural characteristics with substance and gambling addictions (Turel, He, Xue, Xiao, & Bechara, 2014). In addition, anatomical changes associated with social network addiction have been identified in imaging of users’ brains.

One could argue that Facebook is merely employing tactics that the advertising industry has used for centuries. After all, any interaction with the external world can change the nervous system and behavior. On the other hand, social media acts on individuals at a much larger scale, and much of how it does so is hidden from view. Given the excessive amount of information we willingly give out every day, its algorithms can capture our needs more accurately and hook us more effectively. Currently, there is no clear legal boundary between promoting a product or an idea and secretly manipulating someone’s mind.

This is not to say that we should ban all experiments on social media. Online platforms do present many advantages over the laboratory for understanding human behavior. For instance, the large number of subjects in online studies provides much stronger statistical power (see the sketch following this paragraph), and bots can create a more natural yet well-controlled experimental environment. With large amounts of data and advancing machine learning techniques, some Twitter studies have focused on diagnosing and quantifying mental illnesses such as depression and post-traumatic stress disorder (Coppersmith et al., 2014; also see a previous Neuroethics blog post on how social media can revolutionize mental health). On the bright side, there has been increased attention to the design and ethics of experimentation using bots (Krafft et al., 2017). Ideally, IRB oversight should be extended to protect the rights and welfare of human research subjects in social media experiments. Alternatively, an independent, non-profit association could supervise the design and execution of these studies, even those housed in for-profit companies. In addition to regulation, users should be given the option to opt out of certain or all types of experiments at any point. Debriefing should also be mandatory, to minimize any potential harm to the subjects.
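
To make the statistical-power point concrete, the sketch below uses the statsmodels power module to ask what standardized effect size (Cohen’s d) a two-group study can detect with 80% power; the group sizes are arbitrary illustrations, not figures from any study cited here.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Smallest detectable effect size at alpha = .05 and 80% power,
# for a typical lab sample versus a platform-scale sample.
for n_per_group in (50, 300_000):
    d = analysis.solve_power(nobs1=n_per_group, alpha=0.05, power=0.8,
                             ratio=1.0, alternative="two-sided")
    print(f"n = {n_per_group:>7,} per group -> detectable d ≈ {d:.4f}")
```

With hundreds of thousands of participants per condition, even a vanishingly small shift in expressed emotion becomes statistically detectable, which is exactly why platform-scale manipulations warrant scrutiny even when their per-person effects look tiny.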

Social media has become an extension of the self: it provides platforms for people to record important memories, connect with others, and express their thoughts. With proper supervision, both research teams at social media companies and scientists at academic institutions could be held accountable for their research while still benefiting from this novel format of experimentation.

________________


Yunmiao Wang is a PhD student in the Neuroscience Program at Emory University. She studies the functional roles of brain regions in controlling movement in the Jaeger Lab.




References:
  1. Coppersmith, G., Dredze, M., & Harman, C. (2014). Quantifying mental health signals in Twitter. In Proceedings of the Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality (pp. 51–60). ACL.
  2. Krafft, P. M., Macy, M., & Pentland, A. S. (2017). Bots as virtual confederates: Design and ethics. In Proceedings of CSCW ’17 (pp. 183–190).
  3. Hatfield, E., Cacioppo, J. T., & Rapson, R. L. (1993). Emotional contagion. Current Directions in Psychological Science, 2(3), 96–100.
  4. Mønsted, B., Sapiezynski, P., Ferrara, E., & Lehmann, S. (2017). Evidence of complex contagion of information in social media: An experiment using Twitter bots. PLoS ONE, 12(9), e0184148. doi:10.1371/journal.pone.0184148
  5. Shao, C., Ciampaglia, G. L., Varol, O., Yang, K. C., Flammini, A., & Menczer, F. (2018). The spread of low-credibility content by social bots. Nature Communications, 9(1), 4787. doi:10.1038/s41467-018-06930-7
  6. Stella, M., Ferrara, E., & De Domenico, M. (2018). Bots increase exposure to negative and inflammatory content in online social systems. Proceedings of the National Academy of Sciences, 115(49), 12435–12440. doi:10.1073/pnas.1803470115
  7. Turel, O., He, Q., Xue, G., Xiao, L., & Bechara, A. (2014). Examination of neural systems sub-serving Facebook “addiction”. Psychological Reports, 115(3), 675–695. doi:10.2466/18.PR0.115c31z

Want to cite this post?

Wang, Y. (2019). Social Media as a Neuroprosthetic for Mind and Emotion. The Neuroethics Blog. Retrieved on , from
http://www.theneuroethicsblog.com/2019/06/social-media-as-neuroprosthetic-for.html
