Weaponized Health Communication: Twitter Bots and Russian Trolls Amplify the Vaccine Debate

The George Washington University (Broniatowski, Qi, Alkulaib); University of Maryland, College Park (Jamison, Quinn); Johns Hopkins University (Chen, Benton, Dredze)
"Whereas bots that spread malware and unsolicited content disseminated antivaccine messages, Russian trolls promoted discord. Accounts masquerading as legitimate users create false equivalency, eroding public consensus on vaccination."
Despite their potential to enable dissemination of factual information, social media can be abused to spread harmful health content, including unverified and false information about vaccines. Such health misinformation may be promulgated by "bots" (accounts that automate content promotion) and "trolls" (individuals who misrepresent their identities with the intention of promoting discord). Amplification is an online disinformation strategy that uses bots and trolls to create an impression of false equivalence or consensus. This paper reports the results of a retrospective observational study that sought to understand what role, if any, bots and trolls play in promoting content related to vaccination. The investigation builds on research showing that, for people already inclined to distrust vaccines (the vaccine hesitant), exposure to a large volume of antivaccine tweets can increase their hesitancy.
Using a set of 1,793,690 tweets collected from July 14, 2014, through September 26, 2017, the researchers quantified the impact of known and suspected Twitter bots and trolls in amplifying polarising and antivaccine messages. This analysis is supplemented by a qualitative study of #VaccinateUS, a Twitter hashtag designed to sow discord by using vaccination as a political wedge issue in the United States (US). #VaccinateUS tweets were uniquely associated with Russian troll accounts linked to the Internet Research Agency, a company backed by the Russian government that specialises in online influence operations.
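At the data-handling level, this selection step amounts to scanning a tweet corpus for the hashtag and for accounts on a known-troll list. The sketch below illustrates the idea in Python; the file format, field names, and account IDs are assumptions for illustration, since the paper does not describe its pipeline at this level of detail.

```python
import json

# Hypothetical IRA-linked account IDs; the study's actual troll list came
# from published identifications, not from this placeholder set.
TROLL_IDS = {"123456", "789012"}

def select_tweets(path):
    """Scan a JSON-lines tweet corpus for #VaccinateUS tweets and for
    tweets posted by accounts on the known-troll list."""
    hashtag_hits, troll_hits = [], []
    with open(path) as f:
        for line in f:
            tweet = json.loads(line)
            if "#vaccinateus" in tweet.get("text", "").lower():
                hashtag_hits.append(tweet)
            if tweet.get("user_id") in TROLL_IDS:
                troll_hits.append(tweet)
    return hashtag_hits, troll_hits
```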
After analysing 900 tweets determined to belong to either bots or trolls, the researchers found that, compared with average users, Russian trolls (χ²(1)=102.0; P<.001), sophisticated bots (χ²(1)=28.6; P<.001), and "content polluters" (i.e., accounts that disseminate malware and unsolicited content) (χ²(1)=7.0; P<.001) tweeted about vaccination at higher rates. Content polluters, for example, shared antivaccine messages 75% more often than average Twitter users. Whereas content polluters posted more antivaccine content (χ²(1)=11.18; P<.001), Russian trolls amplified both sides. Accounts whose type could not be identified were more polarised (χ²(1)=12.1; P<.001) and more antivaccine (χ²(1)=35.9; P<.001). Analysis of the Russian troll hashtag showed that its messages were more political and divisive.
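The χ²(1) statistics above come from comparing rates of vaccine-related tweeting between account types. For readers unfamiliar with the test, here is a minimal sketch of that kind of comparison in Python with SciPy; the counts are hypothetical placeholders, not the study's data.

```python
# Minimal sketch of a chi-square test of independence, as reported above.
# The counts below are hypothetical, not taken from the paper.
from scipy.stats import chi2_contingency

# 2x2 contingency table: rows = account type (Russian troll vs average user),
# columns = tweet topic (vaccine-related vs other).
table = [
    [120, 880],  # hypothetical troll tweets: vaccine-related vs other
    [40, 960],   # hypothetical average-user tweets: vaccine-related vs other
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.1f}, P = {p:.3g}")
```

A significant result indicates that topic is not independent of account type, i.e., that the account type tweets about vaccination at a different rate than expected by chance.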
Qualitative findings are also shared regarding #VaccinateUS. Again, posts were not uniformly pro- or antivaccine: 43% favoured vaccination, 38% opposed it, and 19% presented neutral vaccination content. As might be expected of individuals writing in a second language, the tweets featured unnatural word choices and irregular phrasing, spelling, and punctuation. The messages were also more explicitly tied to US politics, featuring emotionally charged words related to freedom and constitutional rights, and the Russian disinformation tended to speak in generalities rather than local specifics. Russian tweets also included provocative or open-ended questions designed to prolong the discussion and, again, amplify the debate and the debaters. Yet survey data show broad consensus on the efficacy of vaccines in the general population, so the picture of controversy the tweets paint serves only to reinforce divisions and disagreements that, in many cases, do not actually exist.
As the researchers explain, the aim may not be to use disease outbreaks as a weapon so much as to find a new avenue for destabilising trust in democratic processes.
In short, the results "suggest that Twitter bots and trolls have a significant impact on online communications about vaccination." As noted above, malicious online behaviour varies by account type and, as suggested here, can be addressed with different strategies. For example, accounts known to distribute malware and commercial content are more likely to promote antivaccine messages, suggesting that antivaccine advocates may use preexisting bot networks to promote their agenda. These accounts may also use the compelling nature of antivaccine content as clickbait to drive up advertising revenue and expose users to malware. When faced with such content, public health communications officials may consider emphasising that the source's credibility is dubious and that users exposed to it may be more likely to encounter malware. In fact, antivaccine content may increase the risk of infection by both computer and biological viruses.
Beyond attempting to prevent bots from spreading messages over social media, public health practitioners are advised to focus on combating the messages themselves while not feeding the trolls. The researchers call this a ripe area for future research, which might include emphasising that a significant proportion of antivaccine messages are organised "astroturf" (i.e., coordinated rather than grassroots) and crafting other bottom-line messages that put antivaccine content in its proper context.
American Journal of Public Health: e1-e7. doi:10.2105/AJPH.2018.304567.