After an election year marked by heated exchanges and the distribution of fake news, Twitter bots earned a bad reputation. But not all bots are bad, suggests a new study co-authored by Emilio Ferrara, a USC Information Sciences Institute computer scientist and a research assistant professor at the USC Viterbi School of Engineering's Department of Computer Science.
In a large-scale experiment designed to analyze the spread of information on social networks, Ferrara and a team from the Technical University of Denmark deployed a network of algorithm-driven Twitter accounts, or social bots, programmed to spread positive messages on Twitter.
"We found that bots can be used to run interventions on social media that trigger or foster good behaviors," says Ferrara, whose previous research focused on the proliferation of bots in the election campaign.
But the experiment also revealed another intriguing pattern: information is much more likely to go viral when people are exposed to the same piece of information multiple times through multiple sources.
"This milestone shatters a long-held belief that ideas spread like an infectious disease, or contagion, with each exposure resulting in the same probability of infection," says Ferrara.
"Now we have seen empirically that when you are exposed to a given piece of information multiple times, your chances of adopting this information increase every time."
To reach these conclusions, the researchers first developed a dozen positive hashtags, ranging from health tips to fun activities, such as encouraging users to get the flu shot, high-five a stranger and even Photoshop a celebrity's face onto a turkey at Thanksgiving.
Then they designed a network of 39 bots to deploy these hashtags in a synchronized manner to 25,000 real followers between October and December 2016.
Each bot automatically recorded when a target user retweeted intervention-related content, along with every exposure that had taken place before the retweet. Several hashtags received more than one hundred retweets and likes, says Ferrara.
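As a rough illustration of that bookkeeping (a hypothetical sketch, not the team's actual pipeline; the event log and field names are invented), one can replay a user's timeline and count the exposures that preceded the first retweet:

    from datetime import datetime

    # Hypothetical event log for one target user: (timestamp, event type).
    # Real bot logs would also carry tweet IDs, hashtags, and source accounts.
    events = [
        (datetime(2016, 10, 3, 9, 0), "exposure"),
        (datetime(2016, 10, 4, 14, 30), "exposure"),
        (datetime(2016, 10, 6, 11, 15), "exposure"),
        (datetime(2016, 10, 6, 11, 20), "retweet"),
    ]

    def exposures_before_first_retweet(events):
        # Count exposures that occur before the user's first retweet.
        count = 0
        for _, kind in sorted(events):
            if kind == "retweet":
                return count
            if kind == "exposure":
                count += 1
        return None  # the user never retweeted

    print(exposures_before_first_retweet(events))  # prints 3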
"We also saw that every exposure increased the probability of adoption - there is a cumulative reinforcement effect," says Ferrara.
"It seems there are some cognitive mechanisms that reinforce your likelihood to believe in or adopt a piece of information when it is validated by multiple sources in your social network."
This mechanism could explain, for example, why you might take a single friend's movie recommendation with a grain of salt, yet grow steadily more likely to see the movie as each additional friend makes the same recommendation.
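One subtlety is worth separating out. Even if each recommendation had the same fixed persuasive power, the cumulative odds of seeing the movie would still rise with every additional friend; complex contagion makes the stronger claim that each repeated recommendation is itself more persuasive than the last. The arithmetic below uses a made-up 20 percent per-recommendation probability to show the baseline effect:

    # Cumulative chance of acting after k recommendations when each one
    # independently persuades you with the same probability p (simple contagion).
    p = 0.2
    for k in range(1, 5):
        print(f"{k} recommendation(s): {1 - (1 - p) ** k:.3f}")
    # 0.200, 0.360, 0.488, 0.590: rising odds even without reinforcement.
    # Under complex contagion, p itself grows with each repetition,
    # so the curve climbs faster still.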
Aside from revealing the hidden dynamics that drive human behavior online, this discovery could also improve how positive intervention strategies are deployed on social networks in many scenarios, including public health announcements for disease control or emergency management in the wake of a crisis.
"The common approach is to have one broadcasting entity with many followers, but this study implies that it would be more effective to have multiple, decentralized bots share synchronized content," says Ferrara.
He adds that many communities are isolated from certain accounts due to Twitter's echo chamber effect: social media users tend to be exposed to content from those whose views match their own.
"What if there is a health crisis and you don't follow the Centers for Disease Control and Prevention account? By taking a grassroots approach, we could break down the silos of the echo chamber for the greater good," says Ferrara.
###
The study, titled "Evidence of complex contagion of information in social media: An experiment using Twitter bots," was published in PLOS ONE on Sept. 22, 2017.