The application to the advertising industry is so obvious it is like a slap in the face with a wet fish.

By MediaStreet Staff Writers

Lately, social media has been all about heated exchanges and the distribution of fake news. And right in the thick of these skirmishes are Twitter bots. They have certainly earned themselves a bad reputation, tweeting on behalf of politicians and driving troll trains through the media landscape with abandon.

But not all bots are bad, according to boffins at USC's Information Sciences Institute. Computer scientist Emilio Ferrara undertook a large-scale experiment designed to analyse the spread of information on social networks. Ferrara teamed up with some Danish boffins from the Technical University of Denmark to deploy a network of "social bots," programmed to spread positive messages on Twitter.

“We found that bots can be used to run interventions on social media that trigger or foster good behaviours,” says Ferrara, whose previous research focused on the proliferation of bots in the U.S. election campaign.

But the experiment also revealed another intriguing pattern: information is much more likely to go viral when people are exposed to the same piece of information multiple times through multiple different sources. Says Ferrara, "This milestone shatters a long-held belief that ideas spread like an infectious disease, or contagion, with each exposure resulting in the same probability of infection. Now we have seen empirically that when you are exposed to a given piece of information multiple times, your chances of adopting this information increase every time."

To reach these conclusions, the researchers first developed a dozen positive hashtags, ranging from health tips to fun activities, such as encouraging users to get the flu shot, high-five a stranger and even Photoshop a celebrity's face onto a turkey at Thanksgiving. Then, they designed a network of 39 bots to deploy these hashtags in a synchronised manner to 25,000 real followers over a period running from October to December 2016.

Each bot automatically recorded when a target user retweeted intervention-related content and also each exposure that had taken place prior to retweeting. Several hashtags received more than one hundred retweets and likes. “We also saw that every exposure increased the probability of adoption – there is a cumulative reinforcement effect,” says Ferrara. “It seems there are some cognitive mechanisms that reinforce your likelihood to believe in or adopt a piece of information when it is validated by multiple sources in your social network.”

This mechanism could explain, for example, why you might take one friend’s movie recommendation with a grain of salt. But the probability that you will also see that movie increases cumulatively as each additional friend makes the same recommendation.
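The difference between the two models of spreading can be made concrete with a toy calculation. The sketch below is purely illustrative, not the researchers' actual methodology: under simple contagion, every exposure carries the same adoption probability, while under cumulative reinforcement each successive exposure is more persuasive than the last. The parameters `p0` (baseline probability) and `boost` (per-exposure increase) are made-up values chosen only to show the effect.

```python
def adoption_prob_simple(p, n):
    """Simple contagion: n independent exposures, each converting with the
    same probability p. Returns the chance of at least one adoption."""
    return 1 - (1 - p) ** n

def adoption_prob_cumulative(p0, boost, n):
    """Cumulative reinforcement (toy model): each exposure that fails to
    convert raises the probability that the next one will, by `boost`."""
    prob_not_adopted = 1.0
    p = p0
    for _ in range(n):
        prob_not_adopted *= (1 - p)
        p = min(1.0, p + boost)  # the next exposure is more persuasive
    return 1 - prob_not_adopted

# Three friends each recommend the same movie.
print(adoption_prob_simple(0.05, 3))            # same pull each time
print(adoption_prob_cumulative(0.05, 0.05, 3))  # each nudge counts for more
```

With three exposures, the cumulative model yields a noticeably higher adoption probability than the simple model at the same starting point, which is the shape of the effect the study reports: repeated validation from different sources compounds rather than merely repeats.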

This discovery could improve how positive intervention strategies are deployed in social networks in many scenarios, including public health announcements for disease control or emergency management in the wake of a crisis. The common approach is to have one broadcasting entity with many followers. But this study implies that it would be more effective to have multiple, decentralised bots share synchronised content.

Advertisers, mull this over. Bots can be your very best friend.