By Ilya Pestov.

Back in October, I wrote a piece on Medium that covered the numbers behind some of today’s top social media networks.

From usage numbers to engagement statistics, it was incredible to see just how impactful networks such as Facebook, Twitter, and Instagram have become. For example, not only is Facebook home to 1.23 billion daily active users on average, but those users come from all over the world — with 85.2% residing outside of the U.S. and Canada. That’s a crazy level of connectivity.

As I put together the post, it became obvious just how fast these networks were growing, and I thought a lot about how hard it is to keep up with all of these changes, especially for marketers. To make things a little easier to wrap your head around, I put together a simplified list of some standout statistics for Facebook, Twitter, LinkedIn, Instagram, and Pinterest. Check them out below if you’re looking for some guidance for your social media strategy this year.

34 Stats to Help You Plan Your Social Media Strategy on Facebook, Twitter, Instagram & More

Facebook

Twitter

  • Tweets with images receive 18% more clickthroughs, 89% more Likes, and 150% more retweets.
  • 60% of consumers expect brands to respond to their query within the hour, but the average brand response time is 1 hour 24 minutes.
  • Ideal tweet length: 100 characters.
  • Clickthrough rate is highest on Wednesdays.
  • Tweets that don’t include a # or @ mention generate 23% more clicks. When the tweet is focused on driving an app install, forgoing a # or @ mention increases clicks by 11%. But according to Quicksprout, tweets with hashtags get 2X more engagement: clicks, retweets, favorites, and replies.

LinkedIn

Instagram

  • On average, people miss 70% of their feeds.
  • The average engagement rate across all Instagram posts is 1.1%, down from 4.2% in 2014 and 2.2% in 2015.
  • Images with a single dominant color generate 17% more Likes than images with multiple dominant colors. Images with a high amount of negative space generate 29% more Likes than those with minimal negative space. Images featuring blue as the dominant color generate 24% more Likes than images that are predominantly red.
  • Photos showing faces get 38% more Likes than photos not showing faces.
  • Photos see more engagement than videos on Instagram.
  • The red heart is the most frequently shared emoji on Instagram; it is shared 79% more often than the next most popular symbol, the smiling face with heart eyes.
  • 50% of captions and comments on Instagram contain at least one emoji.
  • The most common posting frequency for brands on Instagram is 11–20 times per month.
  • Instagram audiences are more engaged on Mondays and Thursdays at 2 a.m., 8–9 a.m., and 5 p.m.

Pinterest

By Ilya Pestov

Sourced from HubSpot

 

By Emilio Ferrara.

At least four million election-related tweets were sent during the campaign, posted by more than 400,000 social bots.

Key to democracy is public engagement – when people discuss the issues of the day with each other openly, honestly and without outside influence. But what happens when large numbers of participants in that conversation are biased robots created by unseen groups with unknown agendas? As my research has found, that’s what has happened this election season.

Since 2012, I have been studying how people discuss social, political, ideological and policy issues online. In particular, I have looked at how social media is abused for manipulative purposes.

It turns out that much of the political content Americans see on social media every day is not produced by human users. Rather, about one in every five election-related tweets from September 16 to October 21 was generated by computer software programs called “social bots.”

These artificial intelligence systems can be rather simple or very sophisticated, but they share a common trait: they are set to automatically produce content following a specific political agenda determined by their controllers, who are nearly impossible to identify. These bots affected the online discussion around the presidential election, shaping leading topics and influencing how online activity was perceived by the media and the public.

 

How active are they?

The operators of these systems could be political parties, foreign governments, third-party organisations, or even individuals with vested interests in a particular election outcome. Their work amounts to at least four million election-related tweets during the period we studied, posted by more than 400,000 social bots.

That’s at least 15 per cent of all the users discussing election-related issues. It’s more than twice the overall concentration of bots on Twitter – which the company estimates at 5 to 8.5 per cent of all accounts.

To determine which accounts are bots and which are humans, we use Bot Or Not, a publicly available bot-detection service developed in collaboration with colleagues at Indiana University. Bot Or Not uses advanced machine learning algorithms to analyse multiple cues, including Twitter profile metadata, the content and topics posted by the account under inspection, the structure of its social network, the timeline of activity and much more. After considering more than 1,000 factors, Bot Or Not generates a likelihood score that the account under scrutiny is a bot. Our tool makes this determination with 95 per cent accuracy.
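
Bot Or Not’s actual models and 1,000-plus features aren’t reproduced here, but the recipe it follows – train a classifier on account-level cues, then output a bot-likelihood score – can be sketched in a few lines of Python. Everything below, from the four features to the toy training set, is a hypothetical stand-in rather than the real pipeline.

```python
# Minimal sketch of feature-based bot detection in the spirit of the
# approach described above. The four features and the toy training set
# are hypothetical stand-ins: Bot Or Not itself weighs more than 1,000
# factors spanning profile metadata, content, network structure and
# activity timelines.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: [follower/friend ratio, tweets per day,
#            account age in days, fraction of tweets containing URLs]
X_train = np.array([
    [0.05, 180.0,   30, 0.95],  # bot-like: young, hyperactive, link-heavy
    [0.10, 240.0,   15, 0.90],  # bot-like
    [1.50,   4.0, 2400, 0.20],  # human-like: older account, moderate pace
    [0.80,   7.0, 1800, 0.10],  # human-like
])
y_train = np.array([1, 1, 0, 0])  # 1 = bot, 0 = human

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Score an unseen account; predict_proba yields a bot-likelihood score
# analogous to the one Bot Or Not reports.
account = np.array([[0.07, 200.0, 20, 0.92]])
print(f"Bot likelihood: {clf.predict_proba(account)[0, 1]:.2f}")
```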

There are many examples of bot-generated tweets supporting candidates or attacking their opponents. The effectiveness of social bots depends on the reactions of actual people. We learned that people were not able to ignore, or develop a sort of immunity toward, the bots’ presence and activity. Instead, we found that most human users can’t tell whether a tweet is posted by another real user or by a bot. We know this because bots are retweeted at the same rate as humans. Retweeting bots’ content without first verifying its accuracy can have real consequences, including spreading rumors, conspiracy theories or misinformation.
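
That comparison boils down to grouping tweets by whether their author was flagged as a bot and checking that the mean retweet counts are similar. A minimal sketch, with an invented dataset in place of the labelled election data:

```python
# Sketch of the retweet-rate comparison described above: group tweets by
# whether the author is a bot and compare mean retweet counts. The frame
# below is invented, standing in for the labelled election dataset.
import pandas as pd

tweets = pd.DataFrame({
    "author_is_bot": [True, True, True, False, False, False],
    "retweet_count": [12, 0, 9, 8, 15, 0],
})

rates = tweets.groupby("author_is_bot")["retweet_count"].mean()
print(rates)
# Comparable means for the two groups suggest that human users amplify
# bot-generated content as readily as human-generated content.
```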

Some of these bots are very simple, and just retweet content produced by human supporters. Other bots, however, produce new tweets, jumping in the conversation by using existing popular hashtags (for instance, #NeverHillary or #NeverTrump). Real users who follow these Twitter hashtags will be exposed to bot-generated content seamlessly blended with the tweets produced by other actual people.

Bots produce content automatically, and therefore at a very fast and continuous rate, which made them a consistent and pervasive part of the online discussion throughout the campaign. As a result, they were able to build significant influence, collecting large numbers of followers and having their tweets retweeted by thousands of humans.

Our investigation into these politically active social bots also uncovered information that gives us a more nuanced understanding of them. One lesson was that bots are biased by design. For example, Trump-supporting bots systematically produced overwhelmingly positive tweets in support of their candidate. Previous studies showed that this systematic bias alters public perception. Specifically, it creates the false impression that there is grassroots, positive, sustained support for a certain candidate.

Location provided another lesson. Twitter provides metadata about the physical location of the device used to post a certain tweet.

By aggregating and analysing their digital footprints, we discovered that bots are not uniformly distributed across the United States; they are significantly overrepresented in some states, in particular southern states like Georgia and Mississippi. This suggests some bot operations may be based in those states.
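
The underlying aggregation is straightforward: count bot-authored tweets per state and normalise by that state’s total volume. A toy version with hypothetical records:

```python
# Illustrative sketch of the geographic aggregation described above:
# count bot-authored tweets per state and normalise by that state's
# total, to see where bots are overrepresented. Records are hypothetical.
from collections import Counter

tweets = [
    {"state": "GA", "is_bot": True},
    {"state": "GA", "is_bot": True},
    {"state": "GA", "is_bot": False},
    {"state": "MS", "is_bot": True},
    {"state": "CA", "is_bot": False},
    {"state": "CA", "is_bot": False},
]

total = Counter(t["state"] for t in tweets)
bots = Counter(t["state"] for t in tweets if t["is_bot"])

for state in sorted(total):
    share = bots[state] / total[state]  # Counter returns 0 for missing keys
    print(f"{state}: {share:.0%} of geotagged tweets were bot-authored")
```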

Also, we discovered bots can operate in multiple ways. For example, when they are not engaged in producing content supporting their respective candidates, bots can target their opponents. We discovered that bots pollute certain hashtags, like #NeverHillary or #NeverTrump, where they smear the opposing candidate.

These strategies leverage known human biases, in particular the fact that negative content travels faster on social media, as one of our recent studies demonstrated. We found that, in general, negative tweets are retweeted at a pace 2.5 times higher than positive ones. This, in conjunction with the fact that people are naturally more inclined to retweet content that aligns with their preexisting political views, results in the spreading of content that is often defamatory or based on unsupported, or even false, claims.
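
As a rough illustration of how such a retweet-pace ratio is computed, the sketch below labels each tweet’s sentiment and divides the negative group’s mean retweet count by the positive group’s. The keyword rule and the data are toys, deliberately constructed to land on the study’s 2.5 figure, standing in for a real sentiment classifier and the study’s corpus:

```python
# Toy illustration of the retweet-pace ratio: label each tweet's
# sentiment, then divide the mean retweet count of negative tweets by
# that of positive ones. The keyword rule stands in for a real sentiment
# classifier, and the data are constructed so the ratio lands at 2.5.
NEGATIVE_WORDS = {"crooked", "terrible", "corrupt", "dishonest", "sad"}

def is_negative(text: str) -> bool:
    return any(word in text.lower() for word in NEGATIVE_WORDS)

tweets = [
    {"text": "Crooked disaster, terrible candidate", "retweets": 25},
    {"text": "Great rally today, wonderful crowd",   "retweets": 10},
    {"text": "Corrupt and dishonest, sad!",          "retweets": 30},
    {"text": "Proud to support a strong leader",     "retweets": 12},
]

neg = [t["retweets"] for t in tweets if is_negative(t["text"])]
pos = [t["retweets"] for t in tweets if not is_negative(t["text"])]

ratio = (sum(neg) / len(neg)) / (sum(pos) / len(pos))
print(f"Negative tweets are retweeted {ratio:.1f}x as often as positive ones")
```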

It is hard to quantify the effects of bots on the actual election outcome, but it’s plausible to think they could affect voter turnout in some places. For example, some people may think there is so much local support for their candidate (or the opponent) that they don’t need to vote – even if what they’re seeing is actually artificial support provided by bots.

Our study hit the limits of what can be done today with computational methods to counter the problem of bots. Our ability to identify the bot masters is bound by technical constraints on recognizing patterns in their behavior.

Social media is acquiring increasing importance in shaping political beliefs and influencing people’s online and offline behavior. The research community will need to continue exploring these issues to make these platforms as safe from abuse as possible.

Emilio Ferrara is a research assistant professor of Computer Science at the University of Southern California. This article originally appeared on The Conversation.

By Emilio Ferrara

Sourced from WIRED