By Jamie Bailey

Ever shared an article on social media after reading only its headline? Jamie Bailey of Ledger Bennett explains that slowing down can be key to making meaningful content.

“Polar bears face starvation threat as ice melts.”

What’s the point of a headline? To give the newspaper reader a clear picture of an event.

That’s a good newspaper headline because the message has been shared concisely and clearly. You don’t really need to know anything else. You can infer that the melting ice results in a lack of food for polar bears. It doesn’t take much work.

But there’s a big difference between newspapers reporting a factual story and the kind of thing we tend to see in the marketing articles all over our LinkedIn feeds. Polar bears starving is one thing, a deep dive into the transformative power of AI-driven omnichannel marketing is quite another.

Unfortunately, we’re all guilty of reading a headline and assuming we know what the rest of the content will say – and that affects how we read it, if we read it at all. And we’re just as guilty of forming opinions based on those initial assumptions.

It’s the same with B2B content. We see a snappy headline like “AI-driven omnichannel marketing is the future of B2B marketing” and share it on social media, without really knowing what the content is about.

Before you know it, there’s a ripple of “AI-driven omnichannel marketing is the future of B2B marketing” posts on social media from people who couldn’t tell you the first thing about omnichannel marketing – or all the other considerations and caveats that come with it.

And that’s a dumb thing for us to do.

Think slowly to avoid wrong conclusions

Many compelling stories are just waiting to be heard. But to be able to dive into world-changing arguments, we first need to get past the clickbait world of headlines.

Because some ideas need several paragraphs, not 70 characters.

So why do we often pay more attention to compelling headlines than the content that comes after?

Thankfully, it’s not our fault for thinking this way.

In Thinking, Fast and Slow, Daniel Kahneman outlines two systems of thought. System one (thinking fast) is responsible for our intuitive knowledge and the split-second decision-making we don’t even notice taking place. System two (thinking slow) is responsible for deeper, more deliberate, more active thought and decision-making.

But system two is notoriously lazy. If it can leave the heavy lifting to system one, it will.

The problem with system one? Its ability to map stored knowledge onto new events leads to a tendency to jump to conclusions. And they aren’t always right.

Deciding “ice melts” means fewer food sources for polar bears – and less food for polar bears means a heightened risk of starvation – is an example of our system one jumping to a correct conclusion.

But deciding “AI-driven omnichannel marketing is the future of B2B marketing” means that all you need to succeed in 2024 is some more AI-driven omnichannel marketing – whatever that means – and you can ditch everything else?

That’s clearly a bit dumb.

And yet, that’s what you might end up thinking if you scour LinkedIn posts re-sharing the article.

It’s not all bad news

The good news is – it isn’t all bad. I’m not condemning every single marketer in existence. Consider this more of a rallying cry to engage your system two brain a bit more and take the time to properly think about what the experts in our industry are really trying to tell us.

Think deeper. Think slower. Stop taking things at face value.

It won’t end world hunger.

But it might end a LinkedIn feed full of know-nothings.

Feature Image Credit: Ian Maina via Unsplash

Sourced from The Drum

Major brands are paying for ads on these sites and funding the latest wave of clickbait, according to a new report.

This article is from The Technocrat, MIT Technology Review’s weekly tech policy newsletter about power, politics, and Silicon Valley.

We’ve heard a lot about AI risks in the era of large language models like ChatGPT (including from me!)—risks such as prolific mis- and disinformation and the erosion of privacy. Back in April, my colleague Melissa Heikkilä also predicted that these new AI models would soon flood the internet with spam and scams. Today’s story explains that this new wave has already arrived, and it’s incentivized by ad money.

People are using AI to quickly spin up junk websites in order to capture some of the programmatic advertising money that’s sloshing around online, according to a new report by NewsGuard, exclusively shared with MIT Technology Review. That means that blue chip advertisers and major brands are essentially funding the next wave of content farms, likely without their knowledge.

NewsGuard, which rates the quality of websites, found over 140 major brands advertising on sites that use AI-generated text it considers “unreliable,” and the ads come from some of the most recognized companies in the world. Ninety percent of the ads from major brands were served through Google’s ad technology, despite the company’s own policies prohibiting sites from placing Google-served ads on pages with “spammy automatically generated content.”

The ploy works because programmatic advertising allows companies to buy ad spots on the internet without human oversight: algorithms bid on placements to optimize the number of relevant eyeballs likely to see that ad. Even before generative AI entered the scene, around 21% of ad impressions were taking place on junk “made for advertising” websites, wasting about $13 billion each year.
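To make the mechanism concrete, here is a toy sketch of the second-price auction logic commonly used in programmatic advertising. This is purely illustrative (no real exchange works from a plain dict of bids); the point is what the bidding step omits.

```python
# Toy second-price auction: a common mechanism in programmatic advertising.
# Purely illustrative, not any real ad exchange's code.

def run_auction(bids: dict) -> tuple:
    """Highest bidder wins the impression and pays the runner-up's price."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    # With a single bidder, the clearing price is their own bid.
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

# Note what is absent: nothing here inspects the page hosting the
# impression, which is how "made for advertising" junk sites collect
# real ad money from real brands.
```

The telling detail is the omission: nothing in the bidding step checks the publisher, so oversight has to happen elsewhere in the pipeline (or, as the report shows, not at all).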

Now, people are using generative AI to make sites that capture ad dollars. NewsGuard has tracked over 200 “unreliable AI-generated news and information sites” since April 2023, and most seem designed to profit from advertising money that often comes from reputable companies.

NewsGuard identifies these websites by using AI to check whether they contain text that matches the standard error messages from large language models like ChatGPT. Those flagged are then reviewed by human researchers.
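A first-pass filter of the kind described can be surprisingly simple. The sketch below is my own simplified illustration, not NewsGuard's actual tooling, and the phrase list is an assumption for demonstration: scan page text for boilerplate refusal messages that large language models are known to emit.

```python
# Simplified sketch of the first-pass check described above; the phrase
# list is illustrative, not NewsGuard's actual criteria.

LLM_BOILERPLATE = [
    "as an ai language model",
    "i cannot fulfill this request",
    "my knowledge cutoff",
]

def flag_for_review(page_text: str) -> bool:
    """Return True if the page contains telltale LLM error boilerplate.

    As the article notes, flagged pages still go to human researchers;
    this check only surfaces candidates.
    """
    lowered = page_text.lower()
    return any(phrase in lowered for phrase in LLM_BOILERPLATE)
```

A site operator who pastes model output without proofreading leaves these phrases behind, which is what makes such a crude string match a useful triage step before human review.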

Most of the websites’ creators are completely anonymous, and some sites even feature fake, AI-generated creator bios and photos.

As Lorenzo Arvanitis, a researcher at NewsGuard, told me, “This is just kind of the name of the game on the internet.” Often, perfectly well-meaning companies end up paying for junk—and sometimes inaccurate, misleading, or fake—content because they are so keen to compete for online user attention. (There’s been some good stuff written about this before.)

The big story here is that generative AI is being used to supercharge this whole ploy, and it’s likely that this phenomenon is “going to become even more pervasive as these language models become more advanced and accessible,” according to Arvanitis.

And though we can expect it to be used by malign actors in disinformation campaigns, we shouldn’t overlook the less dramatic but perhaps more likely consequence of generative AI: huge amounts of wasted money and resources.

What else I’m reading

  • Chuck Schumer, the Senate majority leader in the US Congress, unveiled a plan for AI regulation in a speech last Wednesday, saying that innovation ought to be the “North Star” in legislation. President Biden also met with some AI experts in San Francisco last week, in another signal that regulatory action could be around the corner, but I’m not holding my breath.
  • Political campaigns are using generative AI, setting off alarm bells about disinformation, according to this great overview from the New York Times. “Political experts worry that artificial intelligence, when misused, could have a corrosive effect on the democratic process,” reporters Tiffany Hsu and Steven Lee Myers write.
  • Last week, Meta’s oversight board issued binding recommendations about how the company moderates content around war. The company will have to provide additional information about why material is left up or taken down, preserve anything that documents human rights abuses, and share that documentation with authorities when appropriate. Alexa Koenig, the executive director of the Human Rights Centre, wrote a sharp analysis for Tech Policy Press explaining why this is actually a pretty big deal.

What I learned this week

The science about the relationship between social media and mental health for teens is still pretty complicated. A few weeks ago, Kaitlyn Tiffany at the Atlantic wrote a really in-depth feature, surveying the existing, and sometimes conflicting, research in the field. Teens are indeed experiencing a sharp increase in mental-health issues in the United States, and social media is often considered a contributing factor to the crisis.

The science, however, is not as clear or illuminating as we might hope, and just exactly how and when social media is damaging is not yet well established in the research. Tiffany writes that “a decade of work and hundreds of studies have produced a mixture of results, in part because they’ve used a mixture of methods and in part because they’re trying to get at something elusive and complicated.” Importantly, “social media’s effects seem to depend a lot on the person using it.”

Sourced from MIT Technology Review

Brands using influencer marketing for content likely had the best of intentions when they began.

But like much of the internet, things went south. Look at Mark Zuckerberg’s altruistic goal to create a more open and connected world with Facebook. Today, Facebook has to hire thousands of screeners to monitor its Live product — weeding out low-quality clickbait articles, fake news, and worthless, sometimes extremely heinous video content.

The same thing happened to Twitter. As a result of an ongoing assault of abusive, bullying, and harassing tweets, Twitter too is on a hiring spree, onboarding hundreds of engineers to help it automate the identification and removal of low-quality or abusive content.

Like much of the internet and our social media news feeds, influencer marketing started off with the best intentions: Brands hired the instafamous to help them promote their products. Sadly, the industry has effectively devolved into one similar to clickbait advertising, an industry filled with publishers that monetize their content through shocking headlines, while the instafamous monetize their large followings.

Instead of click-baity blogger titles like “How to Quit Your Job, Move to Paradise and Get Paid to Change the World,” the instafamous now lure their followers with the promise of glamour and the “good life,” as long as they “Like” their incessant updates and buy the products they’re pushing.

This is bad news for brands that have embraced influencers to create “authentic conversations” with their fans and customers.

Brands continue to get eviscerated almost daily as a result of staged influencer advertising campaigns gone wrong. Why? Because consumers know better and can easily see through the inauthentic.

A bad business model

These repeated failed attempts are why the Federal Trade Commission has been forced to remind the industry, repeatedly, of its new rules regarding disclosure requirements. And whenever the FTC has to step into an industry and institute regulations to protect consumers, you know there’s a real problem. The FTC has better things to do with its time than monitoring IG posts from Bella Hadid and Emily Ratajkowski, making sure they’re following the rules and including the #ads hashtag on their posts.

A David Ogilvy quote that hopefully helps drive home the point:

“Viewers have a way of remembering the celebrity while forgetting the product. I did not know this when I paid Eleanor Roosevelt $35,000 to make a commercial for margarine. She reported that her mail was equally divided. ‘One-half was sad because I had damaged my reputation. The other half was happy because I had damaged my reputation.’ Not one of my proudest memories.”

The future of marketing isn’t in engaging influencers for over-staged, over-styled, made-to-look-authentic branded content. It’s an incredibly deceptive tactic, sure to follow a downward spiral from initial excitement to the ultimate letdown.

Busier than ever and armed with the smartest of smartphones, consumers are also smarter than ever. As a result, they demand straightforward, authentic conversations with their friends and the brands they love. They also demand full transparency.

Unfortunately, influencer marketing is manipulative and misrepresentative, and has effectively become the clickbait of marketing. And just like the major social networks having to deal with the consequences of clickbait in their newsfeeds, brands will need to address the inherent issues with influencer marketing.

Rather than engaging celebrities and the instafamous to engineer campaigns, brands should be engaging real fans and customers for authentic brand stories — using those stories to help drive like-minded consumers down the path to purchase.

The future of marketing must be one where brands look to their real fans and customers for less (yet better) branded content for their marketing campaigns. It’s a natural, authentic and, most importantly, honest approach to branded content creation. And it’s exactly what fans and customers need to stay engaged for the long term.

Unlike clickbait.

Image Credit: istockphoto  

Sourced from Ad Age