By Hope Horner • Edited by Chelsea Brown 

AI has stripped the cost and complexity out of video production. The result? An endless stream of content where attention, not output, becomes the true competition.

Key Takeaways

  • AI video allows anyone to produce polished content on demand. What once required crews, budgets and weeks of production can now be generated in minutes.
  • With an avalanche of professional-looking content, companies must pivot from competing on production quality to competing on authentic insight, genuine expertise and human connection.
  • Brands must use AI as a tool to amplify human creativity and understand that having something meaningful to say matters more than saying it beautifully.

AI video tools have crossed a threshold. What used to require crews, budgets and weeks of post-production can now happen in minutes. Text-to-video generators can create actual clips that replace live-action filming — no cameras, no sets, no talent needed. Every brand, startup and side hustle can flood social feeds with polished content that would have cost thousands just a year ago.

The result is an avalanche of video content most marketers aren’t ready for. And when everyone has access to infinite content creation, the bottleneck shifts to something much scarcer: human attention.

The great video inflation of 2025

Think about what happened when desktop publishing killed the printing industry’s pricing power. Suddenly, every business could create professional-looking brochures and flyers on demand. The market got flooded with mediocre design, but the cost advantages were too compelling to ignore. Printing companies that survived had to find new ways to add value beyond just putting ink on paper.

AI video is that moment for content marketing. When every solopreneur can generate Hollywood-quality product demos and every startup can create testimonial footage without actual customers, the video landscape inflates, and we’re not talking about a gradual shift. This is a supply shock.

The number of professional-looking videos published daily is already increasing by orders of magnitude. Marketing teams that were previously constrained by video budgets suddenly have access to unlimited content creation. The creative brief that once became one hero video now becomes 50 variations optimized for every platform, demographic and use case.

For marketers, this feels like winning the lottery. Unlimited content at near-zero marginal cost? What’s not to love?

But there’s a catch. When everyone has the same superpower, no one has an advantage.

Why this time is different

Previous waves of content democratization (think YouTube, smartphones or social media) expanded the pool of creators but didn’t eliminate production friction entirely. You still needed some combination of equipment, skill or time to create compelling video content. That friction acted as a natural quality filter.

AI video removes that filter in many ways. The barrier between having an idea and having a polished video is getting smaller and smaller. A text prompt becomes footage. A description becomes a testimonial. A concept becomes a commercial.

This creates what economists call a “lemons market” — when buyers can’t tell good products from bad at first glance, mediocre goods flood the market and crowd out the good ones. Your audience will face an unprecedented signal-to-noise problem. Professional-looking content will be everywhere, but most of it will have nothing meaningful to say.

The brands that understand this dynamic — and position themselves accordingly — will have a massive advantage over those caught off guard.

The coming brand extinction event

Here’s what most marketers aren’t seeing: AI video doesn’t just make content creation cheaper — it makes content forgettable. When every video looks professionally produced, none of them stand out visually. When everyone can create testimonials and product demos, the format itself loses credibility.

We’re heading toward a content landscape where production value becomes almost meaningless as a differentiator. The slick graphics, perfect lighting and smooth transitions that used to signal “professional brand” will be table stakes. Worse, they might even signal “generated content” to increasingly savvy audiences.

This shift will be brutal for brands that have built their entire content strategy around looking polished rather than saying something meaningful.

How to survive the content inflation

The companies that survive will be the ones that pivot from competing on production quality to competing on authentic insight, genuine expertise and human connection. Production quality will be a given; the content strategy is what will stand out.

This means treating AI video tools like what they actually are: incredibly powerful production assistants that still need direction, strategy and human judgment to create anything worth watching. The technology can generate and optimize the footage, but it can’t generate the insight that makes someone care.

Smart brands are already preparing for this shift. They’re investing more heavily in understanding their audiences, developing unique points of view and building authentic relationships that can’t be automated. They’re using AI to amplify their human creativity, not replace it.

Most importantly, they’re preparing for a world where having something meaningful to say matters more than saying it beautifully. Because when everyone can make beautiful content, the only competitive advantage left is having something worth saying.

The content inflation crisis isn’t coming — it’s already here. Early adopters are already flooding feeds with AI-generated content, and the volume is only going to increase. The brands that recognize this as an existential shift, not just a new tool to experiment with, will be the ones that survive.

Importantly, this conversation isn’t about whether AI video is good or bad. It’s about understanding that when production costs get lower, everything else about marketing changes. The rules, the strategies, the competitive advantages you’ve gotten used to — all of it gets rewritten.

Your choice is simple: Adapt to the new rules now, or get swept away by the brands that do.

By Hope Horner 

Hope Horner is a serial entrepreneur who built Lemonlight from her bedroom. She’s been named Inc.’s Top Female Founder (twice), landed on the Inc. 5000 list (seven times), and won 30+ awards. She writes about entrepreneurship with clarity, candor, and bite.

Sourced from Entrepreneur

By Greg Peters

AI is the biggest jolt of energy marketing has felt since the internet. Rather than fear it, smart operators will grab it and ride the wave.

In the Mad Men era of the 1960s, marketing lived in the boardroom, born from creative conversations and driven by strategy. The internet’s arrival in the mid-1990s flipped that world, pushing marketers from shaping big ideas to managing tactics like SEO, banner ads, pop-ups and content mills. Now AI is here, and the shifts feel constant. At a breakneck pace, it’s commoditizing once-core marketing tactics, doing the work so effectively that public opinion assumes machines can replace marketers.

Here’s where the pressure amps up: Clients and executives often don’t care how the work gets done, as long as it’s completed on time and within budget. You can manage revenue, risk, cost and cash flow however you see fit, as long as the numbers move in the right direction.

For some, that sounds terrifying and like a sure sign AI will decimate the creative process and eliminate jobs. But I’m here to tell you that this isn’t the end. You’re not going to lose your job to AI. But you could lose your job to someone who knows how to use it.

Creative Resistance And Adoption

You can see the resistance to AI playing out in the talent market. Countless writers have “open to work” on their LinkedIn profiles. The perceived value of writing has been eroded, hitting marketing especially hard. AI makes tactics easier to access, so agencies and professionals must demonstrate that their work drives outcomes beyond what a tool can produce.

Those of us using AI daily know marketing has never stopped being valuable. Agencies need to demonstrate their value through tangible results. Smart AI adoption combined with expertise delivers faster, lasting outcomes. Understandably, the resistance often comes from creatives who are hesitant to adopt new tools out of fear.

I’ve always been a late tech adopter, but even I use ChatGPT. I rely on it for decks, engagement plans and strategy documentation. If I’m embracing it, the debate is over. The only question now is how to use it well.

Real-World Disruption In Action

Examples already show what this looks like. At my agency, we built an internal AI we call DirectorGPT. It captures our team’s knowledge so anyone can get quick answers without waiting for a senior lead. It saves time, facilitates onboarding and provides a reliable knowledge base. At the same time, agencies are experimenting with platforms that help analyse performance and optimize campaigns faster than ever before.

The lesson isn’t that agencies have no future. In fact, it’s a call to recognize where humans add the most value. Agencies must determine where AI is most effective and where human creativity remains essential. AI can generate a first draft of an email or a landing page. It can even create long-form narrative content and develop a brand strategy. But it can’t replace human creativity.

Inspired marketing pulls from culture, art, literature and even the bizarre. Think about campaigns that feel strange, yet stick because they capture attention in ways no tool could predict: A fast food brand sparring with competitors on social media. A beverage upstart disrupting the bottled water market with unconventional tactics.

True creativity takes something from one corner of culture and combines it with something unrelated to reveal something new. AI can’t make those leaps because it works only with what already exists. Humans can. When creatives use AI for mundane work, we gain time to focus on originality.

AI is the ultimate yes-man. It will flatter you into failure. It’s never going to push back and stop you from publishing something you’ll regret. The person behind the keyboard must be able to distinguish between good and bad. If those skills erode, teams will generate endless stale content that inspires no one to click, read or buy.

The Playbook For Using AI Right

Winning marketers will be the ones who use AI purposefully. These are the moves worth making:

• Leverage AI for speed. Summarize data, prepare talking points and cut down on research time.

• Build stronger engagement plans. Use AI to connect client objectives with practical marketing moves.

• Prompt with purpose. Iterate to refine results, and keep a library of the best prompts.

• Gut-check outputs. Never accept AI at face value. Apply human taste, style and critical thinking.

• Shift your lens to outcomes. Don’t view AI solely as a cost savings tool. Use it to drive outcomes and stay ahead.

Punk Rock Lessons For The Future

For me, adopting AI feels like punk rock. Punk was about breaking the rules, but the best musicians knew the rules first. It’s the same with AI—you must understand how the work is done before you can rebuild it with these tools.

The fear surrounding AI is loud, but like every disruptive technology, the noise will fade as adoption becomes commonplace. Conversations that feel urgent today will sound outdated soon. The same thing happened with the fax machine, the printer and the internet. Each one faced scepticism before becoming standard. AI is following the same path, albeit at a faster pace.

When the Spanish brought horses to North America, the indigenous Plains people had never encountered them before. Within a few generations, they’d incorporated horses into their way of life. They took a foreign technology and used it to leap forward. That’s what humans do. We harness technology and bound forward with it.

The tools are here, and the tide is rising. Marketing isn’t disappearing. It’s about to get more demanding, more creative and more fun. Grab hold, ride the wave and own it.

Feature Image Credit: Getty

By Greg Peters

Find Greg Peters on LinkedIn. Visit Greg’s website.

Greg Peters is the president and founder of 4B Marketing, a full-service tech marketing agency based in Denver, CO. Read Greg Peters’ full executive profile here.

Sourced from Forbes

“We will not stop until beauty is a source of happiness.”

Personal care brand Dove has become known for its campaigns championing real people with real bodies, as exemplified by its shunning of TikTok ‘beauty’ filters. And now, the brand is targeting AI in the latest iteration of its decades-old Real Beauty campaign.

The brand announced this week that it will never use AI-generated imagery to represent “real bodies” in its ads. And in a powerful short film, it takes aim at the generic and unrealistic beauty standards depicted in images churned out by text prompts such as “the most beautiful woman in the world.”

Alessandro Manfredi, chief marketing officer at Dove, adds, “At Dove, we seek a future in which women get to decide and declare what real beauty looks like – not algorithms. As we navigate the opportunities and challenges that come with new and emerging technology, we remain committed to protect, celebrate, and champion Real Beauty. Pledging to never use AI in our communications is just one step. We will not stop until beauty is a source of happiness, not anxiety, for every woman and girl.”
Indeed, over the 20-year course of its Real Beauty campaign, Dove has repeatedly proven itself to be a force for good. From shunning AI to helping game developers code natural hair in an effort to increase diversity in video games, the brand’s inclusivity credentials continue to impress.

Feature Image Credit: Dove

By Daniel John

Daniel John is Senior News Editor at Creative Bloq. He reports on the worlds of art, design, branding and lifestyle tech (which often translates to tech made by Apple). He joined in 2020 after working in copywriting and digital marketing with brands including ITV, NBC, Channel 4 and more.

Sourced from CREATIVE BLOQ

As AI supplants conventional search engines, their loss of market share will change the digital ad landscape, says research firm Gartner.

A new report from research firm Gartner has some unsettling news for search engine giants like Google and Microsoft’s Bing. It predicts that as everyday net users become more comfortable with AI tech and incorporate it into their general net habits, chatbots and other agents will lead to a drop of 25 percent in “traditional search engine volume.” The search giants will then simply be “losing market share to AI chatbots and other virtual agents.”

One reason to care about this news is to remember that the search engine giants are really marketing giants. Search engines are useful, but Google makes money by selling ads that leverage data from its search engine. These ads are designed to convert to profits for the companies whose wares are being promoted. Plus, placing Google ads on a website is a revenue source that many other companies rely on, perhaps best known for being used by media firms. If AI upends search, then by extension it will upend current marketing practices too. And disrupted marketing norms mean that how you think about using online systems to market your company’s products will have to change as well.

AI already plays a role in marketing. Chatbots are touted as having copy generating skills that can boost small companies’ public relations efforts, but the tech is also having an effect inside the marketing process itself. An example of this is Shopify’s recent AI-powered Semantic Search system, which uses AI to sniff through the text and image data of a manufacturer’s products and then dream up better search-matching terms, so that merchants don’t miss out on customers searching for a particular phrase. But this is simply using AI to improve current search-based marketing systems.

AI: smart enough to steal traffic

More important is the notion that AI chatbots can “steal” search engine traffic. Think of how many of the queries you usually direct at Google (from basic stuff like “what’s 200 Fahrenheit in Celsius?” to more complex matters like “what’s the most recent games console made by Sony?”) could be answered by a chatbot instead. Typing those queries into ChatGPT or a system like Microsoft’s Copilot could mean they aren’t directed through Google’s labyrinthine search engine systems.

There’s also a hint that future web surfing won’t be as search-centric as it is now, thanks to the novel Arc app. Arc leverages search engine results as part of its answers to user queries, but the app promises to do the boring bits of web searching for you, neatly curating the answers above more traditional search engine results. AI “agents” are another emergent form of the tech that could impact search: AI systems that are able to go off and perform a complex sequence of tasks for you, like searching for some data and analysing it automatically.

Google, of course, is savvy regarding these trends, and last year launched its own AI search push, with its Search Generative Experience. This is an effort to add in some of the clever summarizing abilities of generative AI systems to Google’s traditional search system, saving users time they’d otherwise have spent trawling through a handful of the top search results in order to learn the actual answer to the queries they typed in.

But as AI use expands, and firms like Microsoft double- and triple-down on their efforts to incorporate AI into everyone’s digital lives, the question of the role of traditional search compared to AI chatbots and similar tech remains an open one. AI will soon impact how you think about marketing your company’s products, and search engine optimization to bolster traffic to your website may even stop being such an important factor.

So if you’re building a long-term marketing strategy right now it might be worth examining how you can leverage AI products to market your wares alongside more traditional search systems. It’s always smart to skate to where the puck is going to be versus where it currently is.

Feature Image Credit: Getty Images

Sourced from Inc.

Social media’s unregulated evolution over the past decade holds a lot of lessons that apply directly to AI companies and technologies.

Oh, how the mighty have fallen. A decade ago, social media was celebrated for sparking democratic uprisings in the Arab world and beyond. Now front pages are splashed with stories of social platforms’ role in misinformation, business conspiracy, malfeasance, and risks to mental health. In a 2022 survey, Americans blamed social media for the coarsening of our political discourse, the spread of misinformation, and the increase in partisan polarization.

Today, tech’s darling is artificial intelligence. Like social media, it has the potential to change the world in many ways, some favourable to democracy. But at the same time, it has the potential to do incredible damage to society.

There is a lot we can learn about social media’s unregulated evolution over the past decade that directly applies to AI companies and technologies. These lessons can help us avoid making the same mistakes with AI that we did with social media.

In particular, five fundamental attributes of social media have harmed society. AI also has those attributes. Note that they are not intrinsically evil. They are all double-edged swords, with the potential to do either good or ill. The danger comes from who wields the sword, and in what direction it is swung. This has been true for social media, and it will similarly hold true for AI. In both cases, the solution lies in limits on the technology’s use.

#1: Advertising

The role advertising plays in the internet arose more by accident than anything else. When commercialization first came to the internet, there was no easy way for users to make micropayments to do things like viewing a web page. Moreover, users were accustomed to free access and wouldn’t accept subscription models for services. Advertising was the obvious business model, if never the best one. And it’s the model that social media also relies on, which leads it to prioritize engagement over anything else.

Both Google and Facebook believe that AI will help them keep their stranglehold on an 11-figure online ad market (yep, 11 figures), and the tech giants that are traditionally less dependent on advertising, like Microsoft and Amazon, believe that AI will help them seize a bigger piece of that market.

Big Tech needs something to persuade advertisers to keep spending on their platforms. Despite bombastic claims about the effectiveness of targeted marketing, researchers have long struggled to demonstrate where and when online ads really have an impact. When major brands like Uber and Procter & Gamble recently slashed their digital ad spending by hundreds of millions of dollars, they proclaimed that it made no dent at all in their sales.

AI-powered ads, industry leaders say, will be much better. Google assures you that AI can tweak your ad copy in response to what users search for, and that its AI algorithms will configure your campaigns to maximize success. Amazon wants you to use its image generation AI to make your toaster product pages look cooler. And IBM is confident its Watson AI will make your ads better.

These techniques border on the manipulative, but the biggest risk to users comes from advertising within AI chatbots. Just as Google and Meta embed ads in your search results and feeds, AI companies will be pressured to embed ads in conversations. And because those conversations will be relational and human-like, they could be more damaging. While many of us have gotten pretty good at scrolling past the ads in Amazon and Google results pages, it will be much harder to determine whether an AI chatbot is mentioning a product because it’s a good answer to your question or because the AI developer got a kickback from the manufacturer.

#2: Surveillance

Social media’s reliance on advertising as the primary way to monetize websites led to personalization, which led to ever-increasing surveillance. To convince advertisers that social platforms can tweak ads to be maximally appealing to individual people, the platforms must demonstrate that they can collect as much information about those people as possible.

It’s hard to exaggerate how much spying is going on. A recent analysis by Consumer Reports about Facebook—just Facebook—showed that every user has more than 2,200 different companies spying on their web activities on its behalf.

AI-powered platforms that are supported by advertisers will face all the same perverse and powerful market incentives that social platforms do. It’s easy to imagine that a chatbot operator could charge a premium if it were able to claim that its chatbot could target users on the basis of their location, preference data, or past chat history and persuade them to buy products.

The possibility of manipulation is only going to get greater as we rely on AI for personal services. One of the promises of generative AI is the prospect of creating a personal digital assistant advanced enough to act as your advocate with others and as a butler to you. This requires more intimacy than you have with your search engine, email provider, cloud storage system, or phone. You’re going to want it with you constantly, and to most effectively work on your behalf, it will need to know everything about you. It will act as a friend, and you are likely to treat it as such, mistakenly trusting its discretion.

Even if you choose not to willingly acquaint an AI assistant with your lifestyle and preferences, AI technology may make it easier for companies to learn about you. Early demonstrations illustrate how chatbots can be used to surreptitiously extract personal data by asking you mundane questions. And with chatbots increasingly being integrated with everything from customer service systems to basic search interfaces on websites, exposure to this kind of inferential data harvesting may become unavoidable.

#3: Virality

Social media allows any user to express any idea with the potential for instantaneous global reach. A great public speaker standing on a soapbox can spread ideas to maybe a few hundred people on a good night. A kid with the right amount of snark on Facebook can reach a few hundred million people within a few minutes.

A decade ago, technologists hoped this sort of virality would bring people together and guarantee access to suppressed truths. But as a structural matter, it is in a social network’s interest to show you the things you are most likely to click on and share, and the things that will keep you on the platform.

As it happens, this often means outrageous, lurid, and triggering content. Researchers have found that content expressing maximal animosity toward political opponents gets the most engagement on Facebook and Twitter. And this incentive for outrage drives and rewards misinformation.

As Jonathan Swift once wrote, “Falsehood flies, and the Truth comes limping after it.” Academics seem to have proved this in the case of social media; people are more likely to share false information—perhaps because it seems more novel and surprising. And unfortunately, this kind of viral misinformation has been pervasive.

AI has the potential to supercharge the problem because it makes content production and propagation easier, faster, and more automatic. Generative AI tools can fabricate unending numbers of falsehoods about any individual or theme, some of which go viral. And those lies could be propelled by social accounts controlled by AI bots, which can share and launder the original misinformation at any scale.

Remarkably powerful AI text generators and autonomous agents are already starting to make their presence felt in social media. In July, researchers at Indiana University revealed a botnet of more than 1,100 Twitter accounts that appeared to be operated using ChatGPT.

AI will help reinforce viral content that emerges from social media. It will be able to create websites and web content, user reviews, and smartphone apps. It will be able to simulate thousands, or even millions, of fake personas to give the mistaken impression that an idea, or a political position, or use of a product, is more common than it really is. What we might perceive to be vibrant political debate could be bots talking to bots. And these capabilities won’t be available just to those with money and power; the AI tools necessary for all of this will be easily available to us all.

#4: Lock-in

Social media companies spend a lot of effort making it hard for you to leave their platforms. It’s not just that you’ll miss out on conversations with your friends. They make it hard for you to take your saved data—connections, posts, photos—and port it to another platform. Every moment you invest in sharing a memory, reaching out to an acquaintance, or curating your follows on a social platform adds a brick to the wall you’d have to climb over to go to another platform.

This concept of lock-in isn’t unique to social media. Microsoft cultivated proprietary document formats for years to keep you using its flagship Office product. Your music service or e-book reader makes it hard for you to take the content you purchased to a rival service or reader. And if you switch from an iPhone to an Android device, your friends might mock you for sending text messages in green bubbles. But social media takes this to a new level. No matter how bad it is, it’s very hard to leave Facebook if all your friends are there. Coordinating everyone to leave for a new platform is impossibly hard, so no one does.

Similarly, companies creating AI-powered personal digital assistants will make it hard for users to transfer that personalization to another AI. If AI personal assistants succeed in becoming massively useful time-savers, it will be because they know the ins and outs of your life as well as a good human assistant; would you want to give that up to make a fresh start on another company’s service? In extreme examples, some people have formed close, perhaps even familial, bonds with AI chatbots. If you think of your AI as a friend or therapist, that can be a powerful form of lock-in.

Lock-in is an important concern because it results in products and services that are less responsive to customer demand. The harder it is for you to switch to a competitor, the more poorly a company can treat you. Absent any way to force interoperability, AI companies have less incentive to innovate in features or compete on price, and fewer qualms about engaging in surveillance or other bad behaviours.

#5: Monopolization

Social platforms often start off as great products, truly useful and revelatory for their consumers, before they eventually start monetizing and exploiting those users for the benefit of their business customers. Then the platforms claw back the value for themselves, turning their products into truly miserable experiences for everyone. This is a cycle that Cory Doctorow has powerfully written about and traced through the history of Facebook, Twitter, and more recently TikTok.

The reason for these outcomes is structural. The network effects of tech platforms push a few firms to become dominant, and lock-in ensures their continued dominance. The incentives in the tech sector are so spectacularly, blindingly powerful that they have enabled six megacorporations (Amazon, Apple, Google, Facebook parent Meta, Microsoft, and Nvidia) to command a trillion dollars each of market value—or more. These firms use their wealth to block any meaningful legislation that would curtail their power. And they sometimes collude with each other to grow yet fatter.

This cycle is clearly starting to repeat itself in AI. Look no further than the industry poster child OpenAI, whose leading offering, ChatGPT, continues to set marks for uptake and usage. Within a year of the product’s launch, OpenAI’s valuation had skyrocketed to about $90 billion.

OpenAI once seemed like an “open” alternative to the megacorps—a common carrier for AI services with a socially oriented nonprofit mission. But the Sam Altman firing-and-rehiring debacle at the end of 2023, and Microsoft’s central role in restoring Altman to the CEO seat, simply illustrated how venture funding from the familiar ranks of the tech elite pervades and controls corporate AI. In January 2024, OpenAI took a big step toward monetization of this user base by introducing its GPT Store, wherein one OpenAI customer can charge another for the use of its custom versions of OpenAI software; OpenAI, of course, collects revenue from both parties. This sets in motion the very cycle Doctorow warns about.

In the middle of this spiral of exploitation, little or no regard is paid to externalities visited upon the greater public—people who aren’t even using the platforms. Even after society has wrestled with their ill effects for years, the monopolistic social networks have virtually no incentive to control their products’ environmental impact, tendency to spread misinformation, or pernicious effects on mental health. And the government has applied virtually no regulation toward those ends.

Likewise, few or no guardrails are in place to limit the potential negative impact of AI. Facial recognition software that amounts to racial profiling, simulated public opinions supercharged by chatbots, fake videos in political ads—all of it persists in a legal grey area. Even clear violators of campaign advertising law might, some think, be let off the hook if they simply do it with AI.

Mitigating the risks

The risks that AI poses to society are strikingly familiar, but there is one big difference: it’s not too late. This time, we know it’s all coming. Fresh off our experience with the harms wrought by social media, we have all the warning we should need to avoid the same mistakes.

The biggest mistake we made with social media was leaving it as an unregulated space. Even now—after all the studies and revelations of social media’s negative effects on kids and mental health, after Cambridge Analytica, after the exposure of Russian intervention in our politics, after everything else—social media in the US remains largely an unregulated “weapon of mass destruction.” Congress will take millions of dollars in contributions from Big Tech, and legislators will even invest millions of their own dollars with those firms, but passing laws that limit or penalize their behaviour seems to be a bridge too far.

We can’t afford to do the same thing with AI, because the stakes are even higher. The harm social media can do stems from how it affects our communication. AI will affect us in the same ways and many more besides. If Big Tech’s trajectory is any signal, AI tools will increasingly be involved in how we learn and how we express our thoughts. But these tools will also influence how we schedule our daily activities, how we design products, how we write laws, and even how we diagnose diseases. The expansive role of these technologies in our daily lives gives for-profit corporations opportunities to exert control over more aspects of society, and that exposes us to the risks arising from their incentives and decisions.

The good news is that we have a whole category of tools to modulate the risk that corporate actions pose for our lives, starting with regulation. Regulations can come in the form of restrictions on activity, such as limitations on what kinds of businesses and products are allowed to incorporate AI tools. They can come in the form of transparency rules, requiring disclosure of what data sets are used to train AI models or what new preproduction-phase models are being trained. And they can come in the form of oversight and accountability requirements, allowing for civil penalties in cases where companies disregard the rules.

The single biggest point of leverage governments have when it comes to tech companies is antitrust law. Despite what many lobbyists want you to think, one of the primary roles of regulation is to preserve competition—not to make life harder for businesses. It is not inevitable for OpenAI to become another Meta, an 800-pound gorilla whose user base and reach are several times those of its competitors. In addition to strengthening and enforcing antitrust law, we can introduce regulation that supports competition-enabling standards specific to the technology sector, such as data portability and device interoperability. This is another core strategy for resisting monopoly and corporate control.

Additionally, governments can enforce existing regulations on advertising. Just as the US regulates what media can and cannot host advertisements for sensitive products like cigarettes, and just as many other jurisdictions exercise strict control over the time and manner of politically sensitive advertising, so too could the US limit the engagement between AI providers and advertisers.

Lastly, we should recognize that developing and providing AI tools does not have to be the sovereign domain of corporations. We, the people and our government, can do this too. The proliferation of open-source AI development in 2023, successful to an extent that startled corporate players, is proof of this. And we can go further, calling on our government to build public-option AI tools developed with political oversight and accountability under our democratic system, where the dictatorship of the profit motive does not apply.

Which of these solutions is most practical, most important, or most urgently needed is up for debate. We should have a vibrant societal dialogue about whether and how to use each of these tools. There are lots of paths to a good outcome.

The problem is that this isn’t happening now, particularly in the US. And with a looming presidential election, conflict spreading alarmingly across Asia and Europe, and a global climate crisis, it’s easy to imagine that we won’t get our arms around AI any faster than we have (not) with social media. But it’s not too late. These are still the early years for practical consumer AI applications. We must and can do better.

Feature Image Credit: STEPHANIE ARNETT/MITTR | GETTY, ENVATO


Nathan E. Sanders is a data scientist and an affiliate with the Berkman Klein Center at Harvard University. Bruce Schneier is a security technologist and a fellow and lecturer at the Harvard Kennedy School.

Sourced from MIT Technology Review

BY BILLY JONES.

Hootsuite’s VP of marketing explains how incorporating AI as an integral part of strategy and brainstorming processes has transformed everything.


In the fast-paced world of marketing, I’ve always approached creativity as an organization’s bread and butter, with innovation as the knife that spreads it. As the VP of marketing at Hootsuite, I’ve found an unexpected ally in this creative quest—artificial intelligence and, more specifically, ChatGPT.

I’ve incorporated AI as an integral part of my strategy and brainstorming process in the past year—transforming the way I think, create, and deliver business value for my organization. Here are five ways it’s made an impact.

REINVENTING THE CREATIVE BRIEF

My years in agency life at BBDO have ingrained in me a love for structured creativity. The “Get-Who-To-By-Because” brief has always been a staple in my toolbox. It helps me zero in on whom I am trying to target, and pushes me to identify the pain point I am trying to solve, how I plan to solve it, the key message I’m trying to drive home, and the why behind the entire campaign.

Recently I began using ChatGPT to reframe these briefs. By feeding it relevant information and asking for multiple versions of a brief within the “Get-Who-To-By-Because” format, I’ve been amazed by the unexpected perspectives it offers. This process has helped fuel my creativity. Coupled with my experience in the creative space and deep understanding of my customer, it ensures that the final output is both human-centric and insight-driven.

CRAFTING TARGET PERSONAS WITH PRECISION

We all know that data is king. But the interpretation of any data is the key to the kingdom. ChatGPT’s ability to dive into vast public data pools has been a game changer for developing customer personas.

For instance, I asked ChatGPT to define the core demographics of North American social media managers.

From there, I used that very demographic output as an input to a user persona framework. ChatGPT was able to create detailed user personas that captured everything from challenges and joys to the preferred technology stack, budget, and even their favored media outlets. These insights have been invaluable in refining my team’s content and paid media strategies.

ENHANCING RESPONSE-BASED ADVERTISING

In marketing’s creative landscape, a tactical approach is sometimes crucial. ChatGPT excels here, notably during a time-strapped holiday season. Tasked with creating a compelling email for a January webinar with little time and lots of folks on holiday, we prompted ChatGPT with our holiday webinar theme “Supercharge Your 24 Social Strategy” and asked it to help us craft a click-worthy email via the AIDA (Attention-Interest-Desire-Action) framework. The outcome was a remarkable 300% increase in click-through rates, showcasing AI’s power in strategic, responsive advertising.
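The workflow described above amounts to wrapping a campaign theme in a structured prompt before handing it to a chat model. A minimal sketch of how such a prompt might be assembled (the section wording and audience below are illustrative assumptions, not Hootsuite’s actual prompt):

```python
def build_aida_prompt(theme: str, audience: str) -> str:
    """Assemble a prompt asking a chat model for an AIDA-structured email."""
    sections = [
        "Attention: a subject line and opener that stops the scroll",
        "Interest: why this webinar matters to the reader right now",
        "Desire: the concrete takeaways they'll walk away with",
        "Action: a single, clear call to register",
    ]
    bullet_list = "\n".join(f"- {s}" for s in sections)
    return (
        f"Write a marketing email for the webinar '{theme}', "
        f"aimed at {audience}.\n"
        "Structure it using the AIDA framework:\n"
        f"{bullet_list}\n"
        "Keep it under 150 words and jargon-free."
    )

prompt = build_aida_prompt(
    "Supercharge Your 24 Social Strategy", "social media managers"
)
# The prompt string would then be sent to a chat model via a provider's
# chat-completion API; the model's response is the draft email to edit.
```

The point of the template is repeatability: swap the theme and audience, and the framework constraints travel with every request.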

ACKNOWLEDGING THE POWER OF AI EDITING

From crafting a Slack message to assisting with internal briefs, ChatGPT has been my go-to editor. Its ability to tailor certain messages to specific communication styles—such as being jargon-free—is nothing short of impressive. This has enhanced the clarity and impact of my communications across the board.

SERVING AS A CREATIVE ARCHIVIST

In preparing for a product launch, ChatGPT has served me well as a creative archivist—providing insights on past marketing campaigns from companies who have similarly launched disruptive products.

By getting specific about the tactics that drove those campaigns’ success, ChatGPT has helped shape our approach to generating fame and achieving widespread industry impact.

These are just a few examples of how I’ve used AI in the past year. It has played multiple roles—from a strategist and brainstorming partner to a copywriter and researcher.

Throughout all of this, it’s important to remember that AI is a tool and not a replacement for human creativity.

To me, AI provides deep insights based on what’s been done. But it’s our creativity that dreams up ideas that have never been done. As we continue to harness AI’s power, it’s our human touch that will continue to make a real difference in the world of marketing.

Feature Image Credit: Getty Images


Billy Jones is the VP of Marketing at Hootsuite.

Sourced from FastCompany

BY MICHAEL GROTHAUS.

People who work in IT, software development, and advertising appear to be the most anxious.

Mass layoffs in the tech industry have made headlines nearly every week since late 2022. Combine that constant barrage with the rise of AI and uncertainty over the global economy and you have the perfect recipe for increasing anxiety across the American workforce when it comes to fears about job security.

Now a new survey from online marketing firm Authority Hacker puts some concrete numbers on just how many currently employed workers are worried about their job security in the years ahead. In a survey of 1,200 workers, Authority Hacker found that:

  • 54.58% of full-time workers have increased concerns about their job security.
  • Men (62.87%) are more likely than women (47.53%) to fear for their job security, which Authority Hacker says may reflect the 3:1 gender ratio of male to female employees in tech firms.
  • The more a person makes, the more likely they are to worry about their job security. Those making $150,000 or more worry the most about their job security (72.48%), while those making $50,000 or less worry the least (50.26%).
  • The younger an employee, the more likely they are to worry about their job security: 62.2% of 25-to-44-year-olds are worried, versus less than 50% of those over the age of 45.
  • C-suite execs are the most worried about their job security at 79.31%.
  • But just 46.82% and 45.80% of non-management staff and admin staff, respectively, are worried about their job security.

The larger the company is, the more likely employees are to worry about their job security. Authority Hacker found that 74.33% of those at companies that employ between 500 and 1,000 workers worry about their job security, while only 45.38% of workers at companies with 25 or fewer employees worry about their job security.

And when it comes to concerns by profession, workers most likely to fear for their jobs happen to be those whose industries are most at risk of being impacted by AI. Those professions are:

  1. IT – Services & Data: 89.66%
  2. Software development: 74.42%
  3. Advertising: 70.00%
  4. Finance and Insurance: 67.56%
  5. Human Resources: 64.29%

 

To arrive at its findings, Authority Hacker surveyed 1,200 full-time workers in the United States aged 25 and above.

Feature Image Credit: Aziz Acharki/Unsplash, Richard Horvath/Unsplash


Michael Grothaus is a novelist and author. He has written for Fast Company since 2013, where he’s interviewed some of the tech industry’s most prominent leaders and writes about everything from Apple and artificial intelligence to the effects of technology on individuals and society. Michael’s current tech-focused areas of interest include AI, quantum computing, and the ways tech can improve the quality of life for the elderly and individuals with disabilities.

Sourced from FastCompany

By Alessio Francesco Fedeli

The current digital landscape is marked by the struggle to make your business visible online and to reach the right audience amid a wave of competition. Search engine marketing (SEM) offers pivotal strategies for achieving this, and ongoing advancements in artificial intelligence (AI) and machine learning give marketers even greater opportunities for growth. These advancements are revolutionising SEM and significantly enhancing the efficiency and effectiveness of business campaigns.

AI-enhanced SEM tools stand at the vanguard of this revolution, utilizing advanced algorithms and machine learning capabilities to transform every facet of search engine marketing comprehensively. From automating the process of keyword research to refining advertisement creation, and from optimising bid management to improving performance analysis, these tools furnish marketers with the capacity to attain exceptional outcomes. They transcend conventional tool functionality; they act as catalysts for change, facilitating precise targeting and real-time modifications previously considered unattainable.

Exploring further into AI and machine learning within SEM reveals that these technologies are not only augmenting existing methodologies but also fostering novel strategies. Marketers harnessing these tools gain the ability to predict market trends accurately, comprehend consumer behaviour with enhanced precision, and implement campaigns that are both cost-efficient and high-impact. The advent of AI-driven SEM marks a transformative era in digital advertising, reshaping the landscape in ways that are beginning to unfold.

Leveraging AI and machine learning in SEM

Photo by Steve Johnson on Unsplash

The Role of AI in search engine marketing

AI revolutionises SEM by making complex tasks simple. It sifts through vast datasets to unearth insights beyond human capability. By fine-tuning keyword research and bid optimisation, AI ensures ads hit the mark every time. It doesn’t stop there; AI tailors ad content for individual users, predicting trends and making swift, informed decisions. This not only sharpens the marketer’s toolbox but also enhances the consumer’s journey, significantly boosting conversion rates. With AI in SEM, ads become more than just noise; they’re strategic moves in the digital marketplace.

Benefits of Using Machine Learning in SEM

Although some marketers are apprehensive, it is important to understand the benefits of incorporating machine learning into your SEM strategy.

Benefits of machine learning in SEM

  • Enhanced targeting accuracy: By analysing user data, machine learning identifies the most relevant audience segments, improving the precision of targeting efforts.
  • Optimised bid adjustments: Machine learning algorithms navigate the volatile bidding landscape, making real-time adjustments to maximize ROI.
  • Improved ad performance: It analyses what works best for ad performance, from copy to design, ensuring optimal engagement and conversion rates.
  • Fraud detection and protection: Machine learning acts as a guardian against click fraud, safeguarding advertising budgets from dishonest practices by spotting and mitigating fraudulent activities.

This integration offers strategic advantages that enable marketers to be more effective in a competitive digital landscape. By implementing machine learning, businesses can not only optimise their advertising efforts but also protect their investments, so that every dollar spent is an investment towards tangible results.
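To make the “optimised bid adjustments” idea concrete, here is a toy illustration (not any vendor’s actual algorithm) of a standard rule of thumb: a click is worth the target cost per acquisition times the predicted conversion rate, so a model’s conversion estimate feeds directly into the bid. The numbers and the bid cap are assumptions for illustration:

```python
def suggested_cpc_bid(target_cpa: float, predicted_conversion_rate: float,
                      max_bid: float = 5.0) -> float:
    """Rule of thumb: a click is worth target_CPA * P(conversion).

    In a real system, predicted_conversion_rate would come from a machine
    learning model scoring the query, audience segment, and context.
    """
    bid = target_cpa * predicted_conversion_rate
    return round(min(bid, max_bid), 2)  # cap the bid to protect the budget

# A $40 target CPA with a 5% predicted conversion rate prices a click at $2.
print(suggested_cpc_bid(40.0, 0.05))  # 2.0
```

Real-time adjustment then just means re-running this calculation whenever the model’s conversion estimate changes, rather than leaving bids fixed for the life of the campaign.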

Incorporating AI and machine learning technologies in SEM campaigns

Choosing the right AI tools is the first step to SEM success. The ideal tool offers a comprehensive suite for managing keywords, bids, ads, and performance, fitting seamlessly into your marketing stack. On the machine learning front, clarity in objectives paves the way for impactful integration. Whether aiming for higher CTRs or lower CPA, leveraging historical data and machine learning algorithms to predict and adjust is key. Constant experimentation and analysis refine strategies, ensuring SEM campaigns not only meet but exceed expectations. In the rapidly evolving world of SEM, AI and machine learning are not just options but necessities.

Strategies for successful implementation

This photo was generated using Dall-E

In the evolving landscape of search engine marketing (SEM), leveraging AI and machine learning can set a campaign apart, maximising efficiency and returns. Below are strategies detailing how to integrate these advanced technologies effectively.

Choosing the right AI tools for SEM

In the realm of SEM, it is critical to select AI tools that are congruent with your marketing objectives. The market is replete with a myriad of options, each purporting to transform your SEM strategies radically. Nonetheless, not every tool offers equal value. It is advisable to opt for tools that provide an extensive analysis of keywords, insights into competitors, and capabilities for automated bid management. These functionalities ensure that your campaigns are both precisely targeted and economically efficient. Furthermore, the implementation of AI-driven tools for content optimisation can notably increase ad relevance, thereby enhancing click-through rates (CTR) and reducing cost per acquisition (CPA).

Conducting trials with various tools before finalizing a decision is imperative to identify a solution that is specifically catered to your requirements. Platforms offering advanced analytics should be given priority as they afford actionable insights critical for ongoing refinement. It is important to recognize that the effective use of AI in SEM transcends merely selecting cutting-edge technology; it encompasses the strategic application of these tools to continually refine and advance marketing strategies over time.

Integrating machine learning algorithms into SEM practices

Machine learning algorithms constitute a cornerstone in the advancement of Search Engine Marketing (SEM) strategies, offering unprecedented insights into consumer behaviour and preferences. To capitalize on this opportunity, it is essential to integrate machine learning SEM technologies, emphasizing predictive analytics. Such an approach enables a deeper understanding of the interactions between different demographics and your advertisements, thereby improving audience segmentation.

Moreover, machine learning capabilities enable the automation of the most labour-intensive tasks within SEM, including bid management and A/B testing. This automation not only conserves precious time but also markedly elevates the efficiency of marketing campaigns. By adapting SEM practices to incorporate these algorithms, advertisements are perpetually optimised for performance, obviating the need for continuous manual intervention.

The fusion of machine learning’s predictive analytics with AI-enabled creative optimisation represents a pivotal evolution in Search Engine Marketing (SEM) strategies. This integrative approach allows for the real-time modification of advertisement components, including imagery and text, to better match user intentions, thereby markedly enhancing campaign outcomes.

Employing machine learning and AI within SEM goes beyond simply embracing cutting-edge technology; it denotes an ongoing dedication to a cycle of testing, education, and improvement. This dedication positions marketing endeavours at the vanguard of innovation during a period marked by rapid digital change.

Measuring success and ROI

Photo by krakenimages on Unsplash

Utilising metrics and KPIs to evaluate AI and machine learning impact

The integration of Artificial Intelligence (AI) and Machine Learning (ML) into Search Engine Marketing (SEM) strategies has profoundly altered the approaches utilized by digital marketing experts.

  • For an accurate assessment of the effectiveness of these advanced SEM technologies, focusing on relevant metrics and Key Performance Indicators (KPIs) is essential.
  • These criteria provide a transparent evaluation of the performance enhancements brought about by AI and ML.
  • They enable organizations to measure success and calculate Return on Investment (ROI) with greater accuracy.

Primarily, conversion rates emerge as a crucial metric. They serve as direct indicators of the efficiency of AI-enhanced ad targeting and bid management strategies, reflecting whether such technological advancements result in an increased proportion of visitors performing desired actions, such as completing purchases or registering for newsletters.

Cost per Acquisition (CPA) represents another fundamental metric. It illustrates the effectiveness with which AI and ML tools manage advertising expenditures to secure new clientele. Reduced CPA values indicate that these advanced SEM technologies are not only pinpointing the appropriate audience but also achieving this in a financially prudent manner.

Click-through rates (CTR) hold significant importance as well. An elevated CTR signifies that the predictive analytics and automated content optimisation facilitated by AI are effectively engaging the target demographic, thereby increasing their propensity to interact with advertisements.

Moreover, Return on Ad Spend (ROAS) is an essential measure of overall operational efficacy. It quantifies the revenue generated for every unit of currency expended on SEM initiatives. An enhancement in ROAS denotes that integrating AI and ML into SEM strategies is yielding more lucrative campaigns.
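The four metrics above are simple ratios, so computing them from raw campaign counts is straightforward. A minimal sketch with made-up figures (the campaign data here is purely illustrative):

```python
def sem_metrics(impressions: int, clicks: int, conversions: int,
                ad_spend: float, revenue: float) -> dict:
    """Compute the core SEM performance ratios from raw campaign numbers."""
    return {
        "ctr": clicks / impressions,             # click-through rate
        "conversion_rate": conversions / clicks,
        "cpa": ad_spend / conversions,           # cost per acquisition
        "roas": revenue / ad_spend,              # return on ad spend
    }

# Illustrative campaign: 50,000 impressions, 1,500 clicks,
# 75 conversions, $3,000 spent, $12,000 in attributed revenue.
m = sem_metrics(50_000, 1_500, 75, 3_000.0, 12_000.0)
print(m)  # ctr=0.03, conversion_rate=0.05, cpa=40.0, roas=4.0
```

Tracking these ratios over time, rather than raw totals, is what lets a team tell whether an AI-driven change actually improved efficiency or merely coincided with higher spend.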

Through meticulous observation of these metrics, organizations can comprehensively assess the impact of Artificial Intelligence (AI) and Machine Learning (ML) on their Search Engine Marketing (SEM) strategies. This analysis highlights not only the achievement of set goals but also identifies potential areas for enhancement. As AI and ML evolve, securing a competitive advantage in SEM requires ongoing vigilance and an adaptable methodology informed by data-driven insights.

Utilising machine learning and AI is important in the pursuit of digital marketing success. However, SEM is just one aspect of marketing, standing shoulder to shoulder with methods like SEO. Knowing the difference between the two will help you decide which to use, or how to combine them, for a more prosperous digital marketing campaign.

Feature Image Credit: This photo was generated using Dall-E


A graduate of Webster University with a degree in Management with an emphasis on International Business, Alessio is a Thai-Italian with a multicultural perspective on Thailand and abroad. Passionate about sports and activities, Alessio also offers insights into various spots for a fun and healthy lifestyle.

Sourced from Thaiger


Social media’s unregulated evolution over the past decade holds a lot of lessons that apply directly to AI companies and technologies.

Oh, how the mighty have fallen. A decade ago, social media was celebrated for sparking democratic uprisings in the Arab world and beyond. Now front pages are splashed with stories of social platforms’ role in misinformation, business conspiracy, malfeasance, and risks to mental health. In a 2022 survey, Americans blamed social media for the coarsening of our political discourse, the spread of misinformation, and the increase in partisan polarization.

Today, tech’s darling is artificial intelligence. Like social media, it has the potential to change the world in many ways, some favourable to democracy. But at the same time, it has the potential to do incredible damage to society.

There is a lot we can learn about social media’s unregulated evolution over the past decade that directly applies to AI companies and technologies. These lessons can help us avoid making the same mistakes with AI that we did with social media.

In particular, five fundamental attributes of social media have harmed society. AI also has those attributes. Note that they are not intrinsically evil. They are all double-edged swords, with the potential to do either good or ill. The danger comes from who wields the sword, and in what direction it is swung. This has been true for social media, and it will similarly hold true for AI. In both cases, the solution lies in limits on the technology’s use.

#1: Advertising

The role advertising plays in the internet arose more by accident than anything else. When commercialization first came to the internet, there was no easy way for users to make micropayments to do things like viewing a web page. Moreover, users were accustomed to free access and wouldn’t accept subscription models for services. Advertising was the obvious business model, if never the best one. And it’s the model that social media also relies on, which leads it to prioritize engagement over anything else.

Both Google and Facebook believe that AI will help them keep their stranglehold on an 11-figure online ad market (yep, 11 figures), and the tech giants that are traditionally less dependent on advertising, like Microsoft and Amazon, believe that AI will help them seize a bigger piece of that market.

Big Tech needs something to persuade advertisers to keep spending on their platforms. Despite bombastic claims about the effectiveness of targeted marketing, researchers have long struggled to demonstrate where and when online ads really have an impact. When major brands like Uber and Procter & Gamble recently slashed their digital ad spending by the hundreds of millions, they proclaimed that it made no dent at all in their sales.

AI-powered ads, industry leaders say, will be much better. Google assures you that AI can tweak your ad copy in response to what users search for, and that its AI algorithms will configure your campaigns to maximize success. Amazon wants you to use its image generation AI to make your toaster product pages look cooler. And IBM is confident its Watson AI will make your ads better.

These techniques border on the manipulative, but the biggest risk to users comes from advertising within AI chatbots. Just as Google and Meta embed ads in your search results and feeds, AI companies will be pressured to embed ads in conversations. And because those conversations will be relational and human-like, they could be more damaging. While many of us have gotten pretty good at scrolling past the ads in Amazon and Google results pages, it will be much harder to determine whether an AI chatbot is mentioning a product because it’s a good answer to your question or because the AI developer got a kickback from the manufacturer.

#2: Surveillance

Social media’s reliance on advertising as the primary way to monetize websites led to personalization, which led to ever-increasing surveillance. To convince advertisers that social platforms can tweak ads to be maximally appealing to individual people, the platforms must demonstrate that they can collect as much information about those people as possible.

It’s hard to exaggerate how much spying is going on. A recent analysis by Consumer Reports about Facebook—just Facebook—showed that every user has more than 2,200 different companies spying on their web activities on its behalf.

AI-powered platforms that are supported by advertisers will face all the same perverse and powerful market incentives that social platforms do. It’s easy to imagine that a chatbot operator could charge a premium if it were able to claim that its chatbot could target users on the basis of their location, preference data, or past chat history and persuade them to buy products.

The possibility of manipulation is only going to get greater as we rely on AI for personal services. One of the promises of generative AI is the prospect of creating a personal digital assistant advanced enough to act as your advocate with others and as a butler to you. This requires more intimacy than you have with your search engine, email provider, cloud storage system, or phone. You’re going to want it with you constantly, and to most effectively work on your behalf, it will need to know everything about you. It will act as a friend, and you are likely to treat it as such, mistakenly trusting its discretion.

Even if you choose not to willingly acquaint an AI assistant with your lifestyle and preferences, AI technology may make it easier for companies to learn about you. Early demonstrations illustrate how chatbots can be used to surreptitiously extract personal data by asking you mundane questions. And with chatbots increasingly being integrated with everything from customer service systems to basic search interfaces on websites, exposure to this kind of inferential data harvesting may become unavoidable.

#3: Virality

Social media allows any user to express any idea with the potential for instantaneous global reach. A great public speaker standing on a soapbox can spread ideas to maybe a few hundred people on a good night. A kid with the right amount of snark on Facebook can reach a few hundred million people within a few minutes.

A decade ago, technologists hoped this sort of virality would bring people together and guarantee access to suppressed truths. But as a structural matter, it is in a social network’s interest to show you the things you are most likely to click on and share, and the things that will keep you on the platform.

As it happens, this often means outrageous, lurid, and triggering content. Researchers have found that content expressing maximal animosity toward political opponents gets the most engagement on Facebook and Twitter. And this incentive for outrage drives and rewards misinformation.

As Jonathan Swift once wrote, “Falsehood flies, and the Truth comes limping after it.” Academics seem to have proved this in the case of social media; people are more likely to share false information—perhaps because it seems more novel and surprising. And unfortunately, this kind of viral misinformation has been pervasive.

AI has the potential to supercharge the problem because it makes content production and propagation easier, faster, and more automatic. Generative AI tools can fabricate unending numbers of falsehoods about any individual or theme, some of which go viral. And those lies could be propelled by social accounts controlled by AI bots, which can share and launder the original misinformation at any scale.

Remarkably powerful AI text generators and autonomous agents are already starting to make their presence felt in social media. In July, researchers at Indiana University revealed a botnet of more than 1,100 Twitter accounts that appeared to be operated using ChatGPT.

AI will help reinforce viral content that emerges from social media. It will be able to create websites and web content, user reviews, and smartphone apps. It will be able to simulate thousands, or even millions, of fake personas to give the mistaken impression that an idea, or a political position, or use of a product, is more common than it really is. What we might perceive to be vibrant political debate could be bots talking to bots. And these capabilities won’t be available just to those with money and power; the AI tools necessary for all of this will be easily available to us all.

#4: Lock-in

Social media companies spend a lot of effort making it hard for you to leave their platforms. It’s not just that you’ll miss out on conversations with your friends. They make it hard for you to take your saved data—connections, posts, photos—and port it to another platform. Every moment you invest in sharing a memory, reaching out to an acquaintance, or curating your follows on a social platform adds a brick to the wall you’d have to climb over to go to another platform.

This concept of lock-in isn’t unique to social media. Microsoft cultivated proprietary document formats for years to keep you using its flagship Office product. Your music service or e-book reader makes it hard for you to take the content you purchased to a rival service or reader. And if you switch from an iPhone to an Android device, your friends might mock you for sending text messages in green bubbles. But social media takes this to a new level. No matter how bad it is, it’s very hard to leave Facebook if all your friends are there. Coordinating everyone to leave for a new platform is impossibly hard, so no one does.

Similarly, companies creating AI-powered personal digital assistants will make it hard for users to transfer that personalization to another AI. If AI personal assistants succeed in becoming massively useful time-savers, it will be because they know the ins and outs of your life as well as a good human assistant; would you want to give that up to make a fresh start on another company’s service? In extreme examples, some people have formed close, perhaps even familial, bonds with AI chatbots. If you think of your AI as a friend or therapist, that can be a powerful form of lock-in.

Lock-in is an important concern because it results in products and services that are less responsive to customer demand. The harder it is for you to switch to a competitor, the more poorly a company can treat you. Absent any way to force interoperability, AI companies have less incentive to innovate in features or compete on price, and fewer qualms about engaging in surveillance or other bad behaviours.

#5: Monopolization

Social platforms often start off as great products, truly useful and revelatory for their consumers, before they eventually start monetizing and exploiting those users for the benefit of their business customers. Then the platforms claw back the value for themselves, turning their products into truly miserable experiences for everyone. This is a cycle that Cory Doctorow has powerfully written about and traced through the history of Facebook, Twitter, and more recently TikTok.

The reason for these outcomes is structural. The network effects of tech platforms push a few firms to become dominant, and lock-in ensures their continued dominance. The incentives in the tech sector are so spectacularly, blindingly powerful that they have enabled six megacorporations (Amazon, Apple, Google, Facebook parent Meta, Microsoft, and Nvidia) to command a trillion dollars each of market value—or more. These firms use their wealth to block any meaningful legislation that would curtail their power. And they sometimes collude with each other to grow yet fatter.

This cycle is clearly starting to repeat itself in AI. Look no further than the industry poster child OpenAI, whose leading offering, ChatGPT, continues to set marks for uptake and usage. Within a year of the product’s launch, OpenAI’s valuation had skyrocketed to about $90 billion.

OpenAI once seemed like an “open” alternative to the megacorps—a common carrier for AI services with a socially oriented nonprofit mission. But the Sam Altman firing-and-rehiring debacle at the end of 2023, and Microsoft’s central role in restoring Altman to the CEO seat, simply illustrated how venture funding from the familiar ranks of the tech elite pervades and controls corporate AI. In January 2024, OpenAI took a big step toward monetization of this user base by introducing its GPT Store, wherein one OpenAI customer can charge another for the use of its custom versions of OpenAI software; OpenAI, of course, collects revenue from both parties. This sets in motion the very cycle Doctorow warns about.

In the middle of this spiral of exploitation, little or no regard is paid to externalities visited upon the greater public—people who aren’t even using the platforms. Even after society has wrestled with their ill effects for years, the monopolistic social networks have virtually no incentive to control their products’ environmental impact, tendency to spread misinformation, or pernicious effects on mental health. And the government has applied virtually no regulation toward those ends.

Likewise, few or no guardrails are in place to limit the potential negative impact of AI. Facial recognition software that amounts to racial profiling, simulated public opinions supercharged by chatbots, fake videos in political ads—all of it persists in a legal grey area. Even clear violators of campaign advertising law might, some think, be let off the hook if they simply do it with AI.

Mitigating the risks

The risks that AI poses to society are strikingly familiar, but there is one big difference: it’s not too late. This time, we know it’s all coming. Fresh off our experience with the harms wrought by social media, we have all the warning we should need to avoid the same mistakes.

The biggest mistake we made with social media was leaving it as an unregulated space. Even now—after all the studies and revelations of social media’s negative effects on kids and mental health, after Cambridge Analytica, after the exposure of Russian intervention in our politics, after everything else—social media in the US remains largely an unregulated “weapon of mass destruction.” Congress will take millions of dollars in contributions from Big Tech, and legislators will even invest millions of their own dollars with those firms, but passing laws that limit or penalize their behaviour seems to be a bridge too far.

We can’t afford to do the same thing with AI, because the stakes are even higher. The harm social media can do stems from how it affects our communication. AI will affect us in the same ways and many more besides. If Big Tech’s trajectory is any signal, AI tools will increasingly be involved in how we learn and how we express our thoughts. But these tools will also influence how we schedule our daily activities, how we design products, how we write laws, and even how we diagnose diseases. The expansive role of these technologies in our daily lives gives for-profit corporations opportunities to exert control over more aspects of society, and that exposes us to the risks arising from their incentives and decisions.

The good news is that we have a whole category of tools to modulate the risk that corporate actions pose for our lives, starting with regulation. Regulations can come in the form of restrictions on activity, such as limitations on what kinds of businesses and products are allowed to incorporate AI tools. They can come in the form of transparency rules, requiring disclosure of what data sets are used to train AI models or what new preproduction-phase models are being trained. And they can come in the form of oversight and accountability requirements, allowing for civil penalties in cases where companies disregard the rules.

The single biggest point of leverage governments have when it comes to tech companies is antitrust law. Despite what many lobbyists want you to think, one of the primary roles of regulation is to preserve competition—not to make life harder for businesses. It is not inevitable for OpenAI to become another Meta, an 800-pound gorilla whose user base and reach are several times those of its competitors. In addition to strengthening and enforcing antitrust law, we can introduce regulation that supports competition-enabling standards specific to the technology sector, such as data portability and device interoperability. This is another core strategy for resisting monopoly and corporate control.

Additionally, governments can enforce existing regulations on advertising. Just as the US regulates what media can and cannot host advertisements for sensitive products like cigarettes, and just as many other jurisdictions exercise strict control over the time and manner of politically sensitive advertising, so too could the US limit the engagement between AI providers and advertisers.

Lastly, we should recognize that developing and providing AI tools does not have to be the sovereign domain of corporations. We, the people and our government, can do this too. The proliferation of open-source AI development in 2023, successful to an extent that startled corporate players, is proof of this. And we can go further, calling on our government to build public-option AI tools developed with political oversight and accountability under our democratic system, where the dictatorship of the profit motive does not apply.

Which of these solutions is most practical, most important, or most urgently needed is up for debate. We should have a vibrant societal dialogue about whether and how to use each of these tools. There are lots of paths to a good outcome.

The problem is that this isn’t happening now, particularly in the US. And with a looming presidential election, conflict spreading alarmingly across Asia and Europe, and a global climate crisis, it’s easy to imagine that we won’t get our arms around AI any faster than we have (not) with social media. But it’s not too late. These are still the early years for practical consumer AI applications. We must and can do better.

Feature Image Credit: STEPHANIE ARNETT/MITTR | GETTY, ENVATO


Nathan E. Sanders is a data scientist and an affiliate with the Berkman Klein Center at Harvard University. Bruce Schneier is a security technologist and a fellow and lecturer at the Harvard Kennedy School.

Sourced from MIT Technology Review


By Webb Wright 

If you’re considering launching a new AI-centered brand or product, you may want to go beyond simply adding ‘AI’ to the end of the name.

The AI Gold Rush is in full swing, and brands of all stripes are scrambling to establish their particular niches in this hugely profitable and increasingly crowded industry. New AI-centered brands, departments and products are cropping up by the day, each requiring a name that is, ideally, both memorable and unique.

“Every single company, whether a candy bar manufacturer or a software company, seemingly has to show that it is doing something to leverage AI,” says Jonathan Bell, founder and CEO of Want Branding. “And that often requires some kind of adjacent brand, which, of course, then needs a name.”

Several brands, as you may have noticed, have simply taken to adding ‘AI’ (or ‘.AI’) to the ends of their names. Think Stability AI, Spot AI, Mistral AI, Shield AI, People.ai, Otter.ai, Arize AI, Crowd AI, Toggle AI and so on. And, of course, there’s OpenAI. Following the hugely successful launch of ChatGPT in late 2022, the company has become something of a flagship for the entire wave of AI innovation currently underway, and it has probably done more than any other to establish the ‘AI’ suffix as the name du jour for up-and-coming brands looking to make a name for themselves in the industry.

Adding ‘AI’ to the end of a brand or product name “is an easy but often perhaps a cheap way of doing it without much thought,” says Bell.

A parallel can be drawn between this naming phenomenon and a similar one from the dawn of the internet in the late 90s, when scores of new brands with ‘.com’ at the ends of their names began to emerge. In those early days of the world wide web, it made practical sense for companies to make unambiguously clear that they were technologically savvy enough to have an online presence. (Remember, this was back when ‘online’ was itself a new, hip word.)

Gradually, over many years, however, the internet became so deeply embedded into most of our day-to-day lives, into the very fabric of popular culture and commerce, that it became more or less superfluous to add ‘.com’ to the end of a brand name. Most people these days automatically assume that any given brand – unless it’s incompetent beyond belief or run by a group of Luddites – has a website and probably some degree of social media presence.

The ‘.com’ naming trend, in other words, began as a worthwhile marketing tactic, but “at a certain point that was eroded and it became meaningless,” says David Placek, founder and CEO of Lexicon Branding. There are still, of course, some brands (Hotels.com, for example) that have chosen to use their domain names as their official names, but such a strategy is far less common today than it was when the internet had the shiny-new-toy factor.

AI could follow a similar trajectory of cultural adoption as that of the internet: today, it’s all anyone can talk about; tomorrow, it’s basically taken for granted. Just as people today assume that brands have an online presence – even when they don’t have ‘.com’ in their names – we could soon reach a point as a society in which AI is so ubiquitous, so deeply integrated into our devices and our modes of working and communicating with one another, that adding ‘AI’ to a brand or product name becomes passé. Placek says he’s “absolutely positive” that we’ll cross that threshold sometime within the next two years, after which point “everybody will assume that there’s something AI-related” built into most brands and products.

Given that forecast, adding ‘AI’ to the end of a name “can be a disservice for building brand strength over time, because [the market] becomes crowded,” says marketing agency Tenet Partners CEO Hampton Bridwell. “There are a lot of names with a similar sound or styling and that creates a situation where you don’t have differentiation or memorability within the name.”

Anthropomorphic names and the sad tale of Clippy

There have, of course, been other naming trends that have recently emerged around AI. For example, many AI-centered products have been given human-sounding names, apparently in an effort to make the underlying technology – which could potentially come across as a bit threatening to a culture that’s been weaned on films like 2001: A Space Odyssey and The Matrix – feel a bit less alien and intimidating.

Consider IBM’s Watson, a question-answering AI system that gained global fame when it won Jeopardy! in 2011. There are also more recent examples, including Siri (Apple), Alexa (Amazon) and Einstein (Salesforce).

As the journalist Charles Duhigg points out in a recent article in The New Yorker, Microsoft (which became a leader in the burgeoning AI industry following its recent multi-billion-dollar investments in OpenAI) has had to learn the hard way about the risks involved with trying to anthropomorphize AI. In 1996, the company introduced Clippy, a smiling virtual assistant with big eyes and a paperclip for a body, who could answer simple user questions on Microsoft Office platforms. The character became widely loathed by users. The Smithsonian called Clippy “one of the worst software design blunders in the annals of computing,” as Duhigg quotes in his article. Microsoft killed Clippy off in 2001.

The company once again tried its hand at anthropomorphizing algorithms in 2016 with the launch of Tay, an AI-powered chatbot whose conversational style reflected that of a typical teenage girl. Tay rather quickly descended into a fit of hate speech and was deactivated less than 24 hours after its launch.

Apparently wiser after the Clippy and Tay debacles, Microsoft is now naming its AI products in a manner that suggests utility and even a touch of fallibility. Copilot, the name of the company’s recently launched suite of AI-powered productivity tools, insinuates something that can be reasonably relied upon to provide a measure of assistance, not something into which one should invest one’s whole trust.

The curious case of ChatGPT

Perhaps the biggest irony in the realm of AI names is the fact that ChatGPT, the product that, more than any other, catalyzed the burgeoning AI Revolution, has such a widely disliked name.

For one thing, says Bridwell, the word ‘chat’ in a brand name “is pretty limiting – it really doesn’t embody what the whole thing is about in terms of [how] it delivers value. It’s a terrible name. Over time, [OpenAI] should really think about rebranding it.”

Even OpenAI CEO Sam Altman agrees that it’s not an ideal name. During a recent podcast hosted by comedian Trevor Noah, Altman said that ChatGPT is “a horrible name, but it may be too ubiquitous to ever change.”

ChatGPT’s suboptimal name could stem in part from the fact that the OpenAI team that built it did not initially have high hopes for its prospects as an uber-popular app. In the run-up to launch, it was referred to internally as a “low-key research preview”: a means through which the public could begin to interact with OpenAI’s GPT large language model more broadly, so that the company could collect feedback and fine-tune the technology accordingly.

Many within the OpenAI team were surprised when ChatGPT attracted its first million users in just five days, becoming the fastest-growing app in history.

Advice for marketers

According to Want Branding’s Jonathan Bell, brands that are looking to promote their use of AI through an optimized name should take their time. “It needs to be well thought-out,” he says. “It shouldn’t be something that’s done casually over a quick meeting, where you just simply add ‘AI’ to [the name]. Companies need to think about: What are they specifically doing? Can they deploy AI in a way that is really effective, or is this something that’s been done that could come across as bandwagon-jumping?”

Placek, who’s prone to referencing cognitive science and linguistics when discussing the psychology of brand- and product-naming, highlights the importance of sound symbolism – that is, the associations between particular sounds and the concepts that they evoke in the mind of the hearer. “You don’t want something too soft and you don’t want something too clever,” he says. “[You want something that’s] a little bit on the more serious side that [suggests] intelligence … sound symbolism should play a role in selecting and developing your names.”

When prompted to describe the qualities of a great name for an AI brand or product in fewer than 10 words, ChatGPT wrote: “Memorable, clear, unique, relevant, easy to pronounce, globally appealing, scalable.”

Feature Image Credit: Adobe Stock


Sourced from The Drum