
A day after OpenAI impressed with a startlingly improved ChatGPT AI model, Google showed off an equally stunning vision for how AI will improve the products that billions of people use every day.

The updates, announced at its annual Google I/O developer conference, come as the company is trying to push beyond its core advertising business with new devices and AI-powered tools. Artificial intelligence was so top of mind during the event that, as Google CEO Sundar Pichai noted at the end of the presentation, the term “AI” was said 120 times – as counted by none other than its own AI platform Gemini.

During the keynote, Google showed how it wants its AI products to become a bigger part of users’ lives, such as by sharing information, interacting with others, finding objects around the house, making schedules, shopping and using an Android device. Google essentially wants its AI to be part of everything you do.

Pichai kicked off the event by highlighting various new features powered by its latest AI model Gemini 1.5 Pro. One new feature, called Ask Photos, allows users to mine their saved pictures for deeper insights, such as asking when their daughter learned to swim or what their license plate number is.

He also showed how users can ask Gemini 1.5 Pro to summarize all recent emails from their child’s school, with the model analysing attachments, summarizing key points and spitting out action items.

Meanwhile, Google executives took turns demonstrating other capabilities, such as how the latest model could “read” a textbook and turn it into a kind of AI lecture featuring natural-sounding teachers that answer questions.

Just one day before, OpenAI — one of the tech industry’s leaders in artificial intelligence — unveiled a new AI model that it says will make chatbot ChatGPT smarter and easier to use. GPT-4o aims to turn ChatGPT into a digital personal assistant that can engage in real-time, spoken conversations and interact using text and “vision.” It can view screenshots, photos, documents or charts uploaded by users and have a conversation about them.

Google also showed off Gemini’s latest abilities to take different kinds of input — “multimodal” capabilities to take in text, voice or images — as a direct response to ChatGPT’s efforts. A Google executive also demoed a virtual “teammate” that can help stay on top of to-do lists, organize data and manage workflow.

The company also highlighted search improvements that allow users to ask more natural or more focused questions and receive various versions of the responses, such as in-depth or summarized results. Search can also make targeted suggestions, such as recommending kid-friendly restaurants in certain locations, or note what might be wrong with a gadget, such as a camera, from a video of the issue taken via Google Lens. The goal is to take the legwork out of searching on Google, the company said.

The company also briefly teased Project Astra, developed by Google’s DeepMind AI lab, which will allow AI assistants to help with users’ everyday lives by using phone cameras to interpret information about the real world, such as identifying objects and even finding misplaced items. It also hinted at how the technology could work on augmented reality glasses.

Google said that later this year it will integrate more AI functions into phones. For example, users will be able to drag and drop images created by AI into Google Messages and Gmail and ask questions about YouTube videos and PDFs on an Android device.

And in a move that will likely appeal to many, a new built-in tool for Android will help detect suspicious activity in the middle of a call, such as a scammer trying to imitate a user’s bank.

According to analyst Jacob Bourne, from market research firm Emarketer, it’s no surprise AI took centre stage at this year’s Google developer conference.

“By showcasing its latest models and how they’ll power existing products with strong consumer reach, Google is demonstrating how it can effectively differentiate itself from rivals,” he said.

He believes the reception of the new tools will be an indicator of how well Google can adapt its search product to meet the demands of the generative AI era.

“To maintain its competitive edge and satisfy investors, Google will need to focus on translating its AI innovations into profitable products and services at scale,” he said.

As the company grows its AI footprint, it said it will introduce more protections to cut down on potential misuse. Google is expanding its existing SynthID feature to detect AI-generated content. Last year, the tool added watermarks to AI-generated images and audio.

Google said it is also partnering with experts and institutions to test and improve the capabilities in its new models.

Although the company has doubled down on artificial intelligence in the past year, it also met significant roadblocks. Last year, shortly after introducing its generative AI tool — then called Bard and since renamed Gemini — Google’s share price dropped after a demo video of the tool showed it producing a factually inaccurate response to a question about the James Webb Space Telescope.

More recently, the company hit pause in February on Gemini’s ability to generate images of people after it was blasted on social media for producing historically inaccurate images that largely showed people of colour in place of White people.

Gemini, like other AI tools such as ChatGPT, is trained on vast troves of online data. Experts have long warned about the shortcomings around AI tools, such as the potential for inaccuracies, biases and the spreading of misinformation. Still, many companies are forging ahead on AI tools or partnerships.

Apple may be interested in licensing and building Google’s Gemini AI engine, which includes chatbots and other AI tools, into upcoming iPhones and its iOS 18 features, Bloomberg reported in March. The company is also reportedly talking to ChatGPT creator OpenAI.

Feature Image Credit: Google. Sundar Pichai speaks about Gemini 1.5 Pro during the Google I/O developer conference.


Sourced from CNN Business

“We will not stop until beauty is a source of happiness.”

Personal care brand Dove has become known for its campaigns championing real people with real bodies, as exemplified by its shunning of TikTok ‘beauty’ filters. And now, the brand is targeting AI in the latest iteration of its decades-old Real Beauty campaign.

The brand announced this week that it will never use AI-generated imagery to represent “real bodies” in its ads. And in a powerful short film, it takes aim at the generic and unrealistic beauty standards depicted in images churned out by text prompts such as “the most beautiful woman in the world.”

Alessandro Manfredi, chief marketing officer at Dove, adds, “At Dove, we seek a future in which women get to decide and declare what real beauty looks like – not algorithms. As we navigate the opportunities and challenges that come with new and emerging technology, we remain committed to protect, celebrate, and champion Real Beauty. Pledging to never use AI in our communications is just one step. We will not stop until beauty is a source of happiness, not anxiety, for every woman and girl.”

Indeed, over the 20-year course of its Real Beauty campaign, Dove has repeatedly proven itself to be a force for good. From shunning AI to helping game developers code natural hair in an effort to increase diversity in video games, the brand’s inclusivity credentials continue to impress.

Feature Image Credit: Dove

By Daniel John

Daniel John is Senior News Editor at Creative Bloq. He reports on the worlds of art, design, branding and lifestyle tech (which often translates to tech made by Apple). He joined in 2020 after working in copywriting and digital marketing with brands including ITV, NBC, Channel 4 and more.

Sourced from CREATIVE BLOQ


As AI supplants conventional search engines, their loss of market share will change the digital ad landscape, says research firm Gartner.

A new report from the research firm Gartner has some unsettling news for search engine giants like Google and Microsoft’s Bing. It predicts that as everyday net users become more comfortable with AI tech and incorporate it into their general net habits, chatbots and other agents will lead to a drop of 25 percent in “traditional search engine volume.” The search giants will then simply be “losing market share to AI chatbots and other virtual agents.”

One reason to care about this news is to remember that the search engine giants are really marketing giants. Search engines are useful, but Google makes money by selling ads that leverage data from its search engine. These ads are designed to convert to profits for the companies whose wares are being promoted. Plus, placing Google ads on a website is a revenue source that many other companies rely on – perhaps most visibly media firms. If AI upends search, then by definition it will similarly upend current marketing practices. And disrupted marketing norms mean that how you think about using online systems to market your company’s products will have to change too.

AI already plays a role in marketing. Chatbots are touted as having copy-generating skills that can boost small companies’ public relations efforts, but the tech is also having an effect inside the marketing process itself. An example of this is Shopify’s recent AI-powered Semantic Search system, which sifts through the text and image data of a manufacturer’s products and generates better search-matching terms, so that sellers don’t miss out on customers searching for a particular phrase. But this is simply using AI to improve current search-based marketing systems.
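
To make that idea concrete, here is a minimal sketch of how semantic matching of this general kind typically works – embedding product text and a shopper’s query as vectors and ranking by similarity. This is an illustration of the technique, not Shopify’s actual implementation; the embedding model and product data are assumptions.

    # Minimal sketch of semantic product search: embed product text and a
    # shopper's query, then rank products by cosine similarity.
    # Illustrative only -- not Shopify's implementation. Assumes an OpenAI
    # API key in the environment; the embedding model is an assumption.
    import numpy as np
    from openai import OpenAI

    client = OpenAI()

    def embed(texts):
        resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
        return np.array([d.embedding for d in resp.data])

    products = [
        "Stainless steel 2-slice toaster with bagel setting",
        "Cast iron skillet, 10 inch, pre-seasoned",
        "French press coffee maker, 34 oz borosilicate glass",
    ]

    product_vecs = embed(products)
    query_vec = embed(["something to brew coffee without a machine"])[0]

    # Cosine similarity between the query and every product description.
    scores = product_vecs @ query_vec / (
        np.linalg.norm(product_vecs, axis=1) * np.linalg.norm(query_vec)
    )
    for score, name in sorted(zip(scores, products), reverse=True):
        print(f"{score:.3f}  {name}")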

AI: smart enough to steal traffic

More important is the notion that AI chatbots can “steal” search engine traffic. Think of how many of the queries that you usually direct at Google – from basic stuff like “what’s 200 Fahrenheit in Celsius?” to more complex matters like “what’s the most recent games console made by Sony?” – could be answered by a chatbot instead. Typing those queries into ChatGPT or a system like Microsoft’s Copilot could mean they aren’t directed through Google’s labyrinthine search engine systems.
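
(That first query, incidentally, is a one-line calculation a chatbot can handle directly: °C = (°F − 32) × 5/9, so 200°F works out to (200 − 32) × 5/9 ≈ 93.3°C.)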

There’s also a hint that future web surfing won’t be as search-centric as it is now, thanks to the novel Arc app. Arc leverages search engine results as part of its answers to user queries, but the app promises to do the boring bits of web searching for you, neatly curating the answers above more traditional search engine results. AI “agents” are another emergent form of the tech that could impact search – AI systems that are able to go off and perform a complex sequence of tasks for you, like searching for some data and analysing it automatically.

Google, of course, is savvy regarding these trends, and last year launched its own AI search push with its Search Generative Experience. This is an effort to add some of the clever summarizing abilities of generative AI to Google’s traditional search system, saving users the time they’d otherwise spend trawling through a handful of the top search results to learn the actual answer to the queries they typed in.

But as AI use expands, and firms like Microsoft double- and triple-down on their efforts to incorporate AI into everyone’s digital lives, the question of the role of traditional search compared to AI chatbots and similar tech remains an open one. AI will soon impact how you think about marketing your company’s products, and search engine optimization may even stop being such an important factor in bolstering traffic to your website.

So if you’re building a long-term marketing strategy right now, it might be worth examining how you can leverage AI products to market your wares alongside more traditional search systems. It’s always smart to skate to where the puck is going to be versus where it currently is.

Feature Image Credit: Getty Images


Sourced from Inc.

By Nathan E. Sanders and Bruce Schneier

Social media’s unregulated evolution over the past decade holds a lot of lessons that apply directly to AI companies and technologies.

Oh, how the mighty have fallen. A decade ago, social media was celebrated for sparking democratic uprisings in the Arab world and beyond. Now front pages are splashed with stories of social platforms’ role in misinformation, business conspiracy, malfeasance, and risks to mental health. In a 2022 survey, Americans blamed social media for the coarsening of our political discourse, the spread of misinformation, and the increase in partisan polarization.

Today, tech’s darling is artificial intelligence. Like social media, it has the potential to change the world in many ways, some favourable to democracy. But at the same time, it has the potential to do incredible damage to society.

There is a lot we can learn about social media’s unregulated evolution over the past decade that directly applies to AI companies and technologies. These lessons can help us avoid making the same mistakes with AI that we did with social media.

In particular, five fundamental attributes of social media have harmed society. AI also has those attributes. Note that they are not intrinsically evil. They are all double-edged swords, with the potential to do either good or ill. The danger comes from who wields the sword, and in what direction it is swung. This has been true for social media, and it will similarly hold true for AI. In both cases, the solution lies in limits on the technology’s use.

#1: Advertising

The role advertising plays in the internet arose more by accident than anything else. When commercialization first came to the internet, there was no easy way for users to make micropayments to do things like viewing a web page. Moreover, users were accustomed to free access and wouldn’t accept subscription models for services. Advertising was the obvious business model, if never the best one. And it’s the model that social media also relies on, which leads it to prioritize engagement over anything else.

Both Google and Facebook believe that AI will help them keep their stranglehold on an 11-figure online ad market (yep, 11 figures), and the tech giants that are traditionally less dependent on advertising, like Microsoft and Amazon, believe that AI will help them seize a bigger piece of that market.

Big Tech needs something to persuade advertisers to keep spending on their platforms. Despite bombastic claims about the effectiveness of targeted marketing, researchers have long struggled to demonstrate where and when online ads really have an impact. When major brands like Uber and Procter & Gamble recently slashed their digital ad spending by the hundreds of millions, they proclaimed that it made no dent at all in their sales.

AI-powered ads, industry leaders say, will be much better. Google assures you that AI can tweak your ad copy in response to what users search for, and that its AI algorithms will configure your campaigns to maximize success. Amazon wants you to use its image generation AI to make your toaster product pages look cooler. And IBM is confident its Watson AI will make your ads better.

These techniques border on the manipulative, but the biggest risk to users comes from advertising within AI chatbots. Just as Google and Meta embed ads in your search results and feeds, AI companies will be pressured to embed ads in conversations. And because those conversations will be relational and human-like, they could be more damaging. While many of us have gotten pretty good at scrolling past the ads in Amazon and Google results pages, it will be much harder to determine whether an AI chatbot is mentioning a product because it’s a good answer to your question or because the AI developer got a kickback from the manufacturer.

#2: Surveillance

Social media’s reliance on advertising as the primary way to monetize websites led to personalization, which led to ever-increasing surveillance. To convince advertisers that social platforms can tweak ads to be maximally appealing to individual people, the platforms must demonstrate that they can collect as much information about those people as possible.

It’s hard to exaggerate how much spying is going on. A recent analysis by Consumer Reports about Facebook—just Facebook—showed that every user has more than 2,200 different companies spying on their web activities on its behalf.

AI-powered platforms that are supported by advertisers will face all the same perverse and powerful market incentives that social platforms do. It’s easy to imagine that a chatbot operator could charge a premium if it were able to claim that its chatbot could target users on the basis of their location, preference data, or past chat history and persuade them to buy products.

The possibility of manipulation is only going to get greater as we rely on AI for personal services. One of the promises of generative AI is the prospect of creating a personal digital assistant advanced enough to act as your advocate with others and as a butler to you. This requires more intimacy than you have with your search engine, email provider, cloud storage system, or phone. You’re going to want it with you constantly, and to most effectively work on your behalf, it will need to know everything about you. It will act as a friend, and you are likely to treat it as such, mistakenly trusting its discretion.

Even if you choose not to willingly acquaint an AI assistant with your lifestyle and preferences, AI technology may make it easier for companies to learn about you. Early demonstrations illustrate how chatbots can be used to surreptitiously extract personal data by asking you mundane questions. And with chatbots increasingly being integrated with everything from customer service systems to basic search interfaces on websites, exposure to this kind of inferential data harvesting may become unavoidable.

#3: Virality

Social media allows any user to express any idea with the potential for instantaneous global reach. A great public speaker standing on a soapbox can spread ideas to maybe a few hundred people on a good night. A kid with the right amount of snark on Facebook can reach a few hundred million people within a few minutes.

A decade ago, technologists hoped this sort of virality would bring people together and guarantee access to suppressed truths. But as a structural matter, it is in a social network’s interest to show you the things you are most likely to click on and share, and the things that will keep you on the platform.

As it happens, this often means outrageous, lurid, and triggering content. Researchers have found that content expressing maximal animosity toward political opponents gets the most engagement on Facebook and Twitter. And this incentive for outrage drives and rewards misinformation.

As Jonathan Swift once wrote, “Falsehood flies, and the Truth comes limping after it.” Academics seem to have proved this in the case of social media; people are more likely to share false information—perhaps because it seems more novel and surprising. And unfortunately, this kind of viral misinformation has been pervasive.

AI has the potential to supercharge the problem because it makes content production and propagation easier, faster, and more automatic. Generative AI tools can fabricate unending numbers of falsehoods about any individual or theme, some of which go viral. And those lies could be propelled by social accounts controlled by AI bots, which can share and launder the original misinformation at any scale.

Remarkably powerful AI text generators and autonomous agents are already starting to make their presence felt in social media. In July, researchers at Indiana University revealed a botnet of more than 1,100 Twitter accounts that appeared to be operated using ChatGPT.

AI will help reinforce viral content that emerges from social media. It will be able to create websites and web content, user reviews, and smartphone apps. It will be able to simulate thousands, or even millions, of fake personas to give the mistaken impression that an idea, or a political position, or use of a product, is more common than it really is. What we might perceive to be vibrant political debate could be bots talking to bots. And these capabilities won’t be available just to those with money and power; the AI tools necessary for all of this will be easily available to us all.

#4: Lock-in

Social media companies spend a lot of effort making it hard for you to leave their platforms. It’s not just that you’ll miss out on conversations with your friends. They make it hard for you to take your saved data—connections, posts, photos—and port it to another platform. Every moment you invest in sharing a memory, reaching out to an acquaintance, or curating your follows on a social platform adds a brick to the wall you’d have to climb over to go to another platform.

This concept of lock-in isn’t unique to social media. Microsoft cultivated proprietary document formats for years to keep you using its flagship Office product. Your music service or e-book reader makes it hard for you to take the content you purchased to a rival service or reader. And if you switch from an iPhone to an Android device, your friends might mock you for sending text messages in green bubbles. But social media takes this to a new level. No matter how bad it is, it’s very hard to leave Facebook if all your friends are there. Coordinating everyone to leave for a new platform is impossibly hard, so no one does.

Similarly, companies creating AI-powered personal digital assistants will make it hard for users to transfer that personalization to another AI. If AI personal assistants succeed in becoming massively useful time-savers, it will be because they know the ins and outs of your life as well as a good human assistant; would you want to give that up to make a fresh start on another company’s service? In extreme examples, some people have formed close, perhaps even familial, bonds with AI chatbots. If you think of your AI as a friend or therapist, that can be a powerful form of lock-in.

Lock-in is an important concern because it results in products and services that are less responsive to customer demand. The harder it is for you to switch to a competitor, the more poorly a company can treat you. Absent any way to force interoperability, AI companies have less incentive to innovate in features or compete on price, and fewer qualms about engaging in surveillance or other bad behaviours.

#5: Monopolization

Social platforms often start off as great products, truly useful and revelatory for their consumers, before they eventually start monetizing and exploiting those users for the benefit of their business customers. Then the platforms claw back the value for themselves, turning their products into truly miserable experiences for everyone. This is a cycle that Cory Doctorow has powerfully written about and traced through the history of Facebook, Twitter, and more recently TikTok.

The reason for these outcomes is structural. The network effects of tech platforms push a few firms to become dominant, and lock-in ensures their continued dominance. The incentives in the tech sector are so spectacularly, blindingly powerful that they have enabled six megacorporations (Amazon, Apple, Google, Facebook parent Meta, Microsoft, and Nvidia) to command a trillion dollars each of market value—or more. These firms use their wealth to block any meaningful legislation that would curtail their power. And they sometimes collude with each other to grow yet fatter.

This cycle is clearly starting to repeat itself in AI. Look no further than the industry poster child OpenAI, whose leading offering, ChatGPT, continues to set marks for uptake and usage. Within a year of the product’s launch, OpenAI’s valuation had skyrocketed to about $90 billion.

OpenAI once seemed like an “open” alternative to the megacorps—a common carrier for AI services with a socially oriented nonprofit mission. But the Sam Altman firing-and-rehiring debacle at the end of 2023, and Microsoft’s central role in restoring Altman to the CEO seat, simply illustrated how venture funding from the familiar ranks of the tech elite pervades and controls corporate AI. In January 2024, OpenAI took a big step toward monetization of this user base by introducing its GPT Store, wherein one OpenAI customer can charge another for the use of its custom versions of OpenAI software; OpenAI, of course, collects revenue from both parties. This sets in motion the very cycle Doctorow warns about.

In the middle of this spiral of exploitation, little or no regard is paid to externalities visited upon the greater public—people who aren’t even using the platforms. Even after society has wrestled with their ill effects for years, the monopolistic social networks have virtually no incentive to control their products’ environmental impact, tendency to spread misinformation, or pernicious effects on mental health. And the government has applied virtually no regulation toward those ends.

Likewise, few or no guardrails are in place to limit the potential negative impact of AI. Facial recognition software that amounts to racial profiling, simulated public opinions supercharged by chatbots, fake videos in political ads—all of it persists in a legal grey area. Even clear violators of campaign advertising law might, some think, be let off the hook if they simply do it with AI.

Mitigating the risks

The risks that AI poses to society are strikingly familiar, but there is one big difference: it’s not too late. This time, we know it’s all coming. Fresh off our experience with the harms wrought by social media, we have all the warning we should need to avoid the same mistakes.

The biggest mistake we made with social media was leaving it as an unregulated space. Even now—after all the studies and revelations of social media’s negative effects on kids and mental health, after Cambridge Analytica, after the exposure of Russian intervention in our politics, after everything else—social media in the US remains largely an unregulated “weapon of mass destruction.” Congress will take millions of dollars in contributions from Big Tech, and legislators will even invest millions of their own dollars with those firms, but passing laws that limit or penalize their behaviour seems to be a bridge too far.

We can’t afford to do the same thing with AI, because the stakes are even higher. The harm social media can do stems from how it affects our communication. AI will affect us in the same ways and many more besides. If Big Tech’s trajectory is any signal, AI tools will increasingly be involved in how we learn and how we express our thoughts. But these tools will also influence how we schedule our daily activities, how we design products, how we write laws, and even how we diagnose diseases. The expansive role of these technologies in our daily lives gives for-profit corporations opportunities to exert control over more aspects of society, and that exposes us to the risks arising from their incentives and decisions.

The good news is that we have a whole category of tools to modulate the risk that corporate actions pose for our lives, starting with regulation. Regulations can come in the form of restrictions on activity, such as limitations on what kinds of businesses and products are allowed to incorporate AI tools. They can come in the form of transparency rules, requiring disclosure of what data sets are used to train AI models or what new preproduction-phase models are being trained. And they can come in the form of oversight and accountability requirements, allowing for civil penalties in cases where companies disregard the rules.

The single biggest point of leverage governments have when it comes to tech companies is antitrust law. Despite what many lobbyists want you to think, one of the primary roles of regulation is to preserve competition—not to make life harder for businesses. It is not inevitable for OpenAI to become another Meta, an 800-pound gorilla whose user base and reach are several times those of its competitors. In addition to strengthening and enforcing antitrust law, we can introduce regulation that supports competition-enabling standards specific to the technology sector, such as data portability and device interoperability. This is another core strategy for resisting monopoly and corporate control.

Additionally, governments can enforce existing regulations on advertising. Just as the US regulates what media can and cannot host advertisements for sensitive products like cigarettes, and just as many other jurisdictions exercise strict control over the time and manner of politically sensitive advertising, so too could the US limit the engagement between AI providers and advertisers.

Lastly, we should recognize that developing and providing AI tools does not have to be the sovereign domain of corporations. We, the people and our government, can do this too. The proliferation of open-source AI development in 2023, successful to an extent that startled corporate players, is proof of this. And we can go further, calling on our government to build public-option AI tools developed with political oversight and accountability under our democratic system, where the dictatorship of the profit motive does not apply.

Which of these solutions is most practical, most important, or most urgently needed is up for debate. We should have a vibrant societal dialogue about whether and how to use each of these tools. There are lots of paths to a good outcome.

The problem is that this isn’t happening now, particularly in the US. And with a looming presidential election, conflict spreading alarmingly across Asia and Europe, and a global climate crisis, it’s easy to imagine that we won’t get our arms around AI any faster than we have (not) with social media. But it’s not too late. These are still the early years for practical consumer AI applications. We must and can do better.

Feature Image Credit: STEPHANIE ARNETT/MITTR | GETTY, ENVATO

By Nathan E. Sanders and Bruce Schneier

Nathan E. Sanders is a data scientist and an affiliate with the Berkman Klein Center at Harvard University. Bruce Schneier is a security technologist and a fellow and lecturer at the Harvard Kennedy School.

Sourced from MIT Technology Review

BY BILLY JONES.

Hootsuite’s VP of marketing explains how incorporating AI as an integral part of strategy and brainstorming processes has transformed everything.


In the fast-paced world of marketing, I’ve always approached creativity as an organization’s bread and butter, with innovation as the knife that spreads it. As the VP of marketing at Hootsuite, I’ve found an unexpected ally in this creative quest—artificial intelligence and, more specifically, ChatGPT.

I’ve incorporated AI as an integral part of my strategy and brainstorming process in the past year—transforming the way I think, create, and deliver business value for my organization. Here are five ways it’s made an impact.

REINVENTING THE CREATIVE BRIEF

My years in agency life at BBDO have ingrained in me a love for structured creativity. The “Get-Who-To-By-Because” brief has always been a staple in my toolbox. It helps me zero in on who I am trying to target, and pushes me to identify the pain point I am trying to solve, how I plan to solve it, the key message that I’m trying to drive home, and the why behind the entire campaign.

Recently I began using ChatGPT to reframe these briefs. By feeding it relevant information and asking for multiple versions of a brief within the “Get-Who-To-By-Because” format, I’ve been amazed by the unexpected perspectives it offers. This process has helped fuel my creativity. Coupled with my experience in the creative space and deep understanding of my customer, it ensures that the final output is both human-centric and insight-driven.
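
To show the shape of such a request (my own illustrative construction, not Jones’s actual prompt; the campaign details are placeholders), a reusable version might look like this:

    # Illustrative sketch of asking a chat model for multiple
    # "Get-Who-To-By-Because" briefs. Not the author's actual prompt;
    # the campaign notes below are placeholders.
    from openai import OpenAI

    client = OpenAI()  # assumes an OpenAI API key in the environment

    campaign_notes = """
    Product: social media scheduling tool
    Audience: overworked social media managers at mid-size brands
    Pain point: juggling six networks with no time to plan ahead
    """

    prompt = (
        "Using the Get-Who-To-By-Because creative brief format "
        "(Get [audience] Who [pain point] To [desired action] "
        "By [key message] Because [reason to believe]), "
        "write three distinct briefs from these notes:\n" + campaign_notes
    )

    resp = client.chat.completions.create(
        model="gpt-4o",  # model choice is an assumption
        messages=[{"role": "user", "content": prompt}],
    )
    print(resp.choices[0].message.content)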

CRAFTING TARGET PERSONAS WITH PRECISION

We all know that data is king. But the interpretation of any data is the key to the kingdom. ChatGPT’s ability to dive into vast public data pools has been a game changer for developing customer personas.

For instance, I asked ChatGPT to define the core demographics of North American social media managers.

From there, I used that very demographic output as an input to a user persona framework. ChatGPT was able to create detailed user personas that captured everything from challenges and joys to the preferred technology stack, budget, and even their favored media outlets. These insights have been invaluable in refining my team’s content and paid media strategies.

ENHANCING RESPONSE-BASED ADVERTISING

In marketing’s creative landscape, a tactical approach is sometimes crucial. ChatGPT excels here, notably during a time-strapped holiday season. Tasked with creating a compelling email for a January webinar with little time and lots of folks on holiday, we prompted ChatGPT with our holiday webinar theme “Supercharge Your 24 Social Strategy” and asked it to help us craft a click-worthy email via the AIDA (Attention-Interest-Desire-Action) framework. The outcome was a remarkable 300% increase in click-through rates, showcasing AI’s power in strategic, responsive advertising.
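
For readers who want to reproduce the workflow, a request along these lines (a sketch of the AIDA prompt described above, not Hootsuite’s exact wording) would be enough to start from:

    # Illustrative sketch of an AIDA-framework email request.
    # Not Hootsuite's actual prompt; details come from the article.
    from openai import OpenAI

    client = OpenAI()  # assumes an OpenAI API key in the environment

    resp = client.chat.completions.create(
        model="gpt-4o",  # model choice is an assumption
        messages=[{
            "role": "user",
            "content": (
                "Write a promotional email for a January webinar titled "
                "'Supercharge Your 24 Social Strategy' using the AIDA "
                "framework: a subject line that grabs Attention, an opening "
                "that builds Interest, a body that creates Desire, and a "
                "closing call to Action with a registration-link placeholder."
            ),
        }],
    )
    print(resp.choices[0].message.content)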

ACKNOWLEDGING THE POWER OF AI EDITING

From crafting a Slack message to assisting with internal briefs, ChatGPT has been my go-to editor. Its ability to tailor certain messages to specific communication styles—such as being jargon-free—is nothing short of impressive. This has enhanced the clarity and impact of my communications across the board.

SERVING AS A CREATIVE ARCHIVIST

In preparing for a product launch, ChatGPT has served me well as a creative archivist—providing insights on past marketing campaigns from companies that have similarly launched disruptive products.

By getting specific about the tactics that drove those successes, ChatGPT has helped shape our approach to generating fame and achieving widespread industry impact.

These are just a few examples of how I’ve used AI in the past year. It has played multiple roles—from a strategist and brainstorming partner to a copywriter and researcher.

Throughout all of this, it’s important to remember that AI is a tool and not a replacement for human creativity.

To me, AI provides deep insights based on what’s been done. But it’s our creativity that dreams up ideas that have never been done. As we continue to harness AI’s power, it’s our human touch that will continue to make a real difference in the world of marketing.

Feature Image Credit: Getty Images

BY BILLY JONES.

Billy Jones is the VP of Marketing at Hootsuite.

Sourced from FastCompany

BY MICHAEL GROTHAUS.

People who work in IT, software development, and advertising appear to be the most anxious.

Mass layoffs in the tech industry have made headlines nearly every week since late 2022. Combine that constant barrage with the rise of AI and uncertainty over the global economy and you have the perfect recipe for increasing anxiety across the American workforce when it comes to fears about job security.

Now a new survey from online marketing firm Authority Hacker puts some concrete numbers on just how many of the currently employed are worried about their job security in the years ahead. In a survey of 1,200 workers, Authority Hacker found that:

  • 54.58% of full-time workers have increased concerns about their job security.
  • Men (62.87%) are more likely than women (47.53%) to fear for their job security, which Authority Hacker says may reflect the 3:1 gender ratio of male to female employees in tech firms.
  • The more a person makes, the more likely they are to worry about their job security. Those making $150,000 or more worry the most about their job security (72.48%), while those making $50,000 or less worry the least (50.26%).
  • The younger an employee, the more likely they are to worry about their job security: 62.2% of 25-to-44-year-olds are worried, versus less than 50% of those over the age of 45.
  • C-suite execs are the most worried about their job security at 79.31%.
  • But just 46.82% and 45.80% of non-management staff and admin staff, respectively, are worried about their job security.

The larger the company is, the more likely employees are to worry about their job security. Authority Hacker found that 74.33% of those at companies that employ between 500 and 1,000 workers worry about their job security, while only 45.38% of workers at companies with 25 or fewer employees worry about their job security.

And when it comes to concerns by profession, workers most likely to fear for their jobs happen to be those whose industries are most at risk of being impacted by AI. Those professions are:

  1. IT – Services & Data: 89.66%
  2. Software development: 74.42%
  3. Advertising: 70.00%
  4. Finance and Insurance: 67.56%
  5. Human Resources: 64.29%


To arrive at its findings, Authority Hacker surveyed 1,200 full-time workers in the United States aged 25 and above.

Feature Image Credit: Aziz Acharki/Unsplash, Richard Horvath/Unsplash

BY MICHAEL GROTHAUS

Michael Grothaus is a novelist and author. He has written for Fast Company since 2013, where he’s interviewed some of the tech industry’s most prominent leaders and writes about everything from Apple and artificial intelligence to the effects of technology on individuals and society. Michael’s current tech-focused areas of interest include AI, quantum computing, and the ways tech can improve the quality of life for the elderly and individuals with disabilities.

Sourced from FastCompany

By Alessio Francesco Fedeli

The current digital landscape is marked by the struggle to achieve visibility for your business online and to reach the right audience amid a wave of competition. Search engine marketing (SEM) offers pivotal strategies for achieving this, and ongoing advancements in artificial intelligence (AI) and machine learning are giving marketers even greater opportunities for growth. These advancements are revolutionising SEM and will significantly enhance the efficiency and effectiveness of business campaigns.

AI-enhanced SEM tools stand at the vanguard of this revolution, utilizing advanced algorithms and machine learning capabilities to transform every facet of search engine marketing comprehensively. From automating the process of keyword research to refining advertisement creation, and from optimising bid management to improving performance analysis, these tools furnish marketers with the capacity to attain exceptional outcomes. They transcend conventional tool functionality; they act as catalysts for change, facilitating precise targeting and real-time modifications previously considered unattainable.

Exploring further into AI and machine learning within SEM reveals that these technologies are not only augmenting existing methodologies but also fostering novel strategies. Marketers harnessing these tools gain the ability to predict market trends accurately, comprehend consumer behaviour with enhanced precision, and implement campaigns that are both cost-efficient and high-impact. The advent of AI-driven SEM marks a transformative era in digital advertising, reshaping the landscape in ways that are beginning to unfold.

Leveraging AI and machine learning in SEM


The role of AI in search engine marketing

AI revolutionises SEM by making complex tasks simple. It sifts through vast datasets to unearth insights beyond human capability. By fine-tuning keyword research and bid optimisation, AI ensures ads hit the mark every time. It doesn’t stop there; AI tailors ad content for individual users, predicting trends and making swift, informed decisions. This not only sharpens the marketer’s toolbox but also enhances the consumer’s journey, significantly boosting conversion rates. With AI in SEM, ads become more than just noise; they’re strategic moves in the digital marketplace.

Benefits of using machine learning in SEM

Although there is some apprehension among marketers, it is important to understand the benefits of incorporating machine learning into your SEM strategy.

Benefits of machine learning in SEM:

  • Enhanced targeting accuracy: by analysing user data, machine learning identifies the most relevant audience segments, improving the precision of targeting efforts.
  • Optimised bid adjustments: machine learning algorithms navigate the volatile bidding landscape, making real-time adjustments to maximize ROI.
  • Improved ad performance: machine learning analyses what works best, from copy to design, ensuring optimal engagement and conversion rates.
  • Fraud detection and protection: machine learning acts as a guardian against click fraud, safeguarding advertising budgets by spotting and mitigating fraudulent activities.

This integration offers strategic advantages that enable marketers to be more effective in a competitive digital landscape. By implementing machine learning, businesses can not only optimise their advertising efforts but also protect their investments, so that every dollar spent works towards tangible results. The sketch below illustrates the bid-adjustment idea in miniature.
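
As a toy example of the bid-adjustment benefit (a minimal sketch of one common target-CPA rule, not any ad platform’s actual algorithm; the target figure and keyword data are invented):

    # Minimal sketch of a target-CPA bid adjustment rule: scale each
    # keyword's bid by how far its observed CPA sits from the target.
    # Illustrative only; real ad platforms use far richer models.

    TARGET_CPA = 40.0   # assumed target cost per acquisition, in dollars
    MAX_STEP = 0.25     # cap each move at +/-25% to damp volatility

    def adjust_bid(current_bid, spend, conversions):
        if conversions == 0:
            return current_bid * (1 - MAX_STEP)  # pull back on non-converters
        observed_cpa = spend / conversions
        ratio = TARGET_CPA / observed_cpa  # >1 means cheaper than target: bid up
        ratio = max(1 - MAX_STEP, min(1 + MAX_STEP, ratio))
        return current_bid * ratio

    keywords = {
        "ai sem tools": (2.50, 300.0, 10),  # bid, spend, conversions
        "ppc automation": (1.80, 200.0, 2),
    }
    for kw, (bid, spend, conv) in keywords.items():
        print(f"{kw}: {bid:.2f} -> {adjust_bid(bid, spend, conv):.2f}")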

Incorporating AI and machine learning technologies in SEM campaigns

Choosing the right AI tools is the first step to SEM success. The ideal tool offers a comprehensive suite for managing keywords, bids, ads, and performance, fitting seamlessly into your marketing stack. On the machine learning front, clarity in objectives paves the way for impactful integration. Whether aiming for higher CTRs or lower CPA, leveraging historical data and machine learning algorithms to predict and adjust is key. Constant experimentation and analysis refine strategies, ensuring SEM campaigns not only meet but exceed expectations. In the rapidly evolving world of SEM, AI and machine learning are not just options but necessities.

Strategies for successful implementation


In the evolving landscape of search engine marketing (SEM), leveraging AI and machine learning can set a campaign apart, maximising efficiency and returns. Below are strategies detailing how to integrate these advanced technologies effectively.

Choosing the right AI tools for SEM

In the realm of SEM, it is critical to select AI tools that are congruent with your marketing objectives. The market is replete with a myriad of options, each purporting to transform your SEM strategies radically. Nonetheless, not every tool offers equal value. It is advisable to opt for tools that provide an extensive analysis of keywords, insights into competitors, and capabilities for automated bid management. These functionalities ensure that your campaigns are both precisely targeted and economically efficient. Furthermore, the implementation of AI-driven tools for content optimisation can notably increase ad relevance, thereby enhancing click-through rates (CTR) and reducing cost per acquisition (CPA).

Conducting trials with various tools before finalizing a decision is imperative to identify a solution specifically tailored to your requirements. Platforms offering advanced analytics should be given priority, as they afford actionable insights critical for ongoing refinement. It is important to recognize that the effective use of AI in SEM transcends merely selecting cutting-edge technology; it encompasses the strategic application of these tools to continually refine and advance marketing strategies over time.

Integrating machine learning algorithms into SEM practices

Machine learning algorithms constitute a cornerstone in the advancement of Search Engine Marketing (SEM) strategies, offering unprecedented insights into consumer behaviour and preferences. To capitalize on this opportunity, it is essential to integrate machine learning SEM technologies, emphasizing predictive analytics. Such an approach enables a deeper understanding of the interactions between different demographics and your advertisements, thereby improving audience segmentation.

Moreover, machine learning capabilities enable the automation of the most labour-intensive tasks within SEM, including bid management and A/B testing. This automation not only conserves precious time but also markedly elevates the efficiency of marketing campaigns. By adapting SEM practices to incorporate these algorithms, advertisements are perpetually optimised for performance, obviating the need for continuous manual intervention.
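
To ground the A/B-testing point, here is a minimal sketch (my illustration, with invented numbers) of the kind of significance check an automated system can run on two ad variants before promoting a winner:

    # Minimal sketch of automating an A/B test between two ad variants:
    # a two-proportion z-test on click-through counts. Numbers are invented.
    import math

    def ab_test(clicks_a, views_a, clicks_b, views_b):
        p_a, p_b = clicks_a / views_a, clicks_b / views_b
        pooled = (clicks_a + clicks_b) / (views_a + views_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
        return p_a, p_b, (p_b - p_a) / se

    p_a, p_b, z = ab_test(clicks_a=210, views_a=10_000,
                          clicks_b=260, views_b=10_000)
    print(f"CTR A: {p_a:.2%}, CTR B: {p_b:.2%}, z = {z:.2f}")
    # |z| > 1.96 means the difference is significant at the 5% level;
    # an automated loop would promote variant B and queue the next test.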

The fusion of machine learning’s predictive analytics with AI-enabled creative optimisation represents a pivotal evolution in Search Engine Marketing (SEM) strategies. This integrative approach allows for the real-time modification of advertisement components, including imagery and text, to better match user intentions, thereby markedly enhancing campaign outcomes.

Employing machine learning and AI within SEM goes beyond simply embracing cutting-edge technology; it denotes an ongoing dedication to a cycle of testing, education, and improvement. This dedication positions marketing endeavours at the vanguard of innovation during a period marked by rapid digital change.

Measuring success and ROI


Utilising metrics and KPIs to evaluate AI and machine learning impact

The integration of Artificial Intelligence (AI) and Machine Learning (ML) into Search Engine Marketing (SEM) strategies has profoundly altered the approaches utilized by digital marketing experts.

  • For an accurate assessment of the effectiveness of these advanced SEM technologies, focusing on relevant metrics and Key Performance Indicators (KPIs) is essential.
  • These criteria provide a transparent evaluation of the performance enhancements brought about by AI and ML.
  • They enable organizations to measure success and calculate Return on Investment (ROI) with greater accuracy.

Primarily, conversion rates emerge as a crucial metric. They serve as direct indicators of the efficiency of AI-enhanced ad targeting and bid management strategies, reflecting whether such technological advancements result in an increased proportion of visitors performing desired actions, such as completing purchases or registering for newsletters.

Cost per Acquisition (CPA) represents another fundamental metric. It illustrates the effectiveness with which AI and ML tools manage advertising expenditures to secure new clientele. Reduced CPA values indicate that these advanced SEM technologies are not only pinpointing the appropriate audience but also achieving this in a financially prudent manner.

Click-through rates (CTR) hold significant importance as well. An elevated CTR signifies that the predictive analytics and automated content optimisation facilitated by AI are effectively engaging the target demographic, thereby increasing their propensity to interact with advertisements.

Moreover, Return on Ad Spend (ROAS) is an essential measure of overall operational efficacy. It quantifies the revenue generated for every unit of currency expended on SEM initiatives. An enhancement in ROAS denotes that integrating AI and ML into SEM strategies is yielding more lucrative campaigns.
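
Since all four KPIs are simple ratios, a compact sketch (with invented campaign numbers) shows exactly how they are computed and read:

    # The four SEM KPIs discussed above, as simple ratios.
    # Campaign numbers are invented for illustration.
    visits, conversions = 20_000, 500
    clicks, impressions = 20_000, 400_000
    spend, revenue = 10_000.0, 45_000.0

    conversion_rate = conversions / visits   # desired actions per visit
    cpa = spend / conversions                # cost per acquisition
    ctr = clicks / impressions               # clicks per impression
    roas = revenue / spend                   # revenue per ad dollar

    print(f"Conversion rate: {conversion_rate:.1%}")  # 2.5%
    print(f"CPA:  ${cpa:.2f}")                        # $20.00
    print(f"CTR:  {ctr:.1%}")                         # 5.0%
    print(f"ROAS: {roas:.1f}x")                       # 4.5x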

Through meticulous observation of these metrics, organizations can comprehensively assess the impact of Artificial Intelligence (AI) and Machine Learning (ML) on their Search Engine Marketing (SEM) strategies. This analysis highlights not only the achievement of set goals but also identifies potential areas for enhancement. As AI and ML evolve, securing a competitive advantage in SEM requires ongoing vigilance and an adaptable methodology informed by data-driven insights.

Utilising machine learning and AI is important in the pursuit of success in digital marketing. However, SEM is just one marketing method, standing shoulder to shoulder with others like SEO. Knowing the difference between the two will help you determine whether to use one or both for a more prosperous digital marketing campaign.

Feature Image Credit: This photo was generated using Dall-E

By Alessio Francesco Fedeli

A graduate of Webster University with a degree in Management with an emphasis on International Business, Alessio is a Thai-Italian with a multicultural perspective on Thailand and abroad. By the same token, as someone passionate about sports and activities, Alessio also gives insight into various spots for a fun and healthy lifestyle.

Sourced from Thaiger

&

Social media’s unregulated evolution over the past decade holds a lot of lessons that apply directly to AI companies and technologies.

Oh, how the mighty have fallen. A decade ago, social media was celebrated for sparking democratic uprisings in the Arab world and beyond. Now front pages are splashed with stories of social platforms’ role in misinformation, business conspiracymalfeasance, and risks to mental health. In a 2022 survey, Americans blamed social media for the coarsening of our political discourse, the spread of misinformation, and the increase in partisan polarization.

Today, tech’s darling is artificial intelligence. Like social media, it has the potential to change the world in many ways, some favourable to democracy. But at the same time, it has the potential to do incredible damage to society.

There is a lot we can learn about social media’s unregulated evolution over the past decade that directly applies to AI companies and technologies. These lessons can help us avoid making the same mistakes with AI that we did with social media.

In particular, five fundamental attributes of social media have harmed society. AI also has those attributes. Note that they are not intrinsically evil. They are all double-edged swords, with the potential to do either good or ill. The danger comes from who wields the sword, and in what direction it is swung. This has been true for social media, and it will similarly hold true for AI. In both cases, the solution lies in limits on the technology’s use.

#1: Advertising

The role advertising plays in the internet arose more by accident than anything else. When commercialization first came to the internet, there was no easy way for users to make micropayments to do things like viewing a web page. Moreover, users were accustomed to free access and wouldn’t accept subscription models for services. Advertising was the obvious business model, if never the best one. And it’s the model that social media also relies on, which leads it to prioritize engagement over anything else.

Both Google and Facebook believe that AI will help them keep their stranglehold on an 11-figure online ad market (yep, 11 figures), and the tech giants that are traditionally less dependent on advertising, like Microsoft and Amazon, believe that AI will help them seize a bigger piece of that market.

Big Tech needs something to persuade advertisers to keep spending on their platforms. Despite bombastic claims about the effectiveness of targeted marketing, researchers have long struggled to demonstrate where and when online ads really have an impact. When major brands like Uber and Procter & Gamble recently slashed their digital ad spending by the hundreds of millions, they proclaimed that it made no dent at all in their sales.

AI-powered ads, industry leaders say, will be much better. Google assures you that AI can tweak your ad copy in response to what users search for, and that its AI algorithms will configure your campaigns to maximize success. Amazon wants you to use its image generation AI to make your toaster product pages look cooler. And IBM is confident its Watson AI will make your ads better.

These techniques border on the manipulative, but the biggest risk to users comes from advertising within AI chatbots. Just as Google and Meta embed ads in your search results and feeds, AI companies will be pressured to embed ads in conversations. And because those conversations will be relational and human-like, they could be more damaging. While many of us have gotten pretty good at scrolling past the ads in Amazon and Google results pages, it will be much harder to determine whether an AI chatbot is mentioning a product because it’s a good answer to your question or because the AI developer got a kickback from the manufacturer.

#2: Surveillance

Social media’s reliance on advertising as the primary way to monetize websites led to personalization, which led to ever-increasing surveillance. To convince advertisers that social platforms can tweak ads to be maximally appealing to individual people, the platforms must demonstrate that they can collect as much information about those people as possible.

It’s hard to exaggerate how much spying is going on. A recent analysis by Consumer Reports about Facebook—just Facebook—showed that every user has more than 2,200 different companies spying on their web activities on its behalf.

AI-powered platforms that are supported by advertisers will face all the same perverse and powerful market incentives that social platforms do. It’s easy to imagine that a chatbot operator could charge a premium if it were able to claim that its chatbot could target users on the basis of their location, preference data, or past chat history and persuade them to buy products.

The possibility of manipulation is only going to get greater as we rely on AI for personal services. One of the promises of generative AI is the prospect of creating a personal digital assistant advanced enough to act as your advocate with others and as a butler to you. This requires more intimacy than you have with your search engine, email provider, cloud storage system, or phone. You’re going to want it with you constantly, and to most effectively work on your behalf, it will need to know everything about you. It will act as a friend, and you are likely to treat it as such, mistakenly trusting its discretion.

Even if you choose not to willingly acquaint an AI assistant with your lifestyle and preferences, AI technology may make it easier for companies to learn about you. Early demonstrations illustrate how chatbots can be used to surreptitiously extract personal data by asking you mundane questions. And with chatbots increasingly being integrated with everything from customer service systems to basic search interfaces on websites, exposure to this kind of inferential data harvesting may become unavoidable.

#3: Virality

Social media allows any user to express any idea with the potential for instantaneous global reach. A great public speaker standing on a soapbox can spread ideas to maybe a few hundred people on a good night. A kid with the right amount of snark on Facebook can reach a few hundred million people within a few minutes.

A decade ago, technologists hoped this sort of virality would bring people together and guarantee access to suppressed truths. But as a structural matter, it is in a social network’s interest to show you the things you are most likely to click on and share, and the things that will keep you on the platform.

As it happens, this often means outrageous, lurid, and triggering content. Researchers have found that content expressing maximal animosity toward political opponents gets the most engagement on Facebook and Twitter. And this incentive for outrage drives and rewards misinformation.

As Jonathan Swift once wrote, “Falsehood flies, and the Truth comes limping after it.” Academics seem to have proved this in the case of social media; people are more likely to share false information—perhaps because it seems more novel and surprising. And unfortunately, this kind of viral misinformation has been pervasive.

AI has the potential to supercharge the problem because it makes content production and propagation easier, faster, and more automatic. Generative AI tools can fabricate unending numbers of falsehoods about any individual or theme, some of which go viral. And those lies could be propelled by social accounts controlled by AI bots, which can share and launder the original misinformation at any scale.

Remarkably powerful AI text generators and autonomous agents are already starting to make their presence felt in social media. In July, researchers at Indiana University revealed a botnet of more than 1,100 Twitter accounts that appeared to be operated using ChatGPT.

AI will help reinforce viral content that emerges from social media. It will be able to create websites and web content, user reviews, and smartphone apps. It will be able to simulate thousands, or even millions, of fake personas to give the mistaken impression that an idea, or a political position, or use of a product, is more common than it really is. What we might perceive to be vibrant political debate could be bots talking to bots. And these capabilities won’t be available just to those with money and power; the AI tools necessary for all of this will be easily available to us all.

#4: Lock-in

Social media companies spend a lot of effort making it hard for you to leave their platforms. It’s not just that you’ll miss out on conversations with your friends. They make it hard for you to take your saved data—connections, posts, photos—and port it to another platform. Every moment you invest in sharing a memory, reaching out to an acquaintance, or curating your follows on a social platform adds a brick to the wall you’d have to climb over to go to another platform.

This concept of lock-in isn’t unique to social media. Microsoft cultivated proprietary document formats for years to keep you using its flagship Office product. Your music service or e-book reader makes it hard for you to take the content you purchased to a rival service or reader. And if you switch from an iPhone to an Android device, your friends might mock you for sending text messages in green bubbles. But social media takes this to a new level. No matter how bad it is, it’s very hard to leave Facebook if all your friends are there. Coordinating everyone to leave for a new platform is impossibly hard, so no one does.

Similarly, companies creating AI-powered personal digital assistants will make it hard for users to transfer that personalization to another AI. If AI personal assistants succeed in becoming massively useful time-savers, it will be because they know the ins and outs of your life as well as a good human assistant; would you want to give that up to make a fresh start on another company’s service? In extreme examples, some people have formed close, perhaps even familial, bonds with AI chatbots. If you think of your AI as a friend or therapist, that can be a powerful form of lock-in.

Lock-in is an important concern because it results in products and services that are less responsive to customer demand. The harder it is for you to switch to a competitor, the more poorly a company can treat you. Absent any way to force interoperability, AI companies have less incentive to innovate in features or compete on price, and fewer qualms about engaging in surveillance or other bad behaviours.

#5: Monopolization

Social platforms often start off as great products, truly useful and revelatory for their consumers, before they eventually start monetizing and exploiting those users for the benefit of their business customers. Then the platforms claw back the value for themselves, turning their products into truly miserable experiences for everyone. This is a cycle that Cory Doctorow has powerfully written about and traced through the history of Facebook, Twitter, and more recently TikTok.

The reason for these outcomes is structural. The network effects of tech platforms push a few firms to become dominant, and lock-in ensures their continued dominance. The incentives in the tech sector are so spectacularly, blindingly powerful that they have enabled six megacorporations (Amazon, Apple, Google, Facebook parent Meta, Microsoft, and Nvidia) to command a trillion dollars each of market value—or more. These firms use their wealth to block any meaningful legislation that would curtail their power. And they sometimes collude with each other to grow yet fatter.

This cycle is clearly starting to repeat itself in AI. Look no further than the industry poster child OpenAI, whose leading offering, ChatGPT, continues to set marks for uptake and usage. Within a year of the product’s launch, OpenAI’s valuation had skyrocketed to about $90 billion.

OpenAI once seemed like an “open” alternative to the megacorps—a common carrier for AI services with a socially oriented nonprofit mission. But the Sam Altman firing-and-rehiring debacle at the end of 2023, and Microsoft’s central role in restoring Altman to the CEO seat, simply illustrated how venture funding from the familiar ranks of the tech elite pervades and controls corporate AI. In January 2024, OpenAI took a big step toward monetization of this user base by introducing its GPT Store, wherein one OpenAI customer can charge another for the use of its custom versions of OpenAI software; OpenAI, of course, collects revenue from both parties. This sets in motion the very cycle Doctorow warns about.

In the middle of this spiral of exploitation, little or no regard is paid to externalities visited upon the greater public—people who aren’t even using the platforms. Even after society has wrestled with their ill effects for years, the monopolistic social networks have virtually no incentive to control their products’ environmental impact, tendency to spread misinformation, or pernicious effects on mental health. And the government has applied virtually no regulation toward those ends.

Likewise, few or no guardrails are in place to limit the potential negative impact of AI. Facial recognition software that amounts to racial profiling, simulated public opinions supercharged by chatbots, fake videos in political ads—all of it persists in a legal grey area. Even clear violators of campaign advertising law might, some think, be let off the hook if they simply do it with AI.

Mitigating the risks

The risks that AI poses to society are strikingly familiar, but there is one big difference: it’s not too late. This time, we know it’s all coming. Fresh off our experience with the harms wrought by social media, we have all the warning we should need to avoid the same mistakes.

The biggest mistake we made with social media was leaving it as an unregulated space. Even now—after all the studies and revelations of social media’s negative effects on kids and mental health, after Cambridge Analytica, after the exposure of Russian intervention in our politics, after everything else—social media in the US remains largely an unregulated “weapon of mass destruction.” Congress will take millions of dollars in contributions from Big Tech, and legislators will even invest millions of their own dollars with those firms, but passing laws that limit or penalize their behaviour seems to be a bridge too far.

We can’t afford to do the same thing with AI, because the stakes are even higher. The harm social media can do stems from how it affects our communication. AI will affect us in the same ways and many more besides. If Big Tech’s trajectory is any signal, AI tools will increasingly be involved in how we learn and how we express our thoughts. But these tools will also influence how we schedule our daily activities, how we design products, how we write laws, and even how we diagnose diseases. The expansive role of these technologies in our daily lives gives for-profit corporations opportunities to exert control over more aspects of society, and that exposes us to the risks arising from their incentives and decisions.

The good news is that we have a whole category of tools to modulate the risk that corporate actions pose for our lives, starting with regulation. Regulations can come in the form of restrictions on activity, such as limitations on what kinds of businesses and products are allowed to incorporate AI tools. They can come in the form of transparency rules, requiring disclosure of what data sets are used to train AI models or what new preproduction-phase models are being trained. And they can come in the form of oversight and accountability requirements, allowing for civil penalties in cases where companies disregard the rules.

The single biggest point of leverage governments have when it comes to tech companies is antitrust law. Despite what many lobbyists want you to think, one of the primary roles of regulation is to preserve competition—not to make life harder for businesses. It is not inevitable for OpenAI to become another Meta, an 800-pound gorilla whose user base and reach are several times those of its competitors. In addition to strengthening and enforcing antitrust law, we can introduce regulation that supports competition-enabling standards specific to the technology sector, such as data portability and device interoperability. This is another core strategy for resisting monopoly and corporate control.

Additionally, governments can enforce existing regulations on advertising. Just as the US regulates what media can and cannot host advertisements for sensitive products like cigarettes, and just as many other jurisdictions exercise strict control over the time and manner of politically sensitive advertising, so too could the US limit the engagement between AI providers and advertisers.

Lastly, we should recognize that developing and providing AI tools does not have to be the sovereign domain of corporations. We, the people and our government, can do this too. The proliferation of open-source AI development in 2023, successful to an extent that startled corporate players, is proof of this. And we can go further, calling on our government to build public-option AI tools developed with political oversight and accountability under our democratic system, where the dictatorship of the profit motive does not apply.

Which of these solutions is most practical, most important, or most urgently needed is up for debate. We should have a vibrant societal dialogue about whether and how to use each of these tools. There are lots of paths to a good outcome.

The problem is that this isn’t happening now, particularly in the US. And with a looming presidential election, conflict spreading alarmingly across Asia and Europe, and a global climate crisis, it’s easy to imagine that we won’t get our arms around AI any faster than we have (not) with social media. But it’s not too late. These are still the early years for practical consumer AI applications. We must and can do better.

Feature Image Credit: STEPHANIE ARNETT/MITTR | GETTY, ENVATO


Nathan E. Sanders is a data scientist and an affiliate with the Berkman Klein Center at Harvard University. Bruce Schneier is a security technologist and a fellow and lecturer at the Harvard Kennedy School.

Sourced from MIT Technology Review

 

 


By Webb Wright 

If you’re considering launching a new AI-centered brand or product, you may want to go beyond simply adding ‘AI’ to the end of the name.

The AI Gold Rush is in full swing and brands of all stripes are rushing to establish their particular niches in this hugely profitable and increasingly crowded industry. New AI-centered brands, departments and products are cropping up by the day, each requiring a name that is, ideally, both memorable and unique.

“Every single company, whether a candy bar manufacturer or a software company, seemingly has to show that it is doing something to leverage AI,” says Jonathan Bell, founder and CEO of Want Branding. “And that often requires some kind of adjacent brand, which, of course, then needs a name.”

Several brands, as you may have noticed, have simply taken to adding ‘AI’ (or ‘.AI’) to the ends of their names. Think Stability AI, Spot AI, Mistral AI, Shield AI, People.ai, Otter.ai, Arize AI, Crowd AI, Toggle AI and so on. And, of course, there’s OpenAI, which has become something of a flagship for the current wave of AI innovation since the hugely successful launch of ChatGPT in late 2022 – and which has probably done more than any other company to establish the ‘AI’ suffix as the name du jour for up-and-coming brands looking to make a name for themselves in the industry.

Adding ‘AI’ to the end of a brand or product name “is an easy but often perhaps a cheap way of doing it without much thought,” says Bell.

A parallel can be drawn between this naming phenomenon and one from the dawn of the internet in the late 90s, when scores of new brands with ‘.com’ at the ends of their names began to emerge. In those early days of the world wide web, it made practical sense for companies to make unambiguously clear that they were technologically savvy enough to have an online presence. (Remember, this was back when ‘online’ was itself a new, hip word.)

Over many years, however, the internet became so deeply embedded in most of our day-to-day lives, in the very fabric of popular culture and commerce, that it became more or less superfluous to add ‘.com’ to the end of a brand name. Most people these days automatically assume that any given brand – unless it’s incompetent beyond belief or run by a group of Luddites – has a website and probably some degree of social media presence.

The ‘.com’ naming trend, in other words, began as a worthwhile marketing tactic, but “at a certain point that was eroded and it became meaningless,” says David Placek, founder and CEO of Lexicon Branding. There are still, of course, some brands (Hotels.com, for example) that have chosen to use their domain names as their official names, but such a strategy is far less common today than it was when the internet had the shiny-new-toy factor.

AI could follow a similar trajectory of cultural adoption as the internet’s: today, it’s all anyone can talk about; tomorrow, it’s basically taken for granted. Just as people today assume that brands have an online presence – even when they don’t have ‘.com’ in their names – we could soon reach a point at which AI is so ubiquitous, so deeply integrated into our devices and our modes of working and communicating with one another, that adding ‘AI’ to a brand or product name becomes passé. Placek says he’s “absolutely positive” that we’ll cross that threshold sometime within the next two years, after which point “everybody will assume that there’s something AI-related” built into most brands and products.

Given that forecast, adding ‘AI’ to the end of a name “can be a disservice for building brand strength over time, because [the market] becomes crowded,” says marketing agency Tenet Partners CEO Hampton Bridwell. “There are a lot of names with a similar sound or styling and that creates a situation where you don’t have differentiation or memorability within the name.”

Anthropomorphic names and the sad tale of Clippy

There have, of course, been other naming trends that have recently emerged around AI. For example, many AI-centered products have been given human-sounding names, apparently in an effort to make the underlying technology – which could potentially come across as a bit threatening to a culture that’s been weaned on films like 2001: A Space Odyssey and The Matrix – feel a bit less alien and intimidating.

Consider IBM’s Watson, a question-answering AI system that gained global fame when it won Jeopardy! in 2011. There are also more recent examples, including Siri (Apple), Alexa (Amazon) and Einstein (Salesforce).

As the journalist Charles Duhigg points out in a recent article in The New Yorker, Microsoft (which became a leader in the burgeoning AI industry following its recent multi-billion-dollar investments in OpenAI) has had to learn the hard way about the risks involved with trying to anthropomorphize AI. In 1996, the company introduced Clippy, a smiling virtual assistant with big eyes and a paperclip for a body, who could answer simple user questions on Microsoft Office platforms. The character became widely loathed by users. The Smithsonian called Clippy “one of the worst software design blunders in the annals of computing,” as Duhigg quotes in his article. Microsoft killed Clippy off in 2001.

The company once again tried its hand at anthropomorphizing algorithms in 2016 with the launch of Tay, an AI-powered chatbot whose conversational style reflected that of a typical teenage girl. Tay rather quickly descended into a fit of hate speech and was deactivated less than 24 hours after its launch.

Apparently wiser after the Clippy and Tay debacles, Microsoft is now naming its AI products in a manner that suggests utility and even a touch of fallibility. Copilot, the name of the company’s recently launched suite of AI-powered productivity tools, insinuates something that can be reasonably relied upon to provide a measure of assistance, not something into which one should invest one’s whole trust.

The curious case of ChatGPT

Perhaps the biggest irony in the realm of AI names is the fact that ChatGPT, the product that, more than any other, catalyzed the burgeoning AI Revolution, has such a widely disliked name.

For one thing, says Bridwell, the word ‘chat’ in a brand name “is pretty limiting – it really doesn’t embody what the whole thing is about in terms of [how] it delivers value. It’s a terrible name. Over time, [OpenAI] should really think about rebranding it.”

Even OpenAI CEO Sam Altman agrees that it’s not an ideal name. During a recent podcast hosted by comedian Trevor Noah, Altman said that ChatGPT is “a horrible name, but it may be too ubiquitous to ever change.”

ChatGPT’s suboptimal name could stem in part from the fact that the OpenAI team that built it did not initially expect it to become an uber-popular app. In the period leading up to its launch, it was referred to internally as a “low-key research preview”: a way for the public to begin interacting with OpenAI’s GPT large language model so that the company could collect feedback and fine-tune the technology accordingly.

Many within the OpenAI team were surprised when ChatGPT attracted its first million users in just five days, making it, by some measures, the fastest-growing consumer app in history.

Advice for marketers

According to Want Branding’s Jonathan Bell, brands that are looking to promote their use of AI through an optimized name should take their time. “It needs to be well thought-out,” he says. “It shouldn’t be something that’s done casually over a quick meeting, where you just simply add ‘AI’ to [the name]. Companies need to think about: What are they specifically doing? Can they deploy AI in a way that is really effective, or is this something that’s been done that could come across as bandwagon-jumping?”

Placek, who’s prone to referencing cognitive science and linguistics when discussing the psychology of brand- and product-naming, highlights the importance of sound symbolism – that is, the associations between particular sounds and the concepts that they evoke in the mind of the hearer. “You don’t want something too soft and you don’t want something too clever,” he says. “[You want something that’s] a little bit on the more serious side that [suggests] intelligence … sound symbolism should play a role in selecting and developing your names.”

When prompted to describe the qualities of a great name for an AI brand or product in fewer than 10 words, ChatGPT wrote: “Memorable, clear, unique, relevant, easy to pronounce, globally appealing, scalable.”

Feature Image Credit: Adobe Stock

By Webb Wright 

Sourced from The Drum

By Alon Goren

At this point, most enterprises are dabbling in generative AI or planning to leverage the technology soon.

According to an October 2023 Gartner, Inc. survey, 45% of organizations are currently piloting generative AI, while 10% have deployed it in full production. Companies are eager to move from pilot to production and start seeing some real business results.

However, enterprises getting started with generative AI often run into a common stumbling block right out of the gate: They suffer analysis paralysis before they can even begin using the technology. There are tons of generative AI tools available today, both broad and highly specialized. Moreover, these tools can be leveraged for all sorts of professions and business purposes—sales, product development, finance, etc.

With so many choices and possibilities, enterprises often get stuck in the planning phase—debating where they should deploy generative AI first. Every business unit (and all of the business’s key stakeholders) wants to own a part of the company’s generative AI initiatives.

Things can get messy. To stay on track, businesses should follow these guidelines when experimenting with generative AI.

Focus On Specific Use Cases With Measurable Goals

Enterprises need to recognize that every part of the organization can benefit from generative AI—eventually. To get there, however, they need to get off the ground with a pilot project.

How do you decide where to get started? Keep it simple and identify a small, specific problem that exists today that can be improved with generative AI. Be practical. Choose an issue that’s been challenging the business for a while, has been difficult to fix in the past and will make a visibly positive impact once resolved. Next, enterprises need to agree upon metrics and goals. The problem can’t be too nebulous or vague; the impact of AI (success or failure) has to be easily measurable.

With that in mind, the pilot project should have a contained scope. The purpose is to demonstrate the real-world value of the technology, build support for it across the organization and then broaden adoption from there.

If organizations try to leverage AI in too many different ways and solve multiple problems, it’ll cause the scope to grow out of control and make it impossible to complete the pilot within a reasonable timeframe. Ambition has to be balanced with practicality. Launching a massive pilot project that requires extensive resources and long timelines is a recipe for failure.

What’s a good timeline for the pilot? It depends on the circumstances, of course. Generally speaking, however, it should only take a few weeks or a couple of months to execute, not multiple quarters or an entire year.

Start small, get something functional quickly and then iterate on it. This iterative approach allows for continuous learning and improvement, which is essential given the nascent state of generative AI technology.

Organizations must also be sure to keep humans in the loop from the very beginning of the experimentation phase. The rise of AI doesn’t render human expertise obsolete; it amplifies it. As productivity and business benefits increase with generative AI, human employees become even more valuable as supervisors and validators of AI output. This is essential for maintaining control and building trust in AI. In addition, the pool of early participants will also help champion the technology throughout the organization once the enterprise is ready to deploy it widely.

Finally, once the project has begun, organizations have to stick with it until it’s complete. Don’t waste time starting over or shifting to other use cases prematurely. Just get going and stay the course. After that’s been completed successfully, companies can expand their use of generative AI more broadly across the organization.

Choosing The Right Technology

The other major component of the experimentation phase is selecting the right vendor. With the generative AI market booming, it can seem impossible to tell the differences between one solution and another. Lots of noisy marketing only makes things more confusing.

The best way to cut through the noise is to identify the requirements that are most important to the organization (e.g., data security, governance, scalability, compatibility with existing infrastructure) and look for the vendor that best meets those needs.

It’s extremely important to understand where vendors stand on each of these requirements early on, to avoid the headache of discovering later that they don’t actually check those boxes. The only way to do that is by talking to the vendor (especially its sales engineering team) and seeing these capabilities demoed first-hand.

Get Ahead Of The Competition With A Strong Start

Within the next couple of years, I expect almost every enterprise will employ generative AI in production. Those wielding it effectively will get a leg up on their competition, while those struggling will be at risk of falling behind. Though the road may be uncharted, enterprises can succeed by focusing on contained, valuable projects, leveraging human expertise and selecting strategic technology partners.

Don’t wait. Embrace this unique opportunity to innovate and take that crucial first step now.

Feature Image Credit: GETTY

By Alon Goren


CEO and Cofounder of AnswerRocket. Read Alon Goren’s full executive profile here.

Sourced from Forbes