The social media company’s engineers wanted the technology to improve experiences and engagement. But the final product required more tweaking than anticipated.

Around seven months ago, LinkedIn engineers set out to improve user experience and engagement by embedding generative AI capabilities into its platform.

The efforts resulted in a new AI-powered premium subscription offering, which required energy and time to adjust to internal standards and best practices.

“You can build something that looks and feels very useful, that maybe once every five times completely messes up… and that’s fine for a lot of use cases, [but] that was not fine for us,” Juan Bottaro, principal staff software engineer at LinkedIn, told CIO Dive.

Users can turn to the platform to get assistance with effective writing, information gathering and skills assessments. The interface offers job seekers tailored profile suggestions and users can access key takeaways from posts.

Like other enterprises, LinkedIn wanted its AI-generated responses to be factual, yet empathetic.

If a user with no relevant experience asks whether a job posting in biology is a good fit for their professional profile, the social media company wanted its AI assistant to say the role wasn’t a fit while also suggesting LinkedIn Learning courses — rather than giving a blunt response.

Enhancing the user experience is a common goal for using generative AI. But just adding technology for the sake of novelty can have consequences.

If solutions are interacting with customers, the stakes can be even higher.

Despite running into a few unanticipated roadblocks, LinkedIn engineers continued to iterate on the product, mitigating risks along the way.

“Don’t expect that you’re going to hit a home run at the first try,” Bottaro said. “But you do get to build that muscle very quickly, and, fortunately, it’s a technology that gives you a very quick feedback loop.”

Crafting quality experiences can be time-consuming

LinkedIn engineers spent an unexpected amount of time tweaking the experience. Bottaro said the majority of the team’s efforts were focused on fine-tuning, rather than on the actual development stages.

“Technology and product development requires a lot of work,” said Bottaro, who has spent more than a decade at the social media company for professionals, owned by Microsoft. “The evaluation criteria and guidelines grew and grew because it’s very hard to codify.”

The team achieved around 80% of its experience target, then spent four additional months refining, tweaking and improving the system.

“The initial pace created a false sense of ‘almost there,’ which became discouraging as the rate of improvement slowed significantly for each subsequent 1% gain,” Bottaro explained in a co-authored report with LinkedIn Distinguished Engineer Karthik Ramgopal.

Evaluation frameworks are critical

In one of the company’s first prototypes, the chatbot would tell users they were a bad fit for a job without any sort of helpful information.

“That is not a good response, even if it’s correct,” Bottaro said. “That’s why when you’re developing the criteria and guidelines, it’s hand in hand with product development.”

Curating the evaluation criteria is specific to the business. Bottaro compared the process to different teachers grading an essay, rather than scoring a multiple-choice exam.

“We have a very, very high bar,” Bottaro said. “These topics of quality and evaluation [have] become so much more prominent than in other instances.”
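The “codify the guidelines” problem Bottaro describes can be made concrete with a small sketch. This is purely a hypothetical illustration — LinkedIn’s actual evaluation framework is not public, and the criteria names and checks below are invented — but it shows how editorial guidelines like “don’t be blunt, offer a next step” can become an automated checklist:

```python
# Hypothetical illustration: codifying response guidelines as a checklist.
# The criteria and thresholds here are invented for this sketch; LinkedIn's
# real evaluation framework is far larger and is not public.

def evaluate_response(response: str) -> dict:
    """Score a drafted assistant reply against simple editorial criteria."""
    criteria = {
        # A verdict alone ("not a fit") fails the empathy bar described above.
        "offers_next_step": any(
            phrase in response.lower()
            for phrase in ("you could", "consider", "course", "learning")
        ),
        # Blunt one-liners are flagged for rework.
        "sufficient_detail": len(response.split()) >= 15,
    }
    criteria["passes"] = all(criteria.values())
    return criteria

blunt = "You are not a fit for this role."
helpful = (
    "This role asks for lab experience you don't yet have, but you could "
    "consider a LinkedIn Learning course in molecular biology to build "
    "toward it."
)
print(evaluate_response(blunt)["passes"])    # False
print(evaluate_response(helpful)["passes"])  # True
```

As the article notes, such rubrics tend to grow: each failure mode discovered in testing becomes another criterion, which is why the guidelines “grew and grew.”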

Feature Image Credit: Justin Sullivan via Getty Images


Sourced from CIO DIVE

By Jasmine Sheena.

Billion Dollar Boy, Gut, and Mischief are focused on ensuring that work powered by the tech retains a human touch.

It’s been over a year since ChatGPT first rolled out, and while constantly hearing the phrase “generative AI” has been really a(i)nnoying, there’s no doubt the technology has transformed the world. It was one of the hottest topics at CES earlier this year, and SXSW has a dedicated track for the tech.

When something’s trendy, marketers tend to take notice, and we spoke to execs at several agencies about how they have taken ChatGPT and other generative AI tools into their own hands. They told Marketing Brew that, so far, adland has found unique ways to incorporate generative AI into workflows while working to ensure there is still a human touch, all while tech giants and the federal government alike weigh potential restrictions on the tech.

Lead by example

For independent shop Billion Dollar Boy, generative AI has been useful in influencer marketing. The agency set up Muse, an emerging-tech arm that leverages AI for influencer content creation for clients, Thomas Walters, Billion Dollar Boy’s founder and its European CEO, told us. Muse, which has worked with AI artists like Jo Ann and Elmo Mistiaen on brand campaigns, has also worked with brands including Lipton Iced Tea and Versace, Walters said.

“[It’s] really at the bleeding edge of advertising,” he said.

Internally, the agency is interrogating ways to use AI to optimize work, Walters said. BDB set up a taskforce made up of folks across its departments, from leadership to business affairs, to identify workflow problems and figure out how to solve them using AI tools. For example, after realizing its staff was spending a lot of time manually performing due diligence checks on influencers, the agency built a ChatGPT-based tool that evaluates influencers’ posts and applies a “risk rating.”
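The shape of such a tool is easy to sketch, though BDB’s actual implementation is not public. In their setup, a model (ChatGPT) would presumably judge each post; in this illustrative sketch the model call is stubbed out with a keyword check so the rating roll-up logic is runnable, and the vocabulary and thresholds are invented:

```python
# Illustrative sketch only: Billion Dollar Boy's actual tool is not public.
# A real version would replace judge_post with an LLM call; here it is
# stubbed with a keyword check so the roll-up logic runs locally.

RISKY_TERMS = ("lawsuit", "scandal", "slur")  # placeholder vocabulary

def judge_post(text: str) -> bool:
    """Stand-in for a model call that flags a single post as risky."""
    return any(term in text.lower() for term in RISKY_TERMS)

def risk_rating(posts: list[str]) -> str:
    """Roll per-post flags up into a coarse influencer risk rating."""
    flagged = sum(judge_post(p) for p in posts)
    share = flagged / len(posts) if posts else 0.0
    if share == 0.0:
        return "low"
    return "medium" if share < 0.25 else "high"

posts = ["Loved this product!", "My thoughts on the recent scandal..."]
print(risk_rating(posts))  # "high": 1 of 2 posts flagged
```

The design point is the separation: the per-post judgment can be swapped from keywords to an LLM without touching the rating logic that the rest of the workflow depends on.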

Feature Image Credit: Amelia Kinsinger

By Jasmine Sheena

Sourced from Marketing Brew

Yeah, I’m not sure about Google’s various names for its generative AI products.

To clarify:

  • Bard is Google’s generative AI chatbot, much like ChatGPT
  • Gemini is Google’s large language model (LLM) group, like GPT
  • Imagen is Google’s AI image generation system

All clear?

Okay, then this paragraph from Google should now make more sense.

“Last December, we brought Gemini Pro into Bard in English, giving Bard more advanced understanding, reasoning, summarizing and coding abilities. Today Gemini Pro in Bard will be available in over 40 languages and more than 230 countries and territories, so more people can collaborate with this faster, more capable version of Bard.”

I’m guessing that, without the preceding context, the above explanation would be somewhat bewildering to most people. But basically, Google is now making its Bard chatbot more powerful, with more advanced AI models powering its responses, while also adding image generation capability within Bard itself, powered by Imagen.

Google Imagen 2 in Bard

Google has taken a cautious approach with generative AI development, and has criticized others for pushing too hard, too fast, with their generative AI tools. Some have viewed this as anti-competitive bias, and Google simply protecting its turf, as more people turn to tools like ChatGPT for search queries. But Google’s view is that generative AI needs to be deployed slowly in order to mitigate misuse, which has already led to various issues in a regulatory sense.

But today, Google’s taking the next steps with several of its generative AI tools: Bard, as noted, is getting improved reasoning and image creation; Google Maps is getting new AI-powered conversational queries to facilitate place discovery; and Imagen 2, the next stage of its visual creation system, is being rolled out within its image-generation tools.

Google Imagen 2

As explained by Google:

“Imagen 2 has been trained on higher-quality, image-description pairings and generates more detailed images that are better aligned with the semantics of people’s language prompts. It’s more accurate than our previous system at processing details, and it’s more capable at capturing nuance – delivering more photorealistic images across a range of styles and use cases.”

That’ll provide more opportunity to create better visuals within Google’s systems, which will also be created with various safeguards in place, in order to limit “problematic outputs like violent, offensive, or sexually explicit content”.

“All images generated with Imagen 2 in our consumer products will be marked by SynthID, a tool developed by Google DeepMind, that adds a digital watermark directly into the pixels of images we generate. SynthID watermarks are imperceptible to the human eye but detectable for identification.”

Given the recent controversy surrounding AI-generated images of Taylor Swift, this is an important measure. It addresses one of several concerns Google has repeatedly raised about the rapid rollout of AI tools: that we don’t yet have all the systems and processes in place to fully protect against this kind of misuse.

Sourced from SocialMediaToday

By Alon Goren

At this point, most enterprises are dabbling in generative AI or planning to leverage the technology soon.

According to an October 2023 Gartner, Inc. survey, 45% of organizations are currently piloting generative AI, while 10% have deployed it in full production. Companies are eager to move from pilot to production and start seeing some real business results.

However, enterprises getting started with generative AI often run into a common stumbling block right out of the gate: They suffer analysis paralysis before they can even begin using the technology. There are tons of generative AI tools available today, both broad and highly specialized. Moreover, these tools can be leveraged for all sorts of professions and business purposes—sales, product development, finance, etc.

With so many choices and possibilities, enterprises often get stuck in the planning phase—debating where they should deploy generative AI first. Every business unit (and all of the business’s key stakeholders) wants to own a part of the company’s generative AI initiatives.

Things can get messy. To stay on track, businesses should follow these guidelines when experimenting with generative AI.

Focus On Specific Use Cases With Measurable Goals

Enterprises need to recognize that every part of the organization can benefit from generative AI—eventually. To get there, however, they need to get off the ground with a pilot project.

How do you decide where to get started? Keep it simple and identify a small, specific problem that exists today that can be improved with generative AI. Be practical. Choose an issue that’s been challenging the business for a while, has been difficult to fix in the past and will make a visibly positive impact once resolved. Next, enterprises need to agree upon metrics and goals. The problem can’t be too nebulous or vague; the impact of AI (success or failure) has to be easily measurable.

With that in mind, the pilot project should have a contained scope. The purpose is to demonstrate the real-world value of the technology, build support for it across the organization and then broaden adoption from there.

If organizations try to leverage AI in too many different ways and solve multiple problems, it’ll cause the scope to grow out of control and make it impossible to complete the pilot within a reasonable timeframe. Ambition has to be balanced with practicality. Launching a massive pilot project that requires extensive resources and long timelines is a recipe for failure.

What’s a good timeline for the pilot? It depends on the circumstances, of course. Generally speaking, however, it should only take a few weeks or a couple of months to execute, not multiple quarters or an entire year.

Start small, get something functional quickly and then iterate on it. This iterative approach allows for continuous learning and improvement, which is essential given the nascent state of generative AI technology.

Organizations must also be sure to keep humans in the loop from the very beginning of the experimentation phase. The rise of AI doesn’t render human expertise obsolete; it amplifies it. As productivity and business benefits increase with generative AI, human employees become even more valuable as supervisors and validators of AI output. This is essential for maintaining control and building trust in AI. In addition, the pool of early participants will also help champion the technology throughout the organization once the enterprise is ready to deploy it widely.

Finally, once the project has begun, organizations have to stick with it until it’s complete. Don’t waste time starting over or shifting to other use cases prematurely. Just get going and stay the course. After that’s been completed successfully, companies can expand their use of generative AI more broadly across the organization.

Choosing The Right Technology

The other major component of the experimentation phase is selecting the right vendor. With the generative AI market booming, it can seem impossible to tell the differences between one solution and another. Lots of noisy marketing only makes things more confusing.

The best way to cut through the noise is to identify the requirements that are most important to the organization (e.g., data security, governance, scalability, compatibility with existing infrastructure) and look for the vendor that best meets those needs.

It’s extremely important to understand where vendors stand on each of these things early on to avoid the headache of discovering that they don’t really check those boxes later. The only way to do that is by talking to the vendor (especially its sales engineering team) and seeing these capabilities demoed first-hand.

Get Ahead Of The Competition With A Strong Start

Within the next couple of years, I expect almost every enterprise will employ generative AI in production. Those wielding it effectively will get a leg up on their competition, while those struggling will be at risk of falling behind. Though the road may be uncharted, enterprises can succeed by focusing on contained, valuable projects, leveraging human expertise and selecting strategic technology partners.

Don’t wait. Embrace this unique opportunity to innovate and take that crucial first step now.

Feature Image Credit: GETTY

By Alon Goren


CEO and Cofounder of AnswerRocket.

Sourced from Forbes

By Chad S. White

Brands have two major levers they can pull to protect themselves from the negative effects of growing use of generative AI.

The Gist

  • AI disruption. Generative AI is set to disrupt SEO significantly.
  • Content shielding. Brands need strategies to protect their content from AI.
  • Direct relationships. Building strong direct relationships is key.

Do your customers trust your brand more than ChatGPT?

The answer to that question will determine which brands truly have credibility and authority in the years ahead and which do not.

Those who are more trustworthy than generative AI engines will:

  1. Be destinations for answer-seekers, generating strong direct traffic to their websites and robust app usage.
  2. Be able to build large first-party audiences via email, SMS, push and other channels.

Both of those will be critical for any brand wanting to insulate itself from the search engine optimization (SEO) traffic loss that will be caused by generative AI.

The Threat to SEO

Despite racking up 100 million users just two months after launching — an all-time record — ChatGPT doesn’t appear to be having a noticeable impact on the many billions of searches that happen every day yet. However, it’s not hard to imagine it and other large language models (LLMs) taking a sizable bite out of search market share as they improve and become more reliable.

And improve they will. After all, Microsoft, Google and others are investing tens of billions of dollars into generative AI engines. Long dominating the search engine market, Google in particular is keenly aware of the enormous risk to its business, which is why it declared a Code Red and marshalled all available resources into AI development.

If you accept that generative AI will improve significantly over the next few years — and probably dramatically by the end of the decade — and therefore consumers will inevitably get more answers to their questions through zero-click engagements, which are already sizable, then it raises the question:

What should brands consider doing to maintain brand visibility and authority, as well as avoid losing value on the investments they’ve made in content?

Protective Measures From Negative Generative AI Effects

Brands have two major levers they can pull to protect themselves from the negative effects of growing use of generative AI.

1. Shielding Content From Generative AI Training

Major legal battles will be fought in the years ahead to clarify what rights copyright holders have in this new age and what still constitutes Fair Use. Content and social media platforms are likely to try to redefine the copyright landscape in their favor, amending their user agreements to give themselves more rights over the content that’s shared on their platforms.


You can already see the split in how companies are deciding to proceed. For example, while Getty Images is suing Stability AI over copyright violations in the training of its Stable Diffusion model, Shutterstock is instead partnering with OpenAI, having decided that it has the right to sell its contributors’ content as training material to AI engines. Although Shutterstock says it doesn’t need to compensate its contributors, it has created a contributors fund to pay those whose works are used most by AI engines. It is also giving contributors the ability to opt out of having their content used as AI training material.

Since Google was permitted to scan and share copyrighted books without compensating authors, it’s entirely reasonable to assume that generative AI will also be allowed to use copyrighted works without agreements or compensation of copyright holders. So, content providers shouldn’t expect the law to protect them.

Given all of that, brands can protect themselves by:

  • Gating more of their web content, whether that’s behind paywalls, account logins or lead generation forms. Although there are disputes, both search and AI engines shouldn’t be crawling behind paywalls.
  • Releasing some content in password-protected PDFs. While web-hosted PDFs are crawlable, password-protected ones are not. Because consumers aren’t used to frequently encountering password-protected PDFs, some education would be necessary. Moreover, this approach would be most appropriate for your highest-value content.
  • Distributing more content via subscriber-exclusive channels, including email, push and print. Inboxes are considered private spaces, so crawling this content is already a no-no. While print publications like books have been scanned in the past by Google and others, smaller publications would likely be safe from scanning efforts.

In addition to those, brands will hopefully gain a noindex equivalent that tells companies not to train their LLMs and other AI tools on the content of their webpages.
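Opt-out mechanisms along these lines have in fact begun to appear, though at the crawler level rather than as a page-level noindex tag: OpenAI documents a GPTBot user agent that honors robots.txt, and Google offers a Google-Extended token for opting out of AI training. A minimal robots.txt sketch (verify the current token names against each vendor’s documentation before relying on them):

```text
# Disallow known AI-training crawlers while leaving normal search untouched.
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```

Note this only stops crawlers that choose to honor robots.txt; it is a request, not an enforcement mechanism.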

Of course, while shielding their content from external generative AI engines, brands could also deploy generative AI within their own sites as a way to help visitors and customers find the information they’re looking for. For most brands, this would be a welcome augmentation to their site search functionality.

2. Building Stronger Direct Relationships

While shielding your content is the defensive play, building your first-party audiences is the offensive play. Put another way, now that you’ve kept your valuable content out of the hands of generative AI engines, you need to get it into the hands of your target audience.

You do that by building out your subscription-based channels like email and push. On your email signup forms, highlight the exclusive nature of the content you’ll be sharing. If you’re going to be personalizing the content that you send, highlight that, too.

Brands have the opportunity to both turn their emails into personalized homepages for their subscribers, as well as to turn their subscribers’ inboxes into personalized search engines.

Email Marketing Reinvents Itself Again

Brands already have urgent reasons to build out their first-party audiences. One is the sunsetting of third-party cookies and the need for more customer data. Email marketing and loyalty programs, in particular, along with SMS, are great at collecting both zero-party data through preference centers and progressive profiling, as well as first-party data through channel engagement data.

Another is the increasingly evident dangers of building on the “rented land” of social media. For example, Facebook is slowly declining, Twitter has cut 80% of its staff to avoid bankruptcy as its value plunges, and TikTok faces growing bans around the world. Some are even claiming we’re witnessing the beginning of the end of the age of social media. I wouldn’t go that far, but brands certainly have lots of reasons to focus more on those channels they have much more control over, including the web, loyalty, SMS, and, of course, email.

So, the disruption of search engine optimization by generative AI is just providing another compelling reason to invest more into email programs, or to acquire them. It’s hard not to see this as just another case of email marketing reinventing itself and making itself more relevant to brands yet again.

Feature Image Credit: Andrey Popov on Adobe Stock Photo

By Chad S. White

Chad S. White is the author of four editions of Email Marketing Rules and Head of Research for Oracle Marketing Consulting, a global full-service digital marketing agency inside of Oracle.

Sourced from CMSWIRE

Brands have two major levers they can pull to protect themselves from the negative effects of growing use of generative AI.

The Gist

  • AI disruption. Generative AI is set to disrupt SEO significantly.
  • Content shielding. Brands need strategies to protect their content from AI.
  • Direct relationships. Building strong direct relationships is key.

Do your customers trust your brand more than ChatGPT?

The answer to that question will determine which brands truly have credibility and authority in the years ahead and which do not.

Those who are more trustworthy than generative AI engines will:

  1. Be destinations for answer-seekers, generating strong direct traffic to their websites and robust app usage.
  2. Be able to build large first-party audiences via email, SMS, push and other channels.

Both of those will be critical for any brand wanting to insulate themselves from the search engine optimization (SEO) traffic loss that will be caused by generative AI.

The Threat to SEO

Despite racking up 100 million users just two months after launching — an all-time record — ChatGPT doesn’t appear to be having a noticeable impact on the many billions of searches that happen every day yet. However, it’s not hard to imagine it and other large language models (LLMs) taking a sizable bite out of search market share as they improve and become more reliable.

And improve they will. After all, Microsoft, Google and others are investing tens of billions of dollars into generative AI engines. Long dominating the search engine market, Google in particular is keenly aware of the enormous risk to its business, which is why it declared a Code Red and marshalled all available resources into AI development.

If you accept that generative AI will improve significantly over the next few years — and probably dramatically by the end of the decade — and therefore consumers will inevitability get more answers to their questions through zero-click engagements, which are already sizable, then it begs the question:

What should brands consider doing to maintain brand visibility and authority, as well as avoid losing value on the investments they’ve made in content?

Protective Measures From Negative Generative AI Effects

Brands have two major levers they can pull to protect themselves from the negative effects of growing use of generative AI.

1. Shielding Content From Generative AI Training

Major legal battles will be fought in the years ahead to clarify what rights copyright holders have in this new age and what still constitutes Fair Use. Content and social media platforms are likely to try to redefine the copyright landscape in their favour, amending their user agreements to give themselves more rights over the content that’s shared on their platforms.

A white robot hand holds a gavel above a sound block sitting on a wooden table.
Andrey Popov on Adobe Stock Photo

You can already see the split in how companies are deciding to proceed. For example, while Getty Images’ is suing Stable Diffusion over copyright violations in training its AI, Shutterstock is instead partnering with OpenAI, having decided that it has the right to sell its contributors’ content as training material to AI engines. Although Shutterstock says it doesn’t need to compensate its contributors, it has created a contributors fund to pay those whose works are used most by AI engines. It is also giving contributors the ability to opt out of having their content used as AI training material.

Since Google was permitted to scan and share copyrighted books without compensating authors, it’s entirely reasonable to assume that generative AI will also be allowed to use copyrighted works without agreements or compensation of copyright holders. So, content providers shouldn’t expect the law to protect them.

Given all of that, brands can protect themselves by:

  • Gating more of their web content, whether that’s behind paywalls, account logins or lead generation forms. Although there are disputes, both search and AI engines shouldn’t be crawling behind paywalls.
  • Releasing some content in password-protected PDFs. While web-hosted PDFs are crawlable, password-protected ones are not. Because consumers aren’t used to frequently encountering password-protected PDFs, some education would be necessary. Moreover, this approach would be most appropriate for your highest-value content.
  • Distributing more content via subscriber-exclusive channels, including email, push and print. Inboxes are considered privacy spaces, so crawling this content is already a no-no. While print publications like books have been scanned in the past by Google and others, smaller publications would likely be safe from scanning efforts.

In addition to those, hopefully brands will gain a noindex equivalent to tell companies not to train their large language models (LLMs) and other AI tools on the content of their webpages.

Of course, while shielding their content from external generative AI engines, brands could also deploy generative AI within their own sites as a way to help visitors and customers find the information they’re looking for. For most brands, this would be a welcome augmentation to their site search functionality.

2. Building Stronger Direct Relationships

While shielding your content is the defensive play, building your first-party audiences is the offensive play. Put another way, now that you’ve kept your valuable content out of the hands of generative AI engines, you need to get it into the hands of your target audience.

You do that by building out your subscription-based channels like email and push. On your email signup forms, highlight the exclusive nature of the content you’ll be sharing. If you’re going to be personalizing the content that you send, highlight that, too.

Brands have the opportunity to both turn their emails into personalized homepages for their subscribers, as well as to turn their subscribers’ inboxes into personalized search engines.

Email Marketing Reinvents Itself Again

Brands already have urgent reasons to build out their first-party audiences. One is the sunsetting of third-party cookies and the need for more customer data. Email marketing and loyalty programs, in particular, along with SMS, are great at collecting both zero-party data through preference centers and progressive profiling, as well as first-party data through channel engagement data.

Another is the increasingly evident dangers of building on the “rented land” of social media. For example, Facebook is slowly declining, Twitter has cut 80% of its staff to avoid bankruptcy as its value plunges, and TikTok faces growing bans around the world. Some are even claiming we’re witnessing the beginning of the end of the age of social media. I wouldn’t go that far, but brands certainly have lots of reasons to focus more on those channels they have much more control over, including the web, loyalty, SMS, and, of course, email.

So, the disruption of search engine optimization by generative AI is just providing another compelling reason to invest more into email programs, or to acquire them. It’s hard not to see this as just another case of email marketing reinventing itself and making itself more relevant to brands yet again.

By Chad S. White

Chad S. White is the author of four editions of Email Marketing Rules and Head of Research for Oracle Marketing Consulting, a global full-service digital marketing agency inside of Oracle.

Sourced from CMSWIRE

chatgpt,  digital experience, search, email marketing, artificial intelligence, generative ai, artificial intelligence in marketing

 

By Chad S. White
Brands have two major levers they can pull to protect themselves from the negative effects of growing use of generative AI.

The Gist

  • AI disruption. Generative AI is set to disrupt SEO significantly.
  • Content shielding. Brands need strategies to protect their content from AI.
  • Direct relationships. Building strong direct relationships is key.

Do your customers trust your brand more than ChatGPT?

The answer to that question will determine which brands truly have credibility and authority in the years ahead and which do not.

Those who are more trustworthy than generative AI engines will:

  1. Be destinations for answer-seekers, generating strong direct traffic to their websites and robust app usage.
  2. Be able to build large first-party audiences via email, SMS, push and other channels.

Both of those will be critical for any brand wanting to insulate themselves from the search engine optimization (SEO) traffic loss that will be caused by generative AI.

The Threat to SEO

Despite racking up 100 million users just two months after launching — an all-time record — ChatGPT doesn’t appear to be having a noticeable impact on the many billions of searches that happen every day yet. However, it’s not hard to imagine it and other large language models (LLMs) taking a sizable bite out of search market share as they improve and become more reliable.

And improve they will. After all, Microsoft, Google and others are investing tens of billions of dollars into generative AI engines. Long dominating the search engine market, Google in particular is keenly aware of the enormous risk to its business, which is why it declared a Code Red and marshalled all available resources into AI development.

If you accept that generative AI will improve significantly over the next few years — and probably dramatically by the end of the decade — and therefore consumers will inevitability get more answers to their questions through zero-click engagements, which are already sizable, then it begs the question:

What should brands consider doing to maintain brand visibility and authority, as well as avoid losing value on the investments they’ve made in content?

Protective Measures From Negative Generative AI Effects

Brands have two major levers they can pull to protect themselves from the negative effects of growing use of generative AI.

1. Shielding Content From Generative AI Training

Major legal battles will be fought in the years ahead to clarify what rights copyright holders have in this new age and what still constitutes Fair Use. Content and social media platforms are likely to try to redefine the copyright landscape in their favour, amending their user agreements to give themselves more rights over the content that’s shared on their platforms.


You can already see the split in how companies are deciding to proceed. For example, while Getty Images is suing Stability AI, the maker of Stable Diffusion, over copyright violations in training its AI, Shutterstock is instead partnering with OpenAI, having decided that it has the right to sell its contributors’ content as training material to AI engines. Although Shutterstock says it doesn’t need to compensate its contributors, it has created a contributors fund to pay those whose works are used most by AI engines. It is also giving contributors the ability to opt out of having their content used as AI training material.

Since Google was permitted to scan and share copyrighted books without compensating authors, it’s entirely reasonable to assume that generative AI will also be allowed to use copyrighted works without agreements with, or compensation for, copyright holders. So, content providers shouldn’t expect the law to protect them.

Given all of that, brands can protect themselves by:

  • Gating more of their web content, whether that’s behind paywalls, account logins or lead generation forms. Although there are disputes, neither search engines nor AI engines should be crawling content behind paywalls.
  • Releasing some content in password-protected PDFs. While web-hosted PDFs are crawlable, password-protected ones are not. Because consumers aren’t used to frequently encountering password-protected PDFs, some education would be necessary. Moreover, this approach would be most appropriate for your highest-value content.
  • Distributing more content via subscriber-exclusive channels, including email, push and print. Inboxes are considered privacy spaces, so crawling this content is already a no-no. While print publications like books have been scanned in the past by Google and others, smaller publications would likely be safe from scanning efforts.

In addition to those measures, brands will hopefully gain a noindex equivalent to tell companies not to train their LLMs and other AI tools on the content of their webpages.
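As an illustration of what such an opt-out could look like in practice: some AI crawlers already respect robots.txt directives — OpenAI documents `GPTBot` as the user-agent token for its training crawler — so a hypothetical policy that blocks AI training while still allowing search indexing might be sketched like this (directive support varies by crawler, so treat this as an example, not a guarantee):

```text
# robots.txt — illustrative sketch only
User-agent: GPTBot
Disallow: /

User-agent: Googlebot
Allow: /
```

A crawler that honors robots.txt reads the group matching its own user-agent token; anything that ignores the file, of course, isn’t stopped by it — which is why gating and subscriber-only channels remain the stronger protections.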

Of course, while shielding their content from external generative AI engines, brands could also deploy generative AI within their own sites as a way to help visitors and customers find the information they’re looking for. For most brands, this would be a welcome augmentation to their site search functionality.

2. Building Stronger Direct Relationships

While shielding your content is the defensive play, building your first-party audiences is the offensive play. Put another way, now that you’ve kept your valuable content out of the hands of generative AI engines, you need to get it into the hands of your target audience.

You do that by building out your subscription-based channels like email and push. On your email signup forms, highlight the exclusive nature of the content you’ll be sharing. If you’re going to be personalizing the content that you send, highlight that, too.

Brands have the opportunity both to turn their emails into personalized homepages for their subscribers and to turn their subscribers’ inboxes into personalized search engines.

Email Marketing Reinvents Itself Again

Brands already have urgent reasons to build out their first-party audiences. One is the sunsetting of third-party cookies and the need for more customer data. Email marketing and loyalty programs, in particular, along with SMS, are great at collecting both zero-party data through preference centers and progressive profiling, as well as first-party data through channel engagement data.

Another is the increasingly evident dangers of building on the “rented land” of social media. For example, Facebook is slowly declining, Twitter has cut 80% of its staff to avoid bankruptcy as its value plunges, and TikTok faces growing bans around the world. Some are even claiming we’re witnessing the beginning of the end of the age of social media. I wouldn’t go that far, but brands certainly have lots of reasons to focus more on those channels they have much more control over, including the web, loyalty, SMS, and, of course, email.

So, the disruption of search engine optimization by generative AI is just providing another compelling reason to invest more into email programs, or to acquire them. It’s hard not to see this as just another case of email marketing reinventing itself and making itself more relevant to brands yet again.

Feature Image Credit: Andrey Popov on Adobe Stock Photo

By Chad S. White

Chad S. White is the author of four editions of Email Marketing Rules and Head of Research for Oracle Marketing Consulting, a global full-service digital marketing agency inside of Oracle.

Sourced from CMSWIRE

By Luke Hurst

Generative AI has led to an increase in websites producing low-quality or fake content – and major brands’ advertising budgets may be funding them.

The Internet is awash not only with low-quality content, but also with content that is misleading or completely false.

The availability of generative artificial intelligence (AI) tools such as OpenAI’s ChatGPT and Google’s Bard, meanwhile, has meant that AI-generated news and information have added to this tidal wave of content over the past year.

A new analysis from NewsGuard, a company that gives trust ratings to online news outlets, has found the proliferation of this poor-quality, AI-generated content is being supported financially by the advertising budgets of major global brands, including tech giants and banks.

The adverts appear to be generated programmatically, so the brands aren’t necessarily choosing to advertise on the websites that NewsGuard dubs “unreliable AI-generated news and information websites (UAINs)”.

According to NewsGuard, most of the ads are placed by Google, and they fail to protect the companies’ brand safety – as many legitimate companies don’t want to be seen to be advertising on sites that host fake news, misinformation, or just low-quality content.

NewsGuard, which says it provides “transparent tools to counter misinformation on behalf of readers, brands, and democracies,” defines UAINs as websites that operate with little or no human oversight, and publish articles that are written largely or entirely by bots.

Its analysts have added 217 sites to the company’s UAIN site tracker, many of which appear to be entirely financed by programmatic advertising.

Incentivised to publish low-quality content

Because the websites can make money from programmatic advertising, they are incentivised to publish often. One UAIN the company identified – world-today-news.com – published around 8,600 articles in the week of June 9 to June 15 this year. That’s an average of around 1,200 articles a day.

The New York Times, by comparison, publishes around 150 articles a day, with a large staff headcount.

NewsGuard hasn’t named the big brands that are advertising on these low-quality websites, as it does not expect the brands to know their ads are ending up on those sites.

It did say the brands include six major banks and financial-services firms, four luxury department stores, three leading brands in sports apparel, three appliance manufacturers, two of the world’s biggest consumer technology companies, two global e-commerce companies, two US broadband providers, three streaming services, a Silicon Valley digital platform, and a major European supermarket chain.

Many brands and advertising agencies have “exclusion lists” that stop their ads from being shown on unwelcome websites, but according to NewsGuard, these lists aren’t always kept up to date.

In its report, the company behind the Internet trust tool says it contacted Google multiple times asking for comment about its monetisation of the UAIN sites.

Google asked for more context over email, but after receiving that additional context it had not replied again as of June 25.

Google’s ad policies are supposed to prohibit sites from placing Google-served ads on pages that include “spammy automatically-generated content,” which can be AI-generated content that doesn’t produce anything original or of “sufficient value”.

A previous report from NewsGuard this year highlighted how AI chatbots were being used to publish a new wave of fake news and misinformation online.

In their latest research, conducted over May and June this year, analysts found 393 programmatic ads from 141 major brands that appeared on 55 of the 217 UAIN sites.

The analysts were browsing the sites from the US, Germany, France, and Italy.

All of the ads identified appeared on pages containing error messages generated by AI chatbots, such as: “Sorry, as an AI language model, I am not able to access external links or websites on my own”.

More than 90 per cent of these ads were served by Google Ads, a platform that brings in billions in revenue for Google each year.

By Luke Hurst

Sourced from euronews.next

By Isaac Sacolick

Unstructured text and data are like gold for business applications and the company bottom line, but where to start? Here are three tools worth a look.

Developers and data scientists use generative AI and large language models (LLMs) to query volumes of documents and unstructured data. Open source LLMs, including Dolly 2.0, EleutherAI Pythia, Meta AI LLaMa, StabilityLM, and others, are all starting points for experimenting with artificial intelligence that accepts natural language prompts and generates summarized responses.

“Text as a source of knowledge and information is fundamental, yet there aren’t any end-to-end solutions that tame the complexity in handling text,” says Brian Platz, CEO and co-founder of Fluree. “While most organizations have wrangled structured or semi-structured data into a centralized data platform, unstructured data remains forgotten and underleveraged.”

If your organization and team aren’t experimenting with natural language processing (NLP) capabilities, you’re probably lagging behind competitors in your industry. In the 2023 Expert NLP Survey Report, 77% of organizations said they planned to increase spending on NLP, and 54% said their time-to-production was a top return-on-investment (ROI) metric for successful NLP projects.

Use cases for NLP

If you have a corpus of unstructured data and text, some of the most common business needs include:

  • Entity extraction by identifying names, dates, places, and products
  • Pattern recognition to discover currency and other quantities
  • Categorization into business terms, topics, and taxonomies
  • Sentiment analysis, including positivity, negation, and sarcasm
  • Summarizing the document’s key points
  • Machine translation into other languages
  • Dependency graphs that translate text into machine-readable semi-structured representations

Sometimes, having NLP capabilities bundled into a platform or application is desirable. For example, LLMs support asking questions; AI search engines enable searches and recommendations; and chatbots support interactions. Other times, it’s optimal to use NLP tools to extract information and enrich unstructured documents and text.

Let’s look at three popular open source NLP tools that developers and data scientists are using to perform discovery on unstructured documents and develop production-ready NLP processing engines.

Natural Language Toolkit

The Natural Language Toolkit (NLTK), released in 2001, is one of the older and more popular NLP Python libraries. NLTK boasts more than 11,800 stars on GitHub and lists over 100 trained models.

“I think the most important tool for NLP is by far Natural Language Toolkit, which is licensed under Apache 2.0,” says Steven Devoe, director of data and analytics at SPR. “In all data science projects, the processing and cleaning of the data to be used by algorithms is a huge proportion of the time and effort, which is particularly true with natural language processing. NLTK accelerates a lot of that work, such as stemming, lemmatization, tagging, removing stop words, and embedding word vectors across multiple written languages to make the text more easily interpreted by the algorithms.”

NLTK’s benefits stem from its endurance, with many tutorials and hands-on guides available for developers new to NLP. Anyone learning NLP techniques may want to try this library first, as it provides simple ways to experiment with basic techniques such as tokenization, stemming, and chunking.
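As a minimal sketch of the preprocessing steps Devoe describes — tokenization and stemming — assuming NLTK is installed (`pip install nltk`). The tokenizer and stemmer below are chosen because they are rule-based and need no separate corpus downloads:

```python
from nltk.stem import PorterStemmer
from nltk.tokenize import TreebankWordTokenizer

# TreebankWordTokenizer and PorterStemmer are purely rule-based,
# so they work without downloading any NLTK corpora or models.
tokenizer = TreebankWordTokenizer()
stemmer = PorterStemmer()

text = "The analysts were extracting entities and summarizing documents."
tokens = tokenizer.tokenize(text)          # split the text into word tokens
stems = [stemmer.stem(t) for t in tokens]  # reduce each token to its stem
```

Stop-word removal and lemmatization work in the same style via `nltk.corpus.stopwords` and `WordNetLemmatizer`, though those do require a one-time `nltk.download()` of the relevant data.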

spaCy

spaCy is a newer library, with its version 1.0 released in 2016. spaCy supports over 72 languages and publishes its performance benchmarks, and it has amassed more than 25,000 stars on GitHub.

“spaCy is a free, open-source Python library providing advanced capabilities to conduct natural language processing on large volumes of text at high speed,” says Nikolay Manchev, head of data science, EMEA, at Domino Data Lab. “With spaCy, a user can build models and production applications that underpin document analysis, chatbot capabilities, and all other forms of text analysis. Today, the spaCy framework is one of Python’s most popular natural language libraries for industry use cases such as extracting keywords, entities, and knowledge from text.”

Tutorials for spaCy show similar capabilities to NLTK, including named entity recognition and part-of-speech (POS) tagging. One advantage is that spaCy returns document objects and supports word vectors, which can give developers more flexibility for performing additional post-NLP data processing and text analytics.
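A minimal sketch of the document-object model Manchev describes, assuming spaCy is installed (`pip install spacy`). A blank pipeline handles tokenization out of the box; POS tagging and named entity recognition additionally require a trained pipeline such as `en_core_web_sm`:

```python
import spacy

# A blank English pipeline provides rule-based tokenization with no
# trained model. For POS tags and entities, load a trained pipeline
# instead, e.g. spacy.load("en_core_web_sm") after running
# `python -m spacy download en_core_web_sm`.
nlp = spacy.blank("en")
doc = nlp("Acme Corp. hired 40 analysts in London last year.")

# Doc behaves like a sequence of Token objects, each keeping its
# text, character offsets, and (with a trained model) vectors.
tokens = [t.text for t in doc]
```

Because spaCy returns a `Doc` object rather than plain strings, downstream steps can keep working with token attributes and spans, which is the flexibility advantage over string-in, string-out toolkits.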

Spark NLP

If you already use Apache Spark and have its infrastructure configured, then Spark NLP may be one of the faster paths to begin experimenting with natural language processing. Spark NLP has several installation options, including AWS, Azure Databricks, and Docker.

“Spark NLP is a widely used open-source natural language processing library that enables businesses to extract information and answers from free-text documents with state-of-the-art accuracy,” says David Talby, CTO of John Snow Labs. “This enables everything from extracting relevant health information that only exists in clinical notes, to identifying hate speech or fake news on social media, to summarizing legal agreements and financial news.”

Spark NLP’s differentiators may be its healthcare, finance, and legal domain language models. These commercial products come with pre-trained models to identify drug names and dosages in healthcare, financial entity recognition such as stock tickers, and legal knowledge graphs of company names and officers.

Talby says Spark NLP can help organizations minimize the upfront training in developing models. “The free and open source library comes with more than 11,000 pre-trained models plus the ability to reuse, train, tune, and scale them easily,” he says.

Best practices for experimenting with NLP

Earlier in my career, I had the opportunity to oversee the development of several SaaS products built on NLP capabilities. My first was a SaaS platform for searching newspaper classified advertisements, spanning cars, jobs, and real estate. I then led the development of NLP systems for extracting information from commercial construction documents, including building specifications and blueprints.

When starting NLP in a new area, I advise the following:

  • Begin with a small but representative example of the documents or text.
  • Identify the target end-user personas and how extracted information improves their workflows.
  • Specify the required information extractions and target accuracy metrics.
  • Test several approaches and use speed and accuracy metrics to benchmark.
  • Improve accuracy iteratively, especially when increasing the scale and breadth of documents.
  • Expect to deliver data stewardship tools for addressing data quality and handling exceptions.

You may find that the NLP tools used to discover and experiment with new document types will aid in defining requirements. Then, expand the review of NLP technologies to include open source and commercial options, as building and supporting production-ready NLP data pipelines can get expensive. With LLMs in the news and gaining interest, underinvesting in NLP capabilities is one way to fall behind competitors. Fortunately, you can start with one of the open source tools introduced here and build your NLP data pipeline to fit your budget and requirements.

Feature Image Credit: TippaPatt/Shutterstock

By Isaac Sacolick

Isaac Sacolick is president of StarCIO and the author of the Amazon bestseller Driving Digital: The Leader’s Guide to Business Transformation through Technology and Digital Trailblazer: Essential Lessons to Jumpstart Transformation and Accelerate Your Technology Leadership. He covers agile planning, devops, data science, product management, and other digital transformation best practices. Sacolick is a recognized top social CIO and digital transformation influencer. He has published more than 900 articles at InfoWorld.com, CIO.com, his blog Social, Agile, and Transformation, and other sites.

Sourced from InfoWorld

By Forrester

Generative AI (gen AI) was born on November 30, 2022, with the release of ChatGPT, and it’s been moving 100 miles an hour ever since, drawing in 100 million people and counting. As new and surprisingly powerful as gen AI is, we can already see how companies will incorporate gen AI capabilities into their business strategies and operations. Our experience with two earlier, explosive technologies shows you how.

  1. The BYO explosion of the late 2000s taught us how to incorporate employee-led disruption. We learned this when employees brought personal technology to solve customer and business problems. We empowered, guided, and protected employees and the firm while taking advantage of the new value that personal technologies in business brought.
  2. The mobile, social, original internet explosions taught us how to respond to and take advantage of customer-led disruption. We built mobile apps to help customers in their mobile moments of need; we adopted social media communications to improve engagement and collaboration; and we tooled up to take full advantage of the business models shaped by the internet.

Technology executives should prepare for generative AI to follow both paths and sprint into your business through four doors:

  • Bottom-up. Some of the 100 million people already using generative AI work for you. As you learned in the BYOD era, employees will adopt any tool that makes them more successful. The hyperadoption of gen AI leads to rampant BYOAI adoption. You can’t stop them, not fully. Your job is to put up guardrails that protect the firm’s IP and teach the skills of responsible AI. You need guardrails because your company IP is at risk. Just like with the original onslaught of BYO, you need to tune in now and empower, guide, and protect employees and the firm. Sharpen your listening tools and network sniffers. Revisit and promote your responsible AI policies ASAP. Your response to BYOAI will shape your top-down approach to gen AI, because employees will have elevated their robotics quotient and will be ready to go.
  • Top-down. Gen AI will unlock the value of 10-plus years of investments in data, insights, and artificial intelligence, including machine-learning models. This is where your investments in trusted AI will pay off, because you’re ready to use them. Already, the hyperscalers and software-as-a-service platform providers have announced and will trickle release gen AI-infused applications. Already, service providers and you are using TuringBots to generate and test code. Already, you’re incorporating marketing content generated from text prompts to hyperpersonalize engagement. And soon, you’ll overhaul your usability with text-based interfaces to business and analytics applications. Every part of your business will have ideas on how to use generative AI, mostly to optimize, automate, or augment something. Some will be great. Pick the ones that are easiest, safest, and most practical to deploy first.
  • Outside-in. Customers’ expectations for what gen AI can do for them are rising faster than anybody can keep up with. Every day, there is a new application using gen AI to do something useful. The latest I saw was a “free” cover-letter generator using GPT-4. (“Free” means that they’re accumulating your job preferences to resell as insights.) Microsoft triggered the search wars with OpenAI in Bing, and Google is now full-on engaged with Bard. Already, in the US, 35% of Gen Zers and 25% of Millennials have used bots to help buy hard-to-find inventory. That bot habit will be supercharged with gen AI, raising expectations even higher. Your job starts by anticipating where customers’ adoption will directly affect your company. If a customer has a better idea of your product landscape than your salespeople, that’s not good. If they are getting gen AI-powered customer care from a competitor and not you, not good. If your competitors’ stuff is in a next-generation recommendation engine and yours isn’t, that’s not good. Just like with mobile, your response will be to ramp up your customer-facing gen AI capabilities inside-out.
  • Inside-out. As you move through the gen AI opportunity thicket, you will quickly identify ways to help customers and deliver more value with your own gen AI-infused applications. Customer care or empowering frontline employees will be an early payoff, we expect. But you’ll find opportunities to streamline customer onboarding, hyperpersonalize engagement, provide better customer self-service, and stimulate a new round of value creation like what was triggered by mobile apps. Sort the scenarios based on the readiness of your data, the impact you will have, and your confidence that you can anticipate and manage the costs that go along with gen AI licensing and computing. The technical architectures are still in flux, but we believe they will incorporate layers of intelligence — some of yours, some from others, and some public — protected by control gates for inputs and outputs and piped together into gen AI-infused applications. This “layers, gates, and pipes” approach will help you scale, take advantage of all the capabilities, and give you intense visibility into how it’s going and where the costs lie.

By Ted Schadler

This post was written by VP, Principal Analyst Ted Schadler and originally appeared on the Forrester blog.

Sourced from Forbes