By Jose Enrico

A staggering $15.5 million for just a URL.

In a savvy business move, OpenAI CEO Sam Altman grabbed attention this week with a single, innocuous tweet: chat.com. Those who clicked were neatly redirected to OpenAI’s much-touted AI-powered chatbot ChatGPT.

Far from being just a redirect, the acquisition of this domain speaks to OpenAI’s continued effort to consolidate its branding and expand its influence within the conversational AI space. It’s just a URL, but its price tag alone is enough to stop you in your tracks.

How Much Did the Chat.com Domain Cost?

The highly prized chat.com domain was previously owned by Dharmesh Shah, the cofounder and CTO of HubSpot. In early 2023, Shah bought it for a staggering $15.5 million, The Verge reports.

Later that same year, Shah revealed he had sold the domain again, staying mum on the identity of the buyer and the exact sale amount, though he did admit the sale was profitable.

Then Altman confirmed that OpenAI bought chat.com, and Shah further added to the mystery by hinting in an X (formerly Twitter) post that shares may have been part of the deal; perhaps it was not as simple as cash, after all.

Why Dropping ‘GPT’ Matters in OpenAI’s Brand Evolution

The deal fits into a larger rebranding effort. A cleaner, more universal domain name marks a move away from the technical “GPT” designation toward a name that resonates with a broader audience.

It comes just after OpenAI launched its “o1” reasoning models in September, a move the company believes will push it toward simpler nomenclature. Bob McGrew, OpenAI’s former chief research officer, said the new model names are meant to be more intuitive and to better reflect OpenAI’s mission.

Domain Hoarding: The Internet’s High-Stakes Real Estate Game

The sale of chat.com is symptomatic of a trend towards increasingly expensive domain deals. Domain names that are unusual and memorable have always been seen as the digital version of real estate.

Companies fight over vanity URLs that can enhance their brand image. Recently, AI startup Friend made headlines by buying friend.com for $1.8 million and then raising $2.5 million in funding.

In contrast, OpenAI’s $15 million-plus investment in chat.com, whether paid in cash or in shares, is modest next to the $6.6 billion funding round the company just closed.

The Value of Chat.com for OpenAI’s Future

OpenAI is expected to continue shaping the conversation around conversational AI. A simple yet memorable URL offers an easy entry point to its AI tools and opens opportunities for further growth in the rapidly changing world of artificial intelligence.

This strategic acquisition underlines OpenAI’s commitment to brand clarity and user accessibility, leaving the company well-poised for expansion.

Altman clearly saw the purchase as necessary in the competition against other AI companies. Giving users the easiest possible way to access ChatGPT is a smart move, and it is one of the keys to winning the chatbot war, the modern battle among tech giants in the AI space.

Feature Image Credit: Kevork Djansezian/Getty Images

By Jose Enrico

Sourced from TECH TiMES

By Hayden Field

OpenAI announced Thursday that it will launch a new AI model, “GPT-4o mini,” the artificial intelligence startup’s latest effort to expand use of its popular chatbot.

The company called the new release “the most capable and cost-efficient small model available today,” and it plans to integrate image, video and audio into it later.

The mini AI model is an offshoot of GPT-4o, OpenAI’s fastest and most powerful model, which it launched in May during a livestreamed event with executives. The “o” in GPT-4o stands for omni, and GPT-4o has improved audio, video and text capabilities, with the ability to handle 50 different languages at improved speed and quality, according to the company.

OpenAI, backed by Microsoft, has been valued at more than $80 billion by investors. The company, founded in 2015, is under pressure to stay on top of the generative AI market while finding ways to make money as it spends massive sums on processors and infrastructure to build and train its models.

The mini AI model announced Thursday is part of OpenAI’s push to be at the forefront of “multimodality,” or the ability to offer a wide range of types of AI-generated media, like text, images, audio and video, inside one tool: ChatGPT.

Last year, OpenAI Chief Operating Officer Brad Lightcap told CNBC: “The world is multimodal. If you think about the way we as humans process the world and engage with the world, we see things, we hear things, we say things — the world is much bigger than text. So to us, it always felt incomplete for text and code to be the single modalities, the single interfaces that we could have to how powerful these models are and what they can do.”

GPT-4o mini will be available on Thursday to free users of ChatGPT, along with ChatGPT Plus and Team subscribers, and it will be available to ChatGPT Enterprise users next week, the company said in a press release.

Feature Image Credit: Jason Redmond | AFP | Getty Images

By Hayden Field

Sourced from CNBC

By Shannon Thaler

The CEO of WPP fell victim to an elaborate deepfake scam that involved voice-cloning the boss to solicit money and personal details from the company’s workforce.

Mark Read, the CEO of WPP, a London-based communications and advertising company whose clients include Dell, Wendy’s, Victoria’s Secret and Coca-Cola, saw his voice cloned and likeness stolen by fraudsters who created a WhatsApp account seemingly belonging to him.

They used a publicly available photo of Read as the profile picture to trick fellow users, according to an email explaining the scam that was sent to WPP’s leadership earlier and reviewed by the Guardian.

WPP CEO Mark Read’s voice and likeness were stolen as part of an elaborate deepfake scam to get the advertising giant’s fellow leaders to hand over their personal details and funds. Image Credit: REUTERS

The WhatsApp account was used to set up a Microsoft Teams meeting with another WPP executive.

During the meeting, the crooks deployed a fake, artificial intelligence-generated video of Read — also known as a “deepfake” — including the voice cloning.

They also tried using the meeting’s chat function to impersonate Read and target a fellow “agency leader” at WPP — whose market cap sits around $11.3 billion — by asking them to hand over money and other personal details, according to the Guardian.

“Fortunately the attackers were not successful,” Read wrote in the email obtained by the Guardian.

“We all need to be vigilant to the techniques that go beyond emails to take advantage of virtual meetings, AI and deepfakes.”

A WPP spokesperson confirmed to The Post that the attempt at scamming the company’s leadership was unsuccessful.

“Thanks to the vigilance of our people, including the executive concerned, the incident was prevented,” the company rep added.

The scammers reportedly used a photo of Read to set up a WhatsApp account, which was then used to make a Microsoft Teams account to communicate with other WPP leaders while pretending to be Read. Image Credit: diy13 – stock.adobe.com

It wasn’t immediately clear which other WPP executives were involved in the scheme, or when the attack attempt took place.

WPP’s spokesperson declined to provide further details about the scam.

“We have seen increasing sophistication in the cyber-attacks on our colleagues, and those targeted at senior leaders in particular,” Read added in the email, per the Guardian, in reference to the myriad ways in which criminals can impersonate real people.

Read’s email included a number of bullet points that he advised recipients to look out for as red flags, including requests for passports, money transfers and any mention of a “secret acquisition, transaction or payment that no one else knows about.”

WPP, a London-based communications and advertising company whose clients include Dell, Wendy’s, Victoria’s Secret and Coca-Cola, confirmed to The Post that the scammers were unsuccessful in tricking its executives. Image Credit: AFP via Getty Images

“Just because the account has my photo doesn’t mean it’s me,” Read said in the email, according to the Guardian.

The Post has sought comment from WPP, which includes a notice on its Contacts landing page that its “name and those of its agencies have been fraudulently used by third parties.”

Deepfake audio has been on the rise as deepfake images have become a hotly debated topic among AI firms.

While Google has recently moved to distance itself from the dark side of AI, cracking down on the creation of deepfakes — most of which are pornographic — as it deems them “egregious,” ChatGPT maker OpenAI is reportedly considering allowing users to create AI-generated pornography and other explicit content with its tech tools.

Deepfakes like the graphic nude images of Taylor Swift, however, will be banned.

Deepfakes mostly involve fake pornographic images, with celebrities like Taylor Swift, Bella Hadid and US Rep. Alexandria Ocasio-Cortez falling victim. Image Credit: AFP via Getty Images

The Sam Altman-run company said it is “exploring whether we can responsibly provide the ability to generate NSFW (not-safe-for-work) content in age-appropriate contexts.”

“We look forward to better understanding user and societal expectations of model behaviour in this area,” OpenAI added, noting that examples could include “erotica, extreme gore, slurs and unsolicited profanity.”

OpenAI’s foray into creating fake X-rated content comes just months after it unveiled Sora, revolutionary new software that can produce high-caliber video in response to a few simple text queries.

The technology marks a dazzling breakthrough from the ChatGPT maker that could also take concerns about deepfakes and ripoffs of licensed content to a new level.

By Shannon Thaler

Sourced from New York Post

By 

A day after OpenAI impressed with a startlingly improved ChatGPT AI model, Google showed off an equally stunning vision for how AI will improve the products that billions of people use every day.

The updates, announced at its annual Google I/O developer conference, come as the company is trying to push beyond its core advertising business with new devices and AI-powered tools. Artificial intelligence was so top of mind during the event that Google CEO Sundar Pichai noted at the end of the presentation that the term “AI” had been said 120 times, as counted by none other than its AI platform Gemini.

During the keynote, Google showed how it wants its AI products to become a bigger part of users’ lives, such as by sharing information, interacting with others, finding objects around the house, making schedules, shopping and using an Android device. Google essentially wants its AI to be part of everything you do.

Pichai kicked off the event by highlighting various new features powered by its latest AI model, Gemini 1.5 Pro. One new feature, called Ask Photos, lets users mine their saved pictures for deeper insights, such as asking when their daughter learned to swim or recalling their license plate number.

He also showed how users can ask Gemini 1.5 Pro to summarize all recent emails from their child’s school, analysing attachments, pulling out key points and spitting out action items.

Meanwhile, Google executives took turns demonstrating other capabilities, such as how the latest model could “read” a textbook and turn it into a kind of AI lecture featuring natural-sounding teachers that answer questions.

Just one day before, OpenAI — one of the tech industry’s leaders in artificial intelligence — unveiled a new AI model that it says will make chatbot ChatGPT smarter and easier to use. GPT-4o aims to turn ChatGPT into a digital personal assistant that can engage in real-time, spoken conversations and interact using text and “vision.” It can view screenshots, photos, documents or charts uploaded by users and have a conversation about them.

Google also showed off Gemini’s latest abilities to take different kinds of input — “multimodal” capabilities to take in text, voice or images — as a direct response to ChatGPT’s efforts. A Google executive also demoed a virtual “teammate” that can help stay on top of to-do lists, organize data and manage workflow.

The company also highlighted search improvements that allow users to ask more natural or more focused questions and receive different versions of the responses, such as in-depth or summarized results. It can also make targeted suggestions, such as recommending kid-friendly restaurants in certain locations, or identify what might be wrong with a gadget, such as a camera, from a video of the issue taken via Google Lens. The goal is to take the legwork out of searching on Google, the company said.

The company also briefly teased Project Astra, developed by Google’s DeepMind AI lab, which will allow AI assistants to help with users’ everyday lives by using phone cameras to interpret information about the real world, such as identifying objects and even finding misplaced items. It also hinted at how the technology would work on augmented reality glasses.

Google said that later this year it will integrate more AI functions into phones. For example, users will be able to drag and drop images created by AI into Google Messages and Gmail and ask questions about YouTube videos and PDFs on an Android device.

And in a move that will likely appeal to many, a new built-in tool for Android will help detect suspicious activity in the middle of a call, such as a scammer trying to imitate a user’s bank.

According to analyst Jacob Bourne, from market research firm Emarketer, it’s no surprise AI took centre stage at this year’s Google developer conference.

“By showcasing its latest models and how they’ll power existing products with strong consumer reach, Google is demonstrating how it can effectively differentiate itself from rivals,” he said.

He believes the reception of the new tools will be an indicator of how well Google can adapt its search product to meet the demands of the generative AI era.

“To maintain its competitive edge and satisfy investors, Google will need to focus on translating its AI innovations into profitable products and services at scale,” he said.

As the company grows its AI footprint, it said it will introduce more protections to cut down on potential misuse. Google is expanding its existing SynthID feature to detect AI-generated content. Last year, the tool added watermarks to AI-generated images and audio.

Google said it is also partnering with experts and institutions to test and improve the capabilities in its new models.

Although the company has doubled down on artificial intelligence in the past year, it also met significant roadblocks. Last year, shortly after introducing its generative AI tool — then called Bard and since renamed Gemini — Google’s share price dropped after a demo video of the tool showed it producing a factually inaccurate response to a question about the James Webb Space Telescope.

More recently, the company hit pause in February on Gemini’s ability to generate images of people after it was blasted on social media for producing historically inaccurate images that largely showed people of colour in place of White people.

Gemini, like other AI tools such as ChatGPT, is trained on vast troves of online data. Experts have long warned about the shortcomings around AI tools, such as the potential for inaccuracies, biases and the spreading of misinformation. Still, many companies are forging ahead on AI tools or partnerships.

Apple may be interested in licensing and building Google’s Gemini AI engine, which includes chatbots and other AI tools, into upcoming iPhones and its iOS 18 features, Bloomberg reported in March. The company is also reportedly talking to ChatGPT creator OpenAI.

Feature Image Credit: Google. Sundar Pichai speaks about Gemini 1.5 Pro during the Google I/O developer conference.

By 

Sourced from CNN Business

By Nathan E. Sanders & Bruce Schneier

Social media’s unregulated evolution over the past decade holds a lot of lessons that apply directly to AI companies and technologies.

Oh, how the mighty have fallen. A decade ago, social media was celebrated for sparking democratic uprisings in the Arab world and beyond. Now front pages are splashed with stories of social platforms’ role in misinformation, business conspiracy, malfeasance, and risks to mental health. In a 2022 survey, Americans blamed social media for the coarsening of our political discourse, the spread of misinformation, and the increase in partisan polarization.

Today, tech’s darling is artificial intelligence. Like social media, it has the potential to change the world in many ways, some favourable to democracy. But at the same time, it has the potential to do incredible damage to society.

There is a lot we can learn about social media’s unregulated evolution over the past decade that directly applies to AI companies and technologies. These lessons can help us avoid making the same mistakes with AI that we did with social media.

In particular, five fundamental attributes of social media have harmed society. AI also has those attributes. Note that they are not intrinsically evil. They are all double-edged swords, with the potential to do either good or ill. The danger comes from who wields the sword, and in what direction it is swung. This has been true for social media, and it will similarly hold true for AI. In both cases, the solution lies in limits on the technology’s use.

#1: Advertising

The role advertising plays in the internet arose more by accident than anything else. When commercialization first came to the internet, there was no easy way for users to make micropayments to do things like viewing a web page. Moreover, users were accustomed to free access and wouldn’t accept subscription models for services. Advertising was the obvious business model, if never the best one. And it’s the model that social media also relies on, which leads it to prioritize engagement over anything else.

Both Google and Facebook believe that AI will help them keep their stranglehold on an 11-figure online ad market (yep, 11 figures), and the tech giants that are traditionally less dependent on advertising, like Microsoft and Amazon, believe that AI will help them seize a bigger piece of that market.

Big Tech needs something to persuade advertisers to keep spending on their platforms. Despite bombastic claims about the effectiveness of targeted marketing, researchers have long struggled to demonstrate where and when online ads really have an impact. When major brands like Uber and Procter & Gamble recently slashed their digital ad spending by the hundreds of millions, they proclaimed that it made no dent at all in their sales.

AI-powered ads, industry leaders say, will be much better. Google assures you that AI can tweak your ad copy in response to what users search for, and that its AI algorithms will configure your campaigns to maximize success. Amazon wants you to use its image generation AI to make your toaster product pages look cooler. And IBM is confident its Watson AI will make your ads better.

These techniques border on the manipulative, but the biggest risk to users comes from advertising within AI chatbots. Just as Google and Meta embed ads in your search results and feeds, AI companies will be pressured to embed ads in conversations. And because those conversations will be relational and human-like, they could be more damaging. While many of us have gotten pretty good at scrolling past the ads in Amazon and Google results pages, it will be much harder to determine whether an AI chatbot is mentioning a product because it’s a good answer to your question or because the AI developer got a kickback from the manufacturer.

#2: Surveillance

Social media’s reliance on advertising as the primary way to monetize websites led to personalization, which led to ever-increasing surveillance. To convince advertisers that social platforms can tweak ads to be maximally appealing to individual people, the platforms must demonstrate that they can collect as much information about those people as possible.

It’s hard to exaggerate how much spying is going on. A recent analysis by Consumer Reports about Facebook—just Facebook—showed that every user has more than 2,200 different companies spying on their web activities on its behalf.

AI-powered platforms that are supported by advertisers will face all the same perverse and powerful market incentives that social platforms do. It’s easy to imagine that a chatbot operator could charge a premium if it were able to claim that its chatbot could target users on the basis of their location, preference data, or past chat history and persuade them to buy products.

The possibility of manipulation is only going to get greater as we rely on AI for personal services. One of the promises of generative AI is the prospect of creating a personal digital assistant advanced enough to act as your advocate with others and as a butler to you. This requires more intimacy than you have with your search engine, email provider, cloud storage system, or phone. You’re going to want it with you constantly, and to most effectively work on your behalf, it will need to know everything about you. It will act as a friend, and you are likely to treat it as such, mistakenly trusting its discretion.

Even if you choose not to willingly acquaint an AI assistant with your lifestyle and preferences, AI technology may make it easier for companies to learn about you. Early demonstrations illustrate how chatbots can be used to surreptitiously extract personal data by asking you mundane questions. And with chatbots increasingly being integrated with everything from customer service systems to basic search interfaces on websites, exposure to this kind of inferential data harvesting may become unavoidable.

#3: Virality

Social media allows any user to express any idea with the potential for instantaneous global reach. A great public speaker standing on a soapbox can spread ideas to maybe a few hundred people on a good night. A kid with the right amount of snark on Facebook can reach a few hundred million people within a few minutes.

A decade ago, technologists hoped this sort of virality would bring people together and guarantee access to suppressed truths. But as a structural matter, it is in a social network’s interest to show you the things you are most likely to click on and share, and the things that will keep you on the platform.

As it happens, this often means outrageous, lurid, and triggering content. Researchers have found that content expressing maximal animosity toward political opponents gets the most engagement on Facebook and Twitter. And this incentive for outrage drives and rewards misinformation.

As Jonathan Swift once wrote, “Falsehood flies, and the Truth comes limping after it.” Academics seem to have proved this in the case of social media; people are more likely to share false information—perhaps because it seems more novel and surprising. And unfortunately, this kind of viral misinformation has been pervasive.

AI has the potential to supercharge the problem because it makes content production and propagation easier, faster, and more automatic. Generative AI tools can fabricate unending numbers of falsehoods about any individual or theme, some of which go viral. And those lies could be propelled by social accounts controlled by AI bots, which can share and launder the original misinformation at any scale.

Remarkably powerful AI text generators and autonomous agents are already starting to make their presence felt in social media. In July, researchers at Indiana University revealed a botnet of more than 1,100 Twitter accounts that appeared to be operated using ChatGPT.

AI will help reinforce viral content that emerges from social media. It will be able to create websites and web content, user reviews, and smartphone apps. It will be able to simulate thousands, or even millions, of fake personas to give the mistaken impression that an idea, or a political position, or use of a product, is more common than it really is. What we might perceive to be vibrant political debate could be bots talking to bots. And these capabilities won’t be available just to those with money and power; the AI tools necessary for all of this will be easily available to us all.

#4: Lock-in

Social media companies spend a lot of effort making it hard for you to leave their platforms. It’s not just that you’ll miss out on conversations with your friends. They make it hard for you to take your saved data—connections, posts, photos—and port it to another platform. Every moment you invest in sharing a memory, reaching out to an acquaintance, or curating your follows on a social platform adds a brick to the wall you’d have to climb over to go to another platform.

This concept of lock-in isn’t unique to social media. Microsoft cultivated proprietary document formats for years to keep you using its flagship Office product. Your music service or e-book reader makes it hard for you to take the content you purchased to a rival service or reader. And if you switch from an iPhone to an Android device, your friends might mock you for sending text messages in green bubbles. But social media takes this to a new level. No matter how bad it is, it’s very hard to leave Facebook if all your friends are there. Coordinating everyone to leave for a new platform is impossibly hard, so no one does.

Similarly, companies creating AI-powered personal digital assistants will make it hard for users to transfer that personalization to another AI. If AI personal assistants succeed in becoming massively useful time-savers, it will be because they know the ins and outs of your life as well as a good human assistant; would you want to give that up to make a fresh start on another company’s service? In extreme examples, some people have formed close, perhaps even familial, bonds with AI chatbots. If you think of your AI as a friend or therapist, that can be a powerful form of lock-in.

Lock-in is an important concern because it results in products and services that are less responsive to customer demand. The harder it is for you to switch to a competitor, the more poorly a company can treat you. Absent any way to force interoperability, AI companies have less incentive to innovate in features or compete on price, and fewer qualms about engaging in surveillance or other bad behaviours.

#5: Monopolization

Social platforms often start off as great products, truly useful and revelatory for their consumers, before they eventually start monetizing and exploiting those users for the benefit of their business customers. Then the platforms claw back the value for themselves, turning their products into truly miserable experiences for everyone. This is a cycle that Cory Doctorow has powerfully written about and traced through the history of Facebook, Twitter, and more recently TikTok.

The reason for these outcomes is structural. The network effects of tech platforms push a few firms to become dominant, and lock-in ensures their continued dominance. The incentives in the tech sector are so spectacularly, blindingly powerful that they have enabled six megacorporations (Amazon, Apple, Google, Facebook parent Meta, Microsoft, and Nvidia) to command a trillion dollars each of market value—or more. These firms use their wealth to block any meaningful legislation that would curtail their power. And they sometimes collude with each other to grow yet fatter.

This cycle is clearly starting to repeat itself in AI. Look no further than the industry poster child OpenAI, whose leading offering, ChatGPT, continues to set marks for uptake and usage. Within a year of the product’s launch, OpenAI’s valuation had skyrocketed to about $90 billion.

OpenAI once seemed like an “open” alternative to the megacorps—a common carrier for AI services with a socially oriented nonprofit mission. But the Sam Altman firing-and-rehiring debacle at the end of 2023, and Microsoft’s central role in restoring Altman to the CEO seat, simply illustrated how venture funding from the familiar ranks of the tech elite pervades and controls corporate AI. In January 2024, OpenAI took a big step toward monetization of this user base by introducing its GPT Store, wherein one OpenAI customer can charge another for the use of its custom versions of OpenAI software; OpenAI, of course, collects revenue from both parties. This sets in motion the very cycle Doctorow warns about.

In the middle of this spiral of exploitation, little or no regard is paid to externalities visited upon the greater public—people who aren’t even using the platforms. Even after society has wrestled with their ill effects for years, the monopolistic social networks have virtually no incentive to control their products’ environmental impact, tendency to spread misinformation, or pernicious effects on mental health. And the government has applied virtually no regulation toward those ends.

Likewise, few or no guardrails are in place to limit the potential negative impact of AI. Facial recognition software that amounts to racial profiling, simulated public opinions supercharged by chatbots, fake videos in political ads—all of it persists in a legal grey area. Even clear violators of campaign advertising law might, some think, be let off the hook if they simply do it with AI.

Mitigating the risks

The risks that AI poses to society are strikingly familiar, but there is one big difference: it’s not too late. This time, we know it’s all coming. Fresh off our experience with the harms wrought by social media, we have all the warning we should need to avoid the same mistakes.

The biggest mistake we made with social media was leaving it as an unregulated space. Even now—after all the studies and revelations of social media’s negative effects on kids and mental health, after Cambridge Analytica, after the exposure of Russian intervention in our politics, after everything else—social media in the US remains largely an unregulated “weapon of mass destruction.” Congress will take millions of dollars in contributions from Big Tech, and legislators will even invest millions of their own dollars with those firms, but passing laws that limit or penalize their behaviour seems to be a bridge too far.

We can’t afford to do the same thing with AI, because the stakes are even higher. The harm social media can do stems from how it affects our communication. AI will affect us in the same ways and many more besides. If Big Tech’s trajectory is any signal, AI tools will increasingly be involved in how we learn and how we express our thoughts. But these tools will also influence how we schedule our daily activities, how we design products, how we write laws, and even how we diagnose diseases. The expansive role of these technologies in our daily lives gives for-profit corporations opportunities to exert control over more aspects of society, and that exposes us to the risks arising from their incentives and decisions.

The good news is that we have a whole category of tools to modulate the risk that corporate actions pose for our lives, starting with regulation. Regulations can come in the form of restrictions on activity, such as limitations on what kinds of businesses and products are allowed to incorporate AI tools. They can come in the form of transparency rules, requiring disclosure of what data sets are used to train AI models or what new preproduction-phase models are being trained. And they can come in the form of oversight and accountability requirements, allowing for civil penalties in cases where companies disregard the rules.

The single biggest point of leverage governments have when it comes to tech companies is antitrust law. Despite what many lobbyists want you to think, one of the primary roles of regulation is to preserve competition—not to make life harder for businesses. It is not inevitable for OpenAI to become another Meta, an 800-pound gorilla whose user base and reach are several times those of its competitors. In addition to strengthening and enforcing antitrust law, we can introduce regulation that supports competition-enabling standards specific to the technology sector, such as data portability and device interoperability. This is another core strategy for resisting monopoly and corporate control.

Additionally, governments can enforce existing regulations on advertising. Just as the US regulates what media can and cannot host advertisements for sensitive products like cigarettes, and just as many other jurisdictions exercise strict control over the time and manner of politically sensitive advertising, so too could the US limit the engagement between AI providers and advertisers.

Lastly, we should recognize that developing and providing AI tools does not have to be the sovereign domain of corporations. We, the people and our government, can do this too. The proliferation of open-source AI development in 2023, successful to an extent that startled corporate players, is proof of this. And we can go further, calling on our government to build public-option AI tools developed with political oversight and accountability under our democratic system, where the dictatorship of the profit motive does not apply.

Which of these solutions is most practical, most important, or most urgently needed is up for debate. We should have a vibrant societal dialogue about whether and how to use each of these tools. There are lots of paths to a good outcome.

The problem is that this isn’t happening now, particularly in the US. And with a looming presidential election, conflict spreading alarmingly across Asia and Europe, and a global climate crisis, it’s easy to imagine that we won’t get our arms around AI any faster than we have (not) with social media. But it’s not too late. These are still the early years for practical consumer AI applications. We must and can do better.

Feature Image Credit: STEPHANIE ARNETT/MITTR | GETTY, ENVATO

By Nathan E. Sanders & Bruce Schneier

Nathan E. Sanders is a data scientist and an affiliate with the Berkman Klein Center at Harvard University. Bruce Schneier is a security technologist and a fellow and lecturer at the Harvard Kennedy School.

Sourced from MIT Technology Review

 

 


By Gary Fowler

It’s been almost six months since OpenAI dropped the ChatGPT bomb and effectively reinvigorated AI adoption across a multitude of verticals, sectors, industries and users. For the first time in years, we are seeing large-scale commercial adoption of generative AI technology, and new applications of LLMs are introduced by the day.

While individual uses have comprised some of the most common examples of how AI has been transforming our day-to-day workflows, the enterprise-level application of the technology is relatively overlooked, even though it holds immense potential.

Beyond its transformative strides at the individual consumer level, AI holds immense promise at the enterprise growth and process level: from enhancing creativity and productivity to streamlining business processes and decision making, generative AI can reform how organizations operate today.

In this article, I will explore five potential generative AI use cases for the enterprise sector which stand to unlock new opportunities and help drive innovation.

1. Creative Innovation And Branding

Generative AI opens new doors for enterprises to reach new levels of creativity and innovation, from branding and marketing efforts to content creation, internal communication and data visualization. With the help of tools such as Midjourney, Stable Diffusion, DALL-E and ChatGPT, among others, companies can produce high-quality content at high volume, both visual and textual.

Leaders should seek solutions that allow the brand to push boundaries in branding, brand storytelling, content creation (photo, video, text), A/B testing, blog writing, headline generation and a variety of other assets that strengthen the brand’s online and offline presence.

Generative AI is also a powerful lever to pull for user feedback, persona development, value proposition exploration and developing customized marketing campaigns and experiences that stem directly from customer needs and asks.

2. Decisions And Predictions Based Upon Deeper Foresight

Forecasting, future-proofing and planning are the cornerstones of successful enterprise management. In other words, leaders must constantly strive for efficient and sustainable growth built upon data-driven and well-informed decision making.

Generative AI offers a significant boost in this area by providing organizations with advanced data analysis and predictive analytics capabilities. By analysing large volumes of structured and unstructured data, generative AI can provide immediate and accurate insights, aiding decision makers in formulating strategies, optimizing processes and predicting industry trends.

Generative AI isn’t just a powerful source of predictive analytics. When provided with all the variables to account for, it can also simulate potential scenarios, giving decision makers a clear picture to aid data-driven decision making, risk mitigation and opportunity discovery.

3. Streamlined Workflows, Operations And Processes

Generative AI’s ability to analyse data and identify patterns stands as a powerful solution for optimizing workflows, identifying inefficiencies and building processes that are significantly more streamlined and automated. The application of this includes but is not limited to supply chain management, resource allocation and workflow automation—with generative AI transforming labour-intensive tasks into efficient and accurate processes.

In other words, generative AI is also able to assume more administrative or time-consuming, repetitive tasks that would otherwise prove to be a waste of time for employees who strive to drive more impact and plan strategically on a managerial level. This can not only save time and reduce costs but also enhance overall productivity, enabling employees to focus on more strategic and value-driven/high-involvement activities.

4. Personalized Customer Experiences

Building on the major branding, storytelling and marketing benefits of generative AI, the technology also opens doors to new ways of delivering exceptional customer experiences. Generative AI gives brands and organizations the tools to personalize their interactions with customers on a whole new level, from more powerful chatbots that proactively react to customer needs to unique personalized recommendations based on each customer’s preferences and previous activity.

By analysing large amounts of customer data, generative AI can generate individualized product/service recommendations, highly targeted ads and customized user interfaces fit for every user’s personal activity patterns. This is a direct path to higher customer satisfaction, loyalty and engagement while ensuring higher consistency in conversions and bottom-line impact.

5. Accelerated Research And Development

Research and development (R&D) is largely where innovation happens within the enterprise sector. Improving this sector is like throwing a brand new lifeline to any enterprise business—and generative AI has the power to significantly accelerate the R&D process by assisting in the ideation, prototyping and testing phases.

The simulation capabilities generative AI offers can allow for the mapping and exploration of a variety of formulas, outcomes, products and prototypes while significantly shortening time to market and product launches. This can allow businesses to stay ahead of the competition, adapt to rapidly changing markets and deliver cutting-edge solutions to their customers.

Revolution In The Enterprise Sector

Generative AI holds tremendous potential to provide organizations with unprecedented capabilities to innovate, streamline internal and external workflows and deliver custom-tailored experiences to their customers. From empowering creative positioning and branding initiatives to improving data-backed decision making, generative AI can drive enterprises to new heights of success.

As the enterprise sector continues to adopt generative AI, leaders must build out strategic initiatives that invest in the necessary infrastructure, talent and resources to stay ahead of the curve and leverage the technology to its full capacity. Close partnerships with AI experts, data scientists and researchers are only one of the many ways to efficiently incorporate generative AI into existing business processes and systems.

Feature Image Credit: Getty

By Gary Fowler

Gary Fowler is a serial AI entrepreneur with numerous startups and an IPO. He is CEO and cofounder of GSDVS.com and Yva.ai. Read Gary Fowler’s full executive profile here.

Sourced from Forbes

By James Vincent

Anthropic has expanded the context window of its chatbot Claude to 75,000 words — a big improvement on current models. Anthropic says it can process a whole novel in less than a minute.

An often overlooked limitation for chatbots is memory. While it’s true that the AI language models that power these systems are trained on terabytes of text, the amount these systems can process when in use — that is, the combination of input text and output, also known as their “context window” — is limited. For ChatGPT it’s around 3,000 words. There are ways to work around this, but it’s still not a huge amount of information to play with.

Now, AI startup Anthropic (founded by former OpenAI engineers) has hugely expanded the context window of its own chatbot Claude, pushing it to around 75,000 words. As the company points out in a blog post, that’s enough to process the entirety of The Great Gatsby in one go. In fact, the company tested the system by doing just this — editing a single sentence in the novel and asking Claude to spot the change. It did so in 22 seconds.

You may have noticed my imprecision in describing the length of these context windows. That’s because AI language models measure information not by number of characters or words, but in tokens; a semantic unit that doesn’t map precisely onto these familiar quantities. It makes sense when you think about it. After all, words can be long or short, and their length does not necessarily correspond to their complexity of meaning. (The longest definitions in the dictionary are often for the shortest words.) The use of “tokens” reflects this truth, and so, to be more precise: Claude’s context window can now process 100,000 tokens, up from 9,000 before. By comparison, OpenAI’s GPT-4 processes around 8,000 tokens (that’s not the standard model available in ChatGPT — you have to pay for access) while a limited-release full-fat model of GPT-4 can handle up to 32,000 tokens.
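
As a rough illustration of the word-versus-token distinction, the short sketch below counts both for a sample sentence using OpenAI’s open-source tiktoken tokenizer (its cl100k_base encoding, used by GPT-4-era models). This is only a stand-in: Anthropic uses its own tokenizer, so Claude’s exact counts will differ, but the familiar rule of thumb that one English token averages about three-quarters of a word is what links the 100,000-token figure to roughly 75,000 words.

```python
# Minimal sketch: words vs. tokens, using OpenAI's open-source tiktoken
# tokenizer as a stand-in. Anthropic's own tokenizer will give slightly
# different counts for Claude, but the idea is the same.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4-era models

text = "Falsehood flies, and the Truth comes limping after it."
tokens = enc.encode(text)

print(len(text.split()), "words")   # 9 words
print(len(tokens), "tokens")        # token count comes out a bit higher than the word count

# Rule of thumb: roughly 0.75 English words per token, so a 100,000-token
# context window holds on the order of 75,000 words of text.
print(f"~{int(100_000 * 0.75):,} words in a 100,000-token window")
```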

Right now, Claude’s new capacity is only available to Anthropic’s business partners, who are tapping into the chatbot via the company’s API. The pricing is also unknown, but is certain to be a significant bump. Processing more text means spending more on compute.

But the news shows AI language models’ capacity to process information is increasing, and this will certainly make these systems more useful. As Anthropic notes, it takes a human around five hours to read 75,000 words of text, but with Claude’s expanded context window, it can potentially take on the task of reading, summarizing and analyzing long documents in a matter of minutes. (Though it doesn’t do anything about chatbots’ persistent tendency to make information up.) A bigger context window also means the system is able to hold longer conversations. One factor in chatbots going off the rails is that they forget what’s been said once their context window fills up; that’s why Bing’s chatbot is limited to 20 turns of conversation. More context equals more conversation.

Feature Image Credit: Anthropic

By James Vincent

A senior reporter who has covered AI, robotics, and more for eight years at The Verge.

Sourced from The Verge