By Rachel Curry
SearchGPT has already been dubbed by some the “Google killer.”
Arvind Jain, a former Google (GOOGL) engineer and now CEO of the enterprise A.I. search platform Glean, never saw Google’s approximately 90 percent market share in online search as overtly anticompetitive—after all, Google always had a superior search product, Jain said. In recent years, however, innovation seems to have given way to profitability. “The experience was getting worse, especially on mobile devices, where there are just way too many ads on the page,” the former Googler told Observer.
For the first time in many years, competition is ramping up. In July, OpenAI announced SearchGPT, an A.I.-powered search engine that many have already dubbed the “Google killer.” Smaller players, such as Perplexity AI, are also gaining momentum in the search space.
“There is more serious competition than ever before,” Ashwini Karandikar, executive vice president of media, technology and data at the American Association of Advertising Agencies, an industry group, told Observer. Karandikar’s perspective is rooted in decades of industry experience, during which she watched digital advertising grow from just 5 percent of a company’s advertising budget to practically 100 percent.
Technologically, answer engines powered by large language models (LLMs) have the potential to shake up the search and digital advertising markets, but Jain doesn’t think they’re commercially ready yet. “Personally, as a user, I don’t feel comfortable going to these answer engines,” he said. That’s because most of them don’t provide the sources of the information from which they generate answers. Some chatbots are starting to cite sources, but this feature is still in its early stages. Ultimately, Jain said, competitors will have to lean into a hybridized solution that combines traditional search results with direct answers for an optimized user experience.
That need for transparency has roots in the trust gap highlighted by consumer-facing A.I. products. The A.I. trust gap is “the sum of the persistent risks (both real and perceived) associated with A.I.,” Bhaskar Chakravorti, a business professor at Tufts University, wrote in a recent article for the Harvard Business Review. Common concerns around A.I. include deepfakes, hallucinations, data privacy and A.I.’s inherent black-box problem. Last year, Pew Research found that 52 percent of Americans feel more concerned than excited about the increased use of A.I., with people particularly torn about its application for finding accurate information online.
To establish public trust, companies like OpenAI, Google, Microsoft, Meta and Amazon are all prioritizing self-regulation. These companies are on a steering committee for a truth-seeking organization called C2PA, or the Coalition for Content Provenance and Authenticity. It’s an “open technical standard providing publishers, creators and consumers the ability to trace the origin of different types of media,” according to the coalition’s website.
Sourced from Observer