

Generative AI is starting to change shopping. Instead of scrolling on websites or strolling through stores, people are beginning to prompt AI agents to find, compare, and even purchase products. Ask for something like a handmade gift under $100, a pair of vintage jeans from the 1970s, or a digital camera for a teenager, and watch a list of curated options appear in the chat. It’s fast and frictionless. But it’s also early days. And just as companies had to adapt to the new rules of e-commerce, they now face a new set of challenges: how they manage their reputations, how they connect with customers, and how they compete in this new paradigm.

Categories like beauty, lifestyle, and apparel are moving fastest, and early adopters are already experimenting. But if things go wrong, the consequences could be both immediate and lasting. For consumer-facing brands, there are five core risks that could break consumer trust as AI agents begin to shop on customers’ behalf:

  1. Agents misunderstand products and make the wrong choice. When product attributes aren’t structured for machines, AI agents guess. They can misinterpret sizing, miss constraints, hallucinate features, or recommend items that are not aligned with the customer’s intent.
  2. Agents act beyond what customers expected or authorized. Without clear delegation boundaries, agents can overspend, ignore constraints, or make irreversible decisions without confirmation.
  3. Sensitive conversational data becomes a liability. Agentic shopping captures more than transactions. It captures intent, emotion, and context. If that data is stored opaquely, reused unexpectedly, or exposed through a breach, customers can feel surveilled rather than served.
  4. Brands lose control of how they’re represented. In agent ecosystems, outdated prices, inaccurate information, or undisclosed sponsored placements can reach customers before marketing or legal teams ever see them.
  5. When something breaks, there’s no clear way back. In automated journeys, failures feel colder and harder to resolve. If customers can’t understand what went wrong, reach a human, or be made whole quickly, a single bad interaction can permanently sever the relationship.

Left unaddressed, these issues don’t just frustrate customers. They create real operational and financial impact: chargebacks, returns, and customer support costs; privacy violations that trigger regulatory scrutiny or lawsuits; and reputational damage that erodes loyalty and slows adoption.

Much of this comes down to trust. To drive agentic commerce adoption at scale, brands need to figure out how to earn—and keep—customers’ trust. And to do that, they need to understand what can go wrong and the steps they can take now to prevent trust from being broken.

The Trust Gap Is Measurable

According to PwC’s 2025 Future of Consumer Shopping Survey, 64% of respondents said they need at least one safeguard, like a money-back guarantee, to feel comfortable letting an AI agent purchase for them. Even Gen Z and Gen Alpha, the most digitally native demographics, express caution alongside curiosity. Fundamental questions remain unanswered: Who has access to payment information? Who can authorize purchases? How is personal data stored and shared? Whose interests does the agent represent: the consumer’s, the tech platform’s, or the advertiser’s?

The challenge for brands in retail, consumer goods, and travel is both clear and urgent: How do you prepare for agentic commerce when the rules are still being written? You can’t fully control whether consumers adopt these tools. But you do have control over how your brand shows up in agent-driven experiences, and whether customers feel protected when they delegate decisions to AI.

Building the Trust Layer

We’ve seen this pattern before. In the early days of e-commerce, consumers were wary of entering credit card information on websites. But SSL encryption, PCI standards, and fraud protection transformed scepticism into confidence and unlocked mass adoption.

Agentic commerce needs its own trust infrastructure—what we call the trust layer. While trust can feel like an abstract concept, it breaks in specific, predictable ways: when agents misunderstand products, act beyond what customers expect, mishandle sensitive data, misrepresent brands, or leave consumers stranded when something goes wrong.

Addressing those risks requires concrete changes to how product data is structured, how delegation and consent are enforced, how data is protected, how brand presence is monitored in agent ecosystems, and how relationships are preserved when automation fails.

We recommend companies take five actions now to build that trust layer.

1. Structure your content for machines, not just humans.

To trust an AI agent, customers need it to return accurate and relevant information every time. This isn’t possible unless the agent can correctly understand the product and its features.

AI agents don’t browse visually or interpret nuance the way humans do. They digest text and numbers. That means product discoverability in agent-driven shopping depends less on branding or traditional search engine optimization (SEO) and more on machine-readable product data, an approach often referred to as generative engine optimization (GEO). Pricing, sizing, availability, materials, use cases, and constraints need to be expressed in formats agents can reliably parse and compare.

Consider two descriptions of the same hoodie:

  • “This sweatshirt is perfect for cozy fall nights.”
  • Material: fleece; temperature range: < 40°F; category: loungewear; fit: relaxed

While the first is written to evoke a specific vision in a customer, the second is optimized for an AI agent. To scale agentic commerce, companies may need to speak to both humans and agents, and be sure that they’re translating terms that customers naturally use—“lightweight,” “sustainable,” or “good for travel”—into an agent-focused product catalogue that maps those terms onto specific attributes.

Brands also need to make sure this information is accessible. While humans click from page to page and scan prose descriptions, descriptions for agents should be captured in machine-readable formats in your existing product information management systems and e-commerce platforms. They should also be formatted so agents can access them through APIs or web markup standards. Return policies, shipping info, and FAQs should similarly be modular and labelled. With information formatted and organized in the right way, agents can translate customer requests into precise matches.
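As a minimal sketch of what "machine-readable" can mean in practice, the hoodie above might be expressed using the public schema.org vocabulary as JSON-LD. The specific product name and price here are invented for illustration:

```python
import json

# Hypothetical hoodie from the example above, expressed as schema.org
# JSON-LD, one widely used machine-readable format agents can parse.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Relaxed Fleece Hoodie",      # illustrative product name
    "material": "fleece",
    "category": "loungewear",
    "additionalProperty": [
        {"@type": "PropertyValue", "name": "fit", "value": "relaxed"},
        {"@type": "PropertyValue", "name": "temperatureRange",
         "value": "< 40°F"},
    ],
    "offers": {
        "@type": "Offer",
        "price": "59.00",                 # illustrative price
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

# Serialize for embedding in a page's JSON-LD script tag or for
# serving through a product API.
print(json.dumps(product, ensure_ascii=False, indent=2))
```

The point is not the particular vocabulary but that every attribute a customer might ask about ("fit," "warmth," "in stock?") maps to a named, typed field an agent can compare across products.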

2. Define clear boundaries and build in consent.

Consumers won’t delegate purchasing decisions to AI agents unless they understand, clearly and upfront, what those agents are allowed to do. This requires explicit delegation boundaries and consent that is embedded into the experience, not buried in terms and conditions. Safe delegation requires three things: clear limits, traceability, and reversibility. Every agent action should be attributable to a specific authorization, under defined conditions, with a clear way to undo or dispute the outcome.

In their own channels—the company website, app, or branded agent—brands can set spending caps, require approval for purchases over certain amounts, and build in confirmation steps before checkout. For example, a retailer could program its agent to surface return policies before a final purchase, or to pause and ask for confirmation if a recommendation falls outside a user’s budget.
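The guardrails above (spending caps, confirmation thresholds, pauses before irreversible decisions) can be sketched as a simple policy check. This is a hypothetical illustration, not any platform's actual API:

```python
from dataclasses import dataclass

@dataclass
class DelegationPolicy:
    """Limits a customer has explicitly granted to a shopping agent."""
    spending_cap: float             # hard ceiling per purchase
    confirm_above: float            # amounts above this need explicit approval
    allow_final_sale: bool = False  # irreversible (non-returnable) purchases

def authorize(policy: DelegationPolicy, price: float,
              returnable: bool) -> str:
    """Return 'deny', 'confirm', or 'approve' for a proposed purchase."""
    if price > policy.spending_cap:
        return "deny"      # outside the delegated budget entirely
    if not returnable and not policy.allow_final_sale:
        return "confirm"   # irreversible decisions need a human in the loop
    if price > policy.confirm_above:
        return "confirm"   # within budget, but pause and ask first
    return "approve"

policy = DelegationPolicy(spending_cap=100.0, confirm_above=50.0)
print(authorize(policy, 39.99, returnable=True))   # approve
print(authorize(policy, 79.99, returnable=True))   # confirm
print(authorize(policy, 149.99, returnable=True))  # deny
```

Every decision the function returns is attributable to a specific, inspectable authorization, which is exactly the traceability safe delegation requires.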

When consumers use general-purpose AI platforms like ChatGPT, Claude, Google’s Gemini, or others to shop across multiple retailers, brands’ direct control is limited. While it may be technically possible to support safeguards like confirmation prompts or return-policy disclosures within these platforms, doing so requires collaboration between brands and platform providers. In the meantime, brands can still influence outcomes by ensuring their product data is accurate, structured, and complete (see action #1).

Industry efforts—such as Google’s Universal Commerce Protocol, Stripe and OpenAI’s Agentic Commerce Protocol, and Anthropic’s new constitution for Claude—point toward standardized ways to express what agents may do, when they must ask, and how consent is enforced. As agentic commerce moves from experimentation to scale, brands that treat delegation as an essential design problem will be the ones consumers trust.

3. Protect customer data and make that protection visible.

When consumers delegate tasks to AI agents, they share more than payment details. They share conversational context: preferences, constraints, intent, and often emotion. That context is what makes agentic shopping powerful, and what makes it uniquely sensitive. If customers don’t understand how that data is used, remembered, or protected, they won’t delegate in the first place.

As brands launch their own AI agents to help customers shop for products, they should embed privacy-preserving design directly into agentic interactions. For example, brands can use data minimization and anonymization techniques, so their agents retain only what is necessary to complete a task. Sensitive conversational signals can be processed transiently rather than stored indefinitely. Consent should be explicit and configurable, with clear choices about what is remembered, what is shared across sessions or platforms, and what is not.

Visibility matters as much as protection. Consumers should be able to see—and change—their privacy posture in real time. Some interactions may warrant persistence, such as remembering a preferred size or brand. Others may not. An “incognito” or one-time shopping mode, where interactions are not retained or used for future recommendations, gives customers a sense of control that mirrors how people already manage privacy in browsers and payments.
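One way to make these choices concrete is to tie retention directly to an explicit per-session mode. A sketch with invented field names, not a production design:

```python
# Hypothetical sketch: retain only what the customer has opted into,
# and nothing at all in one-time ("incognito") shopping mode.
PERSISTENT_FIELDS = {"preferred_size", "preferred_brands"}  # opted-in memory

def retained_profile(session: dict, incognito: bool) -> dict:
    """Data minimization: keep only explicitly allowed fields."""
    if incognito:
        return {}  # process transiently, store nothing
    return {k: v for k, v in session.items() if k in PERSISTENT_FIELDS}

session = {
    "preferred_size": "M",
    "preferred_brands": ["Acme"],            # invented brand name
    "chat_transcript": "...",                # sensitive conversational context
    "inferred_mood": "stressed",             # emotional signal, never persisted
}
print(retained_profile(session, incognito=False))  # sizes and brands only
print(retained_profile(session, incognito=True))   # {}
```

Because the allow-list is explicit, it can also be surfaced to the customer as the literal answer to "what do you remember about me?", making the protection visible rather than implied.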

4. Observe how your brand shows up in agent ecosystems.

In agentic commerce, AI platforms may become the first (and sometimes only) interface between your brand and a customer. When that happens, trust depends on what the platform’s agent says on your behalf. If an agent surfaces outdated pricing, invents product features, omits critical context, or cites unreliable sources, customers don’t see a system error. They see a brand failure.

That’s why brands need agentic observability: the ability to monitor, in real time, how AI agents describe their products, which sources they rely on, how recommendations are framed, and what actions are being taken downstream. This requires ongoing visibility into prompts, responses, citations, and decision logic across the agent ecosystems where customers are shopping.

Without observability, brands lose the ability to detect misrepresentation, correct errors, or understand why a product was or wasn’t recommended. As agents increasingly act as intermediaries, monitoring how your brand shows up is no longer optional.
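At its simplest, observability means continuously comparing what agents say against your catalog of record. The following sketch flags price drift; the catalog, SKUs, and agent responses are all invented for illustration:

```python
# Sketch: flag agent responses whose quoted price has drifted from the
# catalog of record. All data here is hypothetical.
catalog = {"SKU-123": {"name": "Relaxed Fleece Hoodie", "price": 59.00}}

agent_responses = [
    {"sku": "SKU-123", "quoted_price": 59.00, "platform": "AgentA"},
    {"sku": "SKU-123", "quoted_price": 49.00, "platform": "AgentB"},  # stale
]

def price_drift_alerts(catalog: dict, responses: list,
                       tolerance: float = 0.01) -> list:
    """Return responses whose quoted price deviates from the catalog."""
    alerts = []
    for r in responses:
        truth = catalog.get(r["sku"])
        if truth and abs(r["quoted_price"] - truth["price"]) > tolerance:
            alerts.append(r)
    return alerts

for alert in price_drift_alerts(catalog, agent_responses):
    print(f"{alert['platform']} misquotes {alert['sku']}: "
          f"{alert['quoted_price']} vs {catalog[alert['sku']]['price']}")
```

The same pattern extends beyond price: any attribute that exists in structured form (availability, materials, return windows) can be diffed against what agents are telling customers, turning misrepresentation from an anecdote into a measurable alert.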

5. Preserve relationships and plan for recovery.

Even when agents handle transactions, brands still own the relationship. And as shopping becomes more automated, brands should embed branded agents in third-party platforms, extend loyalty programs through agents, and design seamless escalation paths to reach a human when needed.

When things break, and they will, the response matters more than the failure. Recovery mechanisms should be built in from the start: real-time alerts, clear escalation paths, and explainability. Some brands are already simulating agentic shopping journeys with synthetic customers to stress-test before launch. Trust is built through accountability, transparency, and making customers whole when errors occur.

Trust as Strategy, Not Compliance

AI-driven shopping will scale when consumers feel secure. That requires systems that are well-governed, transparent, and aligned with human expectations. The brands that lead won’t treat trust as a compliance exercise. They’ll treat it as a core part of their commerce strategy—building the technical standards, business practices, and consumer protections that make delegation safe. Those who act now will help define the rules of this emerging ecosystem.

Feature image credit: KKGAS/Stocksy


Ali Furman is the consumer markets industry leader at PwC and an M&A partner. She writes and speaks widely on consumer markets trends and the future of business. She has been featured in many outlets including ABC, CBS, CNBC, Forbes, Vogue Business, and Bloomberg.
Ege Gürdeniz is an AI trust leader and technology risk expert at PwC. He advises companies on how to build trust, safety, and governance into AI-driven products, platforms, and business models.
Rima Safari leads data, analytics, and AI for PwC US and serves as the firm’s strategic alliance leader with OpenAI. She writes and speaks widely on AI strategy, agentic systems, and data readiness required for scaling AI, and her perspectives have been featured across leading business and technology forums.
Remzi Ural is the AI leader for consumer markets within PwC. He has been recognized as a thought leader for AI strategy definition and adoption, particularly with retail and consumer packaged goods clients, driving business outcomes and standing up modern AI capabilities.

Sourced from Harvard Business Review
