
Club Med's CAIO on Why 'Super Agents' Sometimes Compound CX Problems Before Solving Them

Cresta News Desk
Published September 17, 2025
Credit: Outlever

Key Points

  • Siddhartha Chatterjee, Club Med's Global Chief Data & AI Officer, warns against the "super agent" concept, advocating for a disciplined multi-agent pipeline.

  • Chatterjee attributes Club Med's 83% AI project success rate to a framework emphasizing governance, ethics, and data quality.

  • He advises focusing on a modular backend and unified UX to avoid "Shadow AI" and enhance AI project success.

You can’t have one employee who does everything. That’s impossible. That’s why organizations have departments, experts, managers, and leaders, all working in sync. AI will be the same. Companies will have hundreds or even thousands of agents, but they need to be categorized, linked, governed, and centrally monitored so their performance can be tracked.

Siddhartha Chatterjee

VP and Global Chief Data & AI Officer
Club Med

As retail giants like Walmart bet big on single, all-knowing "super agents" to simplify AI tool sprawl and create a unified customer experience, it raises the question: Are super agents the future of CX or just a powerful mirage? A growing chorus of practitioners warns that chasing a monolithic mega-agent could be a dangerous distraction from the real path to scalable AI, which lies in a disciplined pipeline of specialized agents unified by a clean UX and anchored in governance.

We spoke with Siddhartha Chatterjee, VP and Global Chief Data & AI Officer at Club Med. Chatterjee has spent his career driving business transformation at global firms like Publicis Groupe and Ogilvy, where he spearheaded data strategies for brands including Nestlé, AXA, and Netflix. He argued that the CX industry’s current obsession with the "super agent" is pulling focus away from the foundational work that actually moves the needle.

"I'm against a super agent approach because they're simply not secure enough, and they're difficult to govern," said Chatterjee. "There could be hallucinations, loss of data control, and regulatory challenges. These LLMs struggle when too much data is fed into the context window, which increases the risk of errors. What you need instead is a more modular backend architecture."

  • A tale of two architectures: If cohesion is the goal, super agents aren't the answer. "A unified interface might let you activate one or two different agents, but that's a UX philosophy rather than a technical architecture," he explained. In other words, the simplicity leaders crave at the front end does not inherently require—and should not be confused with—a single, all-powerful agent at the back end.

  • The one employee fallacy: To illustrate, Chatterjee compared AI agents to the workforce inside any company. "You can’t have one employee who does everything. That’s impossible," he said. "That’s why organizations have departments, experts, managers, and leaders, all working in sync. AI will be the same. Companies will have hundreds or even thousands of agents, but they need to be categorized, linked, governed, and centrally monitored so their performance can be tracked." Effective AI isn’t about one all-powerful agent, but about building a coordinated system of specialists under strong oversight.

Before even considering super agents, many organizations struggle to escape "pilot purgatory" on typical targeted AI applications, with industry reports suggesting only 25% to 30% of AI projects ever generate ROI. But Chatterjee’s team at Club Med stands as a stark outlier, built on a foundation of discipline that predates the generative AI boom.

  • A disciplined pipeline: "Today we have a production rate of around 80%, compared with a market average where only a fraction of use cases make it into production," Chatterjee said. This success is no accident. "We have a framework for developing AI products that runs from ideation to production to adoption, and we follow it meticulously. Nobody is allowed to bypass this framework. Even if a request comes from the very top, we don’t just start building."

  • Quality ethics, quality data: Governance is built directly into the framework. Chatterjee emphasized two key safeguards: a formal ethics charter and strict data governance. "We have an AI ethics committee in place with all the major stakeholders, from the CISO to legal to the executive committee," he explained. "We also publish our guiding principles—transparency, accountability, safety, confidentiality, and user centricity, whether customer or employee." That same rigor extends to data, where governance and quality standards ensure systems remain trustworthy. By linking ethics and data discipline directly to customer and employee trust, Chatterjee’s team ensures adoption is not only possible but sustainable.

Chatterjee stressed that this framework and governance approach is what keeps AI use safe, secure, and on the rails. "You need to be proactive to avoid shadow AI. When you hand out tools and systems with no governance, people start using off-the-shelf solutions without proper understanding or training."

The lesson for CX leaders is clear: don't chase the fantasy of a single, all-powerful agent. Scalable AI is built on a disciplined pipeline of well-defined use cases, simplified experiences that encourage adoption, and a modular backend designed for flexibility. And critically, all of it must sit on a foundation of strong governance to protect trust and ensure consistency across touchpoints. "It's impossible to have one agent that can handle all of your use cases," Chatterjee said. "If you want a do-everything agent, you're better off just using ChatGPT as a generalized agent."