
Why One Bank’s Data Expert Says the Real AI Race Is Fixing Foundational Governance

Cresta News Desk
Published October 3, 2025
Credit: Outlever

Key Points

  • Enterprise AI's vision for autonomous agent teams and improved customer experience is hindered by poor data quality.

  • Scotiabank's Rahul Iyer champions centralized data as the foundational strategy for AI success, directly impacting customer interactions.

  • He foresees the human role evolving into "curators of intelligence" who build essential context, leading to more reliable AI and better customer service.

  • A critical executive blind spot exists, he says, obscuring the foundational data work and human effort needed to deliver AI-driven customer value.

We shifted swiftly towards the data governance aspect because senior leadership knew that if the data is correct, then there is potential for an AI agent to become so good that it can actually change the role of a lot of analysts within the team.

Rahul Iyer
digital analytics professional and data strategist
Scotiabank

Enterprise AI's move from single, experimental bots to entire teams of autonomous agents promises better customer experiences and operational efficiencies, but a more fundamental problem is stopping that vision in its tracks: data quality. A messy web of siloed, inconsistent data hinders advanced AI capabilities and their potential to transform customer interactions. The unseen architecture of data governance and human expertise will determine whether these digital teams succeed or stumble.

It's a reality Rahul Iyer, a digital analytics professional and data strategist at Scotiabank, confronts daily. Specializing in turning complex data into actionable insights for banking, publishing, and logistics, Iyer first encountered the data roadblock during an internal hackathon at Scotiabank. His team sees centralized data as the starting point for enterprise AI's long-term strategy, ultimately designed to deliver more seamless and personalized customer experiences.

A centralized data foundation, in this view, is the prerequisite for strategic alignment and, by extension, superior customer outcomes. Once that foundation is in place, the problem of distributed agent teams becomes solvable through a three-part framework: a shared metadata layer, AI-powered onboarding tools, and a collaborative culture, all contributing to a more cohesive customer journey.

  • Bedrock of bots: For Iyer, any conversation about advanced agentic workflows must begin with the data itself. He believes that before an enterprise can successfully deploy AI, it must first ensure data integrity, which directly impacts the reliability of AI-driven customer interactions. "We don't want our stakeholders to ask, 'Hey, is the AI agent correct, or is the data underneath it incorrect?'" Iyer says. He adds that the goal is to be in a position where "the data is always correct and the AI agent just builds on top of it," ensuring customers receive accurate and consistent service.

  • Curating intelligence: Beyond data integrity, Iyer sees the analyst's role being redefined: shifting from data processing to a central function as a "curator of intelligence," the human expert who builds the context AI needs to thrive and intelligently serve customers. "We shifted swiftly towards the data governance aspect because senior leadership knew that if the data is correct, then there is potential for an AI agent to become so good that it can actually change the role of a lot of analysts within the team," he says, freeing them to focus on more nuanced customer challenges.

According to Iyer, some leaders' intense focus on flashy AI outputs can create a dangerous blind spot, obscuring the human effort behind the technology. This can lead to customer-facing AI that lacks necessary context, resulting in frustrating or inaccurate interactions. "You need to feed descriptions for every single column in your dataset," he explains. "They still need that context so the AI understands what each data point is." This painstaking human effort to create metadata makes the technology intelligent and ensures it can truly understand customer needs. "Executives need to have some sort of awareness of how much work is involved and how humans are making the AI work, and then make the right decisions. I don't think they have that visibility right now." This lack of visibility poses a strategic risk for AI projects and an existential threat to employees, and it compromises the quality of service delivered to customers.
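The column-description work Iyer describes can be pictured as a lightweight metadata layer: a record of what each field means, maintained by analysts and rendered as plain-text context for an AI agent. The sketch below is illustrative only; the dataset and column names are hypothetical, not Scotiabank's.

```python
# A minimal sketch of column-level metadata: every field gets a
# human-written description an AI agent can read as context.
# All dataset/column names here are hypothetical examples.
from dataclasses import dataclass, field


@dataclass
class ColumnMeta:
    name: str
    dtype: str
    description: str  # the human-curated context the AI relies on


@dataclass
class DatasetMeta:
    dataset: str
    columns: list[ColumnMeta] = field(default_factory=list)

    def missing_descriptions(self) -> list[str]:
        """Flag columns an analyst still needs to document."""
        return [c.name for c in self.columns if not c.description.strip()]

    def as_prompt_context(self) -> str:
        """Render the metadata as plain text an agent can be prompted with."""
        lines = [f"Dataset: {self.dataset}"]
        lines += [f"- {c.name} ({c.dtype}): {c.description}" for c in self.columns]
        return "\n".join(lines)


meta = DatasetMeta("customer_transactions", [
    ColumnMeta("txn_amount", "decimal", "Transaction amount in CAD, net of reversals."),
    ColumnMeta("channel", "string", "Origination channel: branch, online, or mobile."),
    ColumnMeta("cust_segment", "string", ""),  # not yet documented
])

print(meta.missing_descriptions())  # columns still lacking context
print(meta.as_prompt_context())
```

The `missing_descriptions` check is one way to surface the "painstaking human effort" Iyer points to: it makes undocumented fields visible before an agent is ever built on top of them.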

  • Herding digital cats: An AI-managed future presents its own set of problems. In Iyer's vision, small teams of humans will oversee the work of much larger digital teams, whose collective performance directly impacts customer service. That kind of structure, however, introduces new problems in enterprise architecture. "I'm foreseeing a future where two humans might have a team of 10 or 12 AI agents under them," he predicts. "Each of these AI agents would be evaluated based on their own performance metrics. I'm assuming it would be similar to how performance reviews happen for employees today," ensuring they are optimized for customer outcomes. Iyer notes that a single, monolithic AI team would be difficult to debug and maintain, potentially leading to service disruptions. Instead, he suggests that each team or business unit would likely develop its own agentic AI team, allowing for more agile and responsive customer solutions.

  • No insight is an island: Iyer explains that if different teams can bring different perspectives on how to prompt the same AI, and one team gets a unique insight, "that should be broadcasted across the company or taught to relevant teams." This, he believes, fosters a collaborative "prompt engineering culture" where shared knowledge prevents duplication, boosts value, and ensures a consistent, high-quality customer experience regardless of the interaction point. AI can also assist in this, using documentation to build onboarding materials for new teams, bridging knowledge gaps to ensure all AI-powered customer touchpoints are informed.

But that forward-looking vision is tempered by the foundational work Iyer emphasizes is required to make it real. He concludes that the focus for the next six to twelve months is standardization. "If a data point means something in my PowerPoint, it should mean the exact same thing in other platforms that other team members are working on," he explains. Establishing this consistency is the necessary first step to standardizing subsequent processes, reducing confusion, saving time and budget, and ultimately producing more reliable, trustworthy AI that can power a better customer experience.
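The standardization Iyer calls for can be sketched as a canonical data dictionary checked against what each platform reports. The metric names and platform labels below are invented for illustration, not drawn from Scotiabank's systems.

```python
# A toy sketch of cross-platform standardization: one canonical
# definition per data point, compared against each platform's copy.
# Metric names and platform labels are illustrative assumptions.
canonical = {
    "active_customers": "Distinct customers with >=1 transaction in the last 90 days",
}

platform_defs = {
    "powerpoint_deck": {
        "active_customers": "Distinct customers with >=1 transaction in the last 90 days",
    },
    "bi_dashboard": {
        "active_customers": "Customers with a login in the last 30 days",
    },
}


def find_mismatches(canonical, platform_defs):
    """Return (platform, metric) pairs whose definition drifts from canon."""
    return [
        (platform, metric)
        for platform, defs in platform_defs.items()
        for metric, definition in defs.items()
        if canonical.get(metric) != definition
    ]


print(find_mismatches(canonical, platform_defs))  # flags the drifted definition
```

Running a drift check like this is one simple way to make "the same data point means the same thing everywhere" enforceable rather than aspirational.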