
A Cresta Customer Success Leader Explains Why Flawed Metrics Stall AI Adoption and How New KPIs Can Help

Cresta News Desk
Published December 2, 2025

Brittany Benjamin Bell, a customer success leader at Cresta, explains why most AI investments layer technology on top of outdated processes and measure success using contradictory KPIs.


Key Points

  • Most AI investments layer technology on top of outdated processes and measure success using flawed metrics with no correlation to business outcomes.

  • Brittany Benjamin Bell, a customer success leader at Cresta, explains why success in this new era often requires challenging traditional KPIs that companies have always trusted, like QA and CSAT scores.

  • The solution is comprehensive data analysis with a narrow focus on the few key activities that are actually proven to drive real business results.

The biggest struggle of adopting any AI solution is that you need to be willing to let go of what you had before. If you're still stuck in a legacy format, layering automation on top isn't going to help. You're just doing everything here plus there, and that makes the job twice as tedious.

Brittany Benjamin Bell

Senior Strategic Customer Success Manager | Cresta

Most companies are making massive investments in cutting-edge AI solutions, but many of them might also be making a critical mistake. When they layer AI on top of an already-flawed foundation, workflows often become more complicated. Meanwhile, on the front lines, AI adoption generally stalls before the promised ROI ever materializes.

Brittany Benjamin Bell, a Senior Strategic Customer Success Manager at Cresta, has seen the adoption challenge from both sides. First, as an Executive Director at TTEC, Bell was responsible for the performance of large sales teams serving clients like Meta and Microsoft. In her current role, she helps organizations implement the very technology meant to empower those teams.

“The biggest struggle of adopting any AI solution is that you need to be willing to let go of what you had before. If you're still stuck in a legacy format, layering automation on top isn't going to help. You're just doing everything here plus there, and that makes the job twice as tedious,” Bell says. For her, the origin of this struggle is a fundamental disconnect between executives and frontline users.

  • Minor glitch, major grievance: Users must fit new tools into a workflow crowded with "200 random pieces of software," Bell continues. Because trust is already so fragile, minor issues can have an outsized impact here. "You have a very narrow window to build trust with users. Sometimes leaders don't see the forest for the trees. They might have a great new platform, but if a couple of people a day report that it froze, the overall sentiment can quickly become negative, even if the tool worked perfectly for thousands of other interactions."

In contrast, Bell's approach to change management is intensely hands-on. She describes a recent on-site visit where she gave a hospitality client a clear choice: lower standards to pass a test, or retrain the team to meet a more challenging bar that would actually improve the business. "Until you're actually in a room with someone, sitting down with your hands on the keyboard, details can slip their minds," she says. "It's remarkable what that in-person enablement achieves. They doubled their QM results in a week just because they finally understood the process."

  • Questioning the score: For Bell, focusing on tangible results also reveals a core problem with outdated metrics. Too many contact centers are crippled by KPIs that create friction, she explains. “Who cares about the QA score? Is it correlated with greater customer engagement, satisfaction, or lifetime value? If not, start from scratch.”

  • Dueling data points: Legacy metrics also force teams to chase contradictory goals. "I hear all the time people say, 'Man, I really want to drive up the conversion rate, but I also need to drive down handle time.' That doesn't necessarily work," Bell says.

Rather than fighting the tech sprawl typical in modern enterprises, Bell champions a collaborative, data-driven strategy. By analyzing data across all platforms, she tests whether popular metrics actually correlate with business outcomes.

  • A case of the feels: Surprisingly, the results often expose a flawed system. “The CSAT scores are based on, 'I stubbed my toe today, so I'm giving you a zero,' or 'It was my birthday, and you said happy birthday, so I'm giving you a 10,' even though the actual conversation itself wasn't great.”

Now, the goal is to save leaders from the "huge time suck" of chasing insufficient data and refocus them on outcome-driven work, Bell concludes. But successful adoption is only the beginning: the next phase is proving continuous value by focusing narrowly on the metrics that matter. For her, grounding coaching in objective data eliminates subjective debates. "The common excuse, 'Oh, you just picked a bad call,' no longer works when data reveals trends across all conversations." Instead, a narrow focus simplifies work for employees at every level and concentrates their efforts on what creates real impact. “If you know what you’re looking for and you can apply it across everything, you can make so much of the noise go away.”