Are Average Handle Time Metrics Losing Relevance as AI Pushes the Hardest Interactions to Human Agents?
Alain Mowad, VP of Product & Customer Marketing at Aspect Software, explains why the new North Star metric is successful outcomes, why AI agents need performance reviews just like humans, and why leaders should ask whether AI is saving costs or just moving them.

AI is handling more of the straightforward contact center volume, which sounds like an efficiency win until you look at what lands on human agents. The interactions that get through are longer, harder, more emotional, and more likely to span multiple channels before they resolve. Measuring those interactions by how fast they end is not just inaccurate. It actively misrepresents the work.
Alain Mowad is VP of Product & Customer Marketing at Aspect Software, a workforce management and contact center platform provider. Before Aspect, he held senior product marketing leadership roles at C1, Talkdesk, 8x8, and RingCentral, spanning more than two decades in cloud contact center and CX technology.
"Average handle time as it stands today is just not a good metric anymore. As the more complex interactions reach agents now, they need more time to get the issue resolved, and the customer expects that," says Mowad.
The metric that made sense no longer does
Mowad grounds the shift in what changed about the work itself. When tier-one calls dominated the queue, average handle time and average speed of answer were reasonable proxies for productivity. That era is ending.
"What used to be just a call is no longer just a call," Mowad says. "It may start as a call but end up as an email or a workflow that continues." He points to loan processing: a customer starts an application, speaks with an adviser, and the adviser processes the application and follows up. That entire sequence is a single transaction. Measuring it by handle time on the initial call misses the point.
"The new metric is a successful outcome. Was the loan processed? Was the issue resolved?" Mowad says. Customers and prospects in recent think tank discussions consistently reported that focusing on outcomes instead of speed improved CSAT and overall CX quality.
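The shift Mowad describes can be sketched as a change in what gets counted: score the end-to-end transaction by whether it resolved, not by how long any one touchpoint lasted. The field names below are illustrative, not from any specific platform:

```python
# Sketch: score a multi-channel transaction by outcome, not handle time.
# Transaction shape and field names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Transaction:
    steps: list = field(default_factory=list)  # e.g. ['call', 'email', 'workflow']
    resolved: bool = False                     # was the loan processed / issue resolved?

def successful_outcome_rate(transactions):
    """Fraction of end-to-end transactions that reached a resolution,
    regardless of how many channels or minutes they spanned."""
    if not transactions:
        return 0.0
    return sum(t.resolved for t in transactions) / len(transactions)

txns = [
    Transaction(steps=['call', 'email', 'workflow'], resolved=True),
    Transaction(steps=['call'], resolved=False),
]
print(successful_outcome_rate(txns))  # 0.5
```

Note that the three-step transaction and the single call count equally here; a handle-time metric would have rewarded the unresolved short call instead.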
AI agents need performance reviews too
Mowad argues that agentic AI bots should be treated as part of the workforce, measured against defined objectives, and optimized when they underperform, just like human agents.
"They are given a job to be done. Do they get the job done?" Mowad says. "I'm expecting an 80% automation rate for this transaction. But the automation rate is only 50%. Why? What's the delta, and how do we improve it?" He also tracks whether bots are self-improving and whether they can scale to handle additional volume without degradation.
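The kind of bot performance review Mowad describes, an 80% target against a 50% observed rate, reduces to a per-transaction scorecard. A minimal sketch, with hypothetical field names and targets:

```python
# Hypothetical bot scorecard: compare observed automation rates
# against per-transaction-type targets, as in the 80% vs 50% example.
def automation_review(transactions):
    """transactions: dict mapping transaction type to
    {'target': float, 'handled_by_bot': int, 'total': int}."""
    report = {}
    for name, t in transactions.items():
        observed = t['handled_by_bot'] / t['total']
        report[name] = {
            'target': t['target'],
            'observed': round(observed, 2),
            'delta': round(t['target'] - observed, 2),  # the gap to close
        }
    return report

review = automation_review({
    'loan_status': {'target': 0.80, 'handled_by_bot': 500, 'total': 1000},
})
print(review['loan_status'])  # target 0.8, observed 0.5, delta 0.3
```

In practice the delta would feed the "why?" investigation Mowad calls for, broken down by intent, channel, or escalation reason.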
For bots interacting directly with customers, Mowad adds a satisfaction layer. "Is the bot handling the interaction in a way the customer is happy with, or is it going in a loop?" he says. Quality of interaction matters as much as completion rate.
The handoff determines everything
Where AI and human agents intersect is where the customer experience either holds together or falls apart. Mowad describes two failure modes and one pattern that works.
"I spent five minutes of my time, and now I have to go to an agent to actually get the work done," Mowad says. When a bot cannot complete the interaction it was designed to handle and pushes the customer into a restart, that is a clear failure.
A successful handoff follows the intended workflow. Mowad describes a high-touch business where the bot triages calls, identifies the customer's need, and transfers them with full context. "The agent continues the conversation. It starts with: you've gotten this far. Now we're going to take you the rest of the way," he says. "The customer feels continuity."
Data hygiene and ROI honesty
Mowad connects the metrics conversation back to data hygiene. "Your outcomes are only going to be as good as the data. If your data is bad, you can expect a lot of hallucinations," he says. Once data is clean, AI enables capabilities like real-time intraday schedule adjustments that help workforce managers get ahead of shifts instead of reacting to them.
For leaders taking AI investments to the C-suite, Mowad warns that the cost story is more complicated than most presentations suggest. AI is typically consumption-based, and multiple customers have told him that costs are not going down. They are moving around.
"You have to ask: am I saving costs, or am I just moving costs?" Mowad says. He recommends framing ROI over six to 18 months rather than expecting immediate payback, and pairing cost data with CX improvement. "If you can combine a measurable ROI with improved customer experience, that's something you can take to leadership. Even if our costs remain the same, we elevated CX to another level. That's what they should be taking to the C-suite."
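Mowad's framing, net cost movement paired with CX improvement over a 6-to-18-month window, can be sketched as a simple summary. All figures here are hypothetical:

```python
# Sketch of the ROI framing: net cost change plus CSAT movement
# over a multi-month window, instead of expecting immediate payback.
# All numbers are illustrative.
def roi_summary(months):
    """months: chronological list of dicts with
    'cost_saved', 'cost_added' (e.g. AI consumption fees), and 'csat'."""
    net_cost = sum(m['cost_saved'] - m['cost_added'] for m in months)
    csat_change = round(months[-1]['csat'] - months[0]['csat'], 2)
    return {
        'net_cost_change': net_cost,
        'csat_change': csat_change,
        'saving_not_moving': net_cost > 0,  # saving costs, or just moving them?
    }

summary = roi_summary([
    {'cost_saved': 10_000, 'cost_added': 12_000, 'csat': 4.1},
    {'cost_saved': 15_000, 'cost_added': 11_000, 'csat': 4.4},
])
# Early months run negative as consumption costs land; the window-level
# view shows whether the curve crosses into genuine savings.
```

Even when `net_cost_change` is flat, a positive `csat_change` is the "elevated CX" story Mowad suggests taking to the C-suite.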