CX Current 2026: Contact Center Leaders Tap Into Agentic AI for Resolution, Revenue, & Resilience
The CX Current 2026 research report explains how agentic AI, new metrics, and coaching turn contact centers into CX engines for resolution, revenue, and growth.

Key Points
CX Current has compiled a new editorial research report on how agentic AI is reshaping work, metrics, and outcomes in contact centers.
We combined Cresta research, live-conversation analytics, customer case studies, and interviews with CX, support, and success leaders across industries to understand what actually changed in 2025.
The organizations seeing the strongest results are pairing AI agents with human teams, resetting metrics, redesigning coaching and QA, and giving AI clear roles and guardrails.
The difference between a good contact center and a great one is how well they orchestrate between AI and humans.
Executive Summary
AI is everywhere, but resolution lags. Nearly every contact center now runs some form of AI. Yet few AI-only conversations end in full resolution. Meanwhile, human agents still handle the most challenging, most emotional work, often without the tools, metrics, or support to match this new reality.
Human + AI teams win. When AI supports agents instead of trying to replace them, organizations see higher satisfaction, more revenue, faster ramp, and several hours of busywork removed from each agent’s week. The most significant gains come from behavior-focused metrics, targeted coaching, and unified QA for humans and machines.
2026 is about discipline, not experimentation. Leaders who succeed will automate low-risk, high-annoyance tasks first, extend conversation intelligence to AI agents, and treat agentic AI as part of the workforce with training, metrics, and accountability.
What Contact Centers Look Like in 2025
Agentic AI is now part of daily life in the contact center. Agents log in to platforms that listen to calls, suggest next steps, and draft summaries. AI agents greet customers, collect information, and complete simple tasks on chat and SMS. Leaders talk openly about “virtual teammates” and “copilots” rather than “pilots” and “proofs of concept.”
Yet the most challenging work still lands on humans. As AI absorbs status checks and routine questions, agents spend more time on:
Cancellations, complaints, and saves
Multi-step journeys that span several channels
Financial, health, or safety issues where the stakes are real
Meanwhile, many of the systems around them still reflect an earlier era. Metrics often reward speed more than resolution. Coaching focuses on volume more than precision. Bots and humans operate in parallel, each with partial views of the same customer.
The result is a turning point. Agentic AI is proving it can handle real work. But it's also forcing leaders to see where processes are weak, where incentives conflict, and where trust is fragile. The next step is to adopt a different approach to managing work, outcomes, and risk across human and AI teams.
Signals from the Field
“AI is finally getting good at those repetitive behind-the-scenes tasks that every support person hates. That means my team can spend more time on the human interactions that actually matter.” — Kenji Hayward, Senior Director of Customer Support, Front
“There’s a polarized view on AI. Some people think it’s the solution to everything, others don’t trust it at all. But the real issue for large enterprises is still the basics: getting your data and processes in order so AI has something to work with.” — Ali Shoukat, Sr. Manager, Enterprise Architecture, Hilton
“Anything that agents have to do over and over again with no judgment involved is a candidate for AI. But if you automate broken processes, you’re just delivering a bad experience faster.” — Aurelia Pollet, Vice President of Customer Experience, CarParts.com
“It’s not AI versus human. It’s AI versus being ignored or waiting a week for a response. If we can get an answer in minutes instead of silence, customers will take that every time.” — Wes Griffith, Sr. Director, Global Consumer Support Experience, Coinbase
“AI doesn’t automatically understand sarcasm, slang, or cultural nuance. In Mexico, one word can be positive or negative depending on context. You need experienced people to interpret that and train the models so sentiment and themes actually reflect what customers mean.” — Montserrat Padierna, Customer Intelligence and Experience Lead, Walmart Canada
“Support teams are an organization’s immune system. They’re the first to feel where something’s wrong. AI can help us detect patterns faster, but we still need humans to diagnose what’s really going on.” — James Elsner, Customer Support Team Lead and SME, Tapcheck
What's Changed in the Past 12 Months?
AI is widespread, but success rates are low. AI is now standard in the contact center. In fact, 98% of contact centers use some form of AI, from chatbots and voicebots to analytics and scheduling tools. More than half of companies now manage virtual assistants and human agents under the same team.
Yet resolution remains low when AI operates alone. Fewer than 20% of AI-handled conversations reach successful resolution. In other words, most AI systems today assist with parts of the interaction, but still need humans to close the loop.
Agents want that assistance. Today, 65% want real-time AI assistance during customer interactions, and 95% of agents already using AI say it helps them resolve issues quickly and efficiently.
Now, the picture is clear: AI is nearly universal in the contact center, and agents are eager to use it. But most AI-only conversations still fall short of full resolution.
Conversations are more challenging and emotional. AI now takes the easiest work off the table. The leftover set of conversations is tougher. Leaders describe a shift from highly repetitive inbound questions to:
Entangled billing and product issues
Journeys that touch web, chat, SMS, and voice before they reach an agent
Situations where a mistake affects health, housing, or financial stability
Customer behavior reinforces this complexity: 75% of customers use multiple channels in a single transaction, and more than half expect personalization at every touchpoint.
However, many contact centers still run separate bots and workflows per channel. Systems often fail to preserve context as customers move from web to chat to SMS to voice. By the time a customer reaches an agent, they often feel impatient, confused, or frustrated.
Legacy metrics don't deliver value. Average handle time (AHT) is widely tracked, but it no longer reflects value in and of itself. Across travel, hospitality, and financial services, calls that lead to sales or other positive outcomes are significantly longer than the average.
Coaching patterns show a similar mismatch. In most cases, increasing the number of coaching sessions does not reliably improve behavioral adherence. Because sessions aren't targeted at the specific behaviors driving outcomes, many teams spend more time in coaching meetings without seeing clear gains.
The conclusion is straightforward: Shorter calls are not always better, and more coaching is not always more effective. Metrics and programs designed for efficiency alone no longer capture where value is created.
Agentic AI is introducing risk. Agentic AI now behaves more like a junior employee than a script. AI is already fully automating around 20% of customer interactions for some organizations, with leaders expecting that share to rise. In many small and mid-sized contact centers, executives report that AI agents now handle 30–60% of routine tasks, particularly in self-service and low-complexity workflows.
However, adoption is still early at the application level, and risk is real. Gartner forecasts that more than 40% of agentic AI projects will be canceled by the end of 2027 due to escalating costs, unclear business value, or weak risk controls.
That combination of high ambition, high failure risk, and growing complexity shapes the landscape for 2026. Leaders now have to treat agentic AI as part of the workforce and the operating model, not as a set of point solutions, and they need to set clear expectations for where AI will act independently and where humans must remain in the loop.
Cresta's Expert Insights
“I’m deliberate when I use the word ‘transformation’ because I believe it’s a lost art, particularly in the contact center.” — James Russell, Head of the Customer Transformation Group, Cresta
“If you’re not advancing with technology and tools, your competitors are, and you’re going to be left behind.” — Rachel Bloch, Conversational AI Designer, Cresta
“The biggest struggle of adopting any AI solution is that you need to be willing to let go of what you had before.” — Brittany Benjamin Bell, Senior Strategic Customer Success Manager, Cresta
“Agentic AI can absolutely act like an employee. But just like employees, it needs to be trained, supported, and audited. Otherwise, it will do exactly what you asked instead of what you intended.” — Mark Meghezzi, Head of EMEA, Cresta
“The industry needs to grow up and embrace regulation instead of fighting against it. Regulation is really what’s going to make AI mainstream-ready.” — Robert Kugler, Head of Security, IT, and Compliance, Cresta
CX Playbook: Six Practical Actions
1. Reset metrics to reward the right outcomes.
If your KPIs still treat low AHT and high volume as the leading indicators of success, AI will optimize for speed rather than value. Start by auditing your top-level KPIs. What do they encourage agents and supervisors to do? Then, add or elevate:
Resolution rate across humans and AI
Behavior metrics like discovery, explanation, and clear commitments
Revenue per conversation and save rates where relevant
Agent satisfaction and burnout indicators
Communicate the changes to frontline teams: explain how "good" is now defined and how AI will support that definition moving forward.
2. Start with simple, high-annoyance use cases.
AI delivers the most obvious value early by eliminating repetitive work that agents dislike and that customers find tedious. First, list your top inbound reasons by volume and complexity. Then, flag interactions where the path to resolution is clear and limited, or the emotional stakes are low. For example:
Order status updates
Basic password and profile changes
Routine account information
Drafting after-call notes and tags for agents
Measure containment rate, customer satisfaction, and time saved per agent.
3. Use AI to aim coaching at high-leverage behaviors.
The most significant value of conversation intelligence is its ability to home in on what matters for each agent. Start by defining 5–7 behaviors that correlate with your best outcomes. For instance:
Asking clarifying questions early
Summarizing next steps clearly
Handling objections without defensiveness
Configure your AI tools to flag calls where those behaviors are strong or weak, and suggest 3–5 calls per agent each week for review. Then, train supervisors to use AI-suggested calls as the starting point for coaching. Tie feedback to specific sentences and moments, and track behavior scores over time, plus their impact on CSAT, revenue, or savings.
4. Build journeys around shared context.
While most customers move across channels, many contact centers still treat each touchpoint as a standalone interaction. Map a few critical journeys in detail, including:
New customer onboarding
High-risk cancellations
Complex complaints
For each journey, define:
What information AI should collect and store
When AI should hand off to a human
What context must travel with the customer
Make sure your systems present full histories to agents at the moment of contact, and allow AI to read and write within the same record. Then, pilot one journey where an AI agent handles the early steps and a human handles the high-stakes steps, both working from the same view.
5. Treat AI agents like employees.
Agentic AI now behaves like a junior agent who knows the system, but not the full context. For each AI agent that interacts with customers, define:
The scope of responsibility
Allowed actions
Escalation triggers
For interactions where AI handled most of the work, set KPIs like resolution rate, containment, and CSAT. Bring AI agents into QA by scoring a sample of AI-led interactions each week, and establish ownership by assigning a named person or team as the "manager" of each AI agent.
6. Protect human expertise.
Human agents should focus on the work AI cannot do well. First, identify conversation types where humans must lead. For example:
Financial hardship
Health and safety issues
Regulatory disputes
Complex churn risks
Then, invest in training for de-escalation, negotiation, and problem-solving, as well as in working alongside AI, including knowing when to override or escalate. Design career paths that allow agents to grow into roles such as journey owners, AI trainers, and QA leads.
Signals from the Field
"Just embrace it, but start small. Test AI on incremental improvements, measure the ROI, learn from the bumps, and let those early wins build trust in the process." — Gabriel Lozano, CRM Lead, Athena Home Loans
“We treat AI as a copilot, not autopilot. The tools can speed up research, prep, and follow-up, but judgment and ownership still sit with the person talking to the customer.” — Taylor Conyers, Senior Customer Success Manager, Enterprise, Hootsuite
“In healthcare, we have to be careful. If AI starts making suggestions without the right data or consent, we can lose trust that took years to build.” — Brindha Sridhar, Vice President of Customer Experience Strategy, MetroPlusHealth
“In banking, the key question is not just ‘does this work’ but ‘can we explain it.’ If we can’t show regulators why a particular decision was made, we shouldn’t make it with AI.” — Khushboo Mishra, Assistant Vice President, HSBC
“In financial services, agentic AI can surface incredible insights. But we still need humans to translate that into a narrative that makes sense for clients, boards, and regulators. The context switching is intense.” — Ankur Gupta, Chief of Staff to the Chief Customer Officer, FedEx
“The difference between a good contact center and a great one is not how much AI they have. It’s how well they orchestrate between the AI and the humans.” — Jae Washington, Head of Community, Enterprise Experiences, Headspace
“We use AI to do things like auto-generate QBR decks and analyze usage patterns. But the value is still in the conversation with the customer. You need context and judgment to decide what actually matters.” — Shi Yunn Chua, Director of Customer Success, Wellhub
Pressure Test Your 2026 Plan
Use these questions to stress-test your plan for next year.
Metrics and incentives: Do our KPIs reward resolution, behavior, revenue, and agent health, or do they reward speed alone? Have we clearly communicated what “good” looks like in a human-AI environment?
AI use cases: Have we identified a small set of low-risk, high-annoyance tasks as first automation targets? Do we know which conversations should always route to humans?
Coaching and QA: Are supervisors using AI to choose which calls to review and which behaviors to coach? Do we score AI-led interactions with the same rigor as human-led ones?
Journey design: Does customer context persist across channels and between AI and human agents? Have we defined clear routing and escalation rules for our top journeys?
Governance and ownership: Does each AI agent have a defined role, owner, and KPI set? Do we have audit trails and explainability where regulators or brand risk demand it?
Human expertise and careers: Are we investing in the specific skills humans need in an AI-rich environment? Do agents see real career paths that involve working with and on AI, not just alongside it?
Workforce and hiring: Do we understand which future roles AI will change, and how that affects hiring today? Are we building job descriptions and performance criteria that assume AI is present?
Where the Contact Center Goes Next
Agentic AI is now part of the contact center’s core operating system. It handles real tasks, shapes real conversations, and influences real outcomes. But the past year has also proven that AI on its own does not guarantee success. Many AI-only conversations still end without resolution. Many pilots stall. Yet when AI and humans work together with the right metrics, coaching, and governance, the results speak for themselves.
Senior leaders now need to decide where AI fits in their workforce, not just their tech stack. They need to define metrics that reward value, not just speed, and invest in human skills that matter more, not less, in an AI-rich environment. They also need to set boundaries around where AI must defer to human judgment, especially when health, safety, and long-term trust are at stake. The contact centers that make those moves in 2026 will deliver better experiences for customers, create better jobs for agents, and turn agentic AI into a strategic advantage.
How this Editorial Report Came Together
This report draws on a mix of real-world insights and Cresta research to reflect the reality of the "Agentic AI" shift in 2025. First, we conducted in-depth interviews with CX, support, and customer success leaders from financial services, healthcare, retail, automotive, technology, and other industries, including executives from Hilton, Coinbase, HSBC, and Walmart Canada. Cresta’s proprietary data supplied the quantitative backbone, including insights from the "State of the Agent 2024" report and "Cresta IQ" analysis of millions of conversation minutes. We also incorporated external benchmarks from Gartner, Metrigy, and Calabrio. The editorial team compared patterns across these sources to identify where agentic AI is already delivering clear value, where it falls short, and which practices separate leaders from laggards. The result is a concise view of 2025 and a practical playbook for contact center leaders designing human-AI operations for 2026 and beyond.





