CX Current 2026

Contact Center Leaders Tap Into Agentic AI for Resolution, Revenue, & Resilience

Executive Summary

AI is widespread, but success rates are low.

Nearly every contact center now runs some form of AI. Yet few AI-only conversations end in full resolution. Meanwhile, human agents still handle the most challenging, most emotional work, often without the tools, metrics, or support to match this new reality.

Human + AI teams win.

When AI supports agents instead of trying to replace them, organizations see higher satisfaction, more revenue, faster ramp, and several hours of busywork removed from each agent’s week. The most significant gains come from behavior-focused metrics, targeted coaching, and unified QA for humans and machines.

2026 is about discipline.

Leaders who succeed will automate low-risk, high-annoyance tasks first, extend conversation intelligence to AI agents, and treat agentic AI as part of the workforce with training, metrics, and accountability.

What Contact Centers Looked Like in 2025...

The difference between a good contact center and a great one is how well they orchestrate between AI and humans.

Jae Washington
Head of Community, Enterprise Experiences
@
Headspace

Agentic AI became part of daily life in the contact center.

2025 brought massive strides for agentic systems in the contact center. Human agents now log in to platforms that listen to calls, suggest next steps, and draft summaries. AI agents greet customers, collect information, and complete simple tasks on chat and SMS. Leaders talk openly about “virtual teammates” and “copilots” rather than “pilots” and “proofs of concept.”

In fact, 88% of organizations report regular AI use in at least one business function. Meanwhile, more than half of companies now manage virtual assistants and human agents under the same team.

Executives say the urgency to adopt is palpable.

According to a recent AI agent survey, 67% of executives agree that AI agents will drastically transform existing roles in the next 12 months, 73% believe that how they use AI agents will be a competitive advantage over the same period, and 57% are actively using or planning to use agents for customer service.

Meanwhile, 48% say they will likely increase headcount because of the changes AI agents will bring to how work gets done. By 2029, Gartner predicts agentic AI will autonomously resolve 80% of common customer service issues without human intervention, driving a 30% reduction in operational costs.

It’s not AI versus human. It’s AI versus being ignored or waiting a week for a response.

Wes Griffith
Customer Support Team Lead and SME
@
Coinbase

But resolution remains low when AI operates alone.

Historically, fewer than 20% of AI-handled conversations have reached successful resolution. In other words, most AI systems today assist with parts of the interaction but still need humans to close the loop.

Most agents welcome AI assistance.

According to recent research, 65% of human agents want real-time AI assistance during customer interactions, and 95% of those already using AI say it helps them resolve issues quickly and efficiently. Agents using AI, however, are roughly twice as likely as non-users to  because of the technology available to them.

If you automate broken processes, you’re just delivering a bad experience faster.

Aurelia Pollet
Vice President of Customer Experience
@
CarParts.com

Yet the most challenging work still lands on humans.

As AI absorbs status checks and routine questions, agents spend more time on cancellations, complaints, and saves across multi-step journeys spanning several channels and often involving financial, health, or safety issues where the stakes are high.

Meanwhile, over half of customer interactions remain transactional, despite significant efforts to eliminate them.

Customer behavior reinforces complexity.

Today, more than three-quarters of customers use multiple channels in a single transaction, and more than half expect personalization at every touchpoint.

But many contact centers still run separate workflows per channel.

Systems often fail to preserve context as customers move from web to chat to SMS to voice. By the time a customer reaches an agent, they often feel impatient, confused, or frustrated.

Legacy metrics no longer deliver the same value.

Many systems still reflect an earlier era. Metrics like average handle time (AHT) are widely tracked, but no longer reflect value in and of themselves. Across travel, hospitality, and financial services, calls that lead to sales or other favorable outcomes are significantly longer than the average.

The real issue for large enterprises is still the basics: getting your data and processes in order so AI has something to work with.

Ali Shoukat
Sr. Manager, Enterprise Architecture
@
Hilton

Coaching patterns show a similar mismatch.

In most cases, increasing the number of coaching sessions does not reliably improve behavioral adherence, and extra sessions can even create busywork. Because sessions aren’t targeted at the specific behaviors driving outcomes, many teams spend more time in coaching meetings without seeing clear gains.

In contrast, agents who receive AI-personalized coaching rate it nearly 3x more effective than one-size-fits-all coaching.

The value is still in the conversation with the customer. You need context and judgment to decide what actually matters.

Shi Yunn Chua
Director of Customer Success
@
Wellhub

Agentic AI is introducing risk.

Agentic AI now behaves like a junior employee rather than a script. Heading into 2026, 35% of organizations plan to automate more than 60% of inbound inquiries by 2028.

Yet 4 out of 5 businesses still allocate less than 10% of their overall customer care budget to AI. Meanwhile, 50% are stuck in pilot mode, 80% are uncomfortable letting AI run end-to-end operations, and 35% lack a clear AI roadmap and use-case hierarchy.

In banking, the key question is not just ‘does this work’ but ‘can we explain it.’

Khushboo Mishra
Assistant Vice President
@
HSBC

But adoption is still early, and the risk is real.

Banking, technology, and telecommunications are the sectors furthest along in embedding AI and scaling modern customer care models. Meanwhile, the bottom 30% of organizations maintain steady but static service operations, relying on standardized training, manual processes, and legacy systems with little channel integration or automation.

In fact, Gartner forecasts that more than 40% of agentic AI projects will be canceled by the end of 2027 due to escalating costs, unclear business value, or weak risk controls.

That combination of high ambition, high failure risk, and growing complexity shapes the landscape for 2026.

Leaders now have to treat agentic AI as part of the workforce and the operating model, not as a set of point solutions, and they need to set clear expectations for where AI will act independently and where humans must remain in the loop.

If AI starts making suggestions without the right data or consent, we can lose trust that took years to build.

Brindha Sridhar
Vice President of Customer Experience Strategy
@
MetroPlusHealth

Cresta Expert Insights

I’m deliberate when I use the word ‘transformation’ because I believe it’s a lost art, particularly in the contact center.

James Russel
Head of the Customer Transformation Group
@
Cresta

Agentic AI can absolutely act like an employee. But just like employees, it needs to be trained, supported, and audited.

Mark Meghezzi
Head of EMEA
@
Cresta

If you’re not advancing with technology and tools, your competitors are, and you’re going to be left behind.

Rachel Bloch
Conversational AI Designer
@
Cresta

Regulation is really what’s going to make AI mainstream-ready.

Robert Kugler
Head of Security, IT, and Compliance
@
Cresta

The biggest struggle of adopting any AI solution is that you need to be willing to let go of what you had before.

Brittany Benjamin Bell
Senior Strategic Customer Success Manager
@
Cresta

CX Playbook:
6 Practical Actions

1

Reset metrics to reward the right outcomes.

If your KPIs still treat low AHT and high volume as the leading indicators of success, AI will optimize for speed rather than value. Start by auditing your top-level KPIs. What do they encourage agents and supervisors to do? Then, add or elevate:

  • Resolution rate across humans and AI
  • Behavior metrics like discovery, explanation, and clear commitments
  • Revenue per conversation and save rates where relevant
  • Agent satisfaction and burnout indicators

Communicate the changes to frontline teams by explaining how to define “good” and how AI will support that definition moving forward.

2

Start with simple, high-annoyance use cases.

AI delivers the most obvious value early by eliminating repetitive work that agents dislike and that customers find tedious. First, list your top inbound reasons by volume and complexity. Then, flag interactions where the path to resolution is clear and limited, or the emotional stakes are low. For example:

  • Order status updates
  • Basic password and profile changes
  • Routine account information
  • Drafting after-call notes and tags for agents

Measure containment rate, customer satisfaction, and time saved per agent.
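
To keep score on these early use cases, a simple automation scorecard can be computed directly from interaction logs. The Python sketch below is a minimal illustration under assumed field names (contained, csat, agent_id, minutes_saved), not a specific platform’s reporting schema:

  from collections import defaultdict

  def automation_scorecard(interactions):
      # Share of conversations fully handled by AI without a human handoff.
      containment_rate = sum(1 for i in interactions if i["contained"]) / len(interactions)

      # Average CSAT across the conversations that received a rating.
      rated = [i["csat"] for i in interactions if i.get("csat") is not None]
      avg_csat = sum(rated) / len(rated) if rated else None

      # Minutes of busywork removed per human agent (e.g., drafted notes and tags).
      saved = defaultdict(float)
      for i in interactions:
          if i.get("agent_id"):
              saved[i["agent_id"]] += i.get("minutes_saved", 0.0)

      return {"containment_rate": containment_rate, "avg_csat": avg_csat,
              "minutes_saved_per_agent": dict(saved)}

  example = [
      {"contained": True, "csat": 4.6, "agent_id": None, "minutes_saved": 0.0},
      {"contained": False, "csat": 4.1, "agent_id": "a42", "minutes_saved": 3.5},
  ]
  print(automation_scorecard(example))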

3

Use AI to aim coaching at high-leverage behaviors.

The most significant value of conversation intelligence is its ability to home in on what matters for each agent. Start by defining 5–7 behaviors that correlate with your best outcomes. For instance:

  • Asking clarifying questions early
  • Summarizing next steps clearly
  • Handling objections without defensiveness

Configure your AI tools to flag calls for closer review where those behaviors are strong or weak. Train supervisors to use AI-suggested calls as the starting point for coaching. Then, tie feedback to specific sentences and moments, and track behavior scores over time, along with their impact on CSAT, revenue, or savings.
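
For teams that want to quantify this, the Python sketch below scores behavior adherence per call and checks how it tracks with CSAT. The three behaviors and the field names are illustrative assumptions, and the correlation helper requires Python 3.10 or later:

  from statistics import mean, correlation  # correlation requires Python 3.10+

  BEHAVIORS = ["clarifying_questions", "clear_next_steps", "objection_handling"]

  def adherence(call):
      # Share of the target behaviors observed on a single scored call.
      return mean(1.0 if call["behaviors"].get(b) else 0.0 for b in BEHAVIORS)

  calls = [
      {"behaviors": {"clarifying_questions": True, "clear_next_steps": True}, "csat": 4.2},
      {"behaviors": {"clear_next_steps": True}, "csat": 3.1},
      {"behaviors": {"clarifying_questions": True, "clear_next_steps": True,
                     "objection_handling": True}, "csat": 4.8},
  ]

  scores = [adherence(c) for c in calls]
  print("average adherence:", round(mean(scores), 2))
  print("adherence-to-CSAT correlation:",
        round(correlation(scores, [c["csat"] for c in calls]), 2))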

4

Build journeys around shared context.

While most customers move across channels, many contact centers still treat each touchpoint as a standalone interaction. Map a few critical journeys in detail, including:

  • New customer onboarding
  • High-risk cancellations
  • Complex complaints

For each journey, define:

  • What information AI should collect and store
  • When AI should hand off to a human
  • What context must travel with the customer

Make sure your systems present full histories to agents at the moment of contact, and allow AI to read and write within the same record. Then, pilot one journey where an AI agent handles the early steps and a human handles the high-stakes steps, both working from the same view.
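
One way to prototype that shared view is a single journey record that both AI and human agents read and write. The Python sketch below is illustrative only; the field names and journey labels are assumptions, not a particular CRM or platform schema:

  from dataclasses import dataclass, field
  from datetime import datetime, timezone
  from typing import Optional

  @dataclass
  class JourneyContext:
      customer_id: str
      journey: str                               # e.g. "cancellation", "onboarding"
      steps: list = field(default_factory=list)  # every touchpoint, in order
      facts: dict = field(default_factory=dict)  # information collected so far
      handoff_reason: Optional[str] = None       # set when AI escalates to a human

      def log_step(self, channel, actor, note):
          self.steps.append({"at": datetime.now(timezone.utc).isoformat(),
                             "channel": channel, "actor": actor, "note": note})

  # AI agent handles the early, low-stakes steps...
  ctx = JourneyContext(customer_id="c-1001", journey="cancellation")
  ctx.log_step("chat", "ai_agent", "Verified identity and collected cancellation reason")
  ctx.facts["cancel_reason"] = "price"
  ctx.handoff_reason = "save attempt requires human judgment"

  # ...and the human agent picks up the same record for the high-stakes step.
  ctx.log_step("voice", "human_agent", "Reviewed history and offered retention plan")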

5

Treat AI agents like employees.

Agentic AI workflows most often falter due to a lack of context. For each AI agent that interacts with customers, define:

  • The scope of responsibility
  • Allowed actions
  • Escalation triggers

For interactions where AI handles most of the work, set KPIs like resolution rate, containment, and CSAT. Bring AI agents into QA by scoring a sample of AI-led interactions each week, and establish ownership by assigning a named person or team as the “manager” of each AI agent.
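
As a starting point for that weekly QA cycle, the Python sketch below pulls a random sample of AI-led interactions for human scoring. The sample share, minimum, and field names are assumptions to adapt to your own volumes and rubric:

  import random

  def weekly_qa_sample(interactions, sample_share=0.02, minimum=25, seed=None):
      # Keep only conversations that an AI agent led end to end.
      ai_led = [i for i in interactions if i.get("handled_by") == "ai_agent"]
      size = min(len(ai_led), max(minimum, int(len(ai_led) * sample_share)))
      return random.Random(seed).sample(ai_led, size)

  # Each sampled interaction is then scored on the same QA rubric used for
  # human-led conversations and reviewed by the AI agent's named "manager".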

6

Protect human expertise.

Human agents should focus on the work AI cannot do well. First, identify conversation types where humans must lead. For example:

  • Financial hardship
  • Health and safety issues
  • Regulatory disputes
  • Complex churn risks

Then, invest in training for de-escalation, negotiation, and problem-solving, as well as working alongside AI, including when to override or escalate. Design career paths that allow agents to grow into roles such as journey owners, AI trainers, and QA leads.

Pressure Test Your 2026 Plan

Use these questions to stress-test your plan for the upcoming year.

Metrics and incentives.

Do our KPIs reward resolution, behavior, revenue, and agent health, or do they reward speed alone? Have we clearly communicated what “good” looks like in a human-AI environment?

AI use cases.

Have we identified a small set of low-risk, high-annoyance tasks as first automation targets? Do we know which conversations should always route to humans?

Coaching and QA.

Are supervisors using AI to choose which calls to review and which behaviors to coach? Do we score AI-led interactions with the same rigor as human-led ones?

Journey design.

Does customer context persist across channels and between AI and human agents? Have we defined clear routing and escalation rules for our top journeys?

Governance and ownership.

Does each AI agent have a defined role, owner, and KPI set? Do we have audit trails and explainability where regulators or brand risk demand it?

Human expertise and careers.

Are we investing in the specific skills humans need in an AI-rich environment? Do agents see real career paths that involve working with and on AI, not just alongside it?

Workforce and hiring.

Do we understand which future roles AI will change, and how that affects hiring today? Are we building job descriptions and performance criteria that assume AI is present?

Where the Contact Center Goes Next

Agentic AI is now part of the contact center’s core operating system. It handles real tasks, shapes real conversations, and influences real outcomes. But the past year has also proven that AI on its own does not guarantee success. Many AI-only conversations still end without resolution, and many pilots stall.

Yet when AI and humans work together with the right metrics, coaching, and governance, the results speak for themselves.

Senior leaders now need to decide where AI fits in their workforce, not just their tech stack. They need to define metrics that reward value, not just speed, and invest in human skills that matter more, not less, in an AI-rich environment. They also need to set boundaries around where AI must defer to human judgment, especially when health, safety, and long-term trust are at stake. 

The contact centers that make those moves in 2026 will deliver better experiences for customers, create better jobs for agents, and turn agentic AI into a strategic advantage.

Signals from the Field

We still need humans to translate that into a narrative that makes sense for clients, boards, and regulators.

Ankur Gupta
Chief of Staff to the Chief Customer Officer
@
FedEx

AI is finally getting good at repetitive tasks, so my team can spend more time on the human interactions that actually matter.

Kenji Hayward
Senior Director of Customer Support
@
Front

You need experienced people to interpret that and train the models so sentiment and themes reflect what customers mean.

Montserrat Padierna
Customer Intelligence and Experience Lead
@
Walmart Canada

AI can help us detect patterns faster, but we still need humans to diagnose what’s really going on.

James Elsner
Customer Support Team Lead and SME
@
Tapcheck

Methodology

This report draws on a mix of real-world insights and Cresta research to reflect the reality of the “Agentic AI” shift in 2025.

First, we conducted in-depth interviews with CX, support, and customer success leaders from financial services, healthcare, retail, automotive, technology, and other industries, including executives from Hilton, Coinbase, HSBC, and Walmart Canada.

Cresta’s proprietary data supplied the bulk of the quantitative backbone, including insights from the “State of the Agent 2024” report and “Cresta IQ” analysis of millions of conversation minutes. We also incorporated external benchmarks from Gartner, Metrigy, McKinsey, Adobe, PwC, and Grand View Research.

The editorial team compared patterns across these sources to identify where agentic AI is already delivering clear value, where it falls short, and which practices separate leaders from laggards.

The result is a concise view of 2025 and a practical playbook for contact center leaders designing human-AI operations for 2026 and beyond.