How AI Is Moving Support Roles Upmarket Into Higher-Skill, Higher-Impact Functions

Cresta News Desk
Published
March 30, 2026

Daniel Bunton, Director of Customer Support at Lawhive, is redesigning support roles as automation removes routine tier-one tasks.

Credit: Outlever

Key Points

  • As AI automates tier-one support, "easy" tickets that once trained new agents are disappearing, leaving teams to handle more complex, unpredictable cases that require deeper problem-solving.

  • Daniel Bunton, Director of Customer Support at Lawhive, is redesigning support operations around this shift, moving agents into higher-value roles such as reviewing AI outputs, building workflows, and training automated systems.

  • By introducing systems like points-based performance tracking and focusing on post-interaction outcomes rather than surface-level metrics, teams are aligning support work more closely with real customer resolution and long-term value.

AI is fundamentally changing the role of human agents, shifting them into higher-complexity work and new value creation like training systems, validating outputs, and even building the bots themselves.

Daniel Bunton

Director of Customer Support | Lawhive

In highly automated support environments, AI is taking over the simple, repetitive tier-one tickets. But that success is creating a new challenge on the front line: the "easy" work that once helped train new agents is disappearing. Instead, the tickets that reach human agents are more complex, less predictable, and harder to resolve. As a result, support teams are rethinking how work is structured, from training and onboarding to how cases are handed off between AI systems and the human agents who now carry a heavier cognitive load.

Daniel Bunton, Director of Customer Support at the AI-native consumer law firm Lawhive, navigates this shift daily. Focused on human-in-the-loop data annotation and gamification, he has reworked how people, processes, and data align, introducing a points-per-hour system that drove a 50% improvement in team efficiency. His approach reflects how support work is being redesigned from the inside out. As routine tasks disappear, teams now need more adaptable problem solvers, forcing changes in hiring, onboarding, and how agents are trained to work alongside AI.

"AI is fundamentally changing the role of human agents, shifting them into higher-complexity work and new value creation like training systems, validating outputs, and even building the bots themselves," Bunton notes. From his perspective, automation is rewriting the career ceiling for support staff. Instead of displacing workers, he explained that AI turns experienced agents into trainers and designers of automated workflows. "No one understands the customer journey like a customer support agent. They know every nook and cranny, and they know exactly what a customer is feeling on a certain page. They're the best people to build this stuff."

  • Mavericks over manuals: This shift is also changing how teams think about hiring. As automation takes on repeatable work, the focus is moving toward agents who can handle exceptions and think beyond scripts. In some cases, a small number of high-performing agents, using tools like Claude and Cursor, can match the output of much larger teams. For growing companies, that kind of leverage creates flexibility, allowing them to scale impact without scaling headcount. Bunton looks for these "dynamic problem solvers" and brings them into workflow design. "I'm yanking them off the tickets," he says. "They'll build their own bots and workflows that can handle a specific problem."

Day-to-day support work is now centered on human-in-the-loop tasks that are both repetitive and cognitively demanding. Agents are reviewing AI conversations, scoring outputs, and handling edge cases, while also beginning to build their own bots and workflows to solve recurring issues. Without the buffer of simple tickets, the risk of burnout increases. Bunton responds by taking ownership of the backlog and introducing a points-based system that turns abstract performance targets into clear, manageable actions.

  • TikTok for tasks: The idea originated with Bunton's team. He admits he was initially skeptical when younger agents pushed for a more engaging way to structure the work, but their feedback reshaped his approach. The result is a system designed to match tasks to agents in real time, helping them stay engaged while maintaining performance. "We’ve developed a performance framework for the TikTok generation," he says. "It’s about trying to make the most of that person who’s sitting there and put the right thing in front of them at the right time."

  • Dopamine on demand: In Bunton's model, agents earn points for specific actions. A reply might be worth 10 points, a resolved issue 10 points, a positive CSAT 50 points. At the end of the month, total points are divided by hours worked to get a points-per-hour score, which ties into bonuses. Just as importantly, he takes responsibility for structural stressors like backlogs and response times. "By allowing them to really focus on what is right in front of them, they can focus on giving good customer service to someone. I’ll worry about the rest."
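The scoring mechanism described above reduces to a simple calculation. The sketch below uses the point values quoted in the article; the function names, data shapes, and example numbers are illustrative assumptions, not Lawhive's actual implementation:

```python
# Hypothetical sketch of a points-per-hour score, using the point
# values quoted in the article (everything else is an assumption).
POINT_VALUES = {
    "reply": 10,          # each reply sent
    "resolution": 10,     # each issue resolved
    "positive_csat": 50,  # each positive CSAT rating
}

def points_per_hour(actions: dict[str, int], hours_worked: float) -> float:
    """Total points earned in the month divided by hours worked."""
    total = sum(POINT_VALUES[action] * count for action, count in actions.items())
    return total / hours_worked

# Example month: 120 replies, 40 resolutions, 15 positive CSATs over 160 hours.
score = points_per_hour({"reply": 120, "resolution": 40, "positive_csat": 15}, 160)
# (120*10 + 40*10 + 15*50) / 160 = 2350 / 160 = 14.6875 points per hour
```

Because the score normalizes by hours worked, part-time and full-time agents can be compared on the same scale, which is what lets the number feed into bonuses.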

The bigger challenge ahead isn’t the technology itself, but how organizations align people, processes, and data to interpret signals consistently. Early assumptions that AI would eliminate support roles have already been tested. When ChatGPT launched, some of Bunton’s agents expected their jobs to disappear within months. "Four years later, we’re still here," he says. "Humans are still taking tickets and that has not changed." What has changed is the nature of the work, mirroring shifts in fields like software engineering, where smaller teams are using AI to increase output and take on more complex tasks.

  • Misleading metrics: Measurement is emerging as a key hurdle for teams adopting AI. Most early reporting centers on efficiency metrics like deflection and completion rates, but those don’t always capture whether the customer’s issue was actually resolved. This disconnect makes it harder to assess real outcomes, and Bunton cautions that teams may be overestimating success. "We see a conversation complete and assume it was a success," he observes, "but we don’t actually know that. The customer hasn’t actually told us that."

  • The follow-through factor: The challenge often comes down to what teams choose to measure. Instead of focusing only on the interaction, Bunton looks at what happens after and whether customers move forward or remain stuck. He tracks behaviors like purchases, renewals, and repeat contacts to determine if the experience actually solved the problem. That shift helps connect support efforts to real outcomes. "You can apply the pre-AI numbers and say, 'Okay, we would have normally expected a certain share of these customers to end up in a behavior that looks like this.'"
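The baseline comparison Bunton describes can be sketched as a rate check: measure how many customers actually moved forward after an AI-handled conversation, and compare that against what the pre-AI numbers would have predicted. This is an illustrative sketch, not Lawhive's system; the function name, inputs, and figures are assumptions:

```python
# Illustrative sketch (not Lawhive's actual tooling) of comparing
# post-interaction follow-through against a pre-AI baseline rate.
def follow_through_delta(followed_through: int, total_customers: int,
                         pre_ai_baseline_rate: float) -> float:
    """Observed follow-through rate (purchases, renewals, etc.) minus
    the rate expected from pre-AI history. A non-negative delta suggests
    the AI-handled experience is at least matching the old baseline."""
    observed_rate = followed_through / total_customers
    return observed_rate - pre_ai_baseline_rate

# Example: 60 of 200 customers renewed after a bot conversation,
# against a historical renewal rate of 25%.
delta = follow_through_delta(60, 200, 0.25)
# 0.30 observed - 0.25 expected = +0.05
```

The point of the delta is that it grounds "completion" in behavior: a conversation only counts as resolved if customers go on to act the way resolved customers historically did.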

For Bunton, the case for human involvement is less about capacity and more about connection. AI can handle the straightforward work, but it lacks the context and diplomacy needed for more sensitive interactions. He recalls a legal AI system his team built that successfully flagged overdue client responses only for the alerts to be ignored. Resolving that kind of breakdown requires human judgment, ownership, and the ability to navigate nuance.

Bunton treats a closed bot conversation as a starting point, layering in behavioral signals to understand whether the experience actually worked. He’s cautious about relying too heavily on surveys, noting that customer sentiment often doesn’t match behavior. "We’ve never had more tools available to us. I’ve also never trusted them less," he concludes. "It’s hard making sense of all of the data we have now to understand how customers actually feel about something. I think it’s a massive open market that has not been nailed yet."