
One Meta Product Leader Warns That Plummeting Support Volume Is Often A Wall, Not A Win

Cresta News Desk
Published April 21, 2026

Mukta Dhanuka, a Product Lead for Responsible Monetization at Meta, argues that companies celebrating dramatic drops in support volume after AI deployment are often mistaking suppression for success, and that the fix is building systems for deep listening.


Key Points

  • Dramatic drops in AI-powered support volume are frequently misread as operational success, when they often signal that companies have made themselves harder to reach, suppressing legitimate complaints along with the noise.

  • Mukta Dhanuka, a Product Lead for Responsible Monetization at Meta, traces the root causes to metric engineering, technical illiteracy, fragmented process ownership, and leadership incentives that reward short-term efficiency over long-term customer health.

  • She recommends building systems for "deep listening" through simple complaint categorization, separated leadership incentives across sales, brand, and support, and executives willing to act as their own fake customer to test what the experience actually feels like.

If your support issues have gone down 90% in one week, one month, or six months, there is a high likelihood that you haven't solved all the problems. You've potentially just created a wall for people to reach you.

Mukta Dhanuka

Product Lead | Meta

When a company deploys an AI support system and ticket volume drops 90%, the leadership team usually celebrates. The cost of operations falls. The efficiency metrics improve. Bonuses follow. But the drop rarely means the company solved 90% of its problems. More often, it means 90% of customers gave up trying to reach a human.

Mukta Dhanuka is a Product Lead for Responsible Monetization at Meta, where she leads responsible monetization across markets, demographics, verticals, and the full ad stack. Previously, she built and scaled commerce platforms at Square (Block) and spent over 11 years at SAP leading platforms, marketplaces, and applications across 25 industries and thousands of global enterprises. Her experience spans B2B, B2C, and B2B2C at every stage of product and organizational maturity.

"If your support issues have gone down 90% in one week, one month, or six months, there is a high likelihood that you haven't solved all the problems," says Dhanuka. "You've potentially just created a wall for people to reach you." The problem, she explains, runs across companies of all sizes and geographies. The drivers vary, but they tend to cluster around a few recurring failures: technical illiteracy as teams rush to adopt fast-moving technology, metric engineering tied to bonuses and short-term contracts, and fragmented process ownership where no single person understands the full customer journey.

  • The incentive trap: The dynamic is sharper in companies run by hired executives versus founders. "Their short-term contractual gains are often dependent on short-term margins. They're not going to be around in ten years," Dhanuka says. When leadership sets targets to reduce cost of operations by 80 or 90 percent overnight without having fixed the underlying product or service issues, "they're just shoving them under the rug. It will show up in a different way."

  • Lagging signals: The consequences of suppressed feedback rarely appear immediately. Brand sentiment erodes slowly. Customer acquisition costs rise. Sales cycles lengthen. Lifetime value declines. "Most likely it will be a lagging indicator versus a leading indicator," Dhanuka explains. By the time boards see the numbers move, the damage has been compounding for months.

  • The leading indicator: There is one early warning sign worth watching. "If your support issues have gone down drastically but you haven't really done the work to improve your product or service, then you have your answer," she says. Dhanuka recalls building a cloud product at SAP, where a decline in customer messages prompted her VP to flag that users were likely disengaging entirely, not becoming happier. A check of the portal confirmed it: fewer sales orders, fewer service requests, fewer active users.
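The disengagement check Dhanuka describes from her SAP experience can be sketched as a simple rule: a steep drop in support volume is only good news if engagement held steady. The sketch below is illustrative; the metric names (`support_tickets`, `orders`, `active_users`) and the 50% drop threshold are assumptions for the example, not figures from the article.

```python
# Hypothetical early-warning check: flag a dramatic drop in support volume
# that coincides with falling engagement (the "users are disengaging, not
# getting happier" pattern). All field names and thresholds are illustrative.

def is_suppression_signal(prev, curr, drop_threshold=0.5):
    """Return True when support volume fell sharply AND engagement fell too.

    prev/curr are dicts of period metrics, e.g.
    {"support_tickets": 900, "orders": 1200, "active_users": 5000}
    """
    if prev["support_tickets"] == 0:
        return False
    ticket_drop = 1 - curr["support_tickets"] / prev["support_tickets"]
    if ticket_drop < drop_threshold:
        return False  # no dramatic drop to explain
    # If orders and active users also fell, the quiet is disengagement,
    # not satisfaction -- the portal check that confirmed it at SAP.
    engagement_fell = (curr["orders"] < prev["orders"]
                       and curr["active_users"] < prev["active_users"])
    return engagement_fell

prev = {"support_tickets": 900, "orders": 1200, "active_users": 5000}
curr = {"support_tickets": 80, "orders": 950, "active_users": 4100}
print(is_suppression_signal(prev, curr))  # True: ~91% fewer tickets, but engagement fell
```

A healthy deployment looks different under the same check: if tickets fall while orders and active users rise, the function returns False, because there is independent evidence that the product actually improved.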

The fix is not removing AI from customer support. It is redesigning the system to surface truth rather than hide it. Dhanuka calls this building for "deep listening," and she offers several concrete starting points.

  • Simplify categorization: Most support interfaces present either 20 menu items that nobody reads or no categorization at all, leaving routing to a black box. Dhanuka recommends fewer than five clear categories: qualitative feedback, refund request, compensation claim, and legal concern. "If you see a sudden drop in a certain category of complaints but you haven't done the work to improve in that category, then you have your answer." The structure also helps manage financial liability, since refund exposure is capped while compensation claims can be routed directly to legal.

  • Separate the incentives: Sales, brand management, and post-sales service should not report to a single leader whose goal is operational efficiency. "If you put them all under one senior leader, you will not get the signals you need," Dhanuka warns. Independent reporting lines for returns data, brand sentiment, and sales performance create the objective feedback loops that catch problems early.

  • Be your own customer: The simplest diagnostic is also the most underused. "Pick up the phone and try your own customer service. Make some of your team be the fake customer," Dhanuka says. "If something does not feel right, it's probably not right." She also recommends segmenting customers by tenure and watching for complaint drops among long-term users, a signal that loyalty is being exhausted rather than earned.
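The intake scheme in the first recommendation above can be sketched in a few lines. The four category names come from the article; the routing targets, the refund cap amount, and the function shape are assumptions for illustration, not Dhanuka's actual design.

```python
# Minimal sketch of the "fewer than five clear categories" intake.
# Routing owners and the refund cap are hypothetical placeholders.

REFUND_CAP = 200.00  # illustrative per-ticket refund exposure cap

ROUTES = {
    "qualitative_feedback": "product_team",
    "refund_request": "support_queue",
    "compensation_claim": "legal_team",  # liability routed directly to legal
    "legal_concern": "legal_team",
}

def route_complaint(category, amount=0.0):
    """Route a complaint to an owner; cap refund exposure at REFUND_CAP."""
    if category not in ROUTES:
        raise ValueError(f"unknown category: {category}")
    if category == "refund_request":
        amount = min(amount, REFUND_CAP)
    return {"owner": ROUTES[category], "amount": amount}

print(route_complaint("refund_request", amount=350.0))
# -> {'owner': 'support_queue', 'amount': 200.0}
print(route_complaint("compensation_claim", amount=5000.0)["owner"])  # legal_team
```

Because every complaint lands in one of a handful of named buckets, per-category volume can be trended over time, which is what makes the "sudden drop in a category you haven't improved" signal observable in the first place.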

The broader risk extends beyond brand erosion. Dhanuka points to a growing wave of regulatory scrutiny and warns that liability lines between technology providers, implementers, and the companies deploying AI remain unclear. Class action exposure is real, and regulations will inevitably catch up.

"This is a good time for boards and C-level executives to think about what kind of targets they are setting," she says. "Just because they are in an AI race does not mean creating conditions that will harm the company in the long term. Make it easy for the right people to reach you. Make it hard for the bad actors to rip you off. That's the right trade-off."