AI Won't Replace Your Support Agents. But It Will Expose Everything Else That's Broken.
Forrester says 2026 is the year AI "gets real" for customer service. After deploying AI agents across dozens of companies, here's what that actually looks like from the inside.
There's a version of the AI-in-customer-support story that's very clean. It goes like this: You plug in an AI agent. It reads your knowledge base. It starts resolving tickets. Your support costs drop 40%. Your CSAT goes up. Your team focuses on "high-value interactions." Everyone wins.
I've spent the last four years building IrisAgent, an AI platform that automates customer support across chat, email, and voice. We've deployed AI agents for companies ranging from mid-market SaaS to large enterprises. And I can tell you: that clean story is missing about 90% of what actually happens.
Forrester recently published their 2026 predictions for customer service, and the headline caught my eye: "AI Gets Real For Customer Service — But It's Not Glamorous Work." Their thesis is that 2026 won't be a year of AI-powered transformation. It will be a year of "gritty, foundational work — the kind that rarely makes headlines but is essential to realizing AI's long-term promise."
I agree with Forrester. But I'd go further: this gritty work isn't just a phase companies need to push through. It is the product. And the companies that understand this will be the ones that actually see results from AI in support.
The Gap Between the Demo and Production
Every AI support tool demos beautifully. You point it at a clean help center, ask it a question, and it gives a polished answer. The audience nods. The deal moves forward.
Then you deploy it into a real support environment, and you discover things like:
Your knowledge base is a mess. Articles contradict each other. Some haven't been updated since 2022. Critical workflows are documented in a Google Doc that three people know about. The AI will happily surface all of this — confidently and at scale.
Your integrations have edge cases nobody mapped. A customer's Freshchat auto-assigns conversations to agents the moment they come in, which causes the AI to stop responding before it's even had a chance to help. That isn't an AI problem. It's a workflow configuration problem that nobody noticed because humans worked around it intuitively.
Your ticket taxonomy doesn't reflect reality. You have 15 ticket categories, but 40% of incoming issues don't cleanly fit any of them. Your agents have been quietly making judgment calls for years. Now the AI needs explicit rules, and suddenly everyone realizes the rules were never written down.
None of this is a failure of AI. It's AI making visible what was already broken — and had been working only because humans are remarkably good at compensating for bad systems.
What Forrester Gets Right — and What They're Missing
Forrester predicts that service quality will actually dip in 2026 as companies wrestle with AI deployment complexity. They say one in four brands will see a modest 10% increase in successful self-service interactions. And they expect 30% of enterprises to create parallel AI functions that mirror human service roles — managers to "onboard" AI agents, teams to optimize performance, specialists to unblock AI when it stalls.
The first two predictions match what I see in the field. The dip in quality is real — it happens during the transition period when AI is handling some interactions but the handoff workflows aren't yet smooth. And a 10% improvement in self-service is honestly a realistic target for companies doing this seriously, not the 70% deflection rates that vendor marketing promises.
But the third prediction — building large internal AI management teams — is where I think Forrester is overcomplicating things. Most mid-market companies don't have the budget or headcount to build a parallel AI operations team. What they need is tooling that makes AI agents manageable without a dedicated staff. That's the vendor's job, not the customer's.
The Real Blockers Aren't What You Think
When a customer's AI deployment stalls, the reason is almost never "the AI isn't smart enough." The models are good. The natural language understanding is good. The generation quality is good.
The real blockers are:
Data quality. If your historical tickets are poorly categorized, your training data is noisy. If your knowledge base has gaps, the AI has gaps. Garbage in, garbage out applies to LLMs just as much as it applied to every technology before them.
Integration depth. Deploying a chatbot is easy. Deploying an AI agent that can actually do things — create tickets in Freshdesk with the right required fields, route escalations to the correct group, pull order status from your backend — requires deep integration work. And every customer's setup is different. Every Zendesk instance is a unique snowflake. (There's a rough sketch of what that wiring looks like at the end of this list.)
Trust and change management. Support leaders are measured on CSAT and resolution time. Asking them to trust an AI with customer interactions — when a bad AI response can trigger a churned account — is a big ask. The companies that succeed are the ones that start small, measure obsessively, and build confidence gradually.
Cross-functional alignment. When AI reveals that 30% of support tickets are actually product bugs or UX issues, that's not a support problem anymore. It's a product problem. But most organizations don't have the muscle to turn support data into product action. The AI surfaces the insight. The org structure buries it.
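To make the integration point concrete, here's a minimal sketch of what "create a ticket with the right required fields and route it to the right group" can look like against Freshdesk's v2 tickets API. The subdomain, API key, group ID, and custom field name are placeholders, and which fields your instance actually requires will differ; that is precisely where the per-customer work lives.

```python
import requests

# Placeholders -- every Freshdesk instance has its own subdomain, API key,
# group IDs, and required custom fields.
FRESHDESK_DOMAIN = "yourcompany"      # hypothetical subdomain
API_KEY = "your_api_key"              # Freshdesk API key (used as the Basic Auth username)
ESCALATIONS_GROUP_ID = 12345          # hypothetical group for human escalations


def escalate_to_freshdesk(customer_email: str, subject: str, transcript: str) -> dict:
    """Create a ticket from an AI conversation and route it to a human queue."""
    payload = {
        "email": customer_email,
        "subject": subject,
        "description": transcript,        # the AI conversation as the ticket body
        "status": 2,                       # 2 = Open
        "priority": 2,                     # 2 = Medium
        "group_id": ESCALATIONS_GROUP_ID,  # routing: send to the escalation group
        "custom_fields": {
            "cf_ai_handled": True,         # hypothetical custom field flagging AI involvement
        },
    }
    response = requests.post(
        f"https://{FRESHDESK_DOMAIN}.freshdesk.com/api/v2/tickets",
        json=payload,
        auth=(API_KEY, "X"),  # Freshdesk accepts the API key as the Basic Auth username
        timeout=10,
    )
    response.raise_for_status()  # a 400 here usually means a required field is missing
    return response.json()
```

That's a handful of lines for one platform, one workflow, one set of required fields. Multiply it across every system the agent has to touch and you see why integration depth, not model quality, is where deployments stall.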
AI Won't Replace Agents. It Will Change What They Do.
Here's the contrarian take I want to leave you with: the "AI replaces agents" narrative fundamentally misunderstands what makes customer support hard.
The easy tickets — password resets, order status checks, how-do-I-do-X questions — yes, AI handles these well today. We see automation rates above 50% for these categories across our customer base.
But the hard tickets — the ones involving frustrated customers, ambiguous situations, multi-step investigations, or issues that cross departmental boundaries — those aren't going to be fully automated anytime soon. And they shouldn't be. These are the interactions where human judgment, empathy, and creativity matter most.
What AI does is free your agents to focus on these complex cases by eliminating the repetitive work that burns them out. It gives agents real-time context and suggested responses so they can work faster. It identifies patterns across thousands of tickets that no human team could spot manually.
The end state isn't fewer agents. It's agents who are better equipped, less burned out, and working on problems that actually require human intelligence.
What "Getting Real" Actually Looks Like
If you're a support leader planning your AI strategy for 2026, here's what I'd focus on:
Audit your knowledge base before you buy any AI tool. If your docs are outdated, contradictory, or incomplete, fix that first. It's the single highest-ROI activity you can do, and it benefits your human agents too.
Start with a narrow, measurable use case. Don't try to automate everything at once. Pick one channel (chat), one category of issue (billing questions), and one metric (resolution rate). Prove it works. Then expand.
Invest in integration, not just intelligence. The difference between an AI demo and an AI deployment is integration depth. Make sure your vendor can connect to your actual systems — your ticketing platform, your CRM, your product data — not just answer questions from a knowledge base.
Measure what the AI gets wrong, not just what it gets right. Deflection rate is a vanity metric if the AI is confidently giving wrong answers that customers don't bother to push back on. Track customer satisfaction on AI-handled interactions separately. Read the transcripts. Build feedback loops. (A small example of what that measurement can look like follows this list.)
Don't reorganize your team around AI on day one. You don't need a "Head of AI Agents." You need your existing support ops team to understand how to monitor and tune the AI, the same way they monitor and tune your routing rules and macros today.
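On the measurement point above, here's a small sketch. It assumes a hypothetical export of closed conversations with a column saying who handled each one and a 1-to-5 CSAT score; the file and column names are illustrative, not from any particular platform. The idea is simply to split satisfaction by AI versus human handling and surface the categories where the AI quietly underperforms.

```python
import pandas as pd

# Hypothetical export: one row per closed conversation.
# Illustrative columns: ticket_id, handled_by ("ai" or "human"), category, csat (1-5)
tickets = pd.read_csv("closed_tickets.csv")

# CSAT split by who handled the conversation; a headline deflection rate hides this.
print(tickets.groupby("handled_by")["csat"].agg(["mean", "count"]))

# Categories where AI-handled CSAT trails human-handled CSAT by more than half a point.
by_category = tickets.pivot_table(
    index="category", columns="handled_by", values="csat", aggfunc="mean"
)
gaps = (by_category["human"] - by_category["ai"]).sort_values(ascending=False)
print(gaps[gaps > 0.5])  # read these transcripts first
```

Nothing sophisticated, and that's the point: the hard part isn't the analysis, it's committing to read the transcripts the numbers point you to.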
The Unsexy Truth
Forrester is right that 2026 will be defined by foundational work. But I'd reframe it: the foundational work isn't a prerequisite to AI transformation. It is the transformation.
The companies that will win in AI-powered support aren't the ones with the most advanced models or the flashiest chatbot interfaces. They're the ones willing to do the unglamorous work: cleaning their data, fixing their integrations, training their teams, and building trust one interaction at a time.
That's not a very exciting pitch for a conference keynote. But it's the truth. And in an industry drowning in hype, the truth is the most contrarian thing you can say.
Palak Dalal Bhatia is the founder and CEO of IrisAgent, an AI-powered customer support platform that automates over 50% of support interactions across chat, email, and voice. Before founding IrisAgent, she was a product manager at Google. She holds an MBA from Harvard Business School.