69% of Your Customers Want to Know When They're Talking to AI (You're Not Telling Them)
December 2025 data reveals a massive transparency gap: customers demand AI disclosure, but most companies stay silent. Here's why that's killing trust.
Let's get right into it.
There's a massive trust crisis happening right now with AI customer service, and most companies are walking straight into it with their eyes closed.
Fresh data from December 2025 just dropped, and the numbers are brutal: 69% of customers believe brands should always reveal when AI is being used. But here's the kicker—only 22% of companies actually do it.
That's a 47-percentage-point gap between what customers want and what they're getting. And it's costing you.
The Trust Freefall Nobody's Talking About
Here's how the math works out: customer trust in businesses using AI ethically just crashed from 58% in 2023 to 42% in 2025. That's a 16-point drop in two years.
Think about that. We're deploying more AI than ever, and trust is moving in the opposite direction.
Why? Because 81% of customers already believe companies are hiding AI usage. They're not wrong. And when 37% say they'll lose confidence in you the moment they discover you've been hiding it—that's not a PR problem, that's a business problem.
Real Companies, Real Consequences
Let me give you three examples from the last few months that show exactly what happens when you play games with AI disclosure.
Commonwealth Bank of Australia deployed an AI voice bot in June 2025, claimed it reduced calls by 2,000 per week, then cut 45 customer service jobs. Plot twist: call volumes were actually rising. Staff were forced into overtime, managers had to answer phones, and the whole thing imploded. The bank had to rehire everyone in August with back pay and apologies. That's huge.
Air Canada let their chatbot tell a customer he could retroactively apply for bereavement fares. The chatbot was completely wrong. The customer sued. Air Canada's defense? "The chatbot is a separate legal entity." The court called this a "remarkable submission" and ruled against them. You own what your AI says. Period.
A Subaru dealership created a fake employee named "Cameron Rowe"—full name, business title, email signature—for their AI bot. A customer texted "Cameron" a dozen times before discovering none of it was real. The story went viral. The customer felt deceived. The dealership became a case study in what not to do.
You get the idea.
What Customers Actually Want
Nearly 80% of customers say AI use should be revealed at the start of a customer service interaction. Not halfway through. Not when they ask. Immediately.
And here's the part that should make every founder pause: 80% of customers would prefer human support even if the outcome and wait time are identical to AI.
This isn't about AI being slow or ineffective. It's about trust, empathy, and feeling like you're talking to someone who actually gets it.
The Transparency Playbook
So what's the move? Here's how it could work:
Number one: Disclose upfront. Every single time. "Hi, this is our AI assistant" should be the first thing customers hear or read. As of December 2025, this is becoming legally required anyway—Congress just introduced bipartisan legislation mandating disclosure at the start of all AI customer service interactions.
Number two: Make human escalation stupid easy. Don't hide the "talk to a human" button seven menus deep. When customers want a person, give them a person. The Slovakia micro-enterprise study from 2025 showed that easy human escalation was critical to maintaining customer satisfaction with AI support.
Number three: Be honest about what AI can and can't do. OPPO crushed this—they achieved an 83% chatbot resolution rate and saw a 57% boost in repurchase rates. How? By being transparent about capabilities and training their AI properly instead of just hoping it would figure things out.
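The three steps above can be sketched in code. This is a minimal, hypothetical illustration of the playbook, not any real vendor's API — the function names, the escalation keywords, and the tiny FAQ table are all assumptions made up for the example:

```python
# Hypothetical sketch of the transparency playbook:
# 1) disclose upfront, 2) easy human escalation, 3) honest about limits.

def start_session() -> str:
    # Step 1: the AI disclosure is the very first thing the customer sees.
    return ("Hi, this is our AI assistant. Ask me anything, "
            "or type 'agent' to reach a human.")

def lookup_answer(text: str):
    # Placeholder knowledge base; a real bot would query content
    # trained on the specific business.
    faq = {"hours": "We're open 9am-5pm weekdays."}
    for keyword, answer in faq.items():
        if keyword in text.lower():
            return answer
    return None

def handle_message(text: str) -> str:
    # Step 2: check for escalation intent FIRST — never bury the
    # "talk to a human" path behind other logic.
    if any(w in text.lower() for w in ("agent", "human", "person", "representative")):
        return "Connecting you to a human agent now."
    # Step 3: admit uncertainty and hand off instead of guessing.
    answer = lookup_answer(text)
    if answer is None:
        return "I'm not sure about that one. Let me route you to a human who can help."
    return answer
```

The ordering is the point: disclosure happens before any question is asked, and the escalation check runs before the bot even attempts an answer.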
Why This Matters for Your Business
The companies winning with AI customer service right now are the ones being honest about it. The ones losing are trying to sneak it past customers.
Welcome to 2025, baby. Your customers are already suspicious. They're already looking for signs you're hiding AI. And when they find out—and they will—37% of them say they'll lose confidence in your entire brand.
That's not a risk worth taking to save a few support dollars.
The Bottom Line
Transparency isn't just good ethics. It's good business.
Recent data from healthcare shows that 75% of consumers expect AI disclosure in medical communications. Finance and retail aren't far behind. The regulatory hammer is coming—multiple states already require disclosure, and federal legislation is moving fast.
You can lead this shift or get dragged into it. Companies that get ahead of transparency requirements will build trust while competitors are scrambling to retrofit honesty into systems designed to deceive.
Here's what we're doing at Julya: we're upfront about our AI phone answering capabilities from day one. We tell your customers they're talking to AI. We train our system on your specific business so it actually helps instead of frustrating people. And we make it easy to escalate to you when needed.
That's the cheat code. Be honest. Be helpful. Give customers a choice.
The 69% who want transparency? They're not going away. They're the majority. And they're about to become the law.
Time to lean into it.
Ready to Never Miss Another Call?
Start your free trial with Julya AI and turn every ring into revenue.