“Oh, what a tangled web we weave
When first we practise to deceive!”
Sir Walter Scott wrote those well-known lines in 1808. If we lie, we have to remember the details of our lies. (Unless we deny all, like politicians.) He could not have predicted that we’d need to worry about a computer lying to us.
Hail the AI receptionist
I was at a short workshop this week. The speaker claimed that, within 6 months, people will prefer AI receptionists to human receptionists. He, of course, sells AI receptionist software.
Before the workshop (which turned into a glorified sales pitch), the attendees were all phoned by an AI agent. No-one identified the caller as AI. I thought it was a human with a script. I did get irritated and end the call early, but for other reasons.
My concern was not realism. We know AI can sound like a human. My concern was how real people felt when told afterwards that the caller was AI. Perhaps the target market was impressed. I felt deceived.
Built to believe
Let’s be honest: we all get tricked by AI hallucinations.
AI speaks with confidence and flair. Even when it’s wrong, it sounds right. Why do we believe it?
- Authority bias: AI never says “Um, I’m not sure about that”. It delivers answers with a voice of grammatical correctness, authority and reason.
- Confirmation bias: AI agrees with us. It’s the ultimate yes-person (yes-thing? yes-software? yes-widget?). We love to hear that we are right.
- Anthropomorphism: We project human traits onto things and animals and AI. We think it “knows” or “understands” or “feels”. (Remember my post on marrying your AI companion?)
And then there’s the flattery. Copilot complimented me yesterday on my “editorial instincts and sharp eye for symbolic storytelling”. I preened a little at that unsolicited comment. (You devious piece of software! I know you just want me to stop using ChatGPT!)
Deception by design
We can reduce hallucinations, but not eliminate them. They are the result of how these models work:
- Training data is flawed and biased. Humans are flawed and biased. AI trained on humans is … flawed and biased.
- AI doesn’t think. It predicts the next word. It’s like autocomplete on radioactive steroids (see the toy sketch after this list).
- Most models are designed to be helpful, polite, and engaging – even if that means inventing facts.
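If you’re curious what “predicting the next word” actually means, here is a toy sketch in Python. It is my own illustration, nothing like a real model: it just counts which word follows which in a tiny corpus and always picks the most common follower.

```python
# A toy "autocomplete": count which word follows which in a tiny corpus,
# then always emit the most frequent follower. Real models use neural
# networks trained on vast corpora, but the principle is the same:
# predict a *plausible* next word, not a *true* one.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# For each word, tally the words that come immediately after it.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def autocomplete(word, steps=5):
    """Greedily extend `word` with its most common follower at each step."""
    out = [word]
    for _ in range(steps):
        if word not in followers:
            break  # dead end: this word was never followed by anything
        word = followers[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(autocomplete("the"))  # prints: the cat sat on the cat
```

Run it and you get “the cat sat on the cat”: fluent, confident, and wrong. Nothing in the machinery checks whether the cat ever sat on anything. Scale that idea up by a few billion parameters and you have a hallucination.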
Back to the receptionist
An AI receptionist works 24/7, never takes tea breaks, never complains. It is always polite to customers (unless you use Grok, in which case politeness is not guaranteed).
But it will also hallucinate, politely and convincingly. Imagine the following:
- It transfers you to the Department of Load Shedding Reparations, then ends the call.
- It offers to connect you to a human, but returns speaking with an Afrikaans accent and a new name.
- It confirms your appointment with a specialist, then cancels it. You only find out after fighting traffic for two hours to get there on time.
- It invents SARS penalties. You panic. It apologizes and invents a SARS refund. You celebrate. But your bank account stays empty.
And when you finally reach a real person with real authority? Will they take responsibility for the AI error? You’ve heard the old excuse: “The computer made a mistake”. Now it will be upgraded to: “The AI misinterpreted your tone of voice.” Somehow it’s your fault for using that tone.
The tangled web
One vendor of AI software claims that 72% of callers can’t tell AI from humans.
Big tech already twists social media algorithms to keep us scrolling. What’s next? Imagine AI voices optimized to influence our decisions. How will we trust phone conversations? Will we hear a new disclaimer before every conversation?
I want to know when I am talking to an AI agent. It’s one thing to be misled by AI. It’s another to not even know it’s AI.
The light at the end of the tunnel
The EU AI Act requires that users be informed when they are interacting with AI. Although this is now law, its transparency provisions are still being phased in.
Other countries are still in the early stages of developing laws to govern AI. But don’t expect much from the US. Nearly a decade after the GDPR was passed, data privacy in the US is still a patchwork of state-by-state rules.
I don’t know if the light at the end of the tunnel is hope, or an oncoming chatbot. We can’t fight AI. But we can insist that it wears a nametag and introduces itself properly.
Have you had any experience with an AI receptionist? I’d love to hear your comments.