There are many concerns about AI. One is that we won’t know whether we’re dealing with a real person or an artificial intelligence.
I discovered a simple test last week.
I’m the caller you love to hate
I would hate to be a call centre operator. You need the patience of a saint to put up with silly questions and bad tempers.
Having said that, I admit that I am one of the customer types call centre operators don’t like: the know-it-all. I definitely don’t know it all. But I understand technical issues better than most of the operators seem to.
(My pet peeve for years was phoning to report a problem with the internet connection, and being told to reboot the router. Aaagh!)
A definitely human experience
I bought some linen online at store X. It arrived, I was happy with the colour and wanted to buy some more.
That should have been easy. But the filter functionality didn’t show the linen I’d bought, let alone anything else that matched. (Bad data entry?) Eventually I found it using a keyword search based on my previous order.
The grey colour I wanted was listed. But when I selected it, the photo showed black sheets. I assume the warehouse packs by item code and doesn’t know what the website shows. But I decided to make sure, so I contacted the WhatsApp line.
After I’d gone through some bot questions, I was put through to a real person. Her response was twofold:
- She claimed the system probably showed the wrong colour because the item might be out of stock. If that were true, the system would also be showing the wrong quantity in stock. (Bad logic. An error in the SQL query that retrieves the data based on my choices is far more likely; see the sketch after this list.)
- She advised me to go to a physical shop to make sure I got the right item. That doesn’t square with her claim that they probably didn’t have stock.
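
To show what I mean, here is a toy sketch of the kind of query bug that would produce exactly this symptom. Everything in it is invented (I obviously have no idea what store X’s database looks like): the product photo is joined on the wrong variant, so the grey colour returns the correct stock figure but the black photo.

```python
import sqlite3

# A toy version of the bug I suspect. All table and column names
# here are made up; I don't know store X's actual schema.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE variants (
    id INTEGER PRIMARY KEY,
    product_id INTEGER,
    colour TEXT,
    stock INTEGER
);
CREATE TABLE photos (variant_id INTEGER, url TEXT);

INSERT INTO variants VALUES (10, 1, 'black', 4), (11, 1, 'grey', 7);
INSERT INTO photos VALUES (10, 'photos/sheets-black.jpg'),
                          (11, 'photos/sheets-grey.jpg');
""")

# Buggy query: the photo is joined on the product's *first* variant
# instead of the variant the customer selected, so the grey colour
# shows the black photo -- while the stock figure is still correct.
buggy = """
SELECT v.colour, v.stock, p.url
FROM variants v
JOIN photos p ON p.variant_id = (
    SELECT MIN(id) FROM variants WHERE product_id = v.product_id
)
WHERE v.colour = ?;
"""
print(db.execute(buggy, ("grey",)).fetchone())
# -> ('grey', 7, 'photos/sheets-black.jpg')  right stock, wrong photo

# Fixed query: join the photo on the selected variant itself.
fixed = """
SELECT v.colour, v.stock, p.url
FROM variants v
JOIN photos p ON p.variant_id = v.id
WHERE v.colour = ?;
"""
print(db.execute(fixed, ("grey",)).fetchone())
# -> ('grey', 7, 'photos/sheets-grey.jpg')
```

The point of the sketch: one misplaced join condition explains both the wrong photo and the correct stock count, which the out-of-stock theory can’t.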
I was not impressed.
Definitely human
Out of curiosity, I tried the same scenario on ChatGPT. Despite the very limited information in my prompts, it gave me a more acceptable response.
I compared the two conversations, and found a few tests that tell you when a call centre agent is definitely human:
- The agent doesn’t read the message properly. I always set out my query in clear, simple language. Human agents don’t bother to read it, so they ask me for information I’ve already provided.
- The agent isn’t apologetic. Human agents have human levels of patience. Sometimes they apologise, but never with the fervour of ChatGPT, which expressed deep concern about my unsatisfactory customer experience.
- The agent doesn’t take the next step. Humans are lazy. The live agent came up with a nonsensical explanation for the website error and left it at that. ChatGPT volunteered to pass the information on to the web development team.
Every online chatbot I’ve used so far is awful. But if ChatGPT is the future of call centres, I might not complain.
I’d love to hear your comments on this.