A year ago, we were confused about the future of AI. Was it a bubble about to burst? Would it take over the world? Guess what? We still don’t know. We’re even more confused than before.
A few weeks ago I read two very good, very different articles on AI. Both were written by developers who are deeply involved in the AI industry.
AI is coming for me
Matt Shumer’s article Something Big is Happening has been viewed over 60 million times.
It is a calm, matter‑of‑fact warning that AI will be a general substitute for cognitive work in all industries.
You can read the article for yourself, but here is an extract:
“The experience that tech workers have had over the past year, of watching AI go from “helpful tool” to “does my job better than I do”, is the experience everyone else is about to have.
…
I know the next two to five years are going to be disorienting in ways most people aren’t prepared for. This is already happening in my world. It’s coming to ours.”
It isn’t meant to be a doom-and-gloom article, but it won’t cheer you up.
AI is exhausting me
The second article is by Siddhant Kha: AI fatigue is real and nobody talks about it.
Kha describes the day-to-day reality of working with AI, and explains what causes the fatigue.
It’s worth reading the article (and sharing it with your boss). Here are a few snippets:
“When each task takes less time, you don’t do fewer tasks. You do more tasks. Your capacity appears to expand, so the work expands to fill it. And then some.
…
I might touch six different problems in a day. Each one “only takes an hour with AI”. But context-switching between six problems is brutally expensive for the human brain. The AI doesn’t get tired between problems. I do.
…
AI reduces the cost of production but increases the cost of coordination, review, and decision-making. And those costs fall entirely on the human.”
Using AI, but still working harder? That's the explanation.
AI is driving me nuts
Recently I was impressed with the results from AI. I had provided long, detailed prompts over several days. It asked probing, relevant questions. It provided useful, insightful answers. I even understood why people talk to AI like it’s human.
Then I uploaded a CSV file and asked it to select records that matched the criteria we had agreed on. (AI can’t really agree, but you know what I mean.) It wasn’t a big file. And the problems started.
It invented fake records. I tried again, with extra, very specific instructions about reading the file. I asked for line numbers as an extra check. Sometimes it got the answer right. Sometimes it hallucinated. Every time I had to double-check.
It apologised. Very politely. It apologised again. It suggested prompts to prevent the problem. If it had been human, it would have been in tears. Eventually I did the work myself.
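Doing that work myself meant writing the filter as code, where it belongs. Here is a minimal sketch in Python, with made-up data and a hypothetical matching rule standing in for the criteria from my chat session:

```python
import csv
import io

# Hypothetical CSV data -- a stand-in for the real file.
data = """id,name,status,amount
1,Alice,active,120
2,Bob,inactive,80
3,Carol,active,45
4,Dave,active,300
"""

# Hypothetical criterion: active records with amount over 100.
def matches(row):
    return row["status"] == "active" and int(row["amount"]) > 100

reader = csv.DictReader(io.StringIO(data))
selected = [row for row in reader if matches(row)]

for row in selected:
    print(row["id"], row["name"])
```

A deterministic filter can't hallucinate: every row it returns exists in the input, and it returns the same rows every time. No double-checking, no apologies.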
If a small CSV was this painful, imagine checking 500 lines of generated code. That doesn’t need a prompt. That needs a trained programmer. A very well-trained programmer.
I’d love to hear about your experiences. Please share them.