I was alone in a doctor's consulting room yesterday, waiting for my appointment. A chatty member of the staff came to show me a photo on her cell phone. She was wondering if the photo was real, or manufactured.
It was a photo of the monkey orchid: a plant whose flowers look like monkeys' faces. The plant exists, so the photo was probably real. (I've included a photo in this blog post so you can see what it looks like.)
This short conversation highlights the problem. When we want information, our first response is to search the internet. But we can't always trust what we find.
Infoxication and infobesity
Information overload happens when too much information makes it difficult to understand an issue and make decisions about it.
"Infoxication" and "infobesity" are other names for information overload. I like the words, because they convey how negative this can be. We should use "infoxication" to remind us that wrong information can poison our minds. And with AI-generated content, this will only get worse.
MIT Technology Review has an article about how junk websites filled with AI-generated text attract paying advertisers. Companies don't want to advertise on those sites, but the advertising algorithms place their ads there anyway, and lots of money is wasted. The article describes this as the "arrival of a glitchy, spammy internet".
Predictions of AI harm
Metaculus is an online forecasting platform. I don't know exactly how they evaluate forecasts, but there is an article on the statistics behind the platform.
Some of its forecasts for AI are quite scary:
- 90% of forecasters expect that a successful deepfake attempt will cause real damage and make front-page news before the end of 2023.
- 84% of forecasters expect an AI malfunction to cause at least 100 deaths and/or $1 billion in damage before 2032.
Those are predictions: they may or may not come true. But what is true is that we have to get better at separating good information from bad. Instead of courses on AI, we need courses on critical thinking and good research skills.
What do you think? Please share your comments.