I remember the days when I could check the truth of something with a little research. It was useful when somebody at book club mentioned a silly theory they had read in a magazine, or when emails or social media posts made dubious claims. Snopes and a quick search would easily dispel these myths.
Last week I mentioned the MIT study suggesting that using AI chatbots reduces our brain activity. (See the blog post.)
But even if we keep our brains active, it’s hard to know which content is AI-generated. Sometimes it’s easy – like when images look weird in places. But as AI improves, spotting it gets harder. Even AI detection systems don’t always get it right.
The start of the slopocalypse
Wikipedia defines AI slop as “low-quality media, including writing and images … characterized by an inherent lack of effort, logic, or purpose”.
AI slop is the newest iteration of spam, engineered for maximum engagement. You’ve seen plenty of it flooding social media feeds and search results. It’s also very popular: apparently TikTok’s newest AI slop trend is videos of food eating itself. I don’t understand that.
Toxic slop
There are many concerns about AI. These include job losses, increased bias, fake news, copyright violations, AI hallucinations and model collapse.
I watched a Last Week Tonight video by John Oliver about AI slop. (Note that it’s about 30 minutes long and contains bad language.) This video mentioned some other consequences that I had not thought about.
When disasters happen – like floods and wildfires – emergency teams need to find people fast. One of the ways they get information is through social media, and AI tools can help them prioritise their efforts. But AI-generated videos also create noise and confusion, which hampers rescue efforts. Imagine a rescue team being sent to the wrong site because of AI slop!
The liar’s dividend
There’s another problem with disinformation and deepfakes. Because people know that something might be fake, they stop believing things that are real.
Robert Chesney and Danielle Citron coined the term “the liar’s dividend” in 2019. This is when a liar claims that real content is fake, and people believe it.
Falsely claiming that stories are fake news can help politicians maintain support after a scandal – it works better than remaining silent or apologizing. One study even suggests that such claims can increase a politician’s support.
An article about the AI-generated image of Trump as pope described it like this:
“In a democracy, leaders answer to the people. In the Catholic Church, the pope answers to God. In this new regime, AI answers to no one.”
We live in interesting – and terrifying – times. I’d love to hear your views.