Coding matters: The slopocalypse


I remember the days when I could check the truth of something with a little research. It was useful when somebody at book club mentioned a silly theory they had read in a magazine, or when emails or social media posts made dubious claims. Snopes and a quick search would easily dispel these myths.

Last week I mentioned the MIT study showing how using AI chatbots reduces our brain activity. (See the blog post.)

But even if we keep our brains active, it is difficult to know what content is AI-generated. Sometimes it’s easy – like when images look weird in places. But as AI gets better, this becomes more difficult. Even AI detection systems don’t always get it right.

The start of the slopocalypse

Wikipedia defines AI slop as “low-quality media, including writing and images … characterized by an inherent lack of effort, logic, or purpose”.

AI slop is the newest iteration of spam, engineered for maximum engagement. You've seen plenty of it flooding social media feeds and search results. It's also very popular. Apparently TikTok's newest AI slop trend is videos of food eating itself. I don't understand that.

Toxic slop

There are many concerns about AI. These include job losses, increased bias, fake news, copyright violations, AI hallucinations and model collapse.

I watched a Last Week Tonight video by John Oliver about AI slop. (Note that it’s about 30 minutes long and contains bad language.) This video mentioned some other consequences that I had not thought about.

When disasters happen – like floods and wildfires – emergency teams need to find people fast. One of the ways they get information is through social media. AI tools can help teams to prioritise their efforts. But AI-generated videos also create noise and confusion, which hampers rescue efforts. Imagine if a rescue team goes to the wrong site because of AI slop!

The liar’s dividend

There’s another problem with disinformation and deep fakes. Because people know that something might be fake, they stop believing in things that are real.

Robert Chesney and Danielle Citron coined the term “the liar’s dividend” in 2019. This is when the liar claims that real content is fake, and people believe it.

Falsely claiming that a damaging story is fake news can help a politician survive a scandal – it works better than staying silent or apologising. One study suggests that such claims can actually increase the politician's support.

An article about the AI-generated image of Trump as pope described it like this:

“In a democracy, leaders answer to the people. In the Catholic Church, the pope answers to God. In this new regime, AI answers to no one.”

We live in interesting – and terrifying – times. I'd love to hear your views.

If you enjoyed this, subscribe to our weekly newsletter
