Coding matters: Check the teeth

AI-generated image of a robot stealing money from a human.

Tech news focuses on two things: AI and security. And on AI security – that is, how to make your AI tools and apps secure. It focuses far less on how to protect us from AI scams and fake videos.

Seeing is not believing

People used to say “Seeing is believing”. That’s not true anymore.

Last year, scammers tricked an employee at a global firm into sending $25 million to a fraudulent account. The employee thought he was attending a video call with top executives. But it turned out the other people on the call were deepfake recreations.

Most of us don’t have access to that kind of money, so we might not worry about such a sophisticated scam. But we are not safe.

SABRIC (the South African Banking Risk Information Centre) has warned of an increase in AI scams in the banking sector. One person was tricked into believing they were trading on the JSE, and lost over R6 million.

Since the release of ChatGPT, phishing scams have increased by more than 4000%. Then there are deepfake videos, voice cloning (aka vishing), fake banking apps, fake product endorsements, and more. We are in trouble.

Train your employees

Remember all the POPI user training? It wasn’t just about privacy. It included teaching employees how to recognise (and avoid) security threats, like phishing scams and poor passwords.

I read a lot about how South Africa must embrace AI. But I haven’t seen as many warnings about the risks. I’m curious as to what SA companies are doing about this threat. Have you had training on how to identify AI scams?

AI-powered checking tools can only do so much. According to cybersecurity experts, the most important way to defend against AI scams is to train humans. We need awareness campaigns and education.

Check the teeth

In “Coding matters: The slopocalypse”, I complained about how difficult it is to know what content is AI-generated.

Lewis sent me a link to a video titled “Can We Teach our Moms to Spot Fake Ai Videos?”. This will not make you a super-AI-detector, but it’s a good start. Here is a summary of the tips:

  1. Check the upload date. If the video was uploaded before 2023, it will either be real, or easy to identify as fake.

  2. Count the seconds in a video shot. Generating AI video is still expensive, so shots are usually limited in length. If a take runs longer than 20 seconds without a cut, it’s probably real.

  3. Check the text. I’ve seen this often in AI images: the text is often illegible or nonsensical. Of course, this is getting better all the time.

  4. Check the teeth. It turns out that, at this stage anyway, AI tends to generate unrealistic, inconsistent or blurry teeth. Or maybe one big white dental blur where the teeth are supposed to be. That’s because of the lack of tooth training data.

  5. Watch for continuity problems. For example, the same person wearing different clothing in different shots, or backgrounds that change inconsistently between shots.

  6. Look for logic problems. Like when cars in the background drive in both directions in the same lane. Part of this is to trace the lines: there may be too many legs, or the angles of a wall may bend in the wrong direction.

  7. Think critically. This is the most important test of all. Does this match what you know of the person or the process?
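Since this is a coding newsletter, here is tip 2 as a toy sketch. Everything in it – the function name, the brightness values, the 40-point threshold – is invented for illustration; it is not a real deepfake detector. The idea is simply that a hard cut shows up as a big jump between consecutive frames, so we can measure the longest run without one. (Real tools, like the PySceneDetect library, do this properly on actual video.)

```python
def longest_shot_seconds(frame_brightness, fps=30, cut_threshold=40):
    """Return the length in seconds of the longest run of frames
    without a suspected cut (a brightness jump >= cut_threshold)."""
    longest = current = 1
    for prev, curr in zip(frame_brightness, frame_brightness[1:]):
        if abs(curr - prev) >= cut_threshold:
            current = 1          # suspected cut: a new shot starts here
        else:
            current += 1
        longest = max(longest, current)
    return longest / fps

# 90 dim frames, a hard cut, then 600 bright frames (20 s at 30 fps)
frames = [50] * 90 + [200] * 600
print(longest_shot_seconds(frames))  # 20.0 -- long enough to look real
```

A 20-second result passes the rule of thumb from the video; a clip where every “shot” is a few seconds long deserves more suspicion.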

One expert puts it simply: Verify before you trust.

It’s a moving target. As we become more critical and aware, AI will get better. Even the teeth are improving.

Do you have company training on this topic? I’d love to hear your views.
