Tech news focuses on two things: AI and security. And AI security – that is, how to make your AI tools and apps secure. It focuses less on how to protect us from AI scams and fake videos.
Seeing is not believing
People used to say “Seeing is believing”. That’s not true anymore.
Last year, scammers tricked an employee at a global firm into sending $25 million to a fraudulent account. The employee thought he was attending a video call with top executives. But it turned out the other people on the call were deepfake recreations.
Most of us don’t have access to that kind of money, so we might not worry about such a sophisticated scam. But we are not safe.
SABRIC (the South African Banking Risk Information Centre) has warned of an increase in AI scams in the banking sector. One person was tricked into believing they were trading on the JSE, and lost over R6 million.
Since the release of ChatGPT, phishing scams have increased by more than 4000%. Then there are deepfake videos, voice cloning (used in vishing, or voice phishing), fake banking apps, fake product endorsements, and more. We are in trouble.
Train your employees
Remember all the POPI user training? It wasn’t just about privacy. It included teaching employees how to recognise (and avoid) security threats, like phishing scams and weak passwords.
I read a lot about how South Africa must embrace AI. But I haven’t seen nearly as many warnings about its risks. I’m curious what SA companies are doing about this threat. Have you had training on how to identify AI scams?
AI-powered checking tools can only do so much. According to cybersecurity experts, the most important way to defend against AI scams is to train humans. We need awareness campaigns and education.
Check the teeth
In “Coding matters: The slopocalypse”, I complained about how difficult it is to know what content is AI-generated.
Lewis sent me a link to a video titled “Can We Teach our Moms to Spot Fake Ai Videos?”. This will not make you a super-AI-detector, but it’s a good start. Here is a summary of the tips:
- Check the upload date. If the video was uploaded before 2023, it will either be real or easy to identify as fake.
- Count the seconds in a video shot. Generating AI video is still expensive, so clips are usually kept short. If a take runs longer than 20 seconds without a cut, it’s probably real. (See the sketch after this list.)
- Check the text. I’ve seen this often in AI images: the text is often illegible or nonsensical. Of course, this is getting better all the time.
- Check the teeth. It turns out that, at this stage anyway, AI will generate unrealistic, inconsistent, or blurry teeth. Or maybe one big white dental blur where the teeth are supposed to be. That’s because of a lack of training data featuring teeth.
- Watch for continuity problems. For example, the same person wearing different clothing in different shots, or background changes that are inconsistent.
- Look for logic problems, like cars in the background driving in both directions in the same lane. It also helps to trace the lines in a frame: there may be too many legs, or the angles of a wall may bend in the wrong direction.
- Think critically. This is the most important test of all. Does this match what you know of the person or the process?
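For the programmers among us (this is Coding matters, after all), the shot-length tip is the easiest one to automate. Here is a minimal sketch using the open-source PySceneDetect library to measure the longest uninterrupted shot in a clip. The file name and the 20-second threshold are my own illustrative choices, and remember: a long take only makes a fake less likely, not impossible.

```python
# Sketch of the "count the seconds" tip, using PySceneDetect
# (pip install scenedetect[opencv]) to find cuts and measure shot lengths.
from scenedetect import detect, ContentDetector

def longest_shot_seconds(video_path: str) -> float:
    """Return the duration, in seconds, of the longest cut-free shot."""
    # detect() returns a list of (start, end) timecodes, one per shot.
    scenes = detect(video_path, ContentDetector())
    if not scenes:
        # No cuts found at all: the whole clip is one continuous take.
        return float("inf")
    return max(end.get_seconds() - start.get_seconds() for start, end in scenes)

# "suspect_clip.mp4" is a placeholder; point it at the video you're checking.
if longest_shot_seconds("suspect_clip.mp4") > 20:
    print("Longest shot exceeds 20s: less likely to be fully AI-generated.")
else:
    print("Only short shots: consistent with (but not proof of) AI generation.")
```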
One expert puts it simply: Verify before you trust.
It’s a moving target. As we become more critical and aware, AI will get better. Even the teeth are improving.
Do you have company training on this topic? I’d love to hear your views.