
How AI is Revolutionizing Scams: Can We No Longer Trust Our Eyes or Ears?

Vigilance is called for.

Scams are bad enough. Throw AI into the mix, and things only get worse.
Image: an AI-generated depiction of a deepfake video call and voice mimicry. (Image: DALL-E 3)

Scammers utilize all the technology available to try to fool us into handing over our hard-earned cash.

AI is increasingly becoming a part of their toolset, with sometimes dramatic and heartbreaking results.

You probably already know about content written by AI. But audio? Video? They can be devastating when in the hands of individuals with bad intent.


TL;DR:

AI and scams

AI is amplifying scam tactics, making fraud more realistic using convincing voice impersonation, deepfake videos, and polished phishing messages. Scammers use these tools to exploit trust, impersonate loved ones or public figures, and deceive with high-tech precision. Protect yourself: verify identities, ignore unsolicited contacts, and always be wary of too-good-to-be-true offers.

AI + Scammer = Trouble

Here are four AI functions that scammers use.

  • Text-based AI. These tools can write new text for you, or summarize, clarify, and polish existing writing.
  • AI image generators. The image at the top of this article, for example, is AI-generated1. While it certainly looks AI-generated, other tools are even better at mimicking reality — and, of course, all the tools are improving every day.
  • AI video generators. These tools are concerning because they can create videos that appear to show someone doing something they never did.
  • AI audio generators. This is perhaps an under-appreciated technology. It’s easy to fool people into thinking a voice is real when it is not.

Scammers use each of these to make their attempts at fooling us more realistic and ultimately more successful.

AI Voice: no, it’s not your grandchild on the phone

AI can be used to convincingly mimic someone's voice. It's not a recording; the voice is entirely synthetic, yet it's very easy to be fooled.

The scammer gets a recording of someone’s voice — say a grandchild, though it could be anyone you know. This is often easy, as voice clips are common on social media platforms. It doesn’t take much of a clip — say less than a minute — to generate a “clone” of the individual’s voice. This clone can then “say” anything. In fact, some tools work quickly enough that a scammer can hold a passable real-time conversation using this cloned voice.

We’re certainly used to telephone call quality being less than perfect, so if it sounds a little off or scratchy, that isn’t enough to trigger any red flags.

The scam is simple. The scammer calls you. Using the voice of your grandchild or other known contact (who is not involved in any way), they paint a scenario where they’re in trouble and need cash right away. It’s almost always about money, though sometimes it can be about information (like a credit card number), and it’s always urgent. And they always try to keep you on the line so you can’t call anyone else to verify anything.

The solution is also simple. Hang up without warning and without apology, and call the grandchild (or whomever they claimed to be) directly using a phone number you know to be correct. Chances are they’ll have no idea what you’re talking about, which is a sure sign of an attempted scam.

AI Video: no, that celebrity isn’t promoting cryptocurrency

Just like audio, AI can be used to create convincing video footage of people doing things they didn’t do. Combined with AI-generated audio, they can appear to say things they never said.

One common scenario of late is the cryptocurrency scam, where a well-known celebrity appears to be interviewed, encouraging viewers to invest for an almost certain return. The celebrity isn’t involved, and the investment is almost certain to be completely lost.

Much like the grandchild audio scam above, scammers have also been known to generate artificial videos of individuals you may know and use that video to convince you to hand over money or personal information to solve some urgent problem.

Again, the solution is simple: don't believe the promotion without verifying it through other channels. If the video appears to be someone you know, call that person directly via channels you know are correct.

AI Text: no, that email does not mean your account is about to be closed

Spam has been a problem since the birth of email. But until now, many scams have been relatively easy to detect. Bad grammar, bad spelling, and phrases that indicate the author is not a native English2 speaker make it fairly easy to realize that what you’re looking at might not be legitimate.

AI can fix all that.

Here’s an example message I’ve selected from my spam folder:

Your mailbox has reached 90% of the maximum space allowed on the server.

You can continue to receive e-mails, but it’s recommended to free up space before reaching 100% of the maximum space allowed.
Once 100% of the occupied space , you will no longer receive messages on this account.
Also remember to empty your Trash too, to free up some space storage.

You can archive your old e-mails on your machine to conserve while freeing up space on the server.
You can also setup your e-mail account with POP protocol on a mail Software so that all e-mail are stored on your computer.

If you rely on e-mail administrator to manage your address, please contact him for more information.

Most native English speakers will notice a few oddities in the wording that don’t match what would be expected from this formal type of notice.

However, all a scammer needs to do is open up ChatGPT and ask it, “please correct the following for spelling, grammar, and professional terminology:”. The result:

Your mailbox has reached 90% of its maximum storage capacity on the server.

You can still receive emails, but it’s recommended to free up space before reaching 100% capacity. Once the mailbox is full, you will no longer receive messages on this account. Be sure to empty your Trash folder as well to free up additional storage space.

To conserve server space, you can archive old emails on your computer. You may also configure your email account to use the POP protocol with an email client, so that emails are downloaded and stored locally on your computer.

If your email is managed by an administrator, please contact them for further assistance.

This reads more professionally and is thus slightly more likely to fool someone into clicking the link the scammer has included.

AI phishing attacks use artificial intelligence to make phishing emails more convincing and personalized. Just because a message is well written (or at least not as poorly written as most spam), addresses you by name, and includes other personal information doesn't mean it's legitimate.

AI Chat: no, that’s not a job offer you’re getting

There has been an uptick in scam attempts via text messaging and other instant-messaging tools like WhatsApp and Facebook Messenger.

AI chatbots can hold a relatively coherent conversation with you. They gently guide you down a path that builds your trust to a point where the bot can ask you for assistance — typically money — all while having it seem quite legitimate and even altruistic on your part.

It’s all about social engineering and making the conversation seem both natural and personal to your situation. It’s easy to be fooled.

The solution: ignore requests from numbers and people you don’t recognize. Period.

AI Everything: no, she doesn’t love you

To me, the most heartbreaking scam is what’s called the romance scam. Using any or all of the technologies I’ve talked about so far, a scammer makes contact and over time attempts to develop a remote romantic relationship.

Scammers create well-written and persuasive text, beautiful images of beautiful people, and even videos that not only look realistic but feel like a completely natural part of the conversation. They use this to deepen the relationship, setting the hook before reeling you in.

At some point, they need money. Perhaps to visit you or perhaps for some other emergency, they need cash, and they need it urgently. When you send some, an unforeseen situation has them needing even more. If you hesitate? You must not love them.

It’s a trap. Individuals (particularly retirees, it seems) have lost thousands, if not hundreds of thousands, of dollars falling victim to these scams.

The solution: keep your romantic activities in person. If you do engage in a remote relationship with someone you’ve never met, never, ever send them money. Period.

AI protecting us from scammers

You’ll hear an increasing number of security and other products claiming to have added AI tools to improve their ability to protect you.

I have mixed feelings about using technology to protect us from these kinds of scams — or, put another way, using technology to protect us from technology or AI to protect us from AI.

Yes, use what you can and what’s available, and of course, keep it up to date. But don’t trust it 100% either. It’s not an excuse to let your guard down.

I liken it to email spam: it’s a horse race. As each side improves its technology, the lead changes back and forth. Scammers come up with some clever AI innovation, and security tools need time to detect it. Scammers notice, and over time tweak their technology to evade the new detection. It goes on and on.

Do this

At this point, I’m sure it all seems very bleak, but you can protect yourself with relatively simple steps.

  • Don’t answer unknown callers or messages. If it’s legitimate, they’ll find other ways of contacting you that include verifiable references to their identity. Typically, that’s physical mail.
  • Have a code word3. Particularly since my voice and likeness are out there, this is something I’ve established: a code word I can say or be asked for. If “I” can’t respond with the code word, it’s not me.
  • Be skeptical. This is perhaps the single most important thing to do. I was raised in an era where we would trust first and ask questions later. That’s just no longer practical, particularly online. It’s not at all rude to take steps, ask questions, or hang up to protect yourself.

As it has always been: if it sounds too good to be true, it’s almost certainly not true.

Stay informed, and help the people around you. Together, we can all stay safe.

These are all topics I touch on often. Subscribe to Confident Computing! Less frustration and more confidence, solutions, answers, and tips in your inbox every week.


Footnotes & References

1: As an example of how easy this is, I asked ChatGPT, “Please generate a 16:9 photorealistic hero image for the following article:” followed by the full text of the article. It created the image. In this case, I took the first response it gave me, but it’s equally easy to ask it to “try again” or otherwise refine results.

2: Or whatever your native language is.

3: I suppose you could call it a “safe word”, but that has other connotations.
