AI scams are a growing concern

Watch out for suspicious messages: that familiar voice could be AI-generated


Phishing and social engineering scams are nothing new; spam and scam messages reach us on every internet-facing device. But a new problem is forming, one that combines social engineering with generative AI tools. AI scams are a concern primarily because the targets of these automatically generated scams are not organizations with complex cybersecurity defenses and extensive resources. They're regular people, susceptible to the pitfalls of social engineering.

Sadly, not all of us are engineers with IT degrees. The good news is that you can catch AI scams in the act; no expert knowledge is required. And if you ever feel down, remember: phishing is so effective that it has compromised some of the most secure networks in the world. Nobody's perfect.

What is an AI scam?

AI scams are a growing trend: attackers use social engineering techniques and AI-generated content to produce deceptive messages, compromising targeted accounts at a faster rate. But it's far more nefarious than that.

AI models are not just text-centric; they're capable of generating audio and video too. Deepfakes are nothing new, having been around for several years. But replicating human voices and faces has reached a new plateau, one capable of causing severe financial harm to unsuspecting targets. Thus, a new wave of social engineering attacks is here.

An AI scam is the use of generated media, primarily voice, to send falsified messages to specific targets. The goal is usually to scam those targets out of money or to steal personal information. Imagine receiving a voicemail or call from a friend, family member, or even a coworker. In that message, they're asking for help: money, say, or access credentials for a business account.

One man, for example, gave away $1,000 to his "father," only to find it was a falsified voicemail using an AI-generated clone of his father's voice.

That’s an alarming scenario for anyone, presenting a serious threat to personal finances. It also represents a dangerous area for business enterprises that must fend off all forms of social engineering attacks.

Protecting against AI scams

The question, then, is how to defend yourself against AI scams without the knowledge of someone experienced in cybersecurity. Cybercriminals do not need large audio samples to generate a falsified voice; short clips, found virtually anywhere online, will do. With additional research about a victim, they place calls and leave false voice messages, often describing an "emergency" that requires money.

Therefore, the first way to defend against AI scams is to understand their motive. Threat actors rely on emotional situations to override logic. If you're in an alarmed or panicked state, it's harder to step back and assess the situation. Threat actors will use any message that might instill panic or concern, from requests for help to outright kidnapping claims.

It’s important to recognize a few things if you receive a message that could be an AI scam:

  • It's about money, framed as a request for help, an "emergency," or a severe event
  • It is from a trusted source like a family member, loved one, or coworker
  • The message may be short and may sound distorted
  • The message relies on emotional situations to encourage rapid decisions
  • AI scam messages imply there is a “limited time window”

Detecting these signs can tip you off to a potential AI scam. Furthermore, if you truly suspect a family member or someone you know is struggling, use a core cybersecurity strategy: verify before you trust. In IT, zero trust means contacting the correct parties to authenticate messages, emails, and official requests related to data. The same goes for unusual calls or messages: know before you act.

In zero-trust environments, workers also use passcodes or safe phrases to verify a contact. If you believe you could be the target of an AI scam, consider agreeing on a safe phrase with relevant contacts in advance.
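The safe-phrase idea works the same way software checks shared secrets: both sides agree on a value in advance, and a request is trusted only if the caller can produce it. As a minimal sketch (the helper name and phrases below are hypothetical, not from any real product):

```python
import hmac

def verify_safe_phrase(given: str, expected: str) -> bool:
    """Check a spoken or typed safe phrase against the one agreed on in advance.

    Normalizes case and spacing so "Blue Heron" matches "blue  heron", and
    uses hmac.compare_digest so the comparison takes constant time.
    """
    def normalize(s: str) -> bytes:
        return " ".join(s.lower().split()).encode()

    return hmac.compare_digest(normalize(given), normalize(expected))

# A caller who knows the phrase passes; one who doesn't is rejected.
print(verify_safe_phrase("Blue Heron at Noon", "blue heron at noon"))  # True
print(verify_safe_phrase("send money now", "blue heron at noon"))      # False
```

The point is not the code itself but the protocol: the phrase is never sent in the suspicious message, only requested in reply, so a voice clone alone cannot pass the check.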

What else can I do?

It can feel stressful and frightening when scammers put you in their crosshairs, but the truth is these attacks are common and growing in frequency. You're not alone, and with a little caution, you can deflect an AI scam attempt.

Authorities also recommend contacting local law enforcement or the relevant federal agency if you suspect fraud, theft, or abduction. Scams are difficult to recover from once you've fallen victim, so avoiding them in the first place is the best strategy.

The next time you hear a relative or coworker making unusual requests, take a moment to reconsider. It could very well be an AI scam.

If you need additional assistance or information regarding AI, cybersecurity, or IT practices, consider third-party help. Contact Bytagig today for more information.
