Deepfakes and their dangerous debut

The impacts of deepfakes and cybersecurity concerns


In 2021 I touched on the challenges presented by AI-based threats, specifically AI-generated media used to deceive recipients. What was for a while a seemingly innocuous technology, referred to as the "deepfake," has now transitioned into the threat landscape. It makes sense, really. If you could mimic the face of a trusted person, that's a golden key to just about anything, right?

But first off, what is a deepfake? Simply put, it's the creation of media designed to appear as something it is not, AI-made or otherwise. Deepfake masking is a common example, in which the visual of one person talking is "mapped" over someone else. The FBI describes it as the "broad spectrum of generated or manipulated digital content, which includes images, video, audio and text."

Deepfake also encompasses the realm of AI-generated visuals, built from libraries of collected media. That media is then used to create a "person" who appears real but is in fact not. If you haven't already, you can see the implications of something like this: someone can use falsified images of people for deceptive purposes.

Imagine reading a social media post (or similar) from a "fake" person, where the post has an attached link to, say, deals on an online purchase. A link like that could lead someone to give up credentials without realizing it.

The implications beyond

However, a dangerous link is the least of our worries when it comes to deepfake tech. Deepfakes are rooted in misinformation and deception, which have plagued the net for years, and especially recently. The COVID-19 pandemic, for example, has been rife with conspiracies, misinformation, and outright deception.

In that vein of disinfo, hackers can take advantage of what appears to be legitimate imagery to fool recipients. Imagine, for instance, receiving a message from someone in your business messaging platform, like Slack. Their profile shows a person, but in this scenario it's an artificial one. The message asks you for some credentials to do something routine, and without thinking, maybe you hand them over. It's those small moments of trust that create a bigger problem.

Another hypothetical: what if you or your enterprise utilizes VoIP? A deepfaked voice call could be used to sift for information, perhaps while posing as a manager, a customer, or even another staff member.

The long and short of it: deepfakes tie directly into phishing, the go-to method for malicious parties. Phishing is already effective on its own; combined with manipulated media, it can fool even the most careful of users.

How do I protect myself?

This is always the question, isn't it? Deepfakes present a confounding threat. Savvy people and enterprises are no strangers to cybersecurity vigilance, and it's that vigilance they must continue. It starts with awareness training, much like any defense strategy. Experts say detecting visual abnormalities starts with facial features, especially the eyes: natural eye movement and blinking are, thus far, very difficult for deepfakes to replicate in a believable way.
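For the technically inclined, one well-known research cue builds on exactly that weakness: blink rate. Real people blink regularly, while many generated faces barely blink at all. Below is a minimal, illustrative Python sketch of that idea using the open-source OpenCV and dlib libraries. The model filename, input video, blink threshold, and frame-rate fallback are assumptions for illustration only, not a production detector.

```python
# Illustrative sketch: estimate blink rate in a video clip, one of the
# eye-based cues researchers use to spot generated faces.
# Assumes dlib's public 68-point landmark model file has been downloaded.
import cv2
import dlib
from scipy.spatial import distance

LEFT_EYE = range(42, 48)   # dlib 68-point landmark indices, left eye
RIGHT_EYE = range(36, 42)  # ...and right eye

def eye_aspect_ratio(pts):
    # EAR: ratio of eye height to eye width; drops sharply during a blink
    a = distance.euclidean(pts[1], pts[5])
    b = distance.euclidean(pts[2], pts[4])
    c = distance.euclidean(pts[0], pts[3])
    return (a + b) / (2.0 * c)

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

cap = cv2.VideoCapture("suspect_clip.mp4")  # hypothetical input file
blinks, frames, closed = 0, 0, False
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames += 1
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray):
        shape = predictor(gray, face)
        pts = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
        ear = min(eye_aspect_ratio([pts[i] for i in LEFT_EYE]),
                  eye_aspect_ratio([pts[i] for i in RIGHT_EYE]))
        if ear < 0.2:        # eyes likely closed this frame (rough threshold)
            closed = True
        elif closed:         # eyes reopened: count one completed blink
            blinks += 1
            closed = False

fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back to 30 fps if unknown
cap.release()

# Real people blink roughly 15-20 times per minute; a near-zero rate
# over a long clip is a red flag, not proof.
print(f"{blinks} blinks over {frames / fps:.0f}s of video")
```

Even then, treat a heuristic like this as one signal among many. The awareness training described above still does the heavy lifting.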

If, however, you were hoping for a program or catch-all software that can automatically flag a deepfake, that's not in the mainstream (yet). Though, this may be a good thing: relying too much on automated defense solutions takes away the human element and the all-important decision-making that comes with it. Knowing the fundamentals is as crucial as ever.

The good news, though, is that deepfake attacks remain fairly immature in the broad spectrum of cybersecurity threats. Well-prepared teams with the right resources have the advantage, so long as they remain educated on the issue.

For more information, you can contact Bytagig today.
