The long-standing issue of social engineering has plagued the internet since the inception of email. Deceptive messages play an integral role in delivering malicious payloads and stealing credentials. But the problem has worsened in today's environment thanks to AI.
Deception is integral to phishing and social engineering: attackers pose as a legitimate contact or message to dupe a recipient, usually out of access credentials or other critical information. Defending against phishing emails and deception techniques has traditionally involved identifying tells or relying on zero-trust policies. But AI is changing the game and the "mechanics" of these malicious messages, and the looming threat of undetectable AI-generated phishing emails is a reality we'll have to face.
Dangerous Lure Documents
There are two major factors spearheading the dangers of AI phishing emails and social media scams. The first is automation. Because AI can generate messages far faster than a human scammer can write them, the volume of attacks increases dramatically, and attackers can launch threat campaigns with greater speed and efficacy. We've discussed dark markets like ransomware-as-a-service (RaaS) that allow even basic users to deploy complex malware. With the help of AI-generated phishing, these attacks will worsen in both volume and speed.
The second is the effectiveness of the deception. AI-generated content is not only produced faster but with fewer errors. Models can be trained to simulate business emails for business email compromise (BEC) attacks, for example. They can also work in tandem with malicious chatbots to further deceive recipients. The goal is accuracy and legitimacy, which AI achieves far more efficiently than manual methods.
Realism is the goal, and social engineering already proves a major hurdle for non-savvy users. If the use of AI in phishing can generate authentic-looking emails, what can be done?
Protecting Against AI-Enhanced Phishing
Countering social engineering and phishing is a genuine predicament. Given that phishing remains one of the most popular attack methods, it's worth implementing strategies to protect against it. But when phishing is enhanced by AI-generated content, that becomes much harder.
It isn't impossible, however, and much of what protects us against conventional phishing still applies. It's now a matter of staying true to zero-trust policies: the "never trust, always verify" method. Zero trust means content in a message is not accessed or interacted with until the sender can be verified as safe, ideally enforced in business network environments.
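In practice, sender verification often means checking the authentication results your own mail gateway has recorded. Below is a minimal sketch using Python's standard `email` package; the sample message and gateway name (`mx.example.com`) are illustrative, and the check is only meaningful if the `Authentication-Results` header was stamped by a server you control:

```python
from email import message_from_string

def sender_passes_auth(raw_message: str) -> bool:
    """Return True only if the gateway recorded both SPF and DKIM passes.

    Note: this trusts the Authentication-Results header, which is only
    reliable when added by your own receiving mail server, since an
    attacker can forge headers in the message itself.
    """
    msg = message_from_string(raw_message)
    results = msg.get("Authentication-Results", "").lower()
    return "spf=pass" in results and "dkim=pass" in results

# Illustrative raw message with a passing authentication record.
raw_pass = (
    "Authentication-Results: mx.example.com; spf=pass; dkim=pass\n"
    "From: billing@example.com\n"
    "Subject: Invoice\n"
    "\n"
    "Hello."
)
print(sender_passes_auth(raw_pass))  # True
```

A message with no recorded authentication results (or a failing one) would simply be quarantined or flagged rather than delivered, which is the zero-trust default: nothing is trusted until the check passes.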
Zero trust can apply to personal activity as well. Whenever you receive a message or email, the suspicious tells still stem from the nature of phishing. That is, phishing messages attempt to prompt the reader into taking action using an emotional trigger. This urgency is what causes the reader to make unsafe snap decisions. Often, the tells involve asking the reader to make "account changes," adjust their password, review the login for a sensitive account (bank or business account), or claim a financial accounting error was discovered.
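These urgency tells can also be flagged automatically. The sketch below is a toy heuristic, assuming a hand-picked phrase list; a real mail filter would use a much larger, curated set of indicators and weigh them alongside other signals:

```python
# Hypothetical urgency phrases drawn from common phishing tells
# (account changes, password resets, accounting errors).
URGENCY_PHRASES = [
    "verify your account",
    "your account will be suspended",
    "update your password immediately",
    "unusual sign-in activity",
    "accounting error",
]

def flag_urgency(message: str) -> list[str]:
    """Return every urgency phrase found in the message (case-insensitive)."""
    text = message.lower()
    return [phrase for phrase in URGENCY_PHRASES if phrase in text]

sample = (
    "Dear customer, we detected unusual sign-in activity. "
    "Update your password immediately or your account will be suspended."
)
print(flag_urgency(sample))
```

Running this on the sample prints the three matched phrases; a benign message like "See you at lunch" returns an empty list. The point is not the phrase list itself but the pattern: pause on any message that pressures you to act.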
Fighting AI with AI
There's also something to be said for AI-based security models that can actively detect AI-generated content. They can also use analytics to quickly gather data about threat behaviors and better defend networks from AI and malware attacks.
Because AI models can learn and respond quickly, they make an ideal defense against AI-generated attacks, which can create malicious content within minutes.
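As a toy illustration of the idea behind machine-learned detection, here is a minimal Naive Bayes text classifier in pure Python. The training strings are invented examples, not real data; production detectors train on large labeled corpora with far richer features than word counts:

```python
from collections import Counter
import math

# Invented toy training data; a real detector needs large labeled corpora.
phishing = [
    "urgent verify your account now",
    "password expired click link immediately",
]
legit = [
    "meeting notes attached for review",
    "quarterly report draft attached",
]

def word_counts(docs: list[str]) -> Counter:
    counts = Counter()
    for doc in docs:
        counts.update(doc.split())
    return counts

def log_score(message: str, counts: Counter, total: int, vocab: int) -> float:
    # Sum of log word probabilities with add-one (Laplace) smoothing.
    return sum(
        math.log((counts[w] + 1) / (total + vocab)) for w in message.split()
    )

def classify(message: str) -> str:
    pc, lc = word_counts(phishing), word_counts(legit)
    vocab = len(set(pc) | set(lc))
    sp = log_score(message, pc, sum(pc.values()), vocab)
    sl = log_score(message, lc, sum(lc.values()), vocab)
    return "phishing" if sp > sl else "legitimate"

print(classify("verify your password immediately"))  # phishing
```

The design choice here mirrors the article's point: a statistical model, once trained, classifies a new message in microseconds, keeping pace with attackers who can generate malicious content in minutes.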
The bottom line: phishing remains incredibly dangerous and will only worsen with the help of AI-generated content. Utilizing zero trust, security awareness training, and AI analytics is essential for staying ahead of the curve and protecting your data.