With so much marketing and media hysteria surrounding the space, it's reasonable to question whether AI will really have a significant impact on cyber threats. Unfortunately, while OpenAI's own COO has claimed that AI's impact on business is largely overhyped, it appears AI will play a very significant role in the future of cybercrime.
In a January 2024 report, the UK National Cyber Security Centre (NCSC) notes that threat actors of all types and skill levels are already using AI to enhance their capabilities, and it makes the following judgments:
- AI will almost certainly increase the volume and impact of cyberattacks.
- The primary threat (for now) comes from the enhancement of existing tactics.
- AI-enhanced reconnaissance and social engineering are more effective, more efficient, and harder to detect.
- Threat actors will analyze exfiltrated data more effectively and use it to train AI models.
- AI lowers the barrier for low-skilled cybercriminals to carry out effective attacks.
- Commoditization of AI-enabled tools in criminal markets will uplift the capabilities of threat actors of all types and skill levels.
Many of these concerns were echoed by FBI Director Christopher Wray, speaking at the FBI Atlanta Cyber Threat Summit in June 2023. He stated:
"AI has significantly reduced some technical barriers, allowing those with limited experience or technical expertise to write malicious code and conduct low-level cyber activities. While still imperfect at generating code, AI has helped more sophisticated actors expedite the malware development process, create novel attacks, and enabled more convincing delivery options and effective social engineering."
To demonstrate the very real threat posed by AI-powered cyberattacks, we'll look at two ways AI is already being used.
Use of Deepfakes for Social Engineering Is on the Rise
A deepfake is an artificially produced photo, video, or audio clip that appears to show an individual doing or saying something that never happened. High-profile examples include video footage of Barack Obama talking about the dangers of fake news and footballer David Beckham speaking fluently in nine languages. Neither of these events really happened, but the footage is convincing.
Deepfakes are created using generative AI tools. They already have a bad reputation for their use in producing fake, unauthorized pornographic content, often featuring the likenesses of celebrities. In recent years, deepfakes have also been used to spread propaganda and influence political and social outcomes, prompting one expert to claim AI “destabilizes the concept of truth itself.”
Perhaps inevitably, deepfakes are now being used to add credibility to social engineering attacks, including by impersonating senior executives on video and telephone calls.
In February 2024, a finance team member at a multinational company was tricked into sending over $25 million to cybercriminals. The criminals used BEC-style email tactics to initiate contact but significantly upped the scam’s believability by inviting the individual to a video call with several of his finance colleagues, including the company’s CFO. It later transpired that every other participant on the call was a deepfake, despite looking and sounding like the real colleagues they impersonated.
In the coming months, organizations can expect a wave of similar attacks and should prepare suitable governance mechanisms—for example, requiring multiple sign-offs inside the payments system—to prevent illegitimate payments from being processed.
Evading Security Controls with Better Spelling and Grammar
We’ve all received bad phishing emails and chuckled to ourselves, wondering how anyone could fall for them. From poor spelling and grammar to incomprehensible requests, these attacks are the “lowest common denominator” of cybercrime, relying on massive volume to turn a profit.
However, with the rise of AI-powered writing tools like Grammarly and generative AI tools like ChatGPT, cybercriminals can create convincing social engineering campaigns in any language in a matter of seconds. Not only are these attacks more convincing for their human recipients, but they’re also far more likely to evade spam and malicious content filters.
While perhaps not the most exciting use of new technologies, cybercriminals already use this approach en masse. A recent SlashNext report claims AI-generated content is largely responsible for a 1,265% rise in phishing emails since Q4 2022.
Professionals to the End
Notice that these early stories of AI threats are still focused on the path of least resistance. If cybercriminals don’t have to rely on excessive technical complexity and skill… they won’t. Even with the bleeding edge of technology at their fingertips, cybercriminals are using AI tools to enhance tactics they’ve been using for over a decade.
Are cybercriminal groups using AI to enhance or expedite their malware creation? Probably, yes. In the near future, we’ll revise that assessment to “definitely.”
For now, though, the primary use of AI for cybercrime is in supercharging threats aimed at humans.
Get the Full Cybercrime Story
In our latest report, we provide a detailed analysis of the year’s top evolving cyber threats, without unnecessary fluff. The findings underscore the critical need for robust cybersecurity measures and show how cybersecurity professionals are combating these ever-evolving threats.
Discover:
- The 4 primary monetization strategies driving cybercriminal behavior
- The rise of BEC scams, ransomware, and supply chain attacks
- The growing role of AI in enhancing social engineering
- Industry analyses and predictions from renowned cybersecurity veterans
Don't miss out on this essential guide to staying ahead of evolving threats. Download the report!