
The Rise of AI in Cyber Threats: Key Challenges and How to Respond

While artificial intelligence (AI) can greatly increase productivity in the workplace, it can also be exploited to launch sophisticated cyber-attacks. A recent report from the UK’s National Cyber Security Centre (NCSC) claims that AI will “almost certainly increase the volume and heighten the impact of cyber-attacks over the next two years”.

Generative AI models, which can create new content such as text, images, and videos, have sparked controversy because they can easily be exploited for malicious purposes. For example, threat actors can use generative AI to craft convincing phishing emails that lure people into handing over credentials or other sensitive information. Likewise, AI can be used to create deepfake videos that manipulate public opinion on a variety of matters, including elections.

In this article, we will explore some of the ways in which AI has lowered the barrier to entry, allowing even inexperienced hackers to orchestrate sophisticated attacks with relative ease.

Polymorphic Viruses

AI has significantly accelerated the development of polymorphic viruses, making it easier for hackers to create and deploy these malicious programs. AI-powered tools can rapidly generate countless code variants, allowing polymorphic viruses to evade detection by antivirus software and adapt to new environments. By leveraging machine learning algorithms and mutation engines, attackers can effortlessly create virus strains that continuously mutate to evade detection. As a result, polymorphic viruses have become a significant cybersecurity threat, capable of infecting files on any operating system. Security technologies and methods such as behavior-based analytics and application whitelisting can help detect these viruses, but will they be enough to adequately safeguard against such threats in the future?
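
To see why signature-based detection struggles here, consider a minimal sketch in Python (with a harmless string standing in for a payload): even a trivial one-byte transformation produces a completely different file hash, so a scanner matching known-bad signatures misses every mutated variant.

```python
import hashlib

def signature(payload: bytes) -> str:
    """Compute a SHA-256 'signature' of the kind a naive scanner might match on."""
    return hashlib.sha256(payload).hexdigest()

def mutate(payload: bytes, key: int) -> bytes:
    """Toy stand-in for a mutation engine: XOR-encode the payload with a one-byte key.
    The behavior a real loader would reconstruct is unchanged, but the bytes differ."""
    return bytes(b ^ key for b in payload)

original = b"harmless stand-in for a malicious payload"
known_bad = {signature(original)}  # what a signature database would store

for key in (0x01, 0x42, 0xA7):
    variant = mutate(original, key)
    detected = signature(variant) in known_bad
    print(f"key=0x{key:02x} hash={signature(variant)[:16]}... detected={detected}")
# Every variant evades the hash match, which is why defenders lean on
# behavior-based analytics rather than static signatures alone.
```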

The Use of Deepfakes for Social Engineering

Deepfakes are AI-generated audio, video, or images designed to deceive people into believing they’re seeing or hearing something that never actually occurred. According to the World Economic Forum, an alarming 66% of cybersecurity professionals encountered deepfake attacks within their own organizations in 2022, highlighting how prevalent this threat has become.

These highly realistic forgeries can be produced with the generative AI tools mentioned above, and they have already been used to create fake videos of public figures, as well as unauthorized pornographic content. Deepfakes have also been employed to spread propaganda and influence political and social outcomes, and they can even lend credibility to social engineering attacks, such as impersonating senior executives on video and phone calls.

In recent years, criminals have used deepfakes to impersonate executives and colleagues on calls, tricking employees into sending large sums of money through fraudulent payments. To prevent similar attacks, organizations should implement robust governance mechanisms, such as requiring multiple sign-offs for payments.
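
As an illustration of that kind of control, here is a minimal sketch in Python (the threshold, approval count, and PaymentRequest structure are illustrative assumptions, not any particular product’s workflow): a large payment is only released once a required number of distinct approvers have signed off, so one convincingly deepfaked “executive” on a call cannot move money alone.

```python
from dataclasses import dataclass, field

APPROVALS_REQUIRED = 2          # illustrative policy: dual sign-off
REVIEW_THRESHOLD = 10_000.00    # payments above this need multiple approvers

@dataclass
class PaymentRequest:
    payee: str
    amount: float
    approvers: set[str] = field(default_factory=set)

    def approve(self, employee_id: str) -> None:
        self.approvers.add(employee_id)  # a set ignores duplicate sign-offs

    def is_releasable(self) -> bool:
        if self.amount <= REVIEW_THRESHOLD:
            return len(self.approvers) >= 1
        return len(self.approvers) >= APPROVALS_REQUIRED

request = PaymentRequest(payee="ACME Supplies", amount=250_000.00)
request.approve("emp-1042")      # the employee targeted by the deepfake call
print(request.is_releasable())   # False: one approver is never enough here
request.approve("emp-0007")      # an independent second reviewer
print(request.is_releasable())   # True: dual control satisfied
```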

AI Voice Cloning

Alongside the growing menace of visual deepfakes, AI voice cloning has emerged as a major concern. Voice biometrics, widely deployed across devices and systems and long touted as a robust security measure, are now vulnerable: AI has advanced to the point where it can convincingly clone a person’s voice from just a short sample of their speech. The implication is that voice-protected systems are no longer inherently secure, leaving them susceptible to manipulation by hackers, who can use cloned audio for a range of nefarious purposes, including convincingly pushing false narratives.
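
One mitigation worth noting is to pair voice verification with a random challenge phrase, as in the minimal sketch below (the word pool and the verify_caller check are illustrative assumptions; a real system would rely on a proper speaker-verification model): because the phrase is unpredictable, an attacker cannot simply replay a pre-recorded clone and must synthesize the exact phrase live.

```python
import secrets

# Illustrative word pool; a real deployment would use a much larger dictionary.
WORDS = ["amber", "falcon", "granite", "harbor", "meadow",
         "orchid", "pebble", "saffron", "thunder", "willow"]

def make_challenge(n_words: int = 4) -> str:
    """Generate an unpredictable phrase the caller must speak aloud.
    secrets (not random) is used so the phrase cannot be guessed in advance."""
    return " ".join(secrets.choice(WORDS) for _ in range(n_words))

def verify_caller(spoken_text: str, challenge: str, voice_match: bool) -> bool:
    """Hypothetical check: the caller must both sound right (voice_match, from a
    speaker-verification model we assume exists) and repeat the exact phrase
    issued seconds earlier."""
    return voice_match and spoken_text.strip().lower() == challenge

challenge = make_challenge()
print(f"Please repeat: {challenge!r}")
# A replayed or pre-rendered clone fails because it cannot contain the fresh phrase:
print(verify_caller("amber falcon granite harbor", challenge, voice_match=True))
```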

AI Keylogging

AI has made an old threat far more potent. Keyloggers record every keystroke to harvest passwords and other sensitive information, and researchers have demonstrated AI models that can even reconstruct keystrokes from audio recordings of typing with a success rate of nearly 95%. This means that even the most cautious and security-conscious individuals can have their sensitive information compromised. Defending against AI-powered keyloggers requires a multi-layered approach:

- Monitor user behavior to identify and respond to unusual typing patterns (see the sketch after this list).
- Deploy a robust endpoint security solution that can detect and block keylogging malware before it infiltrates systems.
- Enable multi-factor authentication (MFA), so that intercepted keystrokes alone are not enough to take over an account.
- Encrypt keystroke data so that anything captured is indecipherable without the encryption key.
- Apply regular updates and patches to software, operating systems, and security applications to close known vulnerabilities exploited by attackers.
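
As a sketch of the behavioral-monitoring idea (the baseline figures and z-score threshold are illustrative assumptions): record a user’s typical inter-keystroke timing, then flag sessions whose cadence deviates sharply from that baseline, which can indicate scripted or injected input.

```python
import statistics

def baseline(intervals_ms: list[float]) -> tuple[float, float]:
    """Mean and standard deviation of a user's normal inter-keystroke gaps."""
    return statistics.mean(intervals_ms), statistics.stdev(intervals_ms)

def is_anomalous(session_ms: list[float], mean: float, std: float,
                 z_limit: float = 3.0) -> bool:
    """Flag the session if its average cadence is far from the user's baseline."""
    session_mean = statistics.mean(session_ms)
    z = abs(session_mean - mean) / std
    return z > z_limit

# Typical human typing: gaps of roughly 80-200 ms with natural variation.
normal = [120, 95, 150, 110, 180, 130, 105, 160, 125, 140]
mean, std = baseline(normal)

human_session = [115, 140, 100, 170, 125]
scripted_session = [8, 9, 8, 10, 9]   # machine-speed "typing" from injected input

print(is_anomalous(human_session, mean, std))     # False
print(is_anomalous(scripted_session, mean, std))  # True: flag for review
```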

Better Spelling and Grammar to Evade Spam Filters

Phishing emails have traditionally been easy to spot (and filter) thanks to their poor spelling and grammar, but with the advent of AI-powered writing tools, cybercriminals can now create convincing, fluently written social engineering campaigns in any language in a matter of seconds. This has made it increasingly difficult for spam and malicious-content filters to detect and block these emails. According to a recent report by cybersecurity firm SlashNext, the use of AI-generated content has contributed to a 1,265% surge in phishing emails since 2022. AI-written lures have become a widespread and effective tactic, making them a crucial concern for individuals and organizations seeking to protect themselves from cyber threats.
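
To make the point concrete, here is a toy sketch (the misspelling list and scoring rule are illustrative assumptions, far cruder than any real filter): a heuristic that scores messages by tell-tale misspellings happily passes an AI-polished phishing email, because the signal it relies on is simply no longer there.

```python
# Tell-tale misspellings that older phishing campaigns often contained.
SUSPICIOUS_TOKENS = {"acount", "verfy", "pasword", "urgnet", "confrim", "recieve"}

def crude_spam_score(message: str) -> int:
    """Count tell-tale misspellings, the way a naive lexical filter might."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return len(words & SUSPICIOUS_TOKENS)

old_style = "Urgnet! Please verfy your acount pasword now or it will be suspnded."
ai_polished = ("We detected unusual activity on your account. "
               "Please verify your password within 24 hours to avoid suspension.")

print(crude_spam_score(old_style))    # > 0: flagged by the misspelling heuristic
print(crude_spam_score(ai_polished))  # 0: fluent text sails past the same check
```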

AI Brute Force Attacks & CAPTCHA Cracking

AI-powered brute force attacks have emerged as a significant threat to online security. These attacks use machine learning to analyze user behavior and password patterns, allowing attackers to crack passwords faster. AI has also proven able to outsmart CAPTCHA systems, which were designed to distinguish human from bot interactions: by leveraging patterns learned from human behavior, AI can now accurately solve CAPTCHA challenges, rendering these measures far less effective at keeping bots out of protected systems.
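
Defenses therefore have to assume the guesser is both fast and human-like. One standard countermeasure is exponential backoff on failed logins, sketched below (the delay values are illustrative assumptions): each consecutive failure doubles the enforced wait, making high-volume guessing impractical no matter how cleverly the guesses are generated.

```python
class LoginThrottle:
    """Per-account exponential backoff: each consecutive failure doubles the delay."""

    def __init__(self, base_delay: float = 1.0, max_delay: float = 3600.0):
        self.base_delay = base_delay
        self.max_delay = max_delay
        self.failures: dict[str, int] = {}

    def required_delay(self, account: str) -> float:
        n = self.failures.get(account, 0)
        return 0.0 if n == 0 else min(self.base_delay * 2 ** (n - 1), self.max_delay)

    def record(self, account: str, success: bool) -> None:
        if success:
            self.failures.pop(account, None)  # reset on a good login
        else:
            self.failures[account] = self.failures.get(account, 0) + 1

throttle = LoginThrottle()
for attempt in range(5):
    throttle.record("alice", success=False)
    print(f"after failure {attempt + 1}: wait {throttle.required_delay('alice'):.0f}s")
# Delays grow 1, 2, 4, 8, 16... seconds, so even an AI-optimized guesser
# gets only a handful of tries per hour instead of millions.
```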

Specialized Language Models Are on the Rise

While not a threat in itself, the rise of large language models (LLMs) has transformed organizational cybersecurity, giving security teams the power to sift through large amounts of data and generate actionable insights with simple queries. While these models show remarkable capabilities in understanding and generating human-like text, they are still limited in their ability to comprehend the intricacies of specialized cybersecurity datasets. In the coming years, security teams can expect a shift toward smaller, specialized language models that offer tailored and actionable insights, train on real-time data, and adapt quickly to the ever-evolving threat landscape, providing more focused and effective support in the fight against cyber threats.

Conclusion

As AI becomes increasingly pervasive in our daily lives, the way cybersecurity defenders respond to its emergence will be crucial. The rise of generative AI has sparked a heated debate about its ethical implications and potential uses, but what’s clear is that organizations must act quickly to harness its power before threat actors exploit it. It’s likely that threat actors will use AI to launch sophisticated phishing campaigns, create swarms of deepfakes, and gain access to detailed information about targets, ultimately bypassing endpoint security defenses. To stay ahead of the curve, security leaders must prepare for the inevitable wave of AI-generated threats and develop strategies to mitigate their impact.

Author bio

Aidan Simister
Aidan Simister is the CEO of Lepide, a leading provider of data security and compliance solutions. With over two decades of experience in the IT industry, he is recognized for his expertise in cybersecurity and his commitment to helping organizations safeguard their sensitive data.
