The Rise of AI in Cyber Threats: Key Challenges and How to Respond

While artificial intelligence (AI) can greatly increase productivity in the workplace, it can also be exploited to launch complex and sophisticated cyber-attacks. A recent report from the UK’s National Cyber Security Centre (NCSC) states that AI will “almost certainly increase the volume and heighten the impact of cyber-attacks over the next two years”.
Generative AI models, which can create new content such as text, images, and videos, have sparked controversy because they can easily be exploited for malicious purposes. For example, threat actors can use generative AI to craft convincing phishing emails that lure people into handing over credentials or other sensitive information. Likewise, AI can be used to create deepfake videos to manipulate public opinion on a variety of matters, including elections.
In this article, we will explore some of the ways AI has made it possible for even inexperienced hackers to join the ranks of cybercriminals, allowing them to orchestrate sophisticated attacks with relative ease.
Polymorphic Viruses
Artificial intelligence (AI) has significantly accelerated the development of polymorphic viruses, making it easier for hackers to create and deploy these malicious programs. AI-powered tools can rapidly generate countless code variants, allowing polymorphic viruses to evade detection by antivirus software and adapt to new environments. By leveraging machine learning algorithms and mutation engines, attackers can effortlessly create virus strains that continuously mutate and slip past signature-based detection. As a result, polymorphic viruses have become a significant cybersecurity threat, capable of infecting files on any operating system. Security technologies and methods such as behavior-based analytics and application whitelisting can help detect these viruses, but will they be enough to adequately safeguard against such threats in the future?
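To make the defensive side concrete, here is a minimal sketch of hash-based application whitelisting in Python. The approved-hash list and file path are hypothetical; the point is that a mutating virus changes its own hash and so can never impersonate a known-good binary, whereas signature databases must chase every new variant.

```python
import hashlib

# Hypothetical allowlist of SHA-256 hashes for approved executables.
APPROVED_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_allowed(path: str) -> bool:
    """Return True only if the file's hash matches an approved entry."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    # A polymorphic virus that mutates its code also changes its hash,
    # so it can never match a known-good entry.
    return digest.hexdigest() in APPROVED_HASHES

print(is_allowed("/usr/local/bin/some_tool"))  # hypothetical path
```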
The Use of Deepfakes for Social Engineering
Deepfakes are artificially created digital content that can deceive people into believing they’re seeing or hearing something that never actually occurred. According to the World Economic Forum, an alarming 66% of cybersecurity professionals encountered deepfake attacks within their own organizations in 2022, highlighting the prevalence of this type of threat.
These highly realistic forgeries can be easily produced using generative AI tools (mentioned above), and they have already been used to create fake videos of public figures, as well as unauthorized pornographic content. Unfortunately, deepfakes have also been employed to spread propaganda and influence political and social outcomes, and they can even be used to add credibility to social engineering attacks, such as impersonating senior executives on video and phone calls.
In recent years, deepfakes have been used to trick employees into sending large sums of money to cybercriminals, typically by impersonating a colleague or senior executive to initiate a fraudulent payment. To prevent similar attacks, organizations should prepare by implementing robust governance mechanisms, such as requiring multiple sign-offs for payments, as sketched below.
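As a rough illustration of such a control, the following Python sketch models a dual-approval payment gate. The roles, threshold, and class names are hypothetical; a real implementation would live inside a payments or ERP system.

```python
from dataclasses import dataclass, field

# Hypothetical roles and threshold, for illustration only.
AUTHORIZED_APPROVERS = {"cfo", "finance_director", "controller"}
REQUIRED_APPROVALS = 2

@dataclass
class PaymentRequest:
    amount: float
    beneficiary: str
    approvals: set = field(default_factory=set)

    def approve(self, approver: str) -> None:
        if approver not in AUTHORIZED_APPROVERS:
            raise PermissionError(f"{approver} may not approve payments")
        self.approvals.add(approver)

    def can_execute(self) -> bool:
        # A single deepfaked "executive" on a call cannot release funds;
        # at least two distinct, pre-authorized people must sign off.
        return len(self.approvals) >= REQUIRED_APPROVALS

request = PaymentRequest(amount=250_000.0, beneficiary="Acme Supplies Ltd")
request.approve("cfo")
print(request.can_execute())   # False: one approval is not enough
request.approve("controller")
print(request.can_execute())   # True
```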
AI Voice Cloning
Alongside the growing menace of visual deepfakes, AI voice cloning has emerged as a major concern. Voice biometrics, widely used in devices and systems and long touted as a robust security measure, are now vulnerable: AI has advanced to the point where it can accurately clone a person’s voice from just a short sample of their speech. The implication is that voice-protected systems are no longer secure and are susceptible to manipulation by hackers. This can lead to a range of nefarious consequences, from fraudulent account access to convincingly fabricated audio used to spread false narratives.
AI Keylogging
AI-powered keylogging tools can record every keystroke and extract sensitive information such as passwords with astonishing accuracy; researchers have demonstrated success rates of nearly 95%. This means that even the most cautious and security-conscious individuals can have their sensitive information compromised by this type of malware. Defending against AI-powered keyloggers requires a multi-layered approach:
- Monitor user behavior to identify and respond to unusual typing patterns.
- Deploy a robust endpoint security solution to detect and prevent malware-driven keyloggers from infiltrating systems.
- Enforce multi-factor authentication (MFA), which requires an additional authentication factor even if keystrokes are intercepted (a minimal sketch follows this list).
- Encrypt captured keystroke data so it is indecipherable without the encryption key.
- Apply regular updates and patches to software, operating systems, and security applications to close known vulnerabilities exploited by attackers.
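As a minimal sketch of the MFA layer, the Python example below uses the open-source pyotp library to require a time-based one-time password alongside the password check. The login flow shown is illustrative, not a production design.

```python
import pyotp  # pip install pyotp

# In practice the secret is generated once at enrollment and stored
# server-side; this freshly generated value is for illustration only.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

def login(password_ok: bool, submitted_code: str) -> bool:
    # The password check alone is not enough: a password captured by a
    # keylogger is useless without a fresh, single-use TOTP code.
    return password_ok and totp.verify(submitted_code)

print(login(True, totp.now()))  # True: valid code from the user's device
print(login(True, "000000"))    # False (almost certainly): stale or guessed code
```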
Better Spelling and Grammar to Evade Spam Filters
Phishing emails were traditionally easy to spot thanks to their poor spelling and grammar, but with the advent of AI-powered writing tools, cybercriminals can now create convincing social engineering campaigns in any language in a matter of seconds. This has made it increasingly difficult for spam and malicious content filters to detect and block these emails. According to a recent report by cybersecurity firm SlashNext, the use of AI-generated content has contributed to a 1,265% surge in phishing emails since 2022. AI-generated content has thus become a widespread and effective tactic for cybercriminals operating at scale, and a crucial concern for individuals and organizations seeking to protect themselves from cyber threats.
AI Brute Force Attacks & CAPTCHA Cracking
AI-powered brute force attacks have emerged as a significant threat to online security. These attacks use machine learning to analyze user behavior and common password patterns, allowing attackers to crack passwords far faster than blind guessing. AI has also proven able to outsmart CAPTCHA systems, which were designed to distinguish between human and bot interactions. By leveraging patterns learned from human behavior, AI can now accurately solve CAPTCHA challenges, rendering these security measures far less effective at keeping bots out of secured systems.
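One common countermeasure is throttling. The sketch below, with hypothetical thresholds, shows exponential backoff and lockout for repeated login failures, which blunts high-speed AI-driven guessing regardless of how the guesses are generated.

```python
# Hypothetical lockout policy, for illustration only.
MAX_ATTEMPTS = 5
BASE_DELAY_SECONDS = 2

failed_attempts = {}  # username -> consecutive failures

def delay_before_next_attempt(username: str) -> int:
    # Each failure doubles the enforced wait, turning a high-speed
    # guessing run into an impractically slow one.
    failures = failed_attempts.get(username, 0)
    return BASE_DELAY_SECONDS * (2 ** failures)

def record_failure(username: str) -> None:
    failed_attempts[username] = failed_attempts.get(username, 0) + 1
    if failed_attempts[username] >= MAX_ATTEMPTS:
        # Lock the account and alert; step-up verification could follow.
        print(f"Account '{username}' locked after {MAX_ATTEMPTS} failures")

record_failure("alice")
print(delay_before_next_attempt("alice"))  # 4 seconds after one failure
```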
Specialized Language Models Are on the Rise
While not a threat in itself, the rise of large language models (LLMs) has transformed organizational cybersecurity, arming security teams with the power to sift through large amounts of data and generate actionable insights from simple queries. These models have shown remarkable capabilities in understanding and generating human-like text, yet they remain limited in their ability to comprehend the intricacies of specialized cybersecurity datasets. In the coming years, security teams can expect a transition to smaller, specialized language models that offer tailored and actionable insights, training on real-time data, and the ability to adapt quickly to the ever-evolving threat landscape. These small language models will provide more focused and effective solutions for cybersecurity teams, enabling them to stay ahead in the fight against cyber threats.
Conclusion
As AI becomes increasingly pervasive in our daily lives, the way cybersecurity defenders respond to its emergence will be crucial. The rise of generative AI has sparked a heated debate about its ethical implications and potential uses, but what’s clear is that organizations must act quickly to harness its power before threat actors exploit it. It’s likely that threat actors will use AI to launch sophisticated phishing campaigns, create swarms of deepfakes, and gain access to detailed information about targets, ultimately bypassing endpoint security defenses. To stay ahead of the curve, security leaders must prepare for the inevitable wave of AI-generated threats and develop strategies to mitigate their impact.
Author bio
Aidan Simister
Aidan Simister is the CEO of Lepide, a leading provider of data security and compliance solutions. With over two decades of experience in the IT industry, he is recognized for his expertise in cybersecurity and his commitment to helping organizations safeguard their sensitive data.
Introduction to AI Code Generators

The past couple of years have brought us countless examples of generative AI improving many aspects of our lives.
Vendors, backed by strong community and developer support, keep introducing more and more services covering almost every domain.
The two most famous examples are ChatGPT (an AI chatbot) and Midjourney (an image generator).
Wikipedia provides us with the following definition for Generative AI:
“Generative artificial intelligence (also generative AI or GenAI) is artificial intelligence capable of generating text, images, or other media, using generative models. Generative AI models learn the patterns and structure of their input training data and then generate new data that have similar characteristics.”
Source: https://en.wikipedia.org/wiki/Generative_artificial_intelligence
In this blog post, I will compare some of the alternatives for using Gen AI to assist developers in producing code.
What are AI Code Generators?
AI code generators are services that use AI/ML engines, integrate with the developer’s Integrated Development Environment (IDE), and offer the developer code suggestions based on the programming language and the project’s context.
In most cases, AI code generators come as a plugin or an addon to the developer’s IDE.
Mature AI code generators support multiple programming languages, integrate with the most popular IDEs, and can provide valuable code samples by understanding both the context of the code and the cloud provider’s ecosystem.
AI Code Generators Terminology
Below are a few terms to know when using AI code generators:
- Suggestions – The code samples that an AI code generator produces as its output
- Prompts – The collection of code and supporting contextual information sent to the AI engine to generate a suggestion
- User engagement data / client-side telemetry – Events generated in the client IDE (error messages, latency, feature engagement, etc.)
- Code snippets – Lines of code created by the developer inside the IDE
- Code references – Code originating from open-source projects or externally trained data
AI Code Generators – Alternative Comparison
The table below provides a comparison between the alternatives the largest cloud providers offer their customers:

AI Code Generators – Security Aspects
AI code generators can provide many benefits for developers, but at the end of the day these are still cloud-based services deployed in multi-tenant environments, and, as with any AI/ML offering, the vendor aims to train its AI/ML engines to provide better answers.
Code may contain sensitive data – from static credentials (secrets, passwords, API keys) and hard-coded IP addresses or DNS names (for accessing back-end or even internal services) to intellectual property embedded in the code itself (part of the organization’s internal IP).
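As a rough illustration, the Python sketch below scans source files for a few such patterns before code is shared outside the organization. The patterns are simplistic examples; dedicated open-source scanners such as gitleaks or truffleHog ship far more comprehensive rule sets.

```python
import re
import sys

# A few simplistic, illustrative detection patterns.
SECRET_PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Generic API key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
    "Hard-coded IPv4 address": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def scan_file(path: str) -> None:
    with open(path, encoding="utf-8", errors="ignore") as f:
        for lineno, line in enumerate(f, start=1):
            for name, pattern in SECRET_PATTERNS.items():
                if pattern.search(line):
                    print(f"{path}:{lineno}: possible {name}")

if __name__ == "__main__":
    for filepath in sys.argv[1:]:  # usage: python scan.py file1 file2 ...
        scan_file(filepath)
```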
Before consuming AI code generators, it is recommended to thoroughly review the vendors’ documentation, understand what data (such as telemetry) is transferred from the developer’s IDE back to the cloud, and how data is protected at all layers.
The table below provides a comparison between the alternatives the largest cloud providers offer their customers from a security point of view:

Summary
In this blog post, we have reviewed the AI code generator alternatives offered by AWS, Azure, and GCP.
Although these services offer many benefits, enabling developers to write code faster, they are all still works in progress.
Customers should perform their own risk analysis before using these services, and limit as much as possible the amount of data shared with the cloud providers (since the services are all built on multi-tenant environments).
As with any code being developed, it is recommended to embed security controls, such as static application security testing (SAST) tools, into the development pipeline, and to invest in security training for developers.
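As an illustration, the snippet below wires bandit, an open-source SAST tool for Python, into a build step; the source directory and the fail-the-build policy are assumptions for this example.

```python
import subprocess
import sys

# Run bandit (pip install bandit) recursively over the source tree.
result = subprocess.run(
    ["bandit", "-r", "src/"],
    capture_output=True,
    text=True,
)
print(result.stdout)

# bandit exits non-zero when it finds issues; fail the build in that case.
if result.returncode != 0:
    sys.exit("SAST scan found issues: failing the build")
```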
References
- What is Amazon CodeWhisperer?
https://docs.aws.amazon.com/codewhisperer/latest/userguide/what-is-cwspr.html
- GitHub Copilot documentation
https://docs.github.com/en/copilot
- Duet AI in Google Cloud overview