Yuen Pin Yeap is CEO at NeuShield. Yuen Pin is an experienced leader with a long track record of creating innovative security solutions.
Artificial intelligence (AI) is a hotly debated topic, pitting proponents against those who raise concerns about boundaries and control. Generative AI tools such as ChatGPT use algorithms and data modeling to produce new code for tools and apps and to generate content for emails and other forms of communication. ChatGPT is the most referenced and widely used generative AI today; other generative models include AlphaCode, Anyword, Bard, Cohere Generate, Duet AI, GitHub Copilot, Jasper and more.
When it comes to code and content, hackers are already employing AI to create new zero-day ransomware and malware and to sharpen the content behind their social engineering methods. AI is used to craft fraudulent phishing content that gets users to click, swipe or tap on nefarious web links, emails and texts, and it helps hackers generate phishing emails and web content free of the typos and grammar errors that once gave such scams away. Threat actors are also beginning to capitalize on generative AI to write new and improved code for more sophisticated malware that is harder for cybersecurity tools to detect.
Hackers also use AI as a tutor to help them create ransomware and other forms of malware more easily and quickly. If they don’t know how to write code in assembly or C, ChatGPT can help teach them how to create working samples and new code. AI output can train hackers to generate complex programs with payloads that encrypt files and execute at runtime. The user still needs to put the pieces together manually, but the learning time is significantly reduced, and new ransomware code can be generated and assembled in just a few hours.
AI Is Becoming A Weapon Of Choice For Attackers And Defenders
Cybersecurity has always been a cat-and-mouse game with no rules. It is a series of crafty and calculated tactics designed to exploit or thwart an opponent. AI is yet another weapon bad actors are employing to exploit individuals and corporate defenders. While attackers can use AI algorithms to automate and enhance their attacks, defenders are busy leveraging AI for threat and anomaly detection, and predictive analysis to identify and mitigate potential attacks more effectively.
While ChatGPT has built-in safeguards that prevent bad actors from explicitly asking it to create malware, they can use ChatGPT to generate multiple pieces of nefarious code, and the program will output those pieces separately. The user then needs to piece everything together and make some adjustments. But it can be accomplished, and the time and effort saved are substantial.
ChatGPT can output a framework for zero-day ransomware capable of evading cybersecurity tools, producing the necessary pieces in virtually any programming language: source code that traverses directories and returns a list of files, and code that encrypts those files. The hacker can then assemble them into a binary program that deploys and encrypts data at runtime.
The manipulation of existing ransomware code is another tactic used by cybercriminals. If ransomware has been detected by a cybersecurity tool, ChatGPT can be used to generate a different algorithm to avoid detection. AI-powered tools and techniques can enhance the capabilities of attackers in several ways, making them more sophisticated and efficient. Here are a few ways cyber attackers can use AI.
Automated attacks: Automate various stages of an attack, such as reconnaissance, vulnerability scanning and launching the attack. Machine learning algorithms can analyze massive amounts of data and identify potential vulnerabilities or targets much faster than humans.
Intelligent malware: Creation of malware that adapts and evolves based on the target environment. Malware equipped with AI can learn and modify its behavior to bypass security measures, evade detection and spread more effectively.
Social engineering: AI algorithms analyze vast amounts of personal data from social media platforms, emails and other sources, enabling attackers to create highly targeted and convincing phishing attacks. AI can generate highly effective emails, text messages and voice calls, making it harder for victims to identify fraudulent activities.
Evasion of defense systems: Development of advanced evasion techniques to bypass security measures, such as intrusion detection systems and endpoint security. AI algorithms can identify vulnerabilities in defenses and create customized attacks that exploit those weaknesses.
Deepfakes and manipulation: AI-powered deepfake technology can create highly realistic fake videos, images and audio for malicious purposes such as impersonation, disinformation or blackmail.
AI Tactics For Corporate Defenders
Cybersecurity professionals and organizations must invest in AI-based defenses and techniques to counter emerging threats. These can include:
Using AI to enhance threat detection and response by developing robust machine learning algorithms that identify malicious activities and implementing advanced behavioral analytics that flag anomalies in real time.
Collaborating across the cybersecurity community, researchers and technology developers to stay ahead of the evolving techniques used by cyber attackers.
Leveraging machine learning within endpoint security to detect threats and anomalies, with predictive analysis that identifies and mitigates potential attacks.
Ensuring that when a cyberattack makes its way past security defenses, targeted data and computer systems can be immediately recovered to pre-attack status, shrinking the recovery process from weeks to hours and eliminating the need to pay a ransom or to rebuild and restore infected computers.
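The behavioral-analytics idea above can be sketched as a toy rolling-baseline detector. This is illustrative only: the per-minute file-modification counter and the 3-sigma threshold are invented assumptions, and real endpoint products use far richer signals and models than this.

```python
from collections import deque
from statistics import mean, stdev

def make_anomaly_detector(window=20, min_history=5, threshold=3.0):
    """Flag values that deviate sharply from the recent baseline."""
    history = deque(maxlen=window)

    def check(value):
        is_anomaly = False
        if len(history) >= min_history:
            mu = mean(history)
            sigma = stdev(history) or 1e-9  # avoid divide-by-zero on flat data
            is_anomaly = abs(value - mu) / sigma > threshold
        history.append(value)
        return is_anomaly

    return check

# Simulated per-minute file-modification counts from an endpoint agent.
detector = make_anomaly_detector()
baseline = [12, 9, 11, 10, 13, 8, 12, 11, 10, 9]
baseline_alerts = [detector(n) for n in baseline]
burst_alert = detector(500)  # a ransomware-like burst of file writes

print(any(baseline_alerts))  # False: normal activity stays quiet
print(burst_alert)           # True: the sudden spike is flagged
```

The design point is that the detector needs no signatures: it learns "normal" from recent history, which is why this style of analytics can catch novel, AI-generated malware that signature-based tools miss.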
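The instant-recovery idea above can be illustrated with a deliberately simplified snapshot-and-restore sketch. The directory names and demo file are hypothetical, and a real product would also protect the snapshot itself from tampering or deletion; this only shows the roll-back-to-pre-attack-state concept.

```python
import shutil
import tempfile
from pathlib import Path

def snapshot(src: Path, dest: Path) -> None:
    """Copy every file under src into dest, preserving relative paths."""
    for f in src.rglob("*"):
        if f.is_file():
            target = dest / f.relative_to(src)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)

def restore(snap: Path, src: Path) -> None:
    """Overwrite the live tree with the snapshot's pre-attack copies."""
    for f in snap.rglob("*"):
        if f.is_file():
            target = src / f.relative_to(snap)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)

# Demo in a temp directory: snapshot, simulate an attack, then roll back.
root = Path(tempfile.mkdtemp())
live, snap = root / "live", root / "snap"
live.mkdir()
(live / "report.txt").write_text("quarterly numbers")

snapshot(live, snap)
(live / "report.txt").write_text("ENCRYPTED!!!")  # simulated ransomware damage
restore(snap, live)
print((live / "report.txt").read_text())  # back to "quarterly numbers"
```

Because recovery is a local copy rather than a rebuild from offsite backups, the roll-back takes seconds per file, which is what turns a weeks-long recovery into hours.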
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.