Criminal Uses for ChatGPT: A Versatile New Tool for Hackers

The rise of artificial intelligence (AI) has opened up a world of possibilities, but it has also brought with it new threats. ChatGPT, an AI-powered chatbot developed by OpenAI, has the potential to become a powerful tool for cybercriminals looking to spread havoc. Launched late last year, ChatGPT has captured our attention and sparked much debate, but has it also become a useful tool in the hands of cybercriminals?

Distressingly, many cybersecurity professionals believe the answer is yes.

As fun as it is to consider all the things AI can do for us as it evolves, we must remain wary of the negative possibilities too. What potential does it possess to do us harm? In the wrong hands, could it improve a hacker’s ability to develop malware and spread havoc? And, most importantly, what can we do to protect our systems against these constantly morphing threats?

To answer these questions, we need to look at all the ways threat actors use (or could use) ChatGPT and software like it to speed up the production of malware and attack critical business systems like Active Directory.

ChatGPT Lowers the Bar for Writing Malicious Code

Although OpenAI has taken a few protective measures in an attempt to stop bad actors from exploiting ChatGPT’s impressive capabilities, they aren’t enough to prevent users from employing it to develop malicious code. Not by a long shot. Investigations into ChatGPT’s capabilities have shown how easy it is to get around OpenAI’s controls. Asking it to write something like a keylogger or ransomware directly won’t get you anywhere, but by being more granular with your requests, you can trick it into creating almost anything you need.

Research organizations have confirmed several cases of cybercriminals already leveraging the tool to help them build sophisticated malware. Even more troubling, these bad actors don’t need development expertise or coding skills to successfully create these programs with ChatGPT. The tool makes it so easy to write malware that experts predict a surge of formidable cyberattacks in the near future. Essentially, ChatGPT lowers the bar for would-be hackers to try their hand at building effective malicious code and makes it easier for those with basic skills to forge much more advanced programs.

Harder to Detect: Uncommon Languages Get Missed by Anti-Malware

ChatGPT also makes writing code in less common languages simpler for novices and experts alike. Anti-malware software has a harder time detecting programs in these languages, making them more likely to slip through the cracks of that security layer. In one investigation, ChatGPT created functional ransomware in Go. When the samples were scanned with VirusTotal, only four out of 69 engines spotted one version of the malware, and only two out of 71 engines flagged the second version.
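Defenders can at least measure these detection gaps. The sketch below computes a detection ratio from a stats dictionary shaped like the `last_analysis_stats` object in the VirusTotal v3 files API; the sample numbers mirror the 4-of-69 result above. Treat the exact field names as an assumption if you adapt this to a live integration.

```python
# Sketch: compute how many engines flagged a sample, given stats in the
# shape of VirusTotal v3 "last_analysis_stats". In a real integration you
# would fetch this via GET https://www.virustotal.com/api/v3/files/{sha256}
# with an x-apikey header; here we use canned, offline sample data.

def detection_ratio(stats: dict) -> tuple[int, int]:
    """Return (engines_flagging, engines_total) from an analysis-stats dict."""
    flagged = stats.get("malicious", 0) + stats.get("suspicious", 0)
    total = flagged + stats.get("undetected", 0) + stats.get("harmless", 0)
    return flagged, total

# Numbers mirroring the Go ransomware case: 4 of 69 engines detected it.
sample_stats = {"malicious": 4, "suspicious": 0, "undetected": 65, "harmless": 0}
flagged, total = detection_ratio(sample_stats)
print(f"{flagged}/{total} engines flagged the sample")  # 4/69 engines flagged the sample
```

A ratio this low is itself a signal: samples that most engines miss are exactly the ones worth escalating to sandbox analysis or manual review.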

The speed and convenience of creating evasive malware is likely to lead to a surge in the frequency of data breaches appearing on the dark web. This matters because, according to Google, Dark Reading, and KnowBe4, a majority of users reuse passwords across multiple accounts. As a result, in the event of a third-party data breach involving login credentials, users who reuse passwords may inadvertently compromise the security of their corporate accounts.

A defense-in-depth strategy helps to mitigate this issue. Anti-malware might have weaknesses in detecting this kind of attack, but other layers of security help bridge that gap. Shoring up your password security layer to prevent users from utilizing stolen credentials will strengthen your overall defenses and blunt the success of these attacks.
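One concrete way to shore up that password layer is to screen passwords against known-breached lists without the password ever leaving your network. The sketch below prepares a k-anonymity range query in the style of the Have I Been Pwned Pwned Passwords API (`https://api.pwnedpasswords.com/range/<prefix>`): only the first five characters of the password's SHA-1 hash would be sent. The endpoint and protocol are real; the helper names and the canned response are illustrative.

```python
import hashlib

def hibp_range_parts(password: str) -> tuple[str, str]:
    """Split the uppercase SHA-1 hex of a password into the 5-char prefix
    sent to the range endpoint and the 35-char suffix kept locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def is_breached(password: str, range_response: str) -> bool:
    """Check the locally held suffix against the 'SUFFIX:COUNT' lines the
    range endpoint returns for a prefix. The full hash never leaves the box."""
    _, suffix = hibp_range_parts(password)
    return any(line.split(":")[0] == suffix
               for line in range_response.splitlines())

# Offline demonstration with a canned response for the prefix of "password":
prefix, suffix = hibp_range_parts("password")
canned = f"{suffix}:9545824\nABCDEF0123456789ABCDEF0123456789ABC:3"
print(prefix, is_breached("password", canned))  # 5BAA6 True
```

Wiring a check like this into account-creation and password-change flows is a cheap way to stop users from choosing credentials that are already circulating in breach dumps.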

Phishing Scams Are About to Get a Lot More Convincing

The most successful phishing attacks involve good impersonation. When an account is compromised, a scammer can use it to contact other accounts in the system, and it’s far more likely that someone will believe the scammer is legitimate and fall for any tricks they have up their sleeve. Worse still, if the scammer uses ChatGPT to mimic that individual’s language habits, they’ll look even more legitimate. With ChatGPT on the scene, these phishing scams are going to get a lot more convincing.

Now, consider that you can input assorted documentation into ChatGPT, and it will compose content that imitates its cadence and common verbiage for you. Stolen credentials become a much bigger issue because if a hacker uses them to gain access to all of your company’s internal communications, they’ll be able to use ChatGPT to emulate your insider language in all their phishing attempts. And that’s precisely what’s going to happen much more frequently now that bad actors have ChatGPT to level up their phishing scams.

Opening the Floodgates to Many More Attempts on Our Systems

51% of IT decision makers believe a successful cyberattack will be credited to ChatGPT within the year. ChatGPT is like a new swing set on a hacker’s playground, and more capabilities are constantly being discovered and tested. Businesses need to be on alert and ready to implement comprehensive, layered security strategies to protect themselves against the coming onslaught of attacks.

When credentials are stolen, they can serve as an entry point for these assaults, letting attackers steal data or escalate privileges in your systems. Continuously monitoring the password security layer is one way to reduce the effectiveness of these tactics. ChatGPT and tools like it embolden bad actors to target keystone business systems like Active Directory, so we must shield ourselves with policies and tools that take these new capabilities into account as AI continues to evolve.
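Continuous monitoring of the password layer can be as simple as a recurring audit of account password hashes against a feed of breach-exposed hashes. A minimal sketch follows, assuming you already export hashes via your directory's audit tooling and maintain a breach feed as a set of hex digests; the function and account names are illustrative, not a real Active Directory API.

```python
# Minimal sketch of a recurring password-layer audit. Assumes account
# password hashes are exported via your directory's audit tooling and that
# a threat-intel feed supplies hashes known to appear in breach dumps.
# Names are illustrative, not a real Active Directory API.

def audit_accounts(account_hashes: dict[str, str],
                   breached_hashes: set[str]) -> list[str]:
    """Return accounts whose stored hash appears in the breach feed, so
    they can be forced to reset before the credentials are replayed."""
    return sorted(user for user, h in account_hashes.items()
                  if h.lower() in breached_hashes)

breached = {"5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8"}  # sha1("password")
accounts = {
    "alice": "5baa61e4c9b93f3f0682250b6cf8331b7ee846edd8"[:40],  # placeholder
    "bob":   "2aae6c35c94fcfb415dbe95f408b9ce91ee846ed",  # sha1("hello world")
}
accounts["alice"] = "5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8"
print(audit_accounts(accounts, breached))  # ['alice']
```

Running a check like this on a schedule, and forcing resets on matches, turns the password layer from a one-time gate into the kind of continuously monitored control the paragraph above calls for.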