Russian criminals can’t wait to hop over OpenAI’s fence, use ChatGPT for evil

Cybercriminals are famously fast adopters of new tools for nefarious purposes, and ChatGPT is no different in that regard. 

However, its adoption by miscreants has happened “even faster than we expected,” according to Sergey Shykevich, threat intelligence group manager at Check Point. The security shop’s research team said it has already seen Russian cybercriminals on underground forums discussing workarounds so that they can bring OpenAI’s ChatGPT to the dark side.

Security researchers told The Register this text-generating tool is worrisome because it can be used to experiment with creating polymorphic malware, the kind deployed in ransomware attacks. It's called polymorphic because it mutates its code to evade detection and identification by antivirus software. ChatGPT can also be used to automatically produce text for phishing and other online scams, if the AI's content filter can be sidestepped.

We’d have thought ChatGPT would be most useful for coming up with emails and other messages to send people to trick them into handing over their usernames and passwords, but what do we know? Some crooks may find the AI model helpful in offering ever-changing malicious code and techniques to deploy.

“It allows people that have zero knowledge in development to code malicious tools and easily to become an alleged developer,” Shykevich told The Register. “It simply lowers the bar to become a cybercriminal.”

In a series of screenshots posted on Check Point’s blog, the researchers show miscreants asking other crooks what’s the best way to use a stolen credit card to pay for
