Cybercriminals Are Already Using ChatGPT To Own You

When ChatGPT, OpenAI’s large language model interface, was released to the public late last year, it was immediately apparent to many in the information security community that the tool could, in theory, be leveraged by cybercriminals in a variety of ways.

Now, new findings from Check Point Research indicate that this is no longer a hypothetical threat.

According to the company, underground hacking forums on the dark web are already awash in real-world examples of cybercriminals attempting to use the program for malicious purposes, creating infostealers, encryption tools and phishing lures for hacking and fraud campaigns. There are even examples of actors putting it to more creative uses, like building cryptocurrency payment systems with real-time currency trackers for dark web marketplaces, or generating AI art to sell on Etsy and other online platforms.

Sergey Shykevich, a threat intelligence manager at Check Point Research, told SC Media that while most of the examples they found aligned with how they expected cybercriminals to use the program, the sheer speed of that adoption was head-turning.

“I think maybe the only really surprising thing is that it happened much faster than I thought it would happen. I didn’t think that within two to three weeks we would already see malicious tools and other stuff on the underground,” he said.

In one forum, a cybercriminal boasted about recreating malware strains and hacking techniques by prompting ChatGPT with publicly available writeups, including a Python-based file stealer.
