While ChatGPT’s ability to generate human-like answers has been widely celebrated, it is also emerging as a serious security risk for businesses.
The artificial intelligence (AI) tool is already being used to enhance phishing attacks, said Jonathan Jackson, BlackBerry’s Asia-Pacific director of engineering.
Pointing to activities spotted in underground forums, he said there were indications hackers were using OpenAI’s ChatGPT and other AI-powered chatbots to improve impersonation attacks. The tools were also being used to power deepfakes and spread misinformation, Jackson said in a video interview with ZDNET. He added that hacker forums were offering services that leverage ChatGPT for nefarious purposes.
In a note posted last month, Check Point Technologies’ threat intelligence group manager Sergey Shykevich also noted signs that cybercriminals were using ChatGPT to speed up their code writing. In one instance, the security vendor observed the tool being used to complete a full infection flow, which included creating a convincing spear-phishing email and a reverse shell that could accept commands in English.
While the attack code developed so far has remained fairly basic, Shykevich said it was simply a matter of time before more sophisticated threat actors enhanced the way they used such AI-based tools.
Some “side effects” will emerge from the technologies that power deepfakes and ChatGPT, wrote Synopsys Software Integrity Group’s principal scientist Sammy Migues in his 2023 predictions.