The malware can target Windows, macOS and Linux devices.
HYAS Institute researcher and cybersecurity expert Jeff Sims has developed a new type of ChatGPT-powered malware named BlackMamba, which can bypass Endpoint Detection and Response (EDR) filters.
This should not come as a surprise: in January of this year, cybersecurity researchers at CyberArk reported how ChatGPT could be used to develop polymorphic malware. During their investigation, the researchers managed to create polymorphic malware by bypassing ChatGPT's content filters, using an authoritative tone in their prompts.
As per the HYAS Institute's report (PDF), the malware can gather sensitive information such as usernames, debit/credit card numbers, passwords, and other confidential data a user enters on their device.
The ChatGPT-powered BlackMamba keylogger in action (Screenshot credit: Jeff Sims)
Once it captures the data, BlackMamba uses a Microsoft Teams webhook to send it to the attacker's Teams channel, where it is "analyzed, sold on the dark web, or used for other nefarious purposes," according to the report.
Sims used Microsoft Teams because it gave him access to an organization's internal resources, and because it connects to many other business-critical tools such as Slack, which can make identifying valuable targets more manageable. A minimal sketch of the webhook mechanism follows.
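A Teams incoming webhook is simply an HTTPS endpoint that accepts a JSON payload, which is why it makes such a low-friction delivery channel. The sketch below is a benign illustration of how a message reaches a Teams channel through an incoming webhook; the webhook URL is a placeholder, and the payload is an ordinary text notification rather than anything captured from a user.

```python
import requests

# Placeholder URL -- a real incoming webhook URL is generated per channel
# when the "Incoming Webhook" connector is added in Teams.
WEBHOOK_URL = "https://example.webhook.office.com/webhookb2/..."

def post_to_teams(message: str) -> None:
    """Send a plain-text message to a Teams channel via its incoming webhook."""
    payload = {"text": message}  # Teams accepts a simple JSON body with a "text" field
    resp = requests.post(WEBHOOK_URL, json=payload, timeout=10)
    resp.raise_for_status()      # raise if Teams did not accept the payload

post_to_teams("Test notification from the incoming webhook sketch.")
```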
Leveraging the chatbot's language capabilities, Sims created a ChatGPT-powered polymorphic keylogger that randomly modifies its own code as it examines the user's input.
The researcher produced the keylogger in Python 3 and generated a unique Python script on each run by passing the chatbot's generated code to Python's exec() function.
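The polymorphism described here boils down to executing freshly generated Python source in memory rather than loading a fixed script from disk. The snippet below is a deliberately harmless sketch of that mechanism, assuming the generated code arrives as a plain string (here it is hard-coded instead of coming from a chatbot): the string is run with exec(), and the resulting function is pulled out of the namespace and called.

```python
# A benign stand-in for dynamically generated code. In the scenario described
# above, a string like this would be produced at runtime rather than hard-coded.
generated_source = """
def greet(name):
    return f"Hello, {name}!"
"""

# exec() compiles and runs the source inside the supplied namespace, so the
# new function never has to exist as a file on disk.
namespace = {}
exec(generated_source, namespace)

print(namespace["greet"]("world"))  # -> Hello, world!
```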