Employees Are Entering Sensitive Business Data Into ChatGPT

Employees may be putting confidential business information at risk by entering sensitive data into ChatGPT, the wildly popular artificial intelligence chatbot. 

Meanwhile, bad actors are looking to take advantage of its popularity by creating a fake Chrome extension that hijacks Facebook accounts and installs backdoors. The security firm Guardio reported on the malicious extension recently and said it has since been removed from the Chrome Web Store.

According to a recent report by the security firm Cyberhaven, the content people enter into ChatGPT is used by the chatbot’s maker, OpenAI, as training data to improve the technology. Among the 1.6 million workers using Cyberhaven’s products, only 5.6% have used ChatGPT in the workplace, but Cyberhaven Labs data shows that 4.9% of those workers have tried at least once to paste company data into the chatbot since it was launched three months ago.

According to the report, firms such as JPMorgan and Verizon have blocked access to ChatGPT over such concerns, and an attorney with Amazon warned employees in January not to input confidential information into the chatbot.

On March 1, Cyberhaven said it detected a record 3,381 attempts to paste corporate data into ChatGPT per 100,000 employees, incidents it defines as “data egress events.”

The cybersecurity firm also said that fewer than 1% of employees (0.9%) are responsible for 80% of data egress events.
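Cyberhaven’s exact detection methods are proprietary, but the general idea behind flagging a paste as a data egress event, matching outbound text against known markers of sensitive data, is simple to sketch. The short Python example below is a hypothetical illustration only; the pattern names and regular expressions are invented for this example and are not drawn from Cyberhaven’s product.

    import re

    # Hypothetical markers of sensitive data; real DLP tools use far
    # richer, proprietary rule sets. These are illustrative only.
    SENSITIVE_PATTERNS = {
        "api_key": re.compile(r"\b(sk|pk)-[A-Za-z0-9]{20,}\b"),
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "internal_marker": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
    }

    def classify_paste(text: str) -> list[str]:
        """Return the names of any sensitive patterns found in pasted text.

        A non-empty result would count as one "data egress event" in the
        sense described above.
        """
        return [name for name, pattern in SENSITIVE_PATTERNS.items()
                if pattern.search(text)]

    if __name__ == "__main__":
        sample = "Q3 projections attached. CONFIDENTIAL: client SSN 123-45-6789"
        hits = classify_paste(sample)
        if hits:
            print(f"data egress event: matched {hits}")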

Fake ChatGPT extension harvested browser info

To make matters worse, some prospective users may be handing their browser information and Facebook accounts to bad actors by installing the fake ChatGPT Chrome extension described above.
