Amid concerns that employees could be entering sensitive information into the ChatGPT artificial intelligence model, a data privacy vendor has launched a redaction tool aimed at reducing companies’ risk of inadvertently exposing customer and employee data.
Private AI’s new PrivateGPT platform integrates with OpenAI’s high-profile chatbot, automatically redacting 50+ types of personally identifiable information (PII) in real time as users enter ChatGPT prompts.
PrivateGPT sits in the middle of the chat process, stripping everything from health data and credit card information to contact details, dates of birth, and Social Security numbers out of user prompts before sending them through to ChatGPT. When ChatGPT responds, PrivateGPT repopulates the answer with the original PII to keep the experience seamless for users, according to a statement this week from PrivateGPT creator Private AI.
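Private AI has not published PrivateGPT’s internals, but the general pattern the company describes — detect PII, swap in placeholder tokens, and reverse the substitution on the way back — can be sketched in a few lines of Python. The regex patterns, token format, and function names below are illustrative assumptions, not the product’s actual code:

```python
import re

# Hypothetical redaction-proxy sketch, for illustration only --
# not Private AI's actual implementation.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "DOB": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),
}

def redact(prompt: str):
    """Replace detected PII with placeholder tokens; return the
    scrubbed prompt plus a mapping used to restore originals later."""
    mapping = {}
    for label, pattern in PII_PATTERNS.items():
        for i, match in enumerate(pattern.findall(prompt), start=1):
            token = f"[{label}_{i}]"
            mapping[token] = match
            prompt = prompt.replace(match, token, 1)
    return prompt, mapping

def restore(response: str, mapping: dict) -> str:
    """Re-insert the original PII into the model's response text."""
    for token, original in mapping.items():
        response = response.replace(token, original)
    return response

# Usage: only the scrubbed prompt would ever reach the chatbot.
scrubbed, mapping = redact("My SSN is 123-45-6789, card 4111 1111 1111 1111.")
print(scrubbed)  # My SSN is [SSN_1], card [CREDIT_CARD_1].

# fake_response stands in for the text ChatGPT would send back.
fake_response = "I can't store [SSN_1] or [CREDIT_CARD_1] for you."
print(restore(fake_response, mapping))
```

In practice, a production tool like PrivateGPT would rely on trained PII-detection models covering its 50-plus entity types rather than simple regexes, but the round trip — scrub on the way out, restore on the way in — is the same.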
“Generative AI will only have a space within our organizations and societies if the right tools exist to make it safe to use,” said Patricia Thaine, co-founder and CEO of Private AI, in a statement. “By sharing personal information with third-party organizations, [companies] lose control over how that data is stored and used, putting themselves at serious risk of compliance violations.”
Privacy Risks & ChatGPT
Every time a user enters data into a ChatGPT prompt, that information is ingested into the data set used to train the next iteration of the service’s large language model (LLM). The concern is that the information could be retrieved at a later date if proper data security isn’t in place for the service.