Enterprises Don’t Know What to Buy for Responsible AI

The potential for AI is growing, but technology that relies on live personal data demands responsible use, says the International Association of Privacy Professionals (IAPP). The use of AI is predicted to grow by more than 25% each year for the next five years, according to PricewaterhouseCoopers.

“It is clear frameworks enabling consistency, standardization, and responsible use are key elements to AI’s success,” the IAPP wrote in its recent Privacy and AI Governance report.

Responsible AI is a technological practice centered on privacy, human oversight, robustness, accountability, security, explainability, and fairness. Yet 80% of surveyed organizations have yet to formalize their choice of tools for assessing the responsible use of AI. Organizations find it difficult to procure appropriate technical tools to address the privacy and ethical risks stemming from AI, the IAPP wrote in the report.

While organizations have good intentions, they lack a clear picture of which technologies will get them to responsible AI. In 80% of surveyed organizations, guidelines for ethical AI are limited to high-level policy declarations and strategic objectives, the IAPP said.

“Without a clear understanding of the available categories of tools needed to operationalize responsible AI, individual decision makers following legal requirements or undertaking specific measures to avoid bias or a black box cannot, and do not, base their decisions on the same premises,” the report said.

When asked to specify “tools for privacy and responsible AI,” 34% of respondents mentioned responsible AI tools, 29% mentioned processes, and 24% listed policies.
