LLM Guard: Open-source toolkit for securing Large Language Models

LLM Guard is a toolkit designed to fortify the security of Large Language Models (LLMs). It is designed for easy integration and deployment in production environments.

It provides extensive evaluators for both the inputs and outputs of LLMs, offering sanitization, detection of harmful language and data leakage, and resistance against prompt injection and jailbreak attacks.
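In practice, these evaluators are composed as two scanner pipelines: one applied to the prompt before it reaches the model, and one applied to the model's response before it reaches the user. The sketch below shows roughly how that looks, assuming the package is installed with pip install llm-guard; the scanner names (Anonymize, PromptInjection, Toxicity, Deanonymize, NoRefusal, Sensitive) and the scan_prompt/scan_output helpers follow the project's documentation, but treat the exact names and signatures as assumptions to verify against the current docs.

```python
# Minimal sketch: wiring input and output scanners around an LLM call.
from llm_guard import scan_output, scan_prompt
from llm_guard.input_scanners import Anonymize, PromptInjection, Toxicity
from llm_guard.output_scanners import Deanonymize, NoRefusal, Sensitive
from llm_guard.vault import Vault

# The vault stores values replaced during anonymization so that
# Deanonymize can restore them in the model's output.
vault = Vault()

input_scanners = [Anonymize(vault), Toxicity(), PromptInjection()]
output_scanners = [Deanonymize(vault), NoRefusal(), Sensitive()]

prompt = "Summarize this support ticket from jane.doe@example.com ..."

# Sanitize and validate the prompt before it reaches the LLM.
sanitized_prompt, results_valid, results_score = scan_prompt(input_scanners, prompt)
if not all(results_valid.values()):
    raise ValueError(f"Prompt rejected by scanners: {results_score}")

# response_text would come from your LLM call; a placeholder is used here.
response_text = "..."

# Scan the model output before returning it to the user.
sanitized_response, results_valid, results_score = scan_output(
    output_scanners, sanitized_prompt, response_text
)
if not all(results_valid.values()):
    raise ValueError(f"Response rejected by scanners: {results_score}")

print(sanitized_response)
```

Each scan call returns the sanitized text along with per-scanner validity flags and risk scores, so the calling application decides whether to block, log, or pass the content through.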

LLM Guard was developed for production use and is fully open source.
