Open-source framework for LLM output validation and safety
Guardrails AI is an open-source framework for adding safety and validation to LLM outputs. Guardrails are defined as validators that check for hallucinations, PII, toxicity, off-topic responses, and more. The Guardrails Hub provides more than 50 community validators, and the framework works with any LLM provider.
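As a rough sketch of the typical workflow, the snippet below wraps a Hub validator in a `Guard` and validates a piece of model output. It assumes the `DetectPII` validator has already been installed from the Guardrails Hub (`guardrails hub install hub://guardrails/detect_pii`); exact class names and parameters can vary by validator, so treat this as illustrative rather than definitive.

```python
# Minimal sketch: check LLM output for PII with Guardrails AI.
# Assumes: pip install guardrails-ai
#          guardrails hub install hub://guardrails/detect_pii
from guardrails import Guard
from guardrails.hub import DetectPII  # community validator from the Guardrails Hub

# Build a guard that flags email addresses and phone numbers in model output.
guard = Guard().use(
    DetectPII(pii_entities=["EMAIL_ADDRESS", "PHONE_NUMBER"], on_fail="exception")
)

llm_output = "You can reach our support team at support@example.com."

try:
    outcome = guard.validate(llm_output)  # run the validator over the text
    print("passed:", outcome.validation_passed)
except Exception as err:  # on_fail="exception" raises when a validator trips
    print("guardrail triggered:", err)
```

The same `Guard` object can also wrap an LLM call directly, so validation runs on the model's response before it reaches the rest of the application.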