This repository showcases the integration of NVIDIA NeMo Guardrails with the Llama2 model.
- Programmable Guardrails: Define the behavior of your LLM, guiding conversation and preventing discussions on unwanted topics
- Seamless Integration: Easily connect your LLM to other services and tools (e.g., LangChain), enhancing its capabilities
- Customization: A specialized modeling language, Colang, that allows you to define and control the behavior of your LLM-based conversational system.
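As a minimal illustration of Colang, the sketch below defines a rail that steers the bot away from one unwanted topic. The flow and message names (`ask about politics`, `politics rail`) are illustrative, not part of the repository:

```colang
# Example user utterances mapped to a canonical user message
define user ask about politics
  "What do you think about the election?"
  "Which party should I vote for?"

# The bot's canonical refusal message
define bot refuse to discuss politics
  "I'm sorry, I can't discuss political topics."

# The flow tying them together: when the user asks about politics, refuse
define flow politics rail
  user ask about politics
  bot refuse to discuss politics
```

At runtime, the guardrails layer matches incoming user input against the example utterances and, when the flow triggers, responds with the canonical bot message instead of passing the input to the LLM unchecked.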
We focus on three common use cases:
- Topic Guidance and Safety Measures: Guide the model to stick to certain topics and avoid specific questions. The guardrail layer examines every user input and filters it against the configured rules.
- Fact-Checking Guardrails: Ask the LLM to verify its answer against the given context; in other words, the response is checked against the information retrieved from a knowledge base.
- Guardrails Against Hallucinations: Designed for situations where there is no knowledge base to check against, so the model's response itself is screened for hallucinations.
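The fact-checking use case above can be sketched in Colang. This is a hedged example based on the fact-checking pattern in the NeMo Guardrails documentation; the `check_facts` action, the `0.5` threshold, and the flow and message names are assumptions for illustration:

```colang
define flow answer report question
  user ask about report
  bot provide report answer
  # Ask the rails runtime to score the answer against the retrieved context
  $accuracy = execute check_facts
  if $accuracy < 0.5
    # If the answer is not supported by the knowledge base, retract it
    bot remove last message
    bot inform answer unknown
```

Here the flow intercepts the bot's draft answer, scores it against the retrieved context, and replaces it with an "I don't know" style response when the score falls below the chosen threshold.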
- The *Enhancing Llama2 Conversations with NeMo Guardrails: A Practical Guide* blog post.
- NVIDIA NeMo Guardrails documentation.
- NVIDIA NeMo Guardrails official repository.