Rapidly build and deploy production-ready conversational AI agents using Chainlit and LangGraph. This powerful integration combines state-of-the-art language models with flexible workflow management, enabling developers to create sophisticated chatbots, virtual assistants, and interactive AI applications in minutes.
- Table of Contents
- Why This Project?
- Features
- Getting Started
- Creating Custom Workflow
- Workflows
- Upcoming Features
Chainlit is a powerful tool for building production-ready conversational AI applications, while LangGraph is a versatile framework for building and managing state graphs in AI applications. This project combines the two into a comprehensive solution for building conversational AI agents in minutes.
- Building Blocks: Utilize a variety of building blocks to create your own conversational AI agents.
- Multiple LLM Support: Automatically detects and uses the following LLMs:
- Ollama: Run open-source models locally.
- Claude: Advanced AI models by Anthropic. Apply for an API key here.
- GPT: Advanced AI models by OpenAI. Apply for an API key here.
- Grok: Grok models by xAI. Apply for an API key here.
- Groq: Fast inference service by Groq. Apply for an API key here.
- Gemini: Google AI models. Apply for an API key here.
- Examples: Explore a variety of use cases through example conversational AI agents.
Follow these steps to set up and run the project using Docker Compose or in a Python 3.10 virtual environment.
- Make sure you have Docker and Docker Compose installed on your system.
- Clone this repository and navigate to the project directory.
- Copy the `.env.example` file to `.env` and update the necessary environment variables:

```bash
cp .env.example .env
```
- Edit the `.env` file and set the required variables (a sample `.env` sketch follows these steps), including:
  - API keys (`OPENAI_API_KEY`, `ANTHROPIC_API_KEY`): optional if you use Ollama.
  - DB volume settings (`POSTGRES_VOLUME_PATH`, `MINIO_VOLUME_PATH`): create mount folders on your host machine and set the paths accordingly.
  - (Optional) `TAVILY_API_KEY` for enabling web search.
  - (Optional) Google OAuth.
  - (Optional) LangSmith.
- Start the services using Docker Compose:

```bash
docker compose up
```
This will start all the necessary services, including the Chainlit application, PostgreSQL database, and MinIO object storage.
- The application should now be running at http://localhost:8000. Log in with the default username and password (admin:admin). You can change the default credentials in the `.env` file.
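For reference, a minimal `.env` might look like the sketch below. The variable names are the ones listed above; every value is a placeholder to replace with your own:

```bash
# API keys: optional if you use Ollama
OPENAI_API_KEY=your-openai-key
ANTHROPIC_API_KEY=your-anthropic-key

# Host folders for the DB volumes (create them first)
POSTGRES_VOLUME_PATH=./volumes/postgres
MINIO_VOLUME_PATH=./volumes/minio

# Optional: enables web search
TAVILY_API_KEY=your-tavily-key
```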
- Download and install Ollama.
- Pull whichever model you want to use, for example:

```bash
ollama pull cas/ministral-8b-instruct-2410_q4km:latest
ollama pull llama3.2:3b-instruct-q8_0
```

or any GGUF-based model hosted on Hugging Face:

```bash
ollama run hf.co/{username}/{repository}:{quantization}
```
Creating your own custom workflow allows you to tailor the application to your specific needs. Follow the step-by-step guide below to create your own workflow.
- Go to the `chat_workflow/workflows` directory in your project and create a new Python file for your workflow, e.g., `my_custom_workflow.py`.
- Define Your State Class
  - Inherit from `BaseState` to define the state variables your workflow will use. For example:
```python
class MyCustomState(BaseState):
    # Model name of the chatbot
    chat_model: str
    # Add other state variables as needed
```
- Define Your Workflow
  - Inherit from `BaseWorkflow` to define your custom workflow logic, and override the `create_graph` method to define the state graph:
```python
from langgraph.graph import END, StateGraph

class MyCustomWorkflow(BaseWorkflow):
    def create_graph(self) -> StateGraph:
        # LangGraph graph definition
        graph = StateGraph(MyCustomState)
        # Add nodes to the graph
        graph.add_node("chat", self.chat_node)
        # Add edges between nodes
        graph.add_edge("chat", END)
        # Set the entry point of the graph
        graph.set_entry_point("chat")
        return graph
```
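A `StateGraph` only becomes runnable once compiled. In the app, `BaseWorkflow` is assumed to handle compilation and invocation for you; purely for illustration, the graph could be exercised on its own with LangGraph's standard `compile()` and `invoke()`:

```python
# Illustrative usage only, not part of the app's startup path.
workflow = MyCustomWorkflow()
app = workflow.create_graph().compile()  # standard LangGraph compile step
# Invoke with an initial state carrying the keys this workflow uses.
final_state = app.invoke({"name": "My Custom Workflow", "messages": [], "chat_model": ""})
```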
- Define node methods like `self.chat_node` used in the `create_graph` method (a sketch follows the next snippet).
- Define the default state by overriding the `create_default_state` method:
```python
def create_default_state(self) -> MyCustomState:
    return {
        "name": self.name(),
        "messages": [],
        "chat_model": "",
        # Initialize other state variables if needed
    }
```
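As referenced above, a node method reads the relevant state, calls the model, and returns the state updates to merge. A minimal sketch of `chat_node` follows; it is not this project's exact implementation, and `init_chat_model` is LangChain's generic model factory, which this project may not use:

```python
from langchain.chat_models import init_chat_model
from langchain_core.messages import AIMessage

class MyCustomWorkflow(BaseWorkflow):
    async def chat_node(self, state: MyCustomState) -> dict:
        # Resolve the model named in the state (assumption: the project
        # may resolve models differently).
        model = init_chat_model(state["chat_model"])
        # Invoke the model on the conversation history and append its reply.
        response: AIMessage = await model.ainvoke(state["messages"])
        return {"messages": [response]}
```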
- Set workflow properties (see the sketch after this list).
  - name: The display name of the workflow, e.g., "My Custom Workflow".
  - output_chat_model: The name of the LLM model that produces the final response.
  - chat_profile: The Chainlit chat profile for the workflow.
  - starter: The starter message for the workflow.
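Putting the properties together, a sketch might look like the following. The exact signatures are defined by this project's `BaseWorkflow`, so treat the decorators and return types as assumptions; `cl.ChatProfile` and `cl.Starter` are standard Chainlit classes:

```python
import chainlit as cl

class MyCustomWorkflow(BaseWorkflow):
    def name(self) -> str:
        return "My Custom Workflow"

    @property
    def output_chat_model(self) -> str:
        # State key of the model that produces the final response
        return "chat_model"

    @property
    def chat_profile(self) -> cl.ChatProfile:
        return cl.ChatProfile(
            name=self.name(),
            markdown_description="A custom workflow example.",
        )

    @property
    def starter(self) -> cl.Starter:
        return cl.Starter(label="Say hello", message="Hello!")
```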
This project includes several pre-built workflows that demonstrate the capabilities of the Chainlit-LangGraph integration:
Located in `simple_chat.py`, this workflow provides a basic chatbot experience:
- Utilizes a state graph with chat and tool nodes
- Supports multiple language models
- Includes basic tools like datetime and web search
- Supports image and text inputs
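To illustrate the chat-and-tool-node pattern this workflow uses, here is a self-contained sketch in plain LangGraph. It is not the code from `simple_chat.py`; the state, tool, and chat node are simplified stand-ins:

```python
from datetime import datetime
from typing import Annotated, TypedDict

from langchain_core.messages import BaseMessage
from langchain_core.tools import tool
from langgraph.graph import END, StateGraph
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition

class ChatState(TypedDict):
    # add_messages appends new messages instead of overwriting the list
    messages: Annotated[list[BaseMessage], add_messages]

@tool
def current_datetime() -> str:
    """Return the current date and time."""
    return datetime.now().isoformat()

def chat_node(state: ChatState) -> dict:
    # Stand-in for the LLM call: the real workflow invokes the selected
    # model with tools bound and returns its reply here.
    return {"messages": []}

graph = StateGraph(ChatState)
graph.add_node("chat", chat_node)
graph.add_node("tools", ToolNode([current_datetime]))
# Route to the tool node when the model requested a tool call, else end.
graph.add_conditional_edges("chat", tools_condition)
graph.add_edge("tools", "chat")
graph.set_entry_point("chat")
app = graph.compile()
```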
Found in `resume_optimizer.py`, this workflow helps users improve their resumes:
- Features a resume extractor node to process uploaded PDF resumes
- Provides detailed analysis and suggestions for resume improvement
Implemented in `lean_canvas_chat.py`, this workflow assists in business modeling:
- Guides users through the Lean Canvas creation process
- Offers a structured approach to defining business models
Each workflow demonstrates different aspects of the Chainlit-LangGraph integration, showcasing its flexibility and power in creating AI-driven applications.
- Model Context Protocol: An open protocol, open-sourced by Anthropic, that enables seamless integration between LLM applications and external data sources and tools.
- Research Assistant: A research assistant that can help users with their general research tasks, like NotebookLM.
- NVIDIA NIM: Self-host GPU-accelerated inferencing microservices for pretrained and customized AI models across clouds, data centers, and workstations.
- Cloud Deployment: Easy deployment of the application to cloud platforms like AWS, Azure, or GCP.
- Graph Builder: A meta-workflow builder that allows users to create custom workflows with natural language.
- OpenAI o1-like agentic workflow: An advanced self-prompting agentic workflow.
- Image Generation: Generate images based on user input.