AIConfig - the open-source framework for building production-grade AI applications
AIConfig is a framework that makes it easy to build generative AI applications for production. It manages generative AI prompts, models and model parameters as JSON-serializable configs that can be version controlled, evaluated, monitored and opened in a notebook playground for rapid prototyping.
It allows you to store and iterate on generative AI behavior separately from your application code, offering a streamlined AI development workflow.
# for python installation:
pip3 install python-aiconfig
# or using poetry: poetry add python-aiconfig
# for node.js installation:
npm install aiconfig
# or using yarn: yarn add aiconfig
Here is a sample AIConfig that uses gpt-3.5-turbo and gpt-4:
trip_planner_aiconfig.json
{
"name": "trip_planner",
"schema_version": "latest",
"metadata": {
"models": {
"gpt-3.5-turbo": {
"model": "gpt-3.5-turbo",
"top_p": 1,
"temperature": 0
},
"gpt-4": {
"model": "gpt-4",
"top_p": 1,
"temperature": 0,
"system_prompt": "You are an expert travel coordinator with exquisite taste. Be concise but specific in recommendations.\n\nEmoji: Choose an emoji based on the location: \n\nOutput Format: \n## Personalized Itinerary [emoji]\n \n### Morning\n[Breakfast spot and attraction]\n \n### Afternoon\n[Lunch spot and attraction]\n \n### Evening\n[Dinner spot and attraction]\n \n### Night\n[Dessert spot and attraction]\n\nStyle Guidelines: \nBold the restaurants and the attractions. Be structured."
}
},
"default_model": "gpt-3.5-turbo",
"parameters": {"city": "London"}
},
"prompts": [
{
"name": "get_activities",
"input": "Give me the top 5 fun attractions to do in {{city}}"
},
{
"name": "gen_itinerary",
"input": "Generate a one-day personalized itinerary based on: \n1. my favorite cuisine: {{cuisine}} \n2. list of activities: {{get_activities.output}}",
"metadata": {
"model": {"name": "gpt-4"},
"parameters": {"cuisine": "Malaysian"},
"remember_chat_context": false
}
}
]
}
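The `{{city}}`, `{{cuisine}}`, and `{{get_activities.output}}` placeholders above are Handlebars-style template parameters. As a rough illustration only (this is not the SDK's actual templating code), substitution of known parameters can be sketched like this:

```python
import re

def resolve_template(template: str, params: dict) -> str:
    """Replace {{name}} placeholders with values from params.
    A simplified stand-in for AIConfig's Handlebars-style templating."""
    def substitute(match):
        key = match.group(1).strip()
        # Leave unknown placeholders untouched; chained references like
        # {{get_activities.output}} are filled in at run time by the SDK.
        return str(params.get(key, match.group(0)))
    return re.sub(r"\{\{([^{}]+)\}\}", substitute, template)

prompt = "Give me the top 5 fun attractions to do in {{city}}"
print(resolve_template(prompt, {"city": "London"}))
```

This is why the config above can default `city` to "London" in `metadata.parameters` and still accept a different value at run time.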
The core SDK connects your AIConfig to your application code.
We cover the Python instructions here; for Node.js, please see the detailed Getting Started guide.
The example below uses the trip_planner_aiconfig.json shared above.
Resources: Getting Started Docs | YouTube Demo Video
# first, setup your openai key: https://platform.openai.com/api-keys
# in your CLI, set the environment variable
export OPENAI_API_KEY=my_key
# load your AIConfig
from aiconfig import AIConfigRuntime, InferenceOptions
config = AIConfigRuntime.load("trip_planner_aiconfig.json")
# setup streaming
inference_options = InferenceOptions(stream=True)
# run a prompt
# `get_activities` prompt generates a list of activities in the specified city ('London' is the default)
get_activities_response = await config.run("get_activities", options=inference_options)
# run a prompt with a different parameter
# update city parameter to 'san francisco'
get_activities_response = await config.run("get_activities", params={"city": "san francisco"}, options=inference_options)
# run a prompt that has dependencies
# `gen_itinerary` prompt generates an itinerary based on the output from the `get_activities` prompt and the user's specified cuisine
await config.run("gen_itinerary", params = {"cuisine" : "russian"}, run_with_dependencies=True)
# save the aiconfig to disk, and serialize outputs from the model run
config.save('updated_aiconfig.json', include_outputs=True)
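The `run_with_dependencies=True` call above resolves prompt chaining: because `gen_itinerary` references `{{get_activities.output}}`, the upstream prompt is run first and its output is fed in as a parameter. A hypothetical, self-contained sketch of that resolution order (with a fake model in place of a real API call; not the SDK's implementation):

```python
import re

# Toy prompt chain mirroring the aiconfig above: gen_itinerary
# depends on get_activities via {{get_activities.output}}.
prompts = {
    "get_activities": "Top 5 attractions in {{city}}",
    "gen_itinerary": "Plan a day from: {{get_activities.output}}",
}

def run_prompt(name, params, fake_model=lambda text: f"<output of: {text}>"):
    template = prompts[name]
    # Run any upstream prompts this one depends on, and expose their
    # outputs as parameters named "<prompt>.output".
    for dep in re.findall(r"\{\{(\w+)\.output\}\}", template):
        params[f"{dep}.output"] = run_prompt(dep, dict(params), fake_model)
    resolved = re.sub(r"\{\{([\w.]+)\}\}",
                      lambda m: str(params.get(m.group(1), m.group(0))),
                      template)
    return fake_model(resolved)

print(run_prompt("gen_itinerary", {"city": "London"}))
```

The recursion runs `get_activities` first, substitutes its output into `gen_itinerary`, then runs the downstream prompt, which is the order `run_with_dependencies` produces.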
We can iterate on an AIConfig using a notebook editor called an AI Workbook.
- Go to https://lastmileai.dev.
- Go to Workbooks page: https://lastmileai.dev/workbooks
- Click the dropdown next to 'New Workbook' and select 'Create from AIConfig'
- Upload trip_planner_aiconfig.json
Today, application code is tightly coupled with the gen AI settings for the application -- prompts, parameters, and model-specific logic are all jumbled in with app code.
- results in increased complexity
- makes it hard to iterate on the prompts or try different models easily
- makes it hard to evaluate prompt/model performance
AIConfig helps unwind complexity by separating prompts, model parameters, and model-specific logic from your application.
- simplifies application code -- simply call config.run()
- open the aiconfig in a playground to iterate quickly
- version control and evaluate the aiconfig -- it's the AI artifact for your application
- Prompts as Configs: standardized JSON format to store prompts and model settings in source control.
- Editor for Prompt Chains: Prototype and iterate on your prompt chains and model settings in AI Workbooks.
- Model-agnostic and multimodal SDK: Python & Node SDKs to use aiconfig in your application code. AIConfig is designed to be model-agnostic and multimodal, so you can extend it to work with any generative AI model, including text, image and audio.
- Extensible: Extend AIConfig to work with any model and your own endpoints.
- Collaborative Development: AIConfig enables different people to work on prompts and app development, and to collaborate by sharing the aiconfig artifact.
AIConfig makes it easy to work with complex prompt chains, various models, and advanced generative AI workflows. Start with these recipes and access more in /cookbooks:
- RAG with AIConfig
- Function Calling with OpenAI
- CLI Chatbot
- Prompt Routing
- Multi-LLM Consistency
- Safety Guardrails for LLMs - LLama Guard
- Chain-of-Verification
AIConfig supports the following models out of the box. See examples:
- OpenAI models (GPT-3, GPT-3.5, GPT-4, DALL·E 3)
- Gemini
- LLaMA
- LLaMA Guard
- Google PaLM models (PaLM chat)
- Hugging Face Text Generation Task models (Ex. Mistral-7B)
If you need to use a model that isn't provided out of the box, you can implement a ModelParser for it.
See instructions on how to support a new model in AIConfig.
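As a rough sketch of the shape a ModelParser takes, here is a standalone class whose method names are modeled on the interface (serialize, deserialize, run); the actual abstract base class lives in the aiconfig package, so consult the Extensibility guide for exact signatures. The endpoint call is stubbed out:

```python
# A simplified, standalone sketch of a custom ModelParser
# (illustrative only -- not the real aiconfig base class).
class MyModelParser:
    def id(self) -> str:
        # Unique identifier that prompts reference via metadata.model
        return "my-custom-model"

    def serialize(self, prompt_name: str, data: dict) -> dict:
        # Convert model-specific completion params into AIConfig's
        # JSON-serializable prompt representation.
        return {"name": prompt_name, "input": data["prompt"],
                "metadata": {"model": {"name": self.id()}}}

    def deserialize(self, prompt: dict) -> dict:
        # Convert a stored prompt back into completion params
        # that your model endpoint understands.
        return {"prompt": prompt["input"]}

    def run(self, prompt: dict) -> str:
        # Call your model endpoint here; stubbed out for the sketch.
        completion_params = self.deserialize(prompt)
        return f"model response for: {completion_params['prompt']}"

parser = MyModelParser()
stored = parser.serialize("greet", {"prompt": "Say hello"})
print(parser.run(stored))
```

The key idea: serialize/deserialize translate between your model's native request format and the JSON-serializable config, so the rest of the SDK can treat every model uniformly.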
AIConfig is designed to be customized and extended for your use-case. The Extensibility guide goes into more detail.
Currently, there are 3 core ways to extend AIConfig:
- Supporting other models - define a ModelParser extension
- Callback event handlers - tracing and monitoring
- Custom metadata - save custom fields in aiconfig
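Since the aiconfig format is plain JSON, custom fields can sit alongside the standard metadata keys. For example (the team and last_reviewed field names below are purely illustrative, not part of the schema):

```json
{
  "name": "trip_planner",
  "metadata": {
    "default_model": "gpt-3.5-turbo",
    "team": "travel-apps",
    "last_reviewed": "2024-01-15"
  }
}
```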
We are rapidly developing AIConfig! We welcome PR contributions and ideas for how to improve the project.
- Join the conversation on Discord - #aiconfig channel
- Open an issue for feature requests
- Read our contributing guide
We currently release new tagged versions of the pypi and npm packages every week. Hotfixes go out when completed.