- Model Aliases
- Model Alias Values
- AI21 Studio
- AiLAYER
- AIMLAPI
- Anyscale
- Anthropic
- Cloudflare AI
- Cohere
- Corcel
- DeepInfra
- DeepSeek
- Fireworks AI
- Forefront AI
- FriendliAI
- Google Gemini
- GooseAI
- Groq
- Hugging Face Inference
- HyperBee AI
- Lamini
- LLaMA.CPP
- Mistral AI
- Monster API
- Neets.ai
- Novita AI
- NVIDIA AI
- OctoAI
- Ollama
- OpenAI
- Perplexity AI
- Reka AI
- Replicate
- Shuttle AI
- TheB.ai
- Together AI
- Watsonx AI
- Writer
- Zhipu AI
To simplify using LLMInterface.sendMessage(), you can use the following model aliases:
- default
- large
- small
- agent

If no model is passed, the system will use the default model for the LLM provider. If you'd prefer to specify your model by size instead of name, pass large or small.
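For instance, the calls below select a model purely by alias. This is a minimal sketch that reuses the sendMessage() signature from the examples that follow; only the alias passed as the model changes.

```javascript
// Ask for the provider's default model explicitly via the "default" alias.
const defaultResult = await LLMInterface.sendMessage(
  "openai",
  "Explain the importance of low latency LLMs",
  { model: "default" }
);

// Or select a model by size rather than by name.
const largeResult = await LLMInterface.sendMessage(
  "openai",
  "Explain the importance of low latency LLMs",
  { model: "large" }
);
```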
Aliases can simplify working with multiple LLM providers, letting you call different providers with the same model names out of the box:
const response = await LLMInterface.sendMessage("openai", "Explain the importance of low latency LLMs", { model: "small" });
const geminiResult = await LLMInterface.sendMessage("gemini", "Explain the importance of low latency LLMs", { model: "small" });
Changing the aliases is easy:
LLMInterface.setModelAlias("openai", "default", "gpt-4o-mini");
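As a sketch of how an overridden alias behaves (the override below is hypothetical, and it assumes aliases are resolved each time a request is made), later calls that reference the alias pick up the new model:

```javascript
// Hypothetical override: point OpenAI's "small" alias at gpt-4o.
LLMInterface.setModelAlias("openai", "small", "gpt-4o");

// Subsequent calls that use the alias now resolve to gpt-4o
// (assumes aliases are looked up at request time).
const smallResult = await LLMInterface.sendMessage(
  "openai",
  "Explain the importance of low latency LLMs",
  { model: "small" }
);
```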
Model Alias Values

AI21 Studio

- default: jamba-instruct
- large: jamba-instruct
- small: jamba-instruct
- agent: jamba-instruct

AiLAYER

- default: Llama-2-70b
- large: Qwen/Qwen1.5-72B-Chat
- small: alpaca-7b
- agent: Llama-2-70b

AIMLAPI

- default: gpt-3.5-turbo-16k
- large: Qwen/Qwen1.5-72B-Chat
- small: Qwen/Qwen1.5-0.5B-Chat
- agent: gpt-4-32k-0613

Anyscale

- default: mistralai/Mixtral-8x22B-Instruct-v0.1
- large: meta-llama/Llama-3-70b-chat-hf
- small: mistralai/Mistral-7B-Instruct-v0.1
- agent: mistralai/Mixtral-8x22B-Instruct-v0.1

Anthropic

- default: claude-3-sonnet-20240229
- large: claude-3-opus-20240229
- small: claude-3-haiku-20240307
- agent: claude-3-sonnet-20240229

Cloudflare AI

- default: @cf/meta/llama-3-8b-instruct
- large: @hf/thebloke/llama-2-13b-chat-awq
- small: @cf/tinyllama/tinyllama-1.1b-chat-v1.0
- agent: @cf/meta/llama-3-8b-instruct

Cohere

- default: command-r
- large: command-r-plus
- small: command-light
- agent: command-r-plus

Corcel

- default: gpt-4-turbo-2024-04-09
- large: gpt-4o
- small: cortext-lite
- agent: gemini-pro

DeepInfra

- default: openchat/openchat-3.6-8b
- large: nvidia/Nemotron-4-340B-Instruct
- small: microsoft/WizardLM-2-7B
- agent: Qwen/Qwen2-7B-Instruct

DeepSeek

- default: deepseek-chat
- large: deepseek-chat
- small: deepseek-chat
- agent: deepseek-chat

Fireworks AI

- default: accounts/fireworks/models/llama-v3-8b-instruct
- large: accounts/fireworks/models/llama-v3-70b-instruct
- small: accounts/fireworks/models/phi-3-mini-128k-instruct
- agent: accounts/fireworks/models/llama-v3-8b-instruct

Forefront AI

- default: forefront/Mistral-7B-Instruct-v0.2-chatml
- large: forefront/Mistral-7B-Instruct-v0.2-chatml
- small: forefront/Mistral-7B-Instruct-v0.2-chatml
- agent:

FriendliAI

- default: mixtral-8x7b-instruct-v0-1
- large: meta-llama-3-70b-instruct
- small: meta-llama-3-8b-instruct
- agent: gemma-7b-it

Google Gemini

- default: gemini-1.5-flash
- large: gemini-1.5-pro
- small: gemini-1.5-flash
- agent: gemini-1.5-pro

GooseAI

- default: gpt-neo-20b
- large: gpt-neo-20b
- small: gpt-neo-125m
- agent: gpt-j-6b

Groq

- default: llama3-8b-8192
- large: llama3-70b-8192
- small: gemma-7b-it
- agent: llama3-8b-8192

Hugging Face Inference

- default: meta-llama/Meta-Llama-3-8B-Instruct
- large: meta-llama/Meta-Llama-3-8B-Instruct
- small: microsoft/Phi-3-mini-4k-instruct
- agent: meta-llama/Meta-Llama-3-8B-Instruct

HyperBee AI

- default: hive
- large: gpt-4o
- small: gemini-1.5-flash
- agent: gpt-4o

Lamini

- default: meta-llama/Meta-Llama-3-8B-Instruct
- large: meta-llama/Meta-Llama-3-8B-Instruct
- small: microsoft/phi-2
- agent: meta-llama/Meta-Llama-3-8B-Instruct

LLaMA.CPP

- default: gpt-3.5-turbo
- large: gpt-3.5-turbo
- small: gpt-3.5-turbo
- agent: openhermes

Mistral AI

- default: mistral-large-latest
- large: mistral-large-latest
- small: mistral-small-latest
- agent: mistral-large-latest

Monster API

- default: meta-llama/Meta-Llama-3-8B-Instruct
- large: google/gemma-2-9b-it
- small: microsoft/Phi-3-mini-4k-instruct
- agent: google/gemma-2-9b-it

Neets.ai

- default: Neets-7B
- large: mistralai/Mixtral-8X7B-Instruct-v0.1
- small: Neets-7B
- agent:

Novita AI

- default: meta-llama/llama-3-8b-instruct
- large: meta-llama/llama-3-70b-instruct
- small: meta-llama/llama-3-8b-instruct
- agent: meta-llama/llama-3-70b-instruct

NVIDIA AI

- default: nvidia/llama3-chatqa-1.5-8b
- large: nvidia/nemotron-4-340b-instruct
- small: microsoft/phi-3-mini-128k-instruct
- agent: nvidia/llama3-chatqa-1.5-8b

OctoAI

- default: mistral-7b-instruct
- large: mixtral-8x22b-instruct
- small: mistral-7b-instruct
- agent: mixtral-8x22b-instruct

Ollama

- default: llama3
- large: llama3
- small: llama3
- agent:

OpenAI

- default: gpt-4o-mini
- large: gpt-4o
- small: gpt-4o-mini
- agent: gpt-4o

Perplexity AI

- default: llama-3.1-sonar-large-128k-chat
- large: llama-3.1-sonar-large-128k-chat
- small: llama-3.1-sonar-small-128k-chat
- agent: llama-3.1-70b-instruct

Reka AI

- default: reka-core
- large: reka-core
- small: reka-edge
- agent: reka-core

Replicate

- default: mistralai/mistral-7b-instruct-v0.2
- large: meta/meta-llama-3-70b-instruct
- small: mistralai/mistral-7b-instruct-v0.2
- agent: meta/meta-llama-3-70b-instruct

Shuttle AI

- default: shuttle-2-turbo
- large: shuttle-2-turbo
- small: shuttle-2-turbo
- agent: shuttle-2-turbo

TheB.ai

- default: gpt-4-turbo
- large: llama-3-70b-chat
- small: llama-2-7b-chat
- agent: gpt-4-turbo

Together AI

- default: google/gemma-7b
- large: mistralai/Mixtral-8x22B
- small: google/gemma-2b
- agent: Qwen/Qwen1.5-14B

Watsonx AI

- default: ibm/granite-13b-chat-v2
- large: meta-llama/llama-3-70b-instruct
- small: google/flan-t5-xxl
- agent: meta-llama/llama-3-70b-instruct

Writer

- default: palmyra-x-002-32k
- large: palmyra-x-002-32k
- small: palmyra-x-002-32k
- agent:

Zhipu AI

- default: glm-4-airx
- large: glm-4
- small: glm-4-flash
- agent: glm-4
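To illustrate how the values above play out in practice, the sketch below sends the same prompt to a few providers using the shared large alias; per the tables, OpenAI resolves it to gpt-4o, Anthropic to claude-3-opus-20240229, and Groq to llama3-70b-8192. The lowercase provider keys for Anthropic and Groq are assumed to follow the same naming pattern as "openai" and "gemini", and the loop itself is only illustrative.

```javascript
// Sketch: the same "large" alias resolves to a different concrete model per provider.
// Per the tables above: openai -> gpt-4o, anthropic -> claude-3-opus-20240229,
// groq -> llama3-70b-8192. The "anthropic" and "groq" key spellings are assumptions.
const providers = ["openai", "anthropic", "groq"];

for (const provider of providers) {
  const result = await LLMInterface.sendMessage(
    provider,
    "Explain the importance of low latency LLMs",
    { model: "large" }
  );
  console.log(`${provider}:`, result);
}
```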