charmbracelet/mods
Mods


AI for the command line, built for pipelines.


AI based on Large Language Models (LLMs) is useful for ingesting command output and formatting results in Markdown, JSON, and other text-based formats. Mods is a tool to add a sprinkle of AI to your command line and make your pipelines artificially intelligent.

It works great with LLMs running locally through LocalAI. You can also use OpenAI, Cohere, Groq, or Azure OpenAI.

Installation

Use a package manager:

# macOS or Linux
brew install charmbracelet/tap/mods

# Windows (with Winget)
winget install charmbracelet.mods

# Arch Linux (btw)
yay -S mods

# Nix
nix-shell -p mods
Debian/Ubuntu
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://repo.charm.sh/apt/gpg.key | sudo gpg --dearmor -o /etc/apt/keyrings/charm.gpg
echo "deb [signed-by=/etc/apt/keyrings/charm.gpg] https://repo.charm.sh/apt/ * *" | sudo tee /etc/apt/sources.list.d/charm.list
sudo apt update && sudo apt install mods
Fedora/RHEL
echo '[charm]
name=Charm
baseurl=https://repo.charm.sh/yum/
enabled=1
gpgcheck=1
gpgkey=https://repo.charm.sh/yum/gpg.key' | sudo tee /etc/yum.repos.d/charm.repo
sudo yum install mods

Or, download it:

  • Packages are available in Debian and RPM formats
  • Binaries are available for Linux, macOS, and Windows

Or, just install it with go:

go install github.com/charmbracelet/mods@latest
Shell Completions

All the packages and archives come with pre-generated completion files for Bash, ZSH, Fish, and PowerShell.

If you built it from source, you can generate them with:

mods completion bash -h
mods completion zsh -h
mods completion fish -h
mods completion powershell -h

If you use a package (Homebrew, Debs, etc.), the completions should be set up automatically, provided your shell is configured properly.

What Can It Do?

Mods works by reading standard input and prefacing it with a prompt supplied in the mods arguments. It sends the input text to an LLM and prints the result, optionally asking the LLM to format the response as Markdown. This gives you a way to "question" the output of a command. Mods also works with standard input or an argument-supplied prompt on its own.
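
For example, you can pipe command output into mods alongside a prompt, or run it with a prompt alone. These invocations are illustrative; the prompt text and file names are made up:

```shell
# Question the output of a command, formatted as Markdown
ls -la | mods --format "what are these files likely for?"

# Use a prompt alone, with no standard input
mods "write a one-line description of a CLI tool for pipelines"
```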

Be sure to check out the examples and a list of all the features.

Mods works with OpenAI compatible endpoints. By default, Mods is configured to support OpenAI's official API and a LocalAI installation running on port 8080. You can configure additional endpoints in your settings file by running mods --settings.
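
The settings file opened by mods --settings is YAML. As a rough sketch of what endpoint configuration can look like (key names and values here are illustrative, not authoritative; the defaults written by mods itself are the reference):

```yaml
# Illustrative sketch of a mods settings file
default-model: gpt-4
apis:
  openai:
    base-url: https://api.openai.com/v1
    api-key-env: OPENAI_API_KEY
  localai:
    # A LocalAI installation on the default port
    base-url: http://localhost:8080
```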

Saved Conversations

Conversations are saved locally by default. Each conversation has a SHA-1 identifier and a title (like git!).


Check ./features.md for more details.

Usage

  • -m, --model: Specify Large Language Model to use.
  • -f, --format: Ask the LLM to format the response in a given format.
  • --format-as: Specify the format for the output (used with --format).
  • -P, --prompt: Prompt should include stdin and args.
  • -p, --prompt-args: Prompt should only include args.
  • -q, --quiet: Only output errors to standard err.
  • -r, --raw: Print raw response without syntax highlighting.
  • --settings: Open settings.
  • -x, --http-proxy: Use HTTP proxy to connect to the API endpoints.
  • --max-retries: Maximum number of retries.
  • --max-tokens: Specify maximum tokens with which to respond.
  • --no-limit: Do not limit the response tokens.
  • --role: Specify the role to use (See custom roles).
  • --word-wrap: Wrap output at width (defaults to 80).
  • --reset-settings: Restore settings to default.

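Flags can be combined in a single invocation. A made-up example using several of the flags above (the log file and prompt are illustrative):

```shell
# Ask for JSON output, retry up to 3 times, and skip syntax highlighting
cat access.log | mods -f --format-as json --max-retries 3 -r "list the top requested paths"
```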
Conversations

  • -t, --title: Set the title for the conversation.
  • -l, --list: List saved conversations.
  • -c, --continue: Continue from last response or specific title or SHA-1.
  • -C, --continue-last: Continue the last conversation.
  • -s, --show: Show saved conversation for the given title or SHA-1.
  • -S, --show-last: Show previous conversation.
  • --delete-older-than=<duration>: Deletes conversations older than given duration (10d, 1mo).
  • --delete: Deletes the saved conversation for the given title or SHA-1.
  • --no-cache: Do not save conversations.
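
A typical conversation workflow using the flags above might look like this (the title and prompts are illustrative):

```shell
# Start a titled conversation, then continue it by title
mods -t repo-review "explain this diff" < changes.patch
mods -c repo-review "now suggest a commit message"

# List saved conversations, then show one
mods -l
mods -s repo-review
```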

Advanced

  • --fanciness: Level of fanciness.
  • --temp: Sampling temperature.
  • --topp: Top P value.
  • --topk: Top K value.

Custom Roles

Roles allow you to set system prompts. Here is an example of a shell role:

roles:
  shell:
    - you are a shell expert
    - you do not explain anything
    - you simply output one liners to solve the problems you're asked
    - you do not provide any explanation whatsoever, ONLY the command

Then, use the custom role in mods:

mods --role shell list files in the current directory

Setup

OpenAI

Mods uses GPT-4 by default. It will fall back to GPT-3.5 Turbo.

Set the OPENAI_API_KEY environment variable. If you don't have one yet, you can grab one from the OpenAI website.
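
Setting the key in your shell profile might look like this (the key value is a placeholder; use your own):

```shell
export OPENAI_API_KEY="sk-..."  # placeholder value
```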

Alternatively, set the AZURE_OPENAI_KEY environment variable to use Azure OpenAI. Grab a key from Azure.

Cohere

Cohere provides enterprise-optimized models.

Set the COHERE_API_KEY environment variable. If you don't have one yet, you can get it from the Cohere dashboard.

Local AI

Local AI allows you to run models locally. Mods works with the GPT4ALL-J model as set up in this tutorial.

Groq

Groq provides models powered by their LPU inference engine.

Set the GROQ_API_KEY environment variable. If you don't have one yet, you can get it from the Groq console.

Whatcha Think?

We’d love to hear your thoughts on this project. Feel free to drop us a note.

License

MIT


Part of Charm.


Charm热爱开源 • Charm loves open source