Ollama Completion Example

Welcome to this cheerful example of using Ollama with LangChain Go! 🎉

This simple yet powerful script demonstrates how to generate text completions from a locally running Ollama model through the LangChain Go library. Let's break down what this exciting code does!

What Does This Example Do?

  1. Sets Up Ollama: The script initializes an Ollama LLM client configured to use the "llama2" model (see the sketch after this list). This is like preparing our AI assistant for a conversation!

  2. Generates a Completion: We ask the AI a question: "Who was the first man to walk on the moon?" The AI will then generate a response to this query.

  3. Streams the Output: As the AI generates its response, the script streams the output in real time. This means you can see the answer being "typed out" as it's generated!

  4. Handles Errors: The script includes error handling to ensure smooth operation and provide helpful feedback if something goes wrong.
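
The full source of the example isn't reproduced in this README, but a minimal sketch of those four steps, assuming the github.com/tmc/langchaingo module and its llms and ollama packages, might look like this:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/tmc/langchaingo/llms"
	"github.com/tmc/langchaingo/llms/ollama"
)

func main() {
	// Step 1: initialize an Ollama-backed LLM client using the "llama2" model.
	llm, err := ollama.New(ollama.WithModel("llama2"))
	if err != nil {
		log.Fatal(err)
	}

	ctx := context.Background()

	// Steps 2 and 3: ask the question and stream each chunk of the answer
	// to the console as it is generated.
	_, err = llms.GenerateFromSinglePrompt(ctx, llm,
		"Who was the first man to walk on the moon?",
		llms.WithTemperature(0.8),
		llms.WithStreamingFunc(func(ctx context.Context, chunk []byte) error {
			fmt.Print(string(chunk))
			return nil
		}),
	)
	// Step 4: report any failure (for example, Ollama not running or the model not pulled).
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println()
}
```

Printing inside the streaming callback is what produces the "typed out" effect; the string returned by the call (ignored above) contains the same completed text once generation finishes.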

How to Run

  1. Make sure you have Go installed on your system.
  2. Ensure you have Ollama set up and running locally, and that the "llama2" model has been pulled (ollama pull llama2).
  3. Run the script using: go run ollama_completion_example.go

What to Expect

When you run this script, you'll see the AI's response to the moon landing question printed to your console in real time. It's like watching the AI think and respond!

Fun Fact

Did you know? The temperature setting (0.8 in this example) controls how creative or focused the AI's responses are. Higher values make it more creative, while lower values make it more deterministic!
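
Because temperature is just a call option, experimenting with it only takes one line. Here's a hedged fragment that reuses the llm client from the sketch above (the askWithTemperature helper is hypothetical, not part of the example):

```go
// askWithTemperature is a hypothetical helper showing how the temperature
// option is passed; lower values give more focused, repeatable answers.
func askWithTemperature(ctx context.Context, llm *ollama.LLM, temp float64) (string, error) {
	return llms.GenerateFromSinglePrompt(ctx, llm,
		"Who was the first man to walk on the moon?",
		llms.WithTemperature(temp), // the example uses 0.8
	)
}
```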

Enjoy exploring the world of AI-powered text generation with Ollama and LangChain Go! 🚀🌙