
# lecsum

Automatically transcribe and summarize lecture recordings completely on-device using AI.

## Environment Setup

Install Ollama.
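
If Ollama is not installed yet, the official install script is one option on Linux (macOS and Windows users can instead download Ollama from ollama.com); the choice of platform here is an assumption, not something this project prescribes:

```sh
# Official Ollama install script (Linux); see ollama.com for other platforms
curl -fsSL https://ollama.com/install.sh | sh
```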

Create a Python virtual environment:

```sh
python3 -m venv venv
```

Activate the virtual environment:

```sh
source venv/bin/activate
```
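
On Windows, the equivalent activation command is `venv\Scripts\activate` (or `venv\Scripts\Activate.ps1` in PowerShell); this is standard `venv` behavior rather than anything specific to lecsum.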

Install dependencies:

```sh
pip install -r requirements.txt
```

## Configuration (optional)

Edit `lecsum.yaml`:

| Field | Default Value | Possible Values | Description |
| --- | --- | --- | --- |
| `whisper_model` | `"base.en"` | Whisper model name | Specifies which Whisper model to use for transcription |
| `ollama_model` | `"llama3.1:8b"` | Ollama model name | Specifies which Ollama model to use for summarization |
| `prompt` | `"Summarize: "` | Any string | Instructs the large language model during the summarization step |
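
As a rough sketch, a `lecsum.yaml` that spells out the defaults might look like this (the field names and default values come from the table above; the exact file layout is an assumption):

```yaml
# Example lecsum.yaml using the documented default values
whisper_model: "base.en"     # Whisper model used for transcription
ollama_model: "llama3.1:8b"  # Ollama model used for summarization
prompt: "Summarize: "        # Instruction given to the LLM during summarization
```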

## Run

Run the Ollama server:

```sh
ollama serve
```
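
If the summarization model has not been downloaded yet, pull it first; the model tag below is just the default from the configuration table:

```sh
# Download the default summarization model (skip if already present)
ollama pull llama3.1:8b
```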

In a new terminal, run:

```sh
./lecsum.py -c [CONFIG_FILE] [AUDIO_FILE]
```
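
For example, using the repository's configuration file and a hypothetical recording named `lecture01.mp3` (the audio file name is illustrative only):

```sh
./lecsum.py -c lecsum.yaml lecture01.mp3
```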
