
๐Ÿธ๐Ÿ’ฌ - a deep learning toolkit for Text-to-Speech, battle-tested in research and production

๐Ÿธ Coqui TTS is a library for advanced Text-to-Speech generation.

🚀 Pretrained models in ~1100 languages.

🛠️ Tools for training new models and fine-tuning existing models in any language.

📚 Utilities for dataset analysis and curation.


📣 News

  • Fork of the original, unmaintained repository. New PyPI package: coqui-tts
  • 0.25.0: OpenVoice models now available for voice conversion.
  • 0.24.2: Prebuilt wheels are now also published for Mac and Windows (in addition to Linux as before) for easier installation across platforms.
  • 0.20.0: XTTSv2 is here with 17 languages and better performance across the board. XTTS can stream with <200ms latency (see the streaming sketch after this list).
  • 0.19.0: XTTS fine-tuning code is out. Check the example recipes.
  • 0.14.1: You can use Fairseq models in ~1100 languages with 🐸TTS.
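
Regarding the XTTS streaming mentioned above: the high-level TTS API shown later in this README synthesizes complete utterances, while chunk-by-chunk streaming goes through the lower-level Xtts model class. A minimal sketch based on its inference_stream API, assuming a locally downloaded XTTSv2 checkpoint (the config and checkpoint paths are placeholders):

import torch
from TTS.tts.configs.xtts_config import XttsConfig
from TTS.tts.models.xtts import Xtts

# Load a locally downloaded XTTSv2 checkpoint (placeholder paths).
config = XttsConfig()
config.load_json("/path/to/xtts/config.json")
model = Xtts.init_from_config(config)
model.load_checkpoint(config, checkpoint_dir="/path/to/xtts/", eval=True)
if torch.cuda.is_available():
    model.cuda()

# Compute voice conditioning latents from a short reference clip.
gpt_cond_latent, speaker_embedding = model.get_conditioning_latents(
    audio_path=["reference.wav"]
)

# inference_stream yields audio chunks as soon as they are generated.
chunks = model.inference_stream(
    "Hello world!", "en", gpt_cond_latent, speaker_embedding
)
for i, chunk in enumerate(chunks):
    print(f"chunk {i}: {chunk.shape[-1]} samples")  # play or buffer each chunk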

💬 Where to ask questions

Please use our dedicated channels for questions and discussion. Help is much more valuable if it's shared publicly so that more people can benefit from it.

Type Platforms
🚨 Bug Reports, Feature Requests & Ideas GitHub Issue Tracker
👩‍💻 Usage Questions GitHub Discussions
🗯 General Discussion GitHub Discussions or Discord

The issues and discussions in the original repository also remain a useful source of information.

🔗 Links and Resources

Type Links
💼 Documentation ReadTheDocs
💾 Installation TTS/README.md
👩‍💻 Contributing CONTRIBUTING.md
🚀 Released Models Standard models and Fairseq models in ~1100 languages

Features

  • High-performance text-to-speech and voice conversion models, see list below.
  • Fast and efficient model training with detailed training logs on the terminal and TensorBoard.
  • Support for multi-speaker and multilingual TTS.
  • Released and ready-to-use models.
  • Tools to curate TTS datasets under dataset_analysis/.
  • Command line and Python APIs to use and test your models.
  • Modular (but not too much) code base enabling easy implementation of new ideas.

Model Implementations

Spectrogram models

  • Tacotron, Tacotron2
  • Glow-TTS
  • FastSpeech, FastPitch

End-to-End Models

  • VITS
  • YourTTS
  • XTTS

Vocoders

  • MelGAN
  • HiFiGAN
  • UnivNet
  • WaveRNN, WaveGrad

Voice Conversion

  • FreeVC
  • OpenVoice

You can also help us implement more models.

Installation

๐ŸธTTS is tested on Ubuntu 24.04 with python >= 3.9, < 3.13, but should also work on Mac and Windows.

If you are only interested in synthesizing speech with the pretrained 🐸TTS models, installing from PyPI is the easiest option.

pip install coqui-tts
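
As a quick smoke test of a PyPI install, the single-speaker German model used later in this README can synthesize a short clip (the first run downloads the model checkpoint):

from TTS.api import TTS

# Downloads the model on first use, then writes a short German clip.
TTS("tts_models/de/thorsten/tacotron2-DDC").tts_to_file(
    text="Hallo Welt!", file_path="hello.wav"
)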

If you plan to code or train models, clone 🐸TTS and install it locally.

git clone https://github.com/idiap/coqui-ai-TTS
cd coqui-ai-TTS
pip install -e .

Optional dependencies

The following extras allow the installation of optional dependencies:

Name Description
all All optional dependencies
notebooks Dependencies only used in notebooks
server Dependencies to run the TTS server
bn Bangla G2P
ja Japanese G2P
ko Korean G2P
zh Chinese G2P
languages All language-specific dependencies

You can install extras with one of the following commands:

pip install coqui-tts[server,ja]
pip install -e .[server,ja]
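
Note that in some shells (e.g. zsh) the brackets must be quoted:

pip install "coqui-tts[server,ja]"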

Platforms

If you are on Ubuntu (Debian), you can also run the following commands for installation.

make system-deps
make install

Docker Image

You can also try out Coqui TTS without installation using the Docker image. Simply run the following command and you will be able to run TTS:

docker run --rm -it -p 5002:5002 --entrypoint /bin/bash ghcr.io/coqui-ai/tts-cpu
python3 TTS/server/server.py --list_models # To get the list of available models
python3 TTS/server/server.py --model_name tts_models/en/vctk/vits # To start a server

You can then enjoy the TTS server here. More details about the Docker images (like GPU support) can be found here.
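
For GPU inference, the container needs access to the host's GPUs via the NVIDIA Container Toolkit. A hypothetical sketch, assuming a CUDA-enabled image is published as ghcr.io/coqui-ai/tts alongside the CPU one (both the image name and the server's --use_cuda flag are assumptions here):

# Assumed CUDA image name; requires the NVIDIA Container Toolkit on the host.
docker run --rm -it -p 5002:5002 --gpus all --entrypoint /bin/bash ghcr.io/coqui-ai/tts
python3 TTS/server/server.py --model_name tts_models/en/vctk/vits --use_cuda true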

Synthesizing speech by 🐸TTS

๐Ÿ Python API

Multi-speaker and multi-lingual model

import torch
from TTS.api import TTS

# Get device
device = "cuda" if torch.cuda.is_available() else "cpu"

# List available 🐸TTS models
print(TTS().list_models())

# Initialize TTS
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to(device)

# List speakers
print(tts.speakers)

# Run TTS
# โ— XTTS supports both, but many models allow only one of the `speaker` and
# `speaker_wav` arguments

# TTS with list of amplitude values as output, clone the voice from `speaker_wav`
wav = tts.tts(
  text="Hello world!",
  speaker_wav="my/cloning/audio.wav",
  language="en"
)

# TTS to a file, use a preset speaker
tts.tts_to_file(
  text="Hello world!",
  speaker="Craig Gutsy",
  language="en",
  file_path="output.wav"
)
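
The amplitude list returned by tts.tts() can also be written to disk manually. A minimal sketch, assuming the soundfile package is installed and XTTSv2's 24 kHz output rate (other models may differ):

import numpy as np
import soundfile as sf

# `wav` is the amplitude list returned by tts.tts() above;
# 24000 Hz matches XTTSv2, adjust for other models.
sf.write("cloned.wav", np.array(wav), 24000)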

Single speaker model

# Initialize TTS with the target model name
tts = TTS("tts_models/de/thorsten/tacotron2-DDC").to(device)

# Run TTS
tts.tts_to_file(text="Ich bin eine Testnachricht.", file_path=OUTPUT_PATH)

Voice conversion (VC)

Converting the voice in source_wav to the voice of target_wav

tts = TTS("voice_conversion_models/multilingual/vctk/freevc24").to("cuda")
tts.voice_conversion_to_file(
  source_wav="my/source.wav",
  target_wav="my/target.wav",
  file_path="output.wav"
)

Other available voice conversion models:

  • voice_conversion_models/multilingual/multi-dataset/openvoice_v1
  • voice_conversion_models/multilingual/multi-dataset/openvoice_v2
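
These should work with the same voice_conversion_to_file call shown above, swapping in the OpenVoice model name; for example (device is defined in the earlier Python API snippet):

tts = TTS("voice_conversion_models/multilingual/multi-dataset/openvoice_v2").to(device)
tts.voice_conversion_to_file(
  source_wav="my/source.wav",
  target_wav="my/target.wav",
  file_path="output.wav"
)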

Voice cloning by combining single speaker TTS model with the default VC model

This way, you can clone voices by using any model in ๐ŸธTTS. The FreeVC model is used for voice conversion after synthesizing speech.

tts = TTS("tts_models/de/thorsten/tacotron2-DDC")
tts.tts_with_vc_to_file(
    "Wie sage ich auf Italienisch, dass ich dich liebe?",
    speaker_wav="target/speaker.wav",
    file_path="output.wav"
)

TTS using Fairseq models in ~1100 languages 🤯

For Fairseq models, use the following name format: tts_models/<lang-iso_code>/fairseq/vits. You can find the language ISO codes here and learn about the Fairseq models here.

# TTS with fairseq models
api = TTS("tts_models/deu/fairseq/vits")
api.tts_to_file(
    "Wie sage ich auf Italienisch, dass ich dich liebe?",
    file_path="output.wav"
)

Command-line interface tts

Synthesize speech on the command line.

You can either use your trained model or choose a model from the provided list.

  • List provided models:

    tts --list_models
  • Get model information. Use the names obtained from --list_models.

    tts --model_info_by_name "<model_type>/<language>/<dataset>/<model_name>"

    For example:

    tts --model_info_by_name tts_models/tr/common-voice/glow-tts
    tts --model_info_by_name vocoder_models/en/ljspeech/hifigan_v2

Single speaker models

  • Run TTS with the default model (tts_models/en/ljspeech/tacotron2-DDC):

    tts --text "Text for TTS" --out_path output/path/speech.wav
  • Run TTS and pipe out the generated TTS wav file data:

    tts --text "Text for TTS" --pipe_out --out_path output/path/speech.wav | aplay
  • Run a TTS model with its default vocoder model:

    tts --text "Text for TTS" \
        --model_name "<model_type>/<language>/<dataset>/<model_name>" \
        --out_path output/path/speech.wav

    For example:

    tts --text "Text for TTS" \
        --model_name "tts_models/en/ljspeech/glow-tts" \
        --out_path output/path/speech.wav
  • Run with specific TTS and vocoder models from the list. Note that not every vocoder is compatible with every TTS model.

    tts --text "Text for TTS" \
        --model_name "<model_type>/<language>/<dataset>/<model_name>" \
        --vocoder_name "<model_type>/<language>/<dataset>/<model_name>" \
        --out_path output/path/speech.wav

    For example:

    tts --text "Text for TTS" \
        --model_name "tts_models/en/ljspeech/glow-tts" \
        --vocoder_name "vocoder_models/en/ljspeech/univnet" \
        --out_path output/path/speech.wav
  • Run your own TTS model (using Griffin-Lim Vocoder):

    tts --text "Text for TTS" \
        --model_path path/to/model.pth \
        --config_path path/to/config.json \
        --out_path output/path/speech.wav
  • Run your own TTS and Vocoder models:

    tts --text "Text for TTS" \
        --model_path path/to/model.pth \
        --config_path path/to/config.json \
        --out_path output/path/speech.wav \
        --vocoder_path path/to/vocoder.pth \
        --vocoder_config_path path/to/vocoder_config.json

Multi-speaker models

  • List the available speakers and choose a <speaker_id> among them:

    tts --model_name "<language>/<dataset>/<model_name>"  --list_speaker_idxs
  • Run the multi-speaker TTS model with the target speaker ID:

    tts --text "Text for TTS." --out_path output/path/speech.wav \
        --model_name "<language>/<dataset>/<model_name>"  --speaker_idx <speaker_id>
  • Run your own multi-speaker TTS model:

    tts --text "Text for TTS" --out_path output/path/speech.wav \
        --model_path path/to/model.pth --config_path path/to/config.json \
        --speakers_file_path path/to/speaker.json --speaker_idx <speaker_id>

Voice conversion models

tts --out_path output/path/speech.wav --model_name "<language>/<dataset>/<model_name>" \
    --source_wav <path/to/speaker/wav> --target_wav <path/to/reference/wav>
