The Bark API is a web API that generates waveform prompts from input text. It is built with FastAPI, a modern, high-performance web framework for building APIs.
Requirements:
- Docker
- Docker Compose
To run the application, follow these steps:
- Clone the repository to your local machine.
- In the project directory, run the `docker-compose up --build -d` command.
- Wait for the containers to start. You can monitor the logs with the `docker-compose logs -f` command.
- Open your web browser and navigate to http://localhost:8000.
- That's it! The application should now be running.
To stop the application, run the `docker-compose down -v` command in the project directory. This will stop and remove all the containers, networks, and volumes created by the `docker-compose up` command.
The Bark API has two endpoints:
- One endpoint returns a welcome message.
- The other endpoint generates a waveform prompt from the input text.
Request body: the request must contain a JSON object with two keys:
- `text` (string): the text to be used as input for prompt generation.
- `filename` (string, optional): the filename used to save the generated prompt. If not provided, a default filename (`dummy.npz`) is used.

Response: if the request is successful, the endpoint returns a JSON object with a message indicating that prompt generation has been started.
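As an illustrative sketch, a call to the generation endpoint might look like the following. The `/generate` path is an assumption for illustration (this README does not name the route), so adjust it to the actual FastAPI route definition.

```python
import requests

# Hypothetical request: the /generate path is assumed, not confirmed by this README.
response = requests.post(
    "http://localhost:8000/generate",
    json={
        "text": "Hello, my name is Suno.",
        "filename": "hello.npz",  # optional; defaults to dummy.npz
    },
)
print(response.json())  # e.g. a message that prompt generation has started
```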
Bark is a transformer-based text-to-audio model created by Suno. Bark can generate highly realistic, multilingual speech as well as other audio - including music, background noise and simple sound effects. The model can also produce nonverbal communications like laughing, sighing and crying. To support the research community, we are providing access to pretrained model checkpoints ready for inference.
```python
from bark import SAMPLE_RATE, generate_audio, preload_models
from IPython.display import Audio

# download and load all models
preload_models()

# generate audio from text
text_prompt = """
Hello, my name is Suno. And, uh - and I like pizza. [laughs]
But I also have other interests such as playing tic tac toe.
"""
audio_array = generate_audio(text_prompt)

# play audio in notebook
Audio(audio_array, rate=SAMPLE_RATE)
```
Audio sample: pizza.webm
To save `audio_array` as a WAV file:

```python
from scipy.io.wavfile import write as write_wav

write_wav("/path/to/audio.wav", SAMPLE_RATE, audio_array)
```
Bark supports various languages out-of-the-box and automatically determines language from input text. When prompted with code-switched text, Bark will attempt to employ the native accent for the respective languages. English quality is best for the time being, and we expect other languages to further improve with scaling.
text_prompt = """
Buenos dรญas Miguel. Tu colega piensa que tu alemรกn es extremadamente malo.
But I suppose your english isn't terrible.
"""
audio_array = generate_audio(text_prompt)
Audio sample: miguel.webm
Bark can generate all types of audio, and, in principle, doesn't see a difference between speech and music. Sometimes Bark chooses to generate text as music, but you can help it out by adding music notes around your lyrics.
text_prompt = """
โช In the jungle, the mighty jungle, the lion barks tonight โช
"""
audio_array = generate_audio(text_prompt)
Audio sample: lion.webm
Bark has the capability to fully clone voices, including tone, pitch, emotion and prosody. The model also attempts to preserve music, ambient noise, etc. from input audio. However, to mitigate misuse of this technology, we restrict the audio history prompts to a limited set of fully synthetic, Suno-provided options for each language. Specify one following the pattern `{lang_code}_speaker_{0-9}`.
text_prompt = """
I have a silky smooth voice, and today I will tell you about
the exercise regimen of the common sloth.
"""
audio_array = generate_audio(text_prompt, history_prompt="en_speaker_1")
Audio sample: sloth.webm
Note: since Bark recognizes languages automatically from input text, it is possible to use, for example, a German history prompt with English text. This usually leads to English audio with a German accent.
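For instance, following the `{lang_code}_speaker_{0-9}` pattern above, a German preset can be paired with English text. This is a sketch: `de_speaker_1` is just one of the synthetic presets, and outputs vary between generations.

```python
# English text with a German history prompt: typically English speech with a German accent.
audio_array = generate_audio(
    "Let me tell you about the sleeping habits of the common sloth.",
    history_prompt="de_speaker_1",
)
```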
You can provide certain speaker prompts such as NARRATOR, MAN, WOMAN, etc. Please note that these are not always respected, especially if a conflicting audio history prompt is given.
text_prompt = """
WOMAN: I would like an oatmilk latte please.
MAN: Wow, that's expensive!
"""
audio_array = generate_audio(text_prompt)
Audio sample: latte.webm
```
pip install git+https://github.com/suno-ai/bark.git
```

or

```
git clone https://github.com/suno-ai/bark
cd bark && pip install .
```
Bark has been tested and works on both CPU and GPU (PyTorch 2.0+, CUDA 11.7 and CUDA 12.0).
Running Bark requires executing transformer models with more than 100M parameters. On modern GPUs and PyTorch nightly, Bark can generate audio in roughly real time. On older GPUs, the default Colab runtime, or CPU, inference may be 10-100x slower.
If you don't have new hardware available or if you want to play with bigger versions of our models, you can also sign up for early access to our model playground here.
Similar to Vall-E and some other amazing work in the field, Bark uses GPT-style models to generate audio from scratch. Different from Vall-E, the initial text prompt is embedded into high-level semantic tokens without the use of phonemes. It can therefore generalize to arbitrary instructions beyond speech that occur in the training data, such as music lyrics, sound effects or other non-speech sounds. A subsequent second model is used to convert the generated semantic tokens into audio codec tokens to generate the full waveform. To enable the community to use Bark via public code we used the fantastic EnCodec codec from Facebook to act as an audio representation.
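For intuition only, the flow described above can be sketched with stub functions. None of this is Bark's actual code: the functions, token sizes, and placeholder signal below are invented stand-ins for the two GPT-style stages and the EnCodec decoder.

```python
import numpy as np

def text_to_semantic(text: str) -> np.ndarray:
    """Stand-in for stage 1: embed raw text (no phonemes) into semantic tokens."""
    rng = np.random.default_rng(len(text))      # dummy determinism, not a real model
    return rng.integers(0, 10_000, size=256)    # placeholder semantic token ids

def semantic_to_codec(semantic_tokens: np.ndarray) -> np.ndarray:
    """Stand-in for stage 2: map semantic tokens to audio codec tokens."""
    return semantic_tokens % 1024               # placeholder codebook ids

def codec_to_waveform(codec_tokens: np.ndarray, sample_rate: int = 24_000) -> np.ndarray:
    """Stand-in for the neural codec decoder that produces the final waveform."""
    t = np.arange(codec_tokens.size) / sample_rate
    return np.sin(2 * np.pi * 220.0 * t)        # placeholder audio, not real decoding

waveform = codec_to_waveform(semantic_to_codec(text_to_semantic("Hello!")))
```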
Below is a list of some known non-speech sounds, but we are finding more every day. Please let us know if you find patterns that work particularly well on Discord! A sketch combining several of these cues follows the list.
- `[laughter]`
- `[laughs]`
- `[sighs]`
- `[music]`
- `[gasps]`
- `[clears throat]`
- `—` or `...` for hesitations
- `♪` for song lyrics
- capitalization for emphasis of a word
- `MAN/WOMAN:` for bias towards speaker
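For example, several of these cues can be combined in one prompt (a sketch; exact output varies between generations):

```python
text_prompt = """
WOMAN: [clears throat] So, um... today I REALLY want to sing for you.
♪ In the jungle, the mighty jungle ♪ [laughs]
"""
audio_array = generate_audio(text_prompt)
```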
Supported Languages
| Language | Status |
| --- | --- |
| English (en) | ✅ |
| German (de) | ✅ |
| Spanish (es) | ✅ |
| French (fr) | ✅ |
| Hindi (hi) | ✅ |
| Italian (it) | ✅ |
| Japanese (ja) | ✅ |
| Korean (ko) | ✅ |
| Polish (pl) | ✅ |
| Portuguese (pt) | ✅ |
| Russian (ru) | ✅ |
| Turkish (tr) | ✅ |
| Chinese, simplified (zh) | ✅ |
| Arabic | Coming soon! |
| Bengali | Coming soon! |
| Telugu | Coming soon! |
- nanoGPT for a dead-simple and blazing fast implementation of GPT-style models
- EnCodec for a state-of-the-art implementation of a fantastic audio codec
- AudioLM for very related training and inference code
- Vall-E, AudioLM and many other ground-breaking papers that enabled the development of Bark
Bark is licensed under a non-commercial license: CC-BY 4.0 NC. The Suno models themselves may be used commercially. However, this version of Bark uses EnCodec as a neural codec backend, which is licensed under a non-commercial license.
Please contact us at [email protected] if you need access to a larger version of the model and/or a version of the model you can use commercially.
We're developing a playground for our models, including Bark. If you are interested, you can sign up for early access here.
Use the `XDG_CACHE_HOME` environment variable to override where models are downloaded and cached (otherwise it defaults to a subdirectory of `~/.cache`).
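For example, the cache location can be overridden before Bark is imported (a sketch; `/data/model-cache` is an arbitrary path, and the variable is typically read when the library loads, so set it first):

```python
import os

# Hypothetical path: point Bark's model cache at a custom directory
# before importing bark, since the cache dir is resolved at import time.
os.environ["XDG_CACHE_HOME"] = "/data/model-cache"

from bark import preload_models

preload_models()  # models now download to a subdirectory of /data/model-cache
```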
Bark is a GPT-style model. As such, it may take some creative liberties in its generations, resulting in higher-variance model outputs than traditional text-to-speech approaches.