This repository contains training, generation and utility scripts for Stable Diffusion.
The Change History has been moved to the bottom of the page.
For easier use (GUI and PowerShell scripts, etc.), please visit the repository maintained by bmaltais. Thanks to @bmaltais!
This repository contains the scripts for:
- DreamBooth training, including U-Net and Text Encoder
- Fine-tuning (native training), including U-Net and Text Encoder
- LoRA training
- Textual Inversion training
- Image generation
- Model conversion (supports 1.x and 2.x, Stable Diffusion ckpt/safetensors and Diffusers)
Stable Diffusion web UI now seems to support LoRA trained by sd-scripts (SD 1.x based only). Thank you for the great work!!!
The requirements files do not include PyTorch, because the version to install depends on your environment. Please install PyTorch first (see the installation guide below).
The scripts are tested with PyTorch 1.12.1 and 1.13.0, Diffusers 0.10.2.
All documents are in Japanese currently.
- Training guide - common: data preparation, options, etc.
- DreamBooth training guide
- Step-by-step fine-tuning guide
- Training LoRA
- Training Textual Inversion
- note.com Image generation
- note.com Model conversion
Python 3.10.6 and Git:
- Python 3.10.6: https://www.python.org/ftp/python/3.10.6/python-3.10.6-amd64.exe
- git: https://git-scm.com/download/win
Give unrestricted script access to PowerShell so venv can work:

- Open an administrator PowerShell window
- Type `Set-ExecutionPolicy Unrestricted` and answer `A`
- Close the admin PowerShell window
Open a regular PowerShell terminal and type the following inside:
```powershell
git clone https://github.com/kohya-ss/sd-scripts.git
cd sd-scripts

python -m venv venv
.\venv\Scripts\activate

pip install torch==1.12.1+cu116 torchvision==0.13.1+cu116 --extra-index-url https://download.pytorch.org/whl/cu116
pip install --upgrade -r requirements.txt
pip install -U -I --no-deps https://github.com/C43H66N12O12S2/stable-diffusion-webui/releases/download/f/xformers-0.0.14.dev0-cp310-cp310-win_amd64.whl

cp .\bitsandbytes_windows\*.dll .\venv\Lib\site-packages\bitsandbytes\
cp .\bitsandbytes_windows\cextension.py .\venv\Lib\site-packages\bitsandbytes\cextension.py
cp .\bitsandbytes_windows\main.py .\venv\Lib\site-packages\bitsandbytes\cuda_setup\main.py

accelerate config
```
update: `python -m venv venv` seems to be safer than `python -m venv --system-site-packages venv` (some users have packages in their global Python).
Answers to accelerate config:
- This machine
- No distributed training
- NO
- NO
- NO
- all
- fp16
note: Some users report that `ValueError: fp16 mixed precision requires a GPU` occurs during training. In this case, answer `0` to the 6th question: `What GPU(s) (by id) should be used for training on this machine as a comma-separated list? [all]:` (The single GPU with id `0` will be used.)
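Once `accelerate config` is done, training is launched through `accelerate launch`. As a rough sketch (assuming a LoRA run with `train_network.py`; the paths are placeholders and the full set of options is covered in the training guides):

```powershell
# Sketch only: replace the placeholder paths with your own.
accelerate launch --num_cpu_threads_per_process 1 train_network.py `
  --pretrained_model_name_or_path=<path to base model> `
  --train_data_dir=<path to training images> `
  --output_dir=<output folder> `
  --network_module=networks.lora
```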
Other versions of PyTorch and xformers seem to have problems with training. If there is no other reason, please install the specified versions.
When a new release comes out you can upgrade your repo with the following command:
```powershell
cd sd-scripts
git pull
.\venv\Scripts\activate
pip install --use-pep517 --upgrade -r requirements.txt
```
Once the commands have completed successfully you should be ready to use the new version.
The implementation of LoRA is based on cloneofsimo's repo. Thank you for the great work!
The LoRA expansion to Conv2d 3x3 was initially released by cloneofsimo, and its effectiveness was demonstrated in LoCon by KohakuBlueleaf. Thank you so much, KohakuBlueleaf!
The majority of the scripts are licensed under ASL 2.0 (including code from Diffusers, cloneofsimo's repo, and LoCon); however, portions of the project are available under separate license terms:
Memory Efficient Attention Pytorch: MIT
bitsandbytes: MIT
BLIP: BSD-3-Clause
- 19 Mar. 2023:
  - Add a function to load the training config from a `.toml` file to each training script. Thanks to Linaqruf for this great contribution!
    - Specify the `.toml` file with `--config_file`. The `.toml` file has `key=value` entries; keys are the same as the command line options. See #241 for details.
    - All sub-sections are combined into a single dictionary (the section names are ignored).
    - Omitted arguments take the default values of the command line arguments.
    - Command line args override the arguments in the `.toml` file.
    - With the `--output_config` option, you can output the current command line options to the `.toml` file specified with `--config_file`. Please use it as a template; a sketch of such a file is shown below.
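    As an illustration, a config file might look like the following (a sketch only: the keys mirror command line options of the training scripts, and all values here are made up):

    ```toml
    # Section names are ignored; all keys are merged into one flat dictionary.
    [model]
    pretrained_model_name_or_path = "model.safetensors"

    [training]
    learning_rate = 1e-4
    max_train_steps = 1600
    mixed_precision = "fp16"
    ```

    Pass it with `--config_file config.toml`; any options given on the command line override the values in the file.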
  - Add `--lr_scheduler_type` and `--lr_scheduler_args` arguments for custom LR schedulers to each training script. Thanks to Isotr0py! #271
    - Same format as the optimizer arguments; a sketch follows below.
  - Add sample image generation with prompt weighting and no length limit. Thanks to mio2333! #288
    - `( )`, `(xxxx:1.2)` and `[ ]` can be used.
  - Fix an exception when training a model in Diffusers format with `train_network.py`. Thanks to orenwang! #290
- 11 Mar. 2023:
  - Fix `svd_merge_lora.py` causing an error about the device.
- Sample image generation: a prompt file might look like this, for example:

  ```
  # prompt 1
  masterpiece, best quality, 1girl, in white shirts, upper body, looking at viewer, simple background --n low quality, worst quality, bad anatomy,bad composition, poor, low effort --w 768 --h 768 --d 1 --l 7.5 --s 28

  # prompt 2
  masterpiece, best quality, 1boy, in business suit, standing at street, looking back --n low quality, worst quality, bad anatomy,bad composition, poor, low effort --w 576 --h 832 --d 2 --l 5.5 --s 40
  ```

  Lines beginning with `#` are comments. You can specify options for the generated image with options like `--n` after the prompt. The following can be used:

  - `--n` Negative prompt up to the next option.
  - `--w` Specifies the width of the generated image.
  - `--h` Specifies the height of the generated image.
  - `--d` Specifies the seed of the generated image.
  - `--l` Specifies the CFG scale of the generated image.
  - `--s` Specifies the number of steps in the generation.

  Prompt weighting such as `( )` and `[ ]` is not working. A sketch of enabling sample generation during training follows below.
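  A minimal sketch, assuming the `--sample_prompts` and `--sample_every_n_steps` options of the training scripts (the file name and step count here are made up):

  ```powershell
  # Illustrative: generate sample images every 100 steps using the prompt file above.
  accelerate launch train_network.py --config_file config.toml `
    --sample_prompts prompts.txt `
    --sample_every_n_steps 100
  ```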
Please read Releases for recent updates.