Fannovel16/sd-scripts

This repository contains training, generation and utility scripts for Stable Diffusion.

Updates

  • 19 Jan. 2023

    • Fixed a bug where some LoRA modules were not trained when gradient_checkpointing was enabled.
    • Added the --save_last_n_epochs_state option. You can specify how many state folders to keep, separately from how many models to keep. Thanks to shirayu!
    • Fixed a bug in train_db.py where Text Encoder training stopped at max_train_steps even if max_train_epochs was set.
    • Added a script to check LoRA weights. Run it with python networks\check_lora_weights.py <model file>. If some modules were not trained, their value is 0.0, as in the example below (a minimal sketch of this kind of check also follows the update list).
      • lora_te_text_model_encoder_layers_11_* modules are not trained when clip_skip=2, so 0.0 is expected for them.
  • Example output of check_lora_weights.py when the Text Encoder and part of the U-Net were not trained:

number of LoRA-up modules: 264
lora_te_text_model_encoder_layers_0_mlp_fc1.lora_up.weight,0.0
lora_te_text_model_encoder_layers_0_mlp_fc2.lora_up.weight,0.0
lora_te_text_model_encoder_layers_0_self_attn_k_proj.lora_up.weight,0.0
:
lora_unet_down_blocks_2_attentions_1_transformer_blocks_0_ff_net_0_proj.lora_up.weight,0.0
lora_unet_down_blocks_2_attentions_1_transformer_blocks_0_ff_net_2.lora_up.weight,0.0
lora_unet_mid_block_attentions_0_proj_in.lora_up.weight,0.003503334941342473
lora_unet_mid_block_attentions_0_proj_out.lora_up.weight,0.004308608360588551
:
  • Example output when all modules were trained:
number of LoRA-up modules: 264
lora_te_text_model_encoder_layers_0_mlp_fc1.lora_up.weight,0.0028684409335255623
lora_te_text_model_encoder_layers_0_mlp_fc2.lora_up.weight,0.0029794853180646896
lora_te_text_model_encoder_layers_0_self_attn_k_proj.lora_up.weight,0.002507600700482726
lora_te_text_model_encoder_layers_0_self_attn_out_proj.lora_up.weight,0.002639499492943287
:
  • 17 Jan. 2023

    • Important notice: it appears that only some LoRA modules are trained when gradient_checkpointing is enabled. The cause is under investigation; for the time being, please train without gradient_checkpointing. (This issue has since been fixed; see the 19 Jan. 2023 entry above.)
  • 15 Jan. 2023

    • Added the --max_train_epochs and --max_data_loader_n_workers options to each training script.
    • If you specify the number of training epochs with --max_train_epochs, the required number of steps is calculated from it automatically (a rough illustration of this calculation follows the update list).
    • You can set the number of DataLoader workers with --max_data_loader_n_workers (default: 8). A lower value may reduce main-memory usage and the wait time between epochs, but may slow down data loading (and therefore training).
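
For reference, the step count derived from --max_train_epochs is essentially the epoch count times the number of optimizer steps per epoch. The exact expression in the training scripts also accounts for details such as gradient accumulation and the number of processes; the Python sketch below uses hypothetical values only to illustrate the idea.

import math

# Illustrative only: deriving a step budget from an epoch count.
# num_train_images already includes dataset repeats; all names here are hypothetical.
num_train_images = 1500
train_batch_size = 4
gradient_accumulation_steps = 1
max_train_epochs = 10

steps_per_epoch = math.ceil(num_train_images / train_batch_size / gradient_accumulation_steps)
max_train_steps = max_train_epochs * steps_per_epoch
print(max_train_steps)  # 3750 for the values above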
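
As a rough Python sketch of the kind of check networks\check_lora_weights.py performs (not the actual script), the snippet below loads a LoRA file and prints the mean absolute value of every lora_up weight; a value of 0.0 means the corresponding module received no updates.

import sys
import torch
from safetensors.torch import load_file

def main(path: str) -> None:
    # .safetensors files are read with safetensors; .ckpt/.pt files with torch.
    if path.endswith(".safetensors"):
        state_dict = load_file(path)
    else:
        state_dict = torch.load(path, map_location="cpu")

    up_keys = [k for k in state_dict if "lora_up" in k and k.endswith(".weight")]
    print(f"number of LoRA-up modules: {len(up_keys)}")
    for key in up_keys:
        # Mean absolute value of the lora_up weight; 0.0 indicates an untrained module.
        print(f"{key},{state_dict[key].float().abs().mean().item()}")

if __name__ == "__main__":
    main(sys.argv[1])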

Please read release version 0.3.0 for recent updates.

A Japanese version of this README is also available: 日本語版README

For easier use (GUI, PowerShell scripts, etc.), please visit the repository maintained by bmaltais. Thanks to @bmaltais!

This repository contains the scripts for:

  • DreamBooth training, including U-Net and Text Encoder
  • fine-tuning (native training), including U-Net and Text Encoder
  • LoRA training
  • image generation
  • model conversion (supports 1.x and 2.x, Stable Diffusion ckpt/safetensors and Diffusers)

About requirements.txt

This file does not include the requirements for PyTorch, because the required version depends on your environment. Please install PyTorch first (see the installation guide below).

The scripts are tested with PyTorch 1.12.1 and 1.13.0, and Diffusers 0.10.2.

Links to how-to-use documents

All documents are currently in Japanese and CUI (command line) based.

Windows Required Dependencies

Python 3.10.6 and Git are required.

Give unrestricted script access to PowerShell so venv can work:

  • Open an administrator PowerShell window
  • Type Set-ExecutionPolicy Unrestricted and answer A
  • Close the admin PowerShell window

Windows Installation

Open a regular PowerShell terminal and type the following inside:

git clone https://github.com/kohya-ss/sd-scripts.git
cd sd-scripts

python -m venv --system-site-packages venv
.\venv\Scripts\activate

pip install torch==1.12.1+cu116 torchvision==0.13.1+cu116 --extra-index-url https://download.pytorch.org/whl/cu116
pip install --upgrade -r requirements.txt
pip install -U -I --no-deps https://github.com/C43H66N12O12S2/stable-diffusion-webui/releases/download/f/xformers-0.0.14.dev0-cp310-cp310-win_amd64.whl

cp .\bitsandbytes_windows\*.dll .\venv\Lib\site-packages\bitsandbytes\
cp .\bitsandbytes_windows\cextension.py .\venv\Lib\site-packages\bitsandbytes\cextension.py
cp .\bitsandbytes_windows\main.py .\venv\Lib\site-packages\bitsandbytes\cuda_setup\main.py

accelerate config

Answers to accelerate config:

- This machine
- No distributed training
- NO
- NO
- NO
- all
- fp16

Note: some users report that ValueError: fp16 mixed precision requires a GPU occurs during training. In this case, answer 0 to the 6th question: What GPU(s) (by id) should be used for training on this machine as a comma-separated list? [all]:

(Single GPU with id 0 will be used.)
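
If you hit that error, a quick way to confirm whether PyTorch can see a CUDA device at all, independent of accelerate, is the small Python check below (run it inside the activated venv).

import torch

# Prints whether CUDA is available and which GPUs PyTorch can see.
print("CUDA available:", torch.cuda.is_available())
for i in range(torch.cuda.device_count()):
    print(f"GPU {i}: {torch.cuda.get_device_name(i)}")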

Upgrade

When a new release comes out, you can upgrade your repo with the following commands:

cd sd-scripts
git pull
.\venv\Scripts\activate
pip install --upgrade -r requirements.txt

Once the commands have completed successfully you should be ready to use the new version.

Credits

The implementation of LoRA is based on cloneofsimo's repo. Thank you for the great work!

License

The majority of the scripts are licensed under ASL 2.0 (including code from Diffusers and cloneofsimo's repository); however, portions of the project are available under separate license terms:

Memory Efficient Attention Pytorch: MIT

bitsandbytes: MIT

BLIP: BSD-3-Clause
