🔥 DiffusionPen: Towards Controlling the Style of Handwritten Text Generation

Official PyTorch implementation of "DiffusionPen: Towards Controlling the Style of Handwritten Text Generation" (ECCV 2024).

ECCV Paper | ArXiv | Poster | Hugging Face

📢 Introduction

  • We introduce DiffusionPen, a few-shot diffusion model for generating stylized handwritten text. From just a few reference samples (as few as five), it learns a writer’s unique handwriting style and generates new text in that style.
  • DiffusionPen captures both seen and unseen handwriting styles from only a few examples. This is achieved through a style extraction module that combines metric learning and classification, giving greater flexibility in representing diverse writing styles (a conceptual sketch follows this list).
  • We evaluated DiffusionPen on the IAM and GNHK handwriting datasets (GNHK qualitatively only), demonstrating its ability to generate diverse and realistic handwritten text. The generated data closely matches the real handwriting distribution, and using it as additional training data improves Handwritten Text Recognition (HTR) performance.
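
To make the style extraction idea concrete, here is a conceptual sketch of a two-objective style encoder: a feature backbone whose embedding feeds both a metric-learning loss and a writer-classification head. The backbone choice, dimensions, and class names are illustrative assumptions, not the module implemented in this repository.

```python
# Conceptual sketch of a style encoder trained with classification + metric learning.
# Backbone, head sizes, and loss pairing are assumptions, not this repository's code.
import torch.nn as nn
from torchvision import models

class StyleEncoderSketch(nn.Module):
    def __init__(self, num_writers: int, embed_dim: int = 256):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()                 # keep the 512-d pooled features
        self.backbone = backbone
        self.embed = nn.Linear(512, embed_dim)      # embedding used for metric learning
        self.classify = nn.Linear(embed_dim, num_writers)  # writer-classification head

    def forward(self, x):
        emb = self.embed(self.backbone(x))
        return emb, self.classify(emb)

# Training would combine, e.g., cross-entropy on writer IDs with a triplet loss on embeddings.
metric_loss = nn.TripletMarginLoss(margin=0.2)
class_loss = nn.CrossEntropyLoss()
```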

Overview of the proposed DiffusionPen

🚀 Download Dataset & Models from Hugging Face 🤗

You can download the pre-processed dataset and model weights from HF here: https://huggingface.co/konnik/DiffusionPen

Place the folders 📁saved_iam_data, 📁style_models, and 📁diffusionpen_iam_model_path in the main code directory.

For the VAE encoder-decoder and the DDIM scheduler, we use stable-diffusion-v1-5.
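
If you prefer to fetch these assets programmatically, the sketch below uses `huggingface_hub` and `diffusers`; the DiffusionPen repo ID comes from the link above, while the stable-diffusion-v1-5 hub ID and the use of `diffusers` loaders here are assumptions you may need to adapt to the repository's own loading code.

```python
# Sketch: download the DiffusionPen assets and load the SD-1.5 VAE / DDIM scheduler.
# Requires `pip install huggingface_hub diffusers`; hub IDs below may need adjusting.
from huggingface_hub import snapshot_download
from diffusers import AutoencoderKL, DDIMScheduler

# Pulls saved_iam_data, style_models, and diffusionpen_iam_model_path into this directory.
snapshot_download(repo_id="konnik/DiffusionPen", local_dir=".")

# VAE encoder-decoder and DDIM scheduler from stable-diffusion-v1-5
# (hub ID is an assumption; adjust it or point --stable_dif_path at a local copy).
sd_repo = "stable-diffusion-v1-5/stable-diffusion-v1-5"
vae = AutoencoderKL.from_pretrained(sd_repo, subfolder="vae")
scheduler = DDIMScheduler.from_pretrained(sd_repo, subfolder="scheduler")
```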

🧪 Sampling using DiffusionPen

For single image sampling, run:

```bash
python train.py --save_path ./diffusionpen_iam_model_path --style_path ./style_models/iam_style_diffusionpen.pth --train_mode sampling --sampling_mode single_sampling
```

For paragraph sampling, run:

```bash
python train.py --save_path ./diffusionpen_iam_model_path --style_path ./style_models/iam_style_diffusionpen.pth --train_mode sampling --sampling_mode paragraph
```
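
To queue several sampling runs without retyping the command, a small wrapper like the one below can help. It only reuses the flags shown above; how `train.py` consumes them is unchanged.

```python
# Convenience sketch: run the documented sampling command for both modes.
# Uses only the CLI flags shown above; nothing else is assumed about train.py.
import subprocess

BASE_CMD = [
    "python", "train.py",
    "--save_path", "./diffusionpen_iam_model_path",
    "--style_path", "./style_models/iam_style_diffusionpen.pth",
    "--train_mode", "sampling",
]

for mode in ("single_sampling", "paragraph"):
    subprocess.run(BASE_CMD + ["--sampling_mode", mode], check=True)
```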

We also provide the IAM training and validation set images generated with DiffusionPen at the following link:
Download IAM Dataset Generated with DiffusionPen (the test set will be uploaded soon!)

🏋️‍♂️ Train with Your Own Data

If you'd like to train DiffusionPen on your own data, simply adapt the data loader to your dataset (a rough sketch follows the two steps below) and follow these two steps:

  1. Train the Style Encoder:

```bash
python style_encoder_train.py
```

  2. Train DiffusionPen:

```bash
python train.py --epochs 1000 --model_name diffusionpen --save_path /new/path/to/save/models --style_path /new/path/to/style/model.pth --stable_dif_path ./stable-diffusion-v1-5
```
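
As a starting point for that data-loader adaptation, here is a rough, hypothetical sketch of what a custom dataset could yield per sample: a word image, its transcription, and a writer/style label. All names, the image size, and the exact return contract are assumptions; align them with the loaders actually used by `train.py` and `style_encoder_train.py`.

```python
# Hypothetical dataset adapter for your own handwriting data.
# Field names, image size, and the (image, transcription, writer_id) contract
# are assumptions; match them to the loaders used in this repository.
from dataclasses import dataclass
from PIL import Image
import torch
from torch.utils.data import Dataset
from torchvision import transforms

@dataclass
class WordSample:
    image_path: str      # path to a cropped word image
    transcription: str   # ground-truth text of the word
    writer_id: int       # integer writer/style label

class MyHandwritingDataset(Dataset):
    def __init__(self, samples, image_size=(64, 256)):
        self.samples = samples
        self.transform = transforms.Compose([
            transforms.Grayscale(num_output_channels=3),
            transforms.Resize(image_size),
            transforms.ToTensor(),
            transforms.Normalize([0.5] * 3, [0.5] * 3),
        ])

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        s = self.samples[idx]
        image = self.transform(Image.open(s.image_path))
        return image, s.transcription, torch.tensor(s.writer_id)
```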

📝 Evaluation

We compare DiffusionPen with several state-of-the-art generative models, including GANwriting, SmartPatch, VATr, and WordStylist. The Handwritten Text Recognition (HTR) system used for evaluation is based on Best practices for HTR.
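
For context, HTR systems are usually scored with character and word error rates. The snippet below is a generic character error rate (edit distance divided by reference length), shown only as an illustration; it is not the evaluation code used in this repository.

```python
# Generic character error rate (CER): Levenshtein distance / reference length.
# Illustration only; not this repository's evaluation script.
def cer(reference: str, hypothesis: str) -> float:
    m, n = len(reference), len(hypothesis)
    dist = list(range(n + 1))                     # edit distances for the previous row
    for i in range(1, m + 1):
        prev, dist[0] = dist[0], i
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            prev, dist[j] = dist[j], min(dist[j] + 1,      # deletion
                                         dist[j - 1] + 1,  # insertion
                                         prev + cost)      # substitution
    return dist[n] / max(m, 1)

print(cer("handwriting", "handwritting"))  # one extra character -> ~0.09
```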


📄 Citation

If you find our work useful for your research, please cite:

@article{nikolaidou2024diffusionpen,
  title={DiffusionPen: Towards Controlling the Style of Handwritten Text Generation},
  author={Nikolaidou, Konstantina and Retsinas, George and Sfikas, Giorgos and Liwicki, Marcus},
  journal={arXiv preprint arXiv:2409.06065},
  year={2024}
}
