⚡️ Tracking Framework for GAGAvatar ⚡️
GAGAvatar Track is a monocular face tracker built on FLAME. It provides FLAME parameters (including eyeball pose) and camera parameters, along with the bounding box and landmarks used during optimization.
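For orientation, a per-frame tracking result could be organized along these lines. This is only an illustrative sketch: the field names and dimensions below are assumptions based on the standard FLAME parameterization, not the tracker's actual output format.

```python
# Hypothetical per-frame record (names/sizes are assumptions, not the real schema).
frame_result = {
    "flame_shape":  [0.0] * 300,        # identity shape coefficients
    "flame_expr":   [0.0] * 100,        # expression coefficients
    "flame_pose":   [0.0] * 6,          # global + jaw rotation (axis-angle)
    "eyeball_pose": [0.0] * 6,          # left/right eyeball rotation
    "camera":       [1.0, 0.0, 0.0],    # scale + 2D translation (weak-perspective assumption)
    "bbox":         [0, 0, 512, 512],   # face bounding box (x0, y0, x1, y1)
    "landmarks":    [[0.0, 0.0]] * 68,  # 2D facial landmarks used during optimization
}
```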
This environment is a sub-environment of GAGAvatar. You can skip this step if you have already built GAGAvatar.
```shell
conda env create -f environment.yml
conda activate GAGAvatar_track
```

Prepare resources with:

```shell
bash ./build_resources.sh
```
Resources Link
The models and resources are available at https://huggingface.co/xg-chu/GAGAvatar_track.
Note that tracking the first frame takes longer than subsequent frames.

```shell
# track a video
python track_video.py -v ./demos/obama.mp4
# track an image
python track_image.py -i ./demos/monroe.jpg
# track an LMDB database
python track_lmdb.py -l ./demos/vfhq_demo
```
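To process many clips, the single-video command can be wrapped in a loop. This is only a sketch, assuming the same `-v` flag and directory layout as the demo above:

```shell
# Batch sketch: run the tracker on every .mp4 under ./demos.
# Assumes the GAGAvatar_track environment is already activated.
for video in ./demos/*.mp4; do
    echo "tracking ${video}"
    python track_video.py -v "${video}"
done
```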
If you find our work useful in your research, please consider citing:

```bibtex
@inproceedings{chu2024gagavatar,
  title={Generalizable and Animatable Gaussian Head Avatar},
  author={Xuangeng Chu and Tatsuya Harada},
  booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
  year={2024},
  url={https://openreview.net/forum?id=gVM2AZ5xA6}
}
```
Parts of our work are built on FLAME, StyleMatte, EMICA, and VGGHead. The GAGAvatar logo was designed by Caihong Ning. We thank these authors for sharing their wonderful work and code.
- FLAME: https://flame.is.tue.mpg.de
- StyleMatte: https://github.com/chroneus/stylematte
- EMICA: https://github.com/radekd91/inferno
- VGGHead: https://github.com/KupynOrest/head_detector