
ASH: Animatable Gaussian Splats for Efficient and Photoreal Human Rendering (CVPR 2024)

Haokai Pang† · Heming Zhu† · Adam Kortylewski · Christian Theobalt · Marc Habermann‡

† Joint first authors. ‡ Corresponding author.


News

2024-6-14 The Training Code and the Data Processing Code are available! 🎆🎆🎆

2024-3-29 The initial release, i.e., the Demo Code, is available. The Training Code is on the way. For more details, please check out the project page 😃.


Installation

Clone the repo

git clone [email protected]:kv2000/ASH.git --recursive

cd ./submodules/diff-gaussian-rasterization/
git submodule update --init --recursive

Install the dependencies

The code is tested on Python 3.9, PyTorch 1.12.1, and CUDA 11.3.
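If you want a quick standalone environment matching these tested versions before setting up DeepCharacters Pytorch below, here is a minimal sketch (the environment name and exact wheel tags are assumptions, not part of this repo):

# hypothetical environment matching the tested versions
conda create -n ash python=3.9
conda activate ash
pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 \
    --extra-index-url https://download.pytorch.org/whl/cu113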

Setup DeepCharacters Pytorch

First, install the underlying clothed human body model, 🎆DeepCharacters Pytorch🎆, which also provides the dependencies needed for this repo.

Setup 3DGS

Then, set up the submodules for 3D Gaussian Splatting.

# the env with DeepCharacters Pytorch
conda activate mpiiddc 

# 3DGS go
cd ./submodules/diff-gaussian-rasterization/
python setup.py install

cd ../simple-knn/
python setup.py install
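As a quick sanity check that both extensions built and import cleanly, you can try the following (the module names follow the 3DGS submodules' conventional packaging; treat them as assumptions):

# hypothetical import check for the freshly built extensions
python -c "import torch; import diff_gaussian_rasterization; import simple_knn._C; print('3DGS extensions OK')"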

Setup the metadata and checkpoints

You may find the metadata and the checkpoints at this link.

The extracted metadata and checkpoints follow the folder structure below:

# for the checkpoints
checkpoints
|--- Subject0001
    |---deformable_character_checkpoint.pth # character checkpoints
    |---gaussian_checkpoints.tar            # gaussian checkpoints

# for the meta data
meta_data
|--- Subject0001
    |---skeletoolToGTPose                   # training poses
    |   |--- ... 
    |
    |---skeletoolToGTPoseTest               # Testing poses
    |   |--- ...
    |
    |---skeletoolToGTPoseRetarget           # Retargeted poses from another subject
    |   |--- ...
    |
    |--- ...                                # Others
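As a hedged sanity check, the checkpoint files should load with standard PyTorch serialization (this assumes they were written with torch.save; the contents and key names are not guaranteed):

# hypothetical checkpoint inspection, assuming standard torch.save format
python -c "import torch; ckpt = torch.load('checkpoints/Subject0001/deformable_character_checkpoint.pth', map_location='cpu'); print(type(ckpt))"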

Run the demo

Run the following and the results will be stored in ./dump_results/ by default.

bash run_inference.sh

Train your model

Step 1. Data Processing

  • Download the compressed raw data from this link into ./raw_data/.
  • Decompress the data with tar -xzvf Subject0022.tar.gz.
  • Run the (slurm) bash script ./process_video/bash_get_image.sh, which extracts the masked images from the raw RGB videos and the foreground mask videos. The provided script supports parallelizing the image extraction with slurm job arrays; see the sketch after this list.
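For orientation only, here is a minimal sketch of per-camera frame extraction as a slurm job array (the paths, camera naming, and array range are assumptions; ./process_video/bash_get_image.sh is the authoritative script):

#!/bin/bash
#SBATCH --array=0-9                     # one task per camera; range is an assumption
# Hypothetical: each array task decodes one RGB video into per-frame images.
CAM=$(printf "%03d" "${SLURM_ARRAY_TASK_ID}")
mkdir -p "./raw_data/Subject0022/images/cam${CAM}"
ffmpeg -i "./raw_data/Subject0022/rgb/cam${CAM}.mp4" \
    "./raw_data/Subject0022/images/cam${CAM}/%06d.png"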

Step 2. Start Training

Run the following and the results will be stored in ./dump_results/ by default.

bash run_train.sh

The folder structure for the training is as follows:

# for the training results
dump_results
|--- Subject0022
    |---cached_files                                # Precomputed character-related data
    |   |--- cached_fin_rotation_quad.pkl
    |   |--- cached_fin_translation_quad.pkl
    |   |--- cached_joints.pkl
    |   |--- cached_ret_canonical_delta.pkl
    |   |--- cached_ret_posed_delta.pkl
    |   |--- cached_temp_vert_normal.pkl
    |
    |---checkpoints                               
    |   |--- ...
    |
    |---exp_stats                                   # Tensorboard Logs
    |   |--- ...
    |
    |---validations_fine                            # Validation images every X frames

Note that the first time the training script runs, it precomputes and stores the character-related data in ./dump_results/[Subject Name]/cached_files/, which greatly speeds up training and reduces its GPU usage.
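To monitor training, you can point TensorBoard at the exp_stats folder listed above (the subject name is taken from the example; adjust it to yours):

# watch the training logs while training runs
tensorboard --logdir ./dump_results/Subject0022/exp_stats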

Step 3. Train with your own data

Please check out this issue for some hints on training with your own data; discussion is welcome :).


Todo list

  • Data processing for Training
  • Training Code

Citation

If you find our work useful for your research, please, please, please consider citing our paper!

@InProceedings{Pang_2024_CVPR,
    author    = {Pang, Haokai and Zhu, Heming and Kortylewski, Adam and Theobalt, Christian and Habermann, Marc},
    title     = {ASH: Animatable Gaussian Splats for Efficient and Photoreal Human Rendering},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {1165-1175}
}

Contact

For questions or clarifications, feel free to get in touch with:
Heming Zhu: [email protected]
Marc Habermann: [email protected]


License

DeepCharacters Pytorch is under the CC-BY-NC license. The license applies to the pre-trained models and the metadata as well.


Acknowledgements

Christian Theobalt was supported by ERC Consolidator Grant 4DReply (No. 770784). Adam Kortylewski was supported by the German Science Foundation (No. 468670075). This project was also supported by the Saarbrücken Research Center for Visual Computing, Interaction, and AI. We would also like to thank Andrea Boscolo Camiletto and Muhammad Hamza Mughal for their efforts and discussion on motion retargeting.

Below are some resources that we benefit from (we keep updating this list):
