Content-controllable motion infilling

Motion infilling into target content:

The reference model (Convolutional Autoencoders for Human Motion Infilling, 3DV 2020) generates only one deterministic output per input, even though many plausible motions exist between the given keyframes. This project therefore adds a conditional input so that a single masked input can yield a variety of infills.
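A minimal sketch of the idea, not this repository's actual network: a convolutional autoencoder whose decoder also receives a condition code, so the same masked input decodes to different infills. All layer sizes, shapes, and names below are illustrative assumptions.

import torch
import torch.nn as nn

class ConditionalInfiller(nn.Module):
    """Autoencoder whose decoder is conditioned on an extra code vector."""
    def __init__(self, cond_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder input channels include the broadcast condition code.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32 + cond_dim, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, masked_motion, cond):
        z = self.encoder(masked_motion)
        # Broadcast the condition vector over the spatial latent map.
        c = cond[:, :, None, None].expand(-1, -1, z.shape[2], z.shape[3])
        return self.decoder(torch.cat([z, c], dim=1))

x = torch.randn(2, 1, 64, 64)              # masked motion clip (batch, ch, dof, frames)
c1, c2 = torch.randn(2, 8), torch.randn(2, 8)
model = ConditionalInfiller()
# Different condition codes give different infills for the same input:
print(torch.allclose(model(x, c1), model(x, c2)))  # False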


Overall Structure:

Result:

Result (latent interpolation):
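Latent interpolation blends two latent codes and decodes each intermediate code into a motion. A minimal sketch, assuming hypothetical latent codes and a decode step; the function and variable names here are illustrative, not the repository's API.

import numpy as np

def interpolate_latents(z_a, z_b, num_steps=8):
    """Return linear blends of two latent codes, one per step."""
    return [(1.0 - t) * z_a + t * z_b for t in np.linspace(0.0, 1.0, num_steps)]

z_a, z_b = np.random.randn(128), np.random.randn(128)
for z in interpolate_latents(z_a, z_b):
    pass  # decoding z here would yield one intermediate motion per step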


Usage:

Data

  1. Download the motion data from Holden et al.
  2. Pre-process it with the code from Kaufmann et al. (a sketch for inspecting the result follows this list).
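A small sketch for sanity-checking the preprocessed data, assuming the pipeline leaves you with NumPy archives of fixed-length motion windows; the file name and array layout below are illustrative assumptions, not the repository's guaranteed format.

import numpy as np

data = np.load("train_data.npz")   # hypothetical file name
clips = data[data.files[0]]        # first array stored in the archive
print(clips.shape)                 # e.g. (num_clips, frames, features)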

Run on the code:

Training

python train_blending_GAN_controallable.py --name <name_of_experiment> --datasetPath <your_train_data_path> --ValdatasetPath <your_valid/test_data_path> 

The results will be saved in <Code_path>/experiments/ (the default).

You can check and adjust the default settings in lines 29 to 37 of <Code_path>/train_blending_GAN_controallable.py.
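For example, with placeholder experiment name and data paths:

python train_blending_GAN_controallable.py --name baseline_run --datasetPath ./data/train --ValdatasetPath ./data/valid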

Test

A pretrained model, model_399.pt, is provided in <Code_path>/pretrained/.

cd <Code_path>/Test_Code; python test.py --name <name_of_experiment> --ValdatasetPath <your_valid/test_data_path> --model_pretrained_modelpath <trained_model_path>
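For example, using the provided model_399.pt (the relative path assumes the default repository layout):

cd <Code_path>/Test_Code; python test.py --name test_run --ValdatasetPath ./data/valid --model_pretrained_modelpath ../pretrained/model_399.pt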

The results will be saved in <Code_path>/experiments/ (the default).

Visualization

The visualization code is adapted from Here.

cd <Code_path>/VisualizationCode ; python view.py --name <name_of_experiment> --epoch <num of epoch(train)/iter(test)>
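For example (the epoch/iteration number is illustrative):

cd <Code_path>/VisualizationCode ; python view.py --name test_run --epoch 399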

Reference

Reference Paper (3DV 2020)

Official GitHub implementation (TensorFlow) of the reference paper
