Mannequin Challenge Code and Trained Models

This repository contains inference code for models trained on the Mannequin Challenge dataset introduced in the CVPR 2019 paper "Learning the Depths of Moving People by Watching Frozen People."

This is not an officially supported Google product.

Setup

The code is based on PyTorch and has been tested with PyTorch 1.1 and Python 3.6.

We recommend setting up a virtualenv environment for installing PyTorch and the other necessary Python packages. The virtualenv documentation (or steps 1 and 2 of the TensorFlow installation guide) may be helpful.

Once your environment is set up and activated, install the necessary packages:

(pytorch)$ pip install torch torchvision scikit-image h5py
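To confirm the environment is set up correctly, a quick sanity check such as the following can be run (a minimal sketch, not part of the repository; it only verifies that the packages import and reports whether a GPU is visible):

import torch
import torchvision
import skimage
import h5py

# Report library versions and GPU availability before running inference.
print("PyTorch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("scikit-image:", skimage.__version__)
print("CUDA available:", torch.cuda.is_available())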

The model checkpoints are stored on Google Cloud and may be retrieved by running:

(pytorch)$ ./fetch_checkpoints.sh
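Assuming the checkpoints are ordinary PyTorch weight files loadable with torch.load, a sketch like the one below can be used to inspect one after downloading (the path shown is hypothetical; check the directory that fetch_checkpoints.sh actually creates):

import torch

# Hypothetical checkpoint path; adjust to whatever fetch_checkpoints.sh downloaded.
ckpt_path = "checkpoints/best_depth_net_G.pth"

# Load on CPU so the check works even without a GPU.
state = torch.load(ckpt_path, map_location="cpu")

# If the file is a state_dict, list the first few parameter names.
if isinstance(state, dict):
    print(list(state.keys())[:5])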

Single-View Inference

Our test set for single-view inference is the DAVIS 2016 dataset. Download and unzip it by running:

(pytorch)$ ./fetch_davis_data.sh

Then run the DAVIS inference script:

(pytorch)$ python test_davis_videos.py --input=single_view

Once the run completes, visualizations of the output should be available in test_data/viz_predictions.
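The exact filenames under test_data/viz_predictions depend on the script, but assuming the visualizations are written as standard image files, a short sketch like this can load and inspect one (the glob pattern is a guess, not the script's documented output format):

import glob
from skimage import io

# Collect any images the inference script wrote; the extension is assumed.
paths = sorted(glob.glob("test_data/viz_predictions/**/*.png", recursive=True))
print("found", len(paths), "visualizations")

if paths:
    img = io.imread(paths[0])
    print(paths[0], img.shape, img.dtype)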

Full Model Inference

The full model described in the paper requires several additional inputs: the human segmentation mask, the depth-from-parallax buffer, and (optionally) a human keypoint buffer. We provide a preprocessed version of the TUM RGBD dataset that includes these inputs. Download (~9GB) and unzip it using the script:

(pytorch)$ ./fetch_tum_data.sh
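For intuition only: the full model consumes these extra buffers as additional input channels alongside the RGB frame. The sketch below is illustrative and does not reproduce the repository's actual channel ordering or preprocessing; it simply stacks an RGB image, a human segmentation mask, a depth-from-parallax map, and a keypoint heatmap into a single 6-channel network input:

import torch

h, w = 256, 384                       # illustrative resolution
rgb = torch.rand(3, h, w)             # RGB frame, values in [0, 1]
human_mask = torch.zeros(1, h, w)     # 1 where a person is detected
parallax_depth = torch.rand(1, h, w)  # depth estimated from camera motion
keypoints = torch.zeros(1, h, w)      # optional human keypoint heatmap

# Concatenate along the channel dimension and add a batch dimension.
net_input = torch.cat([rgb, human_mask, parallax_depth, keypoints], dim=0)
net_input = net_input.unsqueeze(0)    # shape: (1, 6, h, w)
print(net_input.shape)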

To reproduce the numbers in Table 2 of the paper, run:

(pytorch)$ python test_tum.py --input=single_view
(pytorch)$ python test_tum.py --input=two_view
(pytorch)$ python test_tum.py --input=two_view_k

Here, single_view is variant I from the paper, two_view is variant IDCM, and two_view_k is variant IDCMK. The script prints running averages of the various error metrics as it runs and reports the final error metrics when it completes.
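As a rough illustration of what such a running error metric looks like (this is not the repository's metric code; it simply shows a running RMSE accumulated over batches of depth predictions):

import torch

class RunningRMSE:
    """Accumulates squared error over batches and reports a running RMSE."""

    def __init__(self):
        self.sq_err_sum = 0.0
        self.count = 0

    def update(self, pred_depth, gt_depth, valid_mask):
        # Only compare pixels where ground-truth depth is valid.
        diff = (pred_depth - gt_depth)[valid_mask]
        self.sq_err_sum += torch.sum(diff ** 2).item()
        self.count += diff.numel()

    def value(self):
        return (self.sq_err_sum / max(self.count, 1)) ** 0.5

# Usage with dummy tensors standing in for one batch of predictions.
metric = RunningRMSE()
pred = torch.rand(1, 240, 320)
gt = torch.rand(1, 240, 320)
mask = gt > 0
metric.update(pred, gt, mask)
print("running RMSE:", metric.value())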

Acknowledgements

If you find the code or results useful, please cite the following paper:

@inproceedings{li2019learning,
  title={Learning the Depths of Moving People by Watching Frozen People},
  author={Li, Zhengqi and Dekel, Tali and Cole, Forrester and Tucker, Richard
    and Snavely, Noah and Liu, Ce and Freeman, William T},
  booktitle={Proc. Computer Vision and Pattern Recognition (CVPR)},
  year={2019}
}
