DeepSim

This is a TensorFlow implementation of the paper Generating Images with Perceptual Similarity Metrics based on Deep Networks by Alexey Dosovitskiy and Thomas Brox.

This repo is based on CharlesShang's TFFRCNN. I really appreciate their great work.

I mainly reuse their data loading module in ./deepSimGAN/util.py. You can remove all code outside the ./deepSimGAN directory if you rewrite the DataFetcher class in ./deepSimGAN/util.py, as sketched below.
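If you plan to swap in your own dataset, the fetcher below is a minimal sketch of what a replacement might look like. The class name, method names, and the (images, labels) batch format are assumptions for illustration, not the repo's actual interface; match whatever ./deepSimGAN/util.py and the training scripts expect.

    # Hypothetical replacement fetcher -- class/method names and batch format
    # are assumptions for illustration, not the repo's actual interface.
    import cv2
    import numpy as np

    class MyDataFetcher(object):
        """Yields fixed-size (images, labels) batches from a custom image list."""

        def __init__(self, image_paths, labels, batch_size=16, image_size=224):
            self.image_paths = image_paths
            self.labels = labels
            self.batch_size = batch_size
            self.image_size = image_size
            self._cursor = 0

        def _load(self, path):
            # Read with OpenCV (BGR) and resize to the network's input size.
            img = cv2.imread(path)
            img = cv2.resize(img, (self.image_size, self.image_size))
            return img.astype(np.float32)

        def next_batch(self):
            # Sequential batching with wrap-around; shuffle beforehand if needed.
            images, labels = [], []
            for _ in range(self.batch_size):
                if self._cursor >= len(self.image_paths):
                    self._cursor = 0
                images.append(self._load(self.image_paths[self._cursor]))
                labels.append(self.labels[self._cursor])
                self._cursor += 1
            return np.stack(images), np.array(labels)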

Requirements

  • python 2.7
  • tensorflow >= 1.1.0
  • python-opencv >= 3.2.0
  • numpy >= 1.11.3
  • tqdm

Training

To train your own DeepSim model, you need to:

  1. Prepare the dataset and pretrained model for encoder training.
  2. Train your encoder and save the fine-tuned checkpoint.
  3. Prepare the dataset for generator and discriminator training.
  4. Load the fine-tuned encoder and train the generator and discriminator.

Prepare dataset and pretrained model for encoder

  1. Download the training/validation data and the VOCdevkit to a directory named VOCdevkit, such as /data/VOCdevkit. We use $VOCdevkit to refer to it and $DeepSim to refer to this repo's root directory.

    cd $VOCdevkit
    wget http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar
    wget http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCdevkit_18-May-2011.tar
  2. Extract all of these tars

    tar xvf VOCtrainval_11-May-2012.tar
    tar xvf VOCdevkit_18-May-2011.tar
  3. It should have this basic structure

    $VOCdevkit/                           # development kit
    $VOCdevkit/VOCcode/                   # VOC utility code
    $VOCdevkit/VOC2012                    # image sets, annotations, etc.
    # ... and several other directories ...
  4. Create symlinks for the PASCAL VOC dataset

    cd $DeepSim/data
    ln -s $VOCdevkit VOCdevkit2012
  5. Download the pre-trained VGG16 model and put it at $DeepSim/data/pretrain_model/VGG_imagenet.npy (a quick way to inspect the file is sketched after this list).
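VGG_imagenet.npy files in TFFRCNN-style repos are usually a pickled dict mapping layer names to their weight arrays, but that layout is only an assumption here; the sketch below is one way to check what the file you downloaded actually contains.

    # Minimal sketch: inspect the pretrained VGG16 weight file.
    # Assumes a pickled dict of {layer_name: {param_name: ndarray}}, which is
    # the usual layout in TFFRCNN-style repos; check the keys if yours differs.
    # (Newer NumPy versions also require allow_pickle=True in np.load.)
    from __future__ import print_function
    import numpy as np

    weights = np.load('data/pretrain_model/VGG_imagenet.npy',
                      encoding='latin1').item()
    for layer_name in sorted(weights):
        shapes = {name: p.shape for name, p in weights[layer_name].items()}
        print(layer_name, shapes)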

Train the encoder

The encoder net definition and training code are in deepSimGAN/EncoderNet.py. You can use the following command to start training:

cd $DeepSim
python deepSimGAN/EncoderNet.py --weight_path data/pretrain_model/VGG_imagenet.npy --logdir output/encoder

All checkpoints and summaries are stored in the given logdir. You can use TensorBoard to monitor the training process:

tensorboard --logdir output/encoder --host 0.0.0.0 --port 6006
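Assuming EncoderNet.py saves with the standard tf.train.Saver (the usual TF 1.x setup), you can sanity-check the latest checkpoint in the logdir before moving on; a minimal sketch:

    # Minimal sketch: list the variables stored in the latest encoder checkpoint.
    # Assumes standard tf.train.Saver checkpoints; variable names depend on
    # how EncoderNet.py builds the graph.
    from __future__ import print_function
    import tensorflow as tf

    ckpt = tf.train.latest_checkpoint('output/encoder')
    reader = tf.train.NewCheckpointReader(ckpt)
    for name, shape in sorted(reader.get_variable_to_shape_map().items()):
        print(name, shape)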

Prepare dataset for generator and discriminator training

You can simply reuse the Pascal VOC 2012 dataset prepared above.

If you want to use other datasets, remember to rewrite the DataFetcher class in ./deepSimGAN/util.py (see the sketch above).

Train your deepSimNet

The deepSimNet definition is in deepSimGAN/deepSimNet.py and the training code is in deepSimGAN/main.py. You can use the following command to start training:

python deepSimGAN/main.py --encoder output/encoder --logdir output/deepsim

There are many other arguments that can be specified to influence training; please refer to the argument parser in deepSimGAN/main.py.
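For orientation, the generator objective in the paper combines three terms: a loss in the encoder's feature space, a loss in image space, and an adversarial loss. The sketch below shows that combination in TF 1.x; the tensor arguments and default weights are illustrative placeholders, not the actual names or values used in deepSimGAN/main.py.

    # Minimal sketch of the paper's composite generator loss:
    # feature-space loss + image-space loss + adversarial loss.
    # Arguments and default weights are illustrative placeholders.
    import tensorflow as tf

    def generator_loss(real_image, recon_image, real_feat, recon_feat,
                       disc_logits_fake, w_feat=1.0, w_img=2e-6, w_adv=1e-2):
        # Distance in the encoder's feature space (the "perceptual" term).
        loss_feat = tf.reduce_mean(tf.square(recon_feat - real_feat))
        # Pixel-space reconstruction term.
        loss_img = tf.reduce_mean(tf.square(recon_image - real_image))
        # Adversarial term: push the discriminator's logits on generated
        # images toward the "real" label.
        loss_adv = tf.reduce_mean(
            tf.nn.sigmoid_cross_entropy_with_logits(
                labels=tf.ones_like(disc_logits_fake),
                logits=disc_logits_fake))
        return w_feat * loss_feat + w_img * loss_img + w_adv * loss_adv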
