
# WavePaint


Resource-efficient Token-mixer for Self-supervised Inpainting

[arXiv](https://arxiv.org/abs/2307.00407)

*(Qualitative inpainting examples under thick, medium, and thin masks.)*

## Model Architecture

*(Architecture diagram.)*

## Training using `train.py`

Change the path to the input directory containing the images, set the output image size as required, and modify the model parameters and training configuration as needed. You can use either medium or thick masks.

```
python train.py -batch <batch-size> -mask <mask-size>
```
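For example, a run with batch size 16 on thick masks might look like the line below; the exact values `-mask` accepts (names such as `thick`/`medium`, or a numeric size) depend on `train.py`'s argument parser, so treat this as an assumption:

```
python train.py -batch 16 -mask thick
```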

## Running Inference using `infer.py`

Provide the path to the saved model `.pth` file, the folder containing the validation ground-truth images and masks, the folder in which to save the model outputs, and the folder in which to save the masked images:

```
python infer.py
```
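For orientation, here is a minimal, hypothetical sketch of what an inference pass could look like. It is *not* the repo's `infer.py`: the `WavePaint` constructor arguments, the forward signature `model(masked_image, mask)`, and the mask convention (1 = hole) are all assumptions.

```python
# Hypothetical inference sketch -- names and signatures are assumptions.
import numpy as np
import torch
from PIL import Image

from model import WavePaint  # assumes model.py exposes a WavePaint class

device = "cuda" if torch.cuda.is_available() else "cpu"
model = WavePaint().to(device)
model.load_state_dict(torch.load("wavepaint.pth", map_location=device))
model.eval()

def load_tensor(path):
    """Load an image as a float tensor in [0, 1] with shape (1, C, H, W)."""
    arr = np.asarray(Image.open(path), dtype=np.float32) / 255.0
    if arr.ndim == 2:            # single-channel mask -> add a channel axis
        arr = arr[..., None]
    return torch.from_numpy(arr).permute(2, 0, 1).unsqueeze(0).to(device)

image = load_tensor("celebhq/val_256/random_thick_256/0.png")
mask = load_tensor("celebhq/val_256/random_thick_256/0_mask000.png")

masked = image * (1 - mask)      # zero out the region to be inpainted
with torch.no_grad():
    pred = model(masked, mask)   # assumed call signature

# Keep the known pixels; take the prediction only inside the hole.
result = image * (1 - mask) + pred * mask

def save_tensor(t, path):
    arr = t.squeeze(0).permute(1, 2, 0).clamp(0, 1).cpu().numpy() * 255
    Image.fromarray(arr.astype(np.uint8)).save(path)

save_tensor(masked, "output/masked/0.png")
save_tensor(result, "output/output/0.png")
```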

## Calculating performance metrics using `evaluate.py`

Provide the path to the ground-truth images, the folder containing the model outputs, and the path where the metrics CSV should be saved:

```
evaluate.py <path/to/Ground/truth/images> <path/to/model/output> <path/to/save/metrics.csv>
```
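The sketch below shows one way per-image metrics could be computed and written to a CSV; it is an illustration, not the repo's `evaluate.py`. It computes PSNR and SSIM with scikit-image; the `metrics/` folder in the tree below (`alex.pth`, `squeeze.pth`, `vgg.pth`) suggests LPIPS is also computed, which the `lpips` package can provide.

```python
# Hypothetical evaluation sketch: PSNR/SSIM over paired images -> metrics.csv.
import csv
import sys
from pathlib import Path

import numpy as np
from PIL import Image
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

gt_dir, out_dir, csv_path = Path(sys.argv[1]), Path(sys.argv[2]), sys.argv[3]

rows = []
for gt_file in sorted(gt_dir.glob("*.png")):
    out_file = out_dir / gt_file.name        # assumes matching filenames
    gt = np.asarray(Image.open(gt_file))
    pred = np.asarray(Image.open(out_file))
    rows.append({
        "image": gt_file.name,
        "psnr": peak_signal_noise_ratio(gt, pred),
        "ssim": structural_similarity(gt, pred, channel_axis=-1),
    })

with open(csv_path, "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["image", "psnr", "ssim"])
    writer.writeheader()
    writer.writerows(rows)

print(f"Wrote metrics for {len(rows)} images to {csv_path}")
```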

As it is written now, the code expects a folder structure of the form:

```
workspace/
├── train.py
├── config.py
├── datasets.py
├── evaluate.py
├── infer.py
├── masks.py
├── model.py
├── scores.py
├── celebhq/
│   ├── train_256/
│   │   ├── 0.jpg
│   │   └── 1.jpg ...
│   └── val_256/
│       ├── random_medium_256/
│       │   ├── 0.png
│       │   └── 0_mask000.png ...
│       ├── random_thick_256/
│       │   ├── 0.png
│       │   └── 0_mask000.png ...
│       └── random_thin_256/
│           ├── 0.png
│           └── 0_mask000.png ...
├── generated_images/
│   ├── image1.png
│   └── image2.png ...
├── metrics/
│   ├── alex.pth
│   ├── squeeze.pth
│   ├── vgg.pth
│   └── metrics.csv
└── output/
    ├── masked/
    │   ├── img1.png
    │   └── img2.png ...
    └── output/
        ├── img1.png
        └── img2.png ...
```
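The validation naming convention above pairs each ground-truth image with one or more mask files (`0.png` with `0_mask000.png`, `0_mask001.png`, ...). The snippet below is a small sketch of how a loader might collect those pairs; the directory path is illustrative.

```python
# Pair each validation image with its mask files, following the naming
# convention shown in the tree above (N.png <-> N_mask000.png, ...).
from pathlib import Path

val_dir = Path("celebhq/val_256/random_thick_256")  # illustrative path

pairs = []
for img in sorted(val_dir.glob("*.png")):
    if "_mask" in img.stem:
        continue                                    # skip the mask files
    for mask in sorted(val_dir.glob(f"{img.stem}_mask*.png")):
        pairs.append((img, mask))

for img, mask in pairs[:3]:
    print(img.name, "<->", mask.name)
```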

We used the LaMa training and inference code for our experiments, from https://github.com/advimman/lama. The scripts for generating the various validation-set masks are also available there.

## Citation

If you found this code helpful, please consider citing:

```
@misc{jeevan2023wavepaint,
      title={WavePaint: Resource-efficient Token-mixer for Self-supervised Inpainting},
      author={Pranav Jeevan and Dharshan Sampath Kumar and Amit Sethi},
      year={2023},
      eprint={2307.00407},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```