paper | arXiv | YouTube | blog | paper (in Chinese) | video (in Chinese) | blog (in Chinese)
This repository is the official implementation of Residual Denoising Diffusion Models.
Note:
- The current setting trains two UNets (one to estimate the residuals and one to estimate the noise), which can be used to explore the partially path-independent generation process.
- Other tasks require the following modifications:
  a) Replace [self.alphas_cumsum[t]*self.num_timesteps, self.betas_cumsum[t]*self.num_timesteps] with [t, t] (in L852 and L1292).
  b) For image restoration, set generation=False in L120 and convert_to_ddim=False in L640 and L726.
  c) Uncomment L726 for simultaneous removal of residuals and noise.
  d) Modify the corresponding experimental settings (see Table 4 in the Appendix).
- The code is being updated.
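For orientation, the forward process the two UNets are trained against mixes a residual term and a noise term into the clean image. A minimal NumPy sketch, assuming the forward form I_t = I_0 + alpha_bar_t * I_res + beta_bar_t * eps from the paper (the schedule values and names below are illustrative assumptions, not the repository's defaults):

```python
import numpy as np

def rddm_forward(I0, I_in, alphas_cumsum, betas_cumsum, t, rng):
    """Sample I_t from the RDDM forward process
        I_t = I_0 + alpha_bar_t * I_res + beta_bar_t * eps,
    where I_res = I_in - I_0 is the residual between the degraded
    input and the clean target (for generation, I_in can be zero)."""
    I_res = I_in - I0
    eps = rng.standard_normal(I0.shape)
    return I0 + alphas_cumsum[t] * I_res + betas_cumsum[t] * eps

# Toy example with illustrative schedules (assumed, not the repo's):
T = 1000
alphas_cumsum = np.linspace(1.0 / T, 1.0, T)          # reaches 1 at t = T
betas_cumsum = np.sqrt(np.linspace(1.0 / T, 1.0, T))  # cumulative noise scale

rng = np.random.default_rng(0)
I0 = rng.random((8, 8))    # toy clean image
I_in = rng.random((8, 8))  # toy degraded input
I_t = rddm_forward(I0, I_in, alphas_cumsum, betas_cumsum, 500, rng)
```

At t = T the residual coefficient reaches 1, so I_t becomes the degraded input plus noise, which is what the reverse process starts from.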
To install requirements:
conda env create -f install.yaml
To train RDDM, run this command:
python train.py
or
accelerate launch train.py
To evaluate image generation, run:
cd eval/image_generation_eval/
python fid_and_inception_score.py path_of_gen_img
For image restoration, the MATLAB evaluation code is in ./eval.
Two UNets (de-residual + denoising) for the partially path-independent generation process
See Table 3 in main paper.
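As a rough sketch of how a decoupled reverse step can look with two networks: each step subtracts a residual increment (predicted by one UNet) and a noise increment (predicted by the other), so the two removal paths can be adjusted independently. This follows the deterministic update rule as we read it from the paper; the function and variable names are our assumptions, not the repository's API:

```python
import numpy as np

def reverse_step(I_t, t, res_net, noise_net, alphas_cumsum, betas_cumsum):
    """One deterministic reverse step with two separate estimators:
        I_{t-1} = I_t - (abar_t - abar_{t-1}) * I_res_hat
                      - (bbar_t - bbar_{t-1}) * eps_hat
    res_net predicts the residual, noise_net predicts the noise."""
    I_res_hat = res_net(I_t, t)  # de-residual UNet output
    eps_hat = noise_net(I_t, t)  # denoising UNet output
    a_step = alphas_cumsum[t] - (alphas_cumsum[t - 1] if t > 0 else 0.0)
    b_step = betas_cumsum[t] - (betas_cumsum[t - 1] if t > 0 else 0.0)
    return I_t - a_step * I_res_hat - b_step * eps_hat

# Toy usage with oracle predictors: the steps telescope, so starting
# from I_T = I_0 + abar_T * I_res + bbar_T * eps recovers I_0 exactly.
I0 = np.full((2, 2), 0.5)
I_res = np.full((2, 2), 0.3)
eps = np.full((2, 2), -0.2)
T = 10
ac = np.linspace(0.1, 1.0, T)
bc = np.linspace(0.05, 0.5, T)
x = I0 + ac[-1] * I_res + bc[-1] * eps
for t in range(T - 1, -1, -1):
    x = reverse_step(x, t, lambda a, b: I_res, lambda a, b: eps, ac, bc)
```

The telescoping sums are what make the process partially path-independent: only the cumulative amounts of removed residual and noise matter, not the order in which the two networks' contributions are interleaved.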
For image restoration:
For image generation (on the CelebA dataset):
We can convert a pre-trained DDIM to RDDM by coefficient transformation (see code).
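The transformation can be sketched as follows. This is a minimal illustration of the idea as we understand it: matching DDIM's forward process sqrt(abar_t)*x0 + sqrt(1-abar_t)*eps against RDDM's (1-alpha_bar_t)*x0 + beta_bar_t*eps (with I_in = 0 for generation) suggests alpha_bar_t = 1 - sqrt(abar_t) and beta_bar_t = sqrt(1 - abar_t). The exact formulas used in the repository may differ, so verify against the code:

```python
import numpy as np

def ddim_to_rddm(alphas_cumprod):
    """Map a DDIM schedule's cumulative products abar_t to RDDM's
    cumulative residual/noise coefficients by matching the two
    forward processes (our reading of the coefficient transformation;
    not guaranteed to match the repository's implementation)."""
    alphas_cumsum = 1.0 - np.sqrt(alphas_cumprod)  # cumulative residual weight
    betas_cumsum = np.sqrt(1.0 - alphas_cumprod)   # cumulative noise scale
    return alphas_cumsum, betas_cumsum

# Example with a standard DDPM-style linear beta schedule (assumed):
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas_cumprod = np.cumprod(1.0 - betas)
ac, bc = ddim_to_rddm(alphas_cumprod)
```

As abar_t decays toward 0, both RDDM coefficients approach 1, i.e. the terminal state is (almost) pure noise, as in DDIM.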
If you find our work useful in your research, please consider citing:
@InProceedings{Liu_2024_CVPR,
author = {Liu, Jiawei and Wang, Qiang and Fan, Huijie and Wang, Yinong and Tang, Yandong and Qu, Liangqiong},
title = {Residual Denoising Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2024},
pages = {2773-2783}
}
Please contact Liangqiong Qu (https://liangqiong.github.io/) or Jiawei Liu ([email protected]) if you have any questions.