
Teacher Network


For the CIFAR-10 dataset, we opted to distil the StyleGAN2-ADA [1] model, which achieves state-of-the-art performance on conditional image generation on CIFAR-10 [2]. The main reason behind this success is the proposed adaptive discriminator augmentation mechanism, which significantly stabilizes training when limited data are available. In our case, however, the model is used purely as a black-box image generator: regardless of the training procedure and techniques followed in the original study, we only need access to the input-output pairs of the model's generator. Specifically, we use the official PyTorch implementation of StyleGAN2-ADA by NVIDIA Research Projects on GitHub, along with the provided weights of the model pre-trained on CIFAR-10 for conditional image generation.

This generator is used to create the FakeCIFAR10 dataset, consisting of images generated by the model together with the corresponding input noise vectors and class labels. The dataset is then used to train the student network to mimic the functionality of the teacher network (i.e. StyleGAN2-ADA) under several objectives. Once the dataset has been created, the StyleGAN2-ADA model is no longer needed in the training procedure of the student network and can be discarded.
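The snippet below is a minimal sketch of how such input-output pairs can be queried from the pre-trained generator, assuming the official stylegan2-ada-pytorch repository (which provides the `dnnlib` and `legacy` modules) and its pre-trained CIFAR-10 pickle; the pickle path and batch size are placeholders, and the actual dataset-creation logic lives in create_dataset.py.

```python
import torch
import torch.nn.functional as F

# dnnlib and legacy ship with NVIDIA's stylegan2-ada-pytorch repository.
import dnnlib
import legacy

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Placeholder path to the pre-trained CIFAR-10 network pickle provided by NVIDIA.
network_pkl = 'cifar10.pkl'
with dnnlib.util.open_url(network_pkl) as f:
    G = legacy.load_network_pkl(f)['G_ema'].to(device)  # teacher generator

batch_size = 64
# Latent noise vectors of dimensionality G.z_dim.
z = torch.randn([batch_size, G.z_dim], device=device)
# Random class labels, one-hot encoded as conditioning vectors of size G.c_dim.
labels = torch.randint(0, 10, (batch_size,), device=device)
c = F.one_hot(labels, num_classes=G.c_dim).float()

with torch.no_grad():
    # Black-box query: only the (z, c) -> image mapping of the teacher is needed.
    img = G(z, c, noise_mode='const')

# Map generated images from [-1, 1] to uint8 [0, 255] for storage.
img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8)

# The (z, labels, img) triples form one batch of the FakeCIFAR10 dataset.
```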

FakeCIFAR10

The FakeCIFAR10 dataset consists of 50,000 synthetic images generated by the StyleGAN2-ADA model: 5,000 images for each of the 10 classes, together with the noise vectors and class labels that were fed to StyleGAN's Generator. The dataset was generated using the create_dataset.py script, and it can be found here.
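For illustration, a minimal PyTorch Dataset wrapper over such a dump might look as follows. This is a sketch that assumes the triples are stored as a dictionary of tensors in a single file; the actual on-disk format produced by create_dataset.py may differ.

```python
import torch
from torch.utils.data import Dataset, DataLoader

class FakeCIFAR10(Dataset):
    """Synthetic CIFAR-10 triples: (noise vector, class label, teacher image).

    Assumes the dataset was dumped as a dict of tensors; the real layout
    produced by create_dataset.py may differ.
    """

    def __init__(self, path='fake_cifar10.pt'):
        data = torch.load(path)
        self.noise = data['noise']    # [50000, z_dim] input noise vectors
        self.labels = data['labels']  # [50000] class indices (0-9)
        self.images = data['images']  # [50000, 3, 32, 32] generated images

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        # The student learns to map (noise, label) to the teacher's image.
        return self.noise[idx], self.labels[idx], self.images[idx]

# Example usage when training the student network.
loader = DataLoader(FakeCIFAR10(), batch_size=128, shuffle=True)
```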

References

[1] Karras, Tero, et al. "Training generative adversarial networks with limited data." Advances in Neural Information Processing Systems 33 (2020): 12104-12114.

[2] Krizhevsky, Alex, and Geoffrey Hinton. "Learning multiple layers of features from tiny images." Technical report, University of Toronto (2009).