Inversion result is not good, but the similarity is high. #70

Open
zeta7337 opened this issue Nov 30, 2021 · 5 comments

@zeta7337

Hello, I trained e4e on my own dataset with a pretrained StyleGAN2 model, but the results look totally different from the source images. Can you give me some suggestions?

@zeta7337
Author

[attached image]

@omertov
Owner

omertov commented Dec 5, 2021

Hi @zeta7337!
From a quick glance, it seems the images might not be aligned according to FFHQ's alignment method.
In case you are using the pretrained FFHQ StyleGAN2, this might be the cause of your results.

To better understand the experiment settings, could you please provide me with answers to the following:

  1. Are you using the official StyleGAN2 FFHQ model? If so, is your training data aligned according to the FFHQ face alignment? (You can look at the inference script or notebook to see how to align your train and test datasets; see the sketch after this list.)

  2. In case you trained the StyleGAN on your custom dataset, could you provide some examples of generated StyleGAN images?

  3. In case you trained the StyleGAN on your custom (unaligned?) dataset, did you compute the identity loss on the entire image, or just on the crop of the face?
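
For reference, a minimal alignment sketch along the lines of the repo's inference notebook. It assumes e4e's `utils/alignment.py` (with its `align_face` helper) is importable and that dlib's 68-point landmark model has been downloaded locally; the file names here are placeholders:

```python
import dlib
from utils.alignment import align_face  # FFHQ-style alignment helper shipped with e4e

def run_alignment(image_path):
    # dlib's 68-point landmark model; download it separately and adjust the path
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
    aligned_image = align_face(filepath=image_path, predictor=predictor)  # returns a PIL image
    print("Aligned image size:", aligned_image.size)
    return aligned_image

# Hypothetical usage: align a raw photo before feeding it to the encoder
aligned = run_alignment("raw_face.jpg")
aligned.save("aligned_face.jpg")
```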

Hope we can fix the training results!
Best,
Omer

@zeta7337
Author

zeta7337 commented Dec 9, 2021


Thank you!
You are right, the images were not aligned correctly.
I have some other questions:
1. Should I train the StyleGAN generator and the e4e encoder on the same dataset?
2. I inverted an image of an Asian movie star with the FFHQ encoder, and the result is not that good; the inversion does not look like the source image. I think the encoder and generator are not familiar with Asian faces, because most images in FFHQ are of Europeans or Americans. So if I use a much bigger dataset that contains all kinds of people to train the generator and the e4e encoder, should the inversion get better?
3. I need an encoder that can invert images well; I don't need to edit the images. Do you have any suggestions for the training settings?
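
Regarding question 3, here is a hedged sketch of a reconstruction-focused training command. The flags follow the FFHQ example in the repo's README (verify the exact names and defaults against `options/train_options.py`), and the paths and dataset type are placeholders. Setting the latent-discriminator weight to 0 drops e4e's editability constraint in favor of raw reconstruction quality:

```bash
# Inversion-focused e4e training (sketch; check flag names against options/train_options.py)
python scripts/train.py \
  --dataset_type ffhq_encode \
  --exp_dir experiments/inversion_only \
  --start_from_latent_avg \
  --stylegan_size 1024 \
  --stylegan_weights pretrained_models/stylegan2-ffhq-config-f.pt \
  --id_lambda 0.5 \
  --lpips_lambda 0.8 \
  --l2_lambda 1.0 \
  --w_discriminator_lambda 0 \
  --batch_size 8 \
  --max_steps 200000
```

The `--w_discriminator_lambda 0` setting disables the latent discriminator that pushes codes toward StyleGAN's native W distribution; that constraint mainly buys editability, which question 3 says is not needed.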

@FelixChan9527

Hello, have you resolved the issue?

@snlpatel001213

Was the inversion issue for Asian faces solved?
