
Reproduced performance is different from the report #104

Open
trrrrht opened this issue Apr 24, 2024 · 1 comment
trrrrht commented Apr 24, 2024

Hi,

I installed the pygod package and tried to reproduce the performance reported in the paper. However, some models do not reach the reported numbers. For example, running Radar with its default parameters on the Enron dataset gives an AUC of only about 0.6, which is lower than the reported value.

Is this because of the randomness of unsupervised learning models?

Thanks for your attention.


kayzliu (Member) commented Jun 23, 2024

I agree that it may be due to the randomness of unsupervised learning models. Here is a note from the PyTorch docs on reproducibility:

Completely reproducible results are not guaranteed across PyTorch releases, individual commits, or different platforms.

Subtle changes in the implementation may change the results.
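To make a single run repeatable and to report numbers that are less sensitive to this randomness, a common practice is to fix the random seed and average the metric over several seeds. Below is a minimal sketch of that idea using only Python's stdlib `random` for portability; `run_detector` is a hypothetical stand-in for fitting a detector (such as Radar) and computing AUC, and with PyTorch the analogous seeding call would be `torch.manual_seed(seed)`.

```python
import random


def run_detector(seed: int) -> float:
    """Hypothetical stand-in: a metric that depends on random initialization.

    In a real pipeline this would fit the detector and return the AUC.
    """
    rng = random.Random(seed)
    return 0.55 + 0.1 * rng.random()  # pretend AUC in [0.55, 0.65)


# A single run is repeatable once the seed is fixed ...
assert run_detector(0) == run_detector(0)

# ... but different seeds give different numbers, so report the mean
# (and spread) over several seeds rather than a single run.
scores = [run_detector(s) for s in range(5)]
print(f"mean AUC over {len(scores)} seeds: {sum(scores) / len(scores):.3f}")
```

Note that, as the PyTorch docs quoted above warn, even a fixed seed only guarantees repeatability on the same platform and library version, not across releases or hardware.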
