Does query_topn compute the same metric as evaluate_performance? #257

Open
SuShu19 opened this issue Oct 23, 2021 · 1 comment

Comments


SuShu19 commented Oct 23, 2021

Description

I'm trying to evaluate the recommendation results of a tool.
I trained a model with AmpliGraph, but a problem appeared when I tried to evaluate the model's recommendations.

Actual Behavior

Firstly, I used the evaluate_performance function and restricted the entities_subset parameter to the type of entity I want to recommend.
Secondly, I used the query_topn function to validate the recommendation metrics obtained from evaluate_performance.
What's strange is that the hits@10 from query_topn is much lower than the one from evaluate_performance.

I'm wondering: should hits@10 from query_topn be the same as the one from evaluate_performance?
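For reference, hits@10 is simply the fraction of test triples whose rank falls within the top 10. A minimal, library-independent sketch (the function name and the example ranks are hypothetical, not part of the AmpliGraph API):

```python
def hits_at_n(ranks, n=10):
    """Fraction of test triples with 1-based rank <= n."""
    return sum(1 for r in ranks if r <= n) / len(ranks)

# Hypothetical ranks for five test triples: three of them make the top 10.
print(hits_at_n([1, 3, 12, 7, 40]))  # 0.6
```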

@NicholasMcCarthy
Contributor

Short answer: no, the two functions don't compute the same thing.

The query_topn function doesn't filter out the positive (i.e. known) triples of the graph, so its results include triples the model was trained on. These known triples are usually ranked very high, which pushes the test triples further down the ranking; a hits@10 computed from query_topn therefore comes out lower than the filtered hits@10 reported by evaluate_performance.
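The effect above can be illustrated with a pure-Python sketch (hypothetical data, not the AmpliGraph API): when known positives crowd the top of the candidate list, the test triple's raw rank misses the top 10, while the filtered rank does not.

```python
def rank_of(candidates, target):
    """1-based rank of `target` in a best-first-ordered candidate list."""
    return candidates.index(target) + 1

# Suppose the model ranks candidate tails for a query (h, r, ?), best first,
# and 12 training triples score above the held-out test tail "t_test".
ranked_tails = [f"known_{i}" for i in range(12)] + ["t_test", "other"]
known_positives = {f"known_{i}" for i in range(12)}  # triples seen in training

# Raw, query_topn-style ranking: known positives crowd the top.
raw_rank = rank_of(ranked_tails, "t_test")  # 13 -> misses hits@10

# Filtered, evaluate_performance-style ranking: drop known positives first.
filtered = [t for t in ranked_tails if t not in known_positives]
filtered_rank = rank_of(filtered, "t_test")  # 1 -> counts toward hits@10

print(raw_rank, filtered_rank)  # 13 1
```

The same test triple counts as a hit under the filtered protocol and as a miss under the raw one, which is exactly the discrepancy described in the question.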
