I'm trying to evaluate the recommendation results produced by a tool.
I trained a model with AmpliGraph; the problem appears when I try to evaluate its recommendations.
Actual Behavior
First, I used the evaluate_performance function and limited the entities_subset parameter to the type of entity I want to recommend.
Second, I used the query_topn function to validate the recommendation metrics obtained from evaluate_performance.
What's strange is that the hits@10 computed from query_topn is much lower than the one from evaluate_performance.
Should the hits@10 from query_topn be the same as the one from evaluate_performance?
Short answer: no, the two functions don't compute the same thing.
The query_topn function doesn't filter out the positive (i.e. known) triples of the graph, so it will return triples that the model was trained on. These known triples are usually ranked quite high, so they crowd the test triples out of the top-n list and deflate the hits@10 metric in comparison to evaluate_performance, which filters them before ranking.
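To make the difference concrete, here is a minimal pure-Python sketch (not the AmpliGraph API; the entity names and scores are made up) of why an unfiltered top-n query scores lower on hits@k than a filtered ranking protocol:

```python
# Hypothetical scores the model assigns to 6 candidate tail entities
# for some fixed (head, relation); higher means more plausible.
scores = {"e1": 0.9, "e2": 0.8, "e3": 0.7, "e4": 0.6, "e5": 0.5, "e6": 0.4}

# Tails that already appear with this (head, relation) in the training graph.
known_tails = {"e1", "e2"}

# The held-out test positive we hope to find in the top-n.
true_tail = "e3"
top_n = 2

ranked = sorted(scores, key=scores.get, reverse=True)

# Unfiltered (query_topn-style): known positives fill the top slots.
unfiltered = ranked[:top_n]
hit_unfiltered = true_tail in unfiltered   # False: e1 and e2 crowd it out

# Filtered (evaluate_performance-style): drop known positives before ranking.
filtered = [e for e in ranked if e not in known_tails][:top_n]
hit_filtered = true_tail in filtered       # True: e3 now ranks first

print(hit_unfiltered, hit_filtered)        # False True
```

The same model and the same scores yield a hit under the filtered protocol but a miss under the raw top-n query, which matches the gap you observed. If you want query_topn results to be comparable, you would need to remove the known triples from its output yourself before counting hits.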