[solidago] gbt: estimate asymmetrical uncertainties based on increase of loss by 1 #1973
Conversation
This is intended. One interesting implication of this is that if a user says A is maximally better than B, then the comparison will yield an infinite right uncertainty on A, and an infinite uncertainty on B. Does this break something? If the uncertainty is too large (perhaps a feature rather than a bug in principle), the value +1 in the equation may be changed to a smaller value.
Ok 👍 I pushed the modification in bf95cbb. It seems to work. I was just a bit surprised to see such a big difference compared to the current expected values for uncertainties in the test files. For example in "data_3.py":
@lenhoanglnh The uncertainty values close to 700 were actually due to a numerical issue. In practice, there are cases where the log-likelihood term never reaches the threshold
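The idea discussed here can be sketched independently of solidago's code: the left and right uncertainties around a minimizer are the distances at which the objective has increased by 1 from its minimum, found with a bracketing root solver. The loss below is a hypothetical stand-in, not the project's actual likelihood model.

```python
# Sketch: asymmetric uncertainties as the distances at which a convex loss
# increases by 1 from its minimum. The loss here is illustrative only.
import numpy as np
from scipy.optimize import brentq, minimize_scalar

def loss(theta):
    # convex, asymmetric around its minimum because of the linear term
    return np.log(np.cosh(theta)) + 0.1 * theta

theta_hat = minimize_scalar(loss).x          # numerical minimizer
target = loss(theta_hat) + 1.0               # "loss increase by 1" threshold

# right uncertainty: smallest d > 0 with loss(theta_hat + d) == target
d_right = brentq(lambda d: loss(theta_hat + d) - target, 1e-9, 100.0)
# left uncertainty: smallest d > 0 with loss(theta_hat - d) == target
d_left = brentq(lambda d: loss(theta_hat - d) - target, 1e-9, 100.0)
```

Because the loss rises faster to the right of the minimum here, `d_right < d_left`, which is exactly the asymmetry this PR is after.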
```diff
@@ -29,7 +30,7 @@ def solve(
     -------
     out: float
     """
-    ymin, ymax = f(xmin) - value, f(xmax) - value
+    ymin, ymax = f(xmin, *args) - value, f(xmax, *args) - value
```
minor: Another way to do something similar would be to leave `solve` unchanged, and call it with a `functools.partial`:
https://docs.python.org/3/library/functools.html#functools.partial
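The reviewer's alternative can be sketched as follows. The `solve` signature below is a minimal illustrative bisection solver, not solidago's actual implementation; the point is that `partial` binds the extra arguments up front, so `solve` keeps its original unary-callable contract.

```python
# Sketch: keep solve() expecting a unary function, and bind extra arguments
# with functools.partial instead of threading *args through solve itself.
from functools import partial

def solve(f, value, xmin, xmax, tol=1e-9):
    """Find x in [xmin, xmax] with f(x) == value by bisection (illustrative)."""
    ymin, ymax = f(xmin) - value, f(xmax) - value
    assert ymin * ymax <= 0, "root must be bracketed"
    while xmax - xmin > tol:
        xmid = (xmin + xmax) / 2
        if (f(xmid) - value) * ymin > 0:
            xmin = xmid
        else:
            xmax = xmid
    return (xmin + xmax) / 2

def f(delta, offset):
    return delta ** 2 + offset

# Instead of passing *args through solve, bind the extra argument here:
root = solve(partial(f, offset=-1.0), value=0.0, xmin=0.0, xmax=10.0)
```

The trade-off is that `partial` objects may not play well with `@njit`-compiled callees, which is one plausible reason for passing `*args` explicitly instead.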
```python
@njit
def f(delta, theta_diff, r, coord_indicator, ll_actual):
    return ll_function(theta_diff + delta * coord_indicator, r) - ll_actual - 1.0
```
minor: Should this `-1.0` be a named constant, e.g. `HIGH_LIKELIHOOD_RANGE_THRESHOLD = 1.0`?
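The suggested refactor would look roughly like this. `HIGH_LIKELIHOOD_RANGE_THRESHOLD` is the reviewer's proposed name, and `ll_function` below is a simple stand-in for solidago's log-likelihood term (the `@njit` decorator is dropped so the sketch runs without numba):

```python
# Sketch of the named-constant refactor; ll_function is an illustrative
# stand-in, not the project's actual log-likelihood.
import numpy as np

HIGH_LIKELIHOOD_RANGE_THRESHOLD = 1.0  # reviewer-proposed constant name

def ll_function(theta_diff, r):
    # stand-in log-likelihood term for illustration only
    return np.log(np.cosh(theta_diff - r)).sum()

def f(delta, theta_diff, r, coord_indicator, ll_actual):
    # root of f marks where the log-likelihood term has increased by the threshold
    return (
        ll_function(theta_diff + delta * coord_indicator, r)
        - ll_actual
        - HIGH_LIKELIHOOD_RANGE_THRESHOLD
    )
```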
```python
@cached_property
def loss_increase_to_solve(self):
```
minor: Naming: `translated_negative_log_likelihood` would describe what it is, rather than what it is meant to be used for; also, it is a log-likelihood, not a loss.
Based on #1970
I struggled with the sign conventions, but I think I got something that works as expected.
TODO:
- review the definition
- @lenhoanglnh the paper suggests only considering the negative log-likelihood term to estimate the uncertainties. Don't we need to consider the regularization term too? That is what is done on this branch, because I observed very high values when it was not present.
- the test data need to be updated with new uncertainties, after some sanity checks on the actual values
- adapt the L-BFGS implementation to use the new uncertainties too (or split the tests)
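The regularization question in the TODO can be illustrated numerically: if the "+1 increase" is measured on the likelihood term alone, a nearly flat likelihood yields huge uncertainties, whereas including the regularization (prior) term bounds them. Both objectives below are stand-ins, not solidago's exact model.

```python
# Sketch: the prior's curvature shrinks the "+1 increase" uncertainty
# compared to using the negative log-likelihood term alone.
import numpy as np
from scipy.optimize import brentq, minimize_scalar

def nll(theta):
    # stand-in comparison likelihood term
    return np.log(np.cosh(theta - 0.5))

def regularized(theta):
    # same term plus a Gaussian-like regularization
    return nll(theta) + 0.5 * theta ** 2

def right_uncertainty(objective):
    theta_hat = minimize_scalar(objective).x
    target = objective(theta_hat) + 1.0
    return brentq(lambda d: objective(theta_hat + d) - target, 1e-9, 1e3)
```

Here `right_uncertainty(regularized) < right_uncertainty(nll)`, which matches the observation above that uncertainties were very high when the regularization term was omitted.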
Checklist