Setting correct decimated values when below threshold #8883
Conversation
I think it would be better to clear the decimated data in this case, because you'd expect the result to be the same as it would be with the same amount of data originally, right?
Indeed, it makes sense: even if there are 1M+ data points, given the current scale, if it is below the threshold it does not matter, as the range will only display a fraction of it. In that case, I then need to also reset the data function and clear _data/_decimated? That would make sense.
Edit: It does that for all datasets. I think it might be good to split the per-dataset part out of that, or duplicate it, because in this case you'd want to process only the current dataset.
Yep, on it.
Btw, feel free to squash when accepting (or I can do it on my side if you want) if you deem the PR acceptable.
You probably missed my edit on my last comment. We squash by default when merging, so no worries on that.
Whoopsie, didn't see the edit, will do a simple refactor.
Not sure if the change in the order is needed, but I don't think it makes any difference in any (real) scenario.
Indeed, the order was changed by my first commit, where I relied on _decimated; now that I clean it, the inversion is not needed and would needlessly perform an allocation in some cases. Want me to reorder? (Btw, sorry for the spam and back-and-forth.)
It's usually better to avoid unneeded changes. So if it's not too much trouble, yes please :)
Does not bother me, I like it when things move and don't stall for long periods :)
Saves a byte too!
Not a critical bug, but since one of the primary objectives of #8843 was to show real points when the graph allows it (especially when zooming), the correct data must be set when the visible range is below the resolution threshold.
Currently (when using zoom), if the scale contains fewer data points than the threshold, the algorithm exits without doing any decimation, and the chart then keeps using the last _decimated data instead of the data in the current range (or the raw data, if the dataset was small enough).
This commit aims to fix that.
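As a rough illustration of the fix being discussed (not the actual Chart.js source — the `_data`/`_decimated` names come from the conversation above, everything else is assumed), the idea is: when the visible point count falls below the threshold, don't just skip decimation, but also restore the raw data and clear the decimated state so stale decimated points aren't reused:

```javascript
// Restore the raw data and drop the decimated flag, so the chart
// renders real points again instead of a stale decimated copy.
function cleanDecimatedDataset(dataset) {
  if (dataset._decimated) {
    dataset.data = dataset._data;
    delete dataset._decimated;
  }
}

// Hypothetical per-dataset entry point; `decimate` stands in for
// whatever decimation algorithm is configured.
function maybeDecimate(dataset, visibleCount, threshold, decimate) {
  if (visibleCount <= threshold) {
    // Below the resolution threshold: skip decimation, but clear any
    // previously decimated data instead of leaving it in place.
    cleanDecimatedDataset(dataset);
    return;
  }
  dataset._decimated = true;
  dataset.data = decimate(dataset._data, threshold);
}
```

With this shape, zooming into a range with fewer points than the threshold shows the raw points for that range rather than the leftover decimated series.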