This repository has been archived by the owner on Oct 20, 2022. It is now read-only.

Out of Memory during Bruteforce #10

Open
telnemri opened this issue Jun 24, 2019 · 2 comments

Comments

@telnemri

Is it possible to generate a random subset of combinations (say, 10,000) instead of computing all possible combinations?

It's nice to have the option to test all possible combinations, but when the range is broad or the strategy has too many variables, there is no way to get Node to manage the memory usage, even when using the --max-old-space-size option.

Or could the memory usage be optimized when the combinations are calculated?

The overall use case would be to randomly brute-force a broad range of combinations, then reduce to a small range where all combinations can be tested.
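
A minimal sketch of what a random-sampling mode could look like, assuming the 'start:step:stop' range format used in the strategy settings; the function names here are hypothetical illustrations, not part of the tool:

```js
// Hypothetical sketch: sample a fixed number of random combinations
// instead of enumerating the full cartesian product.
function parseRange(spec) {
  const [start, step, stop] = spec.split(':').map(Number);
  const values = [];
  for (let v = start; v <= stop; v += step) values.push(v);
  return values;
}

function sampleCombinations(ranges, count) {
  const keys = Object.keys(ranges);
  const pools = keys.map((key) => parseRange(ranges[key]));
  const combos = [];
  for (let i = 0; i < count; i++) {
    const combo = {};
    keys.forEach((key, idx) => {
      const pool = pools[idx];
      combo[key] = pool[Math.floor(Math.random() * pool.length)];
    });
    combos.push(combo);
  }
  return combos; // only `count` combinations are ever held in memory
}

// e.g. 10,000 random combinations rather than every possible one
const sampled = sampleCombinations({ low: '15:1:40', high: '60:1:85' }, 10000);
```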

@nick-dolan
Owner

Hey @telnemri, thank you for the issue!
We could generate a new random combination from the settings at the start of each backtest, but I can't see any practical solution for then reducing to a small range where all combinations are executed: we would need to check every newly generated combination against the combinations already made, which leads to the same memory usage.
I'll think about possible solutions to optimize memory usage. If you have any ideas, feel free to suggest.
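
One possible direction on the memory side, sketched under the assumption that the combinations are currently materialized into one large array before the backtests run: a generator can yield them one at a time so only the current combination lives in memory (names here are illustrative, not the project's actual API):

```js
// Hypothetical sketch: lazily walk the cartesian product of parameter values
// so combinations are produced one at a time instead of stored up front.
function* cartesian(pools) {
  if (pools.length === 0) {
    yield [];
    return;
  }
  const [first, ...rest] = pools;
  for (const value of first) {
    for (const tail of cartesian(rest)) {
      yield [value, ...tail];
    }
  }
}

const pools = [
  [20, 22, 24, 26, 28, 30], // e.g. RSI low from '20:2:30'
  [70, 75, 80],             // e.g. RSI high from '70:5:80'
];

for (const combo of cartesian(pools)) {
  // run the backtest for `combo` here; nothing else needs to stay in memory
}
```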

@FreshLondon

FreshLondon commented Mar 2, 2020

If it helps, just increase the step integer.
For example, on RSI, if you wanted to check a low: range but were unsure where to start, you could use the step to test from 20 to 30 while only trying every second value:
low: '20:2:30', which produces only half as many backtests as low: '20:1:30'.

In some cases I have used low: '15:5:30', and then, once the approximate best value is found (for example, the CSV showed 20 was best overall), gone back to look for the sweet spot with low: '16:2:24'.
If the CSV then shows 18 as the best, try low: '17:1:19' to double-check the 18 and see whether the backtest was just finding the value closest to the sweet spot (which could really be a hidden 17).
Who knows, eh? These are just examples to help you narrow it down a bit, and of course you can use this in conjunction with step values on other variables too!

With the above method, rather than processing thousands of files we may only need to process hundreds to find the right result 👍
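
To make the arithmetic above concrete, here is a small hypothetical helper that mirrors the 'start:step:stop' notation and counts how many values (and therefore backtests per variable) each spec produces:

```js
// Hypothetical helper: count how many values a 'start:step:stop' spec expands to.
function countValues(spec) {
  const [start, step, stop] = spec.split(':').map(Number);
  return Math.floor((stop - start) / step) + 1;
}

console.log(countValues('20:1:30')); // 11 values
console.log(countValues('20:2:30')); // 6 values, roughly half the backtests
console.log(countValues('15:5:30')); // 4 values for a coarse first pass
```

Since the totals multiply across variables, using coarser steps on several variables at once shrinks the total number of backtests very quickly.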
