
Using FST-PSO for discrete optimization ? #15

Open
CharlyEmpereurmot opened this issue Mar 20, 2020 · 7 comments

Comments

@CharlyEmpereurmot
Contributor

CharlyEmpereurmot commented Mar 20, 2020

Hello Marco !

I now have a new optimization problem, which is very computationally demanding (several biological systems in different MD simulations at each evaluation during optimization) and most probably does NOT require using a continuous search space.

For example, a given parameter to be optimized, X(i), would vary within say [0, 1], but I would like it to adopt only values chosen from set_X(i) = {0, 0.05, 0.10, 0.15, ..., 0.95, 1}. And each parameter to be optimized, X(n), would have its own set_X(n) = {whatever discrete domain}.

I have read a little, and I understand that PSO and FST-PSO are designed for continuous domains, that their efficiency is tested and guaranteed only for continuous domains, and that some PSO variants have been developed specifically for discrete domains. Other algorithms might also be helpful, but for several reasons I trust FST-PSO more :) Notably because I believe gradient descent or genetic algorithms might not perform too well on the problem at hand, and FST-PSO nicely solved my previous optimization needs.

What would it take to adapt FST-PSO for discrete optimization? How relevant would it be?

I'm thinking, for example, of the following:

  1. A vector of discrete domains of the form [set_X(1), set_X(2), ..., set_X(n)] is provided along with the dimensions of the search space and is used to round the parameters (I'm not sure at which step exactly; maybe this can be done in the eval_function)
    AND
  2. The score of each evaluation is stored and reused if the same set of parameters (each rounded to the nearest discrete value) is seen again during optimization
    (this point 2 is very optional and could be left to the user in the eval_function as a speed-up, although with, say, 50 parameters it would mean querying a database with a 50-value key to retrieve the previously calculated fitness for a given set of discrete parameters)
    AND
  3. Fuzzy reasoning, velocities, etc. still move freely over a continuous space
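For what it's worth, points 1 and 2 could be sketched as a thin wrapper around the evaluation function, with the optimizer still moving over the continuous box (point 3). Everything below is hypothetical glue code of my own, not FST-PSO API: `discrete_sets` and `expensive_fitness` are placeholder names.

```python
# Sketch of points 1.-2.: snap each continuous coordinate to the nearest
# value in its discrete set, and memoize fitness values for snapped vectors.
# `discrete_sets` and `expensive_fitness` are hypothetical placeholders.

def make_discrete_eval(discrete_sets, expensive_fitness):
    cache = {}  # maps snapped parameter tuples to previously computed scores

    def snap(x):
        # Round every coordinate to the nearest allowed discrete value.
        return tuple(min(s, key=lambda v: abs(v - xi))
                     for xi, s in zip(x, discrete_sets))

    def eval_function(x):
        key = snap(x)
        if key not in cache:            # point 2: reuse stored scores
            cache[key] = expensive_fitness(key)
        return cache[key]

    return eval_function, cache

# Toy usage: two parameters on {0, 0.05, ..., 1} and {1, 2, 3}.
sets = [[i * 0.05 for i in range(21)], [1, 2, 3]]
calls = []
f, cache = make_discrete_eval(sets, lambda p: (calls.append(p), sum(p))[1])
print(f([0.07, 2.4]))  # snapped to (0.05, 2) -> 2.05
print(f([0.06, 1.8]))  # same snapped point: cached, no new evaluation
print(len(calls))      # 1
```

Note the cache key is the snapped tuple itself, so the "50-key lookup" of point 2 collapses to a single dictionary lookup.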

How relevant would this be?
Or maybe adapting PSO in these kinds of ways just makes no sense...
I would be very glad to hear your take on this! :)

@aresio
Owner

aresio commented Apr 9, 2020

Hi Charlie, due to the coronavirus outbreak I am a bit overwhelmed by additional duties (e.g., transitioning to online courses and exams). I will get back to your idea asap!

@aresio
Owner

aresio commented Nov 20, 2020

Hi Charlie,

I created a separate branch for discrete FST-PSO (dfstpso). The feature is still in alpha stage, so it has not been added to the master branch yet.

Let me know if you want to play with it, I will explain the API to you.

Cheers,

Marco

@CharlyEmpereurmot
Contributor Author

Hello Marco,

Thank you for coming back to this, I'm really looking forward to testing it.
It would be delightful if you could quickly explain the API :)

Cheers 👍

@aresio
Owner

aresio commented Nov 20, 2020

I tried to keep FST-PSO 2's API as simple and immediate as the first version's. Hence, this is how it works:

```python
FP = FuzzyPSO()
FP.set_search_space_discrete(list_of_symbols_lists)
FP.set_fitness(fitness_function)
_, best_fitness, best_solution = FP.solve_with_fstpso()
```

As you can see, everything is basically identical to FST-PSO. What changes is the new set_search_space_discrete method, which enables discrete FST-PSO. In particular, if you want to perform discrete optimization, you have to provide a list of lists, list_of_symbols_lists, which contains (for each variable) the discrete symbols that can be used. One example is:

```python
list_of_symbols_lists = [[True, False], [0, 1, 2], ['a', 'b', 'c', 'd']]
```

In this case, the problem has 3 variables/dimensions. The first variable can take boolean values; the second variable can take the integer values 0, 1, and 2; the third variable can be one of those four letters. Any sublist composed of discrete elements is valid. Of course, the fitness_function must be able to calculate a fitness value using these symbols.
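For instance, a fitness function for the three-variable example above just needs to accept one symbol per dimension and return a number. This toy scoring function is my own illustration, not part of the library:

```python
# Toy fitness for the example search space above: one boolean, one integer
# in {0, 1, 2}, and one letter in {'a', 'b', 'c', 'd'}. Lower is better.
def fitness_function(solution):
    flag, level, letter = solution
    score = 0.0
    score += 0.0 if flag else 1.0   # prefer True
    score += abs(level - 2)         # prefer the highest level
    score += 'abcd'.index(letter)   # prefer 'a'
    return score

print(fitness_function([True, 2, 'a']))   # best candidate -> 0.0
print(fitness_function([False, 0, 'd']))  # worst candidate -> 6.0
```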

Internally, FST-PSO 2 converts the discrete problem into the real-valued problem of parameterizing a generative probabilistic model able to produce high-quality solutions. The algorithm returns a tuple with three values: the optimal structure of the probabilistic model (not really interesting; I am considering removing this information); the fitness value of the best individual found; and the best discrete solution found.
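To make the "real-valued parameterization of a generative model" idea concrete, here is one common way such a mapping can look: a softmax over per-dimension weights that defines a categorical sampler. This is purely illustrative and NOT necessarily what FST-PSO 2 does internally:

```python
import math
import random

# Illustrative only: turn a real-valued vector into a generative model over
# discrete symbols via per-dimension softmax weights. Large weights make the
# corresponding symbol a near-deterministic choice.
def sample_discrete(real_params, symbol_lists, rng=random.Random(0)):
    solution, i = [], 0
    for symbols in symbol_lists:
        weights = real_params[i:i + len(symbols)]
        i += len(symbols)
        exp_w = [math.exp(w) for w in weights]
        total = sum(exp_w)
        probs = [e / total for e in exp_w]              # softmax
        solution.append(rng.choices(symbols, weights=probs)[0])
    return solution

space = [[True, False], [0, 1, 2], ['a', 'b', 'c', 'd']]
# 2 + 3 + 4 = 9 real parameters, one weight per symbol.
params = [9, 0,  0, 0, 9,  9, 0, 0, 0]
print(sample_discrete(params, space))  # almost surely [True, 2, 'a']
```

Under this view, the continuous optimizer tunes the 9 real weights, while fitness is evaluated on the discrete solutions the model generates.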

Please let me know if you need additional information.

Cheers!

@CharlyEmpereurmot
Contributor Author

Thank you very much for these explanations. I'm sure I will have good use cases for this algo :)

So now I'm wondering whether you have already benchmarked its performance, and whether I could have a look at a paper/pre-print?

I will let you know how it goes as soon as I start using it 👍
Cheers indeed !

@aresio
Owner

aresio commented Nov 30, 2020

Not yet, we are beginning the tests right now!

@aresio
Owner

aresio commented Jan 8, 2021

Hi Charly,

This is to let you know that I pushed a new version of the discrete algorithm, because I identified a couple of bugs. The new version seems to be even more effective than before.
