Enable flexible random initialization of fitness function at each generation #5

Open · wants to merge 1 commit into master
Conversation

mschmidt87 (Contributor)
This PR enables flexible initialization of the fitness function at each generation of the algorithm. This is necessary if, for example, one wants to evaluate each generation on a different random sample of the task.

It is achieved by adding the option to pass a wrapper around the fitness function to the optimize function; the wrapper re-initializes the fitness function at each generation.

Such a wrapper could, for instance, be created like this:

from functools import partial

import numpy as np


def func_wrapper(fitness, input_generator):
    # Draw a fresh input sample and bind it to the fitness function.
    input_spikes = next(input_generator)
    fitness_i = partial(fitness, input_spikes=input_spikes)
    return fitness_i


def generate_input_spikes(num_steps, params, seed=78293327):
    # Yield one random input-spike sample per generation;
    # input_spike_sample is the task-specific sampling routine.
    rng = np.random.RandomState(seed=seed)
    for _ in range(num_steps):
        yield input_spike_sample(params, rng)


input_generator = generate_input_spikes(100, params)

func_wrapper_part = partial(func_wrapper, input_generator=input_generator)

result = optimize(fitness, func_wrapper=func_wrapper_part)

Here we create a generator that yields a random sample of input spikes for some task for each of 100 generations. This generator is bound to the fitness-function wrapper via partial. Inside the optimize function, the wrapper is called at every generation; it draws a new sample of input spikes from the generator and initializes the fitness function with it.
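
For context, a minimal sketch of how the main loop inside optimize might use such a wrapper; the loop structure and all names except func_wrapper are assumptions for illustration, not the actual implementation:

def optimize(fitness, func_wrapper=None, num_generations=100):
    # Hypothetical skeleton: if a wrapper was supplied, use it to
    # re-initialize the fitness function once per generation.
    for generation in range(num_generations):
        if func_wrapper is not None:
            fitness_i = func_wrapper(fitness)
        else:
            fitness_i = fitness
        # ... evaluate fitness_i on the current population and
        # update the search distribution ...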

Commit: … optimize function to enable flexible initialization of the fitness at each generation
jakobj (Owner) commented May 17, 2018

I like your solution via an initialization wrapper! How would you implement a scenario where each individual receives a different input? Not that it makes sense for this particular example, but there might be cases where you want to do this.

mschmidt87 (Contributor, Author)

To achieve this, one could make the fitness function draw its input at runtime, using a random seed that is unique to each individual. Consider the following example:

import multiprocessing as mp

import numpy as np


def func_i(x):
    # Derive the seed from the parameters so that each individual
    # draws its own input; rng.rand() stands in for a real evaluation.
    rng = np.random.RandomState(seed=int(1e3 * np.sum(np.abs(x))))
    return rng.rand()


population_size = 10
mu = np.zeros(population_size)
sigma = np.ones(population_size)
s = np.random.normal(0, 1, size=(population_size, *np.shape(mu)))
z = mu + sigma * s

parallel_threads = 5
pool = mp.Pool(processes=parallel_threads)
fitness = np.fromiter(pool.map(func_i, z), float)
pool.close()
pool.join()

print(fitness)

The clear disadvantage of this approach is that the user has to take care that the seeds of different individuals are indeed different, to avoid spurious correlations between their inputs.

However, the advantage is that it keeps the optimize function more general, because the user does not have to make their fitness function accept the additional generation and individual arguments.
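
One way to reduce the risk of seed collisions while keeping this generality is to derive the seed from a hash of the raw parameter bytes rather than from their sum. This is only a sketch, and seed_from_params is a hypothetical helper, not part of this PR:

import hashlib

import numpy as np


def seed_from_params(x):
    # Hash the parameter bytes to a 32-bit seed; unlike
    # int(1e3 * np.sum(np.abs(x))), distinct parameter vectors are
    # vanishingly unlikely to collide.
    digest = hashlib.sha256(np.asarray(x, dtype=np.float64).tobytes()).digest()
    return int.from_bytes(digest[:4], 'little')


def func_i(x):
    rng = np.random.RandomState(seed=seed_from_params(x))
    return rng.rand()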

One option to improve #4 would be to make the optimize function check whether the arguments 'individual' and/or 'generation' are defined for the fitness function, and if not, provide them as dummy arguments, like so:

import inspect


def fitness_with_ind(x, individual):
    return x + individual


def fitness(x):
    return x


def optimize(func):
    # If the fitness function does not accept an 'individual'
    # argument, wrap it so that it ignores one.
    # (inspect.getargspec is deprecated; inspect.signature replaces it.)
    if 'individual' in inspect.signature(func).parameters:
        func_i = func
    else:
        def func_i(z, individual):
            return func(z)

    print(func_i(10, 10))


optimize(fitness_with_ind)  # prints 20
optimize(fitness)           # prints 10

Then one could allow the user to omit these arguments from the fitness function entirely. Combining this with the seeding idea above gives a fitness function that receives the individual and generation indices and uses them for reproducible input sampling, as sketched below.
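
A minimal sketch; the (generation, individual) seeding scheme and the toy fitness computation are assumptions for illustration, not part of this PR:

import numpy as np


def fitness(x, individual, generation):
    # Seed from (generation, individual) so every individual in every
    # generation sees its own reproducible input sample.
    rng = np.random.RandomState(seed=generation * 10000 + individual)
    input_sample = rng.rand(np.size(x))
    return float(np.sum(np.abs(x - input_sample)))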
