pedroscortes/computational-intelligence-ufes

Computational Intelligence - UFES

This repository was created as study material for the Computational Intelligence course at UFES, taught by Professor Renato Krohling.

In exercise 1, my goal was to minimize the functions F5 and F6 from the article "Swarm algorithms with chaotic jumps applied to noisy optimization problems" by Mendel, Krohling and Campos (2010), using algorithms studied in class such as the Genetic Algorithm, Differential Evolution, and Particle Swarm Optimization.

In exercise 2, I did the same for the functions G1 and G9 from the article "Constrained optimization based on modified differential evolution algorithm" by Mohamed and Sabry (2012).

In exercise 3, the objective was to fit the equation y = a + bx + cx² to real x and y data using the algorithms mentioned above.
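
Such a fit can be framed as minimizing the sum of squared errors over the coefficients (a, b, c). The sketch below uses made-up example data, not the assignment's real measurements, and a simple hill-climbing search in place of the full GA/DE/PSO machinery:

```python
import random

# Example data (illustrative; not the assignment's real x and y measurements)
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 9.2, 19.1, 33.0]   # roughly y = 1 + 2x^2

def sse(params):
    """Sum of squared errors of y = a + b*x + c*x^2 against the data."""
    a, b, c = params
    return sum((a + b * x + c * x * x - y) ** 2 for x, y in zip(xs, ys))

random.seed(0)
best = [random.uniform(-5, 5) for _ in range(3)]
best_f = sse(best)
for _ in range(20000):
    # hill-climbing step: perturb the incumbent and keep any improvement
    cand = [p + random.gauss(0, 0.1) for p in best]
    f = sse(cand)
    if f < best_f:
        best, best_f = cand, f
print(best, best_f)
```

Any of the population-based algorithms below can minimize the same `sse` objective; only the search procedure changes.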

In exercise 4 the objective was to train a neural network (multi-layer perceptron) for a time series prediction problem. Furthermore, we had to optimize the network's hyperparameters using an evolutionary algorithm.

In exercise 5, the objective was to implement from scratch, without using libraries (e.g. scipy, scikit-learn, scikit-optimize, pyswarm, etc.), a genetic algorithm (GA) with real-valued encoding, Gaussian mutation, and arithmetic crossover, as well as a standard Evolution Strategies (ES) algorithm. With these algorithms, the goal was to minimize mathematical functions subject to other functions acting as constraints. This exercise was inspired by the article "Constrained optimization based on modified differential evolution algorithm" by Mohamed and Sabry (2012).
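
One common way to fold constraints into the fitness of such algorithms is a static penalty function. This is only an illustrative sketch of that general idea (with a toy objective and constraint), not the constraint-handling scheme from the cited paper:

```python
def penalized(f, constraints, penalty=1e6):
    """Wrap objective f with a static penalty for violated constraints.
    Each constraint g in `constraints` is expected to satisfy g(x) <= 0
    when feasible; any positive value counts as a violation."""
    def wrapped(x):
        violation = sum(max(0.0, g(x)) for g in constraints)
        return f(x) + penalty * violation
    return wrapped

# Toy problem (illustrative): minimize x0^2 + x1^2 subject to x0 + x1 >= 1
f = lambda x: x[0] ** 2 + x[1] ** 2
g = lambda x: 1.0 - (x[0] + x[1])   # g(x) <= 0  <=>  x0 + x1 >= 1
fit = penalized(f, [g])
```

The wrapped `fit` can then be handed to the GA or ES unchanged: feasible points keep their true objective value, while infeasible ones are pushed out by the penalty term.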

Genetic Algorithm

A genetic algorithm is a computational method for optimization inspired by natural selection. It begins with a population of potential solutions, evaluates each solution's fitness, selects individuals based on their fitness scores, and then uses genetic operators like crossover and mutation to produce offspring. The offspring replace some individuals in the population, guiding it towards better solutions. This process iterates until a termination condition is met, such as finding a satisfactory solution. Genetic algorithms are effective for solving complex optimization problems by mimicking evolutionary principles to efficiently explore solution spaces and find optimal or near-optimal solutions.
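
A minimal real-coded GA along these lines (tournament selection, arithmetic crossover, Gaussian mutation, single-individual elitism) might look like the following sketch, with the sphere function standing in for an objective:

```python
import random

def sphere(x):
    """Stand-in objective: sum of squares, minimum 0 at the origin."""
    return sum(v * v for v in x)

def ga(fitness, dim=5, pop_size=40, gens=200, bounds=(-5.0, 5.0)):
    """Minimal real-coded GA: tournament selection, arithmetic crossover,
    Gaussian mutation, and elitism of the single best individual."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        new_pop = [pop[0][:]]                                # elitism
        while len(new_pop) < pop_size:
            p1 = min(random.sample(pop, 3), key=fitness)     # tournament
            p2 = min(random.sample(pop, 3), key=fitness)
            a = random.random()                              # arithmetic crossover
            child = [a * u + (1 - a) * v for u, v in zip(p1, p2)]
            child = [min(hi, max(lo, c + random.gauss(0, 0.1)))  # mutation
                     for c in child]
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=fitness)

random.seed(1)
best = ga(sphere)
print(sphere(best))
```

The arithmetic crossover and Gaussian mutation here match the operators required in exercise 5; population size, mutation scale, and tournament size are illustrative choices.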

Read more here.

Differential Evolution

The differential evolution (DE) algorithm is an optimization technique that iteratively evolves a population of candidate solutions by combining and perturbing existing ones using scaled difference vectors between population members. It is particularly suited to complex, nonlinear, multi-dimensional optimization problems. Unlike genetic algorithms, which often operate on binary strings or other encodings, differential evolution works directly with continuous parameter vectors. And whereas genetic algorithms rely on genetic operators such as crossover and mutation, differential evolution builds each new candidate by adding a weighted difference of existing solutions to a base vector and then crossing the result with the current individual. These properties make differential evolution well suited to optimization tasks involving continuous variables and high-dimensional spaces.
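
A sketch of the classic DE/rand/1/bin variant follows; the parameter names F (differential weight) and CR (crossover rate) follow the usual convention, and the sphere function is again a stand-in objective:

```python
import random

def sphere(x):
    """Stand-in objective: sum of squares, minimum 0 at the origin."""
    return sum(v * v for v in x)

def differential_evolution(fitness, dim=5, pop_size=30, gens=200,
                           F=0.5, CR=0.9, bounds=(-5.0, 5.0)):
    """Minimal DE/rand/1/bin: mutate with a scaled difference vector,
    apply binomial crossover, then greedy one-to-one replacement."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            r1, r2, r3 = random.sample(
                [j for j in range(pop_size) if j != i], 3)
            mutant = [pop[r1][d] + F * (pop[r2][d] - pop[r3][d])
                      for d in range(dim)]
            j_rand = random.randrange(dim)       # force at least one gene over
            trial = [min(hi, max(lo, mutant[d]))
                     if (random.random() < CR or d == j_rand) else pop[i][d]
                     for d in range(dim)]
            if fitness(trial) <= fitness(pop[i]):   # greedy selection
                pop[i] = trial
    return min(pop, key=fitness)

random.seed(2)
best = differential_evolution(sphere)
print(sphere(best))
```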

Read more here.

Particle Swarm Optimization

The particle swarm optimization (PSO) algorithm is a computational optimization technique inspired by the social behavior of bird flocking or fish schooling. It simulates the movement of a group of particles through a search space to find optimal solutions. Each particle represents a potential solution to the optimization problem, and they adjust their positions based on their own best-known position and the collective information shared among the swarm. By iteratively updating their positions according to simple rules, particles converge towards promising regions of the search space, ultimately leading to the discovery of optimal or near-optimal solutions. PSO is particularly effective for continuous optimization problems and has been successfully applied in various fields such as engineering, finance, and data science.
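
The velocity-and-position update described above can be sketched as a minimal global-best PSO; the inertia weight w and the cognitive/social coefficients c1 and c2 are conventional but illustrative settings:

```python
import random

def sphere(x):
    """Stand-in objective: sum of squares, minimum 0 at the origin."""
    return sum(v * v for v in x)

def pso(fitness, dim=5, swarm=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        bounds=(-5.0, 5.0)):
    """Minimal global-best PSO: each particle is pulled toward its own
    best position (cognitive term) and the swarm's best (social term)."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=fitness)[:]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i][:]
                if fitness(pbest[i]) < fitness(gbest):
                    gbest = pbest[i][:]
    return gbest

random.seed(3)
best = pso(sphere)
print(sphere(best))
```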

Read more here.

Multi-layer Perceptron (MLP)

A multi-layer perceptron (MLP) is a type of feedforward artificial neural network used to recognize patterns and make decisions. It consists of several layers of connected nodes, called neurons. The first layer receives input data, the last layer produces the output, and the layers in between are called hidden layers. Each neuron in a layer is connected to neurons in the next layer, and these connections have weights that are adjusted during training to improve the model's accuracy. As the network is trained, it learns to transform the input data into the correct output by adjusting these weights. MLPs are powerful because they can learn and model complex relationships in data.
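
The layer-by-layer transformation can be made concrete with a small forward-pass sketch. The weights here are random and purely illustrative; tanh hidden activations with a linear output are one common choice for regression and time-series problems, though not necessarily the one used in the exercise:

```python
import math
import random

def mlp_forward(x, weights, biases):
    """Forward pass of a fully connected MLP with tanh hidden activations
    and a linear output layer. weights[l] is a list of rows, one row of
    incoming weights per neuron in layer l."""
    a = x
    for layer, (W, b) in enumerate(zip(weights, biases)):
        z = [sum(w * v for w, v in zip(row, a)) + bi
             for row, bi in zip(W, b)]
        # linear output layer, tanh elsewhere
        a = z if layer == len(weights) - 1 else [math.tanh(v) for v in z]
    return a

# 2 inputs -> 3 hidden units -> 1 output, with random illustrative weights
random.seed(0)
sizes = [2, 3, 1]
weights = [[[random.gauss(0, 0.5) for _ in range(n_in)]
            for _ in range(n_out)]
           for n_in, n_out in zip(sizes, sizes[1:])]
biases = [[0.0] * n_out for n_out in sizes[1:]]
out = mlp_forward([0.5, -1.0], weights, biases)
print(out)
```

Training then amounts to adjusting `weights` and `biases` to reduce prediction error, whether by backpropagation or, as in exercise 4's hyperparameter search, with the help of an evolutionary algorithm.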

Read more here.

Evolution Strategies (ES)

Evolution Strategies (ES) is an optimization algorithm inspired by natural evolution, used for solving complex optimization problems. It involves a population of candidate solutions that evolve over generations through selection, mutation, and recombination. ES focuses on using mutations to explore the solution space and selecting the best-performing individuals based on a fitness function. This process iteratively improves the population, converging towards optimal solutions without relying heavily on gradient information, making it suitable for a wide range of optimization tasks.
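
One standard formulation is the (mu, lambda)-ES with self-adaptive step sizes, sketched below under that assumption (the sphere function again stands in for an objective; tau and the initial sigma are conventional illustrative choices):

```python
import math
import random

def sphere(x):
    """Stand-in objective: sum of squares, minimum 0 at the origin."""
    return sum(v * v for v in x)

def es_mu_lambda(fitness, dim=5, mu=5, lam=35, gens=200):
    """Minimal (mu, lambda)-ES with self-adaptation: each individual
    carries its own step size sigma, which is mutated log-normally
    before the genes are mutated with it."""
    tau = 1.0 / math.sqrt(dim)
    # population of (genes, sigma) pairs
    pop = [([random.uniform(-5, 5) for _ in range(dim)], 0.5)
           for _ in range(mu)]
    for _ in range(gens):
        offspring = []
        for _ in range(lam):
            genes, sigma = random.choice(pop)
            new_sigma = max(1e-8, sigma * math.exp(tau * random.gauss(0, 1)))
            child = [g + new_sigma * random.gauss(0, 1) for g in genes]
            offspring.append((child, new_sigma))
        offspring.sort(key=lambda ind: fitness(ind[0]))
        pop = offspring[:mu]        # comma selection: parents are discarded
    return pop[0][0]

random.seed(4)
best = es_mu_lambda(sphere)
print(sphere(best))
```

The comma selection discards parents each generation, which, together with the self-adapted sigma, is what lets the step size shrink automatically as the population closes in on an optimum.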

Read more here.
