reshalfahsi/rocket-trajectory-optimization

Rocket Trajectory Optimization Using REINFORCE Algorithm


In the context of machine learning, reinforcement learning (RL) is a learning paradigm in which an agent learns by interacting with an environment. Recently, RL has been extensively studied and applied in the field of control theory. A classic control-theory problem is trajectory optimization, such as for spacecraft or rockets. In RL lingo, the rocket can be treated as the agent, and its environment is outer space, e.g., the surface of the moon. The environment is modeled as a Markov decision process (MDP). After taking an action, the agent receives a reward and observes the next state. The action is sampled from a policy distribution that is learned during training. One way to learn this policy is the REINFORCE algorithm: a policy gradient method that maximizes the expected return (cumulative reward), estimating it via Monte Carlo sampling of complete episodes. In practice, the gradient of the expected return serves as the learning signal used to update the policy parameters.
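The update rule described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the repository's implementation: it uses a hypothetical one-step environment (action 0 yields reward 1, action 1 yields reward 0) and a tabular softmax policy, whereas the actual project trains on a lunar-lander-style task. The Monte Carlo return computation and the `grad_log_pi * G` update are the core of REINFORCE.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits):
    z = logits - logits.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def discounted_returns(rewards, gamma=0.99):
    # Monte Carlo returns: G_t = sum_k gamma^k * r_{t+k}, computed backwards
    G, out = 0.0, []
    for r in reversed(rewards):
        G = r + gamma * G
        out.append(G)
    return list(reversed(out))

# Hypothetical one-step environment: action 0 -> reward 1, action 1 -> reward 0.
logits = np.zeros(2)  # tabular policy parameters
lr = 0.1
for _ in range(500):
    probs = softmax(logits)
    a = rng.choice(2, p=probs)            # sample action from the policy
    reward = 1.0 if a == 0 else 0.0
    G = discounted_returns([reward])[0]   # episode return (single step here)
    # Gradient of log pi(a) w.r.t. softmax logits: one_hot(a) - probs
    grad_log_pi = -probs
    grad_log_pi[a] += 1.0
    logits += lr * G * grad_log_pi        # REINFORCE ascent step

print(softmax(logits)[0])  # probability of the rewarded action, close to 1
```

After training, the policy concentrates almost all probability mass on the rewarded action, which is exactly the behavior the gradient of the expected return encourages.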

Experiment

To see the rocket in action, please go to the following link.

Result

Reward Curve

reward_curve
Reward curve over 6561 training episodes.

Qualitative Result

The qualitative result of the learned controller is shown below.

qualitative_rocket
The rocket successfully lands on the surface of the moon after hovering under the control of the policy learned via the REINFORCE algorithm.

Credit