rl_lib

Series of deep reinforcement learning algorithms 🤖

Motivation: I have always thought that the only way to truly test whether you understand a concept is to see if you can build it. As such, all of these algorithms were implemented by studying the relevant papers and coded to test my understanding.

“What I cannot create, I do not understand” - Richard Feynman

Algorithms

DQN

Policy Gradient

Tabular Solutions

These were primarily based on Colin Skow's excellent lecture series on YouTube [link], with a large part also drawn from the Deep Reinforcement Learning Udacity course.

  • Bellman Equation
  • Dynamic Programming
  • Q-learning (a minimal sketch follows this list)
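
For reference, the tabular update is the Bellman equation applied as a one-step TD update. Below is a minimal sketch, assuming a classic Gym-style discrete environment (`env.reset()` returning a state, `env.step()` returning four values); the names and hyperparameters are illustrative and not taken from the notebooks themselves.

```python
import numpy as np

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    # Tabular Q-learning for a discrete Gym-style environment
    Q = np.zeros((env.observation_space.n, env.action_space.n))
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Epsilon-greedy action selection
            if np.random.rand() < epsilon:
                action = env.action_space.sample()
            else:
                action = int(np.argmax(Q[state]))
            next_state, reward, done, _ = env.step(action)
            # Bellman update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            Q[state, action] += alpha * (
                reward + gamma * np.max(Q[next_state]) - Q[state, action]
            )
            state = next_state
    return Q
```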

Associated Articles

Results

DQN Pong

  • Converged to an average score of 17.56 after 1,300 episodes.
  • Code and results can be found under DQN/7. Vanilla DQN Atari.ipynb; a minimal sketch of the DQN loss is included below.
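
The core of vanilla DQN is the TD loss against a frozen target network. A minimal sketch is below, assuming PyTorch networks `q_net` and `target_net` and a batch of tensors; these names are placeholders, not the notebook's actual API.

```python
import torch
import torch.nn as nn

def dqn_td_loss(q_net, target_net, batch, gamma=0.99):
    # batch is assumed to hold tensors: states, actions (long), rewards,
    # next_states, and dones (0/1 floats)
    states, actions, rewards, next_states, dones = batch
    # Q(s, a) for the actions that were actually taken
    q_values = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    # Bootstrapped target from the frozen target network
    with torch.no_grad():
        next_q = target_net(next_states).max(dim=1).values
        targets = rewards + gamma * next_q * (1 - dones)
    return nn.functional.mse_loss(q_values, targets)
```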


DDPG Continuous

  • Converged to approximately -270 after 100 episodes.
  • Code and results can be found under Policy Gradient/4. DDPG.ipynb


PPO Discrete

  • Solved in 409 episodes.
  • Code and results can be found under Policy Gradient/5. PPO.ipynb; a sketch of PPO's clipped objective is included below.
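
At the heart of PPO is the clipped surrogate objective from the original paper. The sketch below assumes PyTorch tensors of per-action log-probabilities and advantages; the function name and arguments are illustrative, not the notebook's interface.

```python
import torch

def ppo_clipped_loss(log_probs_new, log_probs_old, advantages, clip_eps=0.2):
    # Probability ratio between the updated and old policies
    ratio = torch.exp(log_probs_new - log_probs_old)
    # Clipped surrogate objective (Schulman et al., 2017)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # Take the pessimistic minimum and negate, since optimisers minimise
    return -torch.min(unclipped, clipped).mean()
```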


PPO Atari - with Baseline Enhancements

  • Code and results can be found under PPO/


Todo

  • Curiosity-Driven Exploration
  • HER (Hindsight Experience Replay)
  • Recurrent networks in PPO and DDPG

Credits

Whilst I tried to code everything directly from the papers, it wasn't always easy to work out what I was doing wrong when an algorithm just wouldn't train or threw runtime errors. As such, I used the following repositories as references.