rl_lib

Motivation: I have always thought that the only way to truly test whether you understand a concept is to see if you can build it. As such, all of these algorithms were implemented by studying the relevant papers and coding them up to test my understanding.

“What I cannot create, I do not understand” - Richard Feynman

Algorithms

DQN

Policy Gradient

Tabular Solutions

These were mainly referenced from an excellent lecture series by Colin Skow on YouTube [link]. A large part also comes from the Udacity Deep Reinforcement Learning course.

  • Bellman Equation
  • Dynamic Programming
  • Q-learning (see the sketch below)
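
For reference, the core of tabular Q-learning is the off-policy TD update toward the Bellman optimality target. The snippet below is a minimal sketch of that loop; the environment choice, hyperparameters, and epsilon-greedy exploration are illustrative assumptions (and it assumes the classic gym API), not values taken from the notebooks.

```python
import numpy as np
import gym  # assumes the classic gym API: reset() -> state, step() -> (s', r, done, info)

env = gym.make("FrozenLake-v1")  # illustrative discrete environment
Q = np.zeros((env.observation_space.n, env.action_space.n))
alpha, gamma, epsilon = 0.1, 0.99, 0.1  # illustrative hyperparameters

for episode in range(5000):
    state = env.reset()
    done = False
    while not done:
        # epsilon-greedy exploration
        if np.random.rand() < epsilon:
            action = env.action_space.sample()
        else:
            action = int(np.argmax(Q[state]))

        next_state, reward, done, _ = env.step(action)

        # Q-learning (off-policy TD) update toward the Bellman optimality target
        td_target = reward + gamma * np.max(Q[next_state]) * (not done)
        Q[state, action] += alpha * (td_target - Q[state, action])
        state = next_state
```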

Associated Articles

Results

DQN Pong

  • Converged to an average score of 17.56 after 1,300 episodes.
  • Code and results can be found under DQN/7. Vanilla DQN Atari.ipynb
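
The heart of the DQN update is the one-step TD target computed with a frozen target network. The snippet below is a minimal PyTorch sketch of that loss; the network objects and batch layout are illustrative assumptions rather than the exact code in the notebook.

```python
import torch
import torch.nn.functional as F

def dqn_loss(online_net, target_net, batch, gamma=0.99):
    """Minimal sketch of the DQN TD-error loss. `batch` is assumed to hold tensors
    sampled from a replay buffer; `actions` is assumed to be a long tensor."""
    states, actions, rewards, next_states, dones = batch

    # Q(s, a) from the online network for the actions actually taken
    q_values = online_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)

    # One-step TD target from the frozen target network: r + gamma * max_a' Q_target(s', a')
    with torch.no_grad():
        next_q = target_net(next_states).max(dim=1).values
        td_target = rewards + gamma * next_q * (1.0 - dones)

    # Huber loss is commonly used for stability; plain MSE also works
    return F.smooth_l1_loss(q_values, td_target)
```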


DDPG Continuous

  • Converged to approximately -270 after 100 episodes.
  • Code and results can be found under Policy Gradient/4. DDPG.ipynb
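
Two pieces that make DDPG work are the deterministic actor objective (maximise Q(s, pi(s))) and Polyak averaging of the target networks. Below is a minimal PyTorch sketch of both; the function names, critic signature, and tau value are illustrative assumptions, not the notebook's exact implementation.

```python
import torch

def soft_update(target_net, online_net, tau=0.005):
    """Polyak averaging of target-network parameters, as used in DDPG.
    `tau` here is an illustrative value."""
    with torch.no_grad():
        for target_param, param in zip(target_net.parameters(), online_net.parameters()):
            target_param.mul_(1.0 - tau).add_(tau * param)

def ddpg_actor_loss(actor, critic, states):
    """Deterministic policy-gradient objective: maximise Q(s, pi(s)) by
    minimising its negation. Assumes the critic takes (states, actions)."""
    return -critic(states, actor(states)).mean()
```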


PPO Discrete

  • Solved in 409 episodes
  • Code and results can be found under Policy Gradient/5. PPO.ipynb
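
The defining piece of PPO is the clipped surrogate objective from Schulman et al. (2017). The snippet below is a minimal PyTorch sketch of that loss; the clipping coefficient shown is the common default, not necessarily the value used in the notebook.

```python
import torch

def ppo_clip_loss(new_log_probs, old_log_probs, advantages, clip_eps=0.2):
    """Clipped surrogate objective. Inputs are assumed to be 1-D tensors
    collected over a batch of (state, action) pairs."""
    # Probability ratio pi_theta(a|s) / pi_theta_old(a|s)
    ratio = torch.exp(new_log_probs - old_log_probs)

    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages

    # Take the pessimistic (minimum) objective and negate it for gradient descent
    return -torch.min(unclipped, clipped).mean()
```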


PPO Atari - with Baseline Enhancements

  • Code and results can be found under PPO/


Todo

  • Curiosity-Driven Exploration
  • HER (Hindsight Experience Replay)
  • Recurrent networks in PPO and DDPG

Credits

Whilst I tried to code everything directly from the papers, it wasn't always easy to work out what I was doing wrong when an algorithm just wouldn't train or I hit runtime errors. As such, I used the following repositories as references.