Reach out


This repository is an implementation of a DDPG agent for the continuous-control project of the Udacity Deep Reinforcement Learning Nanodegree, using the Reacher environment provided by Unity.

(Demo GIF: the trained agent controlling the double-jointed arm in the Reacher environment)

Table of Contents

- Environment
- Getting started
- Usage
- Credits
- License

Environment

In this environment, a double-jointed arm can move to target locations. A reward of +0.1 is provided for each step that the agent's hand is in the goal location. Thus, the goal of your agent is to maintain its position at the target location for as many time steps as possible.

The observation space consists of 33 variables corresponding to the position, rotation, velocity, and angular velocities of the arm. Each action is a vector of four numbers, corresponding to the torques applicable to the arm's two joints. Every entry in the action vector should be a number between -1 and 1.

The task is episodic, and in order to solve the environment, your agent must get an average score of +30 over 100 consecutive episodes.
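To make these dimensions concrete, below is a minimal sketch of a random agent interacting with the environment through the unityagents package used by the nanodegree. The build path is an assumption; replace it with the file you extracted for your OS.

```python
from unityagents import UnityEnvironment
import numpy as np

# Path is an assumption: point it at the build you extracted for your OS.
env = UnityEnvironment(file_name="./Reacher.app")
brain_name = env.brain_names[0]

env_info = env.reset(train_mode=False)[brain_name]
state = env_info.vector_observations[0]          # 33-dimensional observation
score, done = 0.0, False
while not done:
    action = np.clip(np.random.randn(4), -1, 1)  # four torques in [-1, 1]
    env_info = env.step(action)[brain_name]
    score += env_info.rewards[0]                 # +0.1 per step on target
    done = env_info.local_done[0]
print(f"Episode score: {score:.2f}")             # solved: average of 30+ over 100 episodes
env.close()
```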

Getting started

Prerequisites

This project requires Python 3.6 or newer, with pip available to install the dependencies below.

Installation

You can install the project requirements as follows:

git clone https://github.com/frgfm/drlnd-p2-continuous-control.git
cd drlnd-p2-continuous-control
pip install -r requirements.txt

Download the Reacher environment build corresponding to your OS, then extract the archive in the project folder.

If you wish to use the agent trained by the repository owner, you can download the model parameters as follows:

wget https://github.com/frgfm/drlnd-p2-continuous-control/releases/download/v0.1.0/ddpg_actor.pt

Usage

Training

All training arguments can be found using the --help flag:

python train.py --help

Below is an example of how to train your agent:

python train.py --deterministic --no-graphics

Evaluation

You can use an existing model's checkpoint to evaluate your agent as follows:

python evaluate.py --checkpoint ./ddpg_actor.pt
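If you prefer to load the released weights by hand rather than through evaluate.py, a sketch could look like the following. The Actor import path and its constructor signature are assumptions for illustration, not this repository's exact API.

```python
import torch
from ddpg import Actor  # hypothetical module and class name

# State and action sizes match the Reacher environment described above.
actor = Actor(state_size=33, action_size=4)
actor.load_state_dict(torch.load("./ddpg_actor.pt", map_location="cpu"))
actor.eval()

state = torch.randn(1, 33)               # placeholder for a real observation
with torch.no_grad():
    action = actor(state).clamp(-1, 1)   # torques fed back to the environment
```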

Credits

This implementation is largely based on the following paper:

- Continuous control with deep reinforcement learning (DDPG), Lillicrap et al., 2015 (arXiv:1509.02971)
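As a quick summary of the paper's core idea, the sketch below shows one DDPG learning step: the critic regresses toward a bootstrapped TD target, the actor follows the deterministic policy gradient, and the target networks are soft-updated. The hyperparameter values and function signature are assumptions for illustration, not this repository's exact code.

```python
import torch
import torch.nn.functional as F

GAMMA, TAU = 0.99, 1e-3  # assumed discount factor and soft-update rate

def ddpg_update(actor, actor_target, critic, critic_target,
                actor_opt, critic_opt, batch):
    """One DDPG learning step on a sampled replay batch (hypothetical helper)."""
    states, actions, rewards, next_states, dones = batch

    # Critic: regress Q(s, a) toward the bootstrapped TD target.
    with torch.no_grad():
        next_actions = actor_target(next_states)
        q_target = rewards + GAMMA * critic_target(next_states, next_actions) * (1 - dones)
    critic_loss = F.mse_loss(critic(states, actions), q_target)
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    # Actor: maximize Q(s, actor(s)), i.e. the deterministic policy gradient.
    actor_loss = -critic(states, actor(states)).mean()
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()

    # Soft-update the target networks toward the online networks.
    for target, online in ((actor_target, actor), (critic_target, critic)):
        for t_param, param in zip(target.parameters(), online.parameters()):
            t_param.data.copy_(TAU * param.data + (1.0 - TAU) * t_param.data)
```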

License

Distributed under the MIT License. See LICENSE for more information.