Peg-in-hole assembly with RL

AndrejOrsula/drl_omni_peg

Leveraging Procedural Generation for Learning Autonomous Peg-in-Hole Assembly in Space

This project focuses on learning autonomous peg-in-hole assembly with deep reinforcement learning, with a particular emphasis on enhancing generalization and adaptability through procedural generation and domain randomization.

SAC agent collecting experience during training.

DreamerV3 agent evaluated on the novel test set and assembly scenarios.

Overview


As a proof of concept, the environment logic and the training/evaluation pipelines are implemented in Rust. Blender's Geometry Nodes are used for the procedural generation of peg-in-hole modules via blr. NVIDIA Omniverse serves as the simulation backend through omniverse_rs, with pxr_rs providing USD-related utilities. The Gymnasium API is exposed by gymnasium_rs. Lastly, interfacing with Stable-Baselines3 and DreamerV3 is accomplished via Rust bindings that are automatically generated by pyo3_bindgen.

The workspace contains the following package:

  • drl_omni_peg: Peg-in-hole environment and RL training/evaluation pipelines

Instructions

Rust

Tip

You can install Rust and Cargo through your package manager or via https://rustup.rs.
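For example, on Linux you can install both via rustup with the following one-liner (see https://rustup.rs for the current instructions for your platform):

# Install Rust and Cargo via rustup
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh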

Generation of Procedural Peg-in-Hole Modules

The procedural generation of peg-in-hole modules is now available at AndrejOrsula/blr_procgen. Both train and test sets can be generated using separate binaries: generate_peg_in_hole_train and generate_peg_in_hole_test.
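Assuming you have cloned AndrejOrsula/blr_procgen, both binaries can be run with Cargo in the same way as the binaries of this repository (consult that repository for the available options and output paths):

# Generate the train set of peg-in-hole modules
cargo run --release --bin generate_peg_in_hole_train
# Generate the test set of peg-in-hole modules
cargo run --release --bin generate_peg_in_hole_test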

Random Agent

A random agent can be run either via random.rs or random_gymnasium.rs. The former uses the environment directly, while the latter goes through the Gymnasium API.

# Run random agent
cargo run --release --bin random
# (alternative) Run random agent through Gymnasium API
cargo run --release --bin random_gymnasium

Training and Evaluation of RL Agents

Each algorithm is implemented as a separate binary. You can edit the source code directly to modify the hyperparameters and change the training/evaluation pipeline. Pre-trained models are available for download here.

# ALGO in [dreamerv3, ppo, ppo_recurrent, sac, tqc, trpo]
cargo run --release --bin ALGO
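For example, to run the training/evaluation pipeline of the SAC agent:

# Train/evaluate the SAC agent
cargo run --release --bin sac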

Docker

To install Docker on your system and configure it with NVIDIA GPU support, you can run .docker/host/install_docker.bash as shown below.

.docker/host/install_docker.bash

Build Image

To build a new Docker image from Dockerfile, you can run .docker/build.bash as shown below.

.docker/build.bash ${TAG:-latest} ${BUILD_ARGS}

Run Container

To run the Docker container, you can use .docker/run.bash as shown below.

.docker/run.bash ${TAG:-latest} ${CMD}
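For example, assuming the image was built with the default latest tag, you can start an interactive shell inside the container:

# Start an interactive shell in the container
.docker/run.bash latest bash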

Run Dev Container

To run the Docker container in development mode (with the source code mounted as a volume), you can use .docker/dev.bash as shown below.

.docker/dev.bash ${TAG:-latest} ${CMD}

As an alternative, users familiar with Dev Containers can adjust the included .devcontainer/devcontainer.json to suit their needs. For convenience, the .devcontainer/open.bash script is available to open this repository as a Dev Container in VS Code.

.devcontainer/open.bash

Join Container

To join a running Docker container from another terminal, you can use .docker/join.bash as shown below.

.docker/join.bash ${CMD:-bash}

Citation

@inproceedings{orsula2024leveraging,
  author    = {Andrej Orsula and Matthieu Geist and Miguel Olivares-Mendez and Carol Martinez},
  title     = {{Leveraging Procedural Generation for Learning Autonomous Peg-in-Hole Assembly in Space}},
  year      = {2024},
  booktitle = {International Conference on Space Robotics (iSpaRo)},
}

License

This project is dual-licensed to be compatible with the Rust project, under either the MIT or Apache 2.0 licenses.