RLToolkit

RLToolkit is a flexible and highly efficient reinforcement learning framework, with implementations of DQN, AC, ACER, A2C, A3C, PG, DDPG, TRPO, PPO, SAC, TD3, and more.

---


Overview

RLToolkit is a flexible and highly efficient reinforcement learning framework, developed for practitioners with the following advantages:

  • Reproducible: provides implementations that stably reproduce the results of many influential reinforcement learning algorithms.

  • Extensible: build new algorithms quickly by inheriting the abstract classes in the framework.

  • Reusable: algorithms in the repository can be adapted to a new task simply by defining a forward network; the training mechanism is built automatically.

  • Elastic: computing resources can be allocated elastically and automatically on the cloud.

  • Lightweight: the core code is under 1,000 lines (see the Quick start demo below).

  • Stable: much more stable than Stable Baselines 3, thanks to the use of various ensemble methods.


Abstractions


RLToolkit aims to build an agent that can be trained to perform complex tasks. The main abstractions, introduced by PARL and used here to build an agent recursively, are the following:

Model

Model is the abstraction for the forward network: it defines a policy network or critic network that takes a state as input.
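
As an illustration, a minimal Model for a discrete-action task might look like the sketch below, assuming a PyTorch backend; the class name and layer sizes are illustrative, not the exact RLToolkit API.

import torch.nn as nn

# A small forward network acting as a Model (illustrative; PyTorch assumed).
class CartpoleModel(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128),
            nn.ReLU(),
            nn.Linear(128, act_dim),
        )

    def forward(self, obs):
        # Map a batch of states to Q-values (or action logits).
        return self.net(obs)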

Algorithm

Algorithm describes the mechanism for updating the parameters in a Model; it typically contains at least one Model.
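
To make this concrete, here is a DQN-style sketch of an Algorithm, again with illustrative names: it holds the Model (plus a frozen target copy) and defines the parameter update.

import copy
import torch
import torch.nn.functional as F

# A DQN-style Algorithm sketch (illustrative; not the exact RLToolkit API).
class DQN:
    def __init__(self, model, gamma=0.99, lr=1e-3):
        self.model = model
        self.target_model = copy.deepcopy(model)
        self.gamma = gamma
        self.optimizer = torch.optim.Adam(model.parameters(), lr=lr)

    def learn(self, obs, action, reward, next_obs, done):
        # One-step TD target computed with the frozen target network.
        with torch.no_grad():
            max_next_q = self.target_model(next_obs).max(dim=1).values
            target = reward + self.gamma * (1.0 - done) * max_next_q
        q = self.model(obs).gather(1, action.unsqueeze(1)).squeeze(1)
        loss = F.mse_loss(q, target)
        self.optimizer.zero_grad()
        loss.backward()
        self.optimizer.step()
        return loss.item()

    def sync_target(self):
        # Periodically copy the online weights into the target network.
        self.target_model.load_state_dict(self.model.state_dict())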

Agent

Agent, the data bridge between the environment and the algorithm, is responsible for data I/O with the environment and for preprocessing data before feeding it into the training process.
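
Continuing the sketch, an Agent wraps the Algorithm, handles exploration, and converts raw numpy data from the environment into tensors (illustrative API only):

import numpy as np
import torch

# An Agent sketch: the data bridge between environment and Algorithm.
class Agent:
    def __init__(self, algorithm, act_dim, epsilon=0.1):
        self.alg = algorithm
        self.act_dim = act_dim
        self.epsilon = epsilon

    def sample(self, obs):
        # Epsilon-greedy exploration around the greedy policy.
        if np.random.rand() < self.epsilon:
            return np.random.randint(self.act_dim)
        return self.predict(obs)

    def predict(self, obs):
        obs_t = torch.as_tensor(obs, dtype=torch.float32).unsqueeze(0)
        with torch.no_grad():
            return int(self.alg.model(obs_t).argmax(dim=1).item())

    def learn(self, obs, action, reward, next_obs, done):
        # Preprocess a replay batch (numpy -> tensors) before the update.
        return self.alg.learn(
            torch.as_tensor(obs, dtype=torch.float32),
            torch.as_tensor(action, dtype=torch.int64),
            torch.as_tensor(reward, dtype=torch.float32),
            torch.as_tensor(next_obs, dtype=torch.float32),
            torch.as_tensor(done, dtype=torch.float32),
        )

The three abstractions then compose recursively, e.g. agent = Agent(DQN(CartpoleModel(obs_dim=4, act_dim=2)), act_dim=2), which is the pattern the framework builds on.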

Supported Algorithms

RLToolkit implements model-free deep reinforcement learning (DRL) algorithms including DQN, Double DQN, A2C, VPG, TRPO, PPO, DDPG, TD3, and SAC; see the References section below for the full list.

(Figure: a non-exhaustive, but useful, taxonomy of algorithms in modern RL.)


Supported Envs

  • OpenAI Gym
  • Atari
  • MuJoCo
  • PyBullet

For the details of these DRL algorithms, check out the educational website OpenAI Spinning Up.
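
All of these environments expose the same reset/step interface. As a quick illustration, a random-action interaction loop might look like this (assuming the classic pre-v0.26 Gym API; newer Gym/Gymnasium versions return (obs, info) from reset() and a five-tuple from step()):

import gym

# Minimal random-interaction loop (classic Gym API assumed).
env = gym.make("CartPole-v1")
obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()  # random placeholder policy
    obs, reward, done, info = env.step(action)
env.close()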

Examples

If you want to learn more about deep reinforcement learning, please read the deep-rl-class materials and run the examples.

(Demo videos: Breakout; NeurIPS 2018 Half-Cheetah.)

Experimental Demos

Quick start

# into demo dirs
cd benchmark/quickstart/
# train
python train.py

DQN example

# into demo dirs
cd examples/tutorials/lesson3/DQN/
# train
python train.py

PPO Example

# into demo dirs
cd examples/tutorials/lesson5/ppo/  # hypothetical path: adjust to the repo's PPO demo directory
# train
python train.py

DDPG for Pendulum-v1

# into demo dirs
cd examples/tutorials/lesson5/ddpg/
# train
python train.py

...

Contributions

We welcome any contributions to the codebase, but we ask that you please do not submit/push code that breaks the tests. Also, please shy away from modifying the tests just to get your proposed changes to pass them. As it stands, the tests on their own are quite minimal (instantiating environments, training agents for one step, etc.), so if they're breaking, it's almost certainly a problem with your code and not with the tests.
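
For reference, a smoke test of the kind described above might look like the following sketch (hypothetical test; the actual suite's layout may differ):

import gym

def test_env_instantiates_and_steps():
    # Instantiate an environment and step it once, as the minimal tests do.
    env = gym.make("CartPole-v1")
    obs = env.reset()
    obs, reward, done, info = env.step(env.action_space.sample())
    env.close()
    assert obs is not None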

We're actively working on refactoring and trying to make the codebase cleaner and more performant as a whole. If you'd like to help us clean up some code, we'd strongly encourage you to also watch Uncle Bob's clean coding lessons if you haven't already.

References

  1. Deep Q-Network (DQN) (V. Mnih et al. 2015)
  2. Double DQN (DDQN) (H. Van Hasselt et al. 2015)
  3. Advantage Actor Critic (A2C) (V. Mnih et al. 2016)
  4. Vanilla Policy Gradient (VPG) (R. Sutton et al. 2000)
  5. Natural Policy Gradient (NPG) (S. Kakade et al. 2002)
  6. Trust Region Policy Optimization (TRPO) (J. Schulman et al. 2015)
  7. Proximal Policy Optimization (PPO) (J. Schulman et al. 2017)
  8. Deep Deterministic Policy Gradient (DDPG) (T. Lillicrap et al. 2015)
  9. Twin Delayed DDPG (TD3) (S. Fujimoto et al. 2018)
  10. Soft Actor-Critic (SAC) (T. Haarnoja et al. 2018)
  11. SAC with automatic entropy adjustment (SAC-AEA) (T. Haarnoja et al. 2018)

Citation

To cite this repository:

@misc{erl,
  author = {jianzhnie},
  title = {{RLToolkit}: An Easy Deep Reinforcement Learning Toolkit},
  year = {2022},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/jianzhnie/deep-rl-toolkit}},
}
