RLToolkit is a flexible and highly efficient reinforcement learning framework. RLToolkit ([website](https://github.com/jianzhnie/deep-rl-toolkit)) is developed for practitioners and offers the following advantages:
- **Reproducible.** We provide algorithms that stably reproduce the results of many influential reinforcement learning algorithms.
- **Extensible.** Build new algorithms quickly by inheriting the abstract classes in the framework.
- **Reusable.** Algorithms provided in the repository can be adapted directly to a new task by defining a forward network; the training mechanism is built automatically.
- **Elastic.** Computing resources can be allocated elastically and automatically on the cloud.
- **Lightweight.** The core code is under 1,000 lines (check the Demo).
- **Stable.** Much more stable than Stable Baselines 3, thanks to various ensemble methods.
- Overview
- Table of Contents
- Abstractions
- Supported Algorithms
- Supported Envs
- Examples
- Experimental Demos
- Contributions
- References
- Citation
## Abstractions

RLToolkit is built around an agent that is trained to perform complex tasks. Following PARL's design, the main abstractions used to build an agent recursively are the following:
- **Model** constructs the forward network that defines a policy network or critic network, taking the state as input.
- **Algorithm** describes the mechanism for updating the parameters in a Model, and often contains at least one model.
- **Agent** is a data bridge between the environment and the algorithm: it is responsible for data I/O with the outside environment and for preprocessing data before it is fed into the training process.
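To make the composition concrete, here is a minimal, illustrative sketch of how the three abstractions fit together for a DQN-style setup. It is written in plain PyTorch rather than against RLToolkit's actual base classes, and the class and method names (`CartPoleModel`, `DQNAlgorithm.learn`, `DQNAgent.predict`) are assumptions for illustration only; see the tutorials under `examples/tutorials/` for the real interfaces.

```python
import torch
import torch.nn as nn


class CartPoleModel(nn.Module):
    """Model: the forward network mapping a state to action values."""

    def __init__(self, obs_dim: int, act_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, act_dim))

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


class DQNAlgorithm:
    """Algorithm: owns the model(s) and defines how their parameters are updated."""

    def __init__(self, model: nn.Module, lr: float = 1e-3, gamma: float = 0.99):
        self.model = model
        self.gamma = gamma
        self.optimizer = torch.optim.Adam(model.parameters(), lr=lr)

    def learn(self, obs, action, reward, next_obs, done):
        # One-step TD update; a full implementation would also keep a target network.
        q_pred = self.model(obs).gather(1, action.long().unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            q_target = reward + self.gamma * (1 - done) * self.model(next_obs).max(1).values
        loss = nn.functional.mse_loss(q_pred, q_target)
        self.optimizer.zero_grad()
        loss.backward()
        self.optimizer.step()
        return loss.item()


class DQNAgent:
    """Agent: the data bridge that preprocesses observations and talks to the algorithm."""

    def __init__(self, algorithm: DQNAlgorithm):
        self.alg = algorithm

    def predict(self, obs) -> int:
        obs = torch.as_tensor(obs, dtype=torch.float32).unsqueeze(0)
        return int(self.alg.model(obs).argmax(dim=1).item())


# Recursive composition: the Agent wraps the Algorithm, which wraps the Model.
agent = DQNAgent(DQNAlgorithm(CartPoleModel(obs_dim=4, act_dim=2)))
```

With this split, reusing the algorithm on a new task only means swapping in a different forward network (the "Reusable" point above), while a new algorithm is added by writing another `Algorithm`-style class (the "Extensible" point).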
## Supported Algorithms

RLToolkit implements the model-free deep reinforcement learning (DRL) algorithms listed in the References section below, covering value-based methods (DQN and its variants), policy-gradient methods, and actor-critic methods.

*Figure: a non-exhaustive, but useful taxonomy of algorithms in modern RL.*
## Supported Envs

- OpenAI Gym
- Atari
- MuJoCo
- PyBullet
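These suites are typically driven through the standard Gym interface; the snippet below is a minimal sketch of creating and stepping an environment. Note that Atari, MuJoCo, and PyBullet need their own extras installed (e.g. `gym[atari]`, `mujoco`, or `pybullet`), and that the exact Gym version RLToolkit targets is not stated here, so the reset/step return signatures may differ in your install.

```python
import gym

# Classic-control task from the base Gym install; Atari/MuJoCo/PyBullet ids
# become available once the corresponding extras are installed.
env = gym.make("CartPole-v1")

obs = env.reset()                                # gym>=0.26 instead returns (obs, info)
done = False
while not done:
    action = env.action_space.sample()           # random action, just to exercise the env
    obs, reward, done, info = env.step(action)   # gym>=0.26 returns 5 values (terminated/truncated)
env.close()
```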
For details of these DRL algorithms, please check out the educational webpage OpenAI Spinning Up. If you want to learn more about deep reinforcement learning, please read the deep-rl-class and run the examples below.
## Examples

### Quick start

```shell
# enter the demo directory
cd benchmark/quickstart/
# train
python train.py
```

### DQN example

```shell
# enter the demo directory
cd examples/tutorials/lesson3/DQN/
# train
python train.py
```

### PPO example

```shell
# enter the demo directory
cd examples/tutorials/lesson3/DQN/
# train
python train.py
```

### DDPG for Pendulum-v1

```shell
# enter the demo directory
cd examples/tutorials/lesson5/ddpg/
# train
python train.py
```

...
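Each `train.py` is specific to its lesson, so the following is only a rough, hypothetical sketch of the pattern the DQN-style examples follow (epsilon-greedy exploration, a replay buffer, and one-step TD updates). Hyperparameters and names are illustrative, and the real scripts organize this logic behind the Model/Algorithm/Agent abstractions described above rather than inlining it.

```python
"""Rough, hypothetical sketch of a DQN-style train.py; not the actual RLToolkit script."""
import random
from collections import deque

import gym
import torch
import torch.nn as nn

env = gym.make("CartPole-v1")
obs_dim = env.observation_space.shape[0]
act_dim = env.action_space.n

q_net = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, act_dim))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
buffer = deque(maxlen=10_000)
gamma, epsilon, batch_size = 0.99, 0.1, 64

for episode in range(200):
    obs, done = env.reset(), False               # pre-0.26 Gym API assumed
    while not done:
        # Epsilon-greedy action selection.
        if random.random() < epsilon:
            action = env.action_space.sample()
        else:
            with torch.no_grad():
                action = int(q_net(torch.as_tensor(obs, dtype=torch.float32)).argmax())
        next_obs, reward, done, _ = env.step(action)
        buffer.append((obs, action, reward, next_obs, float(done)))
        obs = next_obs

        # One gradient step on a random minibatch once the buffer is warm.
        if len(buffer) >= batch_size:
            batch = random.sample(list(buffer), batch_size)
            o, a, r, o2, d = (torch.as_tensor(x, dtype=torch.float32)
                              for x in map(list, zip(*batch)))
            q_pred = q_net(o).gather(1, a.long().unsqueeze(1)).squeeze(1)
            with torch.no_grad():
                q_target = r + gamma * (1 - d) * q_net(o2).max(1).values
            loss = nn.functional.mse_loss(q_pred, q_target)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```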
## Contributions

We welcome any contributions to the codebase, but we ask that you please do not submit/push code that breaks the tests. Also, please shy away from modifying the tests just to get your proposed changes to pass them. As it stands, the tests on their own are quite minimal (instantiating environments, training agents for one step, etc.), so if they're breaking, it's almost certainly a problem with your code and not with the tests.
We're actively working on refactoring and trying to make the codebase cleaner and more performant as a whole. If you'd like to help us clean up some code, we'd strongly encourage you to also watch Uncle Bob's clean coding lessons if you haven't already.
## References

- Deep Q-Network (DQN) (V. Mnih et al. 2015)
- Double DQN (DDQN) (H. Van Hasselt et al. 2015)
- Advantage Actor Critic (A2C)
- Vanilla Policy Gradient (VPG)
- Natural Policy Gradient (NPG) (S. Kakade et al. 2002)
- Trust Region Policy Optimization (TRPO) (J. Schulman et al. 2015)
- Proximal Policy Optimization (PPO) (J. Schulman et al. 2017)
- Deep Deterministic Policy Gradient (DDPG) (T. Lillicrap et al. 2015)
- Twin Delayed DDPG (TD3) (S. Fujimoto et al. 2018)
- Soft Actor-Critic (SAC) (T. Haarnoja et al. 2018)
- SAC with automatic entropy adjustment (SAC-AEA) (T. Haarnoja et al. 2018)
## Citation

To cite this repository:

```bibtex
@misc{erl,
  author = {jianzhnie},
  title = {{RLToolkit}: An Easy Deep Reinforcement Learning Toolkit},
  year = {2022},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/jianzhnie/deep-rl-toolkit}},
}
```