Abstract:
Autonomous navigation from A to B in indoor environments is a widely researched field. Many known approaches map the entire environment in advance in order to compute a path through the space. This master's thesis proposes a new approach based on reinforcement learning. It shows that a system can navigate using only sensor data and the system's pose relative to a target. The method builds on the Robot Operating System (ROS) and uses Gazebo for simulation and training. A test vehicle with Ackermann steering was built to evaluate the proposed method and compare it against a state-of-the-art navigation method in a real-world scenario.
Evaluation experiments have shown that the proposed method is outperformed by SLAM-based solutions in terms of pose accuracy, location precision, and the reliability of reaching the desired target. However, the developed procedure offers potential in cases where prior mapping is not possible and exploration capability of a vehicle is required.
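The core idea — navigating from sensor readings plus the vehicle pose relative to a target — can be sketched as follows. This is an illustrative sketch only: the function names and observation layout are assumptions, not the thesis code.

```python
import math

def relative_target(pose_x, pose_y, pose_yaw, target_x, target_y):
    """Express the target in the vehicle frame: distance to the
    target and heading error in radians (both are assumptions about
    how the relative pose might be encoded)."""
    dx, dy = target_x - pose_x, target_y - pose_y
    distance = math.hypot(dx, dy)
    heading_error = math.atan2(dy, dx) - pose_yaw
    # Normalize the heading error to [-pi, pi].
    heading_error = math.atan2(math.sin(heading_error), math.cos(heading_error))
    return distance, heading_error

def observation(scan_ranges, pose, target):
    """Concatenate a (possibly downsampled) laser scan with the
    relative target pose to form the agent's observation vector."""
    distance, heading_error = relative_target(*pose, *target)
    return list(scan_ranges) + [distance, heading_error]
```

With a pose of `(0, 0, 0)` and a target at `(3, 4)`, the relative pose is a distance of 5 m and a heading error of `atan2(4, 3)` — the agent never needs a map, only this target-relative signal and the scan.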
Full Documentation: Master Thesis
(Media: training process in simulation | trained agent in the real world)
(Media: navigation trajectories using a TEB-based planner as a baseline)
(Media: navigation trajectories using the proposed reinforcement learning approach)
The simulated vehicle is based on the MIT-Racecar Project.
The python requirements are listed in: requirements.txt
To start the training process in simulation, do the following:
- Install ROS Noetic on Ubuntu 20.04 (other setups may work but are untested)
- Clone this repository
- `cd` into the `Ackerbot/ackerbot_sim_ws/src` folder
- Clone https://github.com/dschori/racecar_gazebo into the folder
- Clone https://github.com/mit-racecar/racecar into the folder
- `cd` back into the `Ackerbot/ackerbot_sim_ws` folder
- Install missing ROS packages with: `rosdep install --from-paths src --ignore-src -r -y`
- Build the workspace with `catkin build`
- Source the workspace with `source devel/setup.bash`
- Run the Gazebo training simulation with: `roslaunch racecar_gazebo racecar_rl_env_train.launch`
- Open a new terminal in the same directory and source again with `source devel/setup.bash`
- Start the training with: `roslaunch navigation_train train.launch`
- To increase training speed, set the `real time update rate` to 0 (as fast as possible) in Gazebo under World/Physics
- Monitor the training statistics with TensorBoard: `tensorboard --logdir ~/ray_results`, then go to http://localhost:6006/ to see the board
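The launch files above start training over a discrete action set. A hypothetical mapping from a discrete action index to an Ackermann drive command might look like the following — the action count, speeds, and steering angles here are placeholders, not the values used in the thesis:

```python
# Hypothetical discrete action table: (speed in m/s, steering angle in rad).
# The actual action definitions live in navigation_discrete_task.py.
ACTIONS = {
    0: (0.5, -0.30),  # drive forward, steer right
    1: (0.5,  0.00),  # drive forward, straight
    2: (0.5,  0.30),  # drive forward, steer left
}

def to_ackermann_cmd(action_index):
    """Translate a discrete action index into the fields of an
    ackermann_msgs/AckermannDrive message (speed, steering_angle)."""
    speed, steering_angle = ACTIONS[action_index]
    return {"speed": speed, "steering_angle": steering_angle}
```

In the running system these fields would be copied into an `AckermannDrive` message and published to the vehicle's drive topic each control step.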
Gym-Gazebo environment base: navigation_env.py
Discrete task definition: navigation_discrete_task.py
Environment for the test vehicle: navigation_env_testvehicle.py (see the robot pkg)
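The environment files above follow the usual Gym interface (`reset`/`step`). A minimal, self-contained sketch of that structure — with a progress-toward-target reward, which is one common choice for this kind of task and an assumption here, not the thesis's actual reward — could look like this:

```python
import math

class NavigationEnvSketch:
    """Illustrative sketch of the environment structure assumed in
    navigation_env.py. All numbers, names, and the reward shape are
    placeholders; the real environment drives a vehicle in Gazebo."""

    def __init__(self, target=(5.0, 0.0), goal_radius=0.5):
        self.target = target
        self.goal_radius = goal_radius
        self.pose = (0.0, 0.0)

    def _distance(self):
        return math.hypot(self.target[0] - self.pose[0],
                          self.target[1] - self.pose[1])

    def reset(self):
        """Respawn the vehicle and return the initial observation."""
        self.pose = (0.0, 0.0)
        self.prev_distance = self._distance()
        return self.pose

    def step(self, delta):
        """Apply an action and reward progress toward the target.
        In the real environment the action would be an Ackermann
        command executed in Gazebo; here the pose moves directly."""
        self.pose = (self.pose[0] + delta[0], self.pose[1] + delta[1])
        distance = self._distance()
        reward = self.prev_distance - distance  # progress toward target
        self.prev_distance = distance
        done = distance < self.goal_radius
        if done:
            reward += 10.0  # illustrative bonus for reaching the goal
        return self.pose, reward, done, {}
```

An agent interacting with this sketch earns positive reward only while it closes the distance to the target, which is exactly the target-relative signal described in the abstract.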