- The setup: we recorded 6 mapping sessions at dusk to evaluate how well RTAB-Map can localize (by vision only) on maps taken under different illumination conditions. The data was collected with RTAB-Map Tango.
This folder contains scripts to re-generate the results from the paper. The main idea behind this work is that multi-session mapping can help visual localization in environments with changing illumination, even with features that are not very robust to such conditions. We compared common handcrafted visual features (SIFT, SURF, BRIEF, BRISK, FREAK, DAISY, KAZE) with the learned descriptor SuperPoint. The following picture shows how robust the tested visual features are when localizing against a single session recorded at a different time. For example, the bottom-left and top-right cells are when the robot tries to localize at night on a map taken during the day, or vice versa. The diagonal is the localization performance when the localization session happens at about the same time as when the map was recorded. SuperPoint clearly has an advantage in this single-session experiment.
The following image shows the same localization experiment at different hours, but against maps created by assembling maps taken at different hours. In this case, we can see that even binary features like BRIEF can work relatively well in illumination-varying environments. See the paper for more detailed results and comments. The line `1 2 3 4 5 6` refers to the assembled map shown below, containing all mapping sessions linked together in the same database.
We provide two formats: the first one is more general, and the second one is the one used to produce the results in this paper with RTAB-Map. Please open an issue if the links are outdated.
- Images (a quick inspection sketch follows this list):
  - `rgb`: folder containing *.jpg color camera images
  - `depth`: folder containing *.png 16-bit depth images in mm
  - `calib`: folder containing the calibration of each color image. Each calibration also contains the transform between the `device` and `camera` frames as `local_transform`.
  - `device_poses.txt`: VIO poses of each image in the `device` frame
  - `camera_poses.txt`: VIO poses of each image in the `camera` frame
- RTAB-Map Databases
- Dataset now also available on Federated Research Data Repository (FRDR) (if links above don't work)
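As a quick, hedged sketch of how the files of the image format fit together (the session directory name below is an assumption, not a path from the dataset; substitute the folder you actually extracted):

```bash
# Inspect one extracted session of the image format described above.
# SESSION is an assumed example path; adjust it to your extraction location.
SESSION=~/Downloads/Illumination_invariant_dataset/session_1
ls "$SESSION/rgb"   | head -n 1          # a *.jpg color image
ls "$SESSION/depth" | head -n 1          # the matching *.png 16-bit depth image (mm)
ls "$SESSION/calib" | head -n 1          # per-image calibration, incl. local_transform
head -n 2 "$SESSION/device_poses.txt"    # VIO pose of each image, device frame
head -n 2 "$SESSION/camera_poses.txt"    # VIO pose of each image, camera frame
```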
- RTAB-Map should be built from source with the following dependencies (it does not need to be installed; the scripts below launch it from the build directory to avoid conflicts with another rtabmap already installed):
- Use Ubuntu 20.04 to avoid any python2/python3 conflicts.
- OpenCV built with xfeatures2d and nonfree modules
- libtorch (Torch C++ library, tested on v1.10.2) to enable SuperPoint
- Git clone SuperGlue into the scripts directory.
- Generate `superpoint_v1.pt` in the scripts directory (it can also be downloaded from here, but may not be compatible with more recent pytorch versions):
```bash
cd rtabmap/archive/2022-IlluminationInvariant/scripts
wget https://github.com/magicleap/SuperPointPretrainedNetwork/raw/master/superpoint_v1.pth
wget https://raw.githubusercontent.com/magicleap/SuperPointPretrainedNetwork/master/demo_superpoint.py
python trace.py
```
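A quick sanity check that the trace step succeeded (RTAB-Map's `SuperPoint/ModelPath` parameter is what ultimately points to this file; whether the provided scripts set it for you is not shown in this README):

```bash
# superpoint_v1.pt should now exist next to the scripts.
ls -lh superpoint_v1.pt
```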
- Download the databases of the dataset and extract them (a hedged sketch of the expected layout follows).
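The archive names and format depend on the download source; as an assumed example, using the same target folder as the Docker instructions further below:

```bash
# Assumed example: extract all downloaded archives into one folder.
# Archive names/format are assumptions; adjust to what you actually downloaded.
mkdir -p ~/Downloads/Illumination_invariant_databases
unzip '*.zip' -d ~/Downloads/Illumination_invariant_databases
```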
- Adjust the path inside the `rtabmap_latest.sh` script to match where you just built rtabmap with the right dependencies (a hypothetical illustration follows).
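The content of `rtabmap_latest.sh` is not reproduced in this README; the line below is only a hypothetical illustration of the kind of path to adjust.

```bash
# Hypothetical illustration only: the actual variable name inside
# rtabmap_latest.sh may differ. It must point at the build directory of the
# rtabmap you just compiled (not an installed one).
RTABMAP_BUILD_DIR=$HOME/workspace/rtabmap/build
```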
- Run `run_all.sh DATABASES_PATH OUTPUT_PATH` (see the example invocation below). This script will do the following steps (warning: this could take hours):
  - Recreate the map databases for each feature type
  - Create the merged databases
  - Run the localization databases over all map/merged databases
  - Run the consecutive localization experiment
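For example, with the same paths as the Docker section further below (these paths are examples; any existing directories work):

```bash
cd rtabmap/archive/2022-IlluminationInvariant/scripts
./run_all.sh ~/Downloads/Illumination_invariant_databases \
             ~/Downloads/Illumination_invariant_databases/results
```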
- Export statistics with the `export_stats.sh` script (example below).
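Continuing the example above (the argument is the same output directory that was passed to `run_all.sh`):

```bash
./export_stats.sh ~/Downloads/Illumination_invariant_databases/results
```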
- Use the MATLAB/Octave scripts in this folder to show the results you want. Set `dataDir` to the directory containing the exported statistics.
```bash
sudo apt install octave liboctave-dev
# In octave:
pkg install -forge control signal
```
- Create the docker image:
```bash
cd rtabmap
docker build -t rtabmap_frontiers -f docker/frontiers2022/Dockerfile .
```
- Assuming you extracted the databases of the dataset in `~/Downloads/Illumination_invariant_databases`, create an output directory for the results:
```bash
mkdir ~/Downloads/Illumination_invariant_databases/results
```
- Run the script:
```bash
docker run --gpus all -it --rm --ipc=host --runtime=nvidia \
  --user $(id -u):$(id -g) \
  -w=/workspace/scripts \
  -v ~/Downloads/Illumination_invariant_databases:/workspace/databases \
  -v ~/Downloads/Illumination_invariant_databases/results:/workspace/results \
  rtabmap_frontiers /workspace/scripts/run_all.sh /workspace/databases /workspace/results
```
- Export statistics:
```bash
docker run --gpus all -it --rm --ipc=host --runtime=nvidia \
  --env="DISPLAY=$DISPLAY" \
  --env="QT_X11_NO_MITSHM=1" \
  --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
  --env="XAUTHORITY=$XAUTH" \
  --volume="$XAUTH:$XAUTH" \
  --user $(id -u):$(id -g) \
  -w=/workspace/results \
  -v ~/Downloads/Illumination_invariant_databases/results:/workspace/results \
  rtabmap_frontiers /workspace/scripts/export_stats.sh /workspace/results
```