A Dynamic Points Removal Benchmark in Point Cloud Maps

arXiv preprint available. To appear in ITSC 2023, Spain.

This is a preview of the README for the code release. We are actively updating all the code and datasets.

Task: detect dynamic points in point cloud maps and remove them, enhancing the maps.

Folder quick view:

  • methods: contains all the methods in the benchmark.
  • scripts/py/eval: evaluates the result PCD against the ground truth and produces the quantitative table.
  • scripts/py/data: pre-processes data before running the benchmark (a hypothetical invocation is sketched after this list). We also directly provide all the datasets we tested. We run this benchmark offline on a computer, so we extract only PCD files from custom rosbags and other data formats (KITTI, Argoverse 2).
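Below is a minimal sketch of that pre-processing step. The script name extract_data.py and its flags are illustrative assumptions, not the actual filenames; check scripts/py/data and its README for the real entry points.

    # hypothetical invocation (script name and flags are assumptions):
    python3 scripts/py/data/extract_data.py --input /path/to/KITTI/sequences/00 --output /data/00
    # expected result: a /data/00 folder of per-frame .pcd files that the methods read directly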

Quick try:

  • Clone our repo:
    git clone --recurse-submodules https://github.com/KTH-RPL/DynamicMap_Benchmark.git
  • Download the teaser data (KITTI sequence 00, only 530 MB) through our personal OneDrive.
  • Go to the methods folder, then build and run (see the combined sketch below):
    ./build/${methods_name}_run ${data_path, e.g. /data/00} ${config.yaml} -1
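Putting these steps together, a minimal end-to-end sketch (paths and the chosen method name are placeholders, and the CMake flags are an assumption about the build setup; adapt them to the actual instructions in the methods folder):

    git clone --recurse-submodules https://github.com/KTH-RPL/DynamicMap_Benchmark.git
    cd DynamicMap_Benchmark/methods
    # build (assumes a CMake project in this folder; flags are illustrative)
    cmake -B build -DCMAKE_BUILD_TYPE=Release && cmake --build build
    # run one method on the teaser data; replace ${methods_name} with the method you built
    ./build/${methods_name}_run /data/00 config.yaml -1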

Methods:

Please check the methods folder.

Please note that we also provide the comparison methods, slightly modified so we could run the experiments quickly, but without changes to their core algorithms. Please check each method's LICENSE at its official repository before using it.

You will find all methods in this benchmark under the methods folder, so you can easily reproduce the experiments. We will also directly provide the result data, so you don't need to run the experiments yourself.

Last but not least, feel free to open a pull request if you want to add more methods. Contributions are welcome!

Dataset & Scripts

Download all the datasets from the Zenodo online drive, or create them yourself with the scripts we provide.

You are welcome to contribute your own dataset with ground truth to the community through a pull request.

Evaluation

All the methods output a cleaned map; if you only care about the map-cleaning task, that is enough. For evaluation, however, we need to extract the ground truth labels for the points that remain in the cleaned map. Why is this needed? Because some methods downsample in their pipeline, so the ground truth labels must be re-associated with the (possibly downsampled) output map. A rough sketch of this flow follows.
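The sketch below assumes a hypothetical script name and flags under scripts/py/eval; the real filenames and arguments may differ, so see the README in the scripts folder.

    # 1. run a method to produce the cleaned map (output file name is illustrative)
    ./build/${methods_name}_run /data/00 config.yaml -1
    # 2. re-associate ground-truth labels with the (possibly downsampled) cleaned map
    #    and compute the quantitative metrics; script name and flags are assumptions
    python3 scripts/py/eval/evaluate.py --gt /data/00/gt_cloud.pcd --est /data/00/output_cleaned.pcd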

Check the dataset-creation section of the README in the scripts folder for more information. Alternatively, you can download the dataset directly through the link we provided and skip the creation step entirely; just use the data you downloaded.

Acknowledgements

This benchmark implementation is based on code from several repositories, as mentioned above. We thank the authors for kindly open-sourcing their work to the community. Please see the reference section of our paper for more information.

This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.

Cite Our Paper

Please cite our work if you find it useful for your research.

Benchmark:

@article{zhang2023benchmark,
  author={Qingwen Zhang and Daniel Duberg and Ruoyu Geng and Mingkai Jia and Lujia Wang and Patric Jensfelt},
  title={A Dynamic Points Removal Benchmark in Point Cloud Maps},
  journal={arXiv preprint arXiv:2307.07260},
  year={2023}
}

DUFOMap:

@article{duberg2023dufomap,
  author={Daniel Duberg* and Qingwen Zhang* and Mingkai Jia and Patric Jensfelt},
  title={{DUFOMap}: TBD}, 
}
