This is a ROS package for calibrating a camera network that consists of not only static overlapping cameras but also dynamic and non-overlapping ones. It bridges separated camera views using a dynamic support camera with visual odometry, and estimates all static and dynamic camera poses via pose graph optimization. An optional depth image-based refinement step is available for RGB-D cameras.
While the calibration method itself is designed for general camera networks, this package also provides scripts to integrate with OpenPTrack, an RGB-D camera network-based human-machine interaction framework.
This package has been tested with ROS Melodic on Ubuntu 18.04.
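If you do not have a catkin workspace yet, a standard one can be set up as follows (a minimal sketch; the path ~/catkin_ws is an assumption, chosen to match the build commands used later in this README):

# assumes ROS Melodic installed at the default location
source /opt/ros/melodic/setup.bash
mkdir -p ~/catkin_ws/src
cd ~/catkin_ws/src && catkin_init_workspace
cd ~/catkin_ws && catkin_make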
git clone https://github.com/AprilRobotics/apriltags.git
cd apriltags && sudo make install
In this work, we use Direct Sparse Odometry (DSO) to obtain the camera motion. However, you can replace it with any visual odometry algorithm. The only requirement is that it publishes the estimated odometry to the "/vodom" topic (geometry_msgs::PoseStamped).
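Whichever odometry node you use, you can quickly verify that it publishes the expected topic with standard rostopic checks (no package-specific assumptions):

# should print geometry_msgs/PoseStamped
rostopic type /vodom
# check the publish rate while the odometry node is running
rostopic hz /vodom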
# (Recommended) Install Pangolin (https://github.com/stevenlovegrove/Pangolin)
git clone https://github.com/koide3/dso.git
mkdir dso/build && cd dso/build
cmake ..
make -j4
echo "export DSO_PATH=/path/to/dso" >> ~/.bashrc
source ~/.bashrc
cd catkin_ws/src
git clone https://github.com/koide3/dso_ros.git
cd dso_ros && git checkout catkin
cd ../.. && catkin_make
cd catkin_ws/src
git clone https://github.com/koide3/sparse_dynamic_calibration.git
cd .. && catkin_make
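As a sanity check, the package should now be discoverable on the ROS package path (assuming the workspace layout above):

source ~/catkin_ws/devel/setup.bash
rospack find sparse_dynamic_calibration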
Print out AprilTag 36h11 family tags, and place them in the environment so that each camera can see at least one tag.
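One possible source of printable 36h11 tag images is the AprilRobotics image repository; the stock PNGs are tiny, so scale them up before printing (this particular source and the use of ImageMagick are suggestions, not requirements of this package):

git clone https://github.com/AprilRobotics/apriltag-imgs.git
# scale the small PNG up before printing, e.g., with ImageMagick
convert apriltag-imgs/tag36h11/tag36_11_00000.png -scale 2000% tag_00000_print.png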
Edit "data/tags.yaml" to specify the tag size.
%YAML:1.0
default_tag_size: 0.160
Then, launch "generate_tag_camera_network_conf.launch" to detect tags from the static cameras. It automatically detects cameras by finding topics that match a regex pattern (e.g., "(/kinect.*)/rgb/image"), and then extracts image data from the detected cameras' topics (/rgb/camera_info, /rgb/image, /depth_ir/points). You can change the pattern and topic names by editing the launch file.
roslaunch sparse_dynamic_calibration generate_tag_camera_network_conf.launch
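Before launching, you can check which camera topics would be picked up by the pattern (the regex below is the example from above; adjust it if you edited the launch file):

rostopic list | grep -E "/kinect.*/rgb/image"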
Record an image stream and visual odometry data using a dynamic camera.
# in case you use a pointgrey camera
roslaunch sparse_dynamic_calibration camera.launch
# otherwise, use any other camera node like usb_cam
roslaunch dso_ros dso.launch
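A quick way to confirm that both the image stream and the visual odometry are alive before recording (topic names follow the defaults assumed in this README):

rostopic hz /camera/image_raw
rostopic hz /vodom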
Although the calibration method itself is an online method, we recommend running the calibration on a rosbag for testing.
rosbag record -O test.bag -e "/camera/(camera_info|image_raw/compressed)" /vodom /points
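You can inspect the recorded bag to confirm that all required topics (camera_info, compressed images, /vodom, /points) were captured:

rosbag info test.bag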
rosparam set use_sim_time true
roslaunch sparse_dynamic_calibration calibration.launch
roscd sparse_dynamic_calibration/config
rviz -d rviz.rviz
rosrun image_transport republish compressed in:=/camera/image_raw raw out:=/camera/image_raw
rosbag play --clock test.bag
After the rosbag finishes playing, save the estimated poses:
rostopic pub /sparse_dynamic_calibration/save std_msgs/Empty
You should be able to see the calibrated camera poses in "data/tag_camera_poses.yaml".
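To inspect the result from anywhere, rospack can resolve the package path:

cat $(rospack find sparse_dynamic_calibration)/data/tag_camera_poses.yaml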
rosparam set use_sim_time false
roslaunch sparse_dynamic_calibration refinement.launch
Refined camera poses will be saved to "data/tag_camera_poses_refined.yaml". The accumulated point clouds before/after the refinement will be saved to "/tmp/original.pcd" and "/tmp/refined.pcd".
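If pcl-tools is installed (sudo apt install pcl-tools), the two clouds can be compared visually:

pcl_viewer /tmp/original.pcd /tmp/refined.pcd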
After calibrating the camera network, copy the estimated camera poses with:
rosrun sparse_dynamic_calibration copy_to_openptrack.py
This script reads the estimated camera poses in "data/tag_camera_poses.yaml" and writes them into "opt_calibration/launch/opt_calibration_results.launch" and "opt_calibration/conf/camera_poses.yaml". Then, distribute the calibration result to each PC:
# On master PC
roslaunch opt_calibration detection_initializer.launch
# On each distributed PC
roslaunch opt_calibration listener.launch
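You can double-check what was written for OpenPTrack (assuming opt_calibration is on your ROS package path):

cat $(rospack find opt_calibration)/conf/camera_poses.yaml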
Example data (will be available soon):
- Static camera imageset
- Dynamic camera rosbag
tar xzvf sparse_dynamic_example.tar.gz
cp -R sparse_dynamic_example/data catkin_ws/src/sparse_dynamic_calibration/
roslaunch sparse_dynamic_calibration generate_tag_camera_network_conf.launch read_from_file:=true
rosparam set use_sim_time true
rosrun image_transport republish compressed in:=/camera/image_raw raw out:=/camera/image_raw
roscd sparse_dynamic_calibration/config
rviz -d rviz.rviz
roslaunch sparse_dynamic_calibration calibration.launch
rosbag play --clock real_30.bag
rostopic pub /sparse_dynamic_calibration_node/save std_msgs/Empty
rosparam set use_sim_time false
roscd sparse_dynamic_calibration/config
rviz -d rviz.rviz
roslaunch sparse_dynamic_calibration refinement.launch
Kenji Koide and Emanuele Menegatti, "Non-overlapping RGB-D Camera Network Calibration with Monocular Visual Odometry", IROS, 2020.