It is worth noting that the accuracy of registration algorithms depends heavily on how close the point clouds are in scale, size, and sparsity, and these factors in turn depend heavily on the capture hardware. As applications rely more and more on multi-source data, fusing these different kinds of data is becoming an increasingly important problem. Cross-source point cloud registration has wide applications such as building construction, augmented reality, and driverless vehicles. For example, in construction, project information exists both as a 3D CAD model and as real-time LiDAR scans (potentially captured with different hardware). Fusing this data is vital to enable contractors to track progress and evaluate construction quality. However, until now there has been no dataset for cross-source point cloud registration that contains plentiful 3D data from recent popular sensors.
This benchmark aims to close that gap by providing labelled 3D cross-source point cloud pairs captured with three recent popular sensors: an RGB camera, a depth sensor, and a LiDAR sensor. The benchmark is captured in an indoor working environment and contains objects common to a workspace, such as chairs, desks, computers, lights, walls, flowers, and cupboards.
Since the point clouds are captured by different types of sensors, each with its own imaging mechanism, the challenges of cross-source registration are much more complicated than those of same-source registration. These challenges are summarized below:
The dataset contains two folders: kinect_lidar and kinect_sfm. The ground truth transformations were labelled by one computer science expert and cross-checked by two other experts.
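Once a pair and its ground truth transformation are loaded, aligning the source cloud is a matter of applying the 4×4 homogeneous transform. The exact file layout inside kinect_lidar and kinect_sfm is not specified here, so the sketch below assumes point clouds are already loaded as (N, 3) numpy arrays; `apply_transform` is an illustrative helper, not part of the released data loader.

```python
import numpy as np

def apply_transform(points: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Apply a 4x4 homogeneous transform T to an (N, 3) point cloud."""
    homo = np.hstack([points, np.ones((points.shape[0], 1))])  # (N, 4)
    return (homo @ T.T)[:, :3]

# Example: a pure translation by (1, 2, 3)
T = np.eye(4)
T[:3, 3] = [1.0, 2.0, 3.0]
pts = np.array([[0.0, 0.0, 0.0],
                [1.0, 1.0, 1.0]])
aligned = apply_transform(pts, T)  # [[1, 2, 3], [2, 3, 4]]
```

The same helper works for any estimated transform produced by the baseline algorithms, which makes it easy to compare an aligned source cloud against the target.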
@article{huang2021comprehensive,
  title={{A comprehensive survey on point cloud registration}},
  author={Huang, Xiaoshui and Mei, Guofeng and Zhang, Jian and Abbas, Rana},
  journal={arXiv preprint arXiv:2103.02690},
  year={2021},
}
The datasets provided on this page are published under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License. This means that you must attribute the work in the manner specified by the authors, you may not use this work for commercial purposes and if you alter, transform, or build upon this work, you may distribute the resulting work only under the same license. If you are interested in commercial usage you can contact us for further options.
We provide several download links, including the dataset, a data loader for reading it, and several baseline algorithms.
The evaluation results are listed below. If you have new results, please contact jian.zhang@uts.edu.au to have them added to this table.
Registration Method | Registration Recall (%) | Translation Error | Rotation Error | Time(s) |
---|---|---|---|---|
DGR [2] | 36.6 | 0.04 | 4.26 | 0.87 |
FMR [1] | 17.8 | 0.10 | 4.66 | 0.28 |
PointNetLK [3] | 0.05 | 0.09 | 12.54 | 2.25 |
FGR [4] | 1.49 | 0.07 | 10.74 | 2.23 |
ICP [5] | 24.3 | 0.38 | 5.71 | 0.19 |
JRMPC [6] | 1.0 | 0.71 | 8.57 | 18.1 |
RANSAC [7] | 3.47 | 0.13 | 8.30 | 0.03 |
GCTR [8] | 0.5 | 0.17 | 7.46 | 15.8 |
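The exact error definitions and recall thresholds used for this table are not given here, but a common formulation (assumed, not confirmed by the benchmark) measures rotation error as the geodesic angle between the estimated and ground-truth rotations and translation error as the Euclidean distance between translation vectors:

```python
import numpy as np

def rotation_error_deg(R_est: np.ndarray, R_gt: np.ndarray) -> float:
    """Geodesic distance between two 3x3 rotation matrices, in degrees."""
    cos = (np.trace(R_gt.T @ R_est) - 1.0) / 2.0
    # Clip to guard against floating-point values slightly outside [-1, 1]
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def translation_error(t_est: np.ndarray, t_gt: np.ndarray) -> float:
    """Euclidean distance between estimated and ground-truth translations."""
    return float(np.linalg.norm(t_est - t_gt))

# Example: a 90-degree rotation about the z-axis compared with identity
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
err = rotation_error_deg(Rz, np.eye(3))  # 90.0
```

Registration recall is then typically the fraction of test pairs whose rotation and translation errors both fall below fixed thresholds; the thresholds behind the recall column above are those of the benchmark, not of this sketch.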
[1] Huang, Xiaoshui, Guofeng Mei, and Jian Zhang. "Feature-metric Registration: A Fast Semi-supervised Approach for Robust Point Cloud Registration without Correspondences." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.
[2] Choy, Christopher, Wei Dong, and Vladlen Koltun. "Deep global registration." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.
[3] Aoki, Yasuhiro, et al. "PointNetLK: Robust & efficient point cloud registration using PointNet." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.
[4] Zhou, Qian-Yi, Jaesik Park, and Vladlen Koltun. "Fast global registration." European Conference on Computer Vision. Springer, Cham, 2016.
[5] Peng, Furong, et al. "Street view cross-sourced point cloud matching and registration." 2014 IEEE International Conference on Image Processing (ICIP). IEEE, 2014.
[6] Huang, Xiaoshui, et al. "A coarse-to-fine algorithm for registration in 3D street-view cross-source point clouds." 2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA). IEEE, 2016.
[7] Mellado, Nicolas, Matteo Dellepiane, and Roberto Scopigno. "Relative scale estimation and 3D registration of multi-modal geometry using Growing Least Squares." IEEE Transactions on Visualization and Computer Graphics 22.9 (2015): 2160-2173.
[8] Huang, Xiaoshui, et al. "Fast registration for cross-source point clouds by using weak regional affinity and pixel-wise refinement." 2019 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2019.
If you have any questions about the cross-source point cloud registration benchmark, please contact Prof. Jian Zhang ( jian.zhang@uts.edu.au ).