Abstract

This document presents a novel method based on Convolutional Neural Networks (CNNs) for obtaining correspondence matches between sets of keypoints from several unorganized 3D point cloud captures, independently of the sensor used. The proposed technique extends a state-of-the-art method for correspondence matching in standard 2D images to sets of unorganized 3D point clouds. The strategy consists of projecting the 3D neighborhood of each keypoint onto an RGB-D patch and classifying patch pairs with CNNs. In objective evaluations, the proposed CNN-based 3D point matching outperforms existing 3D feature descriptors, especially when intensity or color data is available.