Abstract
Given the widespread availability of point cloud data from consumer depth sensors, 3D segmentation has become a promising building block for high-level applications such as scene understanding and interaction analysis. It benefits from the richer information contained in actual 3D world data compared to the apparent (projected) data in 2D images. Consequently, the classical color segmentation challenges have recently shifted to RGBD data, while new challenges have emerged, as 3D information from depth measurements is usually noisy, sparse, and unorganized. We present a novel segmentation approach for 3D point cloud video based on low-level features and oriented towards the analysis of object interactions. A hierarchical representation of the input point cloud is proposed to efficiently segment 3D data at the finer level and to establish temporal correspondences between segments, while dynamically managing object splits and merges at the coarser level. Experiments show promising results and demonstrate the method's potential for object interaction analysis.