Temporally Coherent 3D Point Cloud Video Segmentation in Generic Scenes

Resource Type: Results
Date: 2017-08-16

Description

We present a novel generic segmentation approach for 3D point cloud video (stream data) that thoroughly exploits the explicit geometry available in RGBD. Our proposal relies only on low-level features, such as connectivity and compactness. We exploit temporal coherence by representing the rough estimation of objects in a single frame with a hierarchical structure, and by propagating this hierarchy over time. The hierarchical structure provides an efficient way to establish temporal correspondences at different scales of object connectivity, and to manage the splits and merges of objects over time. This allows the segmentation to be updated according to the evidence observed in the history. The proposed method is evaluated on several challenging datasets, with promising results.
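As a rough illustration of the temporal-correspondence idea described above, the sketch below propagates object labels from one frame's segmentation to the next by overlap voting. This is a minimal assumption-laden sketch, not the authors' implementation: segments are represented as flat sets of point ids (in practice, points would first be associated across frames, e.g. by nearest neighbors, and the full method maintains a hierarchy rather than a flat partition). All names (`propagate_labels`, etc.) are illustrative.

```python
def propagate_labels(prev_segments, curr_segments):
    """Assign each current segment the previous object label with the
    largest point overlap; segments with no correspondence receive a
    fresh label. Splits and merges would show up as several current
    segments voting for one previous label (or vice versa), which the
    full method resolves with evidence accumulated over time.

    prev_segments: dict mapping int object label -> set of point ids
    curr_segments: dict mapping segment id -> set of point ids
    Returns: dict mapping segment id -> propagated object label.
    """
    next_label = max(prev_segments, default=-1) + 1
    mapping = {}
    for curr_id, curr_pts in curr_segments.items():
        best_label, best_overlap = None, 0
        for prev_label, prev_pts in prev_segments.items():
            overlap = len(curr_pts & prev_pts)
            if overlap > best_overlap:
                best_label, best_overlap = prev_label, overlap
        if best_label is None:  # no temporal correspondence: new object
            best_label = next_label
            next_label += 1
        mapping[curr_id] = best_label
    return mapping
```

For example, a current segment sharing most of its points with previous object 0 inherits label 0, while a segment with no overlap is treated as a newly appeared object and given a new label.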
Some video results are included below, showing the dynamic behavior of the presented method.

Video Result 1 (comparison experiments on 3 sequences in [Husain 2015]):

In this experiment, we compare the segmentation performance of our approach with the approach proposed in [Husain 2015] on three sequences depicting object-interaction scenes.
The results are shown as follows: the original color image on the left, the segmentation result of our approach in the middle, and the segmentation result of [Husain 2015] on the right.


Video Result 2 (comparison experiments in RGBD Foreground Segmentation dataset [Fu 2017]):

In this experiment, we compare the object segmentation performance of our approach with the method proposed in [Fu 2017].
The video compares the results on 8 sequences of the dataset, which have been concatenated sequentially for convenience. The segmentation results of our approach are shown on the left, and the segmentation results of [Fu 2017] on the right. Object masks are highlighted in different colors so that the reader can follow correspondences along the sequence.


Video Result 3 (experiments on a dataset recorded at UPC):

In this experiment, we show the segmentation results on 4 sequences recorded by ourselves. The sequences focus on scenes where a full human body interacts with objects such as a box, a handcart, or a dog.
The original color image is shown on the left, the segmentation result at the component level of our approach in the middle (each component painted with a random color), and the segmentation result at the object level of our approach on the right (the color of each object allows tracking its correspondence over time).


References:
[Fu 2017] H. Fu et al., “Object-based Multiple Foreground Segmentation in RGBD Video,” IEEE Trans. Image Processing, vol. 26, no. 3, pp. 1418–1427, Jan. 2017.
[Husain 2015] F. Husain et al., “Consistent Depth Video Segmentation Using Adaptive Surface Models,” IEEE Trans. Cybernetics, vol. 45, no. 2, pp. 266–278, Feb. 2015.

People involved

Xiao Lin PhD Candidate
Josep R. Casas Associate Professor
Montse Pardàs Professor

Related Publications

X. Lin, J. R. Casas, and M. Pardàs, “Temporally Coherent 3D Point Cloud Video Segmentation in Generic Scenes,” IEEE Trans. Image Processing, vol. 27, no. 6, pp. 3087–3099, 2018.