Fernàndez D, Varas D, Espadaler J, Ferreira J, Woodward A, Rodríguez D, et al. ViTS: Video Tagging System from Massive Web Multimedia Collections. In ICCV 2017 Workshop on Web-scale Vision and Social Media. Venice, Italy; 2017.

Abstract

The popularization of multimedia content on the Web has given rise to the need to automatically understand, index and retrieve it. In this paper we present ViTS, an automatic Video Tagging System which learns from videos, their web context and comments shared on social networks. ViTS analyses massive multimedia collections through Internet crawling and maintains a knowledge base that is updated in real time without the need for human supervision. As a result, each video is indexed with a rich set of labels and linked with other related content. ViTS is an industrial product in production use with a vocabulary of over 2.5M concepts, capable of indexing more than 150k videos per month. We compare the quality and completeness of our tags with those in the YouTube-8M dataset, and show that ViTS enhances the semantic annotation of the videos with a larger number of labels (10.04 tags/video) at an accuracy of 80.87%.
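
The sketch below illustrates, at a very high level, the kind of crawl-and-tag loop the abstract describes: collect a video's web context and social comments, derive candidate tags, and fold them into a continuously updated knowledge base. It is only a toy illustration of the idea, not the ViTS implementation; all names (KnowledgeBase, extract_tags, the sample record) are hypothetical placeholders.

```python
from dataclasses import dataclass, field


@dataclass
class KnowledgeBase:
    """Toy in-memory store mapping video IDs to tag sets (stands in for a real-time knowledge base)."""
    index: dict = field(default_factory=dict)

    def update(self, video_id: str, tags: set) -> None:
        # Merge newly derived tags into any existing entry for this video.
        self.index.setdefault(video_id, set()).update(tags)


def extract_tags(title: str, comments: list) -> set:
    """Naive tag extraction from web context and comments: keep capitalized words, lowercased."""
    words = title.split() + [w for c in comments for w in c.split()]
    return {w.strip(".,!?").lower() for w in words if w[:1].isupper()}


if __name__ == "__main__":
    kb = KnowledgeBase()
    # Dummy crawled record standing in for one video's title and social comments.
    kb.update(
        "vid_001",
        extract_tags(
            "Venice Carnival Parade 2017",
            ["Amazing costumes in Venice!", "Love the Carnival atmosphere"],
        ),
    )
    print(kb.index)
```

A production system like the one described would of course replace the keyword heuristic with learned concept detectors over a multi-million-concept vocabulary and persist the index at web scale; the sketch only shows the data flow.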