Pan J, Canton-Ferrer C, McGuinness K, O'Connor N, Torres J, Sayrol E, et al. SalGAN: Visual Saliency Prediction with Generative Adversarial Networks. In CVPR 2017 Scene Understanding Workshop (SUNw). Honolulu, Hawaii, USA; 2017.

Abstract

We introduce SalGAN, a deep convolutional neural network for visual saliency prediction trained with adversarial examples. The first stage of the network consists of a generator model whose weights are learned by back-propagation of a binary cross-entropy (BCE) loss computed over downsampled versions of the saliency maps. The resulting prediction is processed by a discriminator network trained to solve a binary classification task: distinguishing the saliency maps produced by the generative stage from the ground-truth ones. Our experiments show that adversarial training reaches state-of-the-art performance across different metrics when combined with a widely used loss function such as BCE.
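The generator's objective described above combines a content term (BCE against the ground-truth saliency map) with an adversarial term that rewards predictions the discriminator judges to be real. The sketch below illustrates this combination with plain NumPy; the function names and the weighting factor `alpha` are illustrative assumptions, not the paper's exact values.

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    # Binary cross-entropy averaged over pixels of a saliency map.
    pred = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(pred)
                          + (1 - target) * np.log(1 - pred)))

def generator_loss(pred_map, gt_map, d_on_fake, alpha=0.005):
    # Content term (BCE vs. ground truth) plus an adversarial term
    # that is small when the discriminator scores the prediction as
    # "real" (d_on_fake close to 1). `alpha` is a hypothetical weight.
    adv = -float(np.log(np.clip(d_on_fake, 1e-7, 1.0)))
    return alpha * bce(pred_map, gt_map) + adv
```

In an actual training loop the discriminator would alternately be updated to separate generated maps from ground-truth ones, which is the binary classification task the abstract describes.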