This project explores the visual memorability of egocentric images from several perspectives, making three main contributions. The first and main contribution is a new tool for annotating visual memorability in egocentric images: a web application that collects memorability annotations for still images through an online game. The second contribution is a convolutional neural network model for visual memorability prediction, obtained by adapting an off-the-shelf model to egocentric images. In addition, a visualization study was carried out to localize the regions of an image that are more memorable than others, and the resulting memorability maps are compared with saliency maps. This part of the research opens a new branch in visual memorability: using memorability maps for saliency prediction. The memorability of the images is also related to sentiment by applying a model that predicts that attribute. The final contribution connects the visual memorability of images with human behaviour and physical state, finding a relation between memory and physiological signals such as heart rate, galvanic skin response, and electroencephalographic signals.

Grade: A with honors (9.8/10)