This project addresses a novel problem that has appeared in recent years. Egocentric cameras, wearable devices that capture images of what we see, are increasingly widespread, and their images raise two main problems. The first is big data: at the end of a single day we can collect thousands of images, many of them similar and some of poor quality or low informational value. The second is image retrieval: given this volume, finding a particular moment is very difficult, and unless this problem is solved, the properties of egocentric images become useless.

This work has two objectives. The first is to explore images that have associated physiological signals, in order to add physiological features to the retrieval process instead of basing retrieval only on visual features, as the current state of the art does. For this part we associate interesting images with memorable images, so that a correlation between memorability and physiological signals can be found. The second objective is to deal with the egocentric paradigm. Recent works show that machine learning algorithms trained on human-taken (intentionally framed) images cannot be directly extended to egocentric images because of how the latter are captured. Building on previous work by MIT (Massachusetts Institute of Technology), I built a visual game that allows me to manually annotate the memorability of images through simple user interaction (the user does not know that he or she is annotating images while playing). From this game I computed memorability scores, and I also obtained predicted scores from MemNet, the convolutional neural network presented in the MIT work. I then compared both sets of results in order to decide whether applying such algorithms to egocentric images is feasible.
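The comparison between game-derived and predicted memorability scores can be sketched with a rank correlation, a common way to measure agreement between two score lists. The following is an illustrative sketch only: the `human` and `predicted` values are invented placeholder numbers, not data from this study, and the choice of Spearman correlation is an assumption about a reasonable evaluation metric, not necessarily the one used here.

```python
# Illustrative sketch: agreement between manually annotated memorability
# scores and model-predicted scores, via Spearman rank correlation.
# All numbers below are made up for demonstration purposes.

def ranks(values):
    """Assign 1-based ranks to values, averaging ranks over ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Group indices whose values are tied.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of the tied group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman correlation: Pearson correlation computed on the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical memorability scores in [0, 1] for five images.
human = [0.81, 0.45, 0.66, 0.92, 0.30]      # from the annotation game
predicted = [0.75, 0.60, 0.50, 0.88, 0.35]  # from the trained network

print(round(spearman(human, predicted), 3))  # → 0.9
```

A correlation close to 1 would suggest the model's ranking transfers well to egocentric images, while a low correlation would indicate that retraining or adaptation is needed.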