This thesis explores the semantic classification of images based on the processing of electroencephalogram (EEG) signals generated by the viewer's brain. The work extends an existing solution by exploring the gains obtained when the parameters of the classifier are adapted to the user. Firstly, we developed a universal end-to-end model based on deep learning that extracts features from raw EEG signals and predicts the semantic content of the image among 40 possible classes from the ImageNet dataset. Our main contribution aims at adapting this universal model to new users, in order to build a personalized model from minimal feedback by the new user. We explored different deep learning architectures and hyperparameters to obtain a better accuracy than the baseline by Spampinato et al. (CVPR 2017). We achieve accuracies of 89.03% and 90.34% for the universal and personalized models, respectively.
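The personalization step described above can be illustrated by fine-tuning a pretrained classifier on a handful of labeled trials from a new user. The sketch below is only a toy illustration under strong assumptions: it uses a linear softmax classifier on synthetic "EEG feature" vectors with 4 classes standing in for the 40 ImageNet classes, and plain gradient descent in place of the actual deep architecture and training procedure used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def fine_tune(W, b, X, y, lr=0.1, steps=50):
    """Adapt 'universal' weights (W, b) to a new user's labeled trials.

    X: (n_trials, n_features) feature vectors; y: integer class labels.
    Runs a few steps of gradient descent on the cross-entropy loss,
    mimicking personalization from minimal user feedback.
    """
    n, n_classes = X.shape[0], W.shape[1]
    Y = np.eye(n_classes)[y]                 # one-hot targets
    for _ in range(steps):
        P = softmax(X @ W + b)               # predicted class probabilities
        grad_W = X.T @ (P - Y) / n           # cross-entropy gradients
        grad_b = (P - Y).mean(axis=0)
        W, b = W - lr * grad_W, b - lr * grad_b
    return W, b

# Toy setup: 8-dim features, 4 classes (hypothetical stand-ins).
W0 = rng.normal(scale=0.1, size=(8, 4))      # pretrained "universal" weights
b0 = np.zeros(4)
X_user = rng.normal(size=(20, 8))            # few labeled trials from the new user
y_user = rng.integers(0, 4, size=20)

W1, b1 = fine_tune(W0, b0, X_user, y_user)
acc = (np.argmax(X_user @ W1 + b1, axis=1) == y_user).mean()
```

In the thesis itself, personalization operates on the full deep model rather than a linear layer, but the principle is the same: start from weights learned across many users and adjust them with a small amount of user-specific data.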