Abstract

In recent years, interest in Deep Learning (DL) algorithms has grown across a variety of problems, due to their outstanding performance. This is most palpable in a multitude of fields, where self-learning algorithms are becoming indispensable tools that help professionals solve complex problems.

However, as these models improve, they also tend to become more complex and are sometimes referred to as "black boxes". The lack of explanations for their predictions, and the resulting inability of humans to understand those decisions, seems problematic.

In this project, different methods to increase the interpretability of Deep Neural Networks (DNNs), such as Convolutional Neural Networks (CNNs), are studied. Additionally, we evaluate how these interpretability techniques can be implemented, measured, and applied to real-world problems by creating a Python toolbox.