Deep learning models are often referred to as 'black box models' because of their opaque nature. Interpretability and explainability techniques help make these complex networks transparent and explain to the user why the model made a particular decision. This paper investigates the visualization techniques used to interpret deep learning networks for both image and non-image classification.
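As a minimal illustration of one such visualization technique (not necessarily one covered in the paper), a gradient-based saliency map attributes a classifier's decision back to its input pixels. The sketch below uses a toy linear "classifier" so the gradient has a closed form; the image size, class count, and random data are all hypothetical placeholders.

```python
import numpy as np

# Sketch of a gradient-based saliency map for a toy linear "classifier"
# with scores = W @ x. For a linear model, the gradient of a class score
# with respect to the input is simply that class's weight row, so the
# saliency map is its absolute value reshaped onto the image grid.
rng = np.random.default_rng(0)
H, W_px, C = 8, 8, 3                 # tiny 8x8 RGB "image" (illustrative)
n_classes = 4                        # hypothetical number of classes
x = rng.random(H * W_px * C)         # flattened input image (random stand-in)
W = rng.standard_normal((n_classes, H * W_px * C))  # classifier weights

scores = W @ x
top = int(np.argmax(scores))         # predicted class
grad = W[top]                        # d(score_top)/dx for a linear model
# Per-pixel importance: max absolute gradient across colour channels
saliency = np.abs(grad).reshape(H, W_px, C).max(axis=-1)
print(saliency.shape)                # → (8, 8)
```

For a real deep network the gradient would be computed by backpropagation (e.g. with an autograd framework) rather than read off the weights, but the attribution step is the same: the magnitude of the input gradient highlights the pixels most influential on the decision.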