Image Classification: We will now experimentally demonstrate the practical applicability of our methods in the context of image classification with deep neural networks.
Datasets: We consider the MNIST, FashionMNIST, and CIFAR10 datasets. We use the standard training and test sets for our analysis. The data is normalized to have mean zero and standard deviation one. To obtain the relevance per pixel, we sum the absolute values of the explanation over its channels. The resulting relevance map is then normalized to sum to one.
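The per-pixel relevance described above can be computed as in the following sketch (the function name and the channel-first layout are illustrative assumptions):

```python
import numpy as np

def relevance_per_pixel(explanation):
    """Collapse a (C, H, W) explanation to a per-pixel relevance map.

    Sums the absolute values over the channel axis and normalizes the
    result so that the relevances sum to one, as described in the text.
    """
    rel = np.abs(explanation).sum(axis=0)  # shape (H, W)
    return rel / rel.sum()

# Example: a random 3-channel explanation for a 32x32 CIFAR10 image.
expl = np.random.randn(3, 32, 32)
rel = relevance_per_pixel(expl)
```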
Models: For CIFAR10, we use the VGG16 (Simonyan & Zisserman, 2015) architecture. For FashionMNIST and MNIST, we use a four-layer convolutional neural network. We train the model $g$ by minimizing the standard cross-entropy loss for classification. The manipulated model $\tilde{g}$ is then trained by minimizing the loss (11) for a given target explanation $h^t$. This target was chosen to have the shape of the number 42. For more details about the architectures and training, we refer to Appendix D.
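Since the loss (11) is not reproduced here, the following is a minimal hypothetical sketch of such a manipulation objective. It assumes the loss combines a term pulling the manipulated explanation toward the target with a term keeping the manipulated model's outputs close to the original model's; the function name and the trade-off weight `gamma` are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def manipulation_loss(h_tilde, h_target, out_tilde, out_orig, gamma=1.0):
    """Hypothetical sketch of a manipulation objective in the spirit of (11).

    h_tilde   : explanation of the manipulated model g~
    h_target  : target explanation h^t
    out_tilde : outputs of the manipulated model g~
    out_orig  : outputs of the original model g
    gamma     : assumed trade-off hyperparameter between the two terms
    """
    explanation_term = np.mean((h_tilde - h_target) ** 2)
    output_term = np.mean((out_tilde - out_orig) ** 2)
    return explanation_term + gamma * output_term
```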
Quantitative Measures: We assess the similarity between explanation maps using three quantitative measures: the structural similarity index (SSIM), the Pearson correlation coefficient (PCC), and the mean squared error (MSE). SSIM and PCC are relative similarity measures, with SSIM taking values in [0, 1] and PCC in [−1, 1]; larger values indicate higher similarity. The MSE is an absolute error measure for which values close to zero indicate high similarity. We additionally use the MSE and the Kullback-Leibler divergence to assess the similarity of the class scores of the manipulated model $\tilde{g}$ and the original network $g$.
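The PCC, MSE, and KL divergence can be computed as below (SSIM is typically obtained from a library such as scikit-image and is omitted here). The function names are illustrative assumptions:

```python
import numpy as np

def pcc(a, b):
    """Pearson correlation coefficient between two explanation maps."""
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

def mse(a, b):
    """Mean squared error between two maps or output vectors."""
    return float(np.mean((a - b) ** 2))

def kl_divergence(p, q, eps=1e-12):
    """KL divergence between two (normalized) score distributions."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))
```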
Results: For all considered models, datasets, and explanation methods, we find that the manipulated model $\tilde{g}$ has explanations which closely resemble the target map $h^t$, e.g. the MSE between the target and manipulated explanations is of the order $10^{-3}$. At the same time, the manipulated network $\tilde{g}$ has approximately the same output as the original model $g$, i.e. the mean squared error of the outputs after the final softmax non-linearity is of the order $10^{-3}$. The classification accuracy changes by only about 0.2 percent.
Figure 2 illustrates this for examples from the FashionMNIST and CIFAR10 test sets. We stress that we use a single manipulated model for the Gradient, x⊙Grad, and Integrated Gradients methods, which demonstrates that the manipulation generalizes across all considered gradient-based methods.
The left-hand side of Figure 3 shows quantitatively that the manipulated model $\tilde{g}$ closely reproduces the target map $h^t$ over the entire test set of FashionMNIST. We refer to Appendix D for additional similarity measures, examples, and quantitative analysis for all datasets.
Figure 2. Example explanations from the original model $g$ (left) and the manipulated model $\tilde{g}$ (right). Images from the test sets of FashionMNIST (top) and CIFAR10 (bottom).
3. Robust Explanations
Having demonstrated both theoretically and experimentally that explanations are highly vulnerable to model manipulation, we will now use our theoretical insights to propose explanation methods which are significantly more robust under such manipulations.
3.1. TSP Explanations: Theory
In this section, we will define a robust gradient explanation method. Appendix C discusses analogous definitions for other methods.
We can formally define an explanation field $H_g$ which associates to every point $x$ on the data manifold $S$ the corresponding gradient explanation $h_g(x)$ of the classifier $g$. We note that $H_g$ is generically a vector field along the manifold since $h_g(x) \in \mathbb{R}^D \cong T_x M$, i.e. it is an element of the tangent space $T_x M$ of the embedding manifold $M$ and not an element of the tangent space $T_x S$ of the data manifold $S$.
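Numerically, projecting an explanation from the ambient tangent space $T_x M \cong \mathbb{R}^D$ onto the data-manifold tangent space $T_x S$ can be sketched as follows, assuming an (estimated) basis of $T_x S$ is available, e.g. from local PCA around $x$; the function names are illustrative assumptions:

```python
import numpy as np

def tangent_projector(tangent_basis):
    """Orthogonal projector P onto span(tangent_basis).

    tangent_basis: (D, d) matrix whose columns span the (estimated)
    tangent space T_x S of the data manifold at x.
    """
    # Orthonormalize the basis columns, then P = V V^T.
    V, _ = np.linalg.qr(tangent_basis)
    return V @ V.T

def project_explanation(h, tangent_basis):
    """Project an explanation h in R^D onto the tangent space T_x S."""
    return tangent_projector(tangent_basis) @ h
```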
As explained in Section 2.1, we can decompose the tangent space $T_x M$ of the embedding manifold $M$ as $T_x M = T_x S \oplus N_x S$, where $N_x S$ denotes the normal space of $S$ at $x$. Let $P : T_x M \to T_x S$ be the projection onto the first summand of this decomposition. We stress that the form of the projector $P$ depends on the point $x \in S$, but we do not make this explicit in order to simplify notation. We can then define: