
Theorem 2 Let $h_g: \mathbb{R}^D \to \mathbb{R}^D$ be the explanation of classifier $g: \mathbb{R}^D \to \mathbb{R}$ with bounded derivatives $|\nabla_i g(x)| \le C \in \mathbb{R}_+$ for $i = 1, \dots, D$.

For a given target explanation $h^t: \mathbb{R}^D \to \mathbb{R}^D$, there exists another classifier $\tilde{g}: \mathbb{R}^D \to \mathbb{R}$ which completely agrees with the classifier $g$ on the data manifold $S$, i.e.

$$\tilde{g}|_S = g|_S. \quad (3)$$

In particular, both classifiers have the same train, validation, and test loss.

However, its explanation $h_{\tilde{g}}$ closely resembles the target $h^t$, i.e.

$$\text{MSE}(h_{\tilde{g}}(x), h^t(x)) \le \epsilon \quad \forall x \in S, \quad (4)$$

where $\text{MSE}(h, h') = \frac{1}{D} \sum_{i=1}^{D} (h_i - h'_i)^2$ denotes the mean-squared error and $\epsilon = \frac{d}{D}$.
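As a minimal numerical sketch of this statement, the MSE of Eq. (4) and the bound $\epsilon = d/D$ can be written as follows; the dimensions `D` and `d` are illustrative choices, not values from the theorem:

```python
import numpy as np

def mse(h, h_prime):
    """Mean-squared error between two explanation maps, as in Eq. (4)."""
    h, h_prime = np.asarray(h, dtype=float), np.asarray(h_prime, dtype=float)
    return float(np.mean((h - h_prime) ** 2))

# Illustration: with D = 100 ambient dimensions of which d = 5 are
# "parallel" to the data manifold, the theorem's bound is d / D = 0.05.
D, d = 100, 5
epsilon = d / D
```

Note that the bound shrinks as the codimension $D - d$ grows, which is exactly the manifold-hypothesis regime.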

Proof: By Theorem 1, we can find a function $G$ which agrees with $g$ on the data manifold $S$ but has the derivative

$$\nabla G(x) = \left(\nabla_1 g(x), \dots, \nabla_d g(x), h_{d+1}^t(x), \dots, h_D^t(x)\right)$$

for all $x \in S$. By definition, this is its gradient explanation $h_G = \nabla G$.

As explained in Appendix A.2.1, we can assume without loss of generality that $|\nabla_i g(x)| \le 0.5$ for $i \in \{1, \dots, D\}$. We can furthermore rescale the target map such that $|h_i^t| \le 0.5$ for $i \in \{1, \dots, D\}$. This rescaling is merely conventional, as it does not change the relative importance $h_i$ of any input component $x_i$ with respect to the others. It then follows that
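The rescaling step can be illustrated in a few lines; the explanation vector below is a made-up example:

```python
import numpy as np

h = np.array([2.0, -1.0, 4.0])          # hypothetical explanation vector
scale = 0.5 / np.max(np.abs(h))         # bring all components into [-0.5, 0.5]
h_rescaled = scale * h

# The bound now holds ...
assert np.max(np.abs(h_rescaled)) <= 0.5
# ... while the relative importance of the components is unchanged:
assert np.allclose(h_rescaled / h_rescaled[0], h / h[0])
```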

$$\text{MSE}(h_G(x), h^t(x)) = \frac{1}{D} \sum_{i=1}^{D} \left(\nabla_i G(x) - h_i^t(x)\right)^2 .$$

This sum can be decomposed as

$$\frac{1}{D} \sum_{i=1}^{d} \underbrace{\left(\nabla_i g(x) - h_i^t(x)\right)^2}_{\le 1} + \frac{1}{D} \sum_{i=d+1}^{D} \underbrace{\left(\nabla_i G(x) - h_i^t(x)\right)^2}_{=0}$$

and from this, it follows that

$$\text{MSE}(h_G(x), h^t(x)) \le \frac{d}{D} = \epsilon.$$

The proof then concludes by identifying $\tilde{g} = G$. □
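The decomposition in the proof can be checked numerically. The sketch below samples hypothetical gradients bounded by $0.5$ (as in the rescaling step), overwrites the last $D - d$ components of $\nabla G$ with the target as in Theorem 1, and verifies the bound:

```python
import numpy as np

rng = np.random.default_rng(0)
D, d = 100, 5

# Hypothetical bounded gradients and target explanation at some x on S.
grad_g = rng.uniform(-0.5, 0.5, size=D)   # stands in for ∇g(x)
h_t    = rng.uniform(-0.5, 0.5, size=D)   # stands in for h^t(x)

# Build ∇G as in the proof: keep the first d components of ∇g,
# set the remaining D - d components equal to the target.
grad_G = grad_g.copy()
grad_G[d:] = h_t[d:]

mse = np.mean((grad_G - h_t) ** 2)
# Only the first d terms contribute, each at most 1, so MSE <= d / D.
assert mse <= d / D
```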

Intuition: Roughly, this theorem can be understood as follows: two models which behave identically on the data need only agree on the low-dimensional submanifold $S$. The gradients "orthogonal" to the submanifold $S$ are completely undetermined by this requirement. By the manifold assumption, there are however many more "orthogonal" than "parallel" directions, and the explanation is therefore largely controlled by the former. We can use this fact to closely reproduce an arbitrary target explanation while keeping the function's values on the data unchanged.

We stress however that a number of non-trivial differential-geometric arguments are needed to make these statements rigorous and quantitative. For example, it is entirely non-trivial that an extension to the embedding manifold exists for an arbitrary choice of target explanation. This is shown by Theorem 1, whose proof is based on a differential-geometric technique called a partition of unity subordinate to an open cover. See Appendix A.1 for details.

2.3. Explanation Manipulation: Methods

Flat Submanifolds and Logistic Regression: The previous theorem assumes that the data lies on an arbitrarily curved submanifold and therefore has to rely on relatively involved mathematical concepts of differential geometry. We will now illustrate the basic ideas in a much simpler context: we will assume that the data lies on a $d$-dimensional flat hyperplane $S \subset \mathbb{R}^D$.⁶ The points on the hyperplane $S$ obey the relation

$$\forall x \in S : (\hat{w}^{(i)})^T x = b_i, \quad i \in \{1, \dots, D-d\}, \quad (5)$$

where $\{\hat{w}^{(i)} \in \mathbb{R}^D \mid i = 1, \dots, D-d\}$ is a set of normal vectors of the hyperplane $S$ and $b_i \in \mathbb{R}$ are the affine offsets. We furthermore assume that we use logistic regression as the classification algorithm, i.e.

$$g(x) = \sigma(w^T x + c), \quad (6)$$

where $w \in \mathbb{R}^D$, $c \in \mathbb{R}$ are the weights and the bias respectively and $\sigma(x) = \frac{1}{1+\exp(-x)}$ is the sigmoid function. This classifier has the gradient explanation⁷

$$h_{\text{grad}}(x) = w. \quad (7)$$

We can now define a modified classifier by

$$\tilde{g}(x) = \sigma\left(w^T x + \sum_i \lambda_i \left((\hat{w}^{(i)})^T x - b_i\right) + c\right), \quad (8)$$

for arbitrary $\lambda_i \in \mathbb{R}$. By (5), it follows that both classifiers agree on the data manifold $S$, i.e.

$$\forall x \in S : g(x) = \tilde{g}(x), \quad (9)$$

and therefore have the same train, validation, and test error. However, the gradient explanations are now given by

$$h_{\text{grad}}(x) = w + \sum_{i} \lambda_{i} \hat{w}^{(i)}. \quad (10)$$
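The whole construction of Eqs. (5)–(10) fits in a short NumPy sketch. All concrete numbers below (dimensions, weights, the $\lambda_i$) are illustrative assumptions; the point is that the added term $\lambda_i \big((\hat{w}^{(i)})^T x - b_i\big)$ vanishes on $S$ by Eq. (5), so the two classifiers agree on the data while their gradient explanations differ:

```python
import numpy as np

rng = np.random.default_rng(0)
D, d = 5, 2                                   # ambient and manifold dimension

# Orthonormal normal vectors w_hat^(i) (columns of W_hat) and offsets b_i
# defining the flat manifold S = {x : (w_hat^(i))^T x = b_i}  (Eq. 5).
W_hat, _ = np.linalg.qr(rng.normal(size=(D, D - d)))
b = rng.normal(size=D - d)

# Sample points on S: take random x and project onto the hyperplane.
X = rng.normal(size=(20, D))
X = X - (X @ W_hat - b) @ W_hat.T

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Original classifier g  (Eq. 6), with illustrative weights w and bias c.
w = rng.normal(size=D)
c = 0.3
g = lambda x: sigmoid(x @ w + c)

# Manipulated classifier g~  (Eq. 8): the extra term vanishes on S.
lam = rng.normal(size=D - d)                  # arbitrary lambda_i
g_tilde = lambda x: sigmoid(x @ w + (x @ W_hat - b) @ lam + c)

# Both classifiers agree on the manifold  (Eq. 9) ...
assert np.allclose(g(X), g_tilde(X))
# ... but their gradient explanations differ  (Eqs. 7 and 10).
h_g      = w
h_gtilde = w + W_hat @ lam
assert not np.allclose(h_g, h_gtilde)
```

By tuning the free parameters $\lambda_i$, the explanation $w + \sum_i \lambda_i \hat{w}^{(i)}$ can be pushed toward an arbitrary target in the normal directions without any effect on the classifier's outputs on $S$.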

⁶In mathematics, these submanifolds are usually referred to as $d$-flats and only the case $d = D - 1$ is called hyperplane. We refrain from this terminology.

⁷We recall that in calculating the explanation map, we take the derivative before applying the final activation function.