Since the $\lambda_i$ can be chosen freely, we can modify the explanations arbitrarily in directions orthogonal to the data submanifold $S$ (parameterized by the normal vectors $\hat{w}^{(i)}$). Similar statements can be shown for other explanation methods; we refer to Appendix A.3 for more details.
As we will discuss in Section 2.4, one can use these tricks even for data which does not (initially) lie on a hyperplane.
General Case: For the case of arbitrary neural networks and curved data manifolds, we cannot analytically construct the manipulated model $\tilde{g}$. We therefore obtain the model $\tilde{g}$ corresponding to the original model $g$ approximately, by minimizing the loss
$$\mathcal{L} = \sum_{x \in \mathcal{T}} \left[ \left( g(x) - \tilde{g}(x) \right)^2 + \gamma \left\| h_{\tilde{g}}(x) - h^t \right\|^2 \right]$$
by stochastic gradient descent with respect to the parameters of $\tilde{g}$. The training set is denoted by $\mathcal{T}$ and $h^t \in \mathbb{R}^D$ is a specified target explanation. Note that we could also use different targets for various subsets of the data but we will not make this explicit to avoid cluttered notation. The first term in the loss $\mathcal{L}$ ensures that the models $g$ and $\tilde{g}$ have approximately the same output while the second term encourages the explanations of $\tilde{g}$ to closely reproduce the target $h^t$. The relative weighting of these two terms is determined by the hyperparameter $\gamma \in \mathbb{R}_+$.
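A minimal numerical sketch of this optimization, not the paper's actual training setup: for a toy model with a linear logit, the gradient explanation is just the weight vector, so both loss terms can be written down and minimized by plain gradient descent. The data, target, and all variable names below are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data confined to the hyperplane x3 = 0.4 * x2, whose normal is n = (0, 0.4, -1).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
X[:, 2] = 0.4 * X[:, 1]

w = np.array([0.9, 0.1, 0.0])       # parameters of the original model g
g = sigmoid(X @ w)                  # outputs of g on the training set T

# Target explanation h_t; chosen here to be reachable by moving along the normal.
h_t = np.array([0.9, 0.5, -1.0])
gamma = 1.0                         # relative weighting of the two loss terms
w_t = w.copy()                      # parameters of the manipulated model g_tilde

for _ in range(5000):
    g_t = sigmoid(X @ w_t)
    # Gradient of sum_x (g(x) - g_tilde(x))^2 w.r.t. w_t (chain rule through sigmoid).
    grad_out = 2.0 * ((g_t - g) * g_t * (1.0 - g_t)) @ X
    # For a linear logit the gradient explanation equals w_t, so the second
    # loss term is gamma * ||w_t - h_t||^2.
    grad_expl = 2.0 * gamma * (w_t - h_t)
    w_t -= 0.05 * (grad_out + grad_expl)

# After training, g_tilde reproduces the target explanation while keeping
# (approximately) the same outputs as g on the data.
```

In this linear toy case the minimizer can be found exactly; for deep networks and curved manifolds, the paper's point is that the same loss is minimized by stochastic gradient descent instead.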
As we will demonstrate experimentally, the resulting $\tilde{g}$ will closely reproduce the target explanation $h^t$ and have (approximately) the same output as $g$. Crucially, both statements will be seen to hold also for the test set.
2.4. Explanation Manipulation: Practice
In this section, we will demonstrate manipulation of explanations experimentally. We will first discuss applying logistic regression to credit assessment and then proceed to the case of deep neural networks in the context of image classification. The code for all our experiments is publicly available at https://github.com/fairwashing/fairwashing.
Credit Assessment: In the following, we will suppose that a bank uses a logistic regression algorithm to classify whether a prospective client should receive a loan or not. The classification uses the features $x = (x_{\text{gender}}, x_{\text{income}})$ where
and $x_{\text{income}}$ is the income of the applicant. Normalization is chosen such that the features are of the same order of magnitude. Details can be found in Appendix B.
Figure 1. x⊙Grad explanations for original classifier g and manipulated $\tilde{g}$ highlight completely different features. Colored bars show the median of the explanations over multiple examples.
We then define a logistic regression classifier $g$ by choosing the weights $w = (0.9, 0.1)$, i.e. female applicants are severely discriminated against. The discriminating nature of the algorithm may be detected by inspecting, for example, the gradient explanation maps $h_{g}^{\text{grad}} = w$.
Conversely, if the explanations did not show any sign of discrimination for another classifier $\tilde{g}$, the user may interpret this as a sign of its trustworthiness and fairness.
However, the bank can easily "fairwash" the explanations, i.e. hide the fact that the classifier is sexist. This can be done by adding new features which are linearly dependent on the previously used features. As a simple example, one could add the applicant's paid taxes $x_{\text{taxes}}$ as a feature. By definition, it holds that
$$x_{\text{taxes}} = 0.4 \, x_{\text{income}}, \tag{13}$$
where we assume that there is a fixed tax rate of 0.4 on all income. The features used by the classifier are now $x = (x_{\text{gender}}, x_{\text{income}}, x_{\text{taxes}})$. By (13), all data samples $x$ obey
$$\hat{w}^T x = 0 \quad \text{with} \quad \hat{w} \propto (0, 0.4, -1). \tag{14}$$
Therefore, the original classifier $g(x) = \sigma(w^T x)$ with $w = (0.9, 0.1, 0)$ leads to the same output as the classifier $\tilde{g}(x) = \sigma(w^T x + 1000 \hat{w}^T x)$. However, as shown in Figure 1, the classifier $\tilde{g}$ has explanations which suggest that the two financial features (and not the applicant's gender) are important for the classification result.
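This construction can be checked numerically. The sketch below uses made-up feature values and sample sizes; it verifies that $g$ and $\tilde{g}$ agree on all data samples while their gradient explanations differ drastically.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Features (gender, income, taxes) with taxes = 0.4 * income by construction.
rng = np.random.default_rng(1)
gender = rng.integers(0, 2, size=100).astype(float)
income = rng.uniform(0.0, 1.0, size=100)
X = np.stack([gender, income, 0.4 * income], axis=1)

w = np.array([0.9, 0.1, 0.0])            # original weights
n = np.array([0.0, 0.4, -1.0])           # from x_taxes = 0.4 * x_income
w_hat = n / np.linalg.norm(n)            # unit normal of the data hyperplane
w_tilde = w + 1000.0 * w_hat             # weights of the fairwashed classifier

# Identical outputs on the data, since w_hat^T x = 0 for every sample ...
same_outputs = np.allclose(sigmoid(X @ w), sigmoid(X @ w_tilde))
# ... but the gradient explanations w and w_tilde tell opposite stories:
# w is dominated by gender, w_tilde by the two financial features.
```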
This example is merely an (oversimplified) illustration of a general concept: for each additional feature which linearly depends on the previously used features, a condition of the form (14) for some normal vector $\hat{w}$ is obtained. We can then construct a classifier with arbitrary explanation along each of these normal vectors.
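One way to sketch this construction concretely (variable names and the target are illustrative assumptions, not the paper's notation): collect the normal vectors $\hat{w}^{(i)}$ as columns of a matrix and solve a least-squares problem for the coefficients $\lambda_i$ so that the manipulated gradient explanation $w + \sum_i \lambda_i \hat{w}^{(i)}$ comes as close as possible to a target $h^t$.

```python
import numpy as np

w = np.array([0.9, 0.1, 0.0])            # original weights / gradient explanation
# Columns of N: normal vectors of the data manifold, one per redundant feature
# (here just the single normal from x_taxes = 0.4 * x_income).
N = np.array([[0.0], [0.4], [-1.0]])
N = N / np.linalg.norm(N, axis=0)

h_t = np.array([0.9, 0.5, -1.0])         # target explanation (example choice)
# Best coefficients lambda_i: least squares over the span of the normals.
lam, *_ = np.linalg.lstsq(N, h_t - w, rcond=None)
w_tilde = w + N @ lam                    # closest achievable explanation to h_t
```

Since $h^t - w$ here lies exactly in the span of the normals, `w_tilde` matches the target exactly; for a general target, only its off-manifold component can be manipulated in this way.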