del #3
by ZhijianBao - opened

This view is limited to 50 files because it contains too many changes. See the raw diff here.
- NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/TRpJAAK3o0X/Initial_manuscript_md/Initial_manuscript.md +477 -0
- NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/TRpJAAK3o0X/Initial_manuscript_tex/Initial_manuscript.tex +161 -0
- NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/TulqHKf4uPn/Initial_manuscript_md/Initial_manuscript.md +508 -0
- NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/TulqHKf4uPn/Initial_manuscript_tex/Initial_manuscript.tex +276 -0
- NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/UeYQXtI7nsX/Initial_manuscript_md/Initial_manuscript.md +396 -0
- NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/UeYQXtI7nsX/Initial_manuscript_tex/Initial_manuscript.tex +165 -0
- NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/V7TaczasnAk/Initial_manuscript_md/Initial_manuscript.md +0 -0
- NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/V7TaczasnAk/Initial_manuscript_tex/Initial_manuscript.tex +207 -0
- NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/VFGgG8XpFLu/Initial_manuscript_md/Initial_manuscript.md +526 -0
- NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/VFGgG8XpFLu/Initial_manuscript_tex/Initial_manuscript.tex +197 -0
- NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/VhBtAHeIUaB/Initial_manuscript_md/Initial_manuscript.md +659 -0
- NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/VhBtAHeIUaB/Initial_manuscript_tex/Initial_manuscript.tex +233 -0
- NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/Vz_gE-nrFu9/Initial_manuscript_md/Initial_manuscript.md +244 -0
- NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/Vz_gE-nrFu9/Initial_manuscript_tex/Initial_manuscript.tex +245 -0
- NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/Y8PmDhBdmv/Initial_manuscript_md/Initial_manuscript.md +290 -0
- NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/Y8PmDhBdmv/Initial_manuscript_tex/Initial_manuscript.tex +157 -0
- NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/YzPaQcK2Ko4/Initial_manuscript_md/Initial_manuscript.md +205 -0
- NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/YzPaQcK2Ko4/Initial_manuscript_tex/Initial_manuscript.tex +246 -0
- NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/Z31SloFrp7/Initial_manuscript_md/Initial_manuscript.md +503 -0
- NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/Z31SloFrp7/Initial_manuscript_tex/Initial_manuscript.tex +265 -0
- NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/_-gZhHVnI3e/Initial_manuscript_md/Initial_manuscript.md +374 -0
- NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/_-gZhHVnI3e/Initial_manuscript_tex/Initial_manuscript.tex +239 -0
- NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/_Fl5G8NCA2/Initial_manuscript_md/Initial_manuscript.md +282 -0
- NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/_Fl5G8NCA2/Initial_manuscript_tex/Initial_manuscript.tex +189 -0
- NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/_QcreQjxHi/Initial_manuscript_md/Initial_manuscript.md +386 -0
- NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/_QcreQjxHi/Initial_manuscript_tex/Initial_manuscript.tex +173 -0
- NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/_h_ikjOEGL_/Initial_manuscript_md/Initial_manuscript.md +255 -0
- NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/_h_ikjOEGL_/Initial_manuscript_tex/Initial_manuscript.tex +145 -0
- NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/uBlxaWPm8l/Initial_manuscript_md/Initial_manuscript.md +165 -0
- NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/uBlxaWPm8l/Initial_manuscript_tex/Initial_manuscript.tex +157 -0
- NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/uEQTusqzEg-/Initial_manuscript_md/Initial_manuscript.md +201 -0
- NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/uEQTusqzEg-/Initial_manuscript_tex/Initial_manuscript.tex +127 -0
- NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/uPF2bs14E3p/Initial_manuscript_md/Initial_manuscript.md +318 -0
- NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/uPF2bs14E3p/Initial_manuscript_tex/Initial_manuscript.tex +218 -0
- NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/zaJsDuwwdlJ/Initial_manuscript_md/Initial_manuscript.md +171 -0
- NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/zaJsDuwwdlJ/Initial_manuscript_tex/Initial_manuscript.tex +198 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/-O-A_6M_oi/Initial_manuscript_md/Initial_manuscript.md +444 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/-O-A_6M_oi/Initial_manuscript_tex/Initial_manuscript.tex +329 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/0nNhIdvKQkU/Initial_manuscript_md/Initial_manuscript.md +655 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/0nNhIdvKQkU/Initial_manuscript_tex/Initial_manuscript.tex +638 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/0yzM0ibZgg/Initial_manuscript_md/Initial_manuscript.md +521 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/0yzM0ibZgg/Initial_manuscript_tex/Initial_manuscript.tex +482 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/1Hwy5yfNadS/Initial_manuscript_md/Initial_manuscript.md +321 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/1Hwy5yfNadS/Initial_manuscript_tex/Initial_manuscript.tex +438 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/1sGdp5g0NP/Initial_manuscript_md/Initial_manuscript.md +891 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/1sGdp5g0NP/Initial_manuscript_tex/Initial_manuscript.tex +468 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/1vkyEY-HeLY/Initial_manuscript_md/Initial_manuscript.md +1139 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/1vkyEY-HeLY/Initial_manuscript_tex/Initial_manuscript.tex +405 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/4CTnlIc1rhw/Initial_manuscript_md/Initial_manuscript.md +777 -0
- NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/4CTnlIc1rhw/Initial_manuscript_tex/Initial_manuscript.tex +905 -0
NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/TRpJAAK3o0X/Initial_manuscript_md/Initial_manuscript.md
ADDED
@@ -0,0 +1,477 @@
# Group Excess Risk Bound of Overparameterized Linear Regression with Constant-Stepsize SGD

Anonymous Author(s)

Affiliation

Address

email

## Abstract
It has been observed that machine learning models trained using stochastic gradient descent (SGD) exhibit poor generalization to certain groups within and outside the population from which training instances are sampled. This has serious ramifications for the fairness, privacy, robustness, and out-of-distribution (OOD) generalization of machine learning. Hence, we theoretically characterize the inherent generalization of SGD-learned overparameterized linear regression to intra- and extra-population groups. We do this by proving an excess risk bound for an arbitrary group in terms of the full eigenspectra of the data covariance matrices of the group and population. We additionally provide a novel interpretation of the bound in terms of how the group and population data distributions differ and the effective dimension of SGD, and connect these factors to real-world challenges in practicing trustworthy machine learning. We further empirically validate the tightness of our bound on simulated data.

## 1 Introduction
Much recent work has sought to better understand the inductive biases of stochastic gradient descent (SGD), such as benign overfitting and implicit regularization in overparameterized settings [1, 2, 3]. However, this line of literature has overwhelmingly focused on bounding the excess risk of an SGD-learned model over the entire population $\mathcal{P}$ from which training instances are sampled, and has not investigated how SGD (e.g., its assumption that training data are IID) yields poor model generalization to intra-population groups ${\mathcal{G}}_{\text{intra}}$ (i.e., subsets of the population) and extra-population groups ${\mathcal{G}}_{\text{extra}}$ (i.e., instances that fall outside the population). We illustrate these concepts in Figure 1.

Establishing the theory behind this phenomenon is critical, as it provides provable guarantees about the trustworthiness (e.g., fairness, privacy, robustness, and out-of-distribution generalization) of SGD-learned models. As an example, consider an automated candidate screening system that a company trains on the data (e.g., number of years previously worked, relevant skills) of a sample from the population $\mathcal{P}$ of past job applicants [4]. In the context of fairness, many works have observed that for models trained using SGD, group-imbalanced data distributions translate to generalization disparities [5, 6, 7, 8]. Hence, the candidate screening system may generalize poorly for minoritized groups ${\mathcal{G}}_{\text{intra}}$ (i.e., not satisfy equal opportunity [9]), which can yield hiring discrimination. This phenomenon also has implications for privacy, as adversaries, against the desire of a job applicant, can infer whether the applicant's data were used to train the candidate screening system, based on the system's loss on the applicant [10]. When considering robustness, we may be interested in how well the candidate screening system generalizes to a target group ${\mathcal{G}}_{\text{intra}}$ of applicants when $\mathcal{P}$ is noisy or corrupted [11, 12]. Finally, in the context of out-of-distribution generalization, models deployed in the real world often have to deal with data distributions that differ from the training distribution [13]; for instance, the candidate screening system, if trained prior to a recession, may generalize poorly to stellar job applicants ${\mathcal{G}}_{\text{extra}}$ who were laid off during the recession for reasons beyond their control. Therefore, it is paramount to understand how SGD-learned models generalize to extra-population groups [14], especially in terms of how the properties of the data distributions for these groups differ from those of the training distribution.
| 20 |
+
|
| 21 |
+

|
| 22 |
+
|
| 23 |
+
Figure 1: Euler diagram of a population $\mathcal{P}$ , an intra-population group ${\mathcal{G}}_{\text{intra }}$ , and an extra-population group ${\mathcal{G}}_{\text{extra }}$ , as well as a visual depiction of their respective possible data distributions ${\mathcal{D}}_{\mathcal{P}},{\mathcal{D}}_{{\mathcal{G}}_{\text{intra }}},{\mathcal{D}}_{{\mathcal{G}}_{\text{extra }}}$ for a feature $x$ (which are noticeably distinct).
|
| 24 |
+
|
| 25 |
+
Towards bolstering the theoretical foundations of trustworthy machine learning, we characterize the inherent generalization of constant-stepsize SGD (with iterate averaging) to groups within and outside the population from which training instances are sampled, for an arguably simple setting: overparameterized linear regression. We prove an excess risk bound for an arbitrary group which can be decomposed into bias and variance factors that depend on the full eigenspectra of the data covariance matrices of the group and population. We then re-express the excess risk bound in terms of: 1) how the group and population data distributions diverge, 2) how the distributions' feature variances differ, and 3) the effective dimension of SGD. We connect these three components to real-world challenges in practicing trustworthy machine learning, such as limited features and sample size disparities for minoritized groups. Finally, we empirically validate the tightness of our bound on simulated data. As a whole, we set the stage for future research to extend our results to deep neural networks and models trained with other variants of gradient descent (e.g., minibatch GD, SGD with learning rate scheduling).
|
| 26 |
+
|
| 27 |
+
## 2 Related work
|
| 28 |
+
|
| 29 |
+
Imbalanced learning The tendency of machine learning models to overpredict the majority class in the presence of class-imbalanced training samples [6, 15, 7, 16, 8, 17] and underperform for minoritized groups [18, 19] has been extensively empirically and theoretically studied. Some papers have theoretically investigated worst-case group generalization in the overparameterized regime [20, 21]. However, these works have not examined how SGD in particular (e.g., its assumption that training data are IID) causes poor generalization for a minoritized group, even in the arguably simple case of overparameterized linear regression. We do so in terms of the data covariance matrices of the group and population, rather than representation dimension or information, which affords greater interpretability.
|
**Inductive biases of SGD** Theoretically analyzing the inductive biases of stochastic gradient descent (e.g., implicit regularization, benign overfitting), especially in the overparameterized regime, is a nascent area of research [2, 1, 3, 22, 23, 24] and strengthens our understanding of how deep learning works. In this paper, we make novel contributions to learning theory by analyzing constant-stepsize SGD with iterate averaging when training instances are sampled from a different distribution than the evaluation distribution. We do so by extending the analysis of [1], which only bounds the excess risk of SGD-learned linear regression over the entire population from which training instances are sampled, and not over a particular group.

**Trustworthy machine learning** Numerous works in fair machine learning have explored the implications of generalization disparities among groups (known as equal opportunity [9]) for model-induced harms faced by minoritized groups [25, 5]. For example, in the case of automated loan approval, if white men enjoy better model generalization, their loan applications will be less likely to be incorrectly rejected compared to those of women and people of color. Prior research has also theoretically and empirically studied worst-case group generalization in the context of fairness without demographics and distributionally robust optimization [26, 27, 28]. In this work, we prove a group excess risk bound for overparameterized linear regression and contextualize the bound in terms of real-world challenges in practicing trustworthy machine learning.
## 3 Problem setup

The linear regression problem of interest is $\mathop{\min }\limits_{w}{L}_{\mathcal{D}}\left( w\right)$, where ${L}_{\mathcal{D}}\left( w\right) = \frac{1}{2}{\mathbb{E}}_{\left( x, y\right) \sim \mathcal{D}}\left\lbrack {\left( y-\langle w, x\rangle \right) }^{2}\right\rbrack$. In this equation, $x \in \mathcal{H}$ is the feature vector (we assume $\dim \left( \mathcal{H}\right) = \infty$ to model the overparameterized regime), $y \in \mathbb{R}$ is the response, $w \in \mathcal{H}$ is the weight vector to be optimized, and $\mathcal{D}$ is the arbitrary population distribution over $x$ and $y$. Furthermore, suppose an arbitrary group $m$ (within or outside the population) has the arbitrary distribution ${\mathcal{D}}_{m}$ over $x$ and $y$. Now, assume the optimal parameters ${w}_{m}^{ * }$ for group $m$ satisfy the first-order optimality condition $\nabla {L}_{{\mathcal{D}}_{m}}\left( {w}_{m}^{ * }\right) = {\mathbb{E}}_{\left( x, y\right) \sim {\mathcal{D}}_{m}}\left\lbrack \left( y - \left\langle {w}_{m}^{ * }, x\right\rangle \right) x\right\rbrack = 0.$
In this paper, we consider constant-stepsize SGD with iterate averaging; at each iteration $t$, a training instance $\left( {x}_{t},{y}_{t}\right) \sim \mathcal{D}$ is independently observed and the weight is updated as follows:

$$
{w}_{t} \mathrel{\text{:=}} {w}_{t - 1} - \gamma \left( \left\langle {w}_{t - 1},{x}_{t}\right\rangle - {y}_{t}\right) {x}_{t},\quad t = 1,\ldots, N,
$$

where $\gamma > 0$ is a constant stepsize, $N$ is the number of samples observed, and the weights are initialized as ${w}_{0} \in \mathcal{H}$. Following [1], the final output is the average of the iterates ${\bar{w}}_{N} \mathrel{\text{:=}} \frac{1}{N}\mathop{\sum }\limits_{t = 0}^{N - 1}{w}_{t}$.
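The update and averaging scheme above can be sketched in a few lines. This is a minimal finite-dimensional simulation of the procedure (the paper's setting is infinite-dimensional); the function name and dimensions are ours, chosen purely for illustration:

```python
import numpy as np

def sgd_iterate_average(X, y, gamma, w0=None):
    """Single-pass constant-stepsize SGD for linear regression,
    returning the average of the iterates w_0, ..., w_{N-1}."""
    N, d = X.shape
    w = np.zeros(d) if w0 is None else w0.copy()
    w_bar = np.zeros(d)
    for t in range(N):
        w_bar += w / N  # average runs over w_0 through w_{N-1}
        # gradient of 0.5 * (<w, x_t> - y_t)^2 with respect to w
        w = w - gamma * (w @ X[t] - y[t]) * X[t]
    return w_bar
```

Each training instance is used exactly once, matching the single-pass streaming setting analyzed in the paper.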
## 4 Main result

We now introduce relevant notation and our assumptions (which are similar to those in [1]), as well as state our main result.
Assumption 1 (Regularity conditions) For each group $m$, assume ${H}_{m} \mathrel{\text{:=}} {\mathbb{E}}_{\left( x, y\right) \sim {\mathcal{D}}_{m}}\left\lbrack x{x}^{T}\right\rbrack$ (i.e., the data covariance matrix${}^{1}$ of ${\mathcal{D}}_{m}$) and ${\mathbb{E}}_{\left( x, y\right) \sim {\mathcal{D}}_{m}}\left\lbrack {y}^{2}\right\rbrack$ exist and are finite. Furthermore, assume that $\operatorname{tr}\left( {H}_{m}\right)$ is finite (i.e., ${H}_{m}$ is trace-class) and ${H}_{m}$ is symmetric positive semi-definite (PSD). Let ${\left\{ {\lambda }_{i}\left( {H}_{m}\right) \right\} }_{i = 1}^{\infty }$ be the eigenvalues of ${H}_{m}$ sorted in decreasing order.
Additionally, denote the population data covariance matrix $H \mathrel{\text{:=}} {\mathbb{E}}_{\left( x, y\right) \sim \mathcal{D}}\left\lbrack x{x}^{T}\right\rbrack$. Let ${\left\{ {\lambda }_{i}\left( H\right) \right\} }_{i = 1}^{\infty }$ be the eigenvalues of $H$ sorted in decreasing order. Furthermore, suppose the eigendecomposition of $H$ is $H = \mathop{\sum }\limits_{i}{\lambda }_{i}{v}_{i}{v}_{i}^{T}$; then, the tail ${H}_{k : \infty } \mathrel{\text{:=}} \mathop{\sum }\limits_{i > k}{\lambda }_{i}{v}_{i}{v}_{i}^{T}$. Similarly, the head of the identity matrix is ${I}_{0 : k} \mathrel{\text{:=}} \mathop{\sum }\limits_{i = 1}^{k}{v}_{i}{v}_{i}^{T}$.
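In finite dimensions, the tail operator ${H}_{k:\infty}$ and head-of-identity ${I}_{0:k}$ can be formed explicitly from an eigendecomposition. A small sketch (a finite-dimensional stand-in for the trace-class setting; the function name is ours):

```python
import numpy as np

def spectral_split(H, k):
    """Build the tail H_{k:inf} and head-of-identity I_{0:k} of a
    symmetric PSD matrix H from its top-k eigenvectors."""
    lam, V = np.linalg.eigh(H)       # eigh returns ascending eigenvalues
    lam, V = lam[::-1], V[:, ::-1]   # re-sort eigenpairs in decreasing order
    H_tail = (V[:, k:] * lam[k:]) @ V[:, k:].T  # sum_{i>k} lam_i v_i v_i^T
    I_head = V[:, :k] @ V[:, :k].T              # sum_{i<=k} v_i v_i^T
    return H_tail, I_head
```

The head projector is what the $\|\cdot\|_{I_{0:k^*}}$ norm in the variance bound is taken against, and the tail matrix defines the $\|\cdot\|_{H_{k^*:\infty}}$ norm.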
We also define the following linear operators (which we assume to exist and be finite): $\mathcal{I} = I \otimes I$, $\mathcal{M} = {\mathbb{E}}_{\left( x, y\right) \sim \mathcal{D}}\left\lbrack x \otimes x \otimes x \otimes x\right\rbrack$ (where $\otimes$ is the tensor product), and $\mathcal{T} = H \otimes I + I \otimes H - \gamma \mathcal{M}$. All the results about these operators from Lemma 4.1 in [1] hold.
Assumption 2 (Fourth moment conditions) Assume that there exists a positive constant $\alpha > 0$ such that, for any PSD matrix $A$, it holds that ${\mathbb{E}}_{\left( x, y\right) \sim \mathcal{D}}\left\lbrack x{x}^{T}{Ax}{x}^{T}\right\rbrack \preccurlyeq \alpha \operatorname{tr}\left( {HA}\right) H$. This assumption is satisfied for Gaussian distributions with $\alpha = 3$, and it is further implied if the distribution of ${H}^{-\frac{1}{2}}x$ has sub-Gaussian tails (see Lemma A.1 in [1]).

---

${}^{1}$ We refer to ${H}_{m}$ as the data covariance matrix (following [1]), but it is in fact the raw second moment. We do not assume $\mathbb{E}\left\lbrack x\right\rbrack = 0$.

---

Assumption 3 (Noise conditions) Suppose that ${\Sigma }_{\text{noise}} \mathrel{\text{:=}} {\mathbb{E}}_{\left( x, y\right) \sim \mathcal{D}}\left\lbrack {\left( y - \left\langle {w}_{m}^{ * }, x\right\rangle \right) }^{2}x{x}^{T}\right\rbrack$ (i.e., the covariance matrix of the gradient noise at ${w}_{m}^{ * }$)${}^{2}$ and ${\sigma }^{2} \mathrel{\text{:=}} {\begin{Vmatrix}{H}^{-\frac{1}{2}}{\Sigma }_{\text{noise}}{H}^{-\frac{1}{2}}\end{Vmatrix}}_{2}$ (i.e., the additive noise) exist and are finite.
Assumption 4 (Learning rate condition) Assume $\gamma \leq \frac{1}{\alpha \operatorname{tr}\left( H\right) }$.

The excess risk of a trained model for group $m$ quantifies how much worse the model performs for group $m$ than the optimal model for $m$ (i.e., the model parameterized by ${w}_{m}^{ * }$) does. In Theorem 1, we present an excess risk bound of overparameterized linear regression with constant-stepsize SGD (with iterate averaging) for group $m$ in terms of the full eigenspectra of the data covariance matrices of the group and population.
Under the assumptions above, we are ready to state the main theorem.

Theorem 1 We can bound the excess risk ${\mathcal{E}}_{m}$ for group $m$ as:

$$
{\mathcal{E}}_{m} = {\mathbb{E}}_{\mathcal{D}}\left\lbrack {L}_{{\mathcal{D}}_{m}}\left( {\bar{w}}_{N}\right) \right\rbrack - {L}_{{\mathcal{D}}_{m}}\left( {w}_{m}^{ * }\right) \leq 2 \cdot \text{EffectiveBias} + 2 \cdot \text{EffectiveVar},
$$
where:

$$
\text{EffectiveBias} \leq \left\{ \begin{array}{ll} \frac{{\lambda }_{1}\left( {H}_{m}\right) {\begin{Vmatrix}{w}_{0} - {w}_{m}^{ * }\end{Vmatrix}}_{2}^{2}}{{\gamma }^{2}{N}^{2}{\lambda }_{1}^{2}\left( H\right) }, & {\lambda }_{1}\left( H\right) \geq \frac{1}{\gamma N} \\ {\lambda }_{1}\left( {H}_{m}\right) {\begin{Vmatrix}{w}_{0} - {w}_{m}^{ * }\end{Vmatrix}}_{2}^{2}, & \text{otherwise} \end{array}\right.
$$

$$
\text{EffectiveVar} \leq \frac{\frac{2\alpha }{N\gamma }\left( {\begin{Vmatrix}{w}_{0} - {w}_{m}^{ * }\end{Vmatrix}}_{{I}_{0 : {k}^{ * }}}^{2} + {N\gamma }{\begin{Vmatrix}{w}_{0} - {w}_{m}^{ * }\end{Vmatrix}}_{{H}_{{k}^{ * }:\infty }}^{2}\right) + {\sigma }^{2}}{1 - {\gamma \alpha }\operatorname{tr}\left( H\right) } \cdot \left( \frac{1}{N}\underset{\text{head}}{\underbrace{\mathop{\sum }\limits_{i \leq {k}^{ * }}\frac{{\lambda }_{i}\left( {H}_{m}\right) }{{\lambda }_{i}\left( H\right) }}} + N{\gamma }^{2}\underset{\text{tail}}{\underbrace{\mathop{\sum }\limits_{i > {k}^{ * }}{\lambda }_{i}\left( {H}_{m}\right) {\lambda }_{i}\left( H\right) }}\right),
$$
where ${k}^{ * } = \max \left\{ k : {\lambda }_{k}\left( H\right) \geq \frac{1}{\gamma N}\right\}$. As the bound suggests, to obtain a vanishing bound, we need:

1. A sufficiently large sample from the population: ${\lambda }_{1}\left( H\right) \geq \frac{1}{\gamma N}$

2. The head to converge in $N$: $\mathop{\sum }\limits_{i \leq {k}^{ * }}\frac{{\lambda }_{i}\left( {H}_{m}\right) }{{\lambda }_{i}\left( H\right) } = o\left( N\right)$

3. The tail to converge in $N$: $\mathop{\sum }\limits_{i > {k}^{ * }}{\lambda }_{i}\left( {H}_{m}\right) {\lambda }_{i}\left( H\right) = o\left( 1/N\right)$

We prove Theorem 1 in Section A. In the following section, we contextualize the group excess risk bound through the lens of real-world challenges in practicing trustworthy machine learning.
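For a concrete eigenspectrum, the effective dimension ${k}^{*}$ and the head/tail sums in the variance term are straightforward to compute. The sketch below (our own illustrative helper names, using a finite truncation of the infinite spectra) makes the vanishing conditions checkable numerically:

```python
import numpy as np

def effective_dimension(lam_H, gamma, N):
    """k* = max{k : lambda_k(H) >= 1/(gamma * N)}, with lam_H sorted
    in decreasing order; returned as a count (1-indexed k)."""
    return int(np.sum(lam_H >= 1.0 / (gamma * N)))

def head_and_tail(lam_Hm, lam_H, gamma, N):
    """Head and tail sums appearing in the variance term of Theorem 1."""
    k = effective_dimension(lam_H, gamma, N)
    head = np.sum(lam_Hm[:k] / lam_H[:k])   # sum_{i<=k*} lam_i(H_m)/lam_i(H)
    tail = np.sum(lam_Hm[k:] * lam_H[k:])   # sum_{i>k*} lam_i(H_m)*lam_i(H)
    return k, head, tail
```

For a rapidly decaying spectrum such as ${\lambda}_{i}(H) = {i}^{-2}$, only a handful of directions fall in the head, so the variance contribution is dominated by a small effective dimension.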
### 4.1 Interpreting the group excess risk bound

In interpreting the bound for ${\mathcal{E}}_{m}$, we consider the case where the eigenspectrum of $H$ rapidly decays and thus focus on the head of the bound.${}^{3}$ We first re-express the head in terms of the variance and mean of ${\mathcal{D}}_{m}$ and $\mathcal{D}$. We denote $\mu \mathrel{\text{:=}} {\mathbb{E}}_{\left( x, y\right) \sim \mathcal{D}}\left\lbrack x\right\rbrack$, $\Sigma \mathrel{\text{:=}} {\mathbb{E}}_{\left( x, y\right) \sim \mathcal{D}}\left\lbrack \left( x - \mu \right) {\left( x - \mu \right) }^{T}\right\rbrack$, ${\mu }_{m} \mathrel{\text{:=}} {\mathbb{E}}_{\left( x, y\right) \sim {\mathcal{D}}_{m}}\left\lbrack x\right\rbrack$, and ${\Sigma }_{m} \mathrel{\text{:=}} {\mathbb{E}}_{\left( x, y\right) \sim {\mathcal{D}}_{m}}\left\lbrack \left( x - {\mu }_{m}\right) {\left( x - {\mu }_{m}\right) }^{T}\right\rbrack$. Without loss of generality, we assume $\mu = 0$.

---

${}^{2}$ Recall that ${w}_{m}^{ * }$ is an optimum of ${L}_{{\mathcal{D}}_{m}}\left( w\right)$.

${}^{3}$ We leave rigorously analyzing the tail of the bound as future work. If the eigenspectrum of $H$ does not decay rapidly (i.e., there exist many high-variance features), then the variance error of the group excess risk will be higher.

---

Then:
$$
\mathop{\sum }\limits_{i \leq {k}^{ * }}\frac{{\lambda }_{i}\left( {H}_{m}\right) }{{\lambda }_{i}\left( H\right) } = \mathop{\sum }\limits_{i \leq {k}^{ * }}\frac{{\lambda }_{i}\left( {\Sigma }_{m}\right) + {\lambda }_{i}\left( {\mu }_{m}{\mu }_{m}^{T}\right) }{{\lambda }_{i}\left( \Sigma \right) + 0}
$$

$$
= \frac{{\lambda }_{1}\left( {\Sigma }_{m}\right) + {\begin{Vmatrix}{\mu }_{m}\end{Vmatrix}}_{2}^{2}}{{\lambda }_{1}\left( \Sigma \right) } + \mathop{\sum }\limits_{2 \leq i \leq {k}^{ * }}\frac{{\lambda }_{i}\left( {\Sigma }_{m}\right) }{{\lambda }_{i}\left( \Sigma \right) }
$$

$$
= 2\underset{\text{distributional difference}}{\underbrace{\left\lbrack \mathrm{KL}\left( {p}_{1},{q}_{1}\right) + \mathop{\sum }\limits_{2 \leq i \leq {k}^{ * }}\mathrm{KL}\left( {p}_{i},{q}_{i}\right) \right\rbrack }} + \underset{\text{relative feature variance}}{\underbrace{\mathop{\sum }\limits_{i \leq {k}^{ * }}\log \frac{{\lambda }_{i}\left( {\Sigma }_{m}\right) }{{\lambda }_{i}\left( \Sigma \right) }}} + \underset{\text{effective dimension}}{\underbrace{{k}^{ * }}},
$$

where ${p}_{1} = \mathcal{N}\left( {\begin{Vmatrix}{\mu }_{m}\end{Vmatrix}}_{2},{\lambda }_{1}\left( {\Sigma }_{m}\right) \right)$, ${p}_{i} = \mathcal{N}\left( 0,{\lambda }_{i}\left( {\Sigma }_{m}\right) \right)$, and ${q}_{i} = \mathcal{N}\left( 0,{\lambda }_{i}\left( \Sigma \right) \right)$ [29]. This result shows that the excess risk for group $m$ can be minimized (and thus generalization to group $m$ can be improved) by:
1. Making the distributional difference between group $m$ and the population smaller. This result corroborates findings in the fairness literature that randomly oversampling or increasing training data from minoritized groups (thereby boosting the representation of group $m$ in the population) improves worst-case group generalization [19].

2. Minimizing the variance of feature values in group $m$ relative to the variance of feature values in the population. High relative feature variance can occur when group $m$ has sparse or noisy data, which poses a challenge in the real world because minoritized groups are often sidelined in data collection [30] and data are only partially observed [31]. This finding is also consistent with the literature on SGD's implicit bias to rely less on high-variance features [8].

3. Reducing the effective dimension ${k}^{ * }$ of SGD. Recall that ${k}^{ * } = \max \left\{ k : {\lambda }_{k}\left( H\right) \geq \frac{1}{\gamma N}\right\}$ and $\gamma \leq \frac{1}{\alpha \operatorname{tr}\left( H\right) } = \frac{1}{\alpha \mathop{\sum }\limits_{i}{\lambda }_{i}\left( H\right) }$; therefore, ${k}^{ * }$ can be reduced by: 1) decreasing the variance of feature values in the population $\mathop{\sum }\limits_{i}{\lambda }_{i}\left( H\right)$ and 2) increasing the number of training samples $N$. This is consistent with intuition, as 1) SGD implicitly relies less on high-variance features [8] and 2) increasing the number of randomly-sampled training instances can improve the representation of minoritized groups in the training data.
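Per index $i$, the KL-based decomposition above amounts to the scalar identity $\lambda_i(H_m)/\lambda_i(H) = 2\,\mathrm{KL}(p_i, q_i) + \log\left(\lambda_i(\Sigma_m)/\lambda_i(\Sigma)\right) + 1$ (the $+1$ terms sum to ${k}^{*}$ over the head). This can be checked numerically with the standard closed form for the KL divergence between univariate Gaussians; the function names below are ours:

```python
import math

def kl_gauss(mu1, var1, mu2, var2):
    """KL(N(mu1, var1) || N(mu2, var2)) for univariate Gaussians."""
    return (math.log(var2 / var1) / 2
            + (var1 + (mu1 - mu2) ** 2) / (2 * var2) - 0.5)

def head_ratio_decomposed(mu_m, var_m, var):
    """Reassemble lambda_i(H_m)/lambda_i(H) = (var_m + mu_m^2)/var from
    2*KL + log-variance-ratio + 1, per the decomposition in Section 4.1."""
    return 2 * kl_gauss(mu_m, var_m, 0.0, var) + math.log(var_m / var) + 1.0
```

Expanding the KL term shows the log-variance contributions cancel against the relative-feature-variance term, leaving exactly the raw second-moment ratio $(\lambda_i(\Sigma_m) + \mu_{m,i}^2)/\lambda_i(\Sigma)$.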
While, theoretically, it seems that increasing the representation of minoritized groups in the training data and better including them in data collection improves generalization to such groups, it is important not to engage in predatory inclusion and exploitative data collection practices${}^{4}$. We further emphasize that simply increasing the sheer number of samples, especially without analyzing the randomness or validity of sampling strategies, does not imply increasing the representation of minoritized groups in the training data [32]. Overall, we believe that one of the first steps of socially conscientious data work is to consider how data collection practices reinforce and contribute to the power relations and complex social inequality experienced by minoritized groups [33].
| 140 |
+
|
| 141 |
+
## 5 Empirical results
To investigate the tightness of our group excess risk bound, we empirically examine how well our bound aligns with the real group excess risk in a simulated setting, wherein we have control over $\mathcal{D}$ and $\mathcal{D}_m$. In particular, we assume $\mathcal{D} := p\mathcal{D}_m + (1 - p)\mathcal{D}_{\text{rest}}$ is a mixture distribution that interpolates $\mathcal{D}_m := \mathcal{N}(\mu_m, \Sigma_m)$ and $\mathcal{D}_{\text{rest}} := \mathcal{N}(\mu_{\text{rest}}, \Sigma_{\text{rest}})$ for $p \in [0, 1]$. If $p \ll 0.5$, $m$ could be considered a minoritized group, and the excess risk for group $m$ would have implications for fairness (Section 1). If $p \gg 0.5$, $\mathcal{D}_{\text{rest}}$ could be viewed as noise, so a model's excess risk for group $m$ would offer insight into the robustness of the model (Section 1). $p = 0$ models a privacy-risk or OOD setting, as $\mathcal{D}_m$ would be an extra-population group (Section 1). In our experiments, we compare the group excess risk and our bound thereof for various values of $p \in [0, 1]$.
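Drawing training data from such a mixture is straightforward; below is a minimal sketch of our own (the dimensions, means, and covariances are placeholders, not the paper's problem instances):

```python
import numpy as np

rng = np.random.default_rng(0)

d, p = 5, 0.2                              # illustrative dimension and mixture weight
mu_m, mu_rest = np.ones(d), np.zeros(d)    # hypothetical group and rest means
Sigma_m = Sigma_rest = np.eye(d)           # hypothetical (shared) covariances

def sample_mixture(n):
    """Draw n points from D = p * N(mu_m, Sigma_m) + (1 - p) * N(mu_rest, Sigma_rest)."""
    from_m = rng.random(n) < p             # Bernoulli(p) group-membership indicator
    draws_m = rng.multivariate_normal(mu_m, Sigma_m, n)
    draws_rest = rng.multivariate_normal(mu_rest, Sigma_rest, n)
    return np.where(from_m[:, None], draws_m, draws_rest), from_m

x, from_m = sample_mixture(10_000)
# The empirical mean should approach p * mu_m + (1 - p) * mu_rest.
print(x.mean(axis=0).round(2))
```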
Inspired by [1] (Section 6), we consider two over-parameterized linear regression problem instances with different rates of eigenspectrum decay for $H$ that satisfy our assumptions from Section 4:
$$
\text{1. } \mu_{\text{rest}} := \mathbf{0}, \quad \mu_m[i] = \lambda_i(\Sigma_m) = \lambda_i(\Sigma_{\text{rest}}) := (i + 1)^{-1}\log^{-2}(i + 1)
$$

$$
\text{2. } \mu_{\text{rest}} := \mathbf{0}, \quad \mu_m[i] = \lambda_i(\Sigma_m) = \lambda_i(\Sigma_{\text{rest}}) := i^{-2}
$$
---
${}^{4}$ https://slideslive.com/38955136/beyond-the-fairness-rhetoric-in-ml
---
Figure 2: True group excess risk and our bound thereof for $p \in \{ {0.0},{0.1},{0.2},\ldots ,{0.9},{1.0}\}$ for problem instances 1 (left) and 2 (right). Each data point in the plots is averaged over 10 independent runs, and the true group excess risk data points are approximated over ${10}^{5}$ samples from ${\mathcal{D}}_{m}$ .
For both problem instances, $w_m^*[i] = \frac{i^{-1}}{2}$ and $w_{\text{rest}}^*[i] = i^{-1}$ (where $w_{\text{rest}}^*$ are the optimal parameters relating $(x, y) \sim \mathcal{D}_{\text{rest}}$). Additionally, $\sigma = 0.1$. We choose sufficiently different $\mu_m, \mu_{\text{rest}}$ and $w_m^*, w_{\text{rest}}^*$ and a smaller $\sigma$ than [1] to enlarge the total variation distance between $\mathcal{D}_m$ and $\mathcal{D}_{\text{rest}}$, towards stress-testing our group excess risk bound. Otherwise, we follow the same experimental settings as [1]: $N = 200$, $d = 2000 \gg N$ (to simulate overparameterization), $\alpha = 6$, and $\gamma = \frac{1}{\alpha \operatorname{tr}(H)}$.
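For concreteness, the training protocol (single-pass, constant-stepsize SGD with full iterate averaging from $w_0 = 0$) can be sketched as follows. This is our own scaled-down illustration with $d = 50$ and a single population with the instance-2 spectrum, not the paper's full two-distribution experiment:

```python
import numpy as np

rng = np.random.default_rng(1)

d, N, alpha, sigma = 50, 200, 6, 0.1
lambdas = np.array([i ** -2.0 for i in range(1, d + 1)])   # instance-2 eigenspectrum
gamma = 1.0 / (alpha * lambdas.sum())                      # stepsize as in Section 5
w_star = 1.0 / np.arange(1, d + 1)                         # hypothetical optimal parameters

w = np.zeros(d)
w_sum = np.zeros(d)
for _ in range(N):
    w_sum += w                                  # average the iterates w_0, ..., w_{N-1}
    x = rng.normal(0.0, np.sqrt(lambdas))       # x ~ N(0, diag(lambdas))
    y = w_star @ x + sigma * rng.normal()
    w = w - gamma * (w @ x - y) * x             # one SGD step on the squared loss
w_bar = w_sum / N

# Excess risk of the averaged iterate: 1/2 (w_bar - w*)^T H (w_bar - w*).
excess = 0.5 * (w_bar - w_star) @ (lambdas * (w_bar - w_star))
initial = 0.5 * w_star @ (lambdas * w_star)     # excess risk of w_0 = 0
print(excess, initial)
```

The averaged iterate should land well below the risk of the zero initialization.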
Our results are displayed in Figure 2. In both plots, the group excess risk decreases (i.e., generalization improves) when $p$ increases, as ${\mathcal{D}}_{m}$ is sampled at a higher rate during training. This finding corroborates our commentary about fairness, privacy, robustness, and OOD generalization. Furthermore, the plots demonstrate that our group excess risk bound closely captures the true excess risk across various $p \in \left\lbrack {0,1}\right\rbrack$ , suggesting that our bound is tight (empirically).
## 6 Discussion and conclusion
In this paper, we characterize the inherent generalization of overparameterized linear regression with constant-stepsize SGD (with iterate averaging) to groups within and outside the population from which training instances are sampled. We do so by proving the excess risk bound for an arbitrary group in terms of the full eigenspectra of the data covariance matrices of the group and population. We additionally present a novel interpretation of the group excess risk bound through the lens of real-world challenges in practicing trustworthy machine learning. Finally, we empirically validate the tightness of our bound on simulated data.
This paper offers numerous promising future directions for research. We encourage proving a lower bound on the group excess risk to determine if our upper bound is tight (theoretically). We also suggest proving group excess risk bounds for tail averaging and last-iterate SGD with learning rate decay [1, 2]. It would further be interesting to extend this work to prove group excess risk bounds for logistic regression, 2-layer neural networks, and 1-layer graph convolutional networks [34].
## References

[1] Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, and Sham M. Kakade. Benign Overfitting of Constant-Stepsize SGD for Linear Regression. ArXiv, abs/2103.12692, 2021.

[2] Jingfeng Wu, Difan Zou, Vladimir Braverman, Quanquan Gu, and Sham M. Kakade. Last Iterate Risk Bounds of SGD with Decaying Stepsize for Overparameterized Linear Regression. ArXiv, abs/2110.06198, 2021.

[3] Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, Dean P. Foster, and Sham M. Kakade. The Benefits of Implicit Regularization from SGD in Least Squares Problems. ArXiv, abs/2108.04552, 2021.

[4] Christo Wilson, Avijit Ghosh, Shan Jiang, Alan Mislove, Lewis Baker, Janelle Szary, Kelly Trindel, and Frida Polli. Building and Auditing Fair Algorithms: A Case Study in Candidate Screening. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT '21, pages 666-677, New York, NY, USA, 2021. Association for Computing Machinery.

[5] Solon Barocas, Moritz Hardt, and Arvind Narayanan. Fairness and Machine Learning. fairmlbook.org, 2019. http://www.fairmlbook.org.

[6] Marcus A. Maloof. Learning When Data Sets are Imbalanced and When Costs are Unequal and Unknown. In ICML-2003 Workshop on Learning from Imbalanced Data Sets II, 2003.

[7] Thomas Oommen, Laurie G. Baise, and Richard M. Vogel. Sampling Bias and Class Imbalance in Maximum-likelihood Logistic Regression. Mathematical Geosciences, 43:99-120, 2011.

[8] Klas Leino, Matt Fredrikson, Emily Black, Shayak Sen, and Anupam Datta. Feature-Wise Bias Amplification. ArXiv, abs/1812.08999, 2019.

[9] Sahil Verma and Julia Rubin. Fairness Definitions Explained. In Proceedings of the International Workshop on Software Fairness, FairWare '18, pages 1-7, New York, NY, USA, 2018. Association for Computing Machinery.

[10] Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. Membership Inference Attacks Against Machine Learning Models. In 2017 IEEE Symposium on Security and Privacy (SP), pages 3-18, 2017.

[11] Baharan Mirzasoleiman, Kaidi Cao, and Jure Leskovec. Coresets for Robust Training of Neural Networks against Noisy Labels. ArXiv, abs/2011.07451, 2020.

[12] Scott Pesme and Nicolas Flammarion. Online Robust Regression via SGD on the $\ell_1$ loss. ArXiv, abs/2007.00399, 2020.

[13] Abdulkadir Canatar, Blake Bordelon, and Cengiz Pehlevan. Out-of-Distribution Generalization in Kernel Regression. In NeurIPS, 2021.

[14] Elan Rosenfeld, Pradeep Ravikumar, and Andrej Risteski. An online learning approach to interpolation and extrapolation in domain generalization. In Gustau Camps-Valls, Francisco J. R. Ruiz, and Isabel Valera, editors, Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, volume 151 of Proceedings of Machine Learning Research, pages 2641-2657. PMLR, 28-30 Mar 2022.

[15] Haibo He and Edwardo A. Garcia. Learning from Imbalanced Data. IEEE Transactions on Knowledge and Data Engineering, 21(9):1263-1284, 2009.

[16] Mateusz Buda, Atsuto Maki, and Maciej A. Mazurowski. A systematic study of the class imbalance problem in convolutional neural networks. Neural Networks, 106:249-259, 2018.

[17] Martín Arjovsky, Kamalika Chaudhuri, and David Lopez-Paz. Throwing away data improves worst-class error in imbalanced classification. ArXiv, abs/2205.11672, 2022.

[18] Aditya Krishna Menon, Ankit Singh Rawat, and Sanjiv Kumar. Overparameterisation and worst-case generalisation: friend or foe? In International Conference on Learning Representations, 2021.

[19] Badr Youbi Idrissi, Martín Arjovsky, Mohammad Pezeshki, and David Lopez-Paz. Simple data balancing achieves competitive worst-group-accuracy. In CLeaR, 2022.

[20] Shiori Sagawa, Aditi Raghunathan, Pang Wei Koh, and Percy Liang. An Investigation of Why Overparameterization Exacerbates Spurious Correlations. ArXiv, abs/2005.04345, 2020.

[21] Ganesh Ramachandra Kini, Orestis Paraskevas, Samet Oymak, and Christos Thrampoulidis. Label-Imbalanced and Group-Sensitive Classification under Overparameterization. In NeurIPS, 2021.

[22] Daniel Soudry, Elad Hoffer, Suriya Gunasekar, and Nathan Srebro. The Implicit Bias of Gradient Descent on Separable Data. ArXiv, abs/1710.10345, 2018.

[23] Stéphane d'Ascoli, Marylou Gabrié, Levent Sagun, and Giulio Biroli. On the interplay between data structure and loss function in classification problems. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 8506-8517. Curran Associates, Inc., 2021.

[24] Peiyan Li, Xu Liang, and Haochen Song. A Survey on Implicit Bias of Gradient Descent. In 2022 14th International Conference on Computer Research and Development (ICCRD), pages 108-114, 2022.

[25] Moritz Hardt, Eric Price, and Nathan Srebro. Equality of Opportunity in Supervised Learning. ArXiv, abs/1610.02413, 2016.

[26] Shiori Sagawa, Pang Wei Koh, Tatsunori B. Hashimoto, and Percy Liang. Distributionally Robust Neural Networks for Group Shifts: On the Importance of Regularization for Worst-Case Generalization. ArXiv, abs/1911.08731, 2019.

[27] Preethi Lahoti, Alex Beutel, Jilin Chen, Kang Lee, Flavien Prost, Nithum Thain, Xuezhi Wang, and Ed H. Chi. Fairness without Demographics through Adversarially Reweighted Learning. ArXiv, abs/2006.13114, 2020.

[28] Natalia L Martinez, Martin A Bertran, Afroditi Papadaki, Miguel Rodrigues, and Guillermo Sapiro. Blind Pareto Fairness and Subgroup Robustness. In Marina Meila and Tong Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 7492-7501. PMLR, 18-24 Jul 2021.

[29] Joram Soch, Thomas J. Faulkenberry, Kenneth Petrykowski, and Carsten Allefeld. StatProofBook/StatProofBook.github.io: The Book of Statistical Proofs, December 2020.

[30] Solon Barocas and Andrew D. Selbst. Big Data's Disparate Impact. California Law Review, 104:671, 2016.

[31] Nicolò Cesa-Bianchi, Shai Shalev-Shwartz, and Ohad Shamir. Efficient learning with partially observed attributes. In ICML, 2010.

[32] Xiao-Li Meng. Statistical paradises and paradoxes in big data (I): Law of large populations, big data paradox, and the 2016 US presidential election. The Annals of Applied Statistics, 12(2):685-726, 2018.

[33] Patricia Hill Collins. Black Feminist Thought in the Matrix of Domination. 2017.

[34] Jiaqi Ma, Junwei Deng, and Qiaozhu Mei. Subgroup Generalization and Fairness of Graph Neural Networks. In NeurIPS, 2021.

[35] Jean B. Lasserre. A trace inequality for matrix product. IEEE Trans. Autom. Control., 40:1500-1501, 1995.
## A Proof of the main result
### A.1 Bias-variance decomposition
Towards proving our main result, we first decompose the excess risk for group $m$ into bias and variance errors. We define the centered SGD iterate ${\eta }_{t} \mathrel{\text{:=}} {w}_{t} - {w}_{m}^{ * }$ ; similarly, ${\bar{\eta }}_{N} \mathrel{\text{:=}} \frac{1}{N}\mathop{\sum }\limits_{{t = 0}}^{{N - 1}}{\eta }_{t}$ .
[1] (Equation 4.2) shows the bias-variance decomposition of the iterate:
$$
\eta_t = \eta_t^{\text{bias}} + \eta_t^{\text{var}}
$$

$$
\eta_t^{\text{bias}} = (I - \gamma x_t x_t^T)\,\eta_{t-1}^{\text{bias}}, \quad \eta_0^{\text{bias}} = \eta_0
$$

$$
\eta_t^{\text{var}} = (I - \gamma x_t x_t^T)\,\eta_{t-1}^{\text{var}} + \gamma \xi_t x_t, \quad \eta_0^{\text{var}} = 0,
$$
where $\xi_t := y_t - \langle w_m^*, x_t \rangle$ is the inherent noise. [1] (Equations 4.4 and 4.5) then proves the recursive forms:
$$
B_t := \mathbb{E}_{(x, y) \sim \mathcal{D}}\left[\eta_t^{\text{bias}} \otimes \eta_t^{\text{bias}}\right] = (\mathcal{I} - \gamma \mathcal{T}) \circ B_{t-1}, \quad B_0 = \eta_0 \otimes \eta_0
$$

$$
C_t := \mathbb{E}_{(x, y) \sim \mathcal{D}}\left[\eta_t^{\text{var}} \otimes \eta_t^{\text{var}}\right] = (\mathcal{I} - \gamma \mathcal{T}) \circ C_{t-1}, \quad C_0 = 0
$$
We also define ${\bar{\eta }}_{N}^{\text{bias }} \mathrel{\text{:=}} \frac{1}{N}\mathop{\sum }\limits_{{t = 0}}^{{N - 1}}{\eta }_{t}^{\text{bias }}$ and ${\bar{\eta }}_{N}^{\text{var }} \mathrel{\text{:=}} \frac{1}{N}\mathop{\sum }\limits_{{t = 0}}^{{N - 1}}{\eta }_{t}^{\text{var }}$ , and see that ${\bar{\eta }}_{t} = {\bar{\eta }}_{t}^{\text{bias }} + {\bar{\eta }}_{t}^{\text{var }}$ .
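These recursions can be sanity-checked numerically: by linearity, running the bias (noise-free) and variance (noise-only) recursions on the same sample path must reproduce the centered iterate exactly. A sketch of our own with made-up Gaussian data:

```python
import numpy as np

rng = np.random.default_rng(2)

d, N, gamma, sigma = 10, 100, 0.05, 0.1
w_star = rng.normal(size=d)                 # hypothetical group optimum w_m^*
eta = -w_star.copy()                        # eta_0 = w_0 - w_m^* with w_0 = 0
eta_bias = eta.copy()                       # eta_0^bias = eta_0
eta_var = np.zeros(d)                       # eta_0^var = 0

for _ in range(N):
    x = rng.normal(size=d)
    xi = sigma * rng.normal()               # inherent noise xi_t
    contract = lambda v: v - gamma * (x @ v) * x   # v -> (I - gamma x x^T) v
    eta = contract(eta) + gamma * xi * x    # full centered-iterate recursion
    eta_bias = contract(eta_bias)           # noise-free bias recursion
    eta_var = contract(eta_var) + gamma * xi * x   # noise-driven variance recursion

gap = np.max(np.abs(eta - (eta_bias + eta_var)))
print(gap)
```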
Without loss of generality, we present a bias-variance decomposition of the excess risk for group $m$ :
$$
\mathcal{E}_m = \mathbb{E}_{\mathcal{D}}\left[L_{\mathcal{D}_m}(\bar{w}_N) - L_{\mathcal{D}_m}(w_m^*)\right]
$$

$$
= \mathbb{E}_{\mathcal{D}}\left[\frac{1}{2}\mathbb{E}_{(x, y) \sim \mathcal{D}_m}\left(y - \bar{w}_N^T x\right)^2 - \frac{1}{2}\mathbb{E}_{(x, y) \sim \mathcal{D}_m}\left(y - w_m^{*T} x\right)^2\right]
$$

$$
= \frac{1}{2}\mathbb{E}_{\mathcal{D}}\left[\mathbb{E}_{(x, y) \sim \mathcal{D}_m}\left((w_m^* - \bar{w}_N)^T x + \xi\right)^2 - \mathbb{E}_{(x, y) \sim \mathcal{D}_m}\xi^2\right]
$$

$$
= \frac{1}{2}\mathbb{E}_{\mathcal{D}}\left[\mathbb{E}_{(x, y) \sim \mathcal{D}_m}\left[(\bar{w}_N - w_m^*)^T x x^T (\bar{w}_N - w_m^*) - 2(\bar{w}_N - w_m^*)^T (\xi x)\right]\right]
$$

$$
= \frac{1}{2}\left\langle H_m, \mathbb{E}_{\mathcal{D}}\left[\bar{\eta}_N \otimes \bar{\eta}_N\right]\right\rangle
$$
Then, by Lemma B.2 from [1] and Young's inequality, $\mathbb{E}_{\mathcal{D}}\left[L_{\mathcal{D}_m}(\bar{w}_N) - L_{\mathcal{D}_m}(w_m^*)\right] \leq \left(\sqrt{\text{bias}} + \sqrt{\text{var}}\right)^2 \leq 2 \cdot \text{bias} + 2 \cdot \text{var}$, where $\text{bias} := \frac{1}{2}\left\langle H_m, \mathbb{E}_{\mathcal{D}}\left[\bar{\eta}_N^{\text{bias}} \otimes \bar{\eta}_N^{\text{bias}}\right]\right\rangle$ and $\text{var} := \frac{1}{2}\left\langle H_m, \mathbb{E}_{\mathcal{D}}\left[\bar{\eta}_N^{\text{var}} \otimes \bar{\eta}_N^{\text{var}}\right]\right\rangle$. The bias and var errors provide a decomposition of a bound for $\mathcal{E}_m$.
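The key identity behind this decomposition, $L_{\mathcal{D}_m}(w) - L_{\mathcal{D}_m}(w_m^*) = \frac{1}{2}(w - w_m^*)^T H_m (w - w_m^*)$, is easy to confirm by Monte Carlo (our own sketch; all constants below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)

d, sigma, n = 4, 0.1, 400_000
mu = np.array([1.0, 0.5, 0.0, -0.5])            # hypothetical group mean
Sigma = np.diag([1.0, 0.5, 0.25, 0.125])        # hypothetical group covariance
H_m = Sigma + np.outer(mu, mu)                  # second-moment matrix E[x x^T]

w_star = np.array([1.0, 0.5, 0.33, 0.25])       # group-optimal parameters w_m^*
w = w_star + np.array([0.2, -0.1, 0.05, 0.0])   # an arbitrary competitor

x = rng.multivariate_normal(mu, Sigma, n)
y = x @ w_star + sigma * rng.normal(size=n)

# Monte Carlo estimate of L(w) - L(w_m^*) with L(w) = 1/2 E (y - w^T x)^2 ...
mc = 0.5 * np.mean((y - x @ w) ** 2) - 0.5 * np.mean((y - x @ w_star) ** 2)
# ... against the closed form 1/2 (w - w_m^*)^T H_m (w - w_m^*).
closed = 0.5 * (w - w_star) @ H_m @ (w - w_star)
print(mc, closed)
```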
### A.2 Bounding the bias error
Now that we have decomposed a bound for ${\mathcal{E}}_{m}$ into bias and variance errors, we will separately bound these errors in terms of the full eigenspectra of the data covariance matrices of group $m$ and the population, and then combine these bounds to achieve the desired bound for ${\mathcal{E}}_{m}$ . In Theorem 2, we focus on the bias error.
Theorem 2 We can bound the bias error as:
$$
\text{bias} \leq \frac{\alpha \operatorname{tr}(B_{0,N})}{N\gamma\left(1 - \gamma\alpha \operatorname{tr}(H)\right)}\left(\frac{1}{N}\sum_{i \leq k^*}\frac{\lambda_i(H_m)}{\lambda_i(H)} + N\gamma^2 \sum_{i > k^*}\lambda_i(H_m)\lambda_i(H)\right)
$$

$$
+ \left\{ \begin{array}{ll} \frac{\lambda_1(H_m)\left\| w_0 - w_m^* \right\|_2^2}{\gamma^2 N^2 \lambda_1^2(H)}, & \lambda_1(H) \geq \frac{1}{\gamma N} \\ \lambda_1(H_m)\left\| w_0 - w_m^* \right\|_2^2, & \text{otherwise} \end{array}\right.
$$
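The parenthesized spectral factor in Theorem 2 (which reappears unchanged in the variance bound of Theorem 3) is straightforward to evaluate for a given pair of eigenspectra. A sketch of our own, where the group spectrum is a hypothetical rescaling of the population's:

```python
import numpy as np

def spectral_factor(lam_m, lam, gamma, N):
    """(1/N) sum_{i <= k*} lam_m[i]/lam[i] + N gamma^2 sum_{i > k*} lam_m[i] lam[i],
    where k* = max{k : lam[k] >= 1/(gamma N)} and lam is sorted non-increasingly."""
    k_star = int(np.sum(lam >= 1.0 / (gamma * N)))
    head = np.sum(lam_m[:k_star] / lam[:k_star]) / N
    tail = N * gamma ** 2 * np.sum(lam_m[k_star:] * lam[k_star:])
    return head + tail

alpha, N, d = 6, 200, 2000
i = np.arange(1, d + 1)
lam = i ** -2.0                             # population spectrum (problem instance 2)
gamma = 1.0 / (alpha * lam.sum())

f1 = spectral_factor(lam, lam, gamma, N)        # group spectrum equal to the population's
f2 = spectral_factor(2.0 * lam, lam, gamma, N)  # hypothetical group spectrum, doubled
print(f1, f2)
```

Because the factor is linear in $\{\lambda_i(H_m)\}$, uniformly doubling the group spectrum doubles the factor.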
Proof of Theorem 2 [1] (Lemma B.3) shows that:
$$
\mathbb{E}_{\mathcal{D}}\left[\bar{\eta}_N^{\text{bias}} \otimes \bar{\eta}_N^{\text{bias}}\right] \preccurlyeq \frac{1}{N^2}\sum_{t=0}^{N-1}\sum_{k=t}^{N-1}\left((I - \gamma H)^{k-t}\,\mathbb{E}\left[\eta_t^{\text{bias}} \otimes \eta_t^{\text{bias}}\right] + \mathbb{E}\left[\eta_t^{\text{bias}} \otimes \eta_t^{\text{bias}}\right](I - \gamma H)^{k-t}\right)
$$
Because $H$ and $H_m$ are PSD and symmetric, and $H$ commutes with $(I - \gamma H)^{k-t}$:
$$
\text{bias} = \frac{1}{2}\left\langle H_m, \mathbb{E}\left[\bar{\eta}_N^{\text{bias}} \otimes \bar{\eta}_N^{\text{bias}}\right]\right\rangle
$$

$$
\leq \frac{1}{2N^2}\sum_{t=0}^{N-1}\sum_{k=t}^{N-1}\left\langle H_m,\; (I - \gamma H)^{k-t}\,\mathbb{E}\left[\eta_t^{\text{bias}} \otimes \eta_t^{\text{bias}}\right] + \mathbb{E}\left[\eta_t^{\text{bias}} \otimes \eta_t^{\text{bias}}\right](I - \gamma H)^{k-t}\right\rangle
$$

$$
= \frac{1}{2N^2}\sum_{t=0}^{N-1}\sum_{k=t}^{N-1}\left\langle H_m (I - \gamma H)^{k-t} + (I - \gamma H)^{k-t} H_m,\; \mathbb{E}\left[\eta_t^{\text{bias}} \otimes \eta_t^{\text{bias}}\right]\right\rangle
$$

$$
= \frac{1}{2\gamma N^2}\sum_{t=0}^{N-1}\left\langle H_m H^{-1}\left(I - (I - \gamma H)^{N-t}\right) + \left(I - (I - \gamma H)^{N-t}\right)H^{-1} H_m,\; B_t\right\rangle
$$

$$
\leq \frac{1}{2\gamma N^2}\left\langle H_m H^{-1}\left(I - (I - \gamma H)^N\right) + \left(I - (I - \gamma H)^N\right)H^{-1} H_m,\; \sum_{t=0}^{N-1} B_t\right\rangle
$$
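The step that collapses the inner sum over $k$ uses the matrix geometric series $\sum_{j=0}^{M-1}(I - \gamma H)^j = \frac{1}{\gamma}H^{-1}\left(I - (I - \gamma H)^M\right)$, which holds whenever $H$ is invertible. A quick numerical spot-check of our own with a random positive-definite $H$:

```python
import numpy as np

rng = np.random.default_rng(4)

d, M = 6, 25
A = rng.normal(size=(d, d))
H = A @ A.T + 0.1 * np.eye(d)               # random symmetric positive-definite matrix
gamma = 0.9 / np.linalg.eigvalsh(H).max()   # any nonzero gamma works; this keeps T small

T = np.eye(d) - gamma * H
lhs = sum(np.linalg.matrix_power(T, j) for j in range(M))
rhs = np.linalg.inv(H) @ (np.eye(d) - np.linalg.matrix_power(T, M)) / gamma
print(np.max(np.abs(lhs - rhs)))
```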
[1] (Lemma B.10) shows that:
$$
\sum_{t=0}^{N-1} B_t \preccurlyeq \sum_{k=0}^{N-1}(I - \gamma H)^k\left(\frac{\gamma\alpha \operatorname{tr}(B_{0,N})}{1 - \gamma\alpha \operatorname{tr}(H)} \cdot H + B_0\right)(I - \gamma H)^k,
$$
where $B_{0,N} = B_0 - (I - \gamma H)^N B_0 (I - \gamma H)^N$. Furthermore, [1] (Lemma B.11) shows that $\operatorname{tr}(B_{0,N}) \leq 2\left(\left\| w_0 - w_m^* \right\|_{I_{0:k^*}}^2 + N\gamma \left\| w_0 - w_m^* \right\|_{H_{k^*:\infty}}^2\right)$. Therefore:
$$
\text{bias} \leq \frac{1}{2\gamma N^2}\sum_{k=0}^{N-1}\left\langle H_m H^{-1}\left(I - (I - \gamma H)^N\right) + \left(I - (I - \gamma H)^N\right)H^{-1} H_m,\; (I - \gamma H)^k\left(\frac{\gamma\alpha \operatorname{tr}(B_{0,N})}{1 - \gamma\alpha \operatorname{tr}(H)} \cdot H + B_0\right)(I - \gamma H)^k\right\rangle
$$

$$
\leq \frac{1}{2\gamma N^2}\sum_{k=0}^{N-1}\left\langle (I - \gamma H)^k - (I - \gamma H)^{N+k},\; \frac{\gamma\alpha \operatorname{tr}(B_{0,N})}{1 - \gamma\alpha \operatorname{tr}(H)} \cdot H H_m H^{-1} + B_0 H_m H^{-1} + H^{-1} H_m H \cdot \frac{\gamma\alpha \operatorname{tr}(B_{0,N})}{1 - \gamma\alpha \operatorname{tr}(H)} + H^{-1} H_m B_0\right\rangle,
$$
where we use that $(I - \gamma H)^k \preccurlyeq I$. We now define the following terms:
$$
I_1 = \frac{\alpha \operatorname{tr}(B_{0,N})}{2N^2\left(1 - \gamma\alpha \operatorname{tr}(H)\right)}\sum_{k=0}^{N-1}\left\langle (I - \gamma H)^k - (I - \gamma H)^{N+k},\; H H_m H^{-1}\right\rangle
$$

$$
I_2 = \frac{1}{2\gamma N^2}\sum_{k=0}^{N-1}\left\langle (I - \gamma H)^k - (I - \gamma H)^{N+k},\; B_0 H_m H^{-1}\right\rangle
$$
We first focus on bounding $I_1$. If two matrices $A$ and $B$ are symmetric and PSD, then $\operatorname{tr}(AB) \leq \sum_i \lambda_i(A)\lambda_i(B)$, where $\{\lambda_i(A)\}_{i=1}^{\infty}$ and $\{\lambda_i(B)\}_{i=1}^{\infty}$ are the eigenvalues of $A$ and $B$, respectively [35]. Thus:
$$
I_1 = \frac{\alpha \operatorname{tr}(B_{0,N})}{2\gamma N^2\left(1 - \gamma\alpha \operatorname{tr}(H)\right)}\left\langle \left(I - (I - \gamma H)^N\right)^2 H^{-1},\; H H_m H^{-1}\right\rangle
$$

$$
\leq \frac{\alpha \operatorname{tr}(B_{0,N})}{2\gamma N^2\left(1 - \gamma\alpha \operatorname{tr}(H)\right)} \cdot \sum_i \frac{\lambda_i(H_m)\left(1 - (1 - \gamma\lambda_i(H))^N\right)^2}{\lambda_i(H)}
$$

$$
\leq \frac{\alpha \operatorname{tr}(B_{0,N})}{2\gamma N^2\left(1 - \gamma\alpha \operatorname{tr}(H)\right)} \cdot \sum_i \frac{\lambda_i(H_m)\min\left\{1, \gamma^2 N^2 \lambda_i^2(H)\right\}}{\lambda_i(H)}
$$

$$
= \frac{\alpha \operatorname{tr}(B_{0,N})}{2N\gamma\left(1 - \gamma\alpha \operatorname{tr}(H)\right)}\left(\frac{1}{N}\sum_{i \leq k^*}\frac{\lambda_i(H_m)}{\lambda_i(H)} + N\gamma^2 \sum_{i > k^*}\lambda_i(H_m)\lambda_i(H)\right),
$$
where ${k}^{ * } = \max \left\{ {k : {\lambda }_{k}\left( H\right) \geq \frac{1}{\gamma N}}\right\}$ . Because $\operatorname{tr}\left( H\right)$ is finite, $\mathop{\sum }\limits_{{i > {k}^{ * }}}{\lambda }_{i}\left( H\right)$ converges. Furthermore, ${\left\{ {\lambda }_{i}\left( {H}_{m}\right) \right\} }_{i \geq {k}^{ * }}$ decreases monotonically and is bounded by $\left\lbrack {0,{\lambda }_{{k}^{ * }}\left( {H}_{m}\right) }\right\rbrack$ . Therefore, by Abel’s Lemma, $\mathop{\sum }\limits_{{i > {k}^{ * }}}{\lambda }_{i}\left( {H}_{m}\right) {\lambda }_{i}\left( H\right)$ converges. Furthermore, we see that the bound for ${I}_{1}$ reduces to that in [1] when $\forall i,{\lambda }_{i}\left( {H}_{m}\right) = {\lambda }_{i}\left( H\right)$ .
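The eigenvalue trace inequality from [35] invoked above can likewise be spot-checked numerically (our own sketch with random symmetric PSD matrices):

```python
import numpy as np

rng = np.random.default_rng(5)

def random_psd(d):
    a = rng.normal(size=(d, d))
    return a @ a.T                           # symmetric PSD by construction

checked = 0
for _ in range(100):
    A, B = random_psd(5), random_psd(5)
    # eigvalsh returns eigenvalues in ascending order; reverse to non-increasing.
    lam_a = np.linalg.eigvalsh(A)[::-1]
    lam_b = np.linalg.eigvalsh(B)[::-1]
    bound = np.sum(lam_a * lam_b)
    assert np.trace(A @ B) <= bound + 1e-8 * max(1.0, bound)
    checked += 1
print(checked)
```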
Now, we bound $I_2$. Because $B_0 = (w_0 - w_m^*)(w_0 - w_m^*)^T$, the largest and only non-zero eigenvalue of $B_0$ is $\lambda_1(B_0) = \left\| w_0 - w_m^* \right\|_2^2$. Therefore:
$$
I_2 \leq \frac{1}{2\gamma^2 N^2}\left\langle \left(I - (I - \gamma H)^N\right)^2 H^{-1},\; B_0 H_m H^{-1}\right\rangle
$$

$$
\leq \frac{1}{2\gamma^2 N^2} \cdot \frac{\lambda_1(H_m)\left\| w_0 - w_m^* \right\|_2^2}{\lambda_1^2(H)} \cdot \left\{ \begin{array}{ll} 1, & \lambda_1(H) \geq \frac{1}{\gamma N} \\ \gamma^2 N^2 \lambda_1^2(H), & \text{otherwise} \end{array}\right.
$$

$$
= \left\{ \begin{array}{ll} \frac{\lambda_1(H_m)\left\| w_0 - w_m^* \right\|_2^2}{2\gamma^2 N^2 \lambda_1^2(H)}, & \lambda_1(H) \geq \frac{1}{\gamma N} \\ \frac{1}{2}\lambda_1(H_m)\left\| w_0 - w_m^* \right\|_2^2, & \text{otherwise} \end{array}\right.
$$
Now that we have bounds for $I_1$ and $I_2$:
$$
\text{bias} \leq 2I_1 + 2I_2
$$

$$
\leq \frac{\alpha \operatorname{tr}(B_{0,N})}{N\gamma\left(1 - \gamma\alpha \operatorname{tr}(H)\right)}\left(\frac{1}{N}\sum_{i \leq k^*}\frac{\lambda_i(H_m)}{\lambda_i(H)} + N\gamma^2 \sum_{i > k^*}\lambda_i(H_m)\lambda_i(H)\right)
$$

$$
+ \left\{ \begin{array}{ll} \frac{\lambda_1(H_m)\left\| w_0 - w_m^* \right\|_2^2}{\gamma^2 N^2 \lambda_1^2(H)}, & \lambda_1(H) \geq \frac{1}{\gamma N} \\ \lambda_1(H_m)\left\| w_0 - w_m^* \right\|_2^2, & \text{otherwise} \end{array}\right.
$$
Thus, we have successfully bounded the bias error in terms of the full eigenspectra of the data covariance matrices of group $m$ and the population.
### A.3 Bounding the variance error
Continuing toward the proof of Theorem 1, we now bound the variance error.
Theorem 3 We can bound the variance error as:
$$
\operatorname{var} \leq \frac{\sigma^2}{1 - \gamma\alpha \operatorname{tr}(H)}\left(\frac{1}{N}\sum_{i \leq k^*}\frac{\lambda_i(H_m)}{\lambda_i(H)} + N\gamma^2 \sum_{i > k^*}\lambda_i(H_m)\lambda_i(H)\right)
$$
Proof of Theorem 3 Similarly to the bias error:
$$
\operatorname{var} \leq \frac{1}{2\gamma N^2}\sum_{t=0}^{N-1}\left\langle H_m H^{-1}\left(I - (I - \gamma H)^{N-t}\right) + \left(I - (I - \gamma H)^{N-t}\right)H^{-1} H_m,\; C_t\right\rangle
$$
[1] (Lemma B.5) shows ${C}_{t} \preccurlyeq \frac{\gamma {\sigma }^{2}}{1 - {\gamma \alpha }\operatorname{tr}\left( H\right) }\left( {I - {\left( I - \gamma H\right) }^{t}}\right)$ . Therefore:
$$
\operatorname{var} \leq \frac{\sigma^2}{2N^2\left(1 - \gamma\alpha \operatorname{tr}(H)\right)}\sum_{t=0}^{N-1}\left\langle H_m H^{-1}\left(I - (I - \gamma H)^{N-t}\right) + \left(I - (I - \gamma H)^{N-t}\right)H^{-1} H_m,\; I - (I - \gamma H)^t\right\rangle
$$

$$
= \frac{\sigma^2}{2N^2\left(1 - \gamma\alpha \operatorname{tr}(H)\right)}\left\langle \sum_{t=0}^{N-1}\left(I - (I - \gamma H)^{N-t}\right)\left(I - (I - \gamma H)^t\right),\; H_m H^{-1} + H^{-1} H_m\right\rangle
$$

$$
\leq \frac{\sigma^2}{2N\left(1 - \gamma\alpha \operatorname{tr}(H)\right)}\left\langle \left(I - (I - \gamma H)^N\right)^2,\; H_m H^{-1} + H^{-1} H_m\right\rangle
$$

$$
\leq \frac{\sigma^2}{1 - \gamma\alpha \operatorname{tr}(H)}\left(\frac{1}{N}\sum_{i \leq k^*}\frac{\lambda_i(H_m)}{\lambda_i(H)} + N\gamma^2 \sum_{i > k^*}\lambda_i(H_m)\lambda_i(H)\right)
$$
Having appropriately bounded both the bias and variance errors, we have proved Theorem 1.
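The step that pulls the sum over $t$ out of the inner product rests, per eigenvalue $\lambda$ of $H$, on the scalar inequality $\sum_{t=0}^{N-1}(1-a^{N-t})(1-a^t) \leq N(1-a^{N})^{2}$ with $a = 1-\gamma\lambda \in [0,1]$ (each factor is at most $1-a^{N}$). As an illustrative sanity check (not part of the manuscript; all numeric values are placeholders), this can be verified numerically:

```python
import numpy as np

def lhs_rhs(gamma, lam, N):
    """Scalar (per-eigenvalue) version of the summation step:
    sum_{t=0}^{N-1} (1 - a^{N-t}) (1 - a^t)  vs.  N (1 - a^N)^2,
    where a = 1 - gamma * lam is the per-step contraction factor."""
    a = 1.0 - gamma * lam
    t = np.arange(N)
    lhs = np.sum((1.0 - a ** (N - t)) * (1.0 - a ** t))
    rhs = N * (1.0 - a ** N) ** 2
    return lhs, rhs

# For 0 <= a <= 1, 1 - a^{N-t} <= 1 - a^N and 1 - a^t <= 1 - a^N,
# so every summand is at most (1 - a^N)^2 and the sum is at most N (1 - a^N)^2.
for lam in [0.01, 0.1, 1.0]:
    lhs, rhs = lhs_rhs(gamma=0.5, lam=lam, N=200)
    assert lhs <= rhs + 1e-12
```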
|
NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/TRpJAAK3o0X/Initial_manuscript_tex/Initial_manuscript.tex
ADDED

@@ -0,0 +1,161 @@
§ GROUP EXCESS RISK BOUND OF OVERPARAMETERIZED LINEAR REGRESSION WITH CONSTANT-STEPSIZE SGD

Anonymous Author(s)

Affiliation

Address

email

§ ABSTRACT

It has been observed that machine learning models trained using stochastic gradient descent (SGD) exhibit poor generalization to certain groups within and outside the population from which training instances are sampled. This has serious ramifications for the fairness, privacy, robustness, and out-of-distribution (OOD) generalization of machine learning. Hence, we theoretically characterize the inherent generalization of SGD-learned overparameterized linear regression to intra- and extra-population groups. We do this by proving an excess risk bound for an arbitrary group in terms of the full eigenspectra of the data covariance matrices of the group and population. We additionally provide a novel interpretation of the bound in terms of how the group and population data distributions differ and the effective dimension of SGD, as well as connect these factors to real-world challenges in practicing trustworthy machine learning. We further empirically validate the tightness of our bound on simulated data.
§ 1 INTRODUCTION

Much recent work has sought to better understand the inductive biases of stochastic gradient descent (SGD), such as benign overfitting and implicit regularization in overparameterized settings [1, 2, 3]. However, this line of literature has overwhelmingly focused on bounding the excess risk of an SGD-learned model over the entire population $\mathcal{P}$ from which training instances are sampled, and has not investigated how SGD (e.g., its assumption that training data are IID) yields poor model generalization to intra-population groups ${\mathcal{G}}_{\text{ intra }}$ (i.e., subsets of the population) and extra-population groups ${\mathcal{G}}_{\text{ extra }}$ (i.e., instances that fall outside the population). We illustrate these concepts in Figure 1.

Establishing the theory behind this phenomenon is critical, as it provides provable guarantees about the trustworthiness (e.g., fairness, privacy, robustness, and out-of-distribution generalization) of SGD-learned models. As an example, consider an automated candidate screening system that a company trains on the data (e.g., number of years previously worked, relevant skills) of a sample from the population $\mathcal{P}$ of past job applicants [4]. In the context of fairness, many works have observed that for models trained using SGD, group-imbalanced data distributions translate to generalization disparities [5, 6, 7, 8]. Hence, the candidate screening system may generalize poorly for minoritized groups ${\mathcal{G}}_{\text{ intra }}$ (i.e., not satisfy equal opportunity [9]), which can yield hiring discrimination. This phenomenon also has implications for privacy, as adversaries, against the desire of a job applicant, can infer whether the applicant's data were used to train the candidate screening system, based on the system's loss on the applicant [10]. When considering robustness, we may be interested in how well the candidate screening system generalizes to a target group ${\mathcal{G}}_{\text{ intra }}$ of applicants when $\mathcal{P}$ is noisy or corrupted [11, 12]. Finally, in the context of out-of-distribution generalization, models deployed in the real world often have to deal with data distributions that differ from the training distribution [13]; for instance, the candidate screening system, if trained prior to a recession, may generalize poorly to stellar job applicants ${\mathcal{G}}_{\text{ extra }}$ who were laid off during the recession for reasons beyond their control. Therefore, it is paramount to understand how SGD-learned models generalize to extra-population groups [14], especially in terms of how the properties of the data distributions for these groups differ from those of the training distribution.
Figure 1: Euler diagram of a population $\mathcal{P}$ , an intra-population group ${\mathcal{G}}_{\text{ intra }}$ , and an extra-population group ${\mathcal{G}}_{\text{ extra }}$ , as well as a visual depiction of their respective possible data distributions ${\mathcal{D}}_{\mathcal{P}},{\mathcal{D}}_{{\mathcal{G}}_{\text{ intra }}},{\mathcal{D}}_{{\mathcal{G}}_{\text{ extra }}}$ for a feature $x$ (which are noticeably distinct).

Towards bolstering the theoretical foundations of trustworthy machine learning, we characterize the inherent generalization of constant-stepsize SGD (with iterate averaging) to groups within and outside the population from which training instances are sampled, for an arguably simple setting: overparameterized linear regression. We prove an excess risk bound for an arbitrary group which can be decomposed into bias and variance factors that depend on the full eigenspectra of the data covariance matrices of the group and population. We then re-express the excess risk bound in terms of: 1) how the group and population data distributions diverge, 2) how the distributions' feature variances differ, and 3) the effective dimension of SGD. We connect these three components to real-world challenges in practicing trustworthy machine learning, such as limited features and sample size disparities for minoritized groups. Finally, we empirically validate the tightness of our bound on simulated data. As a whole, we set the stage for future research to extend our results to deep neural networks and models trained with other variants of gradient descent (e.g., minibatch GD, SGD with learning rate scheduling).
§ 2 RELATED WORK

Imbalanced learning The tendency of machine learning models to overpredict the majority class in the presence of class-imbalanced training samples [6, 15, 7, 16, 8, 17] and underperform for minoritized groups [18, 19] has been extensively studied, both empirically and theoretically. Some papers have theoretically investigated worst-case group generalization in the overparameterized regime [20, 21]. However, these works have not examined how SGD in particular (e.g., its assumption that training data are IID) causes poor generalization for a minoritized group, even in the arguably simple case of overparameterized linear regression. We do so in terms of the data covariance matrices of the group and population, rather than representation dimension or information, which affords greater interpretability.

Inductive biases of SGD Theoretically analyzing the inductive biases of stochastic gradient descent (e.g., implicit regularization, benign overfitting), especially in the overparameterized regime, is a nascent area of research [2, 1, 3, 22, 23, 24] and strengthens our understanding of how deep learning works. In this paper, we make novel contributions to learning theory by analyzing constant-stepsize SGD with iterate averaging when training instances are sampled from a different distribution than the evaluation distribution. We do so by extending the analysis of [1], which only analyzes the excess risk of SGD-learned linear regression over the entire population from which training instances are sampled, and not over a particular group.

Trustworthy machine learning Numerous works in fair machine learning have explored the implications of generalization disparities among groups (known as equal opportunity [9]) for model-induced harms faced by minoritized groups [25, 5]. For example, in the case of automated loan approval, if white men enjoy better model generalization, their loan applications will be less likely to be incorrectly rejected compared to women and people of color. Prior research has also theoretically and empirically studied worst-case group generalization in the context of fairness without demographics and distributionally robust optimization [26, 27, 28]. In this work, we prove a group excess risk bound for overparameterized linear regression and contextualize the bound in terms of real-world challenges in practicing trustworthy machine learning.
§ 3 PROBLEM SETUP

The linear regression problem of interest is $\mathop{\min }\limits_{w}{L}_{\mathcal{D}}\left( w\right)$ , where ${L}_{\mathcal{D}}\left( w\right) = \frac{1}{2}{\mathbb{E}}_{\left( {x,y}\right) \sim \mathcal{D}}\left\lbrack {\left( y-\langle w,x\rangle \right) }^{2}\right\rbrack$ . In this equation, $x \in \mathcal{H}$ is the feature vector (we assume $\dim \left( \mathcal{H}\right) = \infty$ to model the overparameterized regime), $y \in \mathbb{R}$ is the response, $w \in \mathcal{H}$ is the weight vector to be optimized, and $\mathcal{D}$ is the arbitrary population distribution over $x$ and $y$ . Furthermore, suppose an arbitrary group $m$ (within or outside the population) has the arbitrary distribution ${\mathcal{D}}_{m}$ over $x$ and $y$ . Now, assume the optimal parameters ${w}_{m}^{ * }$ for group $m$ satisfy the first-order optimality condition $\nabla {L}_{{\mathcal{D}}_{m}}\left( {w}_{m}^{ * }\right) = {\mathbb{E}}_{\left( {x,y}\right) \sim {\mathcal{D}}_{m}}\left\lbrack {\left( {y - \left\langle {{w}_{m}^{ * },x}\right\rangle }\right) x}\right\rbrack = 0.$

In this paper, we consider constant-stepsize SGD with iterate averaging; at each iteration $t$ , a training instance $\left( {{x}_{t},{y}_{t}}\right) \sim \mathcal{D}$ is independently observed and the weight is updated as follows:

$$
{w}_{t} \mathrel{\text{ := }} {w}_{t - 1} - \gamma \left( {\left\langle {{w}_{t - 1},{x}_{t}}\right\rangle - {y}_{t}}\right) {x}_{t},\quad t = 1,\ldots ,N
$$

where $\gamma > 0$ is a constant stepsize, $N$ is the number of samples observed, and the weights are initialized as ${w}_{0} \in \mathcal{H}$ . Following [1], the final output is the average of the iterates ${\bar{w}}_{N} \mathrel{\text{ := }} \frac{1}{N}\mathop{\sum }\limits_{{t = 0}}^{{N - 1}}{w}_{t}$ .
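The update rule and iterate averaging above can be sketched in a few lines. The following is a minimal finite-dimensional simulation (not the authors' implementation); the dimension, stepsize, and data distribution are placeholder choices:

```python
import numpy as np

rng = np.random.default_rng(0)
d, N, gamma = 50, 1000, 0.1            # placeholder dimension, samples, stepsize
w_star = 1.0 / np.arange(1, d + 1)     # placeholder ground-truth weights

w = np.zeros(d)                         # w_0 = 0
iterates = [w.copy()]                   # stores w_0, ..., w_{N-1}
for t in range(1, N + 1):
    x = rng.normal(size=d) / np.sqrt(d)       # one fresh sample per step
    y = x @ w_star + 0.1 * rng.normal()       # noisy linear response
    w = w - gamma * (w @ x - y) * x           # constant-stepsize SGD update
    if t < N:
        iterates.append(w.copy())

w_bar = np.mean(iterates, axis=0)       # iterate average \bar{w}_N
# The averaged iterate should be closer to w_star than the zero initialization.
assert np.linalg.norm(w_bar - w_star) < np.linalg.norm(w_star)
```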
§ 4 MAIN RESULT

We now introduce relevant notation and our assumptions (which are similar to those in [1]), as well as state our main result.

Assumption 1 (Regularity conditions) For each group $m$ , assume ${H}_{m} \mathrel{\text{ := }} {\mathbb{E}}_{\left( {x,y}\right) \sim {\mathcal{D}}_{m}}\left\lbrack {x{x}^{T}}\right\rbrack$ (i.e., the data covariance matrix ${}^{1}$ of ${\mathcal{D}}_{m}$ ) and ${\mathbb{E}}_{\left( {x,y}\right) \sim {\mathcal{D}}_{m}}\left\lbrack {y}^{2}\right\rbrack$ exist and are finite. Furthermore, assume that $\operatorname{tr}\left( {H}_{m}\right)$ is finite (i.e., ${H}_{m}$ is trace-class) and ${H}_{m}$ is symmetric positive semi-definite (PSD). Let ${\left\{ {\lambda }_{i}\left( {H}_{m}\right) \right\} }_{i = 1}^{\infty }$ be the eigenvalues of ${H}_{m}$ sorted in decreasing order.

Additionally, denote the population data covariance matrix $H \mathrel{\text{ := }} {\mathbb{E}}_{\left( {x,y}\right) \sim \mathcal{D}}\left\lbrack {x{x}^{T}}\right\rbrack$ . Let ${\left\{ {\lambda }_{i}\left( H\right) \right\} }_{i = 1}^{\infty }$ be the eigenvalues of $H$ sorted in decreasing order. Furthermore, suppose the eigendecomposition of $H$ is $H = \mathop{\sum }\limits_{i}{\lambda }_{i}{v}_{i}{v}_{i}^{T}$ ; then, ${H}_{k : \infty } \mathrel{\text{ := }} \mathop{\sum }\limits_{{i > k}}{\lambda }_{i}{v}_{i}{v}_{i}^{T}$ . Similarly, the head of the identity matrix ${I}_{0 : k} \mathrel{\text{ := }} \mathop{\sum }\limits_{{i = 1}}^{k}{v}_{i}{v}_{i}^{T}.$

We also define the following linear operators (which we assume to exist and be finite): $\mathcal{I} = I \otimes I$ , $\mathcal{M} = {\mathbb{E}}_{\left( {x,y}\right) \sim \mathcal{D}}\left\lbrack {x \otimes x \otimes x \otimes x}\right\rbrack$ (where $\otimes$ is the tensor product), and $\mathcal{T} = H \otimes I + I \otimes H - \gamma \mathcal{M}$ . All the results about these operators from Lemma 4.1 in [1] hold.

Assumption 2 (Fourth moment conditions) Assume that there exists a positive constant $\alpha > 0$ such that, for any PSD matrix $A$ , it holds that ${\mathbb{E}}_{\left( {x,y}\right) \sim \mathcal{D}}\left\lbrack {x{x}^{T}{Ax}{x}^{T}}\right\rbrack \preccurlyeq \alpha \operatorname{tr}\left( {HA}\right) H$ . This assumption is satisfied for Gaussian distributions by $\alpha = 3$ , and it is further implied if the distribution over ${H}^{-\frac{1}{2}}x$ has sub-Gaussian tails (see Lemma A.1 in [1]).

${}^{1}$ We refer to ${H}_{m}$ as the data covariance matrix (following [1]), but it is in fact the raw second moment. We do not assume $\mathbb{E}\left\lbrack x\right\rbrack = 0$ .

Assumption 3 (Noise conditions) Suppose that ${\Sigma }_{\text{ noise }} \mathrel{\text{ := }} {\mathbb{E}}_{\left( {x,y}\right) \sim \mathcal{D}}\left\lbrack {{\left( y - \left\langle {w}_{m}^{ * },x\right\rangle \right) }^{2}x{x}^{T}}\right\rbrack$ (i.e., the covariance matrix of the gradient noise at ${w}_{m}^{ * }$ ) ${}^{2}$ and ${\sigma }^{2} \mathrel{\text{ := }} {\begin{Vmatrix}{H}^{-\frac{1}{2}}{\Sigma }_{\text{ noise }}{H}^{-\frac{1}{2}}\end{Vmatrix}}_{2}$ (i.e., the additive noise) exist and are finite.

Assumption 4 (Learning rate condition) Assume $\gamma \leq \frac{1}{\alpha \operatorname{tr}\left( H\right) }$ .

The excess risk of a trained model for group $m$ quantifies how much worse the model performs for group $m$ than the optimal model for $m$ (i.e., the model parameterized by ${w}_{m}^{ * }$ ) does. In Theorem 1, we present an excess risk bound of overparameterized linear regression with constant-stepsize SGD (with iterate averaging) for group $m$ in terms of the full eigenspectra of the data covariance matrices of the group and population.
Under the assumptions above, we are ready for the statement of the main theorem:

Theorem 1 We can bound the excess risk ${\mathcal{E}}_{m}$ for group $m$ as:

$$
{\mathcal{E}}_{m} = {\mathbb{E}}_{\mathcal{D}}\left\lbrack {{L}_{{\mathcal{D}}_{m}}\left( {\bar{w}}_{N}\right) }\right\rbrack - {L}_{{\mathcal{D}}_{m}}\left( {w}_{m}^{ * }\right) \leq 2 \cdot \text{ EffectiveBias } + 2 \cdot \text{ EffectiveVar, }
$$

where:

$$
\text{ EffectiveBias } \leq \left\{ \begin{array}{ll} \frac{{\lambda }_{1}\left( {H}_{m}\right) {\begin{Vmatrix}{w}_{0} - {w}_{m}^{ * }\end{Vmatrix}}_{2}^{2}}{{\gamma }^{2}{N}^{2}{\lambda }_{1}^{2}\left( H\right) }, & {\lambda }_{1}\left( H\right) \geq \frac{1}{\gamma N} \\ {\lambda }_{1}\left( {H}_{m}\right) {\begin{Vmatrix}{w}_{0} - {w}_{m}^{ * }\end{Vmatrix}}_{2}^{2}, & \text{ otherwise } \end{array}\right.
$$

$$
\text{ EffectiveVar } \leq \frac{\frac{2\alpha }{N\gamma }\left( {{\begin{Vmatrix}{w}_{0} - {w}_{m}^{ * }\end{Vmatrix}}_{{I}_{0 : {k}^{ * }}}^{2} + {N\gamma }{\begin{Vmatrix}{w}_{0} - {w}_{m}^{ * }\end{Vmatrix}}_{{H}_{{k}^{ * }:\infty }}^{2}}\right) + {\sigma }^{2}}{1 - {\gamma \alpha }\operatorname{tr}\left( H\right) } \cdot \left( {\frac{1}{N}\underset{\text{ head }}{\underbrace{\mathop{\sum }\limits_{{i \leq {k}^{ * }}}\frac{{\lambda }_{i}\left( {H}_{m}\right) }{{\lambda }_{i}\left( H\right) }}} + N{\gamma }^{2}\underset{\text{ tail }}{\underbrace{\mathop{\sum }\limits_{{i > {k}^{ * }}}{\lambda }_{i}\left( {H}_{m}\right) {\lambda }_{i}\left( H\right) }}}\right) ,
$$

where ${k}^{ * } = \max \left\{ {k : {\lambda }_{k}\left( H\right) \geq \frac{1}{\gamma N}}\right\}$ . As the bound suggests, to obtain a vanishing bound, we need:

1. A sufficiently large sample from the population: ${\lambda }_{1}\left( H\right) \geq \frac{1}{\gamma N}$

2. The head to converge in $N$ : $\mathop{\sum }\limits_{{i \leq {k}^{ * }}}\frac{{\lambda }_{i}\left( {H}_{m}\right) }{{\lambda }_{i}\left( H\right) } = o\left( N\right)$

3. The tail to converge in $N$ : $\mathop{\sum }\limits_{{i > {k}^{ * }}}{\lambda }_{i}\left( {H}_{m}\right) {\lambda }_{i}\left( H\right) = o\left( {1/N}\right)$
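As an illustration of the quantities appearing in the bound, the following computes the effective dimension ${k}^{*}$ and the head and tail sums for a placeholder polynomially decaying eigenspectrum (the spectra and the group/population relationship here are invented for illustration, not taken from the manuscript):

```python
import numpy as np

# Placeholder eigenspectra: polynomial decay, with the group spectrum a
# constant multiple of the population spectrum (purely for illustration).
d, N = 2000, 200
lam_H = 1.0 / np.arange(1, d + 1) ** 2      # population eigenvalues lambda_i(H)
lam_Hm = 0.5 * lam_H                         # group eigenvalues lambda_i(H_m)
alpha = 6.0
gamma = 1.0 / (alpha * lam_H.sum())          # satisfies Assumption 4

# Effective dimension: k* = max{k : lambda_k(H) >= 1/(gamma N)}.
k_star = int(np.sum(lam_H >= 1.0 / (gamma * N)))

head = np.sum(lam_Hm[:k_star] / lam_H[:k_star])  # sum_{i<=k*} lambda_i(H_m)/lambda_i(H)
tail = np.sum(lam_Hm[k_star:] * lam_H[k_star:])  # sum_{i>k*}  lambda_i(H_m)*lambda_i(H)

variance_factor = head / N + N * gamma**2 * tail  # bracketed term in EffectiveVar
assert 0 < k_star < d
# With lam_Hm = 0.5 * lam_H, every head ratio is 0.5, so head = 0.5 * k*.
assert abs(head - 0.5 * k_star) < 1e-9
```

Since the eigenvalues decay, only the first few exceed the $1/(\gamma N)$ threshold, so ${k}^{*}$ stays small and the head sum grows slowly with $N$, as conditions 1-3 require.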
We prove Theorem 1 in Section A. In the following section, we contextualize the group excess risk bound through the lens of real-world challenges in practicing trustworthy machine learning.

§ 4.1 INTERPRETING THE GROUP EXCESS RISK BOUND

In interpreting the bound for ${\mathcal{E}}_{m}$ , we consider the case where the eigenspectrum of $H$ rapidly decays and thus focus on the head of the bound. ${}^{3}$ We first re-express the head in terms of the variance and mean of ${\mathcal{D}}_{m}$ and $\mathcal{D}$ . We denote $\mu \mathrel{\text{ := }} {\mathbb{E}}_{\left( {x,y}\right) \sim \mathcal{D}}\left\lbrack x\right\rbrack$ , $\Sigma \mathrel{\text{ := }} {\mathbb{E}}_{\left( {x,y}\right) \sim \mathcal{D}}\left\lbrack {\left( {x - \mu }\right) {\left( x - \mu \right) }^{T}}\right\rbrack$ , ${\mu }_{m} \mathrel{\text{ := }} {\mathbb{E}}_{\left( {x,y}\right) \sim {\mathcal{D}}_{m}}\left\lbrack x\right\rbrack$ , and ${\Sigma }_{m} \mathrel{\text{ := }} {\mathbb{E}}_{\left( {x,y}\right) \sim {\mathcal{D}}_{m}}\left\lbrack {\left( {x - {\mu }_{m}}\right) {\left( x - {\mu }_{m}\right) }^{T}}\right\rbrack$ . Without loss of generality, we assume $\mu = 0$ .

${}^{2}$ Recall that ${w}_{m}^{ * }$ is an optimum of ${L}_{{\mathcal{D}}_{m}}\left( w\right)$ .

${}^{3}$ We leave rigorously analyzing the tail of the bound as future work. If the eigenspectrum of $H$ doesn't decay rapidly (i.e., there exist many high-variance features), then the variance error of the group excess risk will be higher.

Then:

$$
\mathop{\sum }\limits_{{i \leq {k}^{ * }}}\frac{{\lambda }_{i}\left( {H}_{m}\right) }{{\lambda }_{i}\left( H\right) } = \mathop{\sum }\limits_{{i \leq {k}^{ * }}}\frac{{\lambda }_{i}\left( {\Sigma }_{m}\right) + {\lambda }_{i}\left( {{\mu }_{m}{\mu }_{m}^{T}}\right) }{{\lambda }_{i}\left( \Sigma \right) + 0}
$$

$$
= \frac{{\lambda }_{1}\left( {\Sigma }_{m}\right) + {\begin{Vmatrix}{\mu }_{m}\end{Vmatrix}}_{2}^{2}}{{\lambda }_{1}\left( \Sigma \right) } + \mathop{\sum }\limits_{{2 \leq i \leq {k}^{ * }}}\frac{{\lambda }_{i}\left( {\Sigma }_{m}\right) }{{\lambda }_{i}\left( \Sigma \right) }
$$

$$
= 2\underset{\text{ distributional difference }}{\underbrace{\left\lbrack \mathrm{{KL}}\left( {p}_{1},{q}_{1}\right) + \mathop{\sum }\limits_{{2 \leq i \leq {k}^{ * }}}\mathrm{{KL}}\left( {p}_{i},{q}_{i}\right) \right\rbrack }} + \underset{\text{ relative feature variance }}{\underbrace{\mathop{\sum }\limits_{{i \leq {k}^{ * }}}\log \frac{{\lambda }_{i}\left( {\Sigma }_{m}\right) }{{\lambda }_{i}\left( \Sigma \right) }}} + \underset{\text{ effective dimension }}{\underbrace{{k}^{ * }}},
$$

where ${p}_{1} = \mathcal{N}\left( {{\begin{Vmatrix}{\mu }_{m}\end{Vmatrix}}_{2},{\lambda }_{1}\left( {\Sigma }_{m}\right) }\right)$ , ${p}_{i} = \mathcal{N}\left( {0,{\lambda }_{i}\left( {\Sigma }_{m}\right) }\right)$ , and ${q}_{i} = \mathcal{N}\left( {0,{\lambda }_{i}\left( \Sigma \right) }\right)$ [29]. This result shows that the excess risk for group $m$ can be minimized (and thus generalization to group $m$ can be improved) by:
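The per-coordinate step behind this decomposition follows from the standard closed form for the KL divergence between 1-D Gaussians, $\mathrm{KL}(\mathcal{N}(\mu_1,\sigma_1^2)\,\|\,\mathcal{N}(\mu_2,\sigma_2^2)) = \frac{1}{2}\big(\log\frac{\sigma_2^2}{\sigma_1^2} + \frac{\sigma_1^2+(\mu_1-\mu_2)^2}{\sigma_2^2} - 1\big)$. A quick numeric check of the identity $\lambda_i(H_m)/\lambda_i(H) = 2\,\mathrm{KL} + \log\frac{\lambda_i(\Sigma_m)}{\lambda_i(\Sigma)} + 1$ per coordinate (the numeric values below are arbitrary placeholders):

```python
import math

def kl_gauss(mu1, var1, mu2, var2):
    """KL divergence KL(N(mu1, var1) || N(mu2, var2)) for 1-D Gaussians."""
    return 0.5 * (math.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

# Placeholder values for one head coordinate (not from the manuscript).
lam_m, lam, mu_m_norm = 0.3, 0.8, 0.5

# i = 1 term: lambda_1(H_m)/lambda_1(H) = (lam_m + ||mu_m||^2) / lam.
lhs = (lam_m + mu_m_norm ** 2) / lam
rhs = 2.0 * kl_gauss(mu_m_norm, lam_m, 0.0, lam) + math.log(lam_m / lam) + 1.0
assert abs(lhs - rhs) < 1e-12

# i >= 2 terms (zero means): lam_m / lam = 2 KL + log(lam_m / lam) + 1.
lhs2 = lam_m / lam
rhs2 = 2.0 * kl_gauss(0.0, lam_m, 0.0, lam) + math.log(lam_m / lam) + 1.0
assert abs(lhs2 - rhs2) < 1e-12
```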
1. Making the distributional difference between group $m$ and the population smaller. This result corroborates findings in the fairness literature that randomly oversampling or increasing training data from minoritized groups (thereby boosting the representation of group $m$ in the population) improves worst-case group generalization [19].

2. Minimizing the variance of feature values in group $m$ relative to the variance of feature values in the population. High relative feature variance can occur when group $m$ has sparse or noisy data, which poses a challenge in the real world because minoritized groups are often sidelined in data collection [30] and data are only partially observed [31]. This finding is also consistent with the literature on SGD's implicit bias to rely less on high-variance features [8].

3. Reducing the effective dimension ${k}^{ * }$ of SGD. Recall that ${k}^{ * } = \max \left\{ {k : {\lambda }_{k}\left( H\right) \geq \frac{1}{\gamma N}}\right\}$ and $\gamma \leq \frac{1}{\alpha \operatorname{tr}\left( H\right) } = \frac{1}{\alpha \mathop{\sum }\limits_{i}{\lambda }_{i}\left( H\right) }$ ; therefore, ${k}^{ * }$ can be reduced by: 1) decreasing the variance of feature values in the population $\mathop{\sum }\limits_{i}{\lambda }_{i}\left( H\right)$ and 2) increasing the number of training samples $N$ . This is consistent with intuition, as 1) SGD implicitly relies less on high-variance features [8] and 2) increasing the number of randomly-sampled training instances can improve the representation of minoritized groups in the training data.

While, theoretically, it seems that increasing the representation of minoritized groups in the training data and better including them in data collection improves generalization to such groups, it is important to not engage in predatory inclusion and exploitative data collection practices ${}^{4}$ . We further emphasize that simply increasing the sheer number of samples, especially without analyzing the randomness or validity of sampling strategies, does not imply increasing the representation of minoritized groups in the training data [32]. Overall, we believe that one of the first steps of socially conscientious data work is to consider how data collection practices reinforce and contribute to the power relations and complex social inequality experienced by minoritized groups [33].
§ 5 EMPIRICAL RESULTS

To investigate the tightness of our group excess risk bound, we empirically examine how well our bound aligns with the real group excess risk in a simulated setting, wherein we have control over $\mathcal{D}$ and ${\mathcal{D}}_{m}$ . In particular, we assume $\mathcal{D} \mathrel{\text{ := }} p{\mathcal{D}}_{m} + \left( {1 - p}\right) {\mathcal{D}}_{\text{ rest }}$ is a mixture distribution that interpolates ${\mathcal{D}}_{m} \mathrel{\text{ := }} \mathcal{N}\left( {{\mu }_{m},{\Sigma }_{m}}\right)$ and ${\mathcal{D}}_{\text{ rest }} \mathrel{\text{ := }} \mathcal{N}\left( {{\mu }_{\text{ rest }},{\Sigma }_{\text{ rest }}}\right)$ for $p \in \left\lbrack {0,1}\right\rbrack$ . If $p \ll {0.5}$ , $m$ could be considered a minoritized group, and the excess risk for group $m$ would have implications for fairness (Section 1). If $p \gg {0.5}$ , ${\mathcal{D}}_{\text{ rest }}$ could be viewed as noise, so a model's excess risk for group $m$ would offer insight into the robustness of the model (Section 1). $p = 0$ models a privacy-risk or OOD setting, as ${\mathcal{D}}_{m}$ would be an extra-population group (Section 1). In our experiments, we compare the group excess risk and our bound thereof for various values of $p \in \left\lbrack {0,1}\right\rbrack$ .

Inspired by [1] (Section 6), we consider two overparameterized linear regression problem instances with different rates of eigenspectrum decay for $H$ that satisfy our assumptions from Section 4:

$$
\text{ 1. }{\mu }_{\text{ rest }} \mathrel{\text{ := }} \mathbf{0},\;{\mu }_{m}\left\lbrack i\right\rbrack = {\lambda }_{i}\left( {\Sigma }_{m}\right) = {\lambda }_{i}\left( {\Sigma }_{\text{ rest }}\right) \mathrel{\text{ := }} {\left( i + 1\right) }^{-1}\log {\left( i + 1\right) }^{-2}
$$

$$
\text{ 2. }{\mu }_{\text{ rest }} \mathrel{\text{ := }} \mathbf{0},\;{\mu }_{m}\left\lbrack i\right\rbrack = {\lambda }_{i}\left( {\Sigma }_{m}\right) = {\lambda }_{i}\left( {\Sigma }_{\text{ rest }}\right) \mathrel{\text{ := }} {i}^{-2}
$$

${}^{4}$ https://slideslive.com/38955136/beyond-the-fairness-rhetoric-in-ml

Figure 2: True group excess risk and our bound thereof for $p \in \{ {0.0},{0.1},{0.2},\ldots ,{0.9},{1.0}\}$ for problem instances 1 (left) and 2 (right). Each data point in the plots is averaged over 10 independent runs, and the true group excess risk data points are approximated over ${10}^{5}$ samples from ${\mathcal{D}}_{m}$ .

For both problem instances, ${w}_{m}^{ * }\left\lbrack i\right\rbrack = \frac{{i}^{-1}}{2}$ and ${w}_{\text{ rest }}^{ * }\left\lbrack i\right\rbrack = {i}^{-1}$ (where ${w}_{\text{ rest }}^{ * }$ are the optimal parameters relating $\left( {x,y}\right) \sim {\mathcal{D}}_{\text{ rest }}$ ). Additionally, $\sigma = {0.1}$ . We choose sufficiently different ${\mu }_{m},{\mu }_{\text{ rest }}$ and ${w}_{m}^{ * },{w}_{\text{ rest }}^{ * }$ and smaller $\sigma$ than [1] to enlarge the total variation distance between ${\mathcal{D}}_{m}$ and ${\mathcal{D}}_{\text{ rest }}$ , towards stress-testing our group excess risk bound. Otherwise, we follow the same experimental settings as [1]: $N = {200}$ , $d = {2000} \gg N$ (to simulate overparameterization), $\alpha = 6$ , and $\gamma = \frac{1}{\alpha \operatorname{tr}\left( H\right) }$ .
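This experimental setup can be sketched as follows. The code below is a minimal illustration (not the authors' implementation): it draws training data from the mixture $p\,\mathcal{D}_m + (1-p)\,\mathcal{D}_{\text{rest}}$ for problem instance 2 and runs averaged SGD, with the dimension shrunk from the paper's $d = 2000$ for speed and $p$ chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(0)
d, N, sigma, p = 200, 200, 0.1, 0.3     # shrunken dimension; p is the mixture weight

i = np.arange(1, d + 1)
lam = i ** (-2.0)                        # shared eigenspectrum (problem instance 2)
mu_m = lam.copy()                        # mu_m[i] = lambda_i; mu_rest = 0
w_m, w_rest = 0.5 / i, 1.0 / i           # group / rest optimal parameters

def sample(n):
    """Draw n instances from the mixture p * D_m + (1 - p) * D_rest."""
    from_m = rng.random(n) < p
    mu = np.where(from_m[:, None], mu_m, 0.0)
    w = np.where(from_m[:, None], w_m, w_rest)
    x = mu + rng.normal(size=(n, d)) * np.sqrt(lam)   # diagonal covariance
    y = np.sum(x * w, axis=1) + sigma * rng.normal(size=n)
    return x, y

# Constant-stepsize SGD with iterate averaging on one draw of training data.
x, y = sample(N)
tr_H = lam.sum() + p * np.sum(mu_m ** 2)  # trace of the mixture's second moment
gamma = 1.0 / (6.0 * tr_H)                # alpha = 6, as in the experiments
w, w_bar = np.zeros(d), np.zeros(d)
for t in range(N):
    w_bar += w / N                         # average w_0, ..., w_{N-1}
    w = w - gamma * (w @ x[t] - y[t]) * x[t]
assert np.isfinite(w_bar).all()
```

Evaluating $\bar{w}_N$ on fresh samples from $\mathcal{D}_m$ alone would then give the Monte-Carlo estimate of the group excess risk plotted in Figure 2.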
Our results are displayed in Figure 2. In both plots, the group excess risk decreases (i.e., generalization improves) when $p$ increases, as ${\mathcal{D}}_{m}$ is sampled at a higher rate during training. This finding corroborates our commentary about fairness, privacy, robustness, and OOD generalization. Furthermore, the plots demonstrate that our group excess risk bound closely captures the true excess risk across various $p \in \left\lbrack {0,1}\right\rbrack$ , suggesting that our bound is tight (empirically).

§ 6 DISCUSSION AND CONCLUSION

In this paper, we characterize the inherent generalization of overparameterized linear regression with constant-stepsize SGD (with iterate averaging) to groups within and outside the population from which training instances are sampled. We do so by proving an excess risk bound for an arbitrary group in terms of the full eigenspectra of the data covariance matrices of the group and population. We additionally present a novel interpretation of the group excess risk bound through the lens of real-world challenges in practicing trustworthy machine learning. Finally, we empirically validate the tightness of our bound on simulated data.

This paper offers numerous promising future directions for research. We encourage proving a lower bound on the group excess risk to determine if our upper bound is tight (theoretically). We also suggest proving group excess risk bounds for tail averaging and last-iterate SGD with learning rate decay [1, 2]. It would further be interesting to extend this work to prove group excess risk bounds for logistic regression, 2-layer neural networks, and 1-layer graph convolutional networks [34].
NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/TulqHKf4uPn/Initial_manuscript_md/Initial_manuscript.md
ADDED

@@ -0,0 +1,508 @@
| 1 |
+
# When Personalization Harms: Reconsidering the Use of Group Attributes for Prediction

Anonymous Author(s) Affiliation Address email

## Abstract

Machine learning models often use group attributes to assign personalized predictions. In this work, we show that models that use group attributes can assign unnecessarily inaccurate predictions to specific groups - i.e., that training a model with group attributes can reduce performance for specific groups. We propose formal conditions to ensure the "fair use" of group attributes in prediction models - i.e., collective preference guarantees that can be checked by training one additional model. We characterize how machine learning models can exhibit fair use violations due to standard practices in specification, training, and deployment. We study the prevalence of fair use violations in clinical prediction models. Our results highlight the difficulty of resolving fair use violations, underscore the need to measure the gains of personalization for all groups who provide personal data, and illustrate actionable interventions to mitigate harm.

## 1 Introduction

Machine learning models are often used to support or automate decisions that affect people. In medicine, for example, models diagnose illnesses [64, 31, 73], estimate survival rates [78], and predict treatment response [41]. In such applications, medical decisions follow the ethical principles of beneficence ("do the best") and non-maleficence ("do no harm") [8]. In turn, models that support medical decisions are designed to perform as well as possible without inflicting harm. These principles explain why so many clinical prediction models use group attributes that encode characteristics like sex and age - i.e., characteristics that would be prohibited for models in lending or hiring. To predict as well as possible on a heterogeneous population, models must encode all characteristics that could tell people apart [47].

The prevalence of group attributes in prediction models reflects a need for personalization, ${}^{1}$ but do personalized models that use group attributes improve performance for every group? In this paper, we refer to this principle as fair use. Fair use enshrines the basic promise of personalization in applications like precision medicine - i.e., that each person who reports personal characteristics should expect a tailored performance gain in return. In prediction tasks with group attributes, this means that every group should expect better performance from a personalized model that solicits group membership than from a generic model that does not. These gains should be tailored, meaning that every group should prefer their personalized predictions over the personalized predictions assigned to another group. Machine learning models are trained to use group attributes in ways that improve performance at a population level. In practice, this means that models trained with group attributes can assign unnecessarily inaccurate predictions to specific groups due to routine decisions in model specification or model selection (see Figure 1). In many real-world applications, this drop in performance reflects harm. In clinical applications, for example, inaccurate predictions undermine medical decisions and health outcomes. This harm is silent and avoidable. Silent, because fair use violations would only draw attention if model developers were to evaluate the gains of personalization for intersectional groups. Avoidable, because a fair use violation shows that a group could receive better predictions from a generic model or from a personalized model for another group; thus we can always resolve a fair use violation by assigning predictions from this better-performing model.

---

${}^{1}$ Personalization is a term that encompasses a breadth of techniques that use personal data. Here, we use it to describe approaches that target groups rather than individuals - i.e., "categorization" rather than "individualization" as per the taxonomy of Fan & Poole [27].

---

<table><tr><td>GROUP</td><td>SIZE</td><td colspan="2">ERROR RATE</td><td>GAIN</td></tr><tr><td>$g$</td><td>${n}_{g}$</td><td>$R\left( {h}_{0}\right)$</td><td>${R}_{\mathbf{g}}\left( {h}_{\mathbf{g}}\right)$</td><td>${\Delta }_{\mathbf{g}}\left( {{h}_{\mathbf{g}},{h}_{0}}\right)$</td></tr><tr><td>female, <30</td><td>48</td><td>38.1%</td><td>26.8%</td><td>11.3%</td></tr><tr><td>male, <30</td><td>49</td><td>23.9%</td><td>26.7%</td><td>-2.8%</td></tr><tr><td>female, 30 to 60</td><td>307</td><td>30.3%</td><td>29.1%</td><td>1.2%</td></tr><tr><td>male, 30 to 60</td><td>307</td><td>15.4%</td><td>15.2%</td><td>0.2%</td></tr><tr><td>female, 60+</td><td>123</td><td>19.3%</td><td>21.9%</td><td>-2.6%</td></tr><tr><td>male, 60+</td><td>181</td><td>11.0%</td><td>8.2%</td><td>2.8%</td></tr><tr><td>Total</td><td>1152</td><td>20.4%</td><td>19.4%</td><td>1.0%</td></tr></table>

Figure 1: Personalization can reduce performance for specific groups. We show the gains of personalization for a classifier to screen for obstructive sleep apnea (i.e., the apnea dataset in §4). We fit a personalized model ${h}_{g}$ and a generic model ${h}_{0}$ with logistic regression, personalizing ${h}_{g}$ with a one-hot encoding of sex and age_group. As shown, personalization reduces training error from 20.4% to 19.4% but increases training error for 2 groups: (female, 60+) and (male, <30). These effects are also present on test data.
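The comparison in Figure 1 can be reproduced in outline: fit a generic model and a one-hot-personalized model, then report the per-group error gap. This is a minimal sketch on synthetic stand-in data (the dataset, group labels, and sizes here are illustrative, not the paper's apnea data), assuming NumPy and scikit-learn are available.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, k = 1200, 4                               # individuals, intersectional groups
X = rng.normal(size=(n, 5))                  # clinical features (synthetic)
g = rng.integers(0, k, size=n)               # group id, e.g. sex x age_group
# Group 1 has a shifted outcome, so group membership carries real signal.
y = ((X[:, 0] + 1.5 * (g == 1) + rng.normal(size=n)) > 0.5).astype(int)

h0 = LogisticRegression(max_iter=1000).fit(X, y)        # generic model
Xg = np.hstack([X, np.eye(k)[g]])                       # one-hot group encoding
hg = LogisticRegression(max_iter=1000).fit(Xg, y)       # personalized model

# Gain of personalization per group: Delta_g = R_g(h0) - R_g(hg).
gains = {}
for grp in range(k):
    m = g == grp
    err0 = np.mean(h0.predict(X[m]) != y[m])
    errg = np.mean(hg.predict(Xg[m]) != y[m])
    gains[grp] = err0 - errg
```

A negative entry in `gains` is exactly the effect shown in the table: the group would have been better served by the generic model.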

Although many prediction models use group attributes to assign personalized predictions, there is little awareness that this practice could reduce performance at a group level [see e.g., 2, 63]. Simply put, it is hard to imagine how a model that accounts for group membership can perform worse than a model that does not. Our goal in this paper is to expose this effect and lay the foundations to address it. To this end, we characterize how fair use violations arise, demonstrate their prevalence in real-world applications, and propose interventions to mitigate their harm. Specifically, the main contributions of our work include:

1. We propose formal conditions to ensure the fair use of group attributes in prediction models.

2. We characterize how common approaches to personalization in machine learning can lead personalized models to exhibit fair use violations. These "failure modes" delineate the root causes of fair use violations and inform interventions that mitigate harm.

3. We conduct a comprehensive study on the gains of personalization in clinical prediction models for decision-making, ranking, and risk assessment. Our results demonstrate the prevalence of fair use violations across model classes and personalization techniques, and highlight the challenges of resolving these violations through changes to model development.

4. We present a case study on personalization for a model trained to predict mortality for patients with acute kidney injury. Our study shows how a fair use audit can safeguard against "race correction" in clinical prediction models, and facilitate targeted interventions that reduce harm (Appendix F).

## 2 Fair Use Guarantees

In this section, we present formal conditions for the fair use of group attributes in prediction. We provide notation and preliminaries for this section in Appendix A.

### 2.1 Fair Use

We start with Definition 1, which characterizes the fair use of a group attribute in terms of collective preference guarantees.

Definition 1 (Fair Use). A personalized model $h : \mathcal{X} \times \mathcal{G} \rightarrow \mathcal{Y}$ guarantees the fair use of a group attribute $\mathcal{G}$ if

$$
{\Delta }_{\mathbf{g}}\left( {{h}_{\mathbf{g}},{h}_{0}}\right) \geq 0 \quad \text{for all groups } \mathbf{g} \in \mathcal{G}, \tag{1}
$$

$$
{\Delta }_{\mathbf{g}}\left( {{h}_{\mathbf{g}},{h}_{{\mathbf{g}}^{\prime }}}\right) \geq 0 \quad \text{for all groups } \mathbf{g},{\mathbf{g}}^{\prime } \in \mathcal{G}. \tag{2}
$$
Condition (1) captures rationality for group $\mathbf{g}$ : a majority of group $\mathbf{g}$ prefers a personalized model ${h}_{\mathbf{g}}$ to a generic model ${h}_{0}$ . Condition (2) captures envy-freeness for group $\mathbf{g}$ : a majority of group $\mathbf{g}$ prefers their predictions to the predictions personalized for any other group. These conditions enshrine the minimal expectations that groups hold of a personalized model. Without rationality, a majority in some group would prefer the generic model. Without envy-freeness, a majority in some group would prefer the personalized predictions assigned to another group.

The fair use conditions in Definition 1 are collective, in that performance is measured over individuals in a group; and weak, in that the expected performance gain is non-negative - i.e., no group will be harmed. The conditions can be adapted to different prediction tasks by choosing a suitable risk metric. Since fair use conditions represent guarantees on the expected gains of personalization, a suitable metric should measure model performance exactly (cf. a surrogate metric that we optimize to fit a model; see Figure 5 in Section 3). In classification tasks where we want accurate decisions, this would be the error rate. In tasks where we want reliable risk estimates, it would be the expected calibration error [54].
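Both conditions can be audited directly from group-wise losses. The sketch below checks rationality and envy-freeness under 0-1 error; the function name and signature are ours, not the paper's, and `pred_by_group[g]` is assumed to hold the personalized model's predictions for every individual when the group attribute is set to `g` (which is what lets us evaluate envy).

```python
import numpy as np

def fair_use_audit(y, groups, pred_generic, pred_by_group):
    """Check the fair use conditions of Definition 1 under 0-1 error.

    pred_by_group[g]: predictions the personalized model assigns to *all*
    individuals when group membership is reported as g. A minimal sketch.
    """
    report = {}
    for g in np.unique(groups):
        m = groups == g
        err_own = np.mean(pred_by_group[g][m] != y[m])
        err_generic = np.mean(pred_generic[m] != y[m])
        report[int(g)] = {
            # Condition (1): rationality, Delta_g(h_g, h_0) >= 0.
            "rational": err_generic - err_own >= 0,
            # Condition (2): envy-freeness, Delta_g(h_g, h_g') >= 0 for all g'.
            "envy_free": all(
                err_own <= np.mean(pred_by_group[g2][m] != y[m])
                for g2 in pred_by_group if g2 != g
            ),
        }
    return report
```

On held-out data this realizes the "one additional model" check from the abstract: train the generic model alongside the personalized one and compare their group-wise risks.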

Personalized models that obey fair use guarantees incentivize groups to truthfully report group membership in deployment [see e.g., 39, 62, 30].

### 2.2 Use Cases

Relevant use cases for fair use guarantees include:

**Protected Classes:** Models sometimes include group attributes that encode immutable characteristics due to application-specific norms or special provisions [see 44, 45]. For example, sex is a protected characteristic in employment law, but not in medicine [see e.g., 56, for a discussion on the use of sex to predict cardiovascular disease]. Likewise, U.S. regulations allow credit scores to use age if it does not harm older applicants [15]. In such cases, models should use these attributes in a way that leads to tailored performance gains for every group.

**Sensitive Data:** Models that use attributes like hiv_status should guarantee a tailored improvement in performance for the sensitive group, hiv $= +$ . Otherwise, it would be better not to solicit this information in the first place, as the information could inflict harm if leaked [see e.g., 6].

**Self-Reported Data:** Certain kinds of models require users to report their data at prediction time [see e.g., self-report diagnostics 42, 67]. These models should obey fair use conditions to incentivize users to report their data truthfully (see Remark 2).

**Costly Data:** Group attributes can encode data collected at prediction time - e.g., an attribute like tumor_subtype whose value can only be determined by an invasive medical test. Models that ensure fair use with respect to tumor_subtype guarantee that patients with a specific type of tumor will not receive a less accurate prediction after undergoing the procedure.

## 3 Failure Modes of Personalization

In this section, we describe how common approaches to personalization can reduce performance for specific groups. Our goal is to highlight failure modes that apply to a broad range of prediction tasks. We pair each failure mode with toy examples, focusing on simple classification tasks that can be checked manually. ${}^{2}$

### 3.1 Model Specification

We start with misspecification - i.e., when we fit models that cannot represent the role of group membership in the data distribution. A common form of misspecification occurs when we personalize simple models using a one-hot encoding. In such cases, models exhibit fair use violations on data distributions that exhibit intersectionality (see Figure 2). Consider, for example, a logistic regression model with a one-hot encoding that assigns higher risk to patients who are old and to patients who are male. This would lead to a fair use violation for patients who are old and male if their true risk were lower than that of either group alone.
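This failure mode is easy to reproduce on a toy XOR-style task: when the intersectional group (old, male) breaks the pattern set by its main effects, a main-effects one-hot model cannot fit every group, while adding the interaction term can. The cell sizes below mirror Figure 2; everything else is an illustrative sketch assuming scikit-learn.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each cell: ((old, male), label, size). "old" and "male" are each positive
# on their own, but the intersection (old, male) is negative.
cells = [((0, 0), 0, 24), ((0, 1), 1, 25), ((1, 0), 1, 25), ((1, 1), 0, 27)]
G = np.vstack([np.tile(c, (sz, 1)) for c, _, sz in cells])
y = np.concatenate([np.full(sz, lab) for _, lab, sz in cells])

h_1hot = LogisticRegression().fit(G, y)                  # main effects only
Gx = np.column_stack([G, G[:, 0] * G[:, 1]])             # add interaction term
h_all = LogisticRegression().fit(Gx, y)

err_1hot = np.mean(h_1hot.predict(G) != y)
err_all = np.mean(h_all.predict(Gx) != y)
# Any main-effects linear rule must misclassify at least one whole cell,
# so err_1hot >= 24/101; the interaction model can separate all four cells.
```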

---

${}^{2}$ In most cases, we train a linear classifier that minimizes the error rate on a perfectly sampled training dataset - i.e., where $\frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}1\left\lbrack {{\mathbf{x}}_{i} = \mathbf{x},{y}_{i} = y,{\mathbf{g}}_{i} = \mathbf{g}}\right\rbrack = \mathbb{P}\left( {\mathbf{x}, y,\mathbf{g}}\right)$ for all $\left( {\mathbf{x}, y,\mathbf{g}}\right) \in \mathcal{X} \times \mathcal{Y} \times \mathcal{G}$ . This condition ensures that the training error matches the test error.

---

<table><tr><td>Group</td><td colspan="2">Data</td><td colspan="2">Predictions</td><td colspan="2">Mistakes</td><td>Gain</td></tr><tr><td>$g$</td><td>${n}_{\mathbf{g}}^{ + }$</td><td>${n}_{g}^{ - }$</td><td>${h}_{0}$</td><td>${h}_{g}$</td><td>${R}_{\mathbf{g}}\left( {h}_{0}\right)$</td><td>${R}_{g}\left( {h}_{g}\right)$</td><td>$\Delta {R}_{\mathbf{g}}\left( {{h}_{\mathbf{g}},{h}_{0}}\right)$</td></tr><tr><td>young, female</td><td>0</td><td>24</td><td>-</td><td>+</td><td>0</td><td>24</td><td>-24</td></tr><tr><td>young, male</td><td>25</td><td>0</td><td>-</td><td>+</td><td>25</td><td>0</td><td>25</td></tr><tr><td>old, female</td><td>25</td><td>0</td><td>-</td><td>+</td><td>25</td><td>0</td><td>25</td></tr><tr><td>old, male</td><td>0</td><td>27</td><td>-</td><td>-</td><td>0</td><td>0</td><td>0</td></tr><tr><td>Total</td><td>50</td><td>51</td><td/><td/><td>50</td><td>24</td><td>26</td></tr></table>

Figure 2: Fair use violations due to model misspecification. Here, we are given ${n}^{ + } = {50}$ positive examples and ${n}^{ - } = {51}$ negative examples for a 2D classification task where $\mathbf{g} \in \{$ male, female $\} \times \{$ old, young $\}$ . We fit two linear classifiers: ${h}_{0}$ , a generic model without group attributes, and ${h}_{g}$ , a personalized model with a one-hot encoding. As shown, personalization reduces the overall number of mistakes from 50 to 24. However, not all groups benefit from personalization: (young, female) now receives less accurate predictions, while (old, male) receives no gain. Here, ${h}_{g}$ also violates envy-freeness for (young, female), as individuals in this group would receive more accurate predictions by misreporting their group membership as (old, male).

Misspecification can also arise due to a failure to account for group-specific interaction effects - e.g., instances where group attributes act as mediator or moderator variables [see e.g., 7]. In Figure 3, we show an example that exhibits the hallmarks of personalization: a generic model performs poorly on "heterogeneous" groups $A$ and $C$ , and a personalized model that accounts for group membership improves overall performance by assigning more accurate predictions to $A$ and $C$ . In this case, the resulting model exhibits a fair use violation for group $B$ because a generic model performs as well as possible for group $B$ . In practice, we can avoid these issues by either fitting models that are rich enough to capture these effects, or by training a separate model for each group. Both are challenging in tasks with multiple groups, as we must either specify interactions for each group, or fit models using a limited amount of data for each group.

![Figure 3 graphic](#)

Figure 3: Fair use violation resulting from model misspecification. We consider a 2D classification task with heterogeneous groups $\mathbf{g} = \{ A, B, C\}$ where an ideal model should assign a personalized intercept to each group and a personalized slope to group $B$ . In this case, a personalized model with a one-hot encoding would fit a personalized intercept for each group, but fail to fit a personalized slope for group $B$ . The personalized model would improve overall performance by assigning more accurate predictions to groups $A$ and $C$ . However, it would result in a fair use violation by performing worse for group $B$ .

### 3.2 Model Selection

Model development often involves choosing one model from a family of candidate models - e.g., when we set a regularization penalty to avoid overfitting, or choose a subset of variables to improve usability. Common criteria for model selection consist of choosing a model on the basis of population-level performance [e.g., mean K-fold cross-validation test error; 4]. In practice, this choice can lead to models that reduce performance for a specific group. We demonstrate this effect in Figure 4. The example highlights how fair use violations may be unavoidable in settings where we are forced to assign predictions with a single model - as there may not exist a model that ensures fair use for all groups.
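The selection step can be made auditable by recording per-group error for every candidate, not just the population mean that drives the choice. The sketch below is illustrative (the function name, candidate grid, and CV setup are ours): it picks a regularization strength by population-level cross-validated error while exposing the group-level errors that the usual criterion hides.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

def select_and_audit(X, y, groups, Cs=(0.01, 0.1, 1.0, 10.0)):
    """Pick C by population-level CV error, but also report per-group
    error for every candidate so group-level regressions are visible."""
    pop_err, per_group = {}, {}
    for C in Cs:
        pred = cross_val_predict(
            LogisticRegression(C=C, max_iter=1000), X, y, cv=5)
        pop_err[C] = np.mean(pred != y)
        per_group[C] = {int(g): np.mean(pred[groups == g] != y[groups == g])
                        for g in np.unique(groups)}
    best = min(pop_err, key=pop_err.get)
    return best, pop_err, per_group
```

Comparing `per_group[best]` against the other candidates shows when the population-optimal model is dominated, for some group, by a runner-up.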

### 3.3 Other Failure Modes & Discussion

Work on personalization naturally presumes that fitting a model with group attributes will provide a uniform performance gain to all groups. In practice, however, this only holds under restrictive assumptions. We discuss other failure modes, along with examples, in Appendix E, including training with a surrogate loss function, generalization, and dataset shift. The failure modes covered in this section were chosen because they motivate potential interventions for model development. For example, one could avoid the fair use violations in Figure 2 by using an intersectional one-hot encoding, and avoid violations across all cases by training decoupled models.

## 4 Empirical Study

In this section, we study fair use in clinical prediction models - i.e., models that routinely include group attributes and where fair use violations inflict harm. Our goals are to measure the prevalence of fair use violations and to evaluate how these change as a result of interventions in model development. We attach all software to reproduce the results in this section to our submission, and include additional details on our setup and further experimental results in the supplement.

### 4.1 Setup

We work with 6 datasets for clinical prediction tasks (see Table 1). We split each dataset into a training sample (80%) to fit models, and a test sample (20%) to evaluate the gains of personalization. We use the training data from each dataset to fit 9 kinds of personalized models. Each personalized model belongs to one of 3 model classes: logistic regression (LR), random forests (RF), and neural nets (NN); and accounts for group membership using one of 3 personalization techniques.

The three personalization techniques are: One-hot Encoding (1Hot), where we fit a model with dummy variables for each group attribute; Intersectional Encoding (All), where we fit a model with dummy variables for each intersectional group; and Decoupling (DCP), where we fit a separate model for each intersectional group using its own data. The three techniques represent increasingly complex ways to account for group membership, where complexity is measured by the interactions between group attributes and other features: 1Hot reflects no interactions; All reflects interactions among group attributes; and DCP reflects all possible interactions between group attributes and features.
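The three techniques differ only in how group membership enters the design matrix (or the training split). A minimal sketch for two binary group attributes follows; the function names are ours, and the group-id arithmetic assumes exactly two binary attributes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def encode_1hot(X, A):
    # 1Hot: dummy variables for each group attribute (main effects only).
    return np.hstack([X, A])

def encode_all(X, A):
    # All: one dummy variable per intersectional group (here, 4 groups
    # formed by two binary attributes in the columns of A).
    gid = A[:, 0] * 2 + A[:, 1]
    return np.hstack([X, np.eye(4)[gid]])

def fit_decoupled(X, A, y):
    # DCP: a separate model per intersectional group, on its own data.
    gid = A[:, 0] * 2 + A[:, 1]
    return {int(g): LogisticRegression(max_iter=1000).fit(X[gid == g], y[gid == g])
            for g in np.unique(gid)}
```

The complexity ordering in the text is visible here: `encode_1hot` adds two columns, `encode_all` adds one per intersectional group, and `fit_decoupled` implicitly interacts every feature with group membership at the cost of splitting the training data.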

We evaluate the gains of personalization for each model in terms of three performance metrics: (1) error rate, which reflects the accuracy of yes-or-no predictions [for a diagnostic test, e.g., 26]; (2) expected calibration error (ECE), which measures the reliability of risk predictions [for a medical risk score, e.g., 13]; (3) area under the ROC curve (AUC), which measures ranking accuracy [for a prioritization tool, e.g., 77].
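Error rate and AUC have standard library implementations; ECE is worth spelling out since it is the metric tied to risk reliability. This is a minimal sketch of the common equal-width-bin estimator; the paper's exact binning scheme may differ.

```python
import numpy as np

def expected_calibration_error(y_true, y_prob, n_bins=10):
    """Binned ECE: the bin-weighted mean gap between average predicted
    probability and the observed outcome rate within each bin."""
    y_true, y_prob = np.asarray(y_true, float), np.asarray(y_prob, float)
    bins = np.minimum((y_prob * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        m = bins == b
        if m.any():
            ece += m.mean() * abs(y_prob[m].mean() - y_true[m].mean())
    return ece
```

Applied per group, this yields the group-wise ECE gains reported in Table 1.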

### 4.2 Results

We summarize our results for logistic regression in Table 1 and for other model classes in Appendix G.

**On Prevalence** Our results show that personalized models can improve performance at a population level yet reduce performance for specific groups. These fair use violations arise across datasets, personalization techniques, and model classes. Consider the standard configuration used to develop clinical prediction models - i.e., a logistic regression model with a one-hot encoding of group attributes (LR + 1Hot). Here, we find that at least one group experiences a statistically significant fair use violation in terms of error on 4/6 datasets (5/6 for AUC and ECE).

**On Personalization Techniques** Our results show that no single personalization technique minimizes fair use violations. In Table 1, for example, the best personalization technique for cardio_eicu is intersectional encoding, while the best technique for mortality is decoupling. These strategies also change across model classes: for neural networks, the corresponding strategies for cardio_eicu and mortality are decoupling and intersectional encoding, respectively (see Appendix G). In general, even strategies that exhibit few violations can fail critically. For example, LR + DCP on saps leads to a 10% increase in error for HIV+ & >30. Overall, these results suggest that the most consistent way to avoid the harm of a fair use violation is to check for one.

<table><tr><td rowspan="2">Dataset</td><td rowspan="2">Metrics</td><td colspan="3">Test AUC</td><td colspan="3">Test ECE</td><td colspan="3">Test Error</td></tr><tr><td>1Hot</td><td>All</td><td>DCP</td><td>1Hot</td><td>All</td><td>DCP</td><td>1Hot</td><td>All</td><td>DCP</td></tr><tr><td>apnea</td><td>Personalized</td><td>0.750</td><td>0.750</td><td>0.803</td><td>7.5%</td><td>5.5%</td><td>7.2%</td><td>34.2%</td><td>33.8%</td><td>26.2%</td></tr><tr><td>$n = {1152}, d = {26}$</td><td>Gain</td><td>0.001</td><td>0.000</td><td>0.053</td><td>-1.5%</td><td>0.6%</td><td>-1.1%</td><td>-1.0%</td><td>-0.7%</td><td>7.0%</td></tr><tr><td>$\mathcal{G} = \{$ age, sex $\}$</td><td>Best/Worst Gain</td><td>0.002 / -0.001</td><td>0.001 / -0.016</td><td>0.119 / -0.005</td><td>0.7% / -7.1%</td><td>0.7% / -4.6%</td><td>1.7% / -6.6%</td><td>0.0% / -9.9%</td><td>1.8% / -7.8%</td><td>21.7% / -7.8%</td></tr><tr><td>$m = 6$</td><td>Rat. Gains/Viols</td><td>1/2</td><td>1/4</td><td>4/0</td><td>1/3</td><td>1/3</td><td>2/2</td><td>0/4</td><td>1/3</td><td>4/1</td></tr><tr><td>[66]</td><td>EF Gains/Viols</td><td>0/0</td><td>0/0</td><td>3/0</td><td>0/3</td><td>0/3</td><td>4/0</td><td>0/6</td><td>0/5</td><td>4/1</td></tr><tr><td rowspan="5">cardio_eicu $n = {1341}, d = {49}$ $\mathcal{G} = \{$ age, sex $\}$ $m = 4$ [60]</td><td>Personalized</td><td>0.768</td><td>0.767</td><td>0.762</td><td>4.4%</td><td>4.6%</td><td>8.9%</td><td>29.1%</td><td>29.1%</td><td>29.5%</td></tr><tr><td>Gain</td><td>0.000</td><td>-0.001</td><td>-0.007</td><td>0.4%</td><td>0.2%</td><td>-4.1%</td><td>-0.4%</td><td>-0.4%</td><td>-0.9%</td></tr><tr><td>Best/Worst Gain</td><td>0.002 / -0.001</td><td>0.001 / -0.001</td><td>0.094 / -0.099</td><td>1.6% / -1.5%</td><td>0.9% / -0.2%</td><td>-1.1% / -6.3%</td><td>0.0% / -3.1%</td><td>0.2% / -3.1%</td><td>12.9% / -8.9%</td></tr><tr><td>Rat. Gains/Viols</td><td>2/2</td><td>2/1</td><td>1/2</td><td>2/1</td><td>1/0</td><td>0/4</td><td>0/2</td><td>1/2</td><td>2/2</td></tr><tr><td>EF Gains/Viols</td><td>0/0</td><td>0/0</td><td>3/1</td><td>0/2</td><td>0/2</td><td>1/1</td><td>0/3</td><td>0/3</td><td>3/1</td></tr><tr><td rowspan="5">cardio_mimic $n = {5289}, d = {49}$ $\mathcal{G} = \{$ age, sex $\}$ $m = 4$ [38]</td><td>Personalized</td><td>0.854</td><td>0.854</td><td>0.870</td><td>2.1%</td><td>2.3%</td><td>2.3%</td><td>23.3%</td><td>23.4%</td><td>21.4%</td></tr><tr><td>Gain</td><td>0.001</td><td>0.001</td><td>0.017</td><td>-0.4%</td><td>-0.5%</td><td>-0.6%</td><td>0.3%</td><td>0.3%</td><td>2.2%</td></tr><tr><td>Best/Worst Gain</td><td>0.001 / -0.000</td><td>0.001 / -0.000</td><td>0.051 / 0.006</td><td>0.5% / 0.4%</td><td>0.6% / -0.2%</td><td>0.6% / -2.3%</td><td>0.9% / -0.1%</td><td>0.9% / -0.1%</td><td>7.6% / -0.2%</td></tr><tr><td>Rat. Gains/Viols</td><td>2/1</td><td>2/1</td><td>4/0</td><td>4/0</td><td>3/0</td><td>1/2</td><td>3/0</td><td>3/0</td><td>3/0</td></tr><tr><td>EF Gains/Viols</td><td>0/0</td><td>0/0</td><td>4/0</td><td>1/3</td><td>0/1</td><td>3/1</td><td>0/3</td><td>0/3</td><td>4/0</td></tr><tr><td rowspan="5">heart $n = {181}, d = {26}$ $\mathcal{G} = \{$ sex, age $\}$ $m = 4$ [17]</td><td>Personalized</td><td>0.870</td><td>0.846</td><td>0.817</td><td>8.4%</td><td>17.8%</td><td>17.5%</td><td>19.7%</td><td>19.7%</td><td>15.8%</td></tr><tr><td>Gain</td><td>-0.007</td><td>-0.030</td><td>-0.060</td><td>2.8%</td><td>-6.6%</td><td>-6.3%</td><td>-1.3%</td><td>-1.3%</td><td>2.6%</td></tr><tr><td>Best/Worst Gain</td><td>0.007 / -0.031</td><td>0.024 / -0.050</td><td>0.039 / -0.190</td><td>4.4% / -0.6%</td><td>-1.8% / -3.1%</td><td>10.1% / -4.6%</td><td>0.0% / -6.1%</td><td>0.0% / -12.1%</td><td>10.6% / -8.4%</td></tr><tr><td>Rat. Gains/Viols</td><td>1/1</td><td>1/1</td><td>0/3</td><td>2/1</td><td>0/4</td><td>2/1</td><td>0/1</td><td>0/3</td><td>3/1</td></tr><tr><td>EF Gains/Viols</td><td>0/0</td><td>0/0</td><td>1/2</td><td>0/2</td><td>0/3</td><td>2/2</td><td>0/1</td><td>0/1</td><td>2/1</td></tr><tr><td rowspan="5">mortality $n = {25366}, d = {468}$ $\mathcal{G} = \{$ age, sex $\}$ $m = 6$ [38]</td><td>Personalized</td><td>0.848</td><td>0.848</td><td>0.880</td><td>2.0%</td><td>2.1%</td><td>2.5%</td><td>23.6%</td><td>23.4%</td><td>20.2%</td></tr><tr><td>Gain</td><td>0.000</td><td>0.001</td><td>0.033</td><td>0.2%</td><td>0.1%</td><td>-0.3%</td><td>-0.2%</td><td>-0.0%</td><td>3.2%</td></tr><tr><td>Best/Worst Gain</td><td>0.005 / -0.001</td><td>0.005 / -0.000</td><td>0.111 / 0.012</td><td>1.5% / 0.1%</td><td>2.6% / -0.3%</td><td>11.2% / -2.4%</td><td>0.8% / -2.5%</td><td>2.1% / -0.4%</td><td>20.1% / -0.5%</td></tr><tr><td>Rat. Gains/Viols</td><td>3/3</td><td>3/2</td><td>6/0</td><td>5/0</td><td>5/1</td><td>3/2</td><td>2/4</td><td>3/2</td><td>5/1</td></tr><tr><td>EF Gains/Viols</td><td>0/0</td><td>0/0</td><td>6/0</td><td>1/1</td><td>3/2</td><td>5/1</td><td>0/4</td><td>1/4</td><td>6/0</td></tr><tr><td>saps</td><td>Personalized</td><td>0.890</td><td>0.890</td><td>0.888</td><td>1.5%</td><td>1.5%</td><td>2.0%</td><td>18.9%</td><td>18.9%</td><td>18.5%</td></tr><tr><td>$n = {7797}, d = {36}$</td><td>Gain</td><td>0.001</td><td>0.001</td><td>-0.001</td><td>0.1%</td><td>0.1%</td><td>-0.4%</td><td>0.0%</td><td>0.0%</td><td>0.4%</td></tr><tr><td>$\mathcal{G} = \{$ hiv, age $\}$</td><td>Best/Worst Gain</td><td>0.014 / -0.000</td><td>0.014 / -0.001</td><td>0.017 / -0.246</td><td>2.8% / -1.5%</td><td>2.4% / -0.6%</td><td>9.4% / -19.1%</td><td>19.0% / -10.4%</td><td>0.8% / -10.4%</td><td>3.5% / -23.3%</td></tr><tr><td>$m = 4$</td><td>Rat. Gains/Viols</td><td>1/1</td><td>1/1</td><td>2/2</td><td>2/0</td><td>1/1</td><td>2/2</td><td>2/1</td><td>1/3</td><td>2/1</td></tr><tr><td>[3]</td><td>EF Gains/Viols</td><td>0/0</td><td>0/0</td><td>2/1</td><td>2/2</td><td>2/2</td><td>3/1</td><td>1/1</td><td>2/2</td><td>2/2</td></tr></table>

Table 1: Performance of personalized logistic regression models on all datasets. We show the gains of personalization in terms of test AUC, ECE, and error. We report: model performance at the population level, the overall gain of personalization, the range of gains over $m$ intersectional groups, and the number of rationality and envy-freeness gains/violations (evaluated using a bootstrap hypothesis test at a 10% significance level).

**On Interventions in Model Development** Our results show that routine decisions in model development can produce considerable differences in group-level performance and fair use violations. This suggests that if we can spot fair use violations, we may be able to minimize them through "interventions" in model development. In light of this, we consider interventions that address the failure modes in Section 3 - e.g., using an intersectional one-hot encoding, training decoupled models, and equalizing sample sizes.

In general, we find that applying these strategies can often reduce fair use violations. For example, we can eliminate all fair use violations for cardio_mimic in our standard configuration by training decoupled models. However, there is no "best" intervention that consistently resolves these violations. Typically, this is because an intervention that resolves a violation for one group will precipitate a violation for another. In cardio_eicu, for instance, a logistic regression model fit with a one-hot encoding exhibits a violation for old males. Switching to an intersectional encoding fixes this violation but introduces a new one for old females.

**On the Reliability of Gains & Violations** Our results underscore the need for reliable procedures to discover fair use violations or claim gains from personalization. We can often find detectable instances of benefit or harm. For example, on saps in our default configuration, we detect a gain from personalization for patients who are HIV-negative and older than 30. Likewise, in cardio_eicu, training LR + All leads to a detectable fair use violation for old females (see, e.g., Rat. Gains/Viols in Table 1). One actionable finding from an evaluation of the gains of personalization is that a group experiences neither a meaningful gain nor meaningful harm from personalization. In such cases, one may wish to intervene to avoid soliciting unnecessary data: when group attributes encode information that is sensitive or that must be collected at prediction time (e.g., hiv_status or tumor_subtype), we may prefer to avoid soliciting information that is not demonstrably useful for prediction.
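Table 1 evaluates gains and violations with a bootstrap hypothesis test. The paper's exact procedure is not spelled out in this excerpt, so the following is a minimal sketch of one standard choice, a shifted-percentile bootstrap on per-individual loss differences, applied one group at a time.

```python
import numpy as np

def bootstrap_gain_test(loss_generic, loss_personalized, n_boot=2000, seed=0):
    """Bootstrap test of the per-group gain Delta_g = R_g(h0) - R_g(h_g).

    loss_* are per-individual losses (e.g., 0/1 errors) for one group.
    Returns the observed gain and a two-sided bootstrap p-value for the
    null hypothesis of zero gain. A sketch; the paper's test may differ.
    """
    rng = np.random.default_rng(seed)
    diffs = np.asarray(loss_generic, float) - np.asarray(loss_personalized, float)
    obs = diffs.mean()
    n = len(diffs)
    boot = np.array([diffs[rng.integers(0, n, n)].mean() for _ in range(n_boot)])
    # Shift the bootstrap distribution to the null (mean zero) and compare
    # its tail mass to the observed gain.
    shifted = boot - obs
    p = np.mean(np.abs(shifted) >= abs(obs))
    return obs, p
```

Running the test at a 10% level per group, as in Table 1, flags a rationality gain when `obs > 0` and `p < 0.1`, and a violation when `obs < 0` and `p < 0.1`.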
|
| 138 |
+
|
| 139 |
+
References
|
| 140 |
+
|
| 141 |
+
[1] Agarwal, A., Beygelzimer, A., Dudík, M., Langford, J., and Wallach, H. A Reductions Approach to Fair Classification. In Proceedings of the 35th International Conference on Machine Learning, Proceedings of Machine Learning Research. PMLR, 2018.
[2] Agresti, A. An introduction to categorical data analysis. John Wiley & Sons, 2018.
[3] Allyn, J., Ferdynus, C., Bohrer, M., Dalban, C., Valance, D., and Allou, N. Simplified acute physiology score ii as predictor of mortality in intensive care units: a decision curve analysis. PloS one, 11(10): e0164828, 2016.
[4] Arlot, S. and Celisse, A. A survey of cross-validation procedures for model selection. Statistics surveys, 4: 40-79, 2010.
[5] Balcan, M.-F., Dick, T., Noothigattu, R., and Procaccia, A. D. Envy-free classification. arXiv preprint arXiv:1809.08700, 2018.
[6] Bansal, G., Gefen, D., et al. The impact of personal dispositions on information sensitivity, privacy concern and trust in disclosing health information online. Decision support systems, 49(2):138-150, 2010.
[7] Baron, R. M. and Kenny, D. A. The moderator-mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of personality and social psychology, 51(6):1173, 1986.
[8] Beauchamp, T. L., Childress, J. F., et al. Principles of Biomedical Ethics. Oxford University Press, USA, 2001.
[9] Bertsimas, D. and Kallus, N. From predictive to prescriptive analytics. Management Science, 66(3): 1025-1044, 2020.
[10] Bertsimas, D., Dunn, J., and Mundru, N. Optimal prescriptive trees. INFORMS Journal on Optimization, 1 (2):164-183, 2019.
[11] Bien, J., Taylor, J., and Tibshirani, R. A lasso for hierarchical interactions. Annals of statistics, 41(3):1111, 2013.
[12] Biggs, M., Sun, W., and Ettl, M. Model distillation for revenue optimization: Interpretable personalized pricing. arXiv preprint arXiv:2007.01903, 2020.
[13] Blaha, M. J. The critical importance of risk score calibration: time for transformative approach to risk score validation?, 2016.
[14] Celis, L. E., Huang, L., Keswani, V., and Vishnoi, N. K. Classification with fairness constraints: A meta-algorithm with provable guarantees. In Proceedings of the Conference on Fairness, Accountability, and Transparency, pp. 319-328. ACM, 2019.
[15] Federal Trade Commission. Equal credit opportunity act. https://www.fdic.gov/resources/supervision-and-examinations/consumer-compliance-examination-manual/documents/5/v-7-1.pdf, 2020.
[16] Corbett-Davies, S. and Goel, S. The measure and mismeasure of fairness: A critical review of fair machine learning. arXiv preprint arXiv:1808.00023, 2018.
[17] Detrano, R., Janosi, A., Steinbrunn, W., Pfisterer, M., Schmid, J.-J., Sandhu, S., Guppy, K. H., Lee, S., and Froelicher, V. International application of a new probability algorithm for the diagnosis of coronary artery disease. The American journal of cardiology, 64(5):304-310, 1989.
[18] DiCiccio, T. J. and Efron, B. Bootstrap confidence intervals. Statistical science, pp. 189-212, 1996.
[19] Dietterich, T. G. Approximate statistical tests for comparing supervised classification learning algorithms. Neural computation, 10(7):1895-1923, 1998.
[20] Do, V., Corbett-Davies, S., Atif, J., and Usunier, N. Online certification of preference-based fairness for personalized recommender systems. arXiv preprint arXiv:2104.14527, 2021.
[21] Dunn, O. J. Multiple comparisons among means. Journal of the American statistical association, 56(293): 52-64, 1961.
[22] Dwork, C., Hardt, M., Pitassi, T., Reingold, O., and Zemel, R. Fairness through awareness. In Proceedings of the 3rd innovations in theoretical computer science conference, pp. 214-226, 2012.
[23] Dwork, C., Immorlica, N., Kalai, A. T., and Leiserson, M. Decoupled classifiers for group-fair and efficient machine learning. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency, volume 81 of Proceedings of Machine Learning Research, pp. 119-133. PMLR, 2018.
[24] Elmachtoub, A. N., Gupta, V., and Hamilton, M. The value of personalized pricing. Available at SSRN 3127719, 2018.
[25] Eneanya, N. D., Yang, W., and Reese, P. P. Reconsidering the consequences of using race to estimate kidney function. Jama, 322(2):113-114, 2019.
[26] Eusebi, P. Diagnostic accuracy measures. Cerebrovascular Diseases, 36(4):267-272, 2013.
[27] Fan, H. and Poole, M. S. What is personalization? perspectives on the design and implementation of personalization in information systems. Journal of Organizational Computing and Electronic Commerce, 16(3-4):179-202, 2006.
[28] Feldman, M., Friedler, S. A., Moeller, J., Scheidegger, C., and Venkatasubramanian, S. Certifying and removing disparate impact. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 259-268. ACM, 2015.
[29] Finlayson, S. G., Subbaswamy, A., Singh, K., Bowers, J., Kupke, A., Zittrain, J., Kohane, I. S., and Saria, S. The clinician and dataset shift in artificial intelligence. The New England journal of medicine, 385(3): 283-286, 2021.
[30] Gneiting, T. and Raftery, A. E. Strictly proper scoring rules, prediction, and estimation. Journal of the American statistical Association, 102(477):359-378, 2007.
[31] Gulshan, V., Peng, L., Coram, M., Stumpe, M. C., Wu, D., Narayanaswamy, A., Venugopalan, S., Widner, K., Madams, T., Cuadros, J., et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. Jama, 316(22):2402-2410, 2016.
[32] Guo, L. L., Pfohl, S. R., Fries, J., Posada, J., Fleming, S. L., Aftandilian, C., Shah, N., and Sung, L. Systematic review of approaches to preserve machine learning performance in the presence of temporal dataset shift in clinical medicine. Applied Clinical Informatics, 12(04):808-815, 2021.
[33] Hardt, M., Price, E., Srebro, N., et al. Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems, pp. 3315-3323, 2016.
[34] Harutyunyan, H., Khachatrian, H., Kale, D. C., Ver Steeg, G., and Galstyan, A. Multitask learning and benchmarking with clinical time series data. Scientific data, 6(1):1-18, 2019.
[35] Hébert-Johnson, Ú., Kim, M., Reingold, O., and Rothblum, G. Multicalibration: Calibration for the (computationally-identifiable) masses. In Proceedings of the International Conference on Machine Learning, pp. 1944-1953, 2018.
[36] Hu, L. and Chen, Y. Fair Classification and Social Welfare. arXiv preprint arXiv:1905.00147, 2019.
[37] Jaques, N., Taylor, S., Nosakhare, E., Sano, A., and Picard, R. Multi-task learning for predicting health, stress, and happiness. Neural Information Processing Systems (NeurIPS) Workshop on Machine Learning for Healthcare, 2016.
[38] Johnson, A. E., Pollard, T. J., Shen, L., Li-Wei, H. L., Feng, M., Ghassemi, M., Moody, B., Szolovits, P., Celi, L. A., and Mark, R. G. Mimic-iii, a freely accessible critical care database. Scientific data, 3(1):1-9, 2016.
[39] Jovanovic, B. Truthful disclosure of information. The Bell Journal of Economics, pp. 36-44, 1982.
[40] Kearns, M., Neel, S., Roth, A., and Wu, Z. S. Preventing fairness gerrymandering: Auditing and learning for subgroup fairness. In Proceedings of the 35th International Conference on Machine Learning, Proceedings of Machine Learning Research. PMLR, 2018.
[41] Kent, D. M., Paulus, J. K., Van Klaveren, D., D'Agostino, R., Goodman, S., Hayward, R., Ioannidis, J. P., Patrick-Lake, B., Morton, S., Pencina, M., et al. The predictive approaches to treatment effect heterogeneity (path) statement. Annals of internal medicine, 172(1):35-45, 2020.
[42] Kessler, R. C., Adler, L., Ames, M., Demler, O., Faraone, S., Hiripi, E., Howes, M. J., Jin, R., Secnik, K., Spencer, T., et al. The world health organization adult adhd self-report scale (asrs): a short screening scale for use in the general population. Psychological medicine, 35(2):245-256, 2005.
[43] Kim, M. P., Korolova, A., Rothblum, G. N., and Yona, G. Preference-informed fairness. arXiv preprint arXiv:1904.01793, 2019.
[44] Kiviat, B. The moral limits of predictive practices: The case of credit-based insurance scores. American Sociological Review, 84(6):1134-1158, 2019.
[45] Kiviat, B. Which data fairly differentiate? american views on the use of personal data in two market settings. Sociological Science, 8:26-47, 2021.
[46] Kleinberg, J., Ludwig, J., Mullainathan, S., and Rambachan, A. Algorithmic Fairness. In AEA Papers and Proceedings, volume 108, pp. 22-27, 2018.
[47] Kravitz, R. L., Duan, N., and Braslow, J. Evidence-based medicine, heterogeneity of treatment effects, and the trouble with averages. The Milbank Quarterly, 82(4):661-687, 2004.
[48] Le Gall, J.-R., Lemeshow, S., and Saulnier, F. A new simplified acute physiology score (saps ii) based on a european/north american multicenter study. Jama, 270(24):2957-2963, 1993.
[49] Lim, M. and Hastie, T. Learning interactions via hierarchical group-lasso regularization. Journal of Computational and Graphical Statistics, 24(3):627-654, 2015.
[50] Lipton, Z., McAuley, J., and Chouldechova, A. Does mitigating ml's impact disparity require treatment disparity? In Advances in Neural Information Processing Systems 31, pp. 8135-8145, 2018.
[51] Martinez, N., Bertran, M., and Sapiro, G. Fairness with minimal harm: A pareto-optimal approach for healthcare. arXiv preprint arXiv:1911.06935, 2019.
[52] Martinez, N., Bertran, M., and Sapiro, G. Minimax pareto fairness: A multi objective perspective. In International Conference on Machine Learning, pp. 6755-6764. PMLR, 2020.
[53] Metaxa, D., Park, J. S., Robertson, R. E., Karahalios, K., Wilson, C., Hancock, J., Sandvig, C., et al. Auditing algorithms: Understanding algorithmic systems from the outside in. Foundations and Trends® in Human-Computer Interaction, 14(4):272-344, 2021.
[54] Naeini, M. P., Cooper, G., and Hauskrecht, M. Obtaining well calibrated probabilities using bayesian binning. In Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015.
[55] Narasimhan, H. Learning with complex loss functions and constraints. In International Conference on Artificial Intelligence and Statistics, pp. 1646-1654, 2018.
[56] Paulus, J. K., Wessler, B. S., Lundquist, C., Lai, L. L., Raman, G., Lutz, J. S., and Kent, D. M. Field synopsis of sex in clinical prediction models for cardiovascular disease. Circulation: Cardiovascular Quality and Outcomes, 9(2_suppl_1):S8-S15, 2016.
[57] Perez-Rodriguez, J. and de la Fuente, A. Now is the time for a postracial medicine: Biomedical research, the national institutes of health, and the perpetuation of scientific racism. The American Journal of Bioethics, 17(9):36-47, 2017.
[58] Pfohl, S., Marafino, B., Coulet, A., Rodriguez, F., Palaniappan, L., and Shah, N. H. Creating fair models of atherosclerotic cardiovascular disease risk. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, pp. 271-278, 2019.
[59] Platt, J. et al. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Advances in large margin classifiers, 10(3):61-74, 1999.
[60] Pollard, T. J., Johnson, A. E., Raffa, J. D., Celi, L. A., Mark, R. G., and Badawi, O. The eicu collaborative research database, a freely available multi-center database for critical care research. Scientific data, 5(1): 1-13, 2018.
[61] Quiñonero-Candela, J., Sugiyama, M., Schwaighofer, A., and Lawrence, N. D. Dataset shift in machine learning. Mit Press, 2008.
[62] Savage, L. J. Elicitation of personal probabilities and expectations. Journal of the American Statistical Association, 66(336):783-801, 1971.
[63] Steyerberg, E. W. et al. Clinical prediction models. Springer, 2019.
[64] Suresh, H., Hunt, N., Johnson, A., Celi, L. A., Szolovits, P., and Ghassemi, M. Clinical intervention prediction and understanding with deep neural networks. In Machine Learning for Healthcare Conference, pp. 322-337. PMLR, 2017.
[65] Taylor, S., Jaques, N., Nosakhare, E., Sano, A., and Picard, R. Personalized multitask learning for predicting tomorrow's mood, stress, and health. IEEE Transactions on Affective Computing, 11(2):200-213, 2017.
[66] Ustun, B., Westover, M. B., Rudin, C., and Bianchi, M. T. Clinical prediction models for sleep apnea: the importance of medical history over symptoms. Journal of Clinical Sleep Medicine, 12(02):161-168, 2016.
[67] Ustun, B., Adler, L. A., Rudin, C., Faraone, S. V., Spencer, T. J., Berglund, P., Gruber, M. J., and Kessler, R. C. The world health organization adult attention-deficit/hyperactivity disorder self-report screening scale for dsm-5. Jama psychiatry, 74(5):520-527, 2017.
[68] Ustun, B., Liu, Y., and Parkes, D. Fairness without harm: Decoupled classifiers with preference guarantees. In International Conference on Machine Learning, pp. 6373-6382, 2019.
[69] Vaughan, G., Aseltine, R., Chen, K., and Yan, J. Efficient interaction selection for clustered data via stagewise generalized estimating equations. Statistics in Medicine, 39(22):2855-2868, 2020.
[70] Viviano, D. and Bradic, J. Fair policy targeting. arXiv preprint arXiv:2005.12395, 2020.
[71] Vyas, D. A., Eisenstein, L. G., and Jones, D. S. Hidden in plain sight-reconsidering the use of race correction in clinical algorithms, 2020.
[72] Wang, H., Ustun, B., and Calmon, F. P. Repairing without retraining: Avoiding disparate impact with counterfactual distributions. In Proceedings of the 36th International Conference on Machine Learning, Proceedings of Machine Learning Research. PMLR, 2019.
[73] Yala, A., Lehman, C., Schuster, T., Portnoi, T., and Barzilay, R. A deep learning mammography-based model for improved breast cancer risk prediction. Radiology, 292(1):60-66, 2019.
[74] Zafar, M. B., Valera, I., Gomez Rodriguez, M., and Gummadi, K. P. Fairness beyond disparate treatment and disparate impact: Learning classification without disparate mistreatment. In Proceedings of the 26th International Conference on World Wide Web, pp. 1171-1180. International World Wide Web Conferences Steering Committee, 2017.
[75] Zafar, M. B., Valera, I., Rodriguez, M., Gummadi, K., and Weller, A. From parity to preference-based notions of fairness in classification. In Advances in Neural Information Processing Systems, pp. 228-238, 2017.
[76] Zafar, M. B., Valera, I., Rogriguez, M. G., and Gummadi, K. P. Fairness Constraints: Mechanisms for Fair Classification. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, volume 54 of Proceedings of Machine Learning Research, pp. 962-970. PMLR, 20-22 Apr 2017.
[77] Zhan, Q., Sierra, E., Malmsten, J., Ye, Z., Rosenwaks, Z., and Zaninovic, N. Blastocyst score, a blastocyst quality ranking tool, is a predictor of blastocyst ploidy and implantation potential. F&S Reports, 1(2): 133-141, 2020.
[78] Zhu, X., Yao, J., and Huang, J. Deep convolutional neural network for survival analysis with pathological images. In 2016 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pp. 544-547. IEEE, 2016.
## Checklist
1. For all authors...
1. Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes]
2. Did you describe the limitations of your work? [Yes], see Section ??
3. Did you discuss any potential negative societal impacts of your work? [Yes], see Section ??.
4. Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes].
2. If you are including theoretical results...
1. Did you state the full set of assumptions of all theoretical results? [N/A]
2. Did you include complete proofs of all theoretical results? [N/A]
3. If you ran experiments...
1. Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes]
2. Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes], see Appendix G
3. Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes]
4. Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes], see Appendix G
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
1. If your work uses existing assets, did you cite the creators? [Yes]
2. Did you mention the license of the assets? [N/A]
3. Did you include any new assets either in the supplemental material or as a URL? [No]
4. Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A]
5. Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [Yes]
5. If you used crowdsourcing or conducted research with human subjects...
1. Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
2. Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
3. Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]

## A Notation
Below we provide a table that consolidates and describes the notation used throughout the paper.
<table><tr><td>Symbol</td><td>Meaning</td></tr><tr><td>${\mathbf{x}}_{i} = \left( {{x}_{i,1},{x}_{i,2},\ldots ,{x}_{i, d}}\right)$</td><td>feature vector of example $i$</td></tr><tr><td>${y}_{i} \in \mathcal{Y}$</td><td>label of example $i$</td></tr><tr><td>${\mathbf{g}}_{i} = \left( {{g}_{i,1},{g}_{i,2},\ldots ,{g}_{i, k}}\right)$</td><td>group membership of example $i$</td></tr><tr><td>$\mathcal{G} = {\mathcal{G}}_{1} \times {\mathcal{G}}_{2} \times \ldots \times {\mathcal{G}}_{k}$</td><td>space of group attributes</td></tr><tr><td>$m = \left| \mathcal{G}\right|$</td><td>number of intersectional groups</td></tr><tr><td>${n}_{\mathbf{g}} \mathrel{\text{:=}} \sum 1\left\lbrack {{\mathbf{g}}_{i} = \mathbf{g}}\right\rbrack$</td><td>number of examples of group $\mathbf{g} \in \mathcal{G}$</td></tr><tr><td>${n}_{\mathbf{g}}^{ + } \mathrel{\text{:=}} \sum 1\left\lbrack {{\mathbf{g}}_{i} = \mathbf{g},{y}_{i} = + 1}\right\rbrack$</td><td>number of examples of group $\mathbf{g} \in \mathcal{G}$ with ${y}_{i} = + 1$</td></tr><tr><td>${n}_{\mathbf{g}}^{ - } \mathrel{\text{:=}} \sum 1\left\lbrack {{\mathbf{g}}_{i} = \mathbf{g},{y}_{i} = - 1}\right\rbrack$</td><td>number of examples of group $\mathbf{g} \in \mathcal{G}$ with ${y}_{i} = - 1$</td></tr><tr><td>${\mathcal{H}}_{0}$</td><td>hypothesis class of generic models</td></tr><tr><td>$\mathcal{H}$</td><td>hypothesis class of personalized models</td></tr><tr><td>${h}_{0} : \mathcal{X} \rightarrow \mathcal{Y}$</td><td>generic model</td></tr><tr><td>$h : \mathcal{X} \times \mathcal{G} \rightarrow \mathcal{Y}$</td><td>personalized model</td></tr><tr><td>${h}_{\mathbf{g}} \mathrel{\text{:=}} h\left( {\cdot ,\mathbf{g}}\right)$</td><td>personalized model with group membership fixed to $\mathbf{g}$ (i.e., reported truthfully as $\mathbf{g}$)</td></tr><tr><td>${R}_{\mathbf{g}}\left( {h}_{{\mathbf{g}}^{\prime }}\right)$</td><td>true risk of model $h$ for group $\mathbf{g}$ if they report ${\mathbf{g}}^{\prime }$</td></tr><tr><td>${\widehat{R}}_{\mathbf{g}}\left( {h}_{{\mathbf{g}}^{\prime }}\right)$</td><td>empirical risk of model $h$ for group $\mathbf{g}$ if they report ${\mathbf{g}}^{\prime }$</td></tr><tr><td>${\Delta }_{\mathbf{g}}\left( {h,{h}^{\prime }}\right)$</td><td>gain (i.e., reduction in true risk) for group $\mathbf{g}$ when using $h$ rather than ${h}^{\prime }$</td></tr><tr><td>${\Delta }_{\mathbf{g}}\left( {{h}_{\mathbf{g}},{h}_{0}}\right)$</td><td>rationality gap for group $\mathbf{g}$ (performance gain when reporting $\mathbf{g}$ as opposed to concealing it)</td></tr><tr><td>${\Delta }_{\mathbf{g}}\left( {{h}_{\mathbf{g}},{h}_{{\mathbf{g}}^{\prime }}}\right)$</td><td>envy-freeness gap for group $\mathbf{g}$ (performance gain when reporting $\mathbf{g}$ as opposed to misreporting it as ${\mathbf{g}}^{\prime }$)</td></tr><tr><td colspan="2">Table 2: Notation</td></tr></table>
## Preliminaries
We start with a dataset of $n$ examples ${\left( {\mathbf{x}}_{i},{y}_{i},{\mathbf{g}}_{i}\right) }_{i = 1}^{n}$ , where each example consists of a feature vector ${\mathbf{x}}_{i} = \left\lbrack {{x}_{i,1},\ldots ,{x}_{i, d}}\right\rbrack \in {\mathbb{R}}^{d}$ , a label ${y}_{i} \in \mathcal{Y}$ , and a vector of $k$ categorical group attributes ${\mathbf{g}}_{i} = \left\lbrack {{\mathbf{g}}_{i,1},\ldots ,{\mathbf{g}}_{i, k}}\right\rbrack \in {\mathcal{G}}_{1} \times \ldots \times {\mathcal{G}}_{k} = \mathcal{G}$ - e.g., ${\mathbf{g}}_{i} =$ [female, age $\geq {60}$ , blood_type = 0+]. We refer to ${\mathbf{g}}_{i}$ as the group membership of $i$ and to the set $\left\{ {i \mid {\mathbf{g}}_{i} = \mathbf{g}}\right\}$ as group $\mathbf{g}$ . We let ${n}_{\mathbf{g}} \mathrel{\text{:=}} \left| \left\{ {i \mid {\mathbf{g}}_{i} = \mathbf{g}}\right\} \right|$ denote the number of examples in group $\mathbf{g}$ , and let $m \mathrel{\text{:=}} \left| \mathcal{G}\right|$ denote the number of intersectional groups.
We use the data to fit a personalized model $h : \mathcal{X} \times \mathcal{G} \rightarrow \mathcal{Y}$ that uses group attributes, and a generic model ${h}_{0} : \mathcal{X} \rightarrow \mathcal{Y}$ that does not. We fit both models via empirical risk minimization with a loss function $\ell : \mathcal{Y} \times \mathcal{Y} \rightarrow {\mathbb{R}}_{ + }$, using $\widehat{R}\left( h\right)$ and $R\left( h\right)$ to denote the empirical risk and true risk, respectively. We assume that the personalized and generic models represent the best models trained on datasets with group attributes ${\left( {\mathbf{x}}_{i},{y}_{i},{\mathbf{g}}_{i}\right) }_{i = 1}^{n}$ and without them ${\left( {\mathbf{x}}_{i},{y}_{i}\right) }_{i = 1}^{n}$ :
$$
h \in \mathop{\operatorname{argmin}}\limits_{{h \in \mathcal{H}}}\widehat{R}\left( h\right) \;{h}_{0} \in \mathop{\operatorname{argmin}}\limits_{{h \in {\mathcal{H}}_{0}}}\widehat{R}\left( h\right)
$$
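As a concrete sketch of this setup (illustrative only: the synthetic data, logistic regression model class, and feature construction below are our own choices, not the paper's experimental configuration), one can fit both models and compare their empirical risks per group:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 2))
g = rng.integers(0, 2, size=n)  # a single binary group attribute

# Heterogeneous population: the effect of x0 flips sign across groups.
logits = np.where(g == 1, X[:, 0], -X[:, 0]) + X[:, 1]
y = (logits + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Generic model h0: trained without the group attribute.
h0 = LogisticRegression().fit(X, y)

# Personalized model h: trained with the group attribute
# (plus a group-feature interaction so the sign flip is learnable).
Xg = np.hstack([X, g[:, None], g[:, None] * X[:, :1]])
h = LogisticRegression().fit(Xg, y)

# Empirical 0-1 risk of each model on each group.
for grp in (0, 1):
    mask = g == grp
    r0 = np.mean(h0.predict(X[mask]) != y[mask])
    rp = np.mean(h.predict(Xg[mask]) != y[mask])
    print(f"group {grp}: R_hat(h0)={r0:.3f}  R_hat(h)={rp:.3f}")
```

Because the generic model cannot represent the group-dependent sign flip, it is forced to average over the two groups, while the personalized model fits each group's effect.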
We evaluate the gains of personalization for a personalized model $h$ for each group. As part of this evaluation, we examine how the performance of $h$ for group $\mathbf{g}$ changes when they are assigned predictions that are personalized for another group ${\mathbf{g}}^{\prime } -$ i.e., the predictions that group $\mathbf{g}$ would receive by "misreporting" their group membership as ${\mathbf{g}}^{\prime }$ . We represent this formally by using ${h}_{{\mathbf{g}}^{\prime }} \mathrel{\text{:=}} h\left( {\cdot ,{\mathbf{g}}^{\prime }}\right)$ to denote a personalized model where group attributes are fixed to ${\mathbf{g}}^{\prime }$ . Given a personalized model $h$ , we measure its empirical risk and true risk for group $\mathbf{g}$ when they report group membership as ${\mathbf{g}}^{\prime }$ as:
$$
{\widehat{R}}_{\mathbf{g}}\left( {h}_{{\mathbf{g}}^{\prime }}\right) \mathrel{\text{:=}} \frac{1}{{n}_{\mathbf{g}}}\mathop{\sum }\limits_{{i : {\mathbf{g}}_{i} = \mathbf{g}}}\ell \left( {h\left( {{\mathbf{x}}_{i},{\mathbf{g}}^{\prime }}\right) ,{y}_{i}}\right) \;{R}_{\mathbf{g}}\left( {h}_{{\mathbf{g}}^{\prime }}\right) \mathrel{\text{:=}} \mathbb{E}\left\lbrack {\ell \left( {h\left( {\mathbf{x},{\mathbf{g}}^{\prime }}\right) , y}\right) \mid \mathcal{G} = \mathbf{g}}\right\rbrack .
$$
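The empirical quantity $\widehat{R}_{\mathbf{g}}(h_{\mathbf{g}'})$ can be computed by fixing the reported group in the model's input. A minimal sketch (the toy model and data below are purely illustrative):

```python
import numpy as np

class ToyModel:
    """Stand-in for a fitted personalized classifier h(x, g).
    Illustrative rule: predict 1 iff x0 + reported group > 0."""
    def predict(self, Xg):
        return (Xg[:, 0] + Xg[:, -1] > 0).astype(int)

def empirical_risk(h, X, y, g, group, reported):
    """Empirical 0-1 risk of group `group` when all of its members
    report membership `reported` to the personalized model `h`."""
    mask = g == group
    X_rep = np.hstack([X[mask], np.full((mask.sum(), 1), reported)])
    return float(np.mean(h.predict(X_rep) != y[mask]))

h = ToyModel()
X = np.array([[-0.5], [0.5], [-1.5], [1.5]])
y = np.array([1, 1, 0, 0])
g = np.array([1, 1, 0, 0])

# Risk of group 1 when reporting truthfully vs. misreporting as group 0.
print(empirical_risk(h, X, y, g, group=1, reported=1))
print(empirical_risk(h, X, y, g, group=1, reported=0))
```

Sweeping `reported` over all groups recovers the full set of risks $\widehat{R}_{\mathbf{g}}(h_{\mathbf{g}'})$ used in the gain comparisons.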
We assume that groups prefer models that assign more accurate predictions as measured in terms of true risk. We express the preferences of group $\mathbf{g}$ between $h$ and ${h}^{\prime }$ using the gain measure ${\Delta }_{\mathbf{g}}\left( {h,{h}^{\prime }}\right) \mathrel{\text{:=}} {R}_{\mathbf{g}}\left( h\right) - {R}_{\mathbf{g}}\left( {h}^{\prime }\right) .$
## B Related Work
Our work is related to several streams of research in algorithmic fairness. We propose to check the quality of personalization using preference-based notions of fairness [75, 68, 43, 70, 20]. We focus on intersectional groups [c.f., 40, 35], which are more granular than those considered in the literature yet large enough to estimate performance [c.f., 22, 5]. We study models that use group attributes to assign more accurate predictions over a heterogeneous population. Several works highlight the need to account for group membership [75, 23, 16, 46, 50, 72], observing that it is otherwise impossible for a model to achieve parity - i.e., to perform equally well for all groups [33, 74, 76, 28, 1, 55, 14]. Parity-based methods are ill-suited for personalization since they equalize performance by reducing performance for groups for whom the model performs well, rather than improving performance for groups for whom the model performs poorly [50, 36, 58, 51, 52].
We study personalization in models that encode personal characteristics through categorical attributes, which are widely used across medicine, consumer finance, and criminal justice (see use cases in §2). In medicine, for example, many models are fit using logistic regression with a one-hot encoding of categorical attributes $\left\lbrack {{63},{71},{25}}\right\rbrack$ . Existing work that evaluates the gains of personalization often does so at the population level rather than at the level of the groups who provide personal data [37, 65]. This population-level focus characterizes technical work in this area: recent methods use categorical attributes to improve population-level performance by accounting for heterogeneity - e.g., by automatically including higher-order interaction effects $\left\lbrack {{11},{49},{69}}\right\rbrack$ or recursively partitioning data $\left\lbrack {{24},{12},{10},9}\right\rbrack$ .

## C Truthful Self-Reporting
Remark 2 (Truthful Self-Reporting). Consider a prediction task where each person reports their group membership to a personalized model. Let ${\mathbf{r}}_{i}$ denote the self-reported group membership of person i where:
$$
{\mathbf{r}}_{i} = {\mathbf{g}}_{i} \Leftrightarrow i \text{ reports truthfully} \qquad {\mathbf{r}}_{i} \in \mathcal{G} \smallsetminus \left\{ {\mathbf{g}}_{i}\right\} \Leftrightarrow i \text{ misreports} \qquad {\mathbf{r}}_{i} = \; ? \Leftrightarrow i \text{ withholds}
$$
If a personalized model $h : \mathcal{X} \times \mathcal{G} \rightarrow \mathcal{Y}$ guarantees the fair use of a group attribute $\mathcal{G}$, then each person would opt to report truthfully, as this strategy maximizes their expected performance:
$$
{\mathbf{g}}_{i} \in \mathop{\operatorname{argmin}}\limits_{{{\mathbf{r}}_{i} \in \mathcal{G}\cup \{ ?\} }}\mathbb{E}\left\lbrack {\ell \left( {h\left( {\mathbf{x},{\mathbf{r}}_{i}}\right) ,{y}_{i}}\right) \mid \mathcal{G} = {\mathbf{g}}_{i}}\right\rbrack .
$$
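Whether truthful reporting is in fact a best response can be checked empirically by evaluating the risk of every possible report, with withholding (`?`) mapped to the generic model. A toy sketch (the models and data here are illustrative, not from the paper):

```python
import numpy as np

# Toy personalized model with the report fixed: predict 1 iff x0 + report > 0.
def h(X, report):
    return (X[:, 0] + report > 0).astype(int)

# Toy generic model used when group membership is withheld.
def h0(X):
    return (X[:, 0] > 0).astype(int)

def best_report(X, y, reports):
    """Return the report (or '?' for withholding) with lowest 0-1 risk,
    together with the risk of every option."""
    risks = {r: float(np.mean(h(X, r) != y)) for r in reports}
    risks["?"] = float(np.mean(h0(X) != y))
    return min(risks, key=risks.get), risks

# Members of a group whose true label is always +1.
X = np.array([[-0.5], [0.5], [0.2], [-0.2]])
y = np.array([1, 1, 1, 1])

choice, risks = best_report(X, y, reports=[0, 1])
print(choice, risks)
```

If the lowest-risk option is the group's own membership, truthful reporting is incentive-compatible for that group; otherwise the model exhibits a fair use violation.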
Truthful reporting incentives reflect basic principles regarding consent in data privacy rights. In effect, a personalized model that exhibits a fair use violation for a specific group uses their group membership in a way that is coercive. If a group were allowed to report their personal information to the model at prediction time, they would opt to withhold or misreport this information. With respect to Definition 1, rationality ensures that a majority of $\mathbf{g}$ prefer to report group membership rather than withhold it. Envy-freeness ensures that a majority of group $\mathbf{g}$ prefer to report group membership rather than misreport it.
## D Testing & Verification
Point estimates of the gains of personalization are not reliable, especially for small groups. In a prediction task where a personalized model performs 5% worse than a generic model, a 5% drop could represent 5 mistakes for a group with 100 samples, or 200 mistakes for a group with 4000 samples. Measuring the statistical significance of gains can help us distinguish between such cases and inform our use of group attributes. In some applications, a significant fair use violation could warrant the need for a new model. In others, we may wish to ensure a significant gain to use a group attribute in the first place.
In practice, we check for a rationality violation using a one-sided hypothesis test of the form:
$$
{H}_{0} : R\left( {h}_{0}\right) - R\left( {h}_{\mathbf{g}}\right) \leq 0\;{H}_{A} : R\left( {h}_{0}\right) - R\left( {h}_{\mathbf{g}}\right) > 0
$$
Here, the null hypothesis ${H}_{0}$ assumes that group $\mathbf{g}$ prefers ${h}_{\mathbf{g}}$ to ${h}_{0}$ by default. Thus, we reject ${H}_{0}$ when there is enough evidence to support a rationality violation for $\mathbf{g}$ in a held-out dataset.
We can use an inverted setup where ${H}_{A} : R\left( {h}_{0}\right) - R\left( {h}_{\mathbf{g}}\right) < 0$ to check for gains from personalization. The testing procedure varies based on the performance metric used to evaluate the gains of personalization. In general, we can apply a bootstrap hypothesis test [18]. In some cases, there exist more powerful tests for specific performance metrics [see e.g., the McNemar test for accuracy 19]. We can repeat these tests across multiple groups to check for envy-freeness, or to check for all conditions in Definition 1. In the latter regime, we can control the probability of a false discovery using a standard Bonferroni correction [21], which is valid even for non-independent tests.
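A bootstrap version of this test can be sketched as follows (a simplified illustration, not the authors' implementation; the held-out loss vectors are simulated): resample the per-example losses, recompute the mean risk difference, and report a one-sided p-value, with the Bonferroni step dividing the significance level by the number of tests.

```python
import numpy as np

def bootstrap_pvalue(loss_h0, loss_hg, n_boot=10_000, seed=0):
    """One-sided bootstrap p-value for H_A: R(h0) - R(hg) > 0,
    given per-example held-out losses of each model."""
    rng = np.random.default_rng(seed)
    diff = np.asarray(loss_h0, dtype=float) - np.asarray(loss_hg, dtype=float)
    stats = np.array([
        rng.choice(diff, size=len(diff), replace=True).mean()
        for _ in range(n_boot)
    ])
    # Fraction of resampled mean differences that fall at or below zero.
    return float(np.mean(stats <= 0.0))

# Simulated held-out 0-1 losses for one group: hg errs less often than h0.
rng = np.random.default_rng(1)
loss_h0 = rng.binomial(1, 0.35, size=500)
loss_hg = rng.binomial(1, 0.15, size=500)

p = bootstrap_pvalue(loss_h0, loss_hg)
alpha, n_tests = 0.05, 8           # e.g., one test per intersectional group
print(p, p < alpha / n_tests)      # Bonferroni-adjusted decision
```

Calling the function once per group and comparing each p-value against $\alpha / m$ implements the multiple-testing procedure described above.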
## E Failure Modes of Personalization
In this Appendix, we describe additional mechanisms that lead personalized models to exhibit fair use violations. The mechanisms below reflect failure modes that arise in later stages of the machine learning pipeline, and that are more difficult to address through interventions.
### E.1 Model Selection
### E.2 ERM with a Surrogate Loss Function
Consider a setting where we want a personalized model that maximizes classification accuracy, i.e., one that minimizes the 0-1 loss. If we fit this classifier using a linear SVM, e.g., by solving an ERM problem that optimizes the hinge loss, the approximation error between the 0-1 loss and the hinge loss can produce a fair use violation (see Figure 5). This example is specifically designed to avoid fair use violations that stem from model misspecification.
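The mechanism in Figure 5 can be reproduced in miniature. All numbers below are hypothetical, chosen only to make the effect visible: an extreme outlier inflates the hinge loss of the accurate classifier, so the surrogate prefers a classifier that makes far more mistakes under the 0-1 loss.

```python
import numpy as np

# Toy 1D data: 10 positives at x=1, 10 negatives at x=-1,
# plus one extreme positive outlier at x=-100 (hypothetical numbers).
x = np.array([1.0] * 10 + [-1.0] * 10 + [-100.0])
y = np.array([+1] * 10 + [-1] * 10 + [+1])

def hinge_loss(w, b):
    # Average hinge loss of the linear score f(x) = w*x + b.
    return np.maximum(0.0, 1.0 - y * (w * x + b)).mean()

def zero_one_loss(w, b):
    # Average 0-1 loss of the classifier sign(w*x + b).
    return (np.sign(w * x + b) != y).mean()

# Classifier A: the natural separator sign(x).
# Classifier B: a low-norm classifier dragged toward the outlier.
for name, (w, b) in [("A", (1.0, 0.0)), ("B", (0.02, 0.5))]:
    print(name, round(hinge_loss(w, b), 3), round(zero_one_loss(w, b), 3))
# Hinge loss prefers B (1.052 < 4.810), yet B errs on 11/21 points vs. 1/21 for A.
```

Optimizing the 0-1 loss directly would select classifier A, which is why the figure's baseline avoids the violation.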
<table><tr><td colspan="3" rowspan="2">Data</td><td colspan="2">Generic ${h}_{0} = {h}_{0}\left( {x}_{1}\right) = {h}_{0}\left( {x}_{2}\right)$</td></tr><tr><td>Pred.</td><td>Mistakes</td></tr><tr><td>$\left( {g,{x}_{1},{x}_{2}}\right)$</td><td>${n}^{ + }$</td><td>${n}^{ - }$</td><td>${h}_{0}$</td><td>$R\left( {h}_{0}\right)$</td></tr><tr><td>(0,0,0)</td><td>0</td><td>30</td><td>-</td><td>0</td></tr><tr><td>(0,0,1)</td><td>0</td><td>0</td><td>-</td><td>0</td></tr><tr><td>(0,1,0)</td><td>0</td><td>20</td><td>-</td><td>0</td></tr><tr><td>(0,1,1)</td><td>0</td><td>0</td><td>-</td><td>0</td></tr><tr><td>(1,0,0)</td><td>25</td><td>0</td><td>-</td><td>25</td></tr><tr><td>(1,0,1)</td><td>0</td><td>0</td><td>-</td><td>0</td></tr><tr><td>(1,1,0)</td><td>15</td><td>0</td><td>-</td><td>15</td></tr><tr><td>(1,1,1)</td><td>0</td><td>0</td><td>-</td><td>0</td></tr><tr><td>Total</td><td>40</td><td>50</td><td/><td>40</td></tr><tr><td>$g = 0$</td><td>0</td><td>50</td><td/><td>0</td></tr><tr><td>$g = 1$</td><td>40</td><td>0</td><td/><td>40</td></tr></table>
Personalized Model (Selected) Personalized Model (Discarded)
<table><tr><td colspan="3">${h}_{S}\left( {{x}_{1}, g}\right)$</td></tr><tr><td>Pred.</td><td>Error</td><td>Gain</td></tr><tr><td>${h}_{S}$</td><td>$R\left( {h}_{S}\right)$</td><td>$\Delta {R}_{\mathbf{g}}\left( {{h}_{0},{h}_{S}}\right)$</td></tr><tr><td>-</td><td>0</td><td>0</td></tr><tr><td>-</td><td>0</td><td>0</td></tr><tr><td>+</td><td>20</td><td>-20</td></tr><tr><td>+</td><td>0</td><td>0</td></tr><tr><td>+</td><td>0</td><td>25</td></tr><tr><td>+</td><td>0</td><td>0</td></tr><tr><td>+</td><td>0</td><td>15</td></tr><tr><td>-</td><td>0</td><td>0</td></tr><tr><td/><td>35</td><td>5</td></tr><tr><td/><td>20</td><td>-20</td></tr><tr><td/><td>15</td><td>25</td></tr></table>
<table><tr><td colspan="3">${h}_{D}\left( {{x}_{2}, g}\right)$</td></tr><tr><td>Pred.</td><td>Error</td><td>Gain</td></tr><tr><td>${h}_{D}$</td><td>$R\left( {h}_{D}\right)$</td><td>$\Delta {R}_{\mathbf{g}}\left( {{h}_{0},{h}_{D}}\right)$</td></tr><tr><td>+</td><td>30</td><td>-30</td></tr><tr><td>-</td><td>0</td><td>0</td></tr><tr><td>+</td><td>20</td><td>-20</td></tr><tr><td>-</td><td>0</td><td>0</td></tr><tr><td>+</td><td>0</td><td>25</td></tr><tr><td>-</td><td>0</td><td>0</td></tr><tr><td>+</td><td>0</td><td>15</td></tr><tr><td>-</td><td>0</td><td>0</td></tr><tr><td/><td>50</td><td>-10</td></tr><tr><td/><td>50</td><td>-50</td></tr><tr><td/><td>0</td><td>40</td></tr></table>
Figure 4: Standard model selection criteria can lead to fair use violations. We consider a 2D classification task with two groups $\mathbf{g} \in \{ 0,1\}$ in which we must fit a model that can use at most one of the binary attributes $\left( {{x}_{1},{x}_{2}}\right) \in \{ 0,1{\} }^{2}$. We fit a generic model and a personalized model with a one-hot encoding of group membership, choosing the variable that minimizes the overall error rate. Here, each group performs best under a different choice, and selection defaults to the choice that benefits the majority group.

Figure 5: Fair use violations resulting from the use of a surrogate loss function in ERM. Here, we are given data for a classification task with features $\mathbf{x} = \left( {{x}_{1},{x}_{2}}\right)$ and a group attribute $\mathbf{g} \in \{ A, B\}$. We fit a linear SVM ${h}_{\mathbf{g}}$ by optimizing the hinge loss for a prediction task where we evaluate the gains of personalization in terms of the error rate (i.e., the 0-1 loss). In this case, the personalized model produces a fair use violation for group $B$ due to an outlier ${\mathbf{x}}_{O}$. We plot the data for group $A$ and group $B$ separately. Each plot shows the generic classifier (${h}_{0}$; grey) and the personalized classifier for the corresponding group (${h}_{A}$ or ${h}_{B}$; black). As a baseline for comparison, we show the personalized models that we would obtain by optimizing an exact loss function (i.e., the 0-1 loss, which matches the performance metric that we use to evaluate the gains of personalization). As shown, we would expect to avoid this violation had we fit a model by optimizing the 0-1 loss directly.
### E.3 Generalization & Dataset Shifts
Fair use violations can arise in deployment. Small samples may significantly distort the relative prevalence of each group, leading standard empirical risk minimization to fit a suboptimal generic model or personalized model (see Figure 6). Fair use violations can also arise as a result of changes in the data distribution, i.e., dataset shift [61, 29, 32] (see Figure 7).
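To make the label-shift failure concrete, the arithmetic for one group in Figure 7 (training counts $n^+ = 5$, $n^- = 25$; deployment counts $n^+ = 30$, $n^- = 20$) can be checked directly. The helper below assumes constant per-group predictions, matching the figure; the function name is our own.

```python
def risk(pred, n_pos, n_neg):
    """Mistakes made by a constant prediction on (n_pos, n_neg) labels."""
    return n_neg if pred == +1 else n_pos

h0_pred, hg_pred = +1, -1   # generic vs. personalized prediction for this group

# Gain of personalization = R(h_0) - R(h_g) under each label distribution.
train_gain  = risk(h0_pred, 5, 25)  - risk(hg_pred, 5, 25)    # 25 - 5  = +20
deploy_gain = risk(h0_pred, 30, 20) - risk(hg_pred, 30, 20)   # 20 - 30 = -10
print(train_gain, deploy_gain)
# A training-time gain of +20 flips to a deployment-time violation of -10.
```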
<table><tr><td colspan="2">Group</td><td colspan="2">Training Data</td><td colspan="2">Data Distribution</td><td colspan="2">Model Predictions</td><td colspan="3">Observed Performance</td><td colspan="3">True Performance</td></tr><tr><td>${g}_{1}$</td><td>${g}_{2}$</td><td>${n}^{ + }$</td><td>${n}^{ - }$</td><td>${n}^{ + }$</td><td>${n}^{ - }$</td><td>${h}_{0}\left( \mathbf{x}\right)$</td><td>${h}_{\mathbf{g}}\left( {\mathbf{x},\mathbf{g}}\right)$</td><td>${R}_{\mathbf{g}}\left( {h}_{0}\right)$</td><td>${R}_{\mathbf{g}}\left( {h}_{\mathbf{g}}\right)$</td><td>${\Delta }_{\mathbf{g}}\left( {{h}_{\mathbf{g}},{h}_{0}}\right)$</td><td>${R}_{\mathbf{g}}\left( {h}_{0}\right)$</td><td>${R}_{\mathbf{g}}\left( {h}_{\mathbf{g}}\right)$</td><td>${\Delta }_{\mathbf{g}}\left( {{h}_{\mathbf{g}},{h}_{0}}\right)$</td></tr><tr><td>0</td><td>0</td><td>65</td><td>60</td><td>130</td><td>120</td><td>+</td><td>+</td><td>60</td><td>60</td><td>0</td><td>120</td><td>120</td><td>0</td></tr><tr><td>1</td><td>0</td><td>60</td><td>65</td><td>120</td><td>130</td><td>+</td><td>-</td><td>65</td><td>60</td><td>5</td><td>130</td><td>120</td><td>10</td></tr><tr><td>0</td><td>1</td><td>60</td><td>65</td><td>130</td><td>120</td><td>+</td><td>-</td><td>65</td><td>60</td><td>5</td><td>120</td><td>130</td><td>-10</td></tr><tr><td>1</td><td>1</td><td>70</td><td>55</td><td>140</td><td>110</td><td>+</td><td>+</td><td>55</td><td>55</td><td>0</td><td>110</td><td>110</td><td>0</td></tr><tr><td/><td>Total</td><td>255</td><td>245</td><td>520</td><td>480</td><td>+</td><td>N/A</td><td>245</td><td>235</td><td>10</td><td>480</td><td>480</td><td>0</td></tr></table>
Figure 6: Fair use violations can arise when personalizing models on small samples. We show a 2D classification task in which a personalized model only exhibits fair use violations in deployment. Here, group (1,0) experiences a gain once the model is deployed. In contrast, group (0,1) experiences a fair use violation as a result of sampling error.
<table><tr><td colspan="2">Group</td><td colspan="2">Training Data</td><td colspan="2">True Distribution</td><td colspan="2">Model Predictions</td><td colspan="3">Train Performance</td><td colspan="3">True Performance</td></tr><tr><td>${g}_{1}$</td><td>${g}_{2}$</td><td>${n}^{ + }$</td><td>${n}^{ - }$</td><td>${n}^{ + }$</td><td>${n}^{ - }$</td><td>${h}_{0}\left( \mathbf{x}\right)$</td><td>${h}_{\mathbf{g}}\left( {\mathbf{x},\mathbf{g}}\right)$</td><td>${R}_{\mathbf{g}}\left( {h}_{0}\right)$</td><td>${R}_{\mathbf{g}}\left( {h}_{\mathbf{g}}\right)$</td><td>${\Delta }_{\mathbf{g}}\left( {{h}_{\mathbf{g}},{h}_{0}}\right)$</td><td>${R}_{\mathbf{g}}\left( {h}_{0}\right)$</td><td>${R}_{\mathbf{g}}\left( {h}_{\mathbf{g}}\right)$</td><td>${\Delta }_{\mathbf{g}}\left( {{h}_{\mathbf{g}},{h}_{0}}\right)$</td></tr><tr><td>0</td><td>0</td><td>20</td><td>0</td><td>20</td><td>0</td><td>+</td><td>+</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>1</td><td>0</td><td>5</td><td>25</td><td>5</td><td>25</td><td>+</td><td>-</td><td>25</td><td>5</td><td>20</td><td>25</td><td>5</td><td>20</td></tr><tr><td>0</td><td>1</td><td>5</td><td>25</td><td>30</td><td>20</td><td>+</td><td>-</td><td>25</td><td>5</td><td>20</td><td>20</td><td>30</td><td>-10</td></tr><tr><td>1</td><td>1</td><td>20</td><td>0</td><td>20</td><td>0</td><td>+</td><td>+</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td/><td>Total</td><td>50</td><td>50</td><td>75</td><td>45</td><td>+</td><td>N/A</td><td>50</td><td>10</td><td>40</td><td>45</td><td>35</td><td>10</td></tr></table>
Figure 7: Label shift produces a fair use violation. Here, we train a linear classifier on a dataset with one binary feature and one binary group attribute. As shown, personalization leads to an overall improvement on the training data, reducing the aggregate error from 50 to 10, along with group-specific improvements. However, not all groups perform equally well in deployment. While groups (0,1) and (1,1) see improvements, a violation (red) occurs for group (1,0) due to label shift: positive examples for this group are no longer present in deployment, even though they were the majority in its training data.
<table><tr><td>GROUP</td><td colspan="2">TEST AUC</td><td colspan="2">INTERVENTIONS</td><td colspan="2">TEST ERROR</td><td colspan="2">INTERVENTIONS</td><td colspan="2">TEST ECE</td><td colspan="2">INTERVENTIONS</td></tr><tr><td>$g$</td><td>${R}_{\mathbf{g}}\left( {h}_{\mathbf{g}}\right)$</td><td>${\Delta }_{\mathbf{g}}$</td><td>Assign ${h}_{0}$</td><td>Assign ${h}_{\mathbf{g}}^{\text{dcp}}$</td><td>${R}_{\mathbf{g}}\left( {h}_{\mathbf{g}}\right)$</td><td>${\Delta }_{\mathbf{g}}$</td><td>Assign ${h}_{0}$</td><td>Assign ${h}_{\mathbf{g}}^{\text{dcp}}$</td><td>${R}_{\mathbf{g}}\left( {h}_{\mathbf{g}}\right)$</td><td>${\Delta }_{\mathbf{g}}$</td><td>Assign ${h}_{0}$</td><td>Assign ${h}_{\mathbf{g}}^{\text{dcp}}$</td></tr><tr><td>female, black</td><td>0.463</td><td>0.024</td><td>0.024</td><td>0.334</td><td>52.2%</td><td>6.8%</td><td>6.8%</td><td>37.3%</td><td>31.6%</td><td>2.3%</td><td>2.3%</td><td>12.3%</td></tr><tr><td>female, white</td><td>0.846</td><td>0.004</td><td>0.004</td><td>0.004</td><td>21.7%</td><td>2.0%</td><td>2.0%</td><td>2.0%</td><td>10.2%</td><td>1.9%</td><td>1.9%</td><td>2.1%</td></tr><tr><td>female, other</td><td>0.860</td><td>-0.003</td><td>0.000</td><td>0.057</td><td>25.5%</td><td>1.3%</td><td>1.3%</td><td>14.8%</td><td>15.5%</td><td>0.9%</td><td>0.9%</td><td>5.0%</td></tr><tr><td>male, black</td><td>0.767</td><td>-0.001</td><td>0.000</td><td>0.104</td><td>34.0%</td><td>-5.2%</td><td>0.0%</td><td>15.6%</td><td>20.1%</td><td>-2.0%</td><td>0.0%</td><td>4.9%</td></tr><tr><td>male, white</td><td>0.767</td><td>0.004</td><td>0.004</td><td>0.038</td><td>29.2%</td><td>1.3%</td><td>1.3%</td><td>3.7%</td><td>10.3%</td><td>1.2%</td><td>1.2%</td><td>1.2%</td></tr><tr><td>male, other</td><td>0.836</td><td>-0.002</td><td>0.000</td><td>0.017</td><td>27.9%</td><td>-5.0%</td><td>0.0%</td><td>1.3%</td><td>15.4%</td><td>-1.6%</td><td>0.0%</td><td>0.0%</td></tr><tr><td>Total</td><td>0.800</td><td>0.006</td><td>-</td><td>-</td><td>28.3%</td><td>0.3%</td><td>-</td><td>-</td><td>4.7%</td><td>0.2%</td><td>-</td><td>-</td></tr></table>
Table 3: Fair use evaluation of a personalized logistic regression model with a one-hot encoding of group attributes for kidney. As shown, personalization can improve overall performance while reducing performance for specific groups (red). This result holds across all performance metrics. In such cases, we can resolve fair use violations and improve the gains of personalization by assigning predictions to each group from multiple models. Here, we show the gains when we assign each group the most accurate predictions from either the personalized model ${h}_{\mathbf{g}}$ or a generic classifier ${h}_{0}$, and when we assign each group the most accurate predictions from either the personalized model ${h}_{\mathbf{g}}$ or a decoupled classifier ${h}_{\mathbf{g}}^{\mathrm{dcp}}$. We highlight cases where this intervention led to a gain in green, and cases where it resolved a violation in yellow.
## F Mortality Prediction for Acute Kidney Injury
In this section, we evaluate the gains of personalization in a model that predicts mortality for patients with acute kidney injury. We use our results to discuss how fair use evaluations, as a form of auditing [53], can inform the use of race in clinical prediction models, and describe simple interventions to mitigate harm.
### F.1 Setup
We consider a classification task to predict mortality for patients who receive continuous renal replacement therapy while in the ICU. The data consists of records for $n = {2066}$ patients from MIMIC III and IV [38]. Here, ${y}_{i} = + 1$ if patient $i$ dies in the ICU and $\Pr \left( {{y}_{i} = + 1}\right) = {51.1}\%$ . Each patient has $k = 2$ group attributes: sex $\in \{$ male, female $\}$ and race $\in \{$ white, black, other $\}$ and $d = {78}$ features related to their health, lab tests, length of stay, and potential for organ failure. We train and evaluate personalized models using the same setup as Section 4.1.
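For reference, the one-hot encoding of intersectional groups used by the personalized models can be sketched as follows. The helper name and interface are our own; the sketch only shows how the $k = 2$ group attributes become six indicator columns appended to the features.

```python
import numpy as np
from itertools import product

# Intersectional groups for the kidney task: sex x race (m = 6 groups).
SEX = ["male", "female"]
RACE = ["white", "black", "other"]
GROUPS = list(product(SEX, RACE))

def add_group_onehot(X, groups):
    """X: (n, d) feature matrix; groups: list of (sex, race) tuples per row.
    Returns X with one indicator column per intersectional group appended."""
    onehot = np.zeros((len(groups), len(GROUPS)))
    for i, g in enumerate(groups):
        onehot[i, GROUPS.index(g)] = 1.0
    return np.hstack([X, onehot])

X = np.random.default_rng(0).normal(size=(4, 3))
Xg = add_group_onehot(X, [("male", "white"), ("female", "black"),
                          ("male", "other"), ("female", "white")])
print(Xg.shape)   # (4, 9): 3 features plus 6 group indicators
```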
### F.2 Results
We show the performance of a personalized logistic regression model with a one-hot encoding in Table 3, and present results for other model classes in Appendix G. Overall, our findings show that personalization yields uneven gains at the group level. As in Section 4.2, we observe fair use violations across performance metrics and model classes. In this case, for example, the gains in error range from -5.2% to 6.8%, and two groups experience statistically significant fair use violations: (male, black) and (male, other).
On the Use of Race Clinical prediction models include group attributes when there is a "plausible" causal relationship between group membership and the outcome of interest. These norms have led to the development of widely-used clinical prediction models that use race and ethnicity [25, 71]. Recently, Vyas et al. [71] discuss how these models can inflict harm and urge physicians to check if "race correction is based on robust [statistical] evidence."
Our results highlight how fair use evaluation can provide evidence that serves as a barrier to "race correction" in such cases. Here, checking rationality shows that a race-specific model can reduce performance for specific groups - e.g., (male, black) and (male, other). Checking envy-freeness reveals that certain groups expect better performance by misreporting their group membership - e.g., (male, other) would experience 5.6% gain in test error by reporting any other race.
Even in cases where including race can improve performance, we note that race may act as a proxy for broader social determinants of health. Thus, a model that includes race may act as a "smoke screen" in that it attributes differences in health outcomes to an immutable factor, and perpetuates inaction on the root causes of health disparities [57]. Given these drawbacks, the starting point should be evidence of gain rather than harm.
### F.3 Interventions
We use our results to design simple interventions that resolve fair use violations by assigning predictions from different models at prediction time. These interventions are admittedly simple, but have the benefit of being broadly applicable.
Assigning a Generic Model We assign groups that are subject to a fair use violation the predictions from a generic model ${h}_{0}$. This intervention is guaranteed to resolve all fair use violations in a way that strictly improves performance, and may further reduce the use of personal data in prediction. In this case, it resolves all rationality violations (2/3/2 in terms of error/AUC/ECE, respectively). We also observe a potential to reduce data use: seeing that both (male, black) and (male, other) experience a fair use violation in terms of error, we could avoid soliciting race for all male patients and reduce test error by 1% (as the loss in accuracy for (male, white) is offset by the gains in accuracy for (male, black) and (male, other)).
Assigning a Decoupled Model We assign groups who experience a fair use violation the predictions from a decoupled model ${h}_{g}^{\mathrm{{dcp}}} -$ i.e., a model fit using only data from their group. While this approach may not resolve fair use violations, it can produce surprisingly large gains as decoupling effectively personalizes the entire model development pipeline. Our results in Table 3 show the potential gains of this intervention across performance metrics. Focusing on error, we see that one can: (1) eliminate fair use violations for 2 groups (male, black) and (male, other); (2) greatly improve the gains for 1 group, e.g., (female, black) who experience a gain of 37.3%; and (3) improve overall gains by 6.2%. We observe similar effects across other model classes and configurations.
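Both interventions reduce to the same prediction-time rule: for each group, serve the model with the best held-out performance among the candidates. A minimal sketch follows; the interface is our own and the error values are illustrative (loosely modeled on Table 3), not exact figures from the paper.

```python
def assign_models(models, val_errors):
    """Pick, per group, the candidate model with the lowest validation error.

    models: dict of model name -> model object (contents unused here).
    val_errors: dict of model name -> {group: validation error}.
    Returns a dict mapping each group to the name of its assigned model.
    """
    groups = next(iter(val_errors.values())).keys()
    return {g: min(models, key=lambda name: val_errors[name][g]) for g in groups}

# Illustrative validation errors for two groups and three candidates:
# the personalized model h_g, the generic model h_0, and a decoupled h_dcp.
val_errors = {
    "h_g":   {("male", "black"): 0.340, ("female", "white"): 0.217},
    "h_0":   {("male", "black"): 0.288, ("female", "white"): 0.237},
    "h_dcp": {("male", "black"): 0.184, ("female", "white"): 0.237},
}
models = {"h_g": None, "h_0": None, "h_dcp": None}
print(assign_models(models, val_errors))
# (male, black) is routed away from h_g, resolving its violation;
# (female, white) keeps the personalized model, preserving its gain.
```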
## G Supporting Material for Sections 4 & F
In this Appendix, we provide: (i) additional information on the datasets used in Sections 4 and F; (ii) results showing the gains of personalization when fitting personalized neural nets and random forests.
### G.1 Additional Information on Datasets
<table><tr><td>$\mathbf{{Dataset}}$</td><td>$n$</td><td>$d$</td><td>$G$</td><td>Prediction Task</td><td>Reference</td></tr><tr><td>apnea</td><td>1,152</td><td>26</td><td>Age $\times$ Sex $= \{ < {30},{30}$ to ${60},{60} + \} \times \{$ Male, Female $\}$</td><td>patient has obstructive sleep apnea</td><td>Ustun et al. [66]</td></tr><tr><td>cardio_eicu</td><td>1,341</td><td>49</td><td>Age $\times$ Sex $= \{$ Young, Old $\} \times \{$ Male, Female $\}$</td><td>patient with cardiogenic shock dies</td><td>Pollard et al. [60]</td></tr><tr><td>cardio_mimic</td><td>5,289</td><td>49</td><td>Age $\times$ Sex $= \{$ Young, Old $\} \times \{$ Male, Female $\}$</td><td>patient with cardiogenic shock dies</td><td>Johnson et al. [38]</td></tr><tr><td>heart</td><td>181</td><td>26</td><td>Age $\times$ Sex $= \{$ Young, Old $\} \times \{$ Male, Female $\}$</td><td>patient has heart disease</td><td>Detrano et al. [17]</td></tr><tr><td>kidney</td><td>2,066</td><td>78</td><td>Sex $\times$ Race $= \{$ Male, Female $\} \times \{$ White, Black, Other $\}$</td><td>mortality of patient on CRRT</td><td>Johnson et al. [38]</td></tr><tr><td>mortality</td><td>21,139</td><td>484</td><td>Age $\times$ Sex $= \{ < {30},{30}$ to ${60},{60} + \} \times \{$ Male, Female $\}$</td><td>mortality of patient in ICU</td><td>Harutyunyan et al. [34]</td></tr><tr><td>saps</td><td>7,797</td><td>36</td><td>Age $\times$ HIV $= \{ \leq {30},{30} + \} \times \{$ Positive, Negative $\}$</td><td>mortality of patient in ICU</td><td>Le Gall et al. [48]</td></tr></table>
Table 4: Overview of classification datasets used to train clinical prediction models in Sections 4 and F. We describe the conditions that lead to ${y}_{i}$ under each prediction task. All datasets used are publicly available, have been deidentified, and inspected to ensure that they contain no offensive content. In cases where data access requires consent or approval from the data holders, we have followed the proper procedure to obtain such consent. Datasets based on MIMIC-III [38] (kidney, mortality) and eICU [60] (cardio) are hosted on PhysioNet under the PhysioNet Credentialed Health Data License. The heart dataset is hosted on the UCI ML Repository under an Open Data license. The apnea and saps datasets must be requested from the authors of the papers listed above $\left\lbrack {{48},{66}}\right\rbrack$ . We minimally process each dataset to impute the values of missing points (using mean value imputation), and repair class imbalances across intersectional groups (to eliminate "trivial" fair use violations that occur due to class imbalance).
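The two preprocessing steps named in the caption, mean-value imputation and per-group class rebalancing, can be sketched as follows. The function names and the exact rebalancing rule (downsampling each group to a 50/50 label split) are our assumptions; the paper does not specify its procedure beyond the caption.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_impute(X):
    """Replace NaNs in each column with that column's observed mean."""
    X = X.copy()
    col_means = np.nanmean(X, axis=0)
    rows, cols = np.where(np.isnan(X))
    X[rows, cols] = np.take(col_means, cols)
    return X

def rebalance_group(y, rng=rng):
    """Downsample indices of one group's rows so that positive and
    negative labels (1/0) are equally represented."""
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    k = min(len(pos), len(neg))
    return np.concatenate([rng.choice(pos, k, replace=False),
                           rng.choice(neg, k, replace=False)])
```

Applying `rebalance_group` within each intersectional group removes the "trivial" violations that a majority-class predictor would otherwise create for groups with skewed base rates.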
apnea We use the obstructive sleep apnea (OSA) dataset outlined in Ustun et al. [66]. This dataset contains a cohort of 1,152 patients, 23% of whom have OSA. We use all available features (e.g., BMI, comorbidities, age, and sex) and binarize them, resulting in 26 binary features.
cardio_eicu & cardio_mimic Cardiogenic shock is a serious acute condition where the heart cannot provide sufficient blood to the vital organs. Using the eICU Collaborative Research Database V2.0 [60] and MIMIC-III database [38], we create a cohort of patients who have cardiogenic shock during the course of their intensive care unit (ICU) stay. We use an exhaustive set of clinical criteria
based on the patient's labs and vitals (i.e. presence of hypotension and organ hypoperfusion). The goal is to predict whether a patient with cardiogenic shock will die in hospital. As features, we summarize (minimums and maximums) relevant labs and vitals (e.g. systolic BP, heart rate, hemoglobin count) of each patient from the period of time prior to the onset of cardiogenic shock up to 24 hours. This results in a dataset containing 8,815 patients, 13.5% of whom die in hospital.
heart We use the Heart dataset from the UCI Machine Learning Repository, where the goal is to predict the presence of heart disease from clinical features. It consists of 303 patients, 54.5% of whom have heart disease. We use all available features, treating cp, thal, ca, slope and restecg as categorical, and all remaining features as continuous.
kidney Using MIMIC-III and MIMIC-IV [38], we create a cohort of patients who were given Continuous Renal Replacement Therapy (CRRT) at any point during their ICU stay. For patients with multiple ICU stays, we select their first one. We define the target as whether the patient dies during the course of their selected hospital admission. As features, we select the most recent instances of relevant lab measurements (e.g. sodium, potassium, creatinine) prior to the CRRT start time, along with the patient's age, the number of hours they have been in the ICU when CRRT was administered, and their Sequential Organ Failure Assessment (SOFA) score at admission. We treat all variables as continuous with the exception of the SOFA score, which we treat as ordinal. This results in a dataset of 1,722 CRRT patients, 51.1% of whom die in-hospital. We define protected groups based on the patient's sex and self-reported race and ethnicity.
mortality We follow the cohort creation steps outlined by Harutyunyan et al. [34] for their in-hospital mortality prediction task. We select the first ICU stay longer than 48 hours of patients in MIMIC-III [38], and aim to predict whether they will die in-hospital during their corresponding hospital admission. As features, we bin the time-series lab and vital measurements provided by Harutyunyan et al. [34] into four 12-hour time-bins, and compute the mean in each time-bin. We additionally include the patient's age and sex as features. This results in a cohort of 21,139 patients, ${13.2}\%$ of whom die in hospital.
saps The Simplified Acute Physiology Score II (SAPS II) is a risk score developed to predict mortality in the ICU [48]. The underlying study was conducted in 137 medical centers across 12 countries and contains 7,797 patients. For each patient, we have access to demographics, comorbidities, and vitals, which are used to predict the risk of mortality in the ICU. For group attributes, we use age and HIV status. The percentage of patients in the dataset who experience mortality is 21.8%.
### G.2 Results for Neural Nets & Random Forests
In this Appendix, we present tables that summarize the gains of personalization for neural networks and random forests. The following tables are analogous to Table 1, except that they also include results for the kidney dataset in Section F.
#### G.2.1 Neural Nets
We trained our neural network models with two hidden layers of size 5 and 2 and a learning rate of ${10}^{-3}$. Additionally, we applied Platt scaling [59] to the outputs of the neural network models to ensure that they were calibrated. We observe findings similar to those described in Section 4.2 and Section F for neural network models. For example, when looking at test error on cardio_eicu, we are able to eliminate all fair use violations by decoupling models. Additionally, across datasets, we are able to identify statistically significant fair use violations and gains, as noted in the gains and violations rows.
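Platt scaling fits a one-dimensional logistic model on held-out scores. A minimal sketch follows; we fit by plain gradient descent for brevity, whereas Platt's original algorithm [59] differs in its optimizer and target smoothing, so treat this as an illustration rather than the authors' implementation.

```python
import numpy as np

def platt_scale(scores, y, n_iter=5000, lr=0.5):
    """Fit p(y=1 | s) = sigmoid(a*s + b) on held-out (score, label) pairs
    by gradient descent on the logistic loss; returns a calibration map."""
    scores = np.asarray(scores, float)
    y = np.asarray(y, float)
    a, b = 1.0, 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(a * scores + b)))
        grad = p - y                 # derivative of the NLL w.r.t. the logit
        a -= lr * np.mean(grad * scores)
        b -= lr * np.mean(grad)
    return lambda s: 1.0 / (1.0 + np.exp(-(a * s + b)))
```

In our setting, `scores` would be the raw network outputs on a validation split, and the returned map is applied to test-time outputs before computing ECE.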
<table><tr><td rowspan="2">Dataset</td><td rowspan="2">Metrics</td><td colspan="3">Test AUC</td><td colspan="3">Test ECE</td><td colspan="3">Test Error</td></tr><tr><td>1Hot</td><td>All</td><td>DCP</td><td>1Hot</td><td>All</td><td>DCP</td><td>1Hot</td><td>All</td><td>DCP</td></tr><tr><td rowspan="5">apnea</td><td>Personalized</td><td>0.705</td><td>0.524</td><td>0.622</td><td>6.3%</td><td>2.5%</td><td>5.3%</td><td>36.7%</td><td>50.6%</td><td>41.5%</td></tr><tr><td>Gain</td><td>-0.012</td><td>-0.193</td><td>-0.095</td><td>-0.7%</td><td>3.2%</td><td>0.4%</td><td>-3.3%</td><td>-17.2%</td><td>-8.1%</td></tr><tr><td>Best/Worst Gain</td><td>0.114 / -0.051</td><td>0.029 / -0.501</td><td>-0.068 / -0.328</td><td>10.9% / -8.2%</td><td>24.0% / 1.5%</td><td>9.8% / -5.7%</td><td>8.4% / -5.0%</td><td>7.1% / -43.5%</td><td>-2.2% / -50.5%</td></tr><tr><td>Rat. Gains/Viols</td><td>3/3</td><td>5/5</td><td>6/6</td><td>3/3</td><td>5/5</td><td>3/3</td><td>1/2</td><td>1/1</td><td>0/0</td></tr><tr><td>EF Gains/Viols</td><td>1/0</td><td>0/0</td><td>2/1</td><td>2/4</td><td>5/5</td><td>5/5</td><td>6/6</td><td>6/6</td><td>6/6</td></tr><tr><td rowspan="5">cardio_eicu</td><td>Personalized</td><td>0.739</td><td>0.738</td><td>0.687</td><td>4.5%</td><td>5.5%</td><td>5.4%</td><td>31.5%</td><td>31.8%</td><td>36.6%</td></tr><tr><td>Gain</td><td>0.001</td><td>-0.001</td><td>-0.051</td><td>2.3%</td><td>1.4%</td><td>1.5%</td><td>1.6%</td><td>1.3%</td><td>-3.5%</td></tr><tr><td>Best/Worst Gain</td><td>0.067 / -0.003</td><td>0.029 / -0.012</td><td>0.007 / -0.090</td><td>2.6% / -1.2%</td><td>2.4% / -1.9%</td><td>4.9% / -3.0%</td><td>8.4% / -0.5%</td><td>5.5% / -1.3%</td><td>0.1% / -10.2%</td></tr><tr><td>Rat. Gains/Viols</td><td>0/0</td><td>1/1</td><td>3/3</td><td>3/3</td><td>3/3</td><td>2/2</td><td>2/3</td><td>2/3</td><td>0/0</td></tr><tr><td>EF Gains/Viols</td><td>0/0</td><td>0/0</td><td>2/2</td><td>1/1</td><td>2/2</td><td>2/2</td><td>4/4</td><td>4/4</td><td>4/4</td></tr><tr><td rowspan="5">cardio_mimic</td><td>Personalized</td><td>0.849</td><td>0.849</td><td>0.836</td><td>3.1%</td><td>4.7%</td><td>3.3%</td><td>23.7%</td><td>24.0%</td><td>23.9%</td></tr><tr><td>Gain</td><td>0.004</td><td>0.004</td><td>-0.009</td><td>1.1%</td><td>-0.4%</td><td>1.0%</td><td>0.6%</td><td>0.2%</td><td>0.4%</td></tr><tr><td>Best/Worst Gain</td><td>0.018 / -0.005</td><td>0.012 / -0.000</td><td>0.004 / -0.014</td><td>2.1% / -0.4%</td><td>1.4% / -2.3%</td><td>2.5% / -0.3%</td><td>2.0% / -1.1%</td><td>2.3% / -2.4%</td><td>1.3% / -1.4%</td></tr><tr><td>Rat. Gains/Viols</td><td>1/1</td><td>0/0</td><td>2/2</td><td>2/2</td><td>2/2</td><td>2/2</td><td>3/3</td><td>2/2</td><td>2/2</td></tr><tr><td>EF Gains/Viols</td><td>0/0</td><td>1/1</td><td>4/4</td><td>4/4</td><td>3/3</td><td>3/3</td><td>4/4</td><td>4/4</td><td>4/4</td></tr><tr><td rowspan="5">heart</td><td>Personalized</td><td>0.457</td><td>0.736</td><td>0.554</td><td>22.7%</td><td>17.2%</td><td>18.1%</td><td>52.6%</td><td>27.6%</td><td>38.2%</td></tr><tr><td>Gain</td><td>-0.090</td><td>0.189</td><td>0.007</td><td>-9.1%</td><td>-3.6%</td><td>-4.5%</td><td>-1.3%</td><td>23.7%</td><td>13.2%</td></tr><tr><td>Best/Worst Gain</td><td>0.061 / -0.392</td><td>0.317 / 0.023</td><td>0.257 / -0.023</td><td>4.8% / -29.8%</td><td>9.8% / -9.7%</td><td>6.2% / -14.8%</td><td>1.6% / -9.2%</td><td>38.0% / 4.6%</td><td>28.1% / 7.1%</td></tr><tr><td>Rat. Gains/Viols</td><td>2/2</td><td>0/0</td><td>2/1</td><td>1/1</td><td>1/1</td><td>2/2</td><td>0/2</td><td>4/4</td><td>3/4</td></tr><tr><td>EF Gains/Viols</td><td>2/1</td><td>1/0</td><td>3/1</td><td>1/1</td><td>4/4</td><td>0/0</td><td>4/4</td><td>4/4</td><td>4/4</td></tr><tr><td rowspan="5">kidney</td><td>Personalized</td><td>0.774</td><td>0.774</td><td>0.762</td><td>6.0%</td><td>6.2%</td><td>7.3%</td><td>29.2%</td><td>31.0%</td><td>30.9%</td></tr><tr><td>Gain</td><td>0.003</td><td>0.004</td><td>-0.009</td><td>-0.1%</td><td>-0.4%</td><td>-1.4%</td><td>-2.1%</td><td>-3.8%</td><td>-3.7%</td></tr><tr><td>Best/Worst Gain</td><td>0.039 / -0.057</td><td>0.026 / -0.095</td><td>0.033 / -0.152</td><td>2.8% / -1.6%</td><td>3.7% / -2.5%</td><td>0.8% / -5.4%</td><td>-0.6% / -7.5%</td><td>4.7% / -5.4%</td><td>-1.5% / -18.9%</td></tr><tr><td>Rat. Gains/Viols</td><td>2/2</td><td>3/3</td><td>4/4</td><td>2/2</td><td>2/2</td><td>1/1</td><td>0/0</td><td>1/1</td><td>0/0</td></tr><tr><td>EF Gains/Viols</td><td>1/0</td><td>1/0</td><td>4/4</td><td>3/5</td><td>3/3</td><td>2/2</td><td>6/6</td><td>6/6</td><td>6/6</td></tr><tr><td rowspan="5">mortality</td><td>Personalized</td><td>0.870</td><td>0.869</td><td>0.895</td><td>2.8%</td><td>4.3%</td><td>3.0%</td><td>20.9%</td><td>21.5%</td><td>17.7%</td></tr><tr><td>Gain</td><td>-0.004</td><td>-0.004</td><td>0.022</td><td>0.6%</td><td>-0.9%</td><td>0.5%</td><td>-0.4%</td><td>-1.0%</td><td>2.8%</td></tr><tr><td>Best/Worst Gain</td><td>0.025 / -0.019</td><td>-0.001 / -0.015</td><td>0.039 / 0.005</td><td>2.7% / -1.7%</td><td>0.5% / -1.2%</td><td>8.0% / 0.1%</td><td>5.1% / -2.2%</td><td>-0.3% / -2.3%</td><td>12.6% / 0.1%</td></tr><tr><td>Rat. Gains/Viols</td><td>3/3</td><td>5/5</td><td>0/0</td><td>3/3</td><td>2/2</td><td>5/5</td><td>3/3</td><td>0/0</td><td>6/6</td></tr><tr><td>EF Gains/Viols</td><td>6/2</td><td>4/4</td><td>6/6</td><td>1/5</td><td>4/4</td><td>0/0</td><td>6/6</td><td>6/6</td><td>6/6</td></tr><tr><td rowspan="5">saps</td><td>Personalized</td><td>0.157</td><td>0.872</td><td>0.758</td><td>37.8%</td><td>7.8%</td><td>31.5%</td><td>63.4%</td><td>21.7%</td><td>48.9%</td></tr><tr><td>Gain</td><td>-0.037</td><td>0.678</td><td>0.565</td><td>7.5%</td><td>37.5%</td><td>13.9%</td><td>-1.8%</td><td>39.9%</td><td>12.7%</td></tr><tr><td>Best/Worst Gain</td><td>0.101 / -0.041</td><td>0.745 / 0.657</td><td>0.743 / -0.273</td><td>27.1% / 1.5%</td><td>43.2% / -3.5%</td><td>49.9% / 6.4%</td><td>0.0% / -5.8%</td><td>53.9% / 1.4%</td><td>22.0% / 0.0%</td></tr><tr><td>Rat. Gains/Viols</td><td>3/2</td><td>0/0</td><td>1/1</td><td>4/4</td><td>3/3</td><td>4/4</td><td>0/1</td><td>3/4</td><td>3/4</td></tr><tr><td>EF Gains/Viols</td><td>3/1</td><td>3/1</td><td>0/0</td><td>3/3</td><td>3/3</td><td>3/3</td><td>4/4</td><td>4/4</td><td>4/4</td></tr></table>
Table 5: Performance of personalized neural network models on all datasets. We show the gains of personalization in terms of test AUC, ECE, and error. We report: model performance at the population level, the overall gain of personalization, the range of gains over $m$ intersectional groups, and the number of rationality and envy-freeness gains/violations (evaluated using a bootstrap hypothesis test at a 10% significance level).
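For reference, the ECE reported in these tables can be computed as follows. This is a common binary-classification variant that bins the predicted probability of the positive class and compares it to the empirical positive rate; the paper does not specify its exact binning scheme, so treat the bin count and binning rule as assumptions.

```python
import numpy as np

def expected_calibration_error(probs, y, n_bins=10):
    """ECE: bin predictions, then average the |empirical rate - mean
    predicted probability| gap per bin, weighted by bin size."""
    probs = np.asarray(probs, float)
    y = np.asarray(y, float)
    bins = np.minimum((probs * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            gap = abs(y[mask].mean() - probs[mask].mean())
            ece += mask.mean() * gap    # weight by fraction of samples in bin
    return ece
```

Computing this quantity within each intersectional group, rather than over the pooled test set, yields the group-level ECE gains reported above.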
## G.2.2 Random Forests
For our random forest models, we trained each with the following hyperparameters: 100 estimators, a max depth of 20, a minimum of 5 samples per split, and a minimum of 2 samples in each leaf. For random forests, we expect that these models will perform well when optimizing error but will not necessarily have high AUC or be well calibrated (i.e., low ECE). We note this in the table below. For example, using an intersectional encoding with random forests is effective in minimizing fair use violations on error across multiple datasets (e.g., apnea, kidney). As noted with both logistic regression and neural networks, we are able to reliably identify statistically significant violations.
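These hyperparameters map directly onto scikit-learn's `RandomForestClassifier`; the sketch below is only illustrative, since the paper does not name its implementation, and the data here is synthetic:

```python
# Illustrative: the stated hyperparameters in scikit-learn form, fit on
# synthetic data (the paper's implementation and data pipeline may differ).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

rf = RandomForestClassifier(
    n_estimators=100,     # 100 estimators
    max_depth=20,         # max depth of 20
    min_samples_split=5,  # minimum samples per split
    min_samples_leaf=2,   # minimum samples in each leaf
    random_state=0,
)
rf.fit(X, y)
```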
<table><tr><td rowspan="2">Dataset</td><td rowspan="2">Metrics</td><td colspan="2">Test AUC</td><td colspan="2">Test ECE</td><td colspan="2">Test Error</td></tr><tr><td>1Hot</td><td>All</td><td>1Hot</td><td>All</td><td>1Hot</td><td>All</td></tr><tr><td rowspan="5">apnea</td><td>Personalized</td><td>0.757</td><td>0.759</td><td>7.7%</td><td>7.1%</td><td>30.5%</td><td>31.2%</td></tr><tr><td>Gain</td><td>0.002</td><td>0.004</td><td>-0.4%</td><td>0.6%</td><td>0.8%</td><td>-0.5%</td></tr><tr><td>Best/Worst Gain</td><td>0.064 / -0.001</td><td>0.020 / -0.009</td><td>4.4% / -2.7%</td><td>6.8% / -0.3%</td><td>9.8% / -2.8%</td><td>4.3% / -1.5%</td></tr><tr><td>Rat. Gains/Viols</td><td>1/1</td><td>2/2</td><td>2/2</td><td>3/3</td><td>5/5</td><td>2/4</td></tr><tr><td>EF Gains/Viols</td><td>4/0</td><td>4/2</td><td>4/5</td><td>3/3</td><td>6/6</td><td>6/6</td></tr><tr><td rowspan="5">cardio_eicu</td><td>Personalized</td><td>0.764</td><td>0.772</td><td>7.8%</td><td>8.7%</td><td>30.8%</td><td>30.2%</td></tr><tr><td>Gain</td><td>0.001</td><td>-0.000</td><td>-1.3%</td><td>-0.7%</td><td>1.0%</td><td>0.4%</td></tr><tr><td>Best/Worst Gain</td><td>0.009 / -0.022</td><td>0.014 / -0.028</td><td>3.7% / -0.8%</td><td>1.0% / -3.5%</td><td>4.9% / -1.6%</td><td>1.4% / -1.8%</td></tr><tr><td>Rat. Gains/Viols</td><td>1/1</td><td>2/2</td><td>2/2</td><td>1/1</td><td>3/3</td><td>1/3</td></tr><tr><td>EF Gains/Viols</td><td>1/1</td><td>3/3</td><td>3/3</td><td>3/3</td><td>4/4</td><td>4/4</td></tr><tr><td rowspan="5">cardio_mimic</td><td>Personalized</td><td>0.847</td><td>0.847</td><td>9.2%</td><td>9.7%</td><td>24.0%</td><td>24.3%</td></tr><tr><td>Gain</td><td>-0.003</td><td>0.001</td><td>0.4%</td><td>-0.4%</td><td>-0.2%</td><td>-0.3%</td></tr><tr><td>Best/Worst Gain</td><td>-0.001 / -0.004</td><td>0.002 / -0.001</td><td>1.2% / 0.2%</td><td>0.3% / -0.9%</td><td>0.8% / -1.1%</td><td>0.3% / -1.4%</td></tr><tr><td>Rat. Gains/Viols</td><td>4/4</td><td>2/2</td><td>4/4</td><td>1/1</td><td>1/1</td><td>1/1</td></tr><tr><td>EF Gains/Viols</td><td>2/2</td><td>2/1</td><td>1/1</td><td>3/3</td><td>4/4</td><td>4/4</td></tr><tr><td rowspan="5">heart</td><td>Personalized</td><td>0.897</td><td>0.896</td><td>12.0%</td><td>14.4%</td><td>17.1%</td><td>22.4%</td></tr><tr><td>Gain</td><td>-0.006</td><td>-0.000</td><td>4.6%</td><td>-2.7%</td><td>-2.6%</td><td>-2.6%</td></tr><tr><td>Best/Worst Gain</td><td>0.006 / -0.025</td><td>0.000 / -0.026</td><td>5.5% / -1.8%</td><td>9.3% / -5.1%</td><td>9.6% / -10.8%</td><td>5.6% / -10.7%</td></tr><tr><td>Rat. Gains/Viols</td><td>3/1</td><td>4/1</td><td>2/2</td><td>1/1</td><td>1/2</td><td>0/2</td></tr><tr><td>EF Gains/Viols</td><td>4/0</td><td>2/0</td><td>2/2</td><td>2/2</td><td>4/4</td><td>4/4</td></tr><tr><td rowspan="5">kidney</td><td>Personalized</td><td>0.775</td><td>0.778</td><td>8.8%</td><td>9.0%</td><td>29.2%</td><td>29.1%</td></tr><tr><td>Gain</td><td>0.001</td><td>0.003</td><td>-0.7%</td><td>-1.1%</td><td>1.2%</td><td>1.0%</td></tr><tr><td>Best/Worst Gain</td><td>0.010 / -0.017</td><td>0.009 / -0.019</td><td>2.0% / -1.9%</td><td>1.2% / -3.8%</td><td>5.3% / -0.9%</td><td>2.0% / -3.1%</td></tr><tr><td>Rat. Gains/Viols</td><td>3/2</td><td>3/3</td><td>1/1</td><td>1/1</td><td>4/5</td><td>2/5</td></tr><tr><td>EF Gains/Viols</td><td>3/0</td><td>3/2</td><td>4/6</td><td>3/3</td><td>6/6</td><td>6/6</td></tr><tr><td rowspan="5">mortality</td><td>Personalized</td><td>0.806</td><td>0.806</td><td>10.9%</td><td>10.8%</td><td>27.2%</td><td>27.1%</td></tr><tr><td>Gain</td><td>0.002</td><td>-0.001</td><td>-0.6%</td><td>-0.0%</td><td>0.3%</td><td>-0.1%</td></tr><tr><td>Best/Worst Gain</td><td>0.005 / 0.001</td><td>0.012 / -0.004</td><td>1.4% / -1.2%</td><td>1.5% / -3.1%</td><td>0.9% / -0.5%</td><td>0.7% / -1.8%</td></tr><tr><td>Rat. Gains/Viols</td><td>0/0</td><td>2/2</td><td>1/1</td><td>2/2</td><td>3/3</td><td>3/4</td></tr><tr><td>EF Gains/Viols</td><td>6/0</td><td>3/1</td><td>3/5</td><td>2/6</td><td>6/6</td><td>6/6</td></tr><tr><td rowspan="5">saps</td><td>Personalized</td><td>0.879</td><td>0.878</td><td>4.6%</td><td>4.9%</td><td>20.1%</td><td>20.0%</td></tr><tr><td>Gain</td><td>-0.002</td><td>-0.002</td><td>0.0%</td><td>0.1%</td><td>-0.4%</td><td>-0.4%</td></tr><tr><td>Best/Worst Gain</td><td>0.000 / -0.050</td><td>0.050 / -0.002</td><td>11.2% / -1.6%</td><td>0.2% / -3.5%</td><td>0.0% / -10.0%</td><td>0.2% / -5.4%</td></tr><tr><td>Rat. Gains/Viols</td><td>3/2</td><td>2/1</td><td>2/2</td><td>2/2</td><td>0/1</td><td>0/2</td></tr><tr><td>EF Gains/Viols</td><td>3/1</td><td>4/2</td><td>4/4</td><td>4/4</td><td>4/4</td><td>4/4</td></tr></table>
Table 6: Performance of personalized random forest models on all datasets. We show the gains of personalization in terms of test AUC, ECE, and error. We report: model performance at the population level, the overall gain of personalization, the range of gains over $m$ intersectional groups, and the number of rationality and envy-freeness gains/violations (evaluated using a bootstrap hypothesis test at a 10% significance level).
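The table captions reference a bootstrap hypothesis test at a 10% significance level. The paper does not spell out the test in this section, so the sketch below shows one plausible percentile-bootstrap version: resample a group's per-example losses and flag a significant gain or violation when the two-sided 90% interval for the mean gain excludes zero. The exact test used by the paper may differ.

```python
# Hedged sketch of a group-level bootstrap test: all names and the toy data
# are illustrative, not taken from the paper's code.
import random

def bootstrap_gain_test(loss_generic, loss_personalized, n_boot=2000, alpha=0.10, seed=0):
    """Return 'gain', 'violation', or 'insignificant' for one group."""
    rng = random.Random(seed)
    n = len(loss_generic)
    # positive gain = personalization lowers this group's loss
    gains = [g - p for g, p in zip(loss_generic, loss_personalized)]
    means = []
    for _ in range(n_boot):
        sample = [gains[rng.randrange(n)] for _ in range(n)]
        means.append(sum(sample) / n)
    means.sort()
    lo = means[int((alpha / 2) * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    if lo > 0:
        return "gain"
    if hi < 0:
        return "violation"
    return "insignificant"

# Toy example: a group whose personalized error is clearly higher (a violation).
generic = [0] * 80 + [1] * 20       # 20% error under the generic model
personalized = [0] * 60 + [1] * 40  # 40% error under the personalized model
print(bootstrap_gain_test(generic, personalized))
```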
NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/TulqHKf4uPn/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,276 @@
§ WHEN PERSONALIZATION HARMS: RECONSIDERING THE USE OF GROUP ATTRIBUTES FOR PREDICTION
Anonymous Author(s) Affiliation Address email
§ ABSTRACT
Machine learning models often use group attributes to assign personalized predictions. In this work, we show that models that use group attributes can assign unnecessarily inaccurate predictions to specific groups - i.e., that training a model with group attributes can reduce performance for specific groups. We propose formal conditions to ensure the "fair use" of group attributes in prediction models - i.e., collective preference guarantees that can be checked by training one additional model. We characterize how machine learning models can exhibit fair use violations due to standard practices in specification, training, and deployment. We study the prevalence of fair use violations in clinical prediction models. Our results highlight the difficulty of resolving fair use violations through changes to model development, underscore the need to measure the gains of personalization for all groups who provide personal data, and illustrate actionable interventions to mitigate harm.
§ 1 INTRODUCTION
Machine learning models are often used to support or automate decisions that affect people. In medicine, for example, models diagnose illnesses [64, 31, 73], estimate survival rates [78], and predict treatment response [41]. In such applications, medical decisions follow the ethical principles of beneficence ("do the best") and non-maleficence ("do no harm") [8]. In turn, models that support medical decisions are designed to perform as well as possible without inflicting harm. These principles explain why so many clinical prediction models use group attributes that encode characteristics like sex and age - i.e. characteristics that would be prohibited for models in lending or hiring. To predict as well as possible on a heterogeneous population, models must encode all characteristics that could tell people apart [47].
The prevalence of group attributes in prediction models reflects a need for personalization, ${}^{1}$ but do personalized models that use group attributes improve performance for every group? In this paper, we refer to this principle as fair use. Fair use enshrines the basic promise of personalization in applications like precision medicine - i.e., that each person who reports personal characteristics should expect a tailored performance gain in return. In prediction tasks with group attributes, this means that every group should expect better performance from a personalized model that solicits group membership compared to a generic model that does not. These gains should be tailored, meaning that every group should prefer their personalized predictions over the personalized predictions assigned to another group. Machine learning models are trained to use group attributes in ways that improve performance at a population level. In practice, this means that models trained with group attributes can assign predictions that are unnecessarily inaccurate to specific groups due to routine decisions in model specification or model selection (see Figure 1). In many real-world applications, this drop in performance reflects harm. In clinical applications, for example, inaccurate predictions undermine medical decisions and health outcomes. This harm is silent and avoidable. Silent, because fair use violations would only draw attention if model developers were to evaluate the gains of personalization for intersectional groups. Avoidable, because a fair use violation shows that a group could receive better predictions from a generic model or a personalized model for another group; thus we can always resolve a fair use violation by assigning predictions from this better-performing model.
${}^{1}$ Personalization is a term that encompasses a breadth of techniques that use personal data. Here, we use it to describe approaches that target groups rather than individuals - i.e., "categorization" rather than "individualization" as per the taxonomy of Fan & Poole [27].
<table><tr><td>Group</td><td>Size</td><td colspan="2">Error Rate</td><td>Gain</td></tr><tr><td>$g$</td><td>${n}_{g}$</td><td>$R\left( {h}_{0}\right)$</td><td>${R}_{\mathbf{g}}\left( {h}_{\mathbf{g}}\right)$</td><td>${\Delta }_{\mathbf{g}}\left( {{h}_{\mathbf{g}},{h}_{0}}\right)$</td></tr><tr><td>female, &lt;30</td><td>48</td><td>38.1%</td><td>26.8%</td><td>11.3%</td></tr><tr><td>male, &lt;30</td><td>49</td><td>23.9%</td><td>26.7%</td><td>-2.8%</td></tr><tr><td>female, 30 to 60</td><td>307</td><td>30.3%</td><td>29.1%</td><td>1.2%</td></tr><tr><td>male, 30 to 60</td><td>307</td><td>15.4%</td><td>15.2%</td><td>0.2%</td></tr><tr><td>female, 60+</td><td>123</td><td>19.3%</td><td>21.9%</td><td>-2.6%</td></tr><tr><td>male, 60+</td><td>181</td><td>11.0%</td><td>8.2%</td><td>2.8%</td></tr><tr><td>Total</td><td>1152</td><td>20.4%</td><td>19.4%</td><td>1.0%</td></tr></table>
Figure 1: Personalization can reduce performance for specific groups. We show the gains of personalization for a classifier to screen for obstructive sleep apnea (i.e., the apnea dataset in §4). We fit a personalized model ${h}_{g}$ and a generic model ${h}_{0}$ with logistic regression, personalizing ${h}_{g}$ with a one-hot encoding of sex and age_group. As shown, personalization reduces training error from 20.4% to 19.4% but increases training error for 2 groups: (female, 60+) and (male, <30). These effects are also present on test data.
Although many prediction models use group attributes to assign personalized predictions, there is little awareness that this practice could reduce performance at a group level [see e.g., 2, 63]. Simply put, it is hard to imagine how a model that accounts for group membership can perform worse than a model that does not. Our goal in this paper is to expose this effect and lay the foundations to address it. To this end, we characterize how fair use violations arise, demonstrate their prevalence in real-world applications, and propose interventions to mitigate their harm. Specifically, the main contributions of our work include:
1. We propose formal conditions to ensure the fair use of group attributes in prediction models.
2. We characterize how common approaches to personalization in machine learning can lead personalized models to exhibit fair use violations. These "failure modes" delineate the root causes of fair use violations, and inform interventions that mitigate harm.
3. We conduct a comprehensive study on the gains of personalization in clinical prediction models for decision-making, ranking, and risk assessment. Our results demonstrate the prevalence of fair use violations across model classes and personalization techniques, and highlight the challenges of resolving these violations through changes to model development.
4. We present a case study on personalization for a model trained to predict mortality for patients with acute kidney injury. Our study shows how a fair use audit can safeguard against "race correction" in clinical prediction models, and facilitate targeted interventions that reduce harm (Appendix F).
§ 2 FAIR USE GUARANTEES
In this section, we present formal conditions for the fair use of group attributes in prediction. We provide notation and preliminaries for this section in Appendix A.
§ 2.1 FAIR USE
We start with Definition 1, which characterizes the fair use of a group attribute in terms of collective preference guarantees.
Definition 1 (Fair Use). A personalized model $h : \mathcal{X} \times \mathcal{G} \rightarrow \mathcal{Y}$ guarantees the fair use of a group attribute $\mathcal{G}$ if
$$
{\Delta }_{\mathbf{g}}\left( {{h}_{\mathbf{g}},{h}_{0}}\right) \geq 0 \quad \text{ for all groups }\mathbf{g} \in \mathcal{G}, \tag{1}
$$

$$
{\Delta }_{\mathbf{g}}\left( {{h}_{\mathbf{g}},{h}_{{\mathbf{g}}^{\prime }}}\right) \geq 0 \quad \text{ for all groups }\mathbf{g},{\mathbf{g}}^{\prime } \in \mathcal{G}. \tag{2}
$$
Condition (1) captures rationality for group $\mathbf{g}$ : a majority of group $\mathbf{g}$ prefers a personalized model ${h}_{\mathbf{g}}$ to a generic model ${h}_{0}$ . Condition (2) captures envy-freeness for group $\mathbf{g}$ : a majority of group $\mathbf{g}$ prefers their predictions to predictions personalized for any other group. These conditions enshrine minimal expectations of groups from a personalized model. Without rationality, a majority in some group would prefer the generic model. Without envy-freeness, a majority in some group would prefer the personalized predictions assigned to another group.
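A fair use audit over these two conditions can be sketched as follows. Everything here is illustrative (the function name and the toy risk numbers are ours), with ${\Delta}_{\mathbf{g}}(h, h^{\prime})$ read as the risk of $h^{\prime}$ minus the risk of $h$ on group $\mathbf{g}$'s data:

```python
# Minimal sketch of a fair-use audit for conditions (1) and (2).
# Names and numbers are illustrative, not from the paper.
def audit_fair_use(risk_generic, risk_personalized):
    """
    risk_generic[g]: risk of the generic model h0 on group g.
    risk_personalized[g][g2]: risk on group g's data when predictions are
    personalized as if the group were g2 (diagonal = truthful reporting).
    Returns lists of rationality and envy-freeness violations.
    """
    rationality, envy = [], []
    for g in risk_generic:
        # Condition (1): Delta_g(h_g, h0) = R_g(h0) - R_g(h_g) >= 0
        if risk_generic[g] - risk_personalized[g][g] < 0:
            rationality.append(g)
        # Condition (2): Delta_g(h_g, h_g2) >= 0 for every other group g2
        for g2 in risk_generic:
            if g2 != g and risk_personalized[g][g2] - risk_personalized[g][g] < 0:
                envy.append((g, g2))
    return rationality, envy

# Toy error rates: the second group is harmed by personalization and would
# do better with the first group's predictions.
generic = {"A": 0.30, "B": 0.20}
personalized = {"A": {"A": 0.25, "B": 0.35}, "B": {"A": 0.21, "B": 0.24}}
print(audit_fair_use(generic, personalized))  # → (['B'], [('B', 'A')])
```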
The fair use conditions in Definition 1 are collective, in that performance is measured over individuals in a group; and weak, in that the expected performance gain is non-negative - i.e., no group will be harmed. The conditions can be adapted to different prediction tasks by choosing a suitable risk metric. Since fair use conditions represent guarantees on the expected gains of personalization, a suitable metric should measure model performance exactly (cf. a surrogate metric that we optimize to fit a model; see Figure 5 in Section 3). In classification tasks where we want accurate decisions, this would be the error rate. In tasks where we want reliable risk estimates, it would be the expected calibration error [54].
Personalized models that obey fair use guarantees incentivize groups to truthfully report group membership in deployment [see e.g., 39, 62, 30].
§ 2.2 USE CASES
Relevant use cases for fair use guarantees include:
Protected Classes: Models sometimes include group attributes that encode immutable characteristics due to application-specific norms or special provisions [see 44, 45]. For example, sex is a protected characteristic in employment law, but not in medicine [see e.g., 56, for a discussion on the use of sex to predict cardiovascular disease]. Likewise, U.S. regulations allow credit scores to use age if it does not harm older applicants [15]. In such cases, models should use these attributes in a way that leads to tailored performance gains for every group.
Sensitive Data: Models that use attributes like hiv status should guarantee a tailored improvement in performance for the sensitive group, hiv $= +$. Otherwise, it would be better not to solicit this information in the first place, as the information could inflict harm when leaked [see e.g., 6].
Self-Reported Data: Certain kinds of models require users to report their data at prediction time [see e.g., self-report diagnostics 42, 67]. These models should obey fair use conditions to incentivize users to report their data truthfully (see Remark 2).
Costly Data: Group attributes can encode data collected at prediction time - e.g., an attribute like tumor_subtype whose value can only be determined by an invasive medical test. Models that ensure fair use with respect to tumor_subtype guarantee that patients with a specific type of tumor will not receive a less accurate prediction after undergoing the procedure.
§ 3 FAILURE MODES OF PERSONALIZATION
In this section, we describe how common approaches to personalization can reduce performance for specific groups. Our goal is to highlight failure modes that apply to a broad range of prediction tasks. We pair each failure mode with toy examples, focusing on simple classification tasks that can be checked manually. ${}^{2}$
§ 3.1 MODEL SPECIFICATION
We start with misspecification - i.e., when we fit models that cannot represent the role of group membership in the data distribution. A common form of misspecification occurs when we personalize simple models using a one-hot encoding. In such cases, models exhibit fair use violations on data distributions that exhibit intersectionality (see Figure 2). Consider, for example, a logistic regression model with a one-hot encoding that assigns higher risk to patients who are old and to patients who are male. This would lead to a fair use violation for patients who are old and male if their true risk were lower than that of either group alone.
${}^{2}$ In most cases, we train a linear classifier that minimizes the error rate on a perfectly sampled training dataset - i.e., where $\frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}1\left\lbrack {{\mathbf{x}}_{i} = \mathbf{x},{y}_{i} = y,{\mathbf{g}}_{i} = \mathbf{g}}\right\rbrack = \mathbb{P}\left( {\mathbf{x},y,\mathbf{g}}\right)$ for all $\left( {\mathbf{x},y,\mathbf{g}}\right) \in \mathcal{X} \times \mathcal{Y} \times \mathcal{G}$ . This condition ensures that the training error matches the test error.
<table><tr><td>Group</td><td colspan="2">Data</td><td colspan="2">Predictions</td><td colspan="2">Mistakes</td><td>Gain</td></tr><tr><td>$g$</td><td>${n}_{\mathbf{g}}^{ + }$</td><td>${n}_{g}^{ - }$</td><td>${h}_{0}$</td><td>${h}_{g}$</td><td>${R}_{\mathbf{g}}\left( {h}_{0}\right)$</td><td>${R}_{g}\left( {h}_{g}\right)$</td><td>$\Delta {R}_{\mathbf{g}}\left( {{h}_{\mathbf{g}},{h}_{0}}\right)$</td></tr><tr><td>young, female</td><td>0</td><td>24</td><td>-</td><td>+</td><td>0</td><td>24</td><td>-24</td></tr><tr><td>young, male</td><td>25</td><td>0</td><td>-</td><td>+</td><td>25</td><td>0</td><td>25</td></tr><tr><td>old, female</td><td>25</td><td>0</td><td>-</td><td>+</td><td>25</td><td>0</td><td>25</td></tr><tr><td>old, male</td><td>0</td><td>27</td><td>-</td><td>-</td><td>0</td><td>0</td><td>0</td></tr><tr><td>Total</td><td>50</td><td>51</td><td></td><td></td><td>50</td><td>24</td><td>26</td></tr></table>
Figure 2: Fair use violations due to model misspecification. Here, we are given ${n}^{ + } = {50}$ positive examples and ${n}^{ - } = {51}$ negative examples for a 2D classification task where $\mathbf{g} \in \{$ male, female $\} \times \{$ old, young $\}$ . We fit two linear classifiers: ${h}_{0}$ , a generic model without group attributes, and ${h}_{g}$ , a personalized model with a one-hot encoding. As shown, personalization reduces overall error from 50 to 24. However, not all groups benefit from personalization: (young, female) now receives less accurate predictions while (old, male) receives no gain. Here, ${h}_{g}$ also violates envy-freeness for (young, female), as individuals in this group would receive more accurate predictions by misreporting their group membership as (old, male).
Misspecification can also arise due to a failure to account for group-specific interaction effects - e.g., instances where group attributes act as mediator or moderator variables [see e.g., 7]. In Figure 3, we show an example that exhibits the hallmarks of personalization: a generic model performs poorly on "heterogeneous" groups $A$ and $C$ , and a personalized model that accounts for group membership improves overall performance by assigning more accurate predictions to $A$ and $C$ . In this case, the resulting model exhibits a fair use violation for group $B$ because a generic model performs as well as possible for group $B$ . In practice, we can avoid these issues by either fitting models that are rich enough to capture these effects, or by training a separate model for each group. Both are challenging in tasks with multiple groups as we must either specify interactions for each group, or fit models using a limited amount of data for each group.
Figure 3: Fair use violation resulting from model misspecification. We consider a 2D classification task with heterogeneous groups $\mathbf{g} = \{ A,B,C\}$ where an ideal model should assign a personalized intercept to each group and a personalized slope to group $B$ . In this case, a personalized model with a one-hot encoding would fit a personalized intercept for each group, but fail to fit a personalized slope for group $B$ . The personalized model would improve overall performance by assigning more accurate predictions to groups $A$ and $C$ . However, it would result in a fair use violation by performing worse for group $B$ .
§ 3.2 MODEL SELECTION
Model development often involves choosing one model from a family of candidate models - e.g., when we set a regularization penalty to avoid overfitting, or choose a subset of variables to improve usability. Common criteria for model selection consist of choosing a model on the basis of population-level performance [e.g., mean K-CV test error 4]. In practice, this choice can lead to models that reduce performance for a specific group. We demonstrate this effect in Figure 4. The example highlights how fair use violations may be unavoidable in settings where we are forced to assign predictions with a single model - as there may not exist a model that ensures fair use for all groups.
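This failure mode can be reproduced with toy numbers: select the candidate with the lowest population-level error and observe that some group can be left worse off. All numbers below are illustrative, not taken from the paper:

```python
# Toy illustration of the model-selection failure mode: the candidate with
# the best population-level error is not the best for every group.
candidates = {
    "lambda=0.1": {"A": 0.18, "B": 0.30},  # per-group error rates
    "lambda=1.0": {"A": 0.22, "B": 0.24},
}
sizes = {"A": 300, "B": 100}

def population_error(per_group):
    n = sum(sizes.values())
    return sum(per_group[g] * sizes[g] for g in sizes) / n

best = min(candidates, key=lambda c: population_error(candidates[c]))
print(best)  # population-level selection picks "lambda=0.1" ...
# ... even though group B has lower error under "lambda=1.0"
print(candidates[best]["B"], min(c["B"] for c in candidates.values()))
```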
§ 3.3 OTHER FAILURE MODES & DISCUSSION
Work in personalization naturally presumes that fitting a model with group attributes will provide a uniform performance gain to all groups. In practice, however, this only holds under restrictive assumptions. We include a discussion of other failure modes along with examples in Appendix E, including: training with a surrogate loss function; generalization; and dataset shifts. The failure modes that we have covered in this section were chosen because they motivate potential interventions for model development. For example, one could avoid the fair use violations in Figure 2 by using an intersectional one-hot encoding, and avoid violations across all cases by training decoupled models.
§ 4 EMPIRICAL STUDY
In this section, we study fair use in clinical prediction models - i.e. models that routinely include group attributes where fair use violations inflict harm. Our goals are to measure the prevalence of fair use violations and to evaluate how these change as a result of interventions in model development. We attach all software to reproduce the results in this section to our submission, and include additional details on our setup and additional experimental results in the supplement.
§ 4.1 SETUP
We work with 6 datasets for clinical prediction tasks (see Table 1). We split each dataset into a training sample (80%) to fit models, and a test sample (20%) to evaluate the gains of personalization. We use the training data from each dataset to fit 9 kinds of personalized models. Each personalized model belongs to one of 3 model classes: logistic regression (LR), random forests (RF), and neural nets (NN); and accounts for group membership using one of 3 personalization techniques.
The three personalization techniques are: One-hot Encoding (1Hot), where we fit a model with dummy variables for each group attribute; Intersectional Encoding (All), where we fit a model with dummy variables for each intersectional group; and Decoupling (DCP), where we fit a separate model for each intersectional group using its own data. The three techniques represent increasingly complex ways to account for group membership, where complexity is measured by the interactions between group attributes and other features: 1Hot reflects no interactions; All reflects interactions between group attributes; and DCP reflects all possible interactions between group attributes and features.
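The three encodings can be illustrated with a small sketch; the attribute values follow Figure 1, and the helper names are ours rather than the paper's:

```python
# Sketch of how the three techniques encode group membership.
# Attribute values (sex, age group) follow Figure 1 and are illustrative.
from itertools import product

SEX = ["female", "male"]
AGE = ["<30", "30 to 60", "60+"]

def encode_1hot(sex, age):
    # 1Hot: one dummy per attribute value, no interactions
    return [int(sex == s) for s in SEX] + [int(age == a) for a in AGE]

def encode_all(sex, age):
    # All: one dummy per intersectional group (sex x age)
    return [int((sex, age) == g) for g in product(SEX, AGE)]

# DCP: no group features at all -- instead fit one model per intersectional
# group on that group's own data, e.g. models[(sex, age)].fit(X_g, y_g).

print(encode_1hot("female", "60+"))  # → [1, 0, 0, 0, 1]
print(encode_all("female", "60+"))   # → [0, 0, 1, 0, 0, 0]
```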
We evaluate the gains of personalization for each model in terms of three performance metrics: (1) error rate, which reflects the accuracy of yes-or-no predictions [for a diagnostic test, e.g., 26]; (2) expected calibration error (ECE), which measures the reliability of risk predictions [for a medical risk score, e.g., 13]; (3) area under ROC curve (AUC), which measures accuracy in ranking [for a prioritization tool, e.g., 77].
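Of these metrics, ECE is the least standard to compute by hand; a minimal sketch of the usual binned definition [54] follows (the bin count and toy data are assumptions):

```python
# Sketch of expected calibration error (ECE) with equal-width probability bins:
# ECE = sum_b (|B_b|/n) * |mean confidence in B_b - accuracy in B_b|.
def expected_calibration_error(probs, labels, n_bins=10):
    n = len(probs)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # place p == 1.0 in the last bin
        idx = [i for i, p in enumerate(probs)
               if (lo <= p < hi) or (b == n_bins - 1 and p == 1.0)]
        if not idx:
            continue
        conf = sum(probs[i] for i in idx) / len(idx)
        acc = sum(labels[i] for i in idx) / len(idx)
        ece += (len(idx) / n) * abs(conf - acc)
    return ece

# A perfectly calibrated toy example: predictions of 0.8 with 80% positives.
probs = [0.8] * 10
labels = [1] * 8 + [0] * 2
print(expected_calibration_error(probs, labels))
```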
§ 4.2 RESULTS
We summarize our results for logistic regression in Table 1 and for other model classes in Appendix G.
On Prevalence Our results show that personalized models can improve performance at a population level yet reduce performance for specific groups. These fair use violations arise across datasets, personalization techniques, and model classes. Consider the standard configuration used to develop clinical prediction models - i.e., a logistic regression model with a one-hot encoding of group attributes $\left( {\mathrm{{LR}} + 1\mathrm{{Hot}}}\right)$ . Here, we find that at least one group experiences a statistically significant fair use violation in terms of error on 4/6 datasets (5/6 for AUC and ECE).
On Personalization Techniques Our results show that there is no one personalization technique that minimizes fair use violations. In Table 1, for example, the best personalization technique for cardio_eicu is intersectional encoding while the best personalization technique for mortality
<table><tr><td rowspan="2">Dataset</td><td rowspan="2">Metrics</td><td colspan="3">Test AUC</td><td colspan="3">Test ECE</td><td colspan="3">Test Error</td></tr><tr><td>1Hot</td><td>All</td><td>DCP</td><td>1Hot</td><td>All</td><td>DCP</td><td>1Hot</td><td>All</td><td>DCP</td></tr><tr><td rowspan="5">apnea<br>$n = {1152}$, $d = {26}$<br>$\mathcal{G} = \{$ age, sex $\}$, $m = 6$ [66]</td><td>Personalized</td><td>0.750</td><td>0.750</td><td>0.803</td><td>7.5%</td><td>5.5%</td><td>7.2%</td><td>34.2%</td><td>33.8%</td><td>26.2%</td></tr><tr><td>Gain</td><td>0.001</td><td>0.000</td><td>0.053</td><td>-1.5%</td><td>0.6%</td><td>-1.1%</td><td>-1.0%</td><td>-0.7%</td><td>7.0%</td></tr><tr><td>Best/Worst Gain</td><td>0.002 / -0.001</td><td>0.001 / -0.016</td><td>0.119 / -0.005</td><td>0.7% / -7.1%</td><td>0.7% / -4.6%</td><td>1.7% / -6.6%</td><td>0.0% / -9.9%</td><td>1.8% / -7.8%</td><td>21.7% / -7.8%</td></tr><tr><td>Rat. Gains/Viols</td><td>1/2</td><td>1/4</td><td>4/0</td><td>1/3</td><td>1/3</td><td>2/2</td><td>0/4</td><td>1/3</td><td>4/1</td></tr><tr><td>EF Gains/Viols</td><td>0/0</td><td>0/0</td><td>3/0</td><td>0/3</td><td>0/3</td><td>4/0</td><td>0/6</td><td>0/5</td><td>4/1</td></tr><tr><td rowspan="5">cardio_eicu<br>$n = {1341}$, $d = {49}$<br>$\mathcal{G} = \{$ age, sex $\}$, $m = 4$ [60]</td><td>Personalized</td><td>0.768</td><td>0.767</td><td>0.762</td><td>4.4%</td><td>4.6%</td><td>8.9%</td><td>29.1%</td><td>29.1%</td><td>29.5%</td></tr><tr><td>Gain</td><td>0.000</td><td>-0.001</td><td>-0.007</td><td>0.4%</td><td>0.2%</td><td>-4.1%</td><td>-0.4%</td><td>-0.4%</td><td>-0.9%</td></tr><tr><td>Best/Worst Gain</td><td>0.002 / -0.001</td><td>0.001 / -0.001</td><td>0.094 / -0.099</td><td>1.6% / -1.5%</td><td>0.9% / -0.2%</td><td>-1.1% / -6.3%</td><td>0.0% / -3.1%</td><td>0.2% / -3.1%</td><td>12.9% / -8.9%</td></tr><tr><td>Rat. Gains/Viols</td><td>2/2</td><td>2/1</td><td>1/2</td><td>2/1</td><td>1/0</td><td>0/4</td><td>0/2</td><td>1/2</td><td>2/2</td></tr><tr><td>EF Gains/Viols</td><td>0/0</td><td>0/0</td><td>3/1</td><td>0/2</td><td>0/2</td><td>1/1</td><td>0/3</td><td>0/3</td><td>3/1</td></tr><tr><td rowspan="5">cardio_mimic<br>$n = {5289}$, $d = {49}$<br>$\mathcal{G} = \{$ age, sex $\}$, $m = 4$ [38]</td><td>Personalized</td><td>0.854</td><td>0.854</td><td>0.870</td><td>2.1%</td><td>2.3%</td><td>2.3%</td><td>23.3%</td><td>23.4%</td><td>21.4%</td></tr><tr><td>Gain</td><td>0.001</td><td>0.001</td><td>0.017</td><td>-0.4%</td><td>-0.5%</td><td>-0.6%</td><td>0.3%</td><td>0.3%</td><td>2.2%</td></tr><tr><td>Best/Worst Gain</td><td>0.001 / -0.000</td><td>0.001 / -0.000</td><td>0.051 / 0.006</td><td>0.5% / 0.4%</td><td>0.6% / -0.2%</td><td>0.6% / -2.3%</td><td>0.9% / -0.1%</td><td>0.9% / -0.1%</td><td>7.6% / -0.2%</td></tr><tr><td>Rat. Gains/Viols</td><td>2/1</td><td>2/1</td><td>4/0</td><td>4/0</td><td>3/0</td><td>1/2</td><td>3/0</td><td>3/0</td><td>3/0</td></tr><tr><td>EF Gains/Viols</td><td>0/0</td><td>0/0</td><td>4/0</td><td>1/3</td><td>0/1</td><td>3/1</td><td>0/3</td><td>0/3</td><td>4/0</td></tr><tr><td rowspan="5">heart<br>$n = {181}$, $d = {26}$<br>$\mathcal{G} = \{$ sex, age $\}$, $m = 4$ [17]</td><td>Personalized</td><td>0.870</td><td>0.846</td><td>0.817</td><td>8.4%</td><td>17.8%</td><td>17.5%</td><td>19.7%</td><td>19.7%</td><td>15.8%</td></tr><tr><td>Gain</td><td>-0.007</td><td>-0.030</td><td>-0.060</td><td>2.8%</td><td>-6.6%</td><td>-6.3%</td><td>-1.3%</td><td>-1.3%</td><td>2.6%</td></tr><tr><td>Best/Worst Gain</td><td>0.007 / -0.031</td><td>0.024 / -0.050</td><td>0.039 / -0.190</td><td>4.4% / -0.6%</td><td>-1.8% / -3.1%</td><td>10.1% / -4.6%</td><td>0.0% / -6.1%</td><td>0.0% / -12.1%</td><td>10.6% / -8.4%</td></tr><tr><td>Rat. Gains/Viols</td><td>1/1</td><td>1/1</td><td>0/3</td><td>2/1</td><td>0/4</td><td>2/1</td><td>0/1</td><td>0/3</td><td>3/1</td></tr><tr><td>EF Gains/Viols</td><td>0/0</td><td>0/0</td><td>1/2</td><td>0/2</td><td>0/3</td><td>2/2</td><td>0/1</td><td>0/1</td><td>2/1</td></tr><tr><td rowspan="5">mortality<br>$n = {25366}$, $d = {468}$<br>$\mathcal{G} = \{$ age, sex $\}$, $m = 6$ [38]</td><td>Personalized</td><td>0.848</td><td>0.848</td><td>0.880</td><td>2.0%</td><td>2.1%</td><td>2.5%</td><td>23.6%</td><td>23.4%</td><td>20.2%</td></tr><tr><td>Gain</td><td>0.000</td><td>0.001</td><td>0.033</td><td>0.2%</td><td>0.1%</td><td>-0.3%</td><td>-0.2%</td><td>-0.0%</td><td>3.2%</td></tr><tr><td>Best/Worst Gain</td><td>0.005 / -0.001</td><td>0.005 / -0.000</td><td>0.111 / 0.012</td><td>1.5% / 0.1%</td><td>2.6% / -0.3%</td><td>11.2% / -2.4%</td><td>0.8% / -2.5%</td><td>2.1% / -0.4%</td><td>20.1% / -0.5%</td></tr><tr><td>Rat. Gains/Viols</td><td>3/3</td><td>3/2</td><td>6/0</td><td>5/0</td><td>5/1</td><td>3/2</td><td>2/4</td><td>3/2</td><td>5/1</td></tr><tr><td>EF Gains/Viols</td><td>0/0</td><td>0/0</td><td>6/0</td><td>1/1</td><td>3/2</td><td>5/1</td><td>0/4</td><td>1/4</td><td>6/0</td></tr><tr><td rowspan="5">saps<br>$n = {7797}$, $d = {36}$<br>$\mathcal{G} = \{$ hiv, age $\}$, $m = 4$ [3]</td><td>Personalized</td><td>0.890</td><td>0.890</td><td>0.888</td><td>1.5%</td><td>1.5%</td><td>2.0%</td><td>18.9%</td><td>18.9%</td><td>18.5%</td></tr><tr><td>Gain</td><td>0.001</td><td>0.001</td><td>-0.001</td><td>0.1%</td><td>0.1%</td><td>-0.4%</td><td>0.0%</td><td>0.0%</td><td>0.4%</td></tr><tr><td>Best/Worst Gain</td><td>0.014 / -0.000</td><td>0.014 / -0.001</td><td>0.017 / -0.246</td><td>2.8% / -1.5%</td><td>2.4% / -0.6%</td><td>9.4% / -19.1%</td><td>19.0% / -10.4%</td><td>0.8% / -10.4%</td><td>3.5% / -23.3%</td></tr><tr><td>Rat. Gains/Viols</td><td>1/1</td><td>1/1</td><td>2/2</td><td>2/0</td><td>7/1</td><td>2/2</td><td>2/1</td><td>1/3</td><td>2/1</td></tr><tr><td>EF Gains/Viols</td><td>0/0</td><td>0/0</td><td>2/1</td><td>2/2</td><td>2/2</td><td>3/1</td><td>1/1</td><td>2/2</td><td>2/2</td></tr></table>
Table 1: Performance of personalized logistic regression models on all datasets. We show the gains of personalization in terms of test AUC, ECE, and error. We report: model performance at the population level, the overall gain of personalization, the range of gains over $m$ intersectional groups, and the number of rationality and envy-freeness gains/violations (evaluated using a bootstrap hypothesis test at a 10% significance level).
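The gains/violations in Table 1 are flagged with a bootstrap hypothesis test at a 10% significance level. A minimal sketch of such a test, assuming per-patient 0/1 correctness scores for the personalized and the generic model within one group (the function name and data layout are illustrative assumptions, not the authors' code):

```python
import numpy as np

def bootstrap_gain_test(metric_personalized, metric_generic,
                        n_boot=2000, alpha=0.10, seed=0):
    """Paired bootstrap test for a group-level gain of personalization.

    Inputs are per-sample scores for one group (e.g., 0/1 correctness).
    Returns the observed gain and whether the bootstrap (1 - alpha)
    interval for the gain excludes zero.
    """
    rng = np.random.default_rng(seed)
    a = np.asarray(metric_personalized, dtype=float)
    b = np.asarray(metric_generic, dtype=float)
    n = len(a)
    gains = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, size=n)   # resample patients with replacement
        gains[i] = a[idx].mean() - b[idx].mean()
    lo, hi = np.quantile(gains, [alpha / 2, 1 - alpha / 2])
    observed = a.mean() - b.mean()
    return observed, bool(lo > 0 or hi < 0)  # significant gain or violation
```

A gain is claimed only when the interval for the group-level difference lies above zero; a violation is the symmetric case below zero.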
was decoupling. These strategies change across model classes: the corresponding strategies for neural networks on cardio_eicu and mortality are decoupling and an intersectional encoding, respectively (see Appendix G). In general, even strategies that exhibit few violations can fail critically. For example, $\mathrm{{LR}} + \mathrm{{DCP}}$ on saps leads to a ${10}\%$ increase in error for patients who are HIV-positive and over 30. Overall, these results suggest that the most consistent way to avoid the harm of a fair use violation is to check for it directly.
On Interventions in Model Development Our results show that routine decisions in model development can produce considerable differences in group-level performance and fair use violations. This suggests that, if we can spot fair use violations, we may be able to mitigate them through "interventions" in model development. In light of this, we consider interventions that address the failure modes in Section 3, e.g., using an intersectional one-hot encoding, training decoupled models, and equalizing sample sizes.
In general, we find that applying these strategies can often reduce fair use violations. For example, we can eliminate all fair use violations for cardio_mimic in our standard configuration by training decoupled models. However, there is no "best" intervention that consistently resolves these violations. Typically, this is because an intervention that resolves a violation for one group will precipitate a violation for another. In cardio_eicu, for instance, a logistic regression model fit with a one-hot encoding exhibits a violation on old males. Switching to an intersectional encoding fixes this violation but introduces a new one for old females.
On the Reliability of Gains & Violations Our results underscore the need for reliable procedures to discover fair use violations or to claim gains from personalization. We can often find detectable instances of benefit or harm. For example, on saps in our default configuration, we detect a gain from personalization for patients who are HIV-negative and older than 30. Likewise, in cardio_eicu, when training $\mathrm{{LR}} + \mathrm{{All}}$, we detect a fair use violation for old females (see, e.g., Rat. Gains/Viols in Table 1). One actionable finding from an evaluation of the gains of personalization is that a group may experience neither a meaningful gain nor harm from personalization. In such cases, one may wish to intervene to avoid soliciting unnecessary data: when group attributes encode information that is sensitive or that must be collected at prediction time (e.g., hiv_status or tumor_subtype), we may prefer not to solicit information that is not demonstrably useful for prediction.

NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/UeYQXtI7nsX/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,396 @@
# Generating Intuitive Fairness Specifications for Natural Language Processing
Anonymous Author(s)
Affiliation
Address
email
## Abstract
Text classifiers have promising applications in high-stakes tasks such as resume screening and content moderation. These classifiers must be fair and avoid discriminatory decisions by being invariant to perturbations of sensitive attributes such as gender or ethnicity. However, there is a gap between human intuition about these perturbations and the formal similarity specifications capturing them. While existing research has started to address this gap, current methods are based on hardcoded word replacements, resulting in specifications with limited expressivity or ones that fail to fully align with human intuition (e.g., in cases of asymmetric counterfactuals). This work proposes novel methods for bridging this gap by discovering expressive and intuitive individual fairness specifications. We show how to leverage unsupervised style transfer and GPT-3's zero-shot capabilities to automatically generate expressive candidate pairs of semantically similar sentences that differ along sensitive attributes. We then validate the generated pairs via an extensive crowdsourcing study, which confirms that most of these pairs align with human intuition about fairness in toxicity classification. We also show how limited amounts of human feedback can be leveraged to learn a similarity specification.

## 1 Introduction
Text classifiers are being employed in tasks related to automated hiring [1], content moderation [2] and reducing the toxicity of language models [3]. However, they were shown to exhibit biases based on sensitive attributes, e.g., gender [4] or demographics [5], even for tasks in which these dimensions should be irrelevant. This can lead to unfair decisions, distort analyses based on these classifiers, or propagate undesirable stereotypes to downstream applications. The intuition that certain demographic indicators should not influence decisions can be formalized in terms of individual fairness [6], which posits that similar inputs should be treated similarly. In a classification setting we assume similar treatment for two inputs to require both inputs to be classified the same, while the notion of input similarity captures the intuition that certain input characteristics should not influence model decisions.

Key challenge: generating valid, intuitive and diverse fairness constraints A key challenge for ensuring individual fairness is defining the similarity notion $\phi$, which can often be contentious, since fairness is a subjective concept as well as highly task-dependent [6, 7]. In text classification, most existing works have cast similarity in terms of word replacement [5, 8-10]. Given a sentence $s$, a similar sentence ${s}^{\prime }$ is generated by replacing each word in $s$ that belongs to a list of words ${A}_{i}$ indicative of a demographic group $i$ by a word from the list ${A}_{{i}^{\prime }}$ indicative of another group ${i}^{\prime } \neq i$. This approach has several limitations: (i) it relies on exhaustively curated word lists ${A}_{i}$ of sensitive terms, (ii) the expressivity of the generated pairs is limited to word replacements, and (iii) many terms are only indicative of demographic groups in specific contexts, hence directly replacing them with other terms will not always result in a similar pair $\left( {s,{s}^{\prime }}\right)$ according to human intuition. Indeed, word replacement rules can often produce sentence pairs that differ along an axis not relevant to fairness (e.g., by replacing "white house" with "black house"). In addition, they can generate asymmetric counterfactuals [5]: sentence pairs $\left( {s,{s}^{\prime }}\right)$ that look similar but do not warrant similar treatment. For example, in the context of toxicity classification, the text "The movie is so old" may not be considered toxic while "The movie is so gay" clearly is.
Figure 1: Workflow overview. We begin by generating sentence pairs using word replacement, and then add pairs of sentences leveraging style transfer and GPT-3. Then, we use active learning and crowdworker judgments to identify pairs that deserve similar treatment according to human intuition.
This work: generating fairness specifications for text classification The central challenge we consider in this work is generating a diverse set of input pairs that aligns with human intuition about which inputs should be treated similarly in the context of a fixed text classification task. We address this challenge via a three-stage pipeline (Fig. 1). First, we start from a dataset $D$ and generate a set ${C}^{\prime }$ of candidate pairs $\left( {s,{s}^{\prime }}\right)$ by applying word replacement to sentences $s \in D$ . Second, to improve the diversity of pairs, we extend ${C}^{\prime }$ to a larger set $C$ by borrowing ideas from unsupervised style transfer. We change markers of demographic groups, e.g., "women" or "black people" in sentences $s \in D$ by replacing the style classifier in modern unsupervised style transfer methods [11,12] with a classifier trained to identify mentions of demographic groups. In addition, we add pairs from GPT-3 [13], prompted to change markers of demographic groups for sentences in $D$ in a zero-shot fashion. Finally, to identify which of the generated pairs align with human intuition about fairness, we design a crowdsourcing study in which workers are presented with candidate pairs and indicate if the pairs should be treated similarly for the considered classification task or not. We employ active learning similar to [14] to train a BERT-based [15] classifier $\widehat{\varphi }$ to recognize pairs that should be treated similarly using a limited amount of human feedback and obtain a filtered set of pairs ${\widehat{C}}^{ \star } \subseteq C$ . Our pipeline can be used in the context of most text classification tasks and in this work we instantiate it in the context of toxicity classification using a large dataset for online content moderation.
Main contributions We make the following contributions: (i) we introduce a method for generating datasets of diverse candidate pairs for individual fairness specifications, leveraging GPT-3 and unsupervised style transfer to modify demographic attributes mentioned in sentences; (ii) we show that human feedback can be used to train a classifier which automatically identifies pairs that align with human fairness intuitions for a considered downstream task; (iii) we instantiate our framework in the context of toxicity classification, demonstrating that the proposed pairs are more diverse than word replacement pairs only and that crowdsourcing workers agree with more than 75% of them.
## 2 Related Work

Bias in NLP Early work on bias in NLP has focused on unwanted correlations between the word embeddings of identifiers for protected demographic groups and unrelated categories such as occupations [16, 17]. Recently, language models have been found to harbor stereotypical biases [10, 18-20]. Specific to text classification, identity terms such as "gay" and explicit indicators of gender have been shown to impact the outputs of classifiers trained to identify toxic comments [8] or to predict a person's occupation from their biography [4]. Olteanu et al. [21] demonstrate that human perceptions of the quality of a toxicity classifier can depend on the precise nature of errors made by the classifier, as well as the annotators' previous experiences with hate speech. Blodgett et al. [22] recommend authors to explicitly consider why, how and to whom the biases they identify are harmful.

Language models for data augmentation Ross et al. [23] automatically create contrast sets [24] with a language model perturbing sentences based on control codes, while Rios [25] uses style transfer to change the dialect of African-American Vernacular English tweets to Standard American English to evaluate the dialect sensitivity of toxicity classifiers. Hartvigsen et al. [26] use language models to generate a balanced dataset of benign and toxic comments about minority groups to combat classifiers' reliance on spurious correlations between identity terms and toxicity. Meanwhile, Qian et al. [27] train a perturber model to imitate human rewrites of comments that modify mentions of demographic groups, and demonstrate that their perturber can be used to reduce demographic biases in language models. However, this approach is limited by its reliance on expensive human rewrites and is only used for perturbations along fixed demographic axes such as gender.
Learning fairness notions from data Ilvento [28] provides an algorithm to approximate individual fairness metrics for $N$ datapoints in $O\left( {N\log N}\right)$ queries, which can be practically infeasible. Meanwhile, Mukherjee et al. [29] suggest training a classifier to predict binary fairness judgments on pairs $\left( {s,{s}^{\prime }}\right)$ in order to learn a fairness metric $\phi$ , but restrict themselves to Mahalanobis distances on top of a feature representation $\xi \left( s\right)$ , limiting their expressive power. In contrast to our work, these works do not validate their learned fairness notions with human feedback. To that end, Cheng et al. [30] present an interface to holistically elicit stakeholders' fairness judgments, whereas Wang et al. [31] aim to learn a bilinear fairness metric for tabular data based on clustering human annotations.
## 3 Method
This section presents our end-to-end framework for generating and filtering valid candidate pairs for individual fairness specifications. In Sec. 3.1 we expand on existing word replacement definitions of individual fairness in text classification [5] by implementing three different ways to modify markers of demographic groups mentioned in a sentence $s$ . Then, in Sec. 3.2 we leverage human feedback to learn an approximate similarity function $\widehat{\varphi }$ to identify a set of relevant constraints ${\widehat{C}}^{ \star } \subseteq C$ .
### 3.1 Expanding fairness constraints
Word Replacement First, we enrich the word replacement method by using the extensive lists of words associated with different protected demographic groups presented in Smith et al. [20]. The pool of terms is substantially larger than the 50 identity terms from Garg et al. [5]. We modify markers of group $j$ in a comment $s$ by replacing all words on the respective list of words associated with group $j$ with words from the list associated with the target group ${j}^{\prime }$ .
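As a concrete illustration, a minimal word-replacement perturbation might look as follows; the tiny word lists and the 1:1 alignment between them are stand-in assumptions, not the much larger lists of Smith et al. [20]:

```python
import re

# Hypothetical mini word lists for illustration only.
GROUP_TERMS = {
    "male": ["he", "him", "his", "man", "men"],
    "female": ["she", "her", "hers", "woman", "women"],
}

# Assume a 1:1 alignment between the source and target lists.
REPLACEMENTS = dict(zip(GROUP_TERMS["male"], GROUP_TERMS["female"]))

def word_replace(sentence: str, mapping: dict) -> str:
    """Replace every whole-word occurrence of a source-group term,
    preserving leading capitalization."""
    def sub(match):
        word = match.group(0)
        repl = mapping.get(word.lower(), word)
        return repl.capitalize() if word[0].isupper() else repl
    pattern = r"\b(" + "|".join(map(re.escape, mapping)) + r")\b"
    return re.sub(pattern, sub, sentence, flags=re.IGNORECASE)

print(word_replace("He said the men were late.", REPLACEMENTS))
# "She said the women were late."
```

Replacing terms this blindly is exactly what produces the "white house" / "black house" failures discussed above, which motivates the style transfer and GPT-3 alternatives below.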
Unsupervised Style Transfer Second, we use an unsupervised style transfer approach based on prototype editing (see [32] for an extensive review) to transform markers of a demographic group $j$ in a sentence $s$ to markers of another demographic group ${j}^{\prime }$ , creating a new sentence ${s}^{\prime }$ . Prototype editing identifies markers $a$ of a source style $A$ in a text $s$ , and substitutes them by markers ${a}^{\prime }$ of a target style ${A}^{\prime }$ . Our approach leverages that modern prototype editing algorithms utilize saliency methods in combination with a style classifier to identify markers of style, and instead uses a RoBERTa-based [33] classifier $c$ trained to identify sentences that mention specific demographic groups $j$ . Combining ideas from [11] and [12], we transform a sentence $s$ to mention demographic attribute ${j}^{\prime }$ instead of $j$ by iteratively masking tokens with large impact on the likelihood ${p}_{c}\left( {j \mid {s}_{m}}\right)$ (initially starting with ${s}_{m} = s$ ) until we reach a certain threshold, and filling the masked tokens using a BART-based [34] group-conditioned generator $g\left( {{s}_{m},{j}^{\prime }}\right)$ trained to fill masks in sentences about group ${j}^{\prime }$ .
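The masking loop described above can be sketched as follows. Here `source_prob` and `infill` are stand-ins for the RoBERTa-based group classifier $c$ and the BART-based group-conditioned generator $g\left( {{s}_{m},{j}^{\prime }}\right)$; the greedy leave-one-out saliency is a simplification of the saliency methods used in practice:

```python
def transfer_group(tokens, source_prob, infill,
                   threshold=0.2, mask="<mask>", max_steps=10):
    """Greedy prototype-editing sketch: mask the token most indicative of
    the source group until p(source group) drops below `threshold`, then
    infill the masks conditioned on the target group."""
    tokens = list(tokens)
    for _ in range(max_steps):
        if source_prob(tokens) <= threshold:
            break
        best_i, best_p = None, source_prob(tokens)
        for i, tok in enumerate(tokens):
            if tok == mask:
                continue
            trial = tokens[:i] + [mask] + tokens[i + 1:]
            p = source_prob(trial)
            if p < best_p:  # masking this token reduces the group signal most
                best_i, best_p = i, p
        if best_i is None:  # no single additional mask lowers the probability
            break
        tokens[best_i] = mask
    return infill(tokens)
```

In the paper's setting, `source_prob` corresponds to ${p}_{c}\left( {j \mid {s}_{m}}\right)$ and `infill` to the generator trained to fill masks in sentences about group ${j}^{\prime }$.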

The unsupervised style transfer approach is likely to reproduce terms encountered during training, helping it pick up on rare demographic terms particular to its training distribution, which can be chosen to match the training distribution of the downstream task. In addition, unlike concurrent work by Qian et al. [27], unsupervised style transfer only requires labels ${y}_{j}\left( s\right)$ indicating the mention of demographic group $j$ in a sentence $s$, rather than expensive human-written examples of demographic group transfer. This allows us to modify mentions of demographic groups across axes like gender, religion and race, rather than restricting ourselves to changes within these axes.
GPT-3 Lastly, we leverage GPT-3 [13] to transform markers of protected demographic groups. We consider three methods: using GPT-3 standard mode and GPT-3 edit mode to rewrite sentences mentioning group $j$ to mention group ${j}^{\prime }$ in a zero-shot fashion, as well as postprocessing sentences generated by word replacement to fix logical and grammatical inconsistencies with GPT-3 edit mode.

To ensure that mentions of demographic group $j$ were indeed replaced by ${j}^{\prime }$ going from $s$ to ${s}^{\prime }$, we use the same group-presence classifier $c$ as in the unsupervised style transfer approach to heuristically identify successful group transfer, and discard pairs $\left( {s,{s}^{\prime }}\right)$ for which group transfer failed, for all three of our approaches. Implementation details are described in App. C, and App. E contains examples.
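A sketch of this heuristic filter, with `presence_prob` standing in for the group-presence classifier $c$ (the threshold and function names are assumptions for illustration):

```python
def group_transfer_ok(s, s_prime, presence_prob, j, j_prime, tau=0.5):
    """Keep a pair only if group j is mentioned in s, no longer mentioned
    in s', and group j' is mentioned in s'. `presence_prob(text, group)`
    is a stand-in for the RoBERTa group-presence classifier c."""
    return (presence_prob(s, j) >= tau
            and presence_prob(s_prime, j) < tau
            and presence_prob(s_prime, j_prime) >= tau)

def filter_pairs(pairs, presence_prob, j, j_prime):
    """Discard candidate pairs (s, s') for which group transfer failed."""
    return [(s, sp) for s, sp in pairs
            if group_transfer_ok(s, sp, presence_prob, j, j_prime)]
```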
### 3.2 Learning the similarity function
In order to evaluate to what extent the proposed similarity criteria align with human intuition, we conduct a crowdsourcing study, described in more detail in Sec. 4, to obtain labels $\varphi \left( {s,{s}^{\prime }}\right)$ which indicate whether a pair $\left( {s,{s}^{\prime }}\right)$ should be treated similarly for the sake of individual fairness $\left( {\varphi \left( {s,{s}^{\prime }}\right) = 0}\right)$ or not $\left( {\varphi \left( {s,{s}^{\prime }}\right) = 1}\right)$. We train a BERT-based [15] probabilistic model ${p}_{\widehat{\varphi }}\left( {s,{s}^{\prime }}\right)$ that predicts values of the similarity function $\varphi \left( {s,{s}^{\prime }}\right)$ for pairs $\left( {s,{s}^{\prime }}\right) \in C$, and approximate the similarity function $\phi$ as $\widehat{\varphi }\left( {s,{s}^{\prime }}\right) \mathrel{\text{:=}} 1 \Leftrightarrow {p}_{\widehat{\varphi }}\left( {s,{s}^{\prime }}\right) > t$ for a given classification threshold $t$. To make optimal use of costly human queries, we employ active learning when training the classifier $\widehat{\varphi }$, selecting pairs to label based on the variation ratios $1 - \mathop{\max }\limits_{y}p\left( {y \mid x}\right)$, with $p$ estimated similarly to Grießhaber et al. [14], based on Dropout-based Monte-Carlo [35, 36] applied to our model's classification head. Concretely, we iteratively select new unlabeled training data ${D}_{i} \subset C \smallsetminus \mathop{\bigcup }\limits_{{j < i}}{D}_{j}$ with $\left| {D}_{i}\right| = {1000}$, based on the variation ratios, query labels for ${D}_{i}$, and retrain $\widehat{\varphi }$ on ${D}_{i}$. As different annotators can disagree about whether two sentences $s$ and ${s}^{\prime }$ should be treated similarly, we use a majority vote for evaluation. Inspired by Chen et al. [37]'s approach for dealing with noise in crowdsourcing, we use a single human query per pair $\left( {s,{s}^{\prime }}\right)$ during active learning, and relabel pairs that are likely to be mislabeled after active learning has concluded. See App. D for more details. When learning $\widehat{\varphi }$ is completed, we can define the set of filtered constraints ${\widehat{C}}^{ \star } = \left\{ {\left( {s,{s}^{\prime }}\right) \in C : \widehat{\varphi }\left( {s,{s}^{\prime }}\right) = 0}\right\}$.
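The variation-ratio acquisition step can be sketched as follows, assuming each unlabeled pair already comes with $T$ stochastic class-probability vectors from dropout-enabled forward passes (the BERT model itself is omitted):

```python
import numpy as np

def variation_ratio(mc_probs):
    """mc_probs: (T, n_classes) class probabilities from T stochastic
    forward passes (dropout active at inference) for one input.
    Returns 1 - max_y p(y | x), with p averaged over the T passes."""
    mean = np.asarray(mc_probs).mean(axis=0)
    return 1.0 - mean.max()

def select_batch(pool_mc_probs, k):
    """Pick the k most uncertain unlabeled pairs to query next."""
    scores = [variation_ratio(p) for p in pool_mc_probs]
    order = np.argsort(scores)[::-1]  # highest variation ratio first
    return order[:k].tolist()
```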
## 4 Experiments

In this section, we experimentally evaluate our framework. Our key findings are: (i) the pairs generated by our method are more diverse than word replacement pairs alone (Sec. 4.2), while mostly aligning with human intuition about fairness (Sec. 4.3), and (ii) the underlying similarity function $\varphi$ can be approximated by active learning from human judgments (Sec. 4.4).
### 4.1 Dataset and setup

We focus on toxicity classification on the Jigsaw Civil Comments dataset [38]. The dataset contains around 2 million online comments $s$ with labels $y\left( s\right)$ indicating toxicity. We use a subset ${D}^{\prime } \subset D$ with labels ${A}_{j}\left( s\right)$ that indicate the presence of group $j$ in $s$ for training our group-presence classifier $c$, and only consider comments $s$ that consist of at most 64 tokens. We construct a set $C$ of 100,000 constraints by applying our different generation approaches to $D$.${}^{1}$ More details on the generation and exact composition of $C$, as well as example pairs $\left( {s,{s}^{\prime }}\right)$, can be found in App. C. Throughout this section, whenever we report fairness for a classifier $f$, we refer to the proportion of pairs $\left( {s,{s}^{\prime }}\right)$ in a test pool of similar pairs for which $f\left( s\right) = f\left( {s}^{\prime }\right)$ rather than $f\left( s\right) \neq f\left( {s}^{\prime }\right)$.
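The fairness number reported throughout is thus simply the invariance rate of the classifier over a pool of similar pairs; a minimal sketch (the classifier and pairs in the test are placeholders):

```python
def pairwise_fairness(f, pairs):
    """Fraction of similar pairs (s, s') that classifier f treats
    identically, i.e. f(s) == f(s')."""
    same = sum(1 for s, s_prime in pairs if f(s) == f(s_prime))
    return same / len(pairs)
```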
### 4.2 Diversity of generated fairness constraints
To validate that our candidate constraint set $C$ is more diverse than word replacement on its own, we train 4 different toxicity classifiers, using Counterfactual Logit Pairing (CLP) [5] to empirically enforce different constraint sets $C,{C}_{1},{C}_{2},{C}_{3}$ . Here $C$ corresponds to the full constraint set, as described in Sec. 3.1, while the other constraint sets have the same size as $C$ , but contain pairs generated by one method only. In particular, the pairs in ${C}_{1}$ were generated by word replacement using the 50 identity terms from Garg et al. [5] ${}^{2}$ , the pairs in ${C}_{2}$ were generated by word replacement, using the larger list of terms of Smith et al. [20], and the pairs in ${C}_{3}$ were derived by style transfer.
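CLP augments the task loss with a penalty on logit differences between each sentence and its counterfactual. A numpy sketch of that objective under this reading ($\lambda = 5$ matches the "CLP(5)" rows below; this is an illustrative formulation, not the authors' implementation):

```python
import numpy as np

def clp_loss(logits, labels, cf_logits, lam=5.0):
    """Binary cross-entropy plus a counterfactual logit pairing penalty.

    `logits` are the model's logits on original sentences s, `cf_logits`
    on their counterfactuals s'. The pairing term pushes f(s) and f(s')
    towards identical outputs on the constraint pairs.
    """
    p = 1.0 / (1.0 + np.exp(-logits))
    bce = -np.mean(labels * np.log(p) + (1 - labels) * np.log(1 - p))
    pairing = np.mean(np.abs(logits - cf_logits))
    return bce + lam * pairing
```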
---
${}^{1}C$ contains ${42.5}\mathrm{\;K}$ word replacement and style transfer pairs each, and a total of ${15}\mathrm{\;K}$ GPT-3 pairs.
${}^{2}$ We did not discard any pairs from ${C}_{1}$ based on the group-presence classifier $c$ .
---
Table 1: Balanced accuracy and fairness for a RoBERTa-based classifier $f$ trained with CLP using different constraint sets for training. Results are averaged over 5 runs and $\pm$ indicates the difference from the upper/lower bound of a naive ${95}\%$ confidence interval assuming normally distributed errors.
<table><tr><td>Training/Evaluation</td><td>BA</td><td>${\mathrm{{WR}}}_{50}\left( {C}_{1}\right)$</td><td>WR $\left( {C}_{2}\right)$</td><td>$\operatorname{ST}\left( {C}_{3}\right)$</td><td>Full $C$</td></tr><tr><td>Baseline</td><td>${88.4} \pm {0.1}$</td><td>${78.4} \pm {1.4}$</td><td>${81.3} \pm {1.5}$</td><td>${76.7} \pm {1.8}$</td><td>${78.5} \pm {1.5}$</td></tr><tr><td>$\operatorname{CLP}\left( 5\right) {\mathrm{{WR}}}_{50}\left( {C}_{1}\right)$</td><td>${87.0} \pm {0.3}$</td><td>98.3 ± 0.1</td><td>${89.1} \pm {1.9}$</td><td>${86.3} \pm {1.9}$</td><td>${87.3} \pm {1.8}$</td></tr><tr><td>$\operatorname{CLP}\left( 5\right)$ WR $\left( {C}_{2}\right)$</td><td>${87.2} \pm {0.1}$</td><td>${93.1} \pm {1.2}$</td><td>${98.2} \pm {0.4}$</td><td>${90.5} \pm {1.7}$</td><td>${92.9} \pm {1.2}$</td></tr><tr><td>$\operatorname{CLP}\left( 5\right) \operatorname{ST}\left( {C}_{3}\right)$</td><td>${85.9} \pm {0.1}$</td><td>${95.3} \pm {0.4}$</td><td>${97.1} \pm {0.3}$</td><td>${95.4} \pm {0.4}$</td><td>${95.5} \pm {0.3}$</td></tr><tr><td>CLP(5) Full $C$</td><td>${85.0} \pm {3.4}$</td><td>${95.5} \pm {0.9}$</td><td>${97.8} \pm {0.6}$</td><td>${94.9} \pm {0.9}$</td><td>${95.7} \pm {0.8}$</td></tr></table>
Table 2: Human evaluation: Answers to questions about comment pairs $\left( {s,{s}^{\prime }}\right)$ . The first number represents the fraction of the answer across all queries, while the second number (in brackets) represents the fraction of comment pairs for which the answer was the majority vote across 9 queries.
<table><tr><td>Metric/Method</td><td>Word replacement</td><td>Style Transfer</td><td>GPT-3</td></tr><tr><td>Unfair: Average American</td><td>84.9 (97.5)</td><td>84.6 (95.8)</td><td>83.4 (95.0)</td></tr><tr><td>Unfair: Own Opinion</td><td>85.9 (97.5)</td><td>85.2 (96.2)</td><td>83.2 (93.7)</td></tr><tr><td>Group Transfer</td><td>89.3 (95.0)</td><td>79.2 (85.4)</td><td>81.9 (89.5)</td></tr><tr><td>Content preservation</td><td>88.1 (100)</td><td>79.2 (91.2)</td><td>78.4 (87.9)</td></tr><tr><td>Same Factuality</td><td>73.0 (84.1)</td><td>76.2 (87.5)</td><td>78.5 (89.1)</td></tr><tr><td>Same Grammaticality</td><td>91.2 (99.1)</td><td>92.9 (97.9)</td><td>92.9 (98.3)</td></tr></table>
We then cross-evaluate the performance of the 4 classifiers trained with these constraint sets in terms of their test-time fairness according to each of the 4 fairness criteria, and their balanced accuracy.

The results in Table 1 show that each classifier achieves high fairness when evaluated on the set of pairs corresponding to the constraints used during its training (numbers in italics), while performing worse on other constraint pairs. While this indicates that adherence to fairness constraints does not always generalize well across our generation methods, we note that training on style transfer pairs ($C$ or ${C}_{3}$) generalizes substantially better to ${C}_{2}$ than training on different word replacement pairs (${C}_{1}$; see the numbers in bold). More details can be found in App. C.
### 4.3 Relevance of generated fairness constraints

To validate that the generated fairness constraints are relevant and intuitive, we conducted a human evaluation with workers recruited via Amazon MTurk. The workers were presented with pairs $\left( {s,{s}^{\prime }}\right)$ consisting of a comment $s$ from the Civil Comments dataset and a modified version ${s}^{\prime }$, and asked whether they believe that the two comments should be treated similarly and whether they believe that the average American shares their opinion. Treatment was framed in terms of toxicity classification for the sake of content moderation, ensuring that we verify the relevance of the generated pairs to this specific task. The workers were also asked whether the demographic group was transferred correctly from a given $j$ to a given ${j}^{\prime }$, whether the content of $s$ was preserved in ${s}^{\prime }$ apart from the demographic group transfer, and whether there are differences in factuality and grammaticality between $s$ and ${s}^{\prime }$. We collected human feedback for a set $S$ containing a total of 720 pairs $\left( {s,{s}^{\prime }}\right)$, with 240 each produced by our style transfer approach, GPT-3 in a zero-shot fashion, and word replacement using the list from [5] as for ${C}_{1}$. These 240 pairs per method were split into 80 pairs for each of the axes male $\leftrightarrow$ female, christian $\leftrightarrow$ muslim and black $\leftrightarrow$ white. Each pair $\left( {s,{s}^{\prime }}\right)$ was shown to nine different workers. Further details can be found in App. B.

Table 2 shows that all three methods mostly produce relevant fairness constraints, according to a majority of annotators. At the same time, they generally successfully modify the mentioned demographic group and preserve content, factuality and grammaticality. While word replacement generally performs better in terms of group transfer and content preservation, it only has a small advantage in terms of relevance to fairness, perhaps due to its worse performance in terms of factuality: we found examples in which word replacement changed "white house" to "black house", or in which Obama was referred to as "white" rather than "black". These pairs were not seen as fairness constraints by most annotators and were judged badly in terms of preserving factuality. See App. B.1 for more detailed results.
Table 3: Performance of differently trained classifiers $\widehat{\varphi }$ on the test set $T$ . Active learning classifiers are retrained 10 times on the last batch ${D}_{6}$ . Results are averaged, and $\pm$ indicates the distance to the bounds of a naive 95% confidence interval assuming normally distributed errors.
<table><tr><td>Method</td><td>ACC</td><td>TNR</td><td>TPR</td><td>BA</td></tr><tr><td>Constant Baseline</td><td>78.8</td><td>100.0</td><td>0.0</td><td>50.0</td></tr><tr><td>Active Learning t=0.5</td><td>${79.8} \pm {0.3}$</td><td>${97.2} \pm {0.3}$</td><td>${15.1} \pm {1.2}$</td><td>56.1</td></tr><tr><td>Active Learning + Relabel t=0.5</td><td>${81.1} \pm {0.3}$</td><td>${95.5} \pm {0.7}$</td><td>${28.6} \pm {2.2}$</td><td>62.0</td></tr><tr><td>Active Learning t=0.01</td><td>${78.7} \pm {1.1}$</td><td>${87.5} \pm {2.1}$</td><td>${45.7} \pm {1.8}$</td><td>66.6</td></tr><tr><td>Active Learning + Relabel t=0.01</td><td>${78.3} \pm {0.7}$</td><td>${86.8} \pm {1.5}$</td><td>${46.6} \pm {2.5}$</td><td>66.7</td></tr></table>
### 4.4 Learning the similarity function
We employed our active learning approach to efficiently train a classifier $\widehat{\varphi }$ from relatively few human judgments, with the goal of using it to identify pairs that represent actual fairness constraints on the remaining pool of candidates. We conducted 6 steps of active learning with 1000 queries each and discarded failed queries, ending up with a total of 5490 labeled pairs $\left( {\left( {s,{s}^{\prime }}\right) ,\varphi \left( {s,{s}^{\prime }}\right) }\right)$ . Details on our model architecture and other hyperparameters can be found in App. D. We evaluate our learnt classifier on a test set $T$ consisting of 500 randomly selected pairs from $C$ for which five annotators were asked about the average American's fairness judgment.
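The variation ratio, which we use below to select relabeling candidates, is a standard disagreement score over stochastic (MC-dropout) forward passes [35, 36]. A minimal sketch, where `mc_preds` is a hypothetical list of labels from repeated stochastic predictions on one pair:

```python
from collections import Counter

def variation_ratio(mc_preds):
    """Disagreement among T stochastic (MC-dropout) forward passes:
    1 minus the relative frequency of the modal predicted label."""
    mode_count = Counter(mc_preds).most_common(1)[0][1]
    return 1.0 - mode_count / len(mc_preds)

# A pair whose stochastic predictions disagree gets a higher score:
variation_ratio([0, 0, 0, 1])  # 0.25
variation_ratio([1, 1, 1, 1])  # 0.0
```

Pairs with the highest scores are the model's most uncertain ones, making them natural candidates for additional human labels.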
Because ${78.8}\%$ of the pairs $\left( {s,{s}^{\prime }}\right)$ in $T$ represented fairness constraints $\left( {\varphi \left( {s,{s}^{\prime }}\right) = 0}\right)$ according to the majority of annotators, we report Balanced Accuracy (BA) in addition to standard accuracy (ACC) and the true positive and negative rates (TPR and TNR). Table 3 displays these metrics for classifiers resulting from our active learning method for different classification thresholds $t$ , with and without relabeling. We observe that $\widehat{\varphi }$ performs substantially better than random, achieving a BA of ${66.7}\%$ when used with an aggressive classification threshold $t$ . The table also validates our relabeling approach: after observing that our classifier was biased towards predicting $\varphi \left( {s,{s}^{\prime }}\right) = 0$ , we collected two additional labels for 500 pairs $\left( {s,{s}^{\prime }}\right)$ for which both the human and the predicted label were equal to zero $\left( {\widehat{\varphi }\left( {s,{s}^{\prime }}\right) = \varphi \left( {s,{s}^{\prime }}\right) = 0}\right)$ , selected based on the variation ratios. ${47}\%$ of these pairs received a majority vote of $\varphi \left( {s,{s}^{\prime }}\right) = 1$ , showing that our approach correctly identified pairs that were likely to be mislabeled. Retraining our classifier on the updated majority votes also substantially increased TPR at little cost to TNR, especially for balanced classification thresholds $t$ close to 0.5.
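For reference, the reported metrics follow directly from the confusion matrix; a minimal sketch (the function name is ours), treating $\varphi = 1$ as the positive class:

```python
def binary_metrics(y_true, y_pred):
    """ACC, TNR, TPR and Balanced Accuracy from 0/1 labels,
    with 1 (no fairness constraint) as the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    acc = (tp + tn) / len(y_true)
    tpr = tp / (tp + fn) if tp + fn else 0.0  # recall on the positive class
    tnr = tn / (tn + fp) if tn + fp else 0.0  # recall on the negative class
    ba = (tpr + tnr) / 2                      # balanced accuracy
    return acc, tnr, tpr, ba
```

Note that a constant predictor always outputting the majority class reaches TPR 0, TNR 100 and hence BA 50, which is exactly the Constant Baseline row in Table 3.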
According to a qualitative evaluation, most of the sentence pairs $\left( {s,{s}^{\prime }}\right)$ predicted to not represent fairness constraints $\left( {\widehat{\varphi }\left( {s,{s}^{\prime }}\right) = 1}\right)$ had the words "boy" or "man" replaced by terms denoting identity membership. Such sentence pairs were often not seen as fairness constraints by our annotators, as the inclusion of the identity term can be interpreted as aggressive or mocking. $\widehat{\varphi }$ also successfully identified sentence pairs $\left( {s,{s}^{\prime }}\right)$, sometimes produced by GPT-3, for which ${s}^{\prime }$ was unrelated to $s$, as not representing fairness constraints. Additional results and details can be found in App. D.
## 5 Conclusion
We proposed a framework for producing expressive and intuitive specifications for individual fairness in text classification. We experimentally demonstrated that our constraints are indeed more expressive than previous constraints based on word replacement and validated that most of the generated fairness constraints were relevant in the context of toxicity classification according to human annotators. In addition, we used active learning to demonstrate that human fairness judgments can be predicted using limited amounts of training data. In future work we plan to utilize the generated filtered constraints to train a fair downstream toxicity classifier with better trade-off between accuracy and fairness.
## References
[1] Vedant Bhatia, Prateek Rawat, Ajit Kumar, and Rajiv Ratn Shah. End-to-end resume parsing and finding candidates for a job description using bert. arXiv preprint arXiv:1910.03089, 2019.
[2] Bernhard Rieder and Yarden Skop. The fabrics of machine moderation: Studying the technical, normative, and organizational structure of perspective api. Big Data & Society, 8(2): 20539517211046181, 2021.
[3] Johannes Welbl, Amelia Glaese, Jonathan Uesato, Sumanth Dathathri, John Mellor, Lisa Anne Hendricks, Kirsty Anderson, Pushmeet Kohli, Ben Coppin, and Po-Sen Huang. Challenges in detoxifying language models. arXiv preprint arXiv:2109.07445, 2021.
[4] Maria De-Arteaga, Alexey Romanov, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, and Adam Tauman Kalai. Bias in bios: A case study of semantic representation bias in a high-stakes setting. In proceedings of the Conference on Fairness, Accountability, and Transparency, pages 120-128, 2019.
[5] Sahaj Garg, Vincent Perot, Nicole Limtiaco, Ankur Taly, Ed H Chi, and Alex Beutel. Counterfactual fairness in text classification through robustness. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, pages 219-226, 2019.
[6] Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. Fairness through awareness. In Proceedings of the 3rd innovations in theoretical computer science conference, pages 214-226, 2012.
[7] Solon Barocas, Moritz Hardt, and Arvind Narayanan. Fairness in machine learning.
[8] Lucas Dixon, John Li, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. Measuring and mitigating unintended bias in text classification. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pages 67-73, 2018.
[9] Mikhail Yurochkin and Yuekai Sun. Sensei: Sensitive set invariance for enforcing individual fairness. arXiv preprint arXiv:2006.14168, 2020.
[10] Paul Pu Liang, Irene Mengze Li, Emily Zheng, Yao Chong Lim, Ruslan Salakhutdinov, and Louis-Philippe Morency. Towards debiasing sentence representations. arXiv preprint arXiv:2007.08100, 2020.
[11] Machel Reid and Victor Zhong. Lewis: Levenshtein editing for unsupervised text style transfer. arXiv preprint arXiv:2105.08206, 2021.
[12] Joosung Lee. Stable style transformer: Delete and generate approach with encoder-decoder for text style transfer. arXiv preprint arXiv:2005.12086, 2020.
[13] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901, 2020.
[14] Daniel Grießhaber, Johannes Maucher, and Ngoc Thang Vu. Fine-tuning bert for low-resource natural language understanding via active learning. arXiv preprint arXiv:2012.02462, 2020.
[15] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
[16] Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. Advances in neural information processing systems, 29, 2016.
[17] Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183-186, 2017.
[18] Moin Nadeem, Anna Bethke, and Siva Reddy. Stereoset: Measuring stereotypical bias in pretrained language models. arXiv preprint arXiv:2004.09456, 2020.
[19] Jesse Vig, Sebastian Gehrmann, Yonatan Belinkov, Sharon Qian, Daniel Nevo, Yaron Singer, and Stuart Shieber. Investigating gender bias in language models using causal mediation analysis. Advances in Neural Information Processing Systems, 33:12388-12401, 2020.
[20] Eric Michael Smith, Melissa Hall, Melanie Kambadur, Eleonora Presani, and Adina Williams. "I'm sorry to hear that": finding bias in language models with a holistic descriptor dataset. arXiv preprint arXiv:2205.09209, 2022.
[21] Alexandra Olteanu, Kartik Talamadupula, and Kush R Varshney. The limits of abstract evaluation metrics: The case of hate speech detection. In Proceedings of the 2017 ACM on web science conference, pages 405-406, 2017.
[22] Su Lin Blodgett, Solon Barocas, Hal Daumé III, and Hanna Wallach. Language (technology) is power: A critical survey of "bias" in nlp. arXiv preprint arXiv:2005.14050, 2020.
[23] Alexis Ross, Tongshuang Wu, Hao Peng, Matthew E Peters, and Matt Gardner. Tailor: Generating and perturbing text with semantic controls. arXiv preprint arXiv:2107.07150, 2021.
[24] Matt Gardner, Yoav Artzi, Victoria Basmova, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, et al. Evaluating models' local decision boundaries via contrast sets. arXiv preprint arXiv:2004.02709, 2020.
[25] Anthony Rios. Fuzze: Fuzzy fairness evaluation of offensive language classifiers on african-american english. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 881-889, 2020.
[26] Thomas Hartvigsen, Saadia Gabriel, Hamid Palangi, Maarten Sap, Dipankar Ray, and Ece Kamar. Toxigen: A large-scale machine-generated dataset for adversarial and implicit hate speech detection. arXiv preprint arXiv:2203.09509, 2022.
[27] Rebecca Qian, Candace Ross, Jude Fernandes, Eric Smith, Douwe Kiela, and Adina Williams. Perturbation augmentation for fairer nlp. arXiv preprint arXiv:2205.12586, 2022.
[28] Christina Ilvento. Metric learning for individual fairness. arXiv preprint arXiv:1906.00250, 2019.
[29] Debarghya Mukherjee, Mikhail Yurochkin, Moulinath Banerjee, and Yuekai Sun. Two simple ways to learn individual fairness metrics from data. In International Conference on Machine Learning, pages 7097-7107. PMLR, 2020.
[30] Hao-Fei Cheng, Logan Stapleton, Ruiqi Wang, Paige Bullock, Alexandra Chouldechova, Zhiwei Steven Wu, and Haiyi Zhu. Soliciting stakeholders' fairness notions in child maltreatment predictive systems. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pages 1-17, 2021.
[31] Hanchen Wang, Nina Grgic-Hlaca, Preethi Lahoti, Krishna P Gummadi, and Adrian Weller. An empirical study on learning fairness metrics for compas data with human supervision. arXiv preprint arXiv:1910.10255, 2019.
[32] Di Jin, Zhijing Jin, Zhiting Hu, Olga Vechtomova, and Rada Mihalcea. Deep learning for text style transfer: A survey. Computational Linguistics, 48(1):155-205, 2022.
[33] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
[34] Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461, 2019.
[35] Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In international conference on machine learning, pages 1050-1059. PMLR, 2016.
[36] Yarin Gal, Riashat Islam, and Zoubin Ghahramani. Deep bayesian active learning with image data. In International Conference on Machine Learning, pages 1183-1192. PMLR, 2017.
[37] Derek Chen, Zhou Yu, and Samuel R Bowman. Clean or annotate: How to spend a limited data collection budget. In Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing, pages 152-168, 2022.
[38] Daniel Borkan, Lucas Dixon, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. Nuanced metrics for measuring unintended bias with real data for text classification. In Companion Proceedings of The 2019 World Wide Web Conference, WWW '19, page 491-500, New York, NY, USA, 2019. Association for Computing Machinery. ISBN 9781450366755. doi: 10.1145/ 3308560.3317593. URL https://doi.org/10.1145/3308560.3317593.
[39] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[40] Jasmijn Bastings and Katja Filippova. The elephant in the interpretability room: Why use attention as explanation when we have saliency methods? arXiv preprint arXiv:2010.05607, 2020.
[41] Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. Deberta: Decoding-enhanced bert with disentangled attention. arXiv preprint arXiv:2006.03654, 2020.
[42] Neil Houlsby, Ferenc Huszár, Zoubin Ghahramani, and Máté Lengyel. Bayesian active learning for classification and preference learning. arXiv preprint arXiv:1112.5745, 2011.
## A Ethical Considerations
Our human evaluation experiments involving workers from Mechanical Turk were reviewed and approved by the IRB of PLACEHOLDER. Workers on Mechanical Turk were warned that they might be shown offensive comments as part of our study and were able to opt out of participating in our study at any time. We also made sure that the per-task compensation was sufficiently high to result in an hourly compensation exceeding the US federal minimum wage. More details on our human evaluation experiments can be found in App. B.
While we believe that our results show that learning more precise fairness notions by involving human feedback is a very promising area of research, we caution against directly using our learnt similarity classifier $\widehat{\varphi }$ for evaluating fairness in high-stakes real-world applications of toxicity classification. First, our results show that there is substantial disagreement between different survey participants about which pairs $\left( {s,{s}^{\prime }}\right)$ require equal treatment by a fair classifier. While resolving these disagreements via a majority vote is a natural choice, other approaches may be desired in some contexts (e.g., enforcing equal treatment whenever at least one participant believes it is required). Second, our survey participants may have geographic biases and are neither direct stakeholders, nor experts in discrimination law and hate speech. Given that our learning approach shows promising signs of being able to improve upon existing fairness definitions despite large amounts of disagreement, which is likely to be less common for actual stakeholders and experts, we recommend using it in conjunction with fairness judgments provided by application-specific experts and stakeholders.
## B Further Details on Human evaluation
In order to participate, workers had to live in the US and be above 18 years old, in addition to being experienced with Mechanical Turk (having completed more than 5000 HITs${}^{3}$) and having a good reputation (97% acceptance rate across all of the worker's HITs). Workers were warned about the potentially offensive content of some of the comments shown in the study by the following statement: "Please note that this study contains offensive content. If you do not wish to see such content, please withdraw from the study by leaving this website." and were also told that they could withdraw from the study at any later point: "You may withdraw your participation at any time without specifying reasons and without any disadvantages (however, you will not get paid for the current HIT in case you withdraw before completing it)".
After encountering a high prevalence of bots, malicious workers or workers that fundamentally misunderstood our task instructions during pilot experiments, we had workers pass a qualification test by providing correct answers for nine out of ten queries $\varphi \left( {s,{s}^{\prime }}\right)$ for pairs that were hand-designed to have a relatively obvious correct answer. We validated these hand-designed pairs in a separate experiment, querying workers about $\varphi \left( {s,{s}^{\prime }}\right)$ for 11 pairs $\left( {s,{s}^{\prime }}\right)$ , and asking them to verbally explain each of their decisions, paying a total of $\$ {1.83}$ . We only included hand-designed pairs in the qualification test if at least eight out of ten workers produced the intended answer during this experiment, and no worker brought forward convincing arguments against this answer being correct.
Workers were paid $\$ {0.83}$ for a battery of 11 queries $\varphi \left( {s,{s}^{\prime }}\right)$ for the active learning and the test set $T$, and first $\$ {1.20}$, later $\$ {1.30}$ for 11 queries about all questions presented in Table 2. In both cases, workers had to explain one of their answers verbally, and one of the queries was an attention check pair $\left( {s,{s}^{\prime }}\right)$ with a relatively obvious correct answer, constructed in the same way as for the qualification tests.
Blocks of queries with wrong answers to the attention check question or (partially) incoherent verbal explanations were manually reviewed, and thrown out (without payment) in case we were not able to find evidence that the worker had correctly understood the task. Workers were made aware of this by the following statement in the task description: "To ensure workers pay attention, we include pairs of comments for which there is a clear correct answer at random positions. Please make sure to carefully read every comment and answer diligently, as your HIT will be rejected if your answer for one of these pairs of comments is wrong, in which case you will not receive a payment for the HIT. We will also ask you to produce an explanation for one of your answers. Please write at least one sentence and make sure your explanation is coherent with the answer you gave, as your HIT might be rejected otherwise, in which case you will not receive a payment for the HIT.". According to https://turkerview.com, a tool used by many professional workers on Mechanical Turk, we paid workers an average hourly rate of $\$ {16.17}$, clearly exceeding the US minimum wage. While this is likely an overestimate, as not all workers use turkerview, the hourly rate is high enough that we would still exceed the minimum wage even for workers taking twice as long as the average worker using turkerview.
---
${}^{3}$ Bundled tasks on Mechanical Turk for which a remuneration is received on completion
---
Workers were introduced to the task as follows: "We are investigating fairness in toxic comment classification and are looking for pairs of comments with similar content that should be treated the same by a content moderator (if one of them was removed, the other one should be removed too). You will be shown pairs of comments and be asked whether the comments are similar in various ways and whether you would consider it unfair for a content moderator to remove one of the comments in each pair but not the other.". The queries read as follows for the first task:
- Without further context, would the average American consider it unfair or acceptable for a content moderator to delete one of the shown comments but not the other?
- It would be unfair for one of the two comments but not the other to be deleted (Both comments ought to be treated the same way, no matter what)
- If comment a) was deleted, comment b) should be deleted as well, but not the other way around. (Deleting comment b) would be more justifiable)
- If comment b) was deleted, comment a) should be deleted as well, but not the other way around. (Deleting comment a) would be more justifiable)
- It would be acceptable to delete either of the comments without deleting the other. (Treating the comments differently either way would be fine)
and as follows for the second one (Placeholders for group terms are presented in Italics. The answer summarized in Table 2 is highlighted in bold):
- Without further context, would you consider it unfair or acceptable for a content moderator to delete one of the shown comments but not the other?
- It would be unfair for one of the two comments but not the other to be deleted (Both comments ought to be treated the same way, no matter what)
- If comment a) was deleted, comment b) should be deleted as well, but not the other way around. (Deleting comment b) would be more justifiable)
- If comment b) was deleted, comment a) should be deleted as well, but not the other way around. (Deleting comment a) would be more justifiable)
- It would be acceptable to delete either of the comments without deleting the other. (Treating the comments differently either way would be fine)
- What do you expect the average American to answer to the previous question?
- It would be unfair for one of the two comments but not the other to be deleted (Both comments ought to be treated the same way, no matter what)
- If comment a) was deleted, comment b) should be deleted as well, but not the other way around. (Deleting comment b) would be more justifiable)
- If comment b) was deleted, comment a) should be deleted as well, but not the other way around. (Deleting comment a) would be more justifiable)
- It would be acceptable to delete either of the comments without deleting the other. (Treating the comments differently either way would be fine)
- Do you consider one of the two comments to be more factually correct than the other?
- Comment a
- Comment b
- Neither comment / too little context to tell
- Do you consider one of the two comments to be more grammatically correct than the other?
- Comment a
- Comment b
- Neither comment
- Is comment a) about group $a$ and comment b) about group $b$ ?
- Yes
- No, comment a) is not about group $a$
- No, comment b) is not about group $b$
- No, neither
- Apart from differences related to group $a$ and group $b$ , are both comments similar in terms of content?
- Yes, they are almost the same.
- They are somewhat similar, but differ in some additional details.
- There is an important additional difference between the comments' content
Table B.1 shows the results of the human evaluation on our set $S$ split along the axis of attribute transfer, rather than generation method as in Table 2. Along with the results in Table 2, they show that despite the general agreement about the relevance of the generated fairness constraints, there is substantial disagreement between annotators when it comes to deviations from the most common answer across all comments. In all cases, the fraction of comments with majority vote equal to that answer is substantially higher than the overall fraction of these votes across all comments and annotators. The same is true for our set $T$ of 500 randomly selected pairs from $C$ for which we only asked about the average American's fairness judgment: ${70.9}\%$ of the annotations were $\varphi \left( {s,{s}^{\prime }}\right) = 0$ , while the same was true for ${78.8}\%$ of the per-comment pair majority votes.
Table B.1: Human evaluation: Answers to questions about comment pairs $\left( {s,{s}^{\prime }}\right)$ grouped along demographic group transfers along different axes. The first number represents the fraction of the answer across all queries, while the second number (in the brackets) represents the fraction of comment pairs for which the answer was the majority vote across 9 queries.
<table><tr><td>Metric/Method</td><td>male $\leftrightarrow$ female</td><td>black $\leftrightarrow$ white</td><td>christian $\leftrightarrow$ muslim</td></tr><tr><td>Unfair: Average American</td><td>83.5 (96.6)</td><td>82.2 (94.5)</td><td>87.2 (97.0)</td></tr><tr><td>Unfair: Own Opinion</td><td>83.5 (96.6)</td><td>82.4 (92.9)</td><td>88.4 (97.9)</td></tr><tr><td>Group Transfer</td><td>82.6 (91.6)</td><td>81.6 (86.6)</td><td>86.2 (91.6)</td></tr><tr><td>Content preservation</td><td>84.9 (95.4)</td><td>79.5 (92.0)</td><td>81.3 (91.6)</td></tr><tr><td>Same Factuality</td><td>75.3 (82.9)</td><td>73.6 (85.0)</td><td>78.8 (92.9)</td></tr><tr><td>Same Grammaticality</td><td>90.5 (97.5)</td><td>92.2 (98.3)</td><td>94.3 (99.5)</td></tr></table>
## C Further details on style transfer
Unsupervised style transfer To transform markers of demographic groups in sentences, we first finetune a multi-headed RoBERTa-based [33] classifier $c$ to predict labels ${y}_{j}$ indicating the presence of markers of a demographic group $j$ from a list of protected demographic groups $J$ in a sentence $s$ . We use the population labels ("Black", "Male", "Heterosexual", "Muslim", etc.) that are provided for a subset of the Civil Comments dataset. The group-presence classifier $c$ is based on the roberta-base model, followed by a linear layer with 768 neurons applied to the output embedding of the first token only, a Tanh layer, another linear layer mapping to a single dimension, and a Sigmoid layer. We train $c$ for 3 epochs with a batch size of 16 and use the Adam optimizer [39] with learning rate 0.00001 to optimize the binary Cross Entropy loss, reweighted by relative label frequency in the dataset. Table C.1 shows the balanced accuracy on the test set for all demographic groups in the dataset. For our downstream applications of $c$ , we restrict ourselves to the demographic groups for which the classifier $c$ 's balanced accuracy is above ${90}\%$ . Furthermore, we also exclude the group labeled "mental illness" because the word replacement lists we used lack a clear analog.
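As a shape-level sketch of one such classification head (one per group in the multi-headed setup), using numpy with random, untrained weights; the RoBERTa encoder producing the first-token embedding is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

class GroupPresenceHead:
    """Linear(768 -> 768) -> Tanh -> Linear(768 -> 1) -> Sigmoid,
    applied to the encoder's first-token output embedding."""

    def __init__(self, dim=768):
        self.W1 = rng.normal(0.0, 0.02, (dim, dim))
        self.b1 = np.zeros(dim)
        self.W2 = rng.normal(0.0, 0.02, (dim, 1))
        self.b2 = np.zeros(1)

    def __call__(self, first_token_emb):
        h = np.tanh(first_token_emb @ self.W1 + self.b1)
        logit = h @ self.W2 + self.b2  # pre-sigmoid activation
        return 1.0 / (1.0 + np.exp(-logit))  # p_c(y_j | s)
```

This pre-sigmoid `logit` is the quantity later used to rank beam-search candidates during generation.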
Then, we finetune a BART-based [34] generator $g$ on a mask-filling task on the same data: For every data point $s$ , we sample a group $j$ from the set of demographic groups mentioned in $s$ , i.e. $\left\{ {j : {y}_{j}\left( s\right) = 1}\right\}$ , skipping sentences $s$ for which no group $j$ meets this criterion. Inspired by [11], we mask all of $s$ 's tokens that have an above-average attention value for the 11th layer of the classifier $c$ , merge consecutive mask tokens into one, and prepend the name of the sampled group $j$ to the masked sentence before feeding it to the generator $g$ . The generator $g$ is then finetuned to reconstruct $s$ using token-wise Cross Entropy.
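Building one such training input might look like this token-level toy sketch, where `attn` stands in for the per-token attention values taken from $c$'s 11th layer:

```python
MASK = "<mask>"

def build_masked_prompt(tokens, attn, group_name):
    """Mask tokens with above-average attention, merge runs of
    consecutive mask tokens into one, and prepend the group name."""
    avg = sum(attn) / len(attn)
    masked = [MASK if a > avg else t for t, a in zip(tokens, attn)]
    merged = []
    for tok in masked:
        if tok == MASK and merged and merged[-1] == MASK:
            continue  # collapse consecutive masks
        merged.append(tok)
    return " ".join([group_name] + merged)
```

The generator is then trained to reconstruct the original, unmasked sentence from this prompt.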
Table C.1: Balanced accuracies of the group-presence classifier $c$ for different labels
<table><tr><td>Category</td><td>BA</td></tr><tr><td>Male</td><td>96.5</td></tr><tr><td>Female</td><td>97.8</td></tr><tr><td>Transgender</td><td>99.3</td></tr><tr><td>Other gender</td><td>50.0</td></tr><tr><td>Heterosexual</td><td>98.1</td></tr><tr><td>Homosexual</td><td>99.3</td></tr><tr><td>Bisexual</td><td>65.4</td></tr><tr><td>Other sexuality</td><td>50.0</td></tr></table>
<table><tr><td>Category</td><td>BA</td></tr><tr><td>Christian</td><td>96.6</td></tr><tr><td>Jewish</td><td>98.9</td></tr><tr><td>Muslim</td><td>98.9</td></tr><tr><td>Hindu</td><td>98.2</td></tr><tr><td>Buddhist</td><td>99.2</td></tr><tr><td>Atheist</td><td>99.6</td></tr><tr><td>Other religion</td><td>50.0</td></tr><tr><td>Other disability</td><td>50.0</td></tr></table>
<table><tr><td>Category</td><td>BA</td></tr><tr><td>Physical disability</td><td>54.9</td></tr><tr><td>Intellectual disability</td><td>54.3</td></tr><tr><td>Mental illness</td><td>98.3</td></tr><tr><td>Black</td><td>99.2</td></tr><tr><td>White</td><td>99.5</td></tr><tr><td>Asian</td><td>98.3</td></tr><tr><td>Latino</td><td>96.6</td></tr><tr><td>Other race</td><td>55.5</td></tr></table>
The BART-based generator $g$ is trained starting from the pretrained facebook/bart-large model for a single epoch with batch size 4, again using Adam and a learning rate of 0.00001. For filling in masked sentences, we pick the completion with the largest difference in the classifier $c$ 's pre-sigmoid activation for the target and source demographic groups ${j}^{\prime }$ and $j$ among candidate sentences produced by a beam search using the generator $g$ with width 5.
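The candidate selection step reduces to an argmax over a logit margin; `pre_sigmoid` below is a hypothetical scoring hook standing in for the classifier $c$'s pre-sigmoid activation:

```python
def pick_completion(candidates, pre_sigmoid, src_group, tgt_group):
    """Return the beam-search completion with the largest difference
    between the classifier's pre-sigmoid activations for the target
    and the source demographic group."""
    return max(candidates,
               key=lambda s: pre_sigmoid(s, tgt_group) - pre_sigmoid(s, src_group))

# Toy stand-in scorer: counts occurrences of the group word as the "activation".
toy_logit = lambda s, g: float(s.split().count(g))
```

Using the margin rather than the target score alone penalizes completions that still mention the source group.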
To transfer an example $s$ from mentioning group $j$ to mentioning group ${j}^{\prime }$ , we follow [12] and iteratively mask the token for which masking reduces ${p}_{c}\left( {{y}_{j} \mid s}\right)$ the most, until we reach a threshold of ${p}_{c}\left( {{y}_{j} \mid s}\right) < {0.25}$ . We use this approach rather than the attention-based masking from [11] because of the lack of theoretical motivation for using attention to identify important features [40], and because attention scores are the same for all of our model's group-presence prediction heads, rather than specific to a particular group $j$ . Then, we prepend a verbal representation of label ${j}^{\prime }$ to $s$ to form a prompt $p$ , and generate a sentence ${s}^{\prime }$ as $g\left( p\right)$ .
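The iterative deletion loop can be sketched with a toy scorer standing in for the classifier score (here simply the fraction of tokens from a small marker set; real scores come from the finetuned classifier $c$):

```python
MARKERS = {"woman", "she", "her"}
MASK = "<mask>"

def toy_group_prob(tokens):
    # toy stand-in for the group-presence probability of a token list
    return sum(t.lower() in MARKERS for t in tokens) / max(len(tokens), 1)

def greedy_mask(tokens, score, threshold=0.25):
    """Repeatedly mask the single token whose masking lowers the
    group-presence score the most, until the score drops below the threshold."""
    toks = list(tokens)
    while score(toks) >= threshold:
        best_i, best_s = None, score(toks)
        for i, t in enumerate(toks):
            if t == MASK:
                continue
            cand = toks[:i] + [MASK] + toks[i + 1:]
            if score(cand) < best_s:  # masking token i helps most so far
                best_i, best_s = i, score(cand)
        if best_i is None:
            break  # no single masking lowers the score; give up
        toks[best_i] = MASK
    return toks
```

Each iteration costs one score evaluation per remaining token, which is why the loop stops as soon as the threshold is met.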
Word replacement Our word replacement approach is based on the list of words provided in [20]: Given a sentence $s$ mentioning demographic group $j$ and a target attribute ${j}^{\prime }$ , we replace all words in $s$ that are on the list associated with $j$ with random words from the list associated with ${j}^{\prime }$ , replacing nouns with nouns and descriptors with descriptors whenever possible, and nouns with descriptors otherwise. The full list of words we used for word replacement is displayed in Table E.1.
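A minimal sketch of this replacement rule follows. The tiny word lists are illustrative stand-ins for the full lists of Table E.1, and the fallback from nouns to descriptors (and vice versa) mirrors the "whenever possible" clause above.

```python
import random

def word_replace(tokens, source_words, target_nouns, target_descriptors):
    """Replace every token on the source group's word lists with a random
    word from the target group's lists, preferring nouns for nouns and
    descriptors for descriptors, falling back to the other list if empty."""
    out = []
    for tok in tokens:
        if tok in source_words["nouns"]:
            pool = target_nouns or target_descriptors
            out.append(random.choice(pool))
        elif tok in source_words["descriptors"]:
            pool = target_descriptors or target_nouns
            out.append(random.choice(pool))
        else:
            out.append(tok)
    return out

random.seed(0)
src = {"nouns": {"man", "men"}, "descriptors": {"male"}}
result = word_replace(["the", "man", "was", "male"], src,
                      target_nouns=["woman"], target_descriptors=["female"])
# → ["the", "woman", "was", "female"]
```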
GPT-3 We accessed GPT-3 using OpenAI's API ${}^{5}$ . For our first approach, we used the "text-davinci-001" version of GPT-3 in a zero-shot manner with the prompt: "Please rewrite the following sentence to be about ${j}^{\prime }$ rather than $j$ :" followed by a new line and the targeted sentence $s$ . The second approach was based on the beta version of GPT-3's editing mode ${}^{6}$ . Here, ${s}^{\prime }$ is produced using the model "text-davinci-edit-001" with the instruction "Rewrite the text to be about ${j}^{\prime }$ rather than $j$ ". Lastly, we used the same model in conjunction with word replacement: First, we generated a candidate sentence ${s}^{\prime \prime }$ using the procedure described in the word replacement section. Then, in order to fix issues caused by the context-blindness of the word replacement approach, we postprocessed ${s}^{\prime \prime }$ using "text-davinci-edit-001" with the instruction "Fix grammatical errors and logical inconsistencies" to produce ${s}^{\prime }$ . We used temperature $= {0.7}$ and top_p $= 1$ in all our approaches and max_tokens $= {64}$ for "text-davinci-001" to control the length of the modified sentence ${s}^{\prime }$ .
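The zero-shot prompt construction can be sketched as a plain string template; the helper name is ours, while the literal instruction text comes from the description above. The resulting prompt would then be sent to the API together with the stated sampling parameters (temperature 0.7, top_p 1, max_tokens 64).

```python
def zero_shot_prompt(sentence, j_source, j_target):
    """Build the zero-shot rewriting prompt described above: an instruction
    line naming the source and target groups, followed by the targeted
    sentence on a new line."""
    return (f"Please rewrite the following sentence to be about "
            f"{j_target} rather than {j_source}:\n{sentence}")

prompt = zero_shot_prompt("He went to church.",
                          "Christian people", "Muslim people")
```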
Post-filtering For all three approaches, we performed a post-filtering step to reduce the prevalence of unsuccessful attempts at demographic group transfer in our set of constraints $C$ . Given a pair $\left( {s,{s}^{\prime }}\right)$ of an original sentence and a modified version, we only include it in our set of constraints $C$ if the classifier probability ${p}_{c}\left( {{y}_{{j}^{\prime }} \mid {s}^{\prime }}\right)$ for the target label ${j}^{\prime }$ is above 0.5 and the classifier probability ${p}_{c}\left( {{y}_{j} \mid {s}^{\prime }}\right)$ for the source label $j$ is below 0.5.
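A minimal sketch of this filter, under the reading that a successful transfer means the modified sentence is classified as mentioning the target group and no longer the source group. `p_source` and `p_target` stand in for the classifier probabilities ${p}_{c}\left( {{y}_{j} \mid {s}^{\prime }}\right)$ and ${p}_{c}\left( {{y}_{{j}^{\prime }} \mid {s}^{\prime }}\right)$.

```python
def passes_post_filter(p_source, p_target, threshold=0.5):
    """Keep a pair (s, s') only if the modified sentence s' is classified
    as mentioning the target group j' (probability above the threshold)
    and no longer as mentioning the source group j (probability below it)."""
    return p_target > threshold and p_source < threshold

kept = passes_post_filter(p_source=0.1, p_target=0.9)     # successful transfer
dropped = passes_post_filter(p_source=0.6, p_target=0.9)  # source group remains
```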
As mentioned in Sec. 4.1, for word replacement and style transfer we attempt to produce modified comments ${s}_{{j}^{\prime }}^{\prime }$ mentioning group ${j}^{\prime }$ for each $s$ in ${D}^{\prime }$ , for all demographic groups $j$ with ${y}_{j}\left( s\right) = 1$ and all possible target groups ${j}^{\prime }$ . For GPT-3, we attempted a total of 75 generations for each of our three generation modes per axis pair of demographic groups $\left( {j,{j}^{\prime }}\right)$ and direction of group transfer, with the source sentences $s$ randomly selected among the sentences with label $j$ in ${D}^{\prime }$ . For constructing the secondary test set $S$ , we attempted more generations for the axes male $\leftrightarrow$ female, christian $\leftrightarrow$ muslim, black $\leftrightarrow$ white, and homosexual $\leftrightarrow$ heterosexual. The latter axis was left out of $S$ because we found that the rate of successful generations was too limited. We attempted a maximum of 2250 generations, up to a total of 250 successful generations (post-filtering step passed), for GPT-3's zero-shot mode; a maximum of 750, up to a total of 100 successful generations, for GPT-3's edit mode; and up to a total of 100 successful generations for GPT-3-based postprocessing of word replacement. Table C.2 shows the overall number of generated pairs per method.
---
${}^{4}$ We used attention during the training of $g$ , for which dropping out some tokens unrelated to $j$ is less problematic, in order to save resources.
${}^{5}$ https://openai.com/api/
${}^{6}$ https://openai.com/blog/gpt-3-edit-insert/
---
Table C.2: Number of generated pairs $\left( {s,{s}^{\prime }}\right)$ per generation method.
<table><tr><td>Generation Method</td><td>Total (Train)</td><td>Total (Test)</td><td>In $C$ (Train)</td><td>In $C$ (Test)</td></tr><tr><td>Word Replacement</td><td>980667</td><td>331490</td><td>42500</td><td>10625</td></tr><tr><td>Style Transfer</td><td>681111</td><td>229883</td><td>42500</td><td>10625</td></tr><tr><td>GPT-3 Zero-Shot</td><td>6322</td><td>2139</td><td>6200</td><td>1550</td></tr><tr><td>GPT-3 Edit Mode</td><td>3704</td><td>1199</td><td>3500</td><td>875</td></tr><tr><td>GPT-3 Postprocessing</td><td>5330</td><td>1831</td><td>5300</td><td>1325</td></tr></table>
As an additional experiment to validate the increased diversity of our constraint set $C$ , we train a similarity classifier ${}^{7}$ $\widehat{\varphi }$ on $C$ to distinguish pairs $\left( {s,{s}^{\prime }}\right)$ generated by word replacement from pairs generated by style transfer or GPT-3. Training on 100000 examples without label noise, we are able to achieve ${91.6}\%$ test accuracy on a balanced test set, suggesting that there is a meaningful difference between pairs generated by word replacement and the rest of the constraint candidates in $C$ .
## D Further details on learning similarity functions
First, Proposition D.1 below establishes that robustness with respect to a binary similarity function $\varphi$ , i.e. $\varphi \left( {s,{s}^{\prime }}\right) = 0 \Rightarrow f\left( s\right) = f\left( {s}^{\prime }\right)$ , can fully capture the definition of individual fairness as Lipschitz continuity proposed by Dwork et al. [6] for deterministic classifiers $f$ .
Proposition D.1. Given a metric $d : X \times X \rightarrow \mathbb{R}$ , a binary metric ${d}_{b} : Y \times Y \rightarrow \{ 0,1\}$ and a constant $L > 0$ , there exists a similarity function $\varphi : X \times X \rightarrow \{ 0,1\}$ such that a function $f$ : $\left( {X, d}\right) \rightarrow \left( {Y,{d}_{b}}\right)$ is Lipschitz continuous with constant $L$ if and only if $\varphi \left( {x,{x}^{\prime }}\right) \geq {d}_{b}\left( {f\left( x\right) , f\left( {x}^{\prime }\right) }\right)$ for all $x,{x}^{\prime } \in X$ .
Proof. Define $\varphi \left( {x,{x}^{\prime }}\right) \mathrel{\text{:=}} \mathbb{1}\left\{ {{Ld}\left( {x,{x}^{\prime }}\right) \geq 1}\right\}$ . Then whenever ${d}_{b}\left( {f\left( x\right) , f\left( {x}^{\prime }\right) }\right) = 1$ , we have ${d}_{b}\left( {f\left( x\right) , f\left( {x}^{\prime }\right) }\right) = 1 \leq \varphi \left( {x,{x}^{\prime }}\right)$ if and only if ${d}_{b}\left( {f\left( x\right) , f\left( {x}^{\prime }\right) }\right) \leq {Ld}\left( {x,{x}^{\prime }}\right)$ . And if ${d}_{b}\left( {f\left( x\right) , f\left( {x}^{\prime }\right) }\right) = 0$ , both the Lipschitz inequality and $\varphi \left( {x,{x}^{\prime }}\right) \geq 0$ hold trivially. Conversely, assume that $f$ is not Lipschitz: Then, there exist $x,{x}^{\prime } \in X$ such that $1 = {d}_{b}\left( {f\left( x\right) , f\left( {x}^{\prime }\right) }\right) > {Ld}\left( {x,{x}^{\prime }}\right)$ , implying $0 = \varphi \left( {x,{x}^{\prime }}\right) < {d}_{b}\left( {f\left( x\right) , f\left( {x}^{\prime }\right) }\right) = 1$ . $\square$
We use a BERT-based classifier that acts on a pair $\left( {s,{s}^{\prime }}\right)$ by first tokenizing both $s$ and ${s}^{\prime }$ , padding each token representation to a length of 64, concatenating these tokens and feeding the concatenated token representation into a pretrained bert-base-uncased model. We then apply a linear layer with dropout $\left( {p = {0.1}}\right)$ followed by a Tanh layer and a second linear layer with dropout $\left( {p = {0.1}}\right)$ to obtain one-dimensional logits, to which a sigmoid layer is applied before computing the binary cross-entropy loss. We use BERT rather than more modern models such as RoBERTa [33] and DeBERTa [41], as we have found it to clearly outperform them for our task, plausibly because BERT uses a next-sentence-prediction task during pretraining, which is structurally similar to our task of comparing two sentences. Table D.1 demonstrates the advantage of using BERT, as well as of concatenating token representations rather than learning based on the difference between separately produced BERT features for $s$ and ${s}^{\prime }$ . Unless stated otherwise, our Active Learning approach trains for five epochs on each queried block ${D}_{i}$ before selecting new data ${D}_{i + 1}$ to label.
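The pad-and-concatenate input construction (the "Concat" variant) can be sketched without any model dependencies; the token ids below are illustrative, not real tokenizer output.

```python
def concat_inputs(tokens_s, tokens_s_prime, pad_id=0, max_len=64):
    """Build the 'Concat' input: truncate/pad each sentence's token ids to
    `max_len` and concatenate, yielding a fixed-length sequence of
    2 * max_len ids to feed into the pretrained encoder."""
    def pad(tokens):
        tokens = list(tokens)[:max_len]
        return tokens + [pad_id] * (max_len - len(tokens))
    return pad(tokens_s) + pad(tokens_s_prime)

# Illustrative ids standing in for tokenized s and s'.
ids = concat_inputs([101, 7592, 102], [101, 2088, 102])
# len(ids) == 128; s' always starts at position 64.
```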
---
${}^{7}$ Using the same architecture as for our active learning experiments described in App. D.
---
Table D.1: Different architectures trained for one epoch on 5000 samples from a set of pairs $\left( {s,{s}^{\prime }}\right)$ generated using word replacement, trained to distinguish demographic group transfer within the same category of gender and sexuality, race, and religion from transfer across categories $\left( {\varphi }_{2}\right)$ . "Featurediff" uses a linear model applied to the difference of the model features produced for the respective first tokens of $s$ and ${s}^{\prime }$ . "Bilinear" uses a bilinear model on top of these feature differences instead. "Merge" appends ${s}^{\prime }$ to $s$ before tokenization and learns a linear model on top of the model features for this combined input. "Concat" operates similarly, but first tokenizes $s$ and ${s}^{\prime }$ separately and pads both to 64 tokens before feeding the concatenated tokens into the model. No dropout was used in the post-BERT layers for these experiments. All results are averaged over 10 runs and $\pm$ indicates the difference from the upper/lower bound of a naive 95% confidence interval assuming normally distributed errors.
<table><tr><td>Model</td><td>BA</td></tr><tr><td>BERT-Concat</td><td>86.7</td></tr><tr><td>BERT-Merge</td><td>79.9</td></tr><tr><td>BERT-Featurediff</td><td>67.8</td></tr><tr><td>DeBERTa-Concat</td><td>54.7</td></tr><tr><td>DeBERTa-Merge</td><td>53.2</td></tr><tr><td>DeBERTa-Featurediff</td><td>50.8</td></tr><tr><td>RoBERTa-Concat</td><td>52.1</td></tr><tr><td>RoBERTa-Merge</td><td>50.3</td></tr><tr><td>RoBERTa-Featurediff</td><td>51.1</td></tr><tr><td>BERT-Large-Concat</td><td>84.4</td></tr><tr><td>BERT-Large-Merge</td><td>84.1</td></tr><tr><td>BERT-Large-Featurediff</td><td>59.2</td></tr><tr><td>BERT-Bilinear</td><td>50.7</td></tr></table>
### D.1 Synthetic Data
For active learning, we freeze the underlying BERT model during the active learning selection and only apply MC-Dropout at the level of the classifier head, similar to [14], but unlike them we do not use BALD [42] and instead approximate $p\left( {y \mid s,{s}^{\prime }}\right)$ by averaging the model's predicted probabilities ${p}_{\widehat{\varphi }}\left( {y \mid s,{s}^{\prime }, w}\right)$ over 50 sampled dropout masks $w$ . We call this approach LC-UNC and experimented with various alternative selection criteria. Unlike LC-UNC, LC directly approximates $1 - \mathop{\max }\limits_{y}p\left( {y \mid s,{s}^{\prime }}\right)$ using a single forward pass through $\widehat{\varphi }$ with deactivated dropout. BALD is the approach from [14], while Majority and VARRA approximate $1 - \mathop{\max }\limits_{y}p\left( {y \mid s,{s}^{\prime }}\right)$ using MC-Dropout differently than LC-UNC: In Majority, $p\left( {y \mid s,{s}^{\prime }}\right)$ is approximated as the fraction of dropout samples $w$ for which $\widehat{\varphi } = 1$ , while VARRA averages $1 - \mathop{\max }\limits_{y}{p}_{\widehat{\varphi }}\left( {y \mid s,{s}^{\prime }, w}\right)$ over dropout samples $w$ instead of averaging ${p}_{\widehat{\varphi }}\left( {y \mid s,{s}^{\prime }, w}\right)$ before applying the maximum operator. In addition, Table D.2 contains the "automatic relabeling" condition, in which ${D}_{i}$ is selected from the whole of $C$ rather than just the previously unlabeled examples ${D}_{i} \subset C \smallsetminus \mathop{\bigcup }\limits_{{j < i}}{D}_{j}$ . During training, pairs $\left( {s,{s}^{\prime }}\right)$ that have been queried multiple times are labelled according to the majority vote of all queries, and as 0.5 in case of a tie.
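The three MC-Dropout-based acquisition scores can be computed from the same set of sampled probabilities. This sketch assumes binary labels and a hypothetical list `sample_probs` of ${p}_{\widehat{\varphi }}\left( {y = 1 \mid s,{s}^{\prime }, w}\right)$ values, one per dropout mask $w$.

```python
def acquisition_scores(sample_probs):
    """Compute the LC-UNC, VARRA, and Majority selection criteria from
    MC-Dropout samples of p(y=1 | s, s', w)."""
    n = len(sample_probs)
    mean_p = sum(sample_probs) / n
    # LC-UNC: average probabilities first, then take 1 - max over labels.
    lc_unc = 1 - max(mean_p, 1 - mean_p)
    # VARRA: take 1 - max over labels per sample, then average.
    varra = sum(1 - max(p, 1 - p) for p in sample_probs) / n
    # Majority: approximate p(y=1) as the fraction of positive votes.
    frac_pos = sum(p > 0.5 for p in sample_probs) / n
    majority = 1 - max(frac_pos, 1 - frac_pos)
    return lc_unc, varra, majority

lc_unc, varra, majority = acquisition_scores([0.9, 0.4, 0.8, 0.3])
```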
We validate the efficacy of our active learning approach for learning the similarity function $\varphi \left( {s,{s}^{\prime }}\right)$ with a limited amount of noisy queries. For this, we define two synthetic similarity functions ${\varphi }_{i}, i \in \{ 1,2\}$ . The first, ${\varphi }_{1}$ , is equal to zero whenever a pair $\left( {s,{s}^{\prime }}\right)$ was generated via word replacement and equal to one otherwise, as in the first experiment from the previous section. The second, ${\varphi }_{2}$ , is equal to zero whenever the group $j$ that was removed from $s$ and the group ${j}^{\prime }$ added in ${s}^{\prime }$ are within the same category of gender and sexuality, race, or religion, and equal to one otherwise. For example, a pair $\left( {s,{s}^{\prime }}\right)$ for which markers of "White people" in $s$ were modified to markers of "Black people" in ${s}^{\prime }$ would have ${\varphi }_{2}\left( {s,{s}^{\prime }}\right) = 0$ , while ${\varphi }_{2}\left( {s,{s}^{\prime }}\right)$ would be one if the group was modified to "muslim" in ${s}^{\prime }$ instead. We simulate the label noise introduced by annotators' disagreement by independently flipping each label with probability $p = {0.3}$ while training the similarity classifier $\widehat{\varphi }$ . For training with three instead of one query per data point, we reduce the overall amount of training data from 10000 samples in $C$ to 3333 samples and reduce the probability of flipping labels to $p = {0.216}$ , simulating a majority vote. In turn, the active learning approach selects 333 instead of 1000 data points for labeling in each of its ten steps in that scenario. Table D.2 shows that active learning noticeably outperforms randomly sampling data points for our task, that there is no clear direct benefit from employing multiple queries per pair $\left( {s,{s}^{\prime }}\right) \in C$ over obtaining labels for previously unseen pairs, and that the LC-UNC setup usually performs among the best of the considered selection criteria.
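The reduced flip probability $p = 0.216$ for the three-query condition follows from a majority vote over three independent flips, each occurring with probability 0.3: the majority is wrong when at least two of three labels flip, i.e. $3{p}^{2}\left( {1 - p}\right) + {p}^{3} = 3 \cdot 0.09 \cdot 0.7 + 0.027 = 0.216$. A quick check:

```python
from math import comb

def majority_flip_prob(p, n=3):
    """Probability that a majority of n independent annotations (each
    flipped with probability p) is wrong; for n = 3 this is
    3 p^2 (1 - p) + p^3."""
    k_min = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_min, n + 1))

effective_p = majority_flip_prob(0.3)  # = 0.216 (up to float rounding)
```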
Table D.2: Balanced accuracy for a BERT classifier trained using a constant budget of ${50}\mathrm{k}$ gradient steps and a constant budget of ${10}\mathrm{k}$ queries. All results are averaged over 10 runs and $\pm$ indicates the difference from the upper/lower bound of a naive ${95}\%$ confidence interval assuming normally distributed errors.
<table><tr><td>Method/Dataset</td><td>${\varphi }_{2}$ (Same category)</td><td>${\varphi }_{1}$ (Word replacement)</td></tr><tr><td>Random sampling, 1 query</td><td>${75.1} \pm {3.6}$</td><td>${74.8} \pm {1.8}$</td></tr><tr><td>Random sampling, 3 queries</td><td>${71.6} \pm {3.9}$</td><td>${72.5} \pm {1.5}$</td></tr><tr><td>Random sampling, 5 queries</td><td>${70.7} \pm {2.7}$</td><td>${73.4} \pm {1.8}$</td></tr><tr><td>BALD 1 query</td><td>${75.9} \pm {4.0}$</td><td>${77.9} \pm {2.1}$</td></tr><tr><td>BALD 3 queries</td><td>${73.8} \pm {6.5}$</td><td>${78.1} \pm {1.7}$</td></tr><tr><td>BALD automatic relabeling</td><td>${76.1} \pm {4.5}$</td><td>${77.6} \pm {2.6}$</td></tr><tr><td>LC 1 query</td><td>${79.1} \pm {4.4}$</td><td>${78.5} \pm {1.8}$</td></tr><tr><td>LC 3 queries</td><td>${74.6} \pm {2.4}$</td><td>${79.5} \pm {1.8}$</td></tr><tr><td>LC automatic relabeling</td><td>${73.4} \pm {5.9}$</td><td>${78.2} \pm {1.3}$</td></tr><tr><td>LC-UNC 1 query</td><td>${79.0} \pm {4.9}$</td><td>${79.7} \pm {1.5}$</td></tr><tr><td>LC-UNC 3 queries</td><td>${75.8} \pm {5.4}$</td><td>${78.7} \pm {2.6}$</td></tr><tr><td>LC-UNC automatic relabeling</td><td>${76.6} \pm {3.9}$</td><td>${76.7} \pm {1.5}$</td></tr><tr><td>VARRA 1 query</td><td>${77.3} \pm {7.4}$</td><td>${78.9} \pm {2.1}$</td></tr><tr><td>VARRA 3 queries</td><td>${73.1} \pm {5.7}$</td><td>${79.8} \pm {1.6}$</td></tr><tr><td>VARRA automatic relabeling</td><td>${77.7} \pm {2.9}$</td><td>${78.0} \pm {1.3}$</td></tr><tr><td>Majority 1 query</td><td>${74.9} \pm {3.5}$</td><td>${76.8} \pm {2.4}$</td></tr><tr><td>Majority 3 queries</td><td>${78.7} \pm {5.2}$</td><td>${79.6} \pm {0.9}$</td></tr><tr><td>Majority automatic relabeling</td><td>${74.4} \pm {6.2}$</td><td>${77.9} \pm {1.8}$</td></tr></table>
### D.2 Human Evaluation
Tables D.3 and D.4 show additional results for active learning from human feedback. As above, we tested our approach using different filtering thresholds $t$ on the two test sets $T$ (Table D.3) and $S$ (Table D.4). In the Retrain condition, the classifier $\widehat{\varphi }$ was trained for a single epoch on all labeled data points $\mathop{\bigcup }\limits_{{i < n}}{D}_{i}$ in order to combat potential issues with catastrophic forgetting. In the Retrain + Reweigh condition, the same was done, but the cross-entropy loss was reweighted to balance the empirical label frequencies in $\mathop{\bigcup }\limits_{{i < n}}{D}_{i}$ . In the From Scratch setting, we train a new classifier on $\mathop{\bigcup }\limits_{{i < n}}{D}_{i}$ for 5 epochs from scratch without first training it separately on any ${D}_{i}$ . Again, data points are reweighted according to their empirical frequency in $\mathop{\bigcup }\limits_{{i < n}}{D}_{i}$ in the From Scratch + Reweigh setting.
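One common way to realize such reweighting, sketched below under the assumption (ours, not stated in the text) that each class is weighted inversely to its empirical frequency and the weights are normalized so the average weight over the dataset equals one:

```python
def class_weights(labels):
    """Compute per-class loss weights that balance empirical label
    frequencies: each class gets weight inversely proportional to its
    count, normalized so the dataset-average weight is 1."""
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    k = len(counts)
    return {y: n / (k * c) for y, c in counts.items()}

weights = class_weights([0, 0, 0, 1])
# Minority class 1 gets weight 2.0; majority class 0 gets 2/3,
# so 3 * (2/3) + 1 * 2.0 == 4 and the average weight is 1.
```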
## E Word Lists And Example Generations
Tables E.2-E.4 show five randomly selected example pairs $\left( {s,{s}^{\prime }}\right)$ produced by our style transfer approach and GPT-3 in zero-shot and edit mode. Warning: Some of the example texts contain offensive language.
Table D.3: Results for active learning to predict human fairness judgments, on test data $T$ . Active learning classifiers are retrained 10 times on the last batch ${D}_{6}$ . Results are averaged and $\pm$ indicates the difference from the upper/lower bound of a naive ${95}\%$ confidence interval assuming normally distributed errors.
<table><tr><td>Method</td><td>ACC</td><td>TNR</td><td>TPR</td></tr><tr><td>Baseline: Constant 0</td><td>78.8</td><td>100.0</td><td>0.0</td></tr><tr><td>AL t=0.5</td><td>${79.8} \pm {0.3}$</td><td>${97.2} \pm {0.3}$</td><td>${15.1} \pm {1.2}$</td></tr><tr><td>AL + Relabel t=0.5</td><td>${81.1} \pm {0.3}$</td><td>${95.5} \pm {0.7}$</td><td>${28.6} \pm {2.2}$</td></tr><tr><td>AL + Relabel + Retrain t=0.5</td><td>${79.6} \pm {0.4}$</td><td>${95.3} \pm {1.4}$</td><td>${21.5} \pm {3.9}$</td></tr><tr><td>AL + Relabel + Retrain + Reweigh t=0.5</td><td>${79.6} \pm {0.8}$</td><td>${93.9} \pm {1.6}$</td><td>${26.6} \pm {3.4}$</td></tr><tr><td>From Scratch t=0.5</td><td>${77.5} \pm {1.3}$</td><td>${90.8} \pm {3.3}$</td><td>${28.1} \pm {7.1}$</td></tr><tr><td>From Scratch + Reweigh t=0.5</td><td>${77.7} \pm {1.4}$</td><td>${91.0} \pm {2.7}$</td><td>${28.3} \pm {5.0}$</td></tr><tr><td>AL t=0.1</td><td>${80.0} \pm {0.5}$</td><td>${95.2} \pm {0.7}$</td><td>${23.7} \pm {3.5}$</td></tr><tr><td>AL + Relabel t=0.1</td><td>${80.7} \pm {0.6}$</td><td>${93.0} \pm {0.9}$</td><td>${35.0} \pm {1.3}$</td></tr><tr><td>AL + Relabel + Retrain t=0.1</td><td>${62.1} \pm {5.6}$</td><td>${61.5} \pm {8.9}$</td><td>${64.0} \pm {7.0}$</td></tr><tr><td>AL + Relabel + Retrain + Reweigh t=0.1</td><td>${52.8} \pm {6.2}$</td><td>${46.8} \pm {7.7}$</td><td>${75.0} \pm {4.6}$</td></tr><tr><td>From Scratch t=0.1</td><td>${53.4} \pm {7.9}$</td><td>${48.6} \pm {14.3}$</td><td>${71.1} \pm {9.2}$</td></tr><tr><td>From Scratch + Reweigh t=0.1</td><td>${54.8} \pm {6.7}$</td><td>${51.2} \pm {10.5}$</td><td>${67.9} \pm {9.1}$</td></tr><tr><td>AL t=0.01</td><td>${78.7} \pm {1.1}$</td><td>${87.5} \pm {2.1}$</td><td>${45.7} \pm {1.8}$</td></tr><tr><td>AL + Relabel t=0.01</td><td>${78.3} \pm {0.7}$</td><td>${86.8} \pm {1.5}$</td><td>${46.6} \pm {2.5}$</td></tr><tr><td>AL + Relabel + Retrain t=0.01</td><td>${21.2} \pm {0.1}$</td><td>${0.0} \pm {0.0}$</td><td>${100.0} \pm {0.0}$</td></tr><tr><td>AL + Relabel + Retrain + Reweigh t=0.01</td><td>${21.1} \pm {0.0}$</td><td>${0.0} \pm {0.0}$</td><td>${100.0} \pm {0.0}$</td></tr><tr><td>From Scratch t=0.01</td><td>${21.7} \pm {0.5}$</td><td>${0.0} \pm {0.0}$</td><td>${99.5} \pm {0.6}$</td></tr><tr><td>From Scratch + Reweigh t=0.01</td><td>${21.8} \pm {1.5}$</td><td>${1.5} \pm {3.6}$</td><td>${98.3} \pm {1.7}$</td></tr></table>
Table D.4: Results for active learning to predict human fairness judgments, using the separate test data $S$ . Active learning classifiers are retrained 10 times on the last batch ${D}_{6}$ . Results are averaged and $\pm$ indicates the difference from the upper/lower bound of a naive ${95}\%$ confidence interval assuming normally distributed errors.
<table><tr><td>Method</td><td>ACC</td><td>TNR</td><td>TPR</td></tr><tr><td>Baseline: Constant 0</td><td>96.1</td><td>100.0</td><td>0.0</td></tr><tr><td>AL t=0.5</td><td>${93.8} \pm {0.5}$</td><td>${97.0} \pm {0.6}$</td><td>${14.6} \pm {2.2}$</td></tr><tr><td>AL + Relabel t=0.5</td><td>${92.1} \pm {0.6}$</td><td>${95.1} \pm {0.7}$</td><td>${18.9} \pm {2.7}$</td></tr><tr><td>AL + Relabel + Retrain t=0.5</td><td>${90.7} \pm {1.7}$</td><td>${93.8} \pm {1.9}$</td><td>${12.8} \pm {4.0}$</td></tr><tr><td>AL + Relabel + Retrain + Reweigh t=0.5</td><td>${89.0} \pm {1.3}$</td><td>${92.0} \pm {1.4}$</td><td>${16.4} \pm {3.4}$</td></tr><tr><td>From Scratch t=0.5</td><td>${89.2} \pm {2.6}$</td><td>${91.8} \pm {2.5}$</td><td>${25.7} \pm {5.5}$</td></tr><tr><td>From Scratch + Reweigh t=0.5</td><td>${89.2} \pm {2.5}$</td><td>${91.8} \pm {2.7}$</td><td>${25.7} \pm {4.4}$</td></tr><tr><td>AL t=0.1</td><td>${90.4} \pm {1.3}$</td><td>${93.3} \pm {1.3}$</td><td>${21.0} \pm {2.3}$</td></tr><tr><td>AL + Relabel t=0.1</td><td>${89.6} \pm {0.8}$</td><td>${92.2} \pm {0.8}$</td><td>${24.6} \pm {1.4}$</td></tr><tr><td>AL + Relabel + Retrain t=0.1</td><td>${60.0} \pm {8.1}$</td><td>${59.5} \pm {8.8}$</td><td>${72.8} \pm {11.9}$</td></tr><tr><td>AL + Relabel + Retrain + Reweigh t=0.1</td><td>${46.7} \pm {7.4}$</td><td>${45.2} \pm {8.0}$</td><td>${83.9} \pm {7.6}$</td></tr><tr><td>From Scratch t=0.1</td><td>${50.6} \pm {10.4}$</td><td>${49.8} \pm {11.2}$</td><td>${69.6} \pm {9.3}$</td></tr><tr><td>From Scratch + Reweigh t=0.1</td><td>${55.0} \pm {9.4}$</td><td>${54.5} \pm {10.0}$</td><td>${66.7} \pm {6.6}$</td></tr><tr><td>AL t=0.01</td><td>${80.6} \pm {2.3}$</td><td>${82.3} \pm {2.7}$</td><td>${38.2} \pm {6.8}$</td></tr><tr><td>AL + Relabel t=0.01</td><td>${80.2} \pm {1.3}$</td><td>${85.5} \pm {1.4}$</td><td>${30.0} \pm {2.7}$</td></tr><tr><td>AL + Relabel + Retrain t=0.01</td><td>${3.9} \pm {0.0}$</td><td>${0.0} \pm {0.0}$</td><td>${100.0} \pm {0.0}$</td></tr><tr><td>AL + Relabel + Retrain + Reweigh t=0.01</td><td>${3.9} \pm {0.0}$</td><td>${0.0} \pm {0.0}$</td><td>${100.0} \pm {0.0}$</td></tr><tr><td>From Scratch t=0.01</td><td>${4.6} \pm {0.9}$</td><td>${0.0} \pm {0.1}$</td><td>${99.6} \pm {0.4}$</td></tr><tr><td>From Scratch + Reweigh t=0.01</td><td>${5.4} \pm {3.9}$</td><td>${1.6} \pm {3.2}$</td><td>${50.8} \pm {1.6}$</td></tr></table>
<table><tr><td>Demographic Group</td><td>Descriptors</td><td>Nouns</td></tr><tr><td>Male</td><td>male, manly, masculine</td><td>man, men, grandfather, bro, guy, boy, father, dad, son, husbands, husband, grandpa, brother</td></tr><tr><td>Female</td><td>female, pregnant, feminine, femme, womanly</td><td>woman, women, grandmother, lady, ladies, girl, mother, mom, daughter, wives, wife, grandma, sister</td></tr><tr><td>Transgender</td><td>transsexual, FTM, F2M, MTF, trans, M2F, transgender, trans female, trans feminine, trans femme, trans fem, trans male, trans masculine, trans masc</td><td/></tr><tr><td>Heterosexual</td><td>straight, heterosexual, hetero</td><td/></tr><tr><td>Homosexual</td><td>gay, homosexual, lesbian</td><td/></tr><tr><td>Christian</td><td>Amish, Catholic, Christian, Evangelical, Lutheran, Mormon, Protestant, Unitarian</td><td/></tr><tr><td>Jewish</td><td>Conservative Jewish, Orthodox Jewish, Reform Jewish, Hasidic, Jewish</td><td/></tr><tr><td>Muslim</td><td>Muslim, Shia, Sunni</td><td/></tr><tr><td>Hindu</td><td>Hindu</td><td/></tr><tr><td>Buddhist</td><td>Buddhist</td><td/></tr><tr><td>Atheist</td><td>atheist, irreligious</td><td/></tr><tr><td>Black</td><td>African-American, African, Black, BIPOC</td><td/></tr><tr><td>White</td><td>European-American, European, Caucasian, White</td><td/></tr><tr><td>Asian</td><td>Chinese-American, American-Born Chinese, Chinese, Indian-American, Indian, Korean-American, Korean, Vietnamese-American, Vietnamese, AAPI</td><td/></tr><tr><td>Latino</td><td>Cuban-American, Cuban, Dominican-American, Dominican, Salvadoran-American, Salvadoran, Guatemalan-American, Guatemalan, Mexican-American, Mexican, Filipina-American, Filipina, Filipino-American, Filipino, Hispanic, Latinx, Latine, Latino, Latina, Latin American</td><td/></tr></table>
Table E.1: Lists of words used for word replacement, based on Smith et al. [20].
<table><tr><td>Original Comment</td><td>Modified Comment</td></tr><tr><td>'Hahaha, really? Thousands of years of war and it was mostly white men, huh? Read some history about non-European cultures to pierce your idyllic bubble.'</td><td>'Hahaha, really? Thousands of years of war and it was mostly white European Christians, huh? Read some history about non-European cultures to pierce your idyllic bubble'</td></tr><tr><td>'I would really like to see your source for the Japanese fear of the Russians as being any kind of factor whatsoever. For most of the Japanese people, the Russians were not even on the radar screen!!'</td><td>'I would really like to see your source for the Buddhist fear of the west as being any kind of factor whatsoever. For most of the western people, the west and religion were not even on the radar screen!!'</td></tr><tr><td>'Yes: at least implicitly it endorses the old canard that the United States is a "Christian nation."'</td><td>'Yes: at least implicitly it endorses the old canard that the United States is a "white racist nation."'</td></tr><tr><td>'Please be careful what you label fake news. News of shootings (Greenville?), migrants in Sweden raising crime rates, Hispanics being rapists and murderers, illegal voting, etc those are examples of fake news....feet on couch? not so much....'</td><td>'Please be careful what you label fake news. Hundreds of shootings (Green Mountain?), migrants in Sweden raising crime rates, homosexuals being raped and murderers, illegal voting, etc those are examples of fake news....feet on couch? not so much.'</td></tr><tr><td>'And often they are not so silent. Gay friends have told me about various such clergy who are well known - only no one speaks out about it.'</td><td>'And often they are not so silent. Gay friends have told me about various such men who are well known - only no one speaks out about it.'</td></tr></table>
Table E.2: Examples of pairs produced by our style transfer approach
<table><tr><td>Original Comment</td><td>Modified Comment</td></tr><tr><td>'OK, so now Im confused. Is it or is it not OK to mock a reporters physical handicap? Call a Latina beauty queen "Miss Housekeeping"? Post a mock video showing yourself clocking a woman you dont like with a hard-hit golf ball?'</td><td>'Is it or is it not OK to mock a reporter's physical handicap? Call a Muslim beauty queen "Miss Housekeeping"? Post a mock video showing yourself clocking a woman you don't like with a hard-hit golf ball?'</td></tr><tr><td>"You caught me. I confess. I've been lying this whole time. I'm not actually an atheist, of course I believe in a god. I see him in the mirror every day !"</td><td>"You caught me. I confess. I've been lying this whole time. I'm not actually a homosexual, of course I believe in a god. I see him in the mirror every day !"</td></tr><tr><td>"Denying the existence of racism in America,(or most anywhere), is just ignoring reality. The formal dicrimination has largely been curbed(with exception of using race to design Congressional districts), but those wonderful 'family values' of the white South still carry the mythology of inherent white superiority"</td><td>". Denying the existence of racism in America,(or most anywhere), is just ignoring reality. The formal dicrimination has largely been curbed(with exception of using race to design Congressional districts), but those wonderful 'family values' of the Hindu South still carry the mythology of inherent Hindu"</td></tr><tr><td>'Cite to "the Republicans, Trump being a Republican, who are against gays" please. New one to me. (Other than Trump being registered Republican of course, lol!)'</td><td>'Christian people are against gays.'</td></tr><tr><td>"You'd better watch out if you don't claim Christianity unto yourself like they do. It is true that people harbor resentment over people whom they view as different. It's the only reason I can think of why they hate gay people."</td><td>"You'd better watch out if you don't claim Christianity unto yourself like they do. It is true that people harbor resentment over people whom they view as different. It's the only reason I can think of why they hate Asians."</td></tr></table>
Table E.3: Examples of pairs produced by GPT-3 (zero-shot)
<table><tr><td>Original Comment</td><td>Modified Comment</td></tr><tr><td>'So do they cover their faces? This is about being able to identify women for practical purposes. Makes sense to moi.'</td><td>'So do they cover their faces? This is about being able to identify White people for practical purposes. Makes sense to moi.</td></tr><tr><td>LGBTQI+M(xn) W(omxn)Y(oxth)C(hxldrxn)O(thxrs)'</td><td>'Christianity is the best religion. Christian people are the best people. Jesus Christ was a man who lived a perfect life and died for our sins. We should all be following Jesus because he is the author and finisher of our faith. '</td></tr><tr><td>'Bust Judaism, Christianity and Islam are all Abrahamic religions tracing their origins to the patriarch Abraham. They are all related.'</td><td>'Bust Hispanic people are all the same. They are all related.</td></tr><tr><td>"AO96 doesn't allow men into womens rest rooms or locker rooms. You do realize that trans people have been using the bathroom with you forever right? Have you ever been injured by any of these people in all the years you have been using public facilities?"</td><td>"AO96 doesn't allow white people into black rest rooms or locker rooms. You do realize that trans people have been using the bathroom with you forever right? Have you ever been injured by any of these people in all the years you have been using public facilities?"</td></tr><tr><td>'so.. I . have seen dozens of women in niqab in Toronto this week alone Almost every day I see groups of them. The problem with the niqab policy is that is was for such a silly narrow issue. The niaqb should have been banned from public buildings and shopping malls.'</td><td>'so.. I . have seen dozens of black people in the niqab in Toronto this week alone Almost every day I see groups of them. The problem with the niqab policy is that is was for such a silly narrow issue. The niaqb should have been banned from public buildings and shopping malls. '</td></tr></table>
Table E.4: Examples of pairs produced by GPT-3 (edit mode)
NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/UeYQXtI7nsX/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,165 @@
§ GENERATING INTUITIVE FAIRNESS SPECIFICATIONS FOR NATURAL LANGUAGE PROCESSING
Anonymous Author(s)
Affiliation
Address
email
§ ABSTRACT
Text classifiers have promising applications in high-stakes tasks such as resume screening and content moderation. These classifiers must be fair and avoid discriminatory decisions by being invariant to perturbations of sensitive attributes such as gender or ethnicity. However, there is a gap between human intuition about these perturbations and the formal similarity specifications capturing them. While existing research has started to address this gap, current methods are based on hardcoded word replacements, resulting in specifications with limited expressivity or ones that fail to fully align with human intuition (e.g., in cases of asymmetric counterfactuals). This work proposes novel methods for bridging this gap by discovering expressive and intuitive individual fairness specifications. We show how to leverage unsupervised style transfer and GPT-3's zero-shot capabilities to automatically generate expressive candidate pairs of semantically similar sentences that differ along sensitive attributes. We then validate the generated pairs via an extensive crowdsourcing study, which confirms that many of these pairs align with human intuition about fairness in toxicity classification. We also show how limited amounts of human feedback can be leveraged to learn a similarity specification.
§ 1 INTRODUCTION
Text classifiers are being employed in tasks related to automated hiring [1], content moderation [2] and reducing the toxicity of language models [3]. However, they were shown to exhibit biases based on sensitive attributes, e.g., gender [4] or demographics [5], even for tasks in which these dimensions should be irrelevant. This can lead to unfair decisions, distort analyses based on these classifiers, or propagate undesirable stereotypes to downstream applications. The intuition that certain demographic indicators should not influence decisions can be formalized in terms of individual fairness [6], which posits that similar inputs should be treated similarly. In a classification setting we assume similar treatment for two inputs to require both inputs to be classified the same, while the notion of input similarity captures the intuition that certain input characteristics should not influence model decisions.
Key challenge: generating valid, intuitive and diverse fairness constraints A key challenge for ensuring individual fairness is defining the similarity notion $\phi$ , which can often be contentious, since fairness is a subjective concept, as well as highly task dependent [6, 7]. In text classification, most existing works have cast similarity in terms of word replacement [5, 8-10]. Given a sentence $s$ , a similar sentence ${s}^{\prime }$ is generated by replacing each word in $s$ , that belongs to a list of words ${A}_{i}$ indicative of a demographic group $i$ , by a word from list ${A}_{{i}^{\prime }}$ , indicative of another group ${i}^{\prime } \neq i$ . This approach has several limitations: (i) it relies on exhaustively curated word lists ${A}_{i}$ of sensitive terms, (ii) the expressivity of the generated pairs is limited to word replacements, and (iii) many terms are only indicative of demographic groups in specific contexts, hence directly replacing them with other terms will not always result in a similar pair $\left( {s,{s}^{\prime }}\right)$ according to human intuition. Indeed, word replacement rules can often produce sentence pairs that differ in an axis not relevant to fairness (e.g., by replacing "white house" with "black house"). In addition, they can generate asymmetric counterfactuals [5]: sentence pairs $\left( {s,{s}^{\prime }}\right)$ that look similar but do not warrant similar treatment. For example, in the context of toxicity classification, the text "The movie is so old" may not be considered toxic while "The movie is so gay" clearly is.
Figure 1: Workflow overview. We begin by generating sentence pairs using word replacement, and then add pairs of sentences leveraging style transfer and GPT-3. Then, we use active learning and crowdworker judgments to identify pairs that deserve similar treatment according to human intuition.
This work: generating fairness specifications for text classification The central challenge we consider in this work is generating a diverse set of input pairs that aligns with human intuition about which inputs should be treated similarly in the context of a fixed text classification task. We address this challenge via a three-stage pipeline (Fig. 1). First, we start from a dataset $D$ and generate a set ${C}^{\prime }$ of candidate pairs $\left( {s,{s}^{\prime }}\right)$ by applying word replacement to sentences $s \in D$ . Second, to improve the diversity of pairs, we extend ${C}^{\prime }$ to a larger set $C$ by borrowing ideas from unsupervised style transfer. We change markers of demographic groups, e.g., "women" or "black people" in sentences $s \in D$ by replacing the style classifier in modern unsupervised style transfer methods [11,12] with a classifier trained to identify mentions of demographic groups. In addition, we add pairs from GPT-3 [13], prompted to change markers of demographic groups for sentences in $D$ in a zero-shot fashion. Finally, to identify which of the generated pairs align with human intuition about fairness, we design a crowdsourcing study in which workers are presented with candidate pairs and indicate if the pairs should be treated similarly for the considered classification task or not. We employ active learning similar to [14] to train a BERT-based [15] classifier $\widehat{\varphi }$ to recognize pairs that should be treated similarly using a limited amount of human feedback and obtain a filtered set of pairs ${\widehat{C}}^{ \star } \subseteq C$ . Our pipeline can be used in the context of most text classification tasks and in this work we instantiate it in the context of toxicity classification using a large dataset for online content moderation.
Main contributions We make the following contributions: (i) we introduce a method for generating datasets of diverse candidate pairs for individual fairness specifications, leveraging GPT-3 and unsupervised style transfer to modify demographic attributes mentioned in sentences; (ii) we show that human feedback can be used to train a classifier which automatically identifies pairs that align with human fairness intuitions for a considered downstream task; (iii) we instantiate our framework in the context of toxicity classification, demonstrating that the proposed pairs are more diverse than word replacement pairs only and that crowdsourcing workers agree with more than 75% of them.
§ 2 RELATED WORK
Bias in NLP Early work on bias in NLP has focused on unwanted correlations between the word embeddings of identifiers for protected demographic groups and unrelated categories such as occupations [16, 17]. Recently, language models have been found to harbor stereotypical biases [10, 18-20]. Specific to text classification, identity terms such as "gay" and explicit indicators of gender have been shown to impact the outputs of classifiers trained to identify toxic comments [8] or to predict a person's occupation from their biography [4]. Olteanu et al. [21] demonstrate that human perceptions of the quality of a toxicity classifier can depend on the precise nature of errors made by the classifier, as well as the annotators' previous experiences with hate speech. Blodgett et al. [22] recommend that authors explicitly consider why, how and to whom the biases they identify are harmful.
Language models for data augmentation Ross et al. [23] automatically create contrast sets [24] with a language model perturbing sentences based on control codes, while Rios [25] uses style transfer to change the dialect of African-American Vernacular English tweets to Standard American English to evaluate the dialect sensitivity of toxicity classifiers. Hartvigsen et al. [26] use language models to generate a balanced dataset of benign and toxic comments about minority groups to combat classifiers' reliance on spurious correlations between identity terms and toxicity. Meanwhile, Qian et al. [27] train a perturber model to imitate human rewrites of comments that modify mentions of demographic groups, and demonstrate that their perturber can be used to reduce demographic biases in language models. However, this approach is limited by its reliance on expensive human rewrites and is only used for perturbations along fixed demographic axes such as gender.
Learning fairness notions from data Ilvento [28] provides an algorithm to approximate individual fairness metrics for $N$ datapoints in $O\left( {N\log N}\right)$ queries, which can be practically infeasible. Meanwhile, Mukherjee et al. [29] suggest training a classifier to predict binary fairness judgments on pairs $\left( {s,{s}^{\prime }}\right)$ in order to learn a fairness metric $\phi$ , but restrict themselves to Mahalanobis distances on top of a feature representation $\xi \left( s\right)$ , limiting their expressive power. In contrast to our work, these works do not validate their learned fairness notions with human feedback. To that end, Cheng et al. [30] present an interface to holistically elicit stakeholders' fairness judgments, whereas Wang et al. [31] aim to learn a bilinear fairness metric for tabular data based on clustering human annotations.
§ 3 METHOD
This section presents our end-to-end framework for generating and filtering valid candidate pairs for individual fairness specifications. In Sec. 3.1 we expand on existing word replacement definitions of individual fairness in text classification [5] by implementing three different ways to modify markers of demographic groups mentioned in a sentence $s$ . Then, in Sec. 3.2 we leverage human feedback to learn an approximate similarity function $\widehat{\varphi }$ to identify a set of relevant constraints ${\widehat{C}}^{ \star } \subseteq C$ .
§ 3.1 EXPANDING FAIRNESS CONSTRAINTS
Word Replacement First, we enrich the word replacement method by using the extensive lists of words associated with different protected demographic groups presented in Smith et al. [20]. The pool of terms is substantially larger than the 50 identity terms from Garg et al. [5]. We modify markers of group $j$ in a comment $s$ by replacing all words on the respective list of words associated with group $j$ with words from the list associated with the target group ${j}^{\prime }$ .
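As a minimal sketch, the word-replacement scheme can be implemented as below. The `GROUP_TERMS` lists are short illustrative stand-ins for the much larger lists of Smith et al. [20], and the assumption that the lists are position-aligned across groups is ours, not the paper's:

```python
# Illustrative stand-ins for the word lists of Smith et al. [20];
# position alignment across groups is an assumption for this sketch.
GROUP_TERMS = {
    "female": ["woman", "women", "she", "her"],
    "male":   ["man", "men", "he", "him"],
}

def word_replacement(sentence: str, src: str, tgt: str) -> str:
    """Generate a candidate counterfactual s' by swapping every term
    indicative of group `src` for the aligned term of group `tgt`."""
    src_terms, tgt_terms = GROUP_TERMS[src], GROUP_TERMS[tgt]
    out = []
    for tok in sentence.split():
        core = tok.strip(".,!?")              # keep surrounding punctuation
        suffix = tok[len(core):]
        if core.lower() in src_terms:
            repl = tgt_terms[src_terms.index(core.lower())]
            if core[:1].isupper():            # preserve simple capitalization
                repl = repl.capitalize()
            out.append(repl + suffix)
        else:
            out.append(tok)
    return " ".join(out)
```

In practice the paper additionally filters the resulting pairs with a group-presence classifier; this sketch only covers the replacement step.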
Unsupervised Style Transfer Second, we use an unsupervised style transfer approach based on prototype editing (see [32] for an extensive review) to transform markers of a demographic group $j$ in a sentence $s$ to markers of another demographic group ${j}^{\prime }$ , creating a new sentence ${s}^{\prime }$ . Prototype editing identifies markers $a$ of a source style $A$ in a text $s$ , and substitutes them by markers ${a}^{\prime }$ of a target style ${A}^{\prime }$ . Our approach leverages that modern prototype editing algorithms utilize saliency methods in combination with a style classifier to identify markers of style, and instead uses a RoBERTa-based [33] classifier $c$ trained to identify sentences that mention specific demographic groups $j$ . Combining ideas from [11] and [12], we transform a sentence $s$ to mention demographic attribute ${j}^{\prime }$ instead of $j$ by iteratively masking tokens with large impact on the likelihood ${p}_{c}\left( {j \mid {s}_{m}}\right)$ (initially starting with ${s}_{m} = s$ ) until we reach a certain threshold, and filling the masked tokens using a BART-based [34] group-conditioned generator $g\left( {{s}_{m},{j}^{\prime }}\right)$ trained to fill masks in sentences about group ${j}^{\prime }$ .
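The mask-and-fill loop can be sketched as follows. Here `group_prob` and `fill_masks` are hypothetical stand-ins for the RoBERTa-based group-presence classifier $c$ and the BART-based group-conditioned generator $g$; the greedy token-level saliency and the 0.1 threshold are illustrative assumptions, not the paper's exact procedure:

```python
def transfer_group(sentence, src_group, tgt_group, group_prob, fill_masks,
                   threshold=0.1, max_steps=10):
    """Iteratively mask the token whose removal most reduces the
    group-presence confidence p_c(src_group | s_m), until it drops below
    `threshold`, then fill the masks conditioned on the target group.
    `group_prob(tokens, group)` and `fill_masks(tokens, group)` stand in
    for the classifier c and generator g described in the text."""
    tokens = sentence.split()
    for _ in range(max_steps):
        if group_prob(tokens, src_group) < threshold:
            break
        candidates = [i for i, t in enumerate(tokens) if t != "<mask>"]
        if not candidates:
            break
        # greedy saliency: mask the token whose masking lowers p_c the most
        best_i = min(candidates, key=lambda i: group_prob(
            tokens[:i] + ["<mask>"] + tokens[i + 1:], src_group))
        tokens[best_i] = "<mask>"
    return fill_masks(tokens, tgt_group)
```

A real implementation would use subword tokenization and batched classifier calls; the control flow, however, follows the masking-then-infilling structure described above.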
The unsupervised style transfer approach is likely to reproduce terms encountered during training, helping it to pick up on rare demographic terms that are particular to its training distribution which can be chosen to equal the training distribution for downstream tasks. In addition, unlike concurrent work by Qian et al. [27], unsupervised style transfer only requires labels ${y}_{j}\left( s\right)$ indicating the mention of demographic group $j$ in a sentence $s$ rather than expensive human-written examples of demographic group transfer. This allows us to modify mentions of demographic groups across axes like gender, religion and race, rather than restricting ourselves to changes within these axes.
GPT-3 Lastly, we leverage GPT-3 [13] to transform markers of protected demographic groups. We consider three methods: using GPT-3 standard mode and GPT-3 edit mode to rewrite sentences mentioning group $j$ to mention group ${j}^{\prime }$ in a zero-shot fashion, as well as postprocessing sentences generated by word replacement to fix logical and grammatical inconsistencies with GPT-3 edit mode.
To ensure that mentions of demographic group $j$ were indeed replaced by ${j}^{\prime }$ going from $s$ to ${s}^{\prime }$ , we use the same group-presence classifier $c$ as for the unsupervised style transfer approach to heuristically identify successful group transfer, and discard pairs $\left( {s,{s}^{\prime }}\right)$ for which group transfer failed, for all three of our approaches. Implementation details are described in App. C, and App. E contains examples.
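This filtering heuristic can be sketched in a few lines; `group_prob(text, group)` is a hypothetical stand-in for classifier $c$, and the 0.5 cutoff is an assumed decision threshold:

```python
def group_transfer_succeeded(s, s_prime, src_group, tgt_group, group_prob,
                             threshold=0.5):
    """Keep a pair (s, s') only if the group-presence classifier judges
    that the target group appears in s' while the source group no longer
    does. `group_prob` stands in for classifier c from the text."""
    return (group_prob(s_prime, tgt_group) >= threshold
            and group_prob(s_prime, src_group) < threshold)

def filter_pairs(candidates, src_group, tgt_group, group_prob):
    """Discard candidate pairs for which group transfer failed."""
    return [(s, sp) for s, sp in candidates
            if group_transfer_succeeded(s, sp, src_group, tgt_group, group_prob)]
```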
§ 3.2 LEARNING THE SIMILARITY FUNCTION
In order to evaluate to what extent the proposed similarity criteria align with human intuition, we conduct a crowdsourcing study, described in more detail in Sec. 4, to obtain labels $\varphi \left( {s,{s}^{\prime }}\right)$ which indicate whether a pair $\left( {s,{s}^{\prime }}\right)$ should be treated similarly for the sake of individual fairness $\left( {\varphi \left( {s,{s}^{\prime }}\right) = 0}\right)$ or not $\left( {\varphi \left( {s,{s}^{\prime }}\right) = 1}\right)$. We train a BERT-based [15] probabilistic model ${p}_{\widehat{\varphi }}\left( {s,{s}^{\prime }}\right)$ that predicts values of the similarity function $\varphi \left( {s,{s}^{\prime }}\right)$ for pairs $\left( {s,{s}^{\prime }}\right) \in C$ , and approximate the similarity function $\phi$ as $\widehat{\varphi }\left( {s,{s}^{\prime }}\right) \mathrel{\text{ := }} 1 \Leftrightarrow {p}_{\widehat{\varphi }}\left( {s,{s}^{\prime }}\right) > t$ for a given classification threshold $t$. To make optimal use of costly human queries, we employ active learning when training the classifier $\widehat{\varphi }$ , selecting pairs to label based on the variation ratios $1 - \mathop{\max }\limits_{y}p\left( {y \mid x}\right)$ with $p$ estimated similarly to Grießhaber et al. [14], based on Dropout-based Monte-Carlo [35, 36] applied to our model's classification head. Concretely, we iteratively select new unlabeled training data ${D}_{i} \subset C \smallsetminus \mathop{\bigcup }\limits_{{j < i}}{D}_{j}$ with $\left| {D}_{i}\right| = {1000}$ , based on the variation ratios, query labels for ${D}_{i}$ , and retrain $\widehat{\varphi }$ on ${D}_{i}$ . As different annotators can disagree about whether two sentences $s$ and ${s}^{\prime }$ should be treated similarly, we use a majority vote for evaluation. Inspired by Chen et al. [37]'s approach for dealing with noise in crowdsourcing, we use a single human query per pair $\left( {s,{s}^{\prime }}\right)$ during active learning, and relabel pairs that are likely to be mislabeled after active learning has concluded. See App. D for more details. When learning $\widehat{\varphi }$ is completed, we can define the set of filtered constraints ${\widehat{C}}^{ \star } = \left\{ {\left( {s,{s}^{\prime }}\right) \in C : \widehat{\varphi }\left( {s,{s}^{\prime }}\right) = 0}\right\}$.
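The acquisition step can be sketched with plain NumPy, assuming the $T$ dropout-active forward passes over the candidate pool have already been collected into an array; the array shapes and plain top-$k$ batch selection are our assumptions:

```python
import numpy as np

def variation_ratios(mc_probs):
    """mc_probs: shape (T, N, K) -- T stochastic forward passes with
    dropout kept active, N candidate pairs, K classes. Returns the
    variation ratio 1 - max_y p(y|x) per candidate, with p(y|x)
    estimated as the Monte-Carlo average over the T passes."""
    mean_probs = mc_probs.mean(axis=0)   # (N, K)
    return 1.0 - mean_probs.max(axis=1)  # (N,)

def select_batch(mc_probs, batch_size):
    """Pick the `batch_size` most uncertain candidates to label next."""
    scores = variation_ratios(mc_probs)
    return np.argsort(-scores)[:batch_size]
```

With a batch size of 1000, iterating this selection six times mirrors the six active-learning rounds described above.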
§ 4 EXPERIMENTS
In this section, we experimentally evaluate our framework. Our key findings are: (i) the pairs generated by our method are more diverse compared to word replacement pairs only (Sec. 4.2), while mostly aligning with human intuition about fairness (Sec. 4.3) and (ii) the underlying similarity function $\varphi$ can be approximated by active learning from human judgements (Sec. 4.4).
§ 4.1 DATASET AND SETUP
We focus on toxicity classification on the Jigsaw Civil Comments dataset [38]. The dataset contains around 2 million online comments $s$ with labels $y\left( s\right)$ indicating toxicity. We use a subset ${D}^{\prime } \subset D$ with labels ${A}_{j}\left( s\right)$ that indicate the presence of group $j$ in $s$ for training our group-presence classifier $c$ , and only consider comments $s$ that consist of at most 64 tokens. We construct a set $C$ of 100,000 constraints by applying our different generation approaches to $D$.${}^{1}$ More details on the generation and exact composition of $C$ , as well as example pairs $\left( {s,{s}^{\prime }}\right)$ , can be found in App. C. Throughout this section, whenever we report fairness for a classifier $f$ , we refer to the proportion of pairs $\left( {s,{s}^{\prime }}\right)$ in a test pool of similar pairs for which $f\left( s\right) = f\left( {s}^{\prime }\right)$ rather than $f\left( s\right) \neq f\left( {s}^{\prime }\right)$ .
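The fairness metric just defined is simply the fraction of similar pairs on which the classifier agrees with itself:

```python
def pair_fairness(f, pairs):
    """Fairness as defined in the text: the proportion of similar pairs
    (s, s') for which the classifier gives the same label, f(s) == f(s')."""
    same = sum(1 for s, s_prime in pairs if f(s) == f(s_prime))
    return same / len(pairs)
```

The fairness columns of Table 1 report this quantity (as a percentage) for different test pools of pairs.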
§ 4.2 DIVERSITY OF GENERATED FAIRNESS CONSTRAINTS
To validate that our candidate constraint set $C$ is more diverse than word replacement on its own, we train 4 different toxicity classifiers, using Counterfactual Logit Pairing (CLP) [5] to empirically enforce different constraint sets $C,{C}_{1},{C}_{2},{C}_{3}$ . Here $C$ corresponds to the full constraint set, as described in Sec. 3.1, while the other constraint sets have the same size as $C$ , but contain pairs generated by one method only. In particular, the pairs in ${C}_{1}$ were generated by word replacement using the 50 identity terms from Garg et al. [5] ${}^{2}$ , the pairs in ${C}_{2}$ were generated by word replacement, using the larger list of terms of Smith et al. [20], and the pairs in ${C}_{3}$ were derived by style transfer.
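Counterfactual Logit Pairing augments the task loss with a penalty on prediction gaps across counterfactual pairs. A schematic NumPy version is below; the absolute-difference penalty form and the weight $\lambda = 5$ (mirroring the CLP(5) rows of Table 1) are our reading of Garg et al. [5], not an exact reproduction of their implementation:

```python
import numpy as np

def clp_loss(task_loss, logits_s, logits_s_prime, lam=5.0):
    """Schematic Counterfactual Logit Pairing objective: the usual task
    loss plus a weighted penalty on toxicity-logit gaps between each
    sentence and its counterfactual. The penalty form is assumed here."""
    pairing = np.abs(np.asarray(logits_s) - np.asarray(logits_s_prime)).mean()
    return task_loss + lam * pairing
```

Minimizing this objective pushes the classifier towards identical logits, and hence identical decisions, on each constraint pair.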
${}^{1}$ $C$ contains 42.5K word replacement and style transfer pairs each, and a total of 15K GPT-3 pairs.
${}^{2}$ We did not discard any pairs from ${C}_{1}$ based on the group-presence classifier $c$ .
Table 1: Balanced accuracy and fairness for a RoBERTa-based classifier $f$ trained with CLP using different constraint sets for training. Results are averaged over 5 runs and $\pm$ indicates the difference from the upper/lower bound of a naive ${95}\%$ confidence interval assuming normally distributed errors.
<table><tr><td>Training/Evaluation</td><td>BA</td><td>WR$_{50}$ ($C_1$)</td><td>WR ($C_2$)</td><td>ST ($C_3$)</td><td>Full $C$</td></tr><tr><td>Baseline</td><td>88.4 ± 0.1</td><td>78.4 ± 1.4</td><td>81.3 ± 1.5</td><td>76.7 ± 1.8</td><td>78.5 ± 1.5</td></tr><tr><td>CLP(5) WR$_{50}$ ($C_1$)</td><td>87.0 ± 0.3</td><td><i>98.3 ± 0.1</i></td><td><b>89.1 ± 1.9</b></td><td>86.3 ± 1.9</td><td>87.3 ± 1.8</td></tr><tr><td>CLP(5) WR ($C_2$)</td><td>87.2 ± 0.1</td><td>93.1 ± 1.2</td><td><i>98.2 ± 0.4</i></td><td>90.5 ± 1.7</td><td>92.9 ± 1.2</td></tr><tr><td>CLP(5) ST ($C_3$)</td><td>85.9 ± 0.1</td><td>95.3 ± 0.4</td><td><b>97.1 ± 0.3</b></td><td><i>95.4 ± 0.4</i></td><td>95.5 ± 0.3</td></tr><tr><td>CLP(5) Full $C$</td><td>85.0 ± 3.4</td><td>95.5 ± 0.9</td><td><b>97.8 ± 0.6</b></td><td>94.9 ± 0.9</td><td><i>95.7 ± 0.8</i></td></tr></table>
Table 2: Human evaluation: Answers to questions about comment pairs $\left( {s,{s}^{\prime }}\right)$ . The first number represents the fraction of the answer across all queries, while the second number (in brackets) represents the fraction of comment pairs for which the answer was the majority vote across 9 queries.
<table><tr><td>Metric/Method</td><td>Word replacement</td><td>Style Transfer</td><td>GPT-3</td></tr><tr><td>Unfair: Average American</td><td>84.9 (97.5)</td><td>84.6 (95.8)</td><td>83.4 (95.0)</td></tr><tr><td>Unfair: Own Opinion</td><td>85.9 (97.5)</td><td>85.2 (96.2)</td><td>83.2 (93.7)</td></tr><tr><td>Group Transfer</td><td>89.3 (95.0)</td><td>79.2 (85.4)</td><td>81.9 (89.5)</td></tr><tr><td>Content preservation</td><td>88.1 (100)</td><td>79.2 (91.2)</td><td>78.4 (87.9)</td></tr><tr><td>Same Factuality</td><td>73.0 (84.1)</td><td>76.2 (87.5)</td><td>78.5 (89.1)</td></tr><tr><td>Same Grammaticality</td><td>91.2 (99.1)</td><td>92.9 (97.9)</td><td>92.9 (98.3)</td></tr></table>
We then cross-evaluate the performance of the 4 classifiers trained with these constraint sets in terms of their test-time fairness according to each of the 4 fairness criteria, and their balanced accuracy.
The results in Table 1 show that each classifier achieves high fairness when evaluated on the set of pairs corresponding to the constraints used during its training (numbers in italics) while performing worse on other constraint pairs. While this indicates that adherence to fairness constraints does not always generalize well across our generation methods, we note that training on style transfer pairs ($C$ or ${C}_{3}$) generalizes substantially better to ${C}_{2}$ than training on different word replacement pairs (${C}_{1}$; see the numbers in bold). More details can be found in App. C.
§ 4.3 RELEVANCE OF GENERATED FAIRNESS CONSTRAINTS
To validate that the generated fairness constraints are relevant and intuitive, we conducted a human evaluation with workers recruited via Amazon MTurk. The workers were presented with pairs $\left( {s,{s}^{\prime }}\right)$ consisting of a comment $s$ from the Civil Comments dataset, as well as a modified version ${s}^{\prime }$ , and asked whether they believe that the two comments should be treated similarly and whether they believed that the average American shared their opinion. Treatment was framed in terms of toxicity classification for the sake of content moderation, ensuring that we verify the relevance of the learned notions for this specific task. The workers were also asked whether the demographic group was transferred correctly from a given $j$ to a given ${j}^{\prime }$ , whether the content of $s$ has been preserved in ${s}^{\prime }$ apart from the demographic group transfer, and whether there are differences in factuality and grammaticality between $s$ and ${s}^{\prime }$ . We collected human feedback for a set $S$ containing a total of 720 pairs $\left( {s,{s}^{\prime }}\right)$ , with 240 each being produced by our style transfer approach, GPT-3 in a zero-shot fashion, and word replacement using the list from [5] as for ${C}_{1}$ . These 240 pairs per method were split into 80 pairs for each of the axes male $\leftrightarrow$ female, christian $\leftrightarrow$ muslim and black $\leftrightarrow$ white. Each pair $\left( {s,{s}^{\prime }}\right)$ was shown to nine different workers. Further details can be found in App. B.
Table 2 shows that all three methods mostly produce relevant fairness constraints, according to a majority of annotators. At the same time, they generally successfully modify the mentioned demographic group, and preserve content, factuality and grammaticality. While word replacement generally performs better in terms of group transfer and content preservation, it only has a small advantage in terms of relevance to fairness, perhaps due to its worse performance in terms of factuality: we found examples in which word replacement changed "white house" to "black house", or referred to Obama as "white" rather than "black". These pairs were not seen as fairness constraints by most annotators and were judged poorly in terms of preserving factuality. See App. B.1 for more detailed results.
Table 3: Performance of differently trained classifiers $\widehat{\varphi }$ on the test set $T$ . Active learning classifiers are retrained 10 times on the last batch ${D}_{6}$ . Results are averaged and $\pm$ indicates the difference from the upper/lower bound of a naive 95% confidence interval assuming normally distributed errors.
<table><tr><td>Method</td><td>ACC</td><td>TNR</td><td>TPR</td><td>BA</td></tr><tr><td>Constant Baseline</td><td>78.8</td><td>100.0</td><td>0.0</td><td>50.0</td></tr><tr><td>Active Learning $t=0.5$</td><td>79.8 ± 0.3</td><td>97.2 ± 0.3</td><td>15.1 ± 1.2</td><td>56.1</td></tr><tr><td>Active Learning + Relabel $t=0.5$</td><td>81.1 ± 0.3</td><td>95.5 ± 0.7</td><td>28.6 ± 2.2</td><td>62.0</td></tr><tr><td>Active Learning $t=0.01$</td><td>78.7 ± 1.1</td><td>87.5 ± 2.1</td><td>45.7 ± 1.8</td><td>66.6</td></tr><tr><td>Active Learning + Relabel $t=0.01$</td><td>78.3 ± 0.7</td><td>86.8 ± 1.5</td><td>46.6 ± 2.5</td><td>66.7</td></tr></table>
§ 4.4 LEARNING THE SIMILARITY FUNCTION
We employed our active learning approach to efficiently train a classifier $\widehat{\varphi }$ from relatively few human judgments, with the goal of using it to identify pairs that represent actual fairness constraints on the remaining pool of candidates. We conducted 6 steps of active learning with 1000 queries each and discarded failed queries, ending up with a total of 5490 labeled pairs $\left( {\left( {s,{s}^{\prime }}\right) ,\varphi \left( {s,{s}^{\prime }}\right) }\right)$ . Details on our model architecture and other hyperparameters can be found in App. D. We evaluate our learnt classifier on a test set $T$ consisting of 500 randomly selected pairs from $C$ for which five annotators were asked about the average American's fairness judgment.
Because ${78.8}\%$ of the pairs $\left( {s,{s}^{\prime }}\right)$ in $T$ represented fairness constraints $\left( {\varphi \left( {s,{s}^{\prime }}\right) = 0}\right)$ according to the majority of annotators, we report Balanced Accuracy (BA), in addition to standard accuracy (ACC) and the true positive and negative rates (TPR and TNR). Table 3 displays these metrics for classifiers resulting from our active learning method for different classification thresholds $t$ , with and without relabeling. We observe that $\widehat{\varphi }$ performs substantially better than random, achieving a BA of ${66.7}\%$ when used with an aggressive classification threshold $t$ . The table also validates our relabeling approach: after observing that our classifier was biased towards predicting $\varphi \left( {s,{s}^{\prime }}\right) = 0$ , we collected two additional labels for 500 pairs $\left( {s,{s}^{\prime }}\right)$ for which both the human and the predicted label were equal to zero $\left( {\widehat{\varphi }\left( {s,{s}^{\prime }}\right) = \varphi \left( {s,{s}^{\prime }}\right) = 0}\right)$ , selected based on the variation ratios. ${47}\%$ of these pairs received a majority vote of $\varphi \left( {s,{s}^{\prime }}\right) = 1$ , showing that our approach correctly identified pairs that were likely to be mislabeled. Retraining our classifier on the updated majority votes also substantially increased TPR at little cost to TNR, especially for balanced classification thresholds $t$ close to 0.5.
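For reference, balanced accuracy is simply the mean of TPR and TNR computed from the confusion counts, which is why a constant predictor on a 78.8%-negative test set scores BA = 50 despite ACC = 78.8:

```python
def balanced_accuracy(tp, fn, tn, fp):
    """Balanced accuracy: the mean of the true positive rate and the
    true negative rate, the BA metric reported in Table 3."""
    tpr = tp / (tp + fn)
    tnr = tn / (tn + fp)
    return 0.5 * (tpr + tnr)
```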
According to a qualitative evaluation, most of the sentence pairs $\left( {s,{s}^{\prime }}\right)$ predicted to not represent fairness constraints $\left( {\widehat{\varphi }\left( {s,{s}^{\prime }}\right) = 1}\right)$ had the words "boy" or "man" replaced by terms denoting identity membership. Such sentence pairs were often not seen as fairness constraints by our annotators, as the inclusion of the identity term can be interpreted as aggressive or mocking. $\widehat{\varphi }$ also successfully identified sentence pairs $\left( {s,{s}^{\prime }}\right)$ for which ${s}^{\prime }$ was unrelated to $s$ , that were sometimes produced by GPT-3, as not representing fairness constraints. Additional results and details can be found in App. D.
§ 5 CONCLUSION
We proposed a framework for producing expressive and intuitive specifications for individual fairness in text classification. We experimentally demonstrated that our constraints are indeed more expressive than previous constraints based on word replacement, and validated that most of the generated fairness constraints were relevant in the context of toxicity classification according to human annotators. In addition, we used active learning to demonstrate that human fairness judgments can be predicted using limited amounts of training data. In future work, we plan to utilize the generated filtered constraints to train a fair downstream toxicity classifier with a better trade-off between accuracy and fairness.
NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/V7TaczasnAk/Initial_manuscript_md/Initial_manuscript.md
ADDED
The diff for this file is too large to render.
See raw diff
NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/V7TaczasnAk/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,207 @@
§ FORGETTING DATA FROM PRE-TRAINED GANS
|
| 2 |
+
|
| 3 |
+
Anonymous Author(s)
|
| 4 |
+
|
| 5 |
+
Affiliation
|
| 6 |
+
|
| 7 |
+
Address
|
| 8 |
+
|
| 9 |
+
email
|
| 10 |
+
|
| 11 |
+
§ ABSTRACT
|
| 12 |
+
|
| 13 |
+
Large pre-trained generative models are known to occasionally output undesirable samples, which undermines their trustworthiness. The common way to mitigate this is to re-train the model from scratch with different data or different regularization, which uses substantial computational resources and does not always fully address the problem. In this work, we take a different, more compute-friendly approach and investigate how to post-edit a model after training so that it "forgets", or refrains from outputting, certain kinds of samples. We show that forgetting is different from data deletion, and data deletion may not always lead to forgetting. We then consider Generative Adversarial Networks (GANs), and provide three different algorithms for data forgetting that differ in how the samples to be forgotten are described. Extensive evaluations on real-world image datasets show that our algorithms outperform data-deletion baselines, and are capable of forgetting data while retaining high generation quality at a fraction of the cost of full re-training.
|
| 14 |
+
|
| 15 |
+
§ 1 INTRODUCTION
|
| 16 |
+
|
| 17 |
+
Generative Adversarial Networks (GANs) are large neural generative models that learn a complicated probability distribution from data and then generate samples from it. These models have been immensely successful in many large scale tasks from multiple domains, such as images [Zhu et al., 2020, Karras et al., 2020, 2021], point clouds [Zhang et al., 2021], video [Tulyakov et al., 2018], text [de Masson d'Autume et al., 2019], and speech [Kong et al., 2020].
|
| 18 |
+
|
| 19 |
+
However, it is also well-known that many deep generative models frequently output undesirable samples, which makes them less reliable and trustworthy. Image models generate blurred samples [Kaneko and Harada, 2021] or checkerboard artifacts [Odena et al., 2016, Zhang et al., 2019, Wang et al., 2020, Schwarz et al., 2021], speech models produce unnatural sound [Donahue et al., 2018, Thiem et al., 2020], and language models emit offensive text [Abid et al., 2021, Perez et al., 2022]. Thus, an important question is how to mitigate these artifacts, which would improve the trustworthiness of these models.
|
| 20 |
+
|
| 21 |
+
One way to mitigate undesirable samples is to re-design the entire training pipeline including data augmentation, model architecture and loss functions, and then re-train the entire model from scratch [Isola et al., 2017, Aitken et al., 2017, Kaneko and Harada, 2021] - a strategy that has been used in prior work. This approach is very compute-intensive as modern GANs can be extremely expensive to train. In addition, other problems may become apparent after training, and resolving them may require multiple re-trainings. To address this challenge, we consider post-editing, which means modifying a pre-trained model in a certain way rather than training it differently from scratch. This is a much more computationally efficient process that has shown empirical success in many supervised learning tasks [Frankle and Carbin, 2018, Zhou et al., 2021, Taha et al., 2021], but has not been studied much for unsupervised learning. In particular, we propose a post-editing framework to forget undesirable samples that might be generated by a GAN, which we call data forgetting.
|
| 22 |
+
|
| 23 |
+
A second plausible solution for mitigating undesirable samples is to use a classifier to filter them out after generation. This approach, however, has several drawbacks. Classifiers can consume a significant amount of space and time after deployment. Additionally, if the generative model is handed to a third party, then the model trainer has no control over whether the filter will ultimately be used. Data forgetting via post-editing, on the other hand, offers a cleaner solution that does not suffer from these limitations.
|
| 24 |
+
|
| 25 |
+
A third plausible solution is data deletion or machine unlearning - post-edit the model to approximate a re-trained model that is obtained by re-training from scratch after removing the undesirable samples from the training data. However, this does not always work - as we show in Section E.3, deletion does not necessarily lead to forgetting in constrained models. Additionally, the undesirable samples may simply be artifacts of the neural generative model and may not exist in the training data; examples include unnatural sounds emitted by speech models and blurred images from image models. Data forgetting, in contrast, can address all these challenges.
|
| 26 |
+
|
| 27 |
+
There are two major technical challenges that we need to resolve in order to do effective data forgetting. The first is how to describe the samples to be forgotten. This is important as data forgetting algorithms need to be tailored to specific descriptions. The second challenge is that we need to carefully balance data forgetting with retaining good generation quality, which means the latent space and the networks must be carefully manipulated.
|
| 28 |
+
|
| 29 |
+
In this work, we propose a systematic framework for forgetting data from pre-trained generative models (see Section 2). We model data forgetting as learning the data distribution restricted to the complement of a forgetting set $\Omega$ . We then formalize three ways of describing forgetting sets, namely data-based (where a pre-specified set is given), validity-based (where there is a validity checker), and classifier-based (where there is a differentiable classifier).
|
| 30 |
+
|
| 31 |
+
Then, we introduce three data forgetting algorithms, one for each description (see Section 3). Prior works have looked at avoiding negative samples in the re-training setting with different descriptions and purposes [Sinha et al., 2020, Asokan and Seelamantula, 2020]. They introduce fake distributions to penalize the generation of negative samples. We extend this idea to data forgetting by defining the fake distribution as a mixture of the generative distribution and a forgetting distribution supported on $\Omega$ . We prove the optimal generator can recover the target distribution when label smoothing [Salimans et al., 2016, Szegedy et al., 2016, Warde-Farley and Goodfellow, 2016] is used.
|
| 32 |
+
|
| 33 |
+
Based on our theory, we introduce the data-based forgetting algorithm (Alg. 1). We then combine this algorithm with an improper active learning algorithm by Hanneke et al. [2018] and introduce the validity-based forgetting algorithm (Alg. 2). Finally, we propose to use a guide function to guide the discriminator via a classifier, and introduce the classifier-based forgetting algorithm (Alg. 3).
|
| 34 |
+
|
| 35 |
+
Finally, we empirically evaluate these forgetting algorithms via experiments on real-world image datasets (see Section 4). We show that these algorithms can forget quickly while keeping high generation quality. We then investigate applications of data forgetting, and use our algorithms to remove different biases that may not exist in the training set but are learned by the pre-trained model. This demonstrates that data forgetting can be used to reduce biases and improve generation quality, and hence improve the trustworthiness of generative models.
|
| 36 |
+
|
| 37 |
+
In summary, our contributions are as follows:
|
| 38 |
+
|
| 39 |
+
* We formalize the problem of post-editing generative models to prevent them from outputting undesirable samples as "data forgetting" and establish its differences with data deletion.
|
| 40 |
+
|
| 41 |
+
* We propose three data augmentation-based algorithms for forgetting data from pre-trained GANs as a function of how the inputs to be forgotten are described.
|
| 42 |
+
|
| 43 |
+
* We theoretically prove that data forgetting can be achieved by the proposed algorithms.
|
| 44 |
+
|
| 45 |
+
* We extensively evaluate our algorithms on real-world image datasets. We show these algorithms can forget data quickly while retaining high generation quality. Moreover, we find data forgetting performs better than data deletion in a de-biasing experiment.
|
| 46 |
+
|
| 47 |
+
§ 2 A FORMAL FRAMEWORK FOR DATA FORGETTING
|
| 48 |
+
|
| 49 |
+
Let ${p}_{\text{ data }}$ be the data distribution on ${\mathbb{R}}^{d}$ and $X \sim {p}_{\text{ data }}$ be i.i.d. training samples. Let $\mathcal{A}$ be a generative model and $\mathcal{M} = \mathcal{A}\left( X\right)$ be the pre-trained model on $X$ , which learns ${p}_{\text{ data }}$ . In this paper,
|
| 50 |
+
|
| 51 |
+
we consider $\mathcal{A}$ to be a GAN learning algorithm [Goodfellow et al.,2014a], and $\mathcal{M}$ contains two networks, $D$ (discriminator) and $G$ (generator), which are jointly trained to optimize
|
| 52 |
+
|
| 53 |
+
$$
|
| 54 |
+
\mathop{\min }\limits_{G}\mathop{\max }\limits_{D}{\mathbb{E}}_{x \sim {p}_{\text{ data }}}\log D\left( x\right) + {\mathbb{E}}_{z \sim \mathcal{N}\left( {0,I}\right) }\log \left( {1 - D\left( {G\left( z\right) }\right) }\right) . \tag{1}
|
| 55 |
+
$$
|
| 56 |
+
|
| 57 |
+
§ 2.1 DATA FORGETTING FRAMEWORK
|
| 58 |
+
|
| 59 |
+
Let the forgetting set $\Omega \subset {\mathbb{R}}^{d}$ be the set of samples we would like the model to forget. Formally, the goal is to develop a forgetting algorithm $\mathcal{D}$ such that ${\mathcal{M}}^{\prime } = \mathcal{D}\left( {\mathcal{M},\Omega }\right)$ learns the data distribution restricted to the complement $\bar{\Omega } = {\mathbb{R}}^{d} \smallsetminus \Omega$ , i.e. ${\left. {p}_{\text{ data }}\right| }_{\bar{\Omega }}$ . Examples of $\Omega$ include inconsistent, blurred, unrealistic, or banned samples that are possibly generated by the model.
|
| 60 |
+
|
| 61 |
+
The forgetting set $\Omega$ , in addition to the pre-trained model, is considered an input to the forgetting algorithm. We consider three kinds of $\Omega$ : data-based, validity-based, and classifier-based.
|
| 62 |
+
|
| 63 |
+
§ 2.2 FORGETTING SET DESCRIPTIONS
|
| 64 |
+
|
| 65 |
+
We propose three different descriptions for the forgetting set $\Omega$ . First, the data-based $\Omega$ is a predefined set of samples in ${\mathbb{R}}^{d}$ , such as a transformation applied on all training samples [Sinha et al., 2020]. Second, the validity-based $\Omega$ is defined as all invalid samples according to a validity function $\mathbf{v} : {\mathbb{R}}^{d} \rightarrow \{ 0,1\}$ , where 0 means invalid and 1 means valid. This is similar to the setting in Hanneke et al. [2018]. Finally, let $\mathbf{f} : {\mathbb{R}}^{d} \rightarrow \left\lbrack {0,1}\right\rbrack$ be a soft classifier that outputs the probability that a sample belongs to a certain binary class, and $\tau \in \left( {0,1}\right)$ be a threshold. Then, the classifier-based $\Omega$ is defined as $\{ x : \mathbf{f}\left( x\right) < \tau \}$ . For example, $\mathbf{f}$ can be an offensive text classifier in language generation tasks [Pitsilis et al., 2018].
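As a toy illustration, the three descriptions can all be phrased as membership tests for $\Omega$. The checker `v`, sigmoid "classifier" `f`, and threshold below are hypothetical 1-D stand-ins, not objects from the paper:

```python
import numpy as np

# Data-based: Omega is an explicit, pre-specified set of samples.
omega_data = np.array([0.2, 0.7, 1.3])

def v(x):
    # Validity-based: Omega = {x : v(x) = 0}; here out-of-range samples are invalid.
    return 0 if abs(x) > 1.5 else 1

def f(x):
    # Classifier-based: a toy sigmoid standing in for a soft classifier.
    return 1.0 / (1.0 + np.exp(-x))

tau = 0.5  # threshold for the classifier-based description

def in_omega_validity(x):
    return v(x) == 0

def in_omega_classifier(x):
    # Omega = {x : f(x) < tau}
    return f(x) < tau
```

Note that the data-based description enumerates $\Omega$ directly, while the other two only give query access through `v` or `f`; this is what forces the different algorithms in Section 3.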
|
| 66 |
+
|
| 67 |
+
§ 2.3 DATA DELETION VERSUS DATA FORGETTING
|
| 68 |
+
|
| 69 |
+
Motivated by privacy laws such as the GDPR and the CCPA, there has been a recent body of work on data deletion or machine unlearning [Cao and Yang, 2015, Guo et al., 2019, Schelter, 2020, Neel et al., 2021, Sekhari et al., 2021, Izzo et al., 2021, Ullah et al., 2021]. In data deletion, we are given a subset ${X}^{\prime } \subset X$ of the training set to be deleted from an already-trained model, and the goal is to approximate the re-trained model $\mathcal{A}\left( {X \smallsetminus {X}^{\prime }}\right)$ . While there are superficial similarities, in that the goal in both cases is to post-edit models in order to "remove" a few data points, there are two very important differences.
|
| 70 |
+
|
| 71 |
+
The first is that data forgetting requires the model to assign zero likelihood to the forgetting set $\Omega$ in order to avoid generating samples from this region; this is not the case in data deletion - in fact, we present an example below which shows that data deletion of a set ${X}^{\prime }$ may not cause a generative model to forget ${X}^{\prime }$ .
|
| 72 |
+
|
| 73 |
+
Specifically, in Fig. 1, the entire data distribution ${p}_{\text{ data }} = \mathcal{N}\left( {0,1}\right)$ (blue line) is the standard Gaussian distribution on $\mathbb{R}$ . We set the forgetting set $\Omega = \left( {-\infty , - {1.5}\rbrack \cup \lbrack {1.5},\infty }\right)$ , so the blue samples fall in $\Omega$ and orange samples outside. The learning algorithm $\mathcal{A}$ is the maximum likelihood Gaussian learner that fits the mean and variance of the data. With $n = {80}$ samples, the learnt density $\mathcal{A}\left( X\right)$ is shown in green. If the blue samples were deleted, and the model re-fitted, the newly learnt density $\mathcal{A}\left( {X \smallsetminus {X}^{\prime }}\right)$ would be the red line. Notice that this red line has considerable density on the blue points - and so these points are not forgotten. In contrast, the correct forgetting solution that forgets the samples in $\Omega$ would be the orange density. Thus deletion does not necessarily lead to forgetting.
|
| 74 |
+
|
| 75 |
+
The second difference is that the forgetting set $\Omega$ may not intersect the training data at all, yet may still appear in the generated data due to artifacts of the model. Examples include unnatural sounds emitted by speech models, and blurred images from image models. Data forgetting, in contrast to data deletion, can address this challenge.
|
| 76 |
+
|
| 77 |
+
§ 3 METHODS
|
| 78 |
+
|
| 79 |
+
In this section, we describe algorithms for each kind of forgetting set described in Section 2. We also provide theory on the optimality of the generator and the discriminator. Finally, we generalize the algorithms to situations where we would like the model to forget the union of multiple forgetting sets.
|
| 80 |
+
|
| 81 |
+
|
| 82 |
+
|
| 83 |
+
Figure 1: An example showing difference between data forgetting and data deletion. The goal of data deletion is to approximate the re-trained model (red density), while the goal of data forgetting is to approximate the restricted density (orange density).
|
| 84 |
+
|
| 85 |
+
§ 3.1 DATA-BASED FORGETTING SET
|
| 86 |
+
|
| 87 |
+
The data-based forgetting set $\Omega$ is a pre-defined set of samples we would like the model to forget. One example is a transformation function NegAug applied to all training samples, where NegAug makes realistic images unrealistic or inconsistent [Sinha et al., 2020]. Another example is visually plausible samples outside the data manifold when the training set is small [Asokan and Seelamantula, 2020].
|
| 88 |
+
|
| 89 |
+
In our framework, the forgetting set $\Omega$ can be any set of carefully designed or selected samples, depending on the purpose of forgetting them - which includes but is not limited to improving the generation quality of the model. For example, we expect the model to improve on fairness, bias, ethics, or privacy when $\Omega$ is properly constructed from unfair, biased, unethical, or atypical samples.
|
| 90 |
+
|
| 91 |
+
To forget $\Omega$ , we regard both generated samples and $\Omega$ to be fake samples, and all training samples that are not in $\Omega$ to be real samples [Sinha et al.,2020, Asokan and Seelamantula,2020]. Let ${p}_{\Omega }$ be a distribution such that $\operatorname{supp}\left( {p}_{\Omega }\right) = \Omega$ . Then, the fake data distribution ${p}_{\text{ fake }}$ is a mixture of the generative distribution ${p}_{G} = G\# \mathcal{N}\left( {0,I}\right)$ and ${p}_{\Omega }$ :
|
| 92 |
+
|
| 93 |
+
$$
|
| 94 |
+
{p}_{\text{ fake }} = \lambda \cdot {p}_{G} + \left( {1 - \lambda }\right) \cdot {p}_{\Omega }, \tag{2}
|
| 95 |
+
$$
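Sampling from the fake mixture in (2) amounts to flipping a $\lambda$-weighted coin between the generator and ${p}_{\Omega}$. The toy 1-D "generator" and the choice $\Omega = [2, 3]$ below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 0.8  # mixture weight lambda in (0, 1)

def sample_generator(n):
    # stand-in for p_G, the generator's push-forward of N(0, I)
    return rng.normal(0.0, 1.0, size=n)

def sample_omega(n):
    # stand-in for p_Omega, here U(Omega) with Omega = [2, 3]
    return rng.uniform(2.0, 3.0, size=n)

def sample_fake(n):
    # draw from p_fake = lam * p_G + (1 - lam) * p_Omega componentwise
    from_g = rng.random(n) < lam
    return np.where(from_g, sample_generator(n), sample_omega(n))

batch = sample_fake(10_000)
# roughly (1 - lam) plus the generator's own small mass on [2, 3]
frac_in_omega = float(np.mean((batch >= 2.0) & (batch <= 3.0)))
```

Treating this mixture as "fake" is what penalizes the discriminator, and hence the generator, for any mass placed on $\Omega$.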
|
| 96 |
+
|
| 97 |
+
where $\lambda \in \left( {0,1}\right)$ is a hyperparameter. We also apply label smoothing [Salimans et al.,2016, Szegedy et al., 2016, Warde-Farley and Goodfellow, 2016] techniques to the minimax loss function in order to improve robustness. Let ${\alpha }_{ + } \in \left( {\frac{1}{2},1}\right\rbrack$ be the positive target (such as 0.9) and ${\alpha }_{ - } \in \left\lbrack {0,\frac{1}{2}}\right)$ be the negative target (such as 0.1 ). Then, the loss function is
|
| 98 |
+
|
| 99 |
+
$$
|
| 100 |
+
L\left( {G,D}\right) = {\mathbb{E}}_{x \sim {p}_{\text{ data }}{ \mid }_{\bar{\Omega }}}\left\lbrack {{\alpha }_{ + }\log D\left( x\right) + \left( {1 - {\alpha }_{ + }}\right) \log \left( {1 - D\left( x\right) }\right) }\right\rbrack \tag{3}
|
| 101 |
+
$$
|
| 102 |
+
|
| 103 |
+
$$
|
| 104 |
+
+ {\mathbb{E}}_{x \sim {p}_{\text{ fake }}}\left\lbrack {{\alpha }_{ - }\log D\left( x\right) + \left( {1 - {\alpha }_{ - }}\right) \log \left( {1 - D\left( x\right) }\right) }\right\rbrack .
|
| 105 |
+
$$
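A minimal numpy version of the label-smoothed objective (3), operating on batches of discriminator outputs, can be sketched as follows (purely illustrative; the paper's $D$ is a neural network, not an array of scores):

```python
import numpy as np

alpha_pos, alpha_neg = 0.9, 0.1  # smoothed targets alpha_+ and alpha_-

def smoothed_bce(d_vals, alpha):
    # E[ alpha * log D(x) + (1 - alpha) * log(1 - D(x)) ] over a batch
    d_vals = np.clip(d_vals, 1e-7, 1.0 - 1e-7)  # numerical safety
    return float(np.mean(alpha * np.log(d_vals)
                         + (1.0 - alpha) * np.log(1.0 - d_vals)))

def loss_L(d_real, d_fake):
    # objective (3): real outputs are pushed toward alpha_+, fake toward alpha_-
    return smoothed_bce(d_real, alpha_pos) + smoothed_bce(d_fake, alpha_neg)
```

Each smoothed term is maximized when $D(x)$ equals its target ($\alpha_+$ on real data, $\alpha_-$ on fake data), which is what keeps the optimal discriminator in Theorem 1 bounded away from 0 and 1.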
|
| 106 |
+
|
| 107 |
+
Theorem 1. The optimal solution to $\mathop{\min }\limits_{G}\mathop{\max }\limits_{D}L\left( {G,D}\right)$ is
|
| 108 |
+
|
| 109 |
+
$$
|
| 110 |
+
{D}^{ * } = \frac{{\alpha }_{ + }{\left. {p}_{\text{ data }}\right| }_{\bar{\Omega }} + {\alpha }_{ - }\left( {\lambda {p}_{G} + \left( {1 - \lambda }\right) {p}_{\Omega }}\right) }{{\left. {p}_{\text{ data }}\right| }_{\bar{\Omega }} + \lambda {p}_{G} + \left( {1 - \lambda }\right) {p}_{\Omega }},\qquad {p}_{{G}^{ * }} = {\left. {p}_{\text{ data }}\right| }_{\bar{\Omega }}. \tag{4}
|
| 111 |
+
$$
|
| 112 |
+
|
| 113 |
+
We provide the proof and theoretical extension to the more general $f$ -GAN [Nowozin et al.,2016] setting in Appendix B. In the data-based setting, we let ${p}_{\Omega } = \mathcal{U}\left( \Omega \right)$ , the uniform distribution on $\Omega$ . We assume $\Omega$ has positive, finite Lebesgue measure in ${\mathbb{R}}^{d}$ so that $\mathcal{U}\left( \Omega \right)$ is well-defined. The proposed method is summarized in Alg. 1.
|
| 114 |
+
|
| 115 |
+
Our objective function is connected to Sinha et al. [2020] and Asokan and Seelamantula [2020] in the sense that ${p}_{\Omega }$ is an instance of the negative distribution described in their frameworks. However, there are several significant differences between our method and theirs: (1) we start from a pre-trained model, (2) we aim to learn ${\left. {p}_{\text{ data }}\right| }_{\bar{\Omega }}$ rather than ${p}_{\text{ data }}$ and therefore do not require $\Omega \cap \operatorname{supp}\left( {p}_{\text{ data }}\right)$ to be empty, and (3) we use label smoothing techniques to improve robustness and provide theory for this setting. These differences also hold in the following sections.
|
| 116 |
+
|
| 117 |
+
§ 3.2 VALIDITY-BASED FORGETTING SET
|
| 118 |
+
|
| 119 |
+
Let $\mathbf{v} : {\mathbb{R}}^{d} \rightarrow \{ 0,1\}$ be a validity function that indicates whether a sample is valid. Then, validity-based forgetting set $\Omega$ is the set of all invalid samples $\{ x : \mathbf{v}\left( x\right) = 0\}$ . For example, $\mathcal{M}$ is a
|
| 120 |
+
|
| 121 |
+
Algorithm 1 Forgetting Algorithm for Data-based Forgetting Set
|
| 122 |
+
|
| 123 |
+
Inputs: Pre-trained model $\mathcal{M} = \left( {{G}_{0},{D}_{0}}\right)$ , train set $X$ , forgetting set $\Omega$ .
|
| 124 |
+
|
| 125 |
+
Initialize $G = {G}_{0},D = {D}_{0}$ .
|
| 126 |
+
|
| 127 |
+
Define the fake data distribution ${p}_{\text{ fake }}$ according to (2) with ${p}_{\Omega } = \mathcal{U}\left( \Omega \right)$ .
|
| 128 |
+
|
| 129 |
+
Train $G,D$ to optimize (3): $\mathop{\min }\limits_{G}\mathop{\max }\limits_{D}L\left( {G,D}\right)$ .
|
| 130 |
+
|
| 131 |
+
return ${\mathcal{M}}^{\prime } = \left( {G,D}\right)$ .
|
| 132 |
+
|
| 133 |
+
Algorithm 2 Forgetting Algorithm for Validity-based Forgetting Set
|
| 134 |
+
|
| 135 |
+
Inputs: Pre-trained model $\mathcal{M} = \left( {{G}_{0},{D}_{0}}\right)$ , train set $X$ , validity function $\mathbf{v}$ .
|
| 136 |
+
|
| 137 |
+
Initialize ${\Omega }^{\prime } = \{ x \in X : \mathbf{v}\left( x\right) = 0\} ,{\mathcal{M}}_{0} = \mathcal{M}$ .
|
| 138 |
+
|
| 139 |
+
for $i = 0,\cdots ,R - 1$ do
|
| 140 |
+
|
| 141 |
+
Initialize $G = {G}_{i},D = {D}_{i}$ . Draw $T$ samples ${X}_{\text{ gen }}^{\left( i\right) }$ from ${G}_{i}$ .
|
| 142 |
+
|
| 143 |
+
Query $\mathbf{v}$ and add invalid samples to ${\Omega }^{\prime } : {\Omega }^{\prime } \leftarrow {\Omega }^{\prime } \cup \left\{ {x \in {X}_{\text{ gen }}^{\left( i\right) } : \mathbf{v}\left( x\right) = 0}\right\}$ .
|
| 144 |
+
|
| 145 |
+
Define the fake data distribution ${p}_{\text{ fake }}$ according to (2) with ${p}_{\Omega } = \mathcal{U}\left( {\Omega }^{\prime }\right)$ .
|
| 146 |
+
|
| 147 |
+
Let ${\mathcal{M}}_{i + 1} = \left( {{G}_{i + 1},{D}_{i + 1}}\right)$ optimize (3): $\mathop{\min }\limits_{G}\mathop{\max }\limits_{D}L\left( {G,D}\right)$ .
|
| 148 |
+
|
| 149 |
+
end for
|
| 150 |
+
|
| 151 |
+
return ${\mathcal{M}}^{\prime } = \left( {{G}_{R},{D}_{R}}\right)$
|
| 152 |
+
|
| 153 |
+
Algorithm 3 Forgetting Algorithm for Classifier-based Forgetting Set
|
| 154 |
+
|
| 155 |
+
Inputs: Pre-trained model $\mathcal{M} = \left( {{G}_{0},{D}_{0}}\right)$ , train set $X$ , differentiable classifier $\mathbf{f}$ .
|
| 156 |
+
|
| 157 |
+
Initialize $G = {G}_{0},D = {D}_{0}$ .
|
| 158 |
+
|
| 159 |
+
Define the fake data distribution ${p}_{\text{ fake }}$ according to (2) with ${p}_{\Omega } = \mathcal{U}\left( {\{ x \in X : \mathbf{f}\left( x\right) < \tau \} }\right)$ .
|
| 160 |
+
|
| 161 |
+
Train $G,D$ to optimize (3): $\mathop{\min }\limits_{G}\mathop{\max }\limits_{D}L\left( {G,\operatorname{guide}\left( {D,\mathbf{f}}\right) }\right)$ , where $\operatorname{guide}\left( {\cdot , \cdot }\right)$ is defined in (6).
|
| 162 |
+
|
| 163 |
+
return ${\mathcal{M}}^{\prime } = \left( {G,D}\right)$ .
|
| 164 |
+
|
| 165 |
+
code generation model, and $\mathbf{v}$ is a compiler that indicates whether the code is free of syntax errors [Hanneke et al.,2018]. Different from the data-based setting, the validity-based $\Omega$ may have infinite Lebesgue measure, such as a halfspace, and consequently $\mathcal{U}\left( \Omega \right)$ may not be well-defined.
|
| 166 |
+
|
| 167 |
+
To forget $\Omega$ , we let ${p}_{\Omega }$ in (2) be a mixture of ${\left. {p}_{\text{ data }}\right| }_{\Omega }$ and ${\left. {p}_{G}\right| }_{\Omega }$ . This corresponds to a simplified version of the improper active learning algorithm introduced by Hanneke et al. [2018] with our Alg. 1 as their optimization oracle. The idea is to apply Alg. 1 for $R$ rounds. After each round, we query the validity of $T$ newly generated samples and use the invalid ones to form a data-based forgetting set ${\Omega }^{\prime }$ . In contrast to the data-based approach, this active algorithm focuses on invalid samples that are more likely to be generated, and therefore efficiently penalizes generation of invalid samples. The proposed method is summarized in Alg. 2.
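The outer loop of Alg. 2 can be sketched as below. The validity rule, toy generator, and the "spread" shrinkage standing in for an actual call to Alg. 1 are all hypothetical choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def validity(x):
    # hypothetical validity checker v: samples with |x| >= 2 are invalid (v = 0)
    return (np.abs(x) < 2.0).astype(int)

def generate(n, spread):
    # stand-in generator G_i; its spread shrinks as "forgetting" progresses
    return rng.normal(0.0, spread, size=n)

R, T = 3, 1000
omega_prime = []        # accumulated data-based forgetting set Omega'
spread = 1.5
for i in range(R):
    x_gen = generate(T, spread)            # draw T samples from G_i
    invalid = x_gen[validity(x_gen) == 0]  # query v on each generated sample
    omega_prime.extend(invalid.tolist())   # grow Omega' with the invalid ones
    # here Alg. 1 would run with p_Omega = U(Omega'); we only mimic its
    # effect by shrinking the generator's spread
    spread *= 0.5
queries = T * R  # plus |X| initial queries over the train set
```

The loop makes the query budget $\left| X\right| + T \times R$ explicit: each round spends $T$ validity queries on freshly generated samples, concentrating $\Omega^{\prime}$ on regions the current generator still visits.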
|
| 168 |
+
|
| 169 |
+
The total number of queries to the validity function $\mathbf{v}$ is $\left| X\right| + T \times R$ . In case $\mathbf{v}$ is expensive to run, we would like to achieve better data forgetting within a limited number of queries. From the data-driven point of view, we hope to collect as many invalid samples as possible. If we assume fewer invalid samples are generated after each iteration, this is achieved by setting $R = 1$ and maximizing $T$ . However, this may not be the case in practice. We hypothesize that some samples are easier to forget than others. By setting $R > 1$ , we expect an increasing fraction of invalid generated samples to be hard to forget after each iteration. Focusing on these hard samples can potentially help the generator forget them. Since it is hard to analyze neural networks directly, we leave a rigorous study to future work. In Appendix C, we study a much-simplified dynamical system corresponding to Alg. 2, where we show the invalidity (the mass of ${p}_{G}$ on $\Omega$ ) converges to zero, and provide optimal $T$ and $R$ values.
|
| 170 |
+
|
| 171 |
+
§ 3.3 CLASSIFIER-BASED FORGETTING SET
|
| 172 |
+
|
| 173 |
+
We would like the model to forget samples with a certain (potentially undesirable) property. Let $\mathbf{f} : {\mathbb{R}}^{d} \rightarrow \left\lbrack {0,1}\right\rbrack$ be a soft binary classifier on the property (0 means having the property and 1 means not having it), and $\tau \in \left( {0,1}\right)$ be a threshold. The classifier-based forgetting set $\Omega$ is then defined as $\{ x : \mathbf{f}\left( x\right) < \tau \}$ . For example, the property can be being offensive in language generation, containing no speech in speech synthesis, or visual inconsistency in image generation. We consider $\mathbf{f}$ to be a trained machine learning model that is fully accessible and differentiable.
|
| 174 |
+
|
| 175 |
+
To forget $\Omega$ , we let ${p}_{\Omega }$ be a mixture of ${\left. {p}_{\text{ data }}\right| }_{\Omega }$ and ${\left. {p}_{G}\right| }_{\Omega }$ , similar to the validity-based approach. We use $\mathbf{f}$ to guide the discriminator and make it able to easily detect samples from $\Omega$ . Let $\operatorname{guide}\left( {D,\mathbf{f}}\right)$ be a guided discriminator that assigns small values to $x$ when $\mathbf{f}\left( x\right) < \tau$ or $D\left( x\right)$ is small (i.e. $x \sim {p}_{\text{ fake }}$ ), and large values to $x$ when $\mathbf{f}\left( x\right) > \tau$ and $D\left( x\right)$ is large (i.e. $x \sim {\left. {p}_{\text{ data }}\right| }_{\bar{\Omega }}$ ). Instead of optimizing $L\left( {G,D}\right)$ in (3), we optimize $L\left( {G,\operatorname{guide}\left( {D,\mathbf{f}}\right) }\right)$ . This will effectively update $G$ by preventing it from generating samples in $\Omega$ . According to Theorem 1, the optimal discriminator is the solution to
|
| 176 |
+
|
| 177 |
+
$$
|
| 178 |
+
\operatorname{guide}\left( {{D}^{ * },\mathbf{f}}\right) = \frac{{\left. {\alpha }_{ + }{p}_{\text{ data }}\right| }_{\bar{\Omega }} + {\alpha }_{ - }\left( {\lambda {p}_{G} + \left( {1 - \lambda }\right) {p}_{\Omega }}\right) }{{\left. {p}_{\text{ data }}\right| }_{\bar{\Omega }} + \lambda {p}_{G} + \left( {1 - \lambda }\right) {p}_{\Omega }}. \tag{5}
|
| 179 |
+
$$
|
| 180 |
+
|
| 181 |
+
Therefore, the design of the guide function must make (5) feasible. In this paper, we let
|
| 182 |
+
|
| 183 |
+
$$
|
| 184 |
+
\operatorname{guide}\left( {D,\mathbf{f}}\right) \left( x\right) = \left\{ \begin{matrix} D\left( x\right) & \text{ if }\mathbf{f}\left( x\right) \geq \tau \\ {\alpha }_{ - } + \left( {D\left( x\right) - {\alpha }_{ - }}\right) \mathbf{f}\left( x\right) & \text{ otherwise } \end{matrix}\right. \tag{6}
|
| 185 |
+
$$
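The piecewise rule (6) translates directly into code. `D` and `f` here are arbitrary callables standing in for the paper's trained networks:

```python
def guide(D, f, tau=0.5, alpha_neg=0.1):
    """Guided discriminator from (6): pass D through when f(x) >= tau,
    otherwise interpolate the output toward the negative target alpha_-."""
    def guided(x):
        if f(x) >= tau:
            return D(x)                               # unchanged above threshold
        return alpha_neg + (D(x) - alpha_neg) * f(x)  # pulled toward alpha_-
    return guided
```

As `f(x)` approaches 0 the guided output collapses to $\alpha_-$, so samples the classifier places deep inside $\Omega$ are treated as fake regardless of what $D$ says.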
|
| 186 |
+
|
| 187 |
+
The feasibility of (5) is discussed in Appendix D. The proposed method is summarized in Alg. 3. The classifier-based $\Omega$ generalizes the validity-based $\Omega$ . First, any validity-based $\Omega$ can be represented by a classifier-based $\Omega$ if we let $\mathbf{f} = \mathbf{v}$ and $\tau = \frac{1}{2}$ . Next, we note there is a trivial way to deal with classifier-based $\Omega$ via the validity-based approach - by setting $\mathbf{v}\left( x\right) = 1\{ \mathbf{f}\left( x\right) < \tau \}$ . However, potentially useful information such as values and gradients of $\mathbf{f}$ are lost, and we will evaluate this effect in experiments. In addition, the classifier-based approach does not maintain the potentially large set of invalid generated samples, as this step is automatically done in the guide function.
|
| 188 |
+
|
| 189 |
+
§ 3.4 GENERALIZATION TO MULTIPLE FORGETTING SETS
|
| 190 |
+
|
| 191 |
+
Let ${\left\{ {\Omega }_{k}\right\} }_{k = 1}^{K}$ be disjoint sets in ${\mathbb{R}}^{d}$ , and suppose we would like the model to forget $\Omega = \mathop{\bigcup }\limits_{{k = 1}}^{K}{\Omega }_{k}$ . In the data-based setting, we let ${p}_{\Omega } = \mathcal{U}\left( \Omega \right) = \mathcal{U}\left( {\mathop{\bigcup }\limits_{{k = 1}}^{K}{\Omega }_{k}}\right)$ . In the validity-based setting, each ${\Omega }_{k}$ is associated with a validity function ${\mathbf{v}}_{k}$ , and we let the overall validity function be $\mathbf{v}\left( x\right) = \mathop{\min }\limits_{k}{\mathbf{v}}_{k}\left( x\right)$ . In the classifier-based setting, each ${\Omega }_{k}$ is associated with a classifier ${\mathbf{f}}_{k}$ , and similarly we let the overall classifier be $\mathbf{f}\left( x\right) = \mathop{\min }\limits_{k}{\mathbf{f}}_{k}\left( x\right)$ .
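Combining multiple forgetting sets via the min rule is a one-liner in each case; the checkers and classifiers below are illustrative stand-ins:

```python
def combine_validity(checkers):
    # overall v(x) = min_k v_k(x): x is invalid if any single v_k flags it
    return lambda x: min(v(x) for v in checkers)

def combine_classifiers(classifiers):
    # overall f(x) = min_k f_k(x): x falls in Omega if any f_k(x) < tau
    return lambda x: min(f(x) for f in classifiers)
```

Taking the minimum means the combined $\Omega$ is the union of the individual sets: a sample is forgotten as soon as any one description marks it.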
|
| 192 |
+
|
| 193 |
+
§ 4 EXPERIMENTS
|
| 194 |
+
|
| 195 |
+
In this section, we aim to answer the following questions.
|
| 196 |
+
|
| 197 |
+
* How well can the algorithms in Section 3 forget samples in practice?
|
| 198 |
+
|
| 199 |
+
* Can these algorithms be used to de-bias pre-trained models?
|
| 200 |
+
|
| 201 |
+
* Can these algorithms be used to understand training data?
|
| 202 |
+
|
| 203 |
+
We examine these questions by focusing on several real-world image datasets, including the MNIST $\left( {{28} \times {28}}\right)$ [LeCun et al., 2010], CIFAR $\left( {{32} \times {32}}\right)$ [Krizhevsky et al., 2009], CelebA $\left( {{64} \times {64}}\right)$ [Liu et al., 2015] and STL-10 (96 × 96) [Coates et al., 2011] datasets. We present the main experiments in Appendix E, and provide more detailed results afterwards. Specifically, in Appendices E.2 and F, we investigate how well these algorithms can forget samples with a specific label. In Appendices E.3 and G, we investigate how well these algorithms can de-bias pre-trained models and improve generation quality. In Appendices E.4 and H, we use these algorithms to understand training data through the lens of data forgetting.
|
| 204 |
+
|
| 205 |
+
§ 5 CONCLUSION
|
| 206 |
+
|
| 207 |
+
In this paper, we propose a systematic framework for forgetting data from pre-trained generative models. We provide three different algorithms for GANs that differ in how the samples to be forgotten are described. We provide theoretical results showing that data forgetting can be achieved. We then empirically investigate data forgetting on real-world image datasets, and show that our algorithms are capable of forgetting data while retaining high generation quality at a fraction of the cost of full re-training. One limitation of our paper is that the proposed framework only applies to unconditional generative models. It is an important future direction to define data forgetting and propose algorithms for conditional generative models, which are more widely used in downstream deep learning applications.
|
NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/VFGgG8XpFLu/Initial_manuscript_md/Initial_manuscript.md
ADDED
# Certified Defences Against Adversarial Patch Attacks on Semantic Segmentation

Anonymous Author(s)

Affiliation

Address

email

## Abstract
Adversarial patch attacks are an emerging security threat for real-world deep learning applications. We present DEMASKED SMOOTHING, the first approach (to our knowledge) to certify the robustness of semantic segmentation models against this threat model. Previous work on certifiably defending against patch attacks has mostly focused on the image classification task and often requires changes to the model architecture and additional training, which is undesirable and computationally expensive. In DEMASKED SMOOTHING, any segmentation model can be applied without particular training, fine-tuning, or restriction of the architecture. Using different masking strategies, DEMASKED SMOOTHING can be applied both for certified detection and certified recovery. In extensive experiments we show that DEMASKED SMOOTHING can on average certify 63% of the pixel predictions for a 1% patch in the detection task and 46% against a 0.5% patch in the recovery task on the ADE20K dataset.
## 1 Introduction
Physically realizable adversarial attacks are a threat for safety-critical (semi-)autonomous systems such as self-driving cars or robots. Adversarial patches [1, 2] are the most prominent example of such an attack. Their realizability has been demonstrated repeatedly, for instance by Lee and Kolter [3]: an attacker places a printed version of an adversarial patch in the physical world to fool a deep learning system. While empirical defenses [4-7] may offer robustness against known attacks, they do not provide any guarantees against unknown future attacks [8]. Thus, certified defenses for the patch threat model, which guarantee robustness against all possible attacks within the given threat model, are crucial for safety-critical applications.
Research on certifiable defenses against adversarial patches can be broadly categorized into certified recovery and certified detection. Certified recovery [8-16] has the objective of making a correct prediction on an input even in the presence of an adversarial patch. In contrast, certified detection [17-20] provides a weaker guarantee by only aiming at detecting inputs containing adversarial patches. While certified recovery is more desirable in principle, it typically comes at a high cost of reduced performance on clean data. In practice, certified detection might be preferable because it allows maintaining high clean performance. Most existing certifiable defenses against patches focus on image classification; DetectorGuard [21] and ObjectSeeker [22] certifiably defend against patch hiding attacks on object detectors. Moreover, existing defences are not easily applicable to arbitrary downstream models, because they assume either that the downstream model is trained explicitly to be certifiably robust [9, 12], or that the model has a certain network architecture such as BagNet [10, 12, 11] or a vision transformer [15, 20]. PatchCleanser [14] can be combined with arbitrary downstream models but is restricted to image classification. Adversarial patch attacks have also been proposed for the image segmentation problem [23], mostly for attacking CNN-based models that use a localized receptive field [24]. However, self-attention based vision transformers [25] have recently achieved a new state of the art in the image segmentation task [26, 27]. Their output may become more vulnerable to adversarial patches if the patches manage to manipulate the global self-attention [28]. We demonstrate how significant parts of the segmentation output may be affected by a small patch for the Swin transformer [26] in Figure 1a. Full details on the attack and on defending against it with our method are available in Appendix E. We point out that preventive certified defences are important because newly developed attacks can immediately be used to compromise safety-critical applications unless they are properly defended.


Figure 1: (a) A simple patch attack on the Swin transformer [26] manages to switch the prediction for a big part of the image. (b) Masking the patch. (c) A sketch of DEMASKED SMOOTHING for certified image segmentation. First, we generate a set of masked versions of the image such that each possible patch can only affect a certain number of masked images. Then we use image inpainting to partially recover the information lost during masking and apply an arbitrary segmentation method. The output is obtained by aggregating the segmentations pixelwise.
In this work, we propose the novel framework DEMASKED SMOOTHING (Figure 1c) to obtain the first (to our knowledge) certified defences against patch attacks on semantic segmentation models. Similarly to previous work [9], we mask different parts of the input (Figure 1b) and provide guarantees with respect to every possible patch that is not larger than a certain pre-defined size. While prior work required the classification model to deal with such masked inputs, we leverage recent progress in image inpainting [29] to reconstruct the input before passing it to the downstream model. This decoupling of image demasking from the segmentation task allows us to support arbitrary downstream models. Moreover, we can leverage state-of-the-art methods for image inpainting. We also propose different masking schemes tailored to the segmentation task that provide dense inputs, allowing the demasking model to understand the scene while still satisfying the guarantees with respect to the adversarial patch. We summarize our contributions as follows:
- We propose DEMASKED SMOOTHING, which is, to the best of our knowledge, the first certified-recovery and certified-detection defence against adversarial patch attacks on semantic segmentation models (Section 4).
- DEMASKED SMOOTHING can perform certified detection and recovery with any off-the-shelf segmentation model without requiring fine-tuning or any other adaptation.
- We implement DEMASKED SMOOTHING and evaluate it for different certification objectives and masking schemes (Section 5). We can certify 63% of all pixels in certified detection for a 1% patch and 46% in certified recovery for a 0.5% patch with the BEiT-B [30] segmentation model on the ADE20K [31] dataset.
## 2 Related Work
Certified recovery. The first certified recovery defence against patches was proposed by Chiang et al. [8] for classification models. De-Randomized Smoothing (DRS) [9] significantly improved certified accuracy. Models with small receptive fields such as BagNets [32] were adopted for this task, either by combining them with a fixed postprocessing [10, 11] or by training them end-to-end for certified recovery [12]. DRS was also applied [15] to Vision Transformers (ViTs) [25]. In contrast to these works, our DEMASKED SMOOTHING can be applied to models with arbitrary architecture. PatchCleanser [14] has this property as well, but it is limited to image classification. Certified recovery against patches has also been extended to object detection to defend against patch hiding attacks [18, 22]. Randomized smoothing [33] has been applied to certify semantic segmentation models against $\ell_2$-norm bounded adversarial attacks [34]. However, to the best of our knowledge, no certified defence against patch attacks for semantic segmentation has been proposed so far.
Certified detection. In this alternative to certified recovery, an adversarial patch is allowed to change the model prediction. However, if it succeeds in doing so, the attack is certifiably detected. Minority Reports [17] was the first certified detection method against patches. PatchGuard++ [18] significantly improved the inference time. ScaleCert [19] uses "superficial important neurons" to detect an attack. Lastly, PatchVeto [20] implements masking by removing certain input patches of the ViT. In this work, we propose a novel method for certified detection in semantic segmentation.
Image reconstruction. The problem of learning to reconstruct the full image from masked inputs was pioneered by Vincent et al. [35]. It has recently attracted attention as a proxy task for self-supervised pre-training, especially for ViTs [30, 36]. Recent approaches to this problem use Fourier convolutions [37] and ViTs [29]. SPG-Net [38] trains a subnetwork to reconstruct the full semantic segmentation directly from the masked input as part of an image inpainting pipeline. In this work, we use the state-of-the-art ZITS [29] inpainting method.
## 3 Problem Setup

Semantic segmentation. In this work, we focus on the semantic segmentation task. Let $\mathcal{X}$ be a set of rectangular images and let $x \in \mathcal{X}$ be an image with height $H$, width $W$ and number of channels $C$. Let $\mathcal{Y}$ be a finite label set. The goal is to find the segmentation map $s \in \mathcal{Y}^{H \times W}$ for $x$. For each pixel $x_{i,j}$, the corresponding label $s_{i,j}$ denotes the class of the object to which $x_{i,j}$ belongs. We denote by $\mathbb{S}$ the set of segmentation maps and by $f : \mathcal{X} \rightarrow \mathbb{S}$ a segmentation model.
Threat model. We consider an untargeted adversarial patch attack on a segmentation model. Consider an image $x \in [0,1]^{H \times W \times C}$ and its ground truth segmentation map $s$. Assume that the attacker can modify an arbitrary rectangular region of the image $x$ of size $H' \times W'$. We refer to this modification as a patch. Let $l \in \{0,1\}^{H \times W}$ be a binary mask that defines the patch location in the image, in which ones denote the pixels belonging to the patch. Let $\mathcal{L}$ be the set of all possible patch locations for a given image $x$, and let $p \in [0,1]^{H \times W \times C}$ be the modification itself. We define an operator $A(x, p, l) = (1 - l) \odot x + l \odot p$, where $\odot$ is the element-wise product. The operator $A$ applies the $H' \times W'$ subregion of $p$ defined by the binary mask $l$ to the image $x$ while keeping the rest of the image unchanged. We denote by $\mathcal{P} := [0,1]^{H \times W \times C} \times \mathcal{L}$ the set of all possible patch configurations $(p, l)$ that define an $H' \times W'$ patch. Let $s \in \mathbb{S}$ be the ground truth segmentation for $x$ and $Q(f(x), s)$ be some quality metric. The attacker's goal is to find $(p^\star, l^\star) = \arg\min_{(p, l) \in \mathcal{P}} Q(f(A(x, p, l)), s)$. In this paper, we propose certified defences against any possible attack from $\mathcal{P}$, including $(p^\star, l^\star)$. We consider two robustness objectives.
Certified recovery. For a pixel $x_{i,j}$, our goal is to verify that the following statement holds:

$$
\forall (p, l) \in \mathcal{P} : f(A(x, p, l))_{i,j} = f(x)_{i,j} \tag{1}
$$
Certified detection. We define a verification function $v : \mathcal{X} \rightarrow \{0,1\}^{H \times W}$. If $v(x)_{i,j} = 1$, then an adversarial patch attack on $x_{i,j}$ can be detected by applying $v$ to the attacked image $x' = A(x, p, l)$:

$$
v(x)_{i,j} = 1 \Rightarrow \left[ \forall (p, l) \in \mathcal{P} : v(A(x, p, l))_{i,j} = 1 \rightarrow f(A(x, p, l))_{i,j} = f(x)_{i,j} \right] \tag{2}
$$
Here $v(x')_{i,j} = 0$ means an alert is raised on pixel $x'_{i,j}$. However, if $x'$ is not an adversarial example, this is a false alert. The fraction of pixels for which we raise a false alert is called the false alert ratio (FAR). A secondary objective is to keep the FAR as small as possible.
Depending on the objective, our goal is to certify condition (1) or (2) for each pixel $x_{i,j}$. This provides an upper bound on an attacker's effectiveness under any adversarial patch attack from $\mathcal{P}$.
## 4 Demasked Smoothing

DEMASKED SMOOTHING (Figure 1c) consists of several steps. First, we apply a predefined set of masks with specific properties to the input image to obtain a set of masked images. Then we reconstruct the masked regions of each image based on the available information with an inpainting model $g$. After that, we apply a segmentation model $f$ to the demasked results. Finally, we aggregate the segmentation outputs and draw a conclusion for the original image with respect to statement (1) or (2). See Algorithm 1 in Appendix B.
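The mask-inpaint-segment loop described above can be sketched as follows. Here `inpaint` and `segment` are stand-in callables for the inpainting model $g$ and the segmentation model $f$; this is an illustrative sketch, not the authors' implementation:

```python
import numpy as np

def demasked_smoothing(x, masks, inpaint, segment):
    """Sketch of the DEMASKED SMOOTHING pipeline.

    x       : (H, W, C) input image
    masks   : list of (H, W) binary masks (1 = visible, 0 = masked out)
    inpaint : callable filling masked regions, returns an (H, W, C) image
    segment : callable returning an (H, W) integer segmentation map
    """
    segmentations = []
    for m in masks:
        x_masked = x * m[..., None]        # numeric stand-in for the "*" masking symbol
        x_demasked = inpaint(x_masked, m)  # reconstruct the hidden regions
        segmentations.append(segment(x_demasked))
    return np.stack(segmentations)         # (K, H, W); aggregated downstream
```

The stacked `(K, H, W)` array is then aggregated pixelwise, by majority vote for certified recovery or by an all-agree check for certified detection.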
### 4.1 Input masking

Motivation. As in previous work (Section 2), we apply masking patterns to the input image and use predictions on the masked images to aggregate a robust result. If an adversarial patch is completely masked, it has no effect on further processing. However, in semantic segmentation we predict not a single whole-image label as in the classification task, but a separate label for each pixel. Thus, making a prediction on a masked image must still allow us to predict the labels of the masked pixels.
Preliminaries. Consider an image $x \in [0,1]^{H \times W \times C}$. We define "$*$" to be a special masking symbol that does not correspond to any pixel value and has the property $\forall z \in \mathbb{R} : z \times * = *$. Note that $*$ needs to be different from 0, since 0 is a valid pixel value in unmasked inputs. Let $m \in \{*, 1\}^{H \times W}$ be a mask. We call the element-wise product $x \odot m$ a masking of $x$. In a masking, a subset of pixels becomes $*$ and the rest remains unchanged. We consider patches of size at most $H' \times W'$.
Certified recovery. We break $m$ into an array $B$ of non-intersecting blocks, each having the same size $H' \times W'$ as the adversarial patch. We index the blocks as $B[q, r]$, $1 \leq q \leq \lceil H / H' \rceil$, $1 \leq r \leq \lceil W / W' \rceil$. We say that the block $B[q, r]$ is visible in a mask $m$ if $\forall (i, j) \in B[q, r] : m_{i,j} = 1$. Consider an array $M$ of $K$ masks. We define each mask $M[k]$ by the set of blocks that are visible in it. Each block is visible in exactly one mask and masked in the others. We say that a mask $m$ is affected by a patch $(p, l)$ if $A(x, p, l) \odot m \neq x \odot m$. We define $T(M) = \max_{(p, l) \in \mathcal{P}} |\{ m \in M \mid A(x, p, l) \odot m \neq x \odot m \}|$. That is, $T(M)$ is the largest number of masks affected by some patch. If $M$ is clear from the context, we refer to the value $T(M)$ as $T$ for simplicity. We define column masking $M$, for which $T = 2$: we assign every $k$-th block column to be visible in the mask $M[k]$ (Figure 2b). Any $(p, l) \in \mathcal{P}$ can intersect at most two adjacent columns, since $(p, l)$ has the same width as a column; thus, it can affect at most two masks (Figure 2b). A similar scheme can be proposed for the rows. Due to the block size, the patch $(p, l)$ cannot intersect more than four blocks at once. We define a mask set that we call 3-mask such that for any four adjacent blocks, two are visible in the same mask (Figure 2c). Hence, a patch can affect no more than 3 masks of a 3-mask, i.e. $T = 3$. To achieve $T = 4$, any assignment of visible blocks to the masks works. We consider a 4-mask that allows uniform coverage of the visible blocks in the image (Figure 2d). See details in Appendix B.
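As a hypothetical illustration of the column-masking construction and of computing $T(M)$ (not the paper's code), the following builds full-height column masks over block columns of width $W'$ and exhaustively checks how many masks an $H' \times W'$ patch can affect:

```python
import numpy as np

def column_masks(H, W, pw, K):
    """Build K column masks: block column r is visible (1) only in mask r % K."""
    n_cols = -(-W // pw)  # ceil(W / pw)
    masks = np.zeros((K, H, W), dtype=int)
    for r in range(n_cols):
        masks[r % K, :, r * pw:(r + 1) * pw] = 1
    return masks

def max_affected(masks, ph, pw):
    """T(M): max number of masks whose visible area a ph x pw patch can touch."""
    K, H, W = masks.shape
    worst = 0
    for i in range(H - ph + 1):
        for j in range(W - pw + 1):
            touched = sum(masks[k, i:i + ph, j:j + pw].any() for k in range(K))
            worst = max(worst, touched)
    return worst
```

For example, with an 8x8 image, block width 2 and $K = 4$ masks, every pixel is visible in exactly one mask, and a brute-force check confirms $T = 2$ for a 4x2 patch, matching the argument above.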
Certified detection. We define $M_d$ to be a set of masks for certified detection (we use the subscript $d$ for distinction). $M_d$ should have the property $\forall (p, l) \in \mathcal{P}\ \exists m \in M_d : A(x, p, l) \odot m = x \odot m$, i.e. for every patch there exists at least one mask that is not affected by it. See details in Appendix B.
### 4.2 Certification

Certified recovery. For the threat model $\mathcal{P}$, consider a set $M$ of $K$ masks. We define a function $h : \mathcal{X} \rightarrow \mathbb{S}$ that assigns a class to the pixel $x_{i,j}$ via majority voting over the class predictions of each reconstructed segmentation in $S$: the class predicted for the pixel by the largest number of segmentations is assigned. We break ties by assigning the class with the smaller index.
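The pixelwise majority vote with ties broken toward the smaller class index can be sketched as follows (a minimal illustration; stacking the $K$ segmentation maps into one array is an assumed input format):

```python
import numpy as np

def majority_vote(segs, n_classes):
    """Pixelwise majority vote over K segmentation maps.

    segs : (K, H, W) integer array of class predictions.
    Ties are broken in favour of the smaller class index, because
    np.argmax returns the first maximal entry.
    """
    K, H, W = segs.shape
    votes = np.zeros((n_classes, H, W), dtype=int)
    for c in range(n_classes):
        votes[c] = (segs == c).sum(axis=0)
    return votes.argmax(axis=0)  # (H, W) aggregated segmentation

segs = np.array([[[0, 1]], [[0, 2]], [[2, 1]]])  # K=3, H=1, W=2
out = majority_vote(segs, n_classes=3)
```

Relying on `argmax` returning the first maximum implements exactly the smaller-index tie-breaking rule stated above.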


Figure 2: Certified recovery: column mask (b), 3-mask (c), 4-mask (d); certified detection (e, f).


Figure 3: Reconstructing the masked images with ZITS [29].
Theorem 1. If the number of masks $K$ satisfies $K \geq 2T(M) + 1$ and for a pixel $x_{i,j}$ we have

$$
\forall S[k] \in S : S[k]_{i,j} = h(x)_{i,j}
$$

(i.e. all the votes agree), then $\forall (p, l) \in \mathcal{P} : h(A(x, p, l))_{i,j} = h(x)_{i,j}$.
Certified detection. Consider $M_d = \{ M_d[k] \}_{k=1}^{K}$. For a set of demasked segmentations $S$, we define the verification map $v(x)_{i,j} := [ f(x)_{i,j} = S[1]_{i,j} = \ldots = S[K]_{i,j} ]$, i.e. the original segmentation is equal to all the other segmentations on masked-demasked inputs, including the one in which the potential patch was completely masked.
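A sketch of computing this verification map from the base segmentation and the $K$ masked-and-demasked segmentations (illustrative only, not the authors' code):

```python
import numpy as np

def verification_map(base_seg, demasked_segs):
    """v(x)_{i,j} = 1 iff f(x)_{i,j} equals every demasked segmentation.

    base_seg      : (H, W) segmentation of the unmasked image
    demasked_segs : (K, H, W) segmentations of masked-then-inpainted images
    """
    agree = (demasked_segs == base_seg[None]).all(axis=0)
    return agree.astype(int)  # 1 = certified, 0 = alert

base = np.array([[1, 2], [3, 4]])
segs = np.stack([base, base.copy()])
segs[1, 0, 0] = 9  # one disagreement at pixel (0, 0)
v = verification_map(base, segs)
```

A single disagreeing segmentation at a pixel is enough to set the alert there, which is what makes the detection guarantee of Theorem 2 hold.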
Theorem 2. Assume that $v(x)_{i,j} = 1$. Then

$$
\forall (p, l) \in \mathcal{P} : v(A(x, p, l))_{i,j} = 1 \Rightarrow f(A(x, p, l))_{i,j} = f(x)_{i,j}
$$
See the proofs of both theorems in Appendix A. For a given image $x$, the verification map $v(x)$ is complementary to the model's segmentation output $f(x)$, which stays unchanged. Thus, there is no drop in clean performance; however, the verification map $v$ may contain some false positive alerts in the clean setting.
## 5 Experiments
In this section, we evaluate DEMASKED SMOOTHING with the masking schemes proposed in Section 4, compare our approach with a direct application of De-Randomized Smoothing [9] to the segmentation task, and evaluate the performance on different datasets and models. Certified recovery and certified detection provide certificates of different strength (Section 4), which are not directly comparable; we evaluate them separately for different patch sizes.
Experimental Setup. We evaluate DEMASKED SMOOTHING on two challenging semantic segmentation datasets: ADE20K [31] (150 classes, 2000 validation images) and COCO-Stuff-10K [39] (171 classes, 1000 validation images). For demasking we use the ZITS [29] inpainting model with the checkpoint trained on Places2 [40] from the official paper repository.¹ As the segmentation model $f$ we use BEiT [30], Swin [26], PSPNet [24] and DeepLab v3 [41], with the model implementations provided in the mmsegmentation framework [42]. An illustration of the image reconstruction and the respective segmentation can be found in Figure 3.
Evaluation. We compute mIoU, mean recall (mR) and certified mean recall (cmR); see a detailed explanation of these metrics in Appendix C. In certified detection, we additionally consider the false alert ratio (FAR), which is the fraction of correctly classified pixels for which we return an alert on a clean image; a smaller FAR is preferable. Due to our threat model, certifying small objects in the scene can be difficult because they can be partially or completely covered by an adversarial patch. To provide an additional perspective on our methods, we also evaluate mR and cmR specifically for the "big" classes, which occupy on average more than 20% of the images in which they appear. These are, for example, road, building, train, and sky, which are important for understanding the scene. The full list is provided in Appendix I. We run the evaluation in parallel on 5 Nvidia Tesla V100-32GB GPUs. The certification of the whole ADE20K validation set with ZITS and BEiT-B takes around 1.2 hours for certified recovery and 2 hours for certified detection (due to the larger number of masks).
Discussion. In Table 1, we compare the different masking schemes proposed in Section 4.1. Evaluation of all models with all masking schemes is consistent with these results and can be found in Appendix F. We see that column masking achieves better results in both certification modes. We attribute the effectiveness of column masking to the fact that most images in the datasets have a clear horizon line, so a visible column provides a slice of the image that intersects most of the background objects in the scene.

---

¹ https://github.com/DQiaole/ZITS_inpainting

---

Table 1: Comparison of the different masking schemes proposed in Section 4.1. mIoU - mean intersection over union, mR - mean recall, cmR - certified mean recall, %C - mean percentage of certified and correct pixels in the image. For detection, we provide the clean mIoU, since the output is unaffected, and the mean false alert rate (FAR) (lower is better). See additional results in Appendix F.

<table><tr><td rowspan="2">dataset</td><td rowspan="2">segm</td><td rowspan="2">mode</td><td rowspan="2">mask</td><td rowspan="2">mIoU</td><td colspan="2">big</td><td colspan="2">all</td><td rowspan="2">%C</td><td rowspan="2">FAR $\downarrow$</td></tr><tr><td>mR</td><td>cmR</td><td>mR</td><td>cmR</td></tr><tr><td rowspan="6">ADE20K</td><td rowspan="6">BEiT-B</td><td>detection</td><td>column</td><td>53.08</td><td>70.92</td><td>57.33</td><td>64.45</td><td>32.55</td><td>63.55</td><td>20.04</td></tr><tr><td>1% patch</td><td>row</td><td/><td/><td>50.05</td><td/><td>26.65</td><td>58.34</td><td>25.24</td></tr><tr><td rowspan="4">recovery 0.5% patch</td><td>column</td><td>24.92</td><td>60.77</td><td>41.26</td><td>29.84</td><td>12.98</td><td>46.22</td><td rowspan="4">N/A</td></tr><tr><td>row</td><td>16.33</td><td>46.91</td><td>16.72</td><td>19.51</td><td>4.83</td><td>31.71</td></tr><tr><td>3-mask</td><td>19.90</td><td>56.90</td><td>26.51</td><td>23.86</td><td>7.54</td><td>38.64</td></tr><tr><td>4-mask</td><td>18.82</td><td>52.96</td><td>23.75</td><td>22.56</td><td>5.87</td><td>34.36</td></tr></table>

Table 2: DEMASKED SMOOTHING results with column masking for different models.

<table><tr><td rowspan="2">mode</td><td rowspan="2">dataset</td><td rowspan="2">segm</td><td rowspan="2">mIoU</td><td colspan="2">big</td><td colspan="2">all</td><td rowspan="2">%C</td><td rowspan="2">FAR $\downarrow$</td></tr><tr><td>mR</td><td>cmR</td><td>mR</td><td>cmR</td></tr><tr><td rowspan="5">detection 1 % patch</td><td rowspan="3">ADE20K</td><td>BEiT-B</td><td>53.08</td><td>70.92</td><td>57.33</td><td>64.45</td><td>32.55</td><td>63.55</td><td>20.04</td></tr><tr><td>PSPNet</td><td>44.39</td><td>61.83</td><td>50.02</td><td>54.74</td><td>26.37</td><td>60.57</td><td>20.08</td></tr><tr><td>Swin-B</td><td>48.13</td><td>68.51</td><td>55.45</td><td>59.13</td><td>29.06</td><td>61.44</td><td>20.31</td></tr><tr><td rowspan="2">COCO10K</td><td>PSPNet</td><td>37.76</td><td>71.71</td><td>56.86</td><td>49.65</td><td>26.80</td><td>47.09</td><td>21.43</td></tr><tr><td>DeepLab v3</td><td>37.81</td><td>72.52</td><td>56.54</td><td>49.98</td><td>26.86</td><td>46.55</td><td>21.89</td></tr><tr><td rowspan="5">recovery 0.5 % patch</td><td rowspan="3">ADE20K</td><td>BEiT-B</td><td>24.92</td><td>60.77</td><td>41.26</td><td>29.84</td><td>12.98</td><td>46.22</td><td rowspan="5">N/A</td></tr><tr><td>PSPNet</td><td>19.17</td><td>51.90</td><td>34.11</td><td>23.66</td><td>10.76</td><td>44.90</td></tr><tr><td>Swin-B</td><td>22.43</td><td>59.75</td><td>34.88</td><td>27.09</td><td>11.70</td><td>46.14</td></tr><tr><td rowspan="2">COCO10K</td><td>PSPNet</td><td>21.94</td><td>61.56</td><td>36.67</td><td>29.94</td><td>11.13</td><td>29.51</td></tr><tr><td>DeepLab v3</td><td>23.12</td><td>62.60</td><td>33.84</td><td>31.59</td><td>11.55</td><td>28.71</td></tr></table>
In Table 2, we evaluate our method with column masking for different models. In certified detection, we can certify more than 60% of the pixels with all models on ADE20K and more than 46% on COCO10K; the false alert ratio on correctly classified pixels is around 20%. In certified recovery, we certify more than 44% of the pixels on ADE20K and more than 28% on COCO10K. See the comparison with DRS [9] adapted for segmentation in Appendix A. We evaluate the performance of our method for different patch sizes in Appendix C. Ablations with respect to inpainting can be found in Appendix G. Illustrations of the DEMASKED SMOOTHING procedure are provided in Appendix D.
## 6 Conclusion
In this work, we propose DEMASKED SMOOTHING, the first (to our knowledge) certified defence framework against patch attacks on segmentation models. Due to its novel design based on masking schemes and image demasking, DEMASKED SMOOTHING is compatible with any segmentation model and can on average certify 63% of the pixel predictions for a 1% patch in the detection task and 46% against a 0.5% patch in the recovery task on the ADE20K dataset.
Ethical and Societal Impact. This work contributes to the field of certified defences against physically realizable adversarial attacks. The proposed approach allows certifying the robustness of safety-critical applications such as medical imaging or autonomous driving. The defence might be misused to improve the robustness of systems deployed for malicious purposes, such as (semi-)autonomous weaponry or unauthorized surveillance. This danger may be mitigated, e.g., by using a system of 5 sparsely distributed patches, which makes certifying the image more challenging. All activities in our organization are carbon neutral, so our experiments do not leave any carbon dioxide footprint.
## References
[1] Tom Brown, Dandelion Mane, Aurko Roy, Martin Abadi, and Justin Gilmer. Adversarial patch. In Advances in Neural Information Processing Systems (NeurIPS), 2017. URL https://arxiv.org/pdf/1712.09665.pdf. arXiv:1712.09665.

[2] Danny Karmon, Daniel Zoran, and Yoav Goldberg. LaVAN: Localized and visible adversarial noise. In International Conference on Machine Learning (ICML), pages 2507-2515, 2018. URL https://proceedings.mlr.press/v80/karmon18a.html.

[3] Mark Lee and J. Zico Kolter. On physical adversarial patches for object detection. In International Conference on Machine Learning (Workshop), 2019. URL http://arxiv.org/abs/1906.11897.

[4] Jamie Hayes. On visible adversarial perturbations & digital watermarking. In IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPR), 2018. URL http://openaccess.thecvf.com/content_cvpr_2018_workshops/w32/html/Hayes_On_Visible_Adversarial_CVPR_2018_paper.html.

[5] Muzammal Naseer, Salman Khan, and Fatih Porikli. Local gradients smoothing: Defense against localized adversarial attacks. In IEEE Winter Conference on Applications of Computer Vision (WACV), 2019. URL https://doi.org/10.1109/WACV.2019.00143.

[6] Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-CAM: Visual explanations from deep networks via gradient-based localization. International Journal of Computer Vision, October 2019. ISSN 1573-1405.

[7] Tong Wu, Liang Tong, and Yevgeniy Vorobeychik. Defending against physically realizable attacks on image classification. In International Conference on Learning Representations (ICLR), 2020. URL https://arxiv.org/abs/1909.09552.

[8] Ping-yeh Chiang, Renkun Ni, Ahmed Abdelkader, Chen Zhu, Chris Studor, and Tom Goldstein. Certified defenses for adversarial patches. In International Conference on Learning Representations (ICLR), 2020.

[9] Alexander Levine and Soheil Feizi. (De)Randomized smoothing for certifiable defense against patch attacks. In Advances in Neural Information Processing Systems (NeurIPS), volume 33, 2020.

[10] Zhanyuan Zhang, Benson Yuan, Michael McCoyd, and David Wagner. Clipped BagNet: Defending against sticker attacks with clipped bag-of-features. In 3rd Deep Learning and Security Workshop (DLS), 2020.

[11] Chong Xiang, Arjun Nitin Bhagoji, Vikash Sehwag, and Prateek Mittal. PatchGuard: A provably robust defense against adversarial patches via small receptive fields and masking. In 30th USENIX Security Symposium (USENIX Security), 2021.

[12] Jan Hendrik Metzen and Maksym Yatsura. Efficient certified defenses against patch attacks on image classifiers. In International Conference on Learning Representations (ICLR), 2021. URL https://openreview.net/forum?id=hr-3PMvDpil.

[13] Wan-Yi Lin, Fatemeh Sheikholeslami, Jinghao Shi, Leslie Rice, and J. Zico Kolter. Certified robustness against physically-realizable patch attack via randomized cropping, 2021. URL https://openreview.net/forum?id=vttv9ADGuWF.

[14] Chong Xiang, Saeed Mahloujifar, and Prateek Mittal. PatchCleanser: Certifiably robust defense against adversarial patches for any image classifier. In 31st USENIX Security Symposium (USENIX Security), 2022.

[15] Hadi Salman, Saachi Jain, Eric Wong, and Aleksander Madry. Certified patch robustness via smoothed vision transformers, 2021. arXiv:2110.07719.

[16] Zhaoyu Chen, Bo Li, Jianghe Xu, Shuang Wu, Shouhong Ding, and Wenqiang Zhang. Towards practical certifiable patch defense with vision transformer. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022.

[17] Michael McCoyd, Won Park, Steven Chen, Neil Shah, Ryan Roggenkemper, Minjune Hwang, Jason Xinyu Liu, and David Wagner. Minority reports defense: Defending against adversarial patches, 2020. arXiv:2004.13799.

[18] Chong Xiang and Prateek Mittal. PatchGuard++: Efficient provable attack detection against adversarial patches, 2021. arXiv:2104.12609.

[19] Husheng Han, Kaidi Xu, Xing Hu, Xiaobing Chen, Ling Liang, Zidong Du, Qi Guo, Yanzhi Wang, and Yunji Chen. ScaleCert: Scalable certified defense against adversarial patches with sparse superficial layers. In Advances in Neural Information Processing Systems (NeurIPS), 2021.

[20] Yuheng Huang and Yuanchun Li. Zero-shot certified defense against adversarial patches with vision transformers, 2021. arXiv:2111.10481.

[21] Chong Xiang and Prateek Mittal. DetectorGuard: Provably securing object detectors against localized patch hiding attacks. In ACM Conference on Computer and Communications Security (CCS), 2021.

[22] Chong Xiang, Alexander Valtchanov, Saeed Mahloujifar, and Prateek Mittal. ObjectSeeker: Certifiably robust object detection against patch hiding attacks via patch-agnostic masking, 2022. arXiv:2202.01811.

[23] Federico Nesti, Giulio Rossolini, Saasha Nair, Alessandro Biondi, and Giorgio C. Buttazzo. Evaluating the robustness of semantic segmentation for autonomous driving against real-world adversarial patch attacks. In IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pages 2826-2835, 2022.

[24] Hengshuang Zhao, Jianping Shi, Xiaojuan Qi, Xiaogang Wang, and Jiaya Jia. Pyramid scene parsing network. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2017.

[25] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations (ICLR), 2021. URL https://openreview.net/forum?id=YicbFdNTTy.

[26] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In International Conference on Computer Vision (ICCV), 2021.

[27] Walid Bousselham, Guillaume Thibault, Lucas Pagano, Archana Machireddy, Joe Gray, Young Hwan Chang, and Xubo Song. Efficient self-ensemble for semantic segmentation, 2021. URL https://arxiv.org/abs/2111.13280.

[28] Giulio Lovisotto, Nicole Finnie, Mauricio Munoz, Chaithanya Kumar Mummadi, and Jan Hendrik Metzen. Give me your attention: Dot-product attention considered harmful for adversarial patch robustness. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022.

[29] Qiaole Dong, Chenjie Cao, and Yanwei Fu. Incremental transformer structure enhanced image inpainting with masking positional encoding. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022.

[30] Hangbo Bao, Li Dong, Songhao Piao, and Furu Wei. BEiT: BERT pre-training of image transformers. In International Conference on Learning Representations (ICLR), 2022. URL https://openreview.net/forum?id=p-BhZSz59o4.

[31] Bolei Zhou, Hang Zhao, Xavier Puig, Sanja Fidler, Adela Barriuso, and Antonio Torralba. Scene parsing through ADE20K dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.

[32] Wieland Brendel and Matthias Bethge. Approximating CNNs with bag-of-local-features models works surprisingly well on ImageNet. In International Conference on Learning Representations (ICLR), 2019.

[33] Jeremy Cohen, Elan Rosenfeld, and Zico Kolter. Certified adversarial robustness via randomized smoothing. In Proceedings of Machine Learning Research, pages 1310-1320, 2019. URL http://proceedings.mlr.press/v97/cohen19c.html.

[34] Marc Fischer, Maximilian Baader, and Martin T. Vechev. Scalable certified segmentation via randomized smoothing. In International Conference on Machine Learning (ICML), 2021.

[35] Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, 11(110):3371-3408, 2010. URL http://jmlr.org/papers/v11/vincent10a.html.

[36] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners, 2021. arXiv:2111.06377.

[37] Roman Suvorov, Elizaveta Logacheva, Anton Mashikhin, Anastasia Remizova, Arsenii Ashukha, Aleksei Silvestrov, Naejin Kong, Harshith Goka, Kiwoong Park, and Victor S. Lempitsky. Resolution-robust large mask inpainting with Fourier convolutions. In IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pages 3172-3182, 2022.

[38] Yuhang Song, Chao Yang, Yeji Shen, Peng Wang, Qin Huang, and C.-C. Jay Kuo. SPG-Net: Segmentation prediction and guidance network for image inpainting. In British Machine Vision Conference (BMVC), 2018.

[39] Holger Caesar, Jasper Uijlings, and Vittorio Ferrari. COCO-Stuff: Thing and stuff classes in context. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.

[40] Bolei Zhou, Aditya Khosla, Àgata Lapedriza, Antonio Torralba, and Aude Oliva. Places: An image database for deep scene understanding. Journal of Vision, 17, 2016. doi: 10.1167/17.10.296.

[41] Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, and Hartwig Adam. Encoder-decoder with atrous separable convolution for semantic image segmentation. In European Conference on Computer Vision (ECCV), 2018.

[42] MMSegmentation Contributors. MMSegmentation: OpenMMLab semantic segmentation toolbox and benchmark. https://github.com/open-mmlab/mmsegmentation, 2020.

[43] Chu-Tak Li, Wan-Chi Siu, Zhi-Song Liu, Li-Wen Wang, and Daniel Pak-Kong Lun. DeepGIN: Deep generative inpainting network for extreme image inpainting, 2020.
## A Proofs (Section 4)
In this section, we provide the proofs for the theorems stated in Section 4.

Certified recovery. For the threat model $\mathcal{P}$ (Section 3), consider a set $M$ of $K$ masks. We define a function $h : \mathcal{X} \rightarrow \mathbb{S}$ that assigns a class to the pixel $x_{i,j}$ via majority voting over the class predictions of each reconstructed segmentation in $S$: the pixel is assigned the class predicted by the largest number of segmentations. We break ties by assigning the class with the smaller index.

Theorem 1. (Section 4.2) If the number of masks $K$ satisfies $K \geq 2T(M) + 1$ and for a pixel $x_{i,j}$ we have

$$
\forall S[k] \in S : S[k]_{i,j} = h(x)_{i,j},
$$

i.e. all the votes agree, then $\forall (p,l) \in \mathcal{P} : h(A(x,p,l))_{i,j} = h(x)_{i,j}$.

Proof. Assume that

$$
\exists (p,l) \in \mathcal{P} : h(A(x,p,l))_{i,j} \neq h(x)_{i,j}.
$$

Let us denote $x' := A(x,p,l)$ and let $S'$ be the segmentation array for $x'$. Then the class $h(x)_{i,j}$ did not get the majority vote for $S'$. However, by the definition of $T(M)$ we know that $(p,l)$ can affect at most $T(M)$ segmentations. Since all $K$ segmentations of $S$ have voted for $h(x)_{i,j}$, at least $K - T(M) > \frac{K}{2}$ of them are still voting for $h(x)_{i,j}$ in $S'$, meaning that $h(x)_{i,j}$ still has the majority vote in $S'$. Therefore $h(x')_{i,j} = h(x)_{i,j}$, which contradicts the assumption.

Certified detection. Consider $M_d = \{M_d[k]\}_{k=1}^{K}$. For a set of demasked segmentations $S$ we define the verification map $v(x)_{i,j} := [f(x)_{i,j} = S[1]_{i,j} = \ldots = S[K]_{i,j}]$, i.e. the original segmentation coincides with all the other segmentations, including the one in which the potential patch was completely masked.

Theorem 2. (Section 4.2) Assume that $v(x)_{i,j} = 1$. Then

$$
\forall (p,l) \in \mathcal{P} : v(A(x,p,l))_{i,j} = 1 \Rightarrow f(A(x,p,l))_{i,j} = f(x)_{i,j}.
$$

Proof. Assume that $\exists (p,l) \in \mathcal{P}$ s.t. $v(A(x,p,l))_{i,j} = 1$ and $f(A(x,p,l))_{i,j} \neq f(x)_{i,j}$. Let us denote $x' := A(x,p,l)$ and let $S'$ be the segmentation set for $x'$. By the definition of $M_d$, $\exists M_d[k] \in M_d$ s.t. $M_d[k]$ masks the patch $(p,l)$. Hence,

$$
g(x \odot M_d[k]) = g(x' \odot M_d[k]),
$$

$$
S[k] = f(g(x \odot M_d[k])) = f(g(x' \odot M_d[k])) = S'[k].
$$

Since $v(x)_{i,j} = 1$, we have $f(x)_{i,j} = S[k]_{i,j}$. Since $v(x')_{i,j} = 1$, we have $f(x')_{i,j} = S'[k]_{i,j}$. Thus, $f(x')_{i,j} = f(x)_{i,j}$, which is a contradiction.
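The certification conditions of Theorems 1 and 2 reduce to simple agreement checks over the stack of demasked segmentations. Below is a minimal NumPy sketch; the array conventions and function names are our own, not from the paper's code.

```python
import numpy as np

def recovery_certified(S: np.ndarray, T: int) -> np.ndarray:
    """Theorem 1 check: with K >= 2T + 1 masks, pixels where all K demasked
    segmentations agree keep their majority-vote class under any single patch.

    S: integer array of shape (K, H, W) with per-mask class predictions.
    Returns a boolean (H, W) map of certified pixels.
    """
    K = S.shape[0]
    assert K >= 2 * T + 1, "need K >= 2T + 1 masks for the guarantee"
    return (S == S[0]).all(axis=0)

def detection_verified(f_x: np.ndarray, S: np.ndarray) -> np.ndarray:
    """Theorem 2 verification map: v(x)_{i,j} = [f(x)_{i,j} = S[1]_{i,j} = ... = S[K]_{i,j}].

    f_x: (H, W) clean-image segmentation; S: (K, H, W) demasked segmentations.
    """
    return (S == f_x).all(axis=0)
```

For example, with $K = 7$ and $T = 3$, a pixel where one of the seven maps disagrees is not certified, while a pixel with unanimous votes is.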
## B Detailed description of masking strategies
In this section, we provide additional details for constructing the certified recovery masks proposed in Section 4.1.

Certified recovery. We define mask sets $M$ that satisfy different values of $T$. We divide the image $x$ into a set of non-intersecting blocks $B$ of the same size as an adversarial patch, $H' \times W'$ (see Figure 5), $1 \leq q \leq \lceil H/H' \rceil$, $1 \leq r \leq \lceil W/W' \rceil$. In each mask, each of these blocks is either masked or not masked (i.e. visible). Moreover, for each block there exists only one mask in which it is visible. For a set $M$ of $K$ masks we define the mapping $\mu_M : B \rightarrow \{1, \ldots, K\}$. If $\mu(B[q,r]) = k$, then $B[q,r]$ is not masked in $M[k]$. Therefore, each mask $M[k]$ is defined by a $B_k \subset B$ s.t. for $b \in B_k$, $\mu(b) = k$.

We define a set $M$ that we call 3-mask, for which $T(M) = 3$. We assign the blocks in each row to the masks as follows: $\mu(B[1,1]) = 1$; $\mu(B[1,2]) = \mu(B[1,3]) = 2$; $\mu(B[1,4]) = \mu(B[1,5]) = 3$, and so on until we reach the end of the row. If we finish the first row with the value $k$, then we start the second row as follows: $\mu(B[2,1]) = \mu(B[2,2]) = k+1$; $\mu(B[2,3]) = \mu(B[2,4]) = k+2$; ... If we finish the second row on $n$, we start the third row similarly to the first: $\mu(B[3,1]) = n+1$; $\mu(B[3,2]) = \mu(B[3,3]) = n+2$; ... When we reach the number $K$, we start from 1 again (Figure 5d). Due to the block size, the patch cannot intersect more than four blocks at once. Our parity-alternating block sequence ensures that in any such intersection of four blocks, either the top two or the bottom two belong to the same masking, so at most three different maskings can be affected.

Figure 4: The masked columns of the first two adjacent masks (blue for the first one and red for the second one). If the patch is not completely masked by the first mask, it must be visible at column $W'' + 1$ (the masked part of the patch is dark-grey and the visible part is light-grey). But then the patch is completely masked by the second mask.

We define a set $M$ that we call 4-mask, for which $T(M) = 4$. Due to our block size, any assignment of masks will work because the patch cannot intersect more than four blocks. We consider the one that allows a uniform distribution of the unmasked blocks (Figure 5g). We point out that for the described methods, each masking keeps approximately $1/K$ of the pixels visible, and the unmasked regions are uniformly distributed in the image. This means that for any masked pixel there exists an unmasked region located close to it. This is the core difference between our masks and the ones proposed for certified classification, such as block or column smoothing [9]. It has been observed that image demasking is facilitated when the visible regions are uniformly spread in the masked image [36]. We present the full demasked smoothing procedure in Algorithm 1.

Certified detection. We define $M_d$ to be a set of masks for certified detection (we use the subscript $d$ for distinction). $M_d$ should have the property $\forall (p,l) \in \mathcal{P}\ \exists m \in M_d : A(x,p,l) \odot m = x \odot m$, i.e. for every patch there exists at least one mask not affected by this patch. For a patch of size $H' \times W'$ we consider $K = W - W' + 1$ masks such that the mask $M_d[k]$ masks a column of width $W'$ starting at the horizontal position $k$ in the image (Figure 2e). To obtain the guarantee for the same $\mathcal{P}$ with a smaller $K$, we consider a set of strided columns of width $W'' \geq W'$ and stride $W'' - W' + 1$ that also satisfy the condition.

Lemma 1. Consider an image of size $H \times W$. Let $H' \times W'$ be a fixed adversarial patch size. Let $M^d(K, \mathcal{L})$ be a set of masks where each mask covers an $H \times W''$ vertical column, $W'' \geq W'$. Let the stride between the columns in two adjacent masks be $W'' - W' + 1$. Then for any location $l \in \mathcal{L}$ of the patch, there exists a mask that covers it completely.

Proof. (Adapted from the proof of Lemma 4 in PatchCleanser [14].) Without loss of generality, consider the first two adjacent column masks. The first one covers the columns from 1 to $W''$. The second mask covers the columns from $1 + (W'' - W' + 1) = W'' - W' + 2$ to $(W'' - W' + 2) + (W'' - 1) = 2W'' - W' + 1$ (see Figure 4). Now consider an adversarial patch of size $H' \times W'$. Let us find the smallest possible start index of this patch so that it does not get covered by the first mask. For that, it should be visible at column $W'' + 1$ and, therefore, start at a column with index not smaller than $(W'' + 1) - W' + 1 = W'' - W' + 2$. However, this is the same column in which the second mask starts. Therefore, given that $W'' \geq W'$, the patch is completely masked by the second mask. For a patch which is only partially masked by the second mask from the left, we use an analogous argument to show that it is completely masked by the third mask, and so on.

A similar scheme can be proposed for the rows (Figure 2f). Alternatively, we could use a set of block masks of size $H' \times W'$. However, the number of such masks grows quadratically with the image resolution. Hence, in the experiments we focus on the column and the row masking schemes.

Figure 5: (a) examples of a mask for the column masks with $T = 2$ (b, c), 3-mask with $T = 3$ (d, e), and 4-mask with $T = 4$ (f, g), with the number of masks $K = 5, 7, 9$ respectively. The number of a block denotes the mask in which it is not masked (there is only one such mask for each block). For each mask set, we show one of the locations $l$ in which an adversarial patch affects $T$ different maskings.
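The parity-alternating block-to-mask assignment $\mu$ of the 3-mask scheme can be sketched as follows. This is an illustrative reconstruction under our own naming and 0-based indexing; the paper's implementation may differ in details.

```python
import numpy as np

def three_mask_assignment(n_rows: int, n_cols: int, K: int) -> np.ndarray:
    """Parity-alternating block-to-mask assignment mu for the 3-mask scheme.

    Returns an (n_rows, n_cols) array whose entry (q, r) is the index (0-based)
    of the single mask in which block B[q, r] is left visible.
    """
    mu = np.zeros((n_rows, n_cols), dtype=np.int64)
    k = 0  # running mask index; wraps modulo K
    for q in range(n_rows):
        r = 0
        # even rows (0, 2, ...) start with a single block, odd rows with a pair,
        # so vertical pair boundaries never line up in adjacent rows
        run = 1 if q % 2 == 0 else 2
        while r < n_cols:
            for _ in range(min(run, n_cols - r)):
                mu[q, r] = k % K
                r += 1
            k += 1   # mask indices continue across rows
            run = 2  # after the first run, blocks are grouped in pairs
    return mu
```

With this staggering, a patch touching a 2x2 group of blocks always hits two blocks sharing a mask index, so at most three maskings are affected.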
Algorithm 1 Demasked Smoothing

---

Input: image $x \in [0,1]^{H \times W \times C}$, patch size $(H', W')$, certification type CT (recovery or detection), mask type MT (column, row, 3-mask, 4-mask), inpainting model $g$, segmentation model $f$

Output: segmentation map $h \in \mathcal{Y}^{H \times W}$, certification (or verification) map $v \in \{0,1\}^{H \times W}$

1: $M \leftarrow \mathrm{CreateMaskArray}(H, W, H', W', \mathrm{CT}, \mathrm{MT})$ $\vartriangleright$ according to Section 4

2: for $k \leftarrow 1, \ldots, |M|$ do

3: $\quad S[k] \leftarrow f(g(x \odot M[k]))$ $\vartriangleright$ mask the input, inpaint the masked regions, and apply segmentation

4: end for

5: if CT = 'recovery' then $h \leftarrow \mathrm{MajorityVote}(S)$ $\vartriangleright$ vote over the classes predicted for each pixel

6: else $h \leftarrow f(x)$ $\vartriangleright$ in the detection case, output the clean segmentation

7: end if

8: $v \leftarrow \mathrm{AllEqual}(S, h)$ $\vartriangleright$ assign 1 to pixels where all $S[k]$ agree with $h$, and 0 otherwise

9: return $h, v$

---
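Algorithm 1 can be transcribed almost line for line. The sketch below uses our own function signatures; `f` and `g` stand in for the segmentation and inpainting models, and masks are (H, W) arrays with 1 = visible, 0 = masked.

```python
import numpy as np

def demasked_smoothing(x, masks, f, g, ct="recovery"):
    """Sketch of Algorithm 1 (Demasked Smoothing).

    x: (H, W, C) image in [0, 1]; masks: list of (H, W) binary arrays;
    g inpaints a masked image; f returns an (H, W) integer segmentation map.
    Returns the segmentation map h and the certification/verification map v.
    """
    # lines 2-4: mask the input, inpaint, and segment for each mask
    S = np.stack([f(g(x * m[..., None])) for m in masks])
    if ct == "recovery":
        # line 5: pixel-wise majority vote over the K demasked segmentations
        num_classes = int(S.max()) + 1
        votes = np.stack([(S == c).sum(axis=0) for c in range(num_classes)])
        h = votes.argmax(axis=0)  # argmax breaks ties toward the smaller class
    else:
        h = f(x)  # line 6: detection case, output the clean segmentation
    # line 8: v = 1 where all demasked maps agree with h, 0 otherwise
    v = (S == h).all(axis=0).astype(np.int64)
    return h, v
```

In practice `f` and `g` would be the pretrained segmentation and inpainting networks; here any callables with matching shapes can be plugged in for testing.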
## C Evaluation metrics
For both certified recovery and certified detection, we provide a standard segmentation output (without any abstention) and a corresponding certification map (Figure 3). In the case of certified detection, the segmentation output remains the same as for the original segmentation model; however, there may be false alerts in the certification map. For certified recovery, the output is obtained by a majority vote over the segmentations of demasked images (Section 4.2). We evaluate the mean intersection over union (mIoU) for these outputs. The certification map is obtained by assigning to each certified pixel the corresponding class from the segmentation output and assigning a special uncertified label to all non-certified pixels. For each image we evaluate the fraction of pixels which are certified and correct (i.e. coincide with the ground truth); %C is the mean of these fractions over all the images in the dataset. In semantic segmentation, the class frequencies are usually skewed; therefore, global pixel-wise accuracy alone is an insufficient metric.

Matching the certification map separately for each class $y \in \mathcal{Y}$ with the ground-truth segmentation for $y$ in the image $x$ allows us to compute a guaranteed lower bound $cTP_y(x)$ on the number of true positive pixel predictions $TP_y(x)$, i.e. those that were correctly classified into $y$. If a pixel was certified with a correct class, then this prediction cannot be changed by a patch (or, alternatively, the change will be detected by the verification function $v$ in certified detection). We consider recall $R_y(x) = \frac{TP_y(x)}{TP_y(x) + FN_y(x)}$, where $FN_y(x)$ is the number of false negative predictions for $y$ in $x$. $P_y(x) = TP_y(x) + FN_y(x)$ is the total area of $y$ in the ground truth and does not depend on our prediction. We can evaluate the certified recall $cR_y(x) = \frac{cTP_y(x)}{P_y(x)}$, a lower bound on the recall $R_y(x)$. The total recall and certified total recall of class $y$ in a dataset $D$ are $TR_y(D) = \frac{\sum_{x \in D} TP_y(x)}{\sum_{x \in D} P_y(x)}$ and $cTR_y(D) = \frac{\sum_{x \in D} cTP_y(x)}{\sum_{x \in D} P_y(x)}$ respectively. Then, we obtain the mean recall $mR(D) = \frac{1}{|\mathcal{Y}|}\sum_{y \in \mathcal{Y}} TR_y(D)$ and the certified mean recall $cmR(D) = \frac{1}{|\mathcal{Y}|}\sum_{y \in \mathcal{Y}} cTR_y(D)$.

Evaluating lower bounds on other popular metrics such as mean precision or mIoU in this way results in vacuous bounds, since they depend on an upper bound on the false positive (FP) predictions. For the pixels that are not certified we cannot guarantee that they will not be assigned to a certain class; therefore, a non-trivial upper bound on FP is not straightforward. We leave this direction for future work. In certified detection, we additionally consider the false alert ratio (FAR), which is the fraction of correctly classified pixels for which we return an alert on a clean image. A smaller FAR is preferable.
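The certified mean recall defined above can be computed as follows. This is a minimal NumPy sketch with our own function names; for simplicity it averages only over classes present in the ground truth.

```python
import numpy as np

def certified_mean_recall(preds, certs, gts, num_classes: int) -> float:
    """Certified mean recall cmR(D) over a dataset.

    preds: list of (H, W) predicted class maps;
    certs: list of (H, W) boolean maps of certified pixels;
    gts:   list of (H, W) ground-truth class maps.
    """
    cTP = np.zeros(num_classes)  # certified true positives per class, summed over D
    P = np.zeros(num_classes)    # ground-truth area P_y per class, summed over D
    for pred, cert, gt in zip(preds, certs, gts):
        for y in range(num_classes):
            gt_y = gt == y
            P[y] += gt_y.sum()
            # a pixel counts toward cTP_y if it is certified, predicted y,
            # and the ground truth is y
            cTP[y] += (gt_y & cert & (pred == y)).sum()
    valid = P > 0  # average cTR_y only over classes that occur in the data
    return float((cTP[valid] / P[valid]).mean())
```

Replacing `cert` with an all-ones map recovers the (uncertified) mean recall $mR$.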

Figure 6: Performance for different adversarial patch sizes evaluated on 200 ADE20K images.
Figure 6 shows how the performance of DEMASKED SMOOTHING depends on the patch size for the BEiT-B model. We see that the certified detection metrics remain high even for a patch as large as 5% of the image surface, while for recovery they slowly deteriorate as the patch size increases to 2%.
## D Test-time input certification
In this section, we discuss how certified recovery (Theorem 1) can be applied to guaranteed verification of the robustness on a test image. We also discuss how robustness guarantees for the test-time images can be evaluated by using a dataset of clean images such as ADE20K [31] or COCO-Stuff-10K [39].
### D.1 Test-time certified recovery
Let $x'$ be a test-time input, which can be either a clean image or an image attacked with an adversarial patch. We know that there exists a clean image $x$ corresponding to $x'$, obtained by removing the patch if it is present. We have either $x' = x$ or $x' \in A(x)$, where $A(x) := \{A(x,p,l) \mid (p,l) \in \mathcal{P}\}$. However, at test time we do not have access to the clean image $x$.

Our goal is to certify that for our segmentation model $h$ and a pixel $x_{i,j}$ we have $h(x')_{i,j} = h(x)_{i,j}$. We can achieve this by applying the recovery certification (Theorem 1) to the test-time image. It allows us to verify whether $\forall (p,l) \in \mathcal{P} : h(A(x',p,l))_{i,j} = h(x')_{i,j}$. We also know that if $x' \in A(x)$, then $x \in A(x')$ (Figure 7a). Indeed, if $x'$ differs from $x$ only by one patch, then $x$ can be obtained from $x'$ by removing this patch. Therefore, by obtaining the guarantee for $A(x')$, we implicitly obtain the guarantee also for the image $x$, even though we do not have direct access to it.

We note that this test-time guarantee is only possible for certified recovery. In certified detection, we would need to evaluate the verification function $v$ (Theorem 2) for both the clean image $x$ and the attacked image $x'$ to obtain the result. This cannot be done if $x$ is implicit.
### D.2 Robustness guarantees evaluation
The typical certified robust error for a given test dataset (and pixel $(i,j)$ in the segmentation case) is an estimate for

$$
\mathbb{E}_{X \sim D}\left[\max_{(p,l) \in \mathcal{P}} \mathbb{1}_{h(A(X,p,l))_{i,j} \neq h(X)_{i,j}}\right],
$$

Figure 7: (a) certified inference; (b) double adversarial neighbourhood; (c) original image; (d) certification against two patches.
where $D$ is the data-generating probability measure and we assume our test set is an i.i.d. sample from it. This is the expected robust error (worst case over our threat model $\mathcal{P}$) for clean inputs at a given pixel $(i,j)$. Using the test sample to get an estimate of this quantity, we obtain a probabilistic guarantee that the corresponding pixel $(i,j)$ of a new clean test sample $x'$ drawn i.i.d. from $D$ will have its whole "patch"-neighborhood certified.
However, more important for a practical security analysis is that we can certify a given instance, which may even be adversarially perturbed. Formally, this means that for an input $z \in A(x)$, where $x \sim D$ is an unknown sample, we guarantee

$$
\forall (p,l) \in \mathcal{P} : h(A(z,p,l))_{i,j} = h(z)_{i,j},
$$
and as $x \in A(z)$, this implies that we certify that pixel $(i,j)$ of the potentially manipulated image is classified the same as pixel $(i,j)$ of the unperturbed image $x$.
However, it is now tricky to get even a probabilistic estimate of the quantity

$$
\mathbb{E}_{x \sim D}\left[\max_{(p,l) \in \mathcal{P}} \max_{(q,m) \in \mathcal{P}} \mathbb{1}_{h(A(A(x,p,l),q,m))_{i,j} \neq h(A(x,p,l))_{i,j}}\right],
$$
as the outer maximization process cannot be simply simulated by doing adversarial patch attacks on a clean test dataset.
We propose a way to evaluate a guaranteed lower bound on the fraction of certified test-time inputs by using a dataset of clean images. Instead of considering the standard one-patch neighbourhood $A(x)$ defined by our threat model (Section 3), we consider a neighbourhood $A^2(x)$ of two independent patches (Figure 7b). $A^2(x)$ contains all the images $x' \in A(x)$ as well as their respective patch neighbourhoods $A(x')$. Therefore, by verifying that $\forall (p_1,l_1),(p_2,l_2) \in \mathcal{P} : h(A(A(x,p_1,l_1),p_2,l_2))_{i,j} = h(x)_{i,j}$, we guarantee that $\forall x' \in A(x)\ \forall (p,l) \in \mathcal{P} : h(A(x',p,l))_{i,j} = h(x')_{i,j}$.
|
| 382 |
+
|
| 383 |
+
We note that corresponding reasoning could be applied to certification in ${\ell }_{p}$ models. Then ${A}^{2}\left( x\right)$ would correspond to doubling the radius of the $\epsilon$ -ball instead of adding a second patch.
Note that Theorem 1 can be directly extended to a threat model of $N$ patches. In the worst case, each of the $N$ patches can affect $T$ different maskings. Therefore, we need to change the condition of Theorem 1 to $K \geq {2NT} + 1$. We apply the described method to evaluate the test-time certification guarantees for a toy example of a ${0.1}\%$ patch in Table 3. We also illustrate how a column mask looks in this case in Figure 7.
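As a quick sanity check on this condition, the helper below (our illustrative sketch, not code from the paper) computes the minimal number of masks $K$ for $N$ patches, each of which can affect at most $T$ maskings:

```python
def min_masks(n_patches: int, t: int) -> int:
    """Minimal K satisfying the extended condition of Theorem 1.

    Each of the n_patches patches can affect at most t maskings, so a
    strict majority of unaffected maskings requires K >= 2 * n_patches * t + 1.
    """
    return 2 * n_patches * t + 1

# Single patch, T = 2: the usual K >= 2T + 1 = 5.
print(min_masks(1, 2))  # -> 5
# Two independent patches (the A^2 neighbourhood), T = 2: K >= 9.
print(min_masks(2, 2))  # -> 9
```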
Table 3: Inference recovery robustness estimate. To illustrate our point, we certify an example for a 0.1% patch.

<table><tr><td rowspan="2">dataset</td><td rowspan="2">segm</td><td rowspan="2">mask</td><td rowspan="2">mIoU</td><td colspan="2">big</td><td colspan="2">all</td><td rowspan="2">%C</td></tr><tr><td>mR</td><td>cmR</td><td>mR</td><td>cmR</td></tr><tr><td rowspan="2">ADE20K COCO10K</td><td rowspan="2">BEiT-B</td><td rowspan="2">col</td><td>19.73</td><td>36.95</td><td>16.64</td><td>24.23</td><td>9.24</td><td>41.96</td></tr><tr><td>26.36</td><td>69.63</td><td>35.34</td><td>34.92</td><td>11.13</td><td>28.17</td></tr></table>
## E Adversarial patch example

In this section, we demonstrate an example of a real adversarial patch for a semantic segmentation model, similar to the one illustrated in Figure 1a, and show how it is handled by our certified defences. We illustrate it for the Swin [26] model on one of the images from the ADE20K [31] dataset.
Figure 8: Patch attack illustration with Swin [26] and an ADE20K image. A patch occupying 1% of the image surface changes the segmentation.
### E.1 Patch optimization

We set the patch size to $1\%$ of the image surface. We select a fixed position for the patch on the rear window of a car (Figure 8a). For each pixel, we extract the list of predicted logits for each class and apply the multi-margin loss with respect to the ground-truth label of the respective pixel. We use random patch initialization without restarts. As the optimizer, we use projected gradient descent (PGD) with 1000 steps and an initial step size of 0.01. We use a cosine step-size schedule and gradient momentum with a rate of 0.9. The optimization plot and the patch efficiency at different iterations of PGD are illustrated in Figure 8.
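The optimizer described above can be sketched as follows. This is a schematic, per-value illustration (the patch flattened to a list of pixel values, with `grad` supplied by the attack loss), not the exact implementation used in the paper:

```python
import math

def cosine_step_size(step: int, total_steps: int, initial: float = 0.01) -> float:
    """Cosine-annealed PGD step size, starting from the initial value."""
    return initial * 0.5 * (1.0 + math.cos(math.pi * step / total_steps))

def pgd_step(patch, grad, velocity, step, total_steps, momentum=0.9):
    """One momentum-PGD ascent step on the patch values, projected to [0, 1]."""
    lr = cosine_step_size(step, total_steps)
    velocity = [momentum * v + g for v, g in zip(velocity, grad)]
    patch = [min(1.0, max(0.0, p + lr * v)) for p, v in zip(patch, velocity)]
    return patch, velocity

# One toy step on a two-pixel "patch"; values stay inside [0, 1].
patch, velocity = pgd_step([0.5, 0.2], [1.0, -1.0], [0.0, 0.0],
                           step=0, total_steps=1000)
print(patch)
```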
### E.2 Certified recovery

We denote the original image as $x$ and the patched image as ${x}^{\prime }$. The voting-based segmentation function $h$ (Section 4.2) provides the majority-vote prediction $h\left( x\right)$ and the corresponding certification map, which shows the pixels where all the votes agree. In the figure, we see that a part of the building and the road is certified, which means that this prediction cannot be affected by an adversarial patch. The figure also demonstrates $h\left( {x}^{\prime }\right)$, which correctly segments those regions in the presence of an adversarial patch that fools the original model.
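The voting step behind $h$ can be sketched as follows. This is our illustrative reimplementation in plain Python over label grids: a pixel is certified exactly when all $K$ per-mask segmentations agree on it, as described above.

```python
from collections import Counter

def majority_vote(segmentations):
    """Pixelwise majority vote h over K per-mask segmentation grids.

    Returns the voted label map and a certification map that is True
    where all K votes agree, so the prediction cannot be flipped.
    """
    k = len(segmentations)
    rows, cols = len(segmentations[0]), len(segmentations[0][0])
    labels = [[None] * cols for _ in range(rows)]
    certified = [[False] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            votes = Counter(s[i][j] for s in segmentations)
            label, count = votes.most_common(1)[0]
            labels[i][j] = label
            certified[i][j] = (count == k)
    return labels, certified

# Three per-mask segmentations of a 2x2 image; they disagree on one pixel.
segs = [[[1, 2], [1, 1]], [[1, 2], [1, 3]], [[1, 2], [1, 1]]]
labels, cert = majority_vote(segs)
print(labels)  # [[1, 2], [1, 1]]
print(cert)    # [[True, True], [True, False]]
```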
### E.3 Certified detection

We perform our analysis by evaluating the verification map $v$ (Section 4.2) for the original image $x$ and for the patched image ${x}^{\prime }$. We see that in $v\left( x\right)$ a major part of the building is certified, i.e., for the pixels ${x}_{i, j}$ that belong to the building and the road we have $v{\left( x\right) }_{i, j} = 1$. However, $v{\left( {x}^{\prime }\right) }_{i, j} = 0$ for those pixels, which means that we have detected that the prediction on this input is potentially affected by an adversarial patch.
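A minimal sketch of this analysis (our illustration; function names are ours): the verification map is 1 where all per-mask predictions agree, and a pixel that is verified on the clean reference but not on the incoming input raises an alarm:

```python
def verification_map(masked_preds):
    """v: 1 where all K per-mask predictions agree on the pixel, else 0."""
    rows, cols = len(masked_preds[0]), len(masked_preds[0][0])
    return [[int(len({p[i][j] for p in masked_preds}) == 1)
             for j in range(cols)] for i in range(rows)]

def flag_attack(v_clean, v_input):
    """Mark pixels certified on clean data but unverified at test time."""
    return [[int(a == 1 and b == 0) for a, b in zip(ra, rb)]
            for ra, rb in zip(v_clean, v_input)]

# Two per-mask predictions of a 1x2 image; they disagree on the second pixel.
v = verification_map([[[1, 1]], [[1, 2]]])
print(v)  # -> [[1, 0]]
```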
## F Additional experiments

In Tables 4 and 5, we provide additional experimental results for evaluating different masking schemes proposed in Section 4.1 on different models.
Figure 9: Certified recovery with a 1% patch used in the attack. The majority-vote function $h$ recovers the prediction in the presence of an adversarial patch that fools the undefended model. The segmentations of the original and patched image in (a) and (c) are the same for the regions certified in the certification maps (b) and (d). The certification maps (b) and (d) are also almost identical.
Figure 10: $f$ is a segmentation model (Swin [26]) and $v$ is the verification function (Section 4.2). For an attacked image ${x}^{\prime }$, $v\left( {x}^{\prime }\right)$ detects the region of $f\left( {x}^{\prime }\right)$ which was (potentially) affected by an adversarial patch.
## G Inpainting ablation studies

We perform ablation studies with respect to the demasking step. The results are in Table 6, and Figure 11 provides additional illustrations. As can be seen from the results, our method benefits heavily from having stronger inpainting models available, which allow achieving better clean and certified accuracy. We consider this property a strength of our method, since it will automatically benefit from future research on and development of stronger inpainting methods. For certified recovery, we also compare to GIN [43], based on a generative model that we trained on ADE20K (without using style losses based on an ImageNet-trained VGG). The results are in Table 7; illustrations can be found in Figure 12.
Table 4: The certified detection results (%) for a patch occupying no more than 1% of the image. mIoU - mean intersection over union, mR - mean recall, cmR - certified mean recall, %C - mean percentage of certified and correct pixels in the image.

<table><tr><td rowspan="2">dataset</td><td rowspan="2">segm</td><td rowspan="2">mask</td><td rowspan="2">mIoU</td><td colspan="2">big</td><td colspan="2">all</td><td rowspan="2">%C</td></tr><tr><td>mR</td><td>cmR</td><td>mR</td><td>cmR</td></tr><tr><td rowspan="4">ADE20K</td><td rowspan="2">PSPNet</td><td>col</td><td rowspan="2">44.39</td><td rowspan="2">61.83</td><td>50.02</td><td rowspan="2">54.74</td><td>26.37</td><td>60.57</td></tr><tr><td>row</td><td>42.44</td><td>19.88</td><td>54.62</td></tr><tr><td rowspan="2">Swin</td><td>col</td><td rowspan="2">48.13</td><td rowspan="2">68.51</td><td>55.45</td><td rowspan="2">59.13</td><td>29.06</td><td>61.44</td></tr><tr><td>row</td><td>47.21</td><td>22.04</td><td>55.93</td></tr><tr><td rowspan="4">COCO10K</td><td rowspan="2">PSPNet</td><td>col</td><td rowspan="2">37.76</td><td rowspan="2">71.71</td><td>56.86</td><td rowspan="2">49.65</td><td>26.80</td><td>47.61</td></tr><tr><td>row</td><td>51.05</td><td>23.51</td><td>43.40</td></tr><tr><td rowspan="2">DeepLab v3</td><td>col</td><td rowspan="2">37.81</td><td rowspan="2">72.52</td><td>56.54</td><td rowspan="2">49.98</td><td>26.86</td><td>47.17</td></tr><tr><td>row</td><td>50.51</td><td>23.89</td><td>43.19</td></tr></table>
Table 5: The certified recovery results (%) against a 0.5% patch. 3-mask and 4-mask correspond to $T = 3$ and $T = 4$ respectively (Figure 2).

<table><tr><td rowspan="2">dataset</td><td rowspan="2">segm</td><td rowspan="2">mask</td><td rowspan="2">mIoU</td><td colspan="2">big</td><td colspan="2">all</td><td rowspan="2">%C</td></tr><tr><td>mR</td><td>cmR</td><td>mR</td><td>cmR</td></tr><tr><td rowspan="8">ADE20K</td><td rowspan="4">PSPNet</td><td>col</td><td>19.17</td><td>51.90</td><td>34.11</td><td>23.66</td><td>10.76</td><td>44.90</td></tr><tr><td>row</td><td>12.00</td><td>36.26</td><td>12.03</td><td>15.03</td><td>3.74</td><td>28.29</td></tr><tr><td>3-mask</td><td>15.00</td><td>44.93</td><td>19.55</td><td>18.41</td><td>5.58</td><td>35.85</td></tr><tr><td>4-mask</td><td>12.74</td><td>40.41</td><td>15.86</td><td>15.87</td><td>4.14</td><td>31.22</td></tr><tr><td rowspan="4">Swin</td><td>col</td><td>22.43</td><td>59.75</td><td>34.88</td><td>27.09</td><td>11.70</td><td>46.14</td></tr><tr><td>row</td><td>13.58</td><td>42.88</td><td>15.13</td><td>16.70</td><td>4.46</td><td>30.64</td></tr><tr><td>3-mask</td><td>17.06</td><td>51.03</td><td>24.15</td><td>20.74</td><td>6.65</td><td>38.27</td></tr><tr><td>4-mask</td><td>14.77</td><td>46.67</td><td>17.74</td><td>10.05</td><td>4.72</td><td>34.04</td></tr><tr><td rowspan="8">COCO10K</td><td rowspan="4">PSPNet</td><td>col</td><td>21.94</td><td>61.56</td><td>36.67</td><td>29.94</td><td>11.13</td><td>29.51</td></tr><tr><td>row</td><td>18.87</td><td>58.04</td><td>20.90</td><td>26.16</td><td>6.14</td><td>19.31</td></tr><tr><td>3-mask</td><td>18.82</td><td>59.26</td><td>29.00</td><td>25.85</td><td>7.56</td><td>25.21</td></tr><tr><td>4-mask</td><td>17.46</td><td>58.47</td><td>23.63</td><td>24.35</td><td>5.51</td><td>20.36</td></tr><tr><td rowspan="4">DeepLab v3</td><td>col</td><td>23.12</td><td>62.60</td><td>33.84</td><td>31.59</td><td>11.55</td><td>28.71</td></tr><tr><td>row</td><td>20.04</td><td>55.71</td><td>17.80</td><td>27.89</td><td>6.28</td><td>17.04</td></tr><tr><td>3-mask</td><td>20.14</td><td>58.02</td><td>27.14</td><td>27.82</td><td>8.05</td><td>24.30</td></tr><tr><td>4-mask</td><td>19.35</td><td>58.22</td><td>22.01</td><td>26.74</td><td>5.79</td><td>19.38</td></tr></table>
Figure 11: Results without image demasking. The solid-color inpainting is treated as a separate object in the scene, because every pixel must be classified in the semantic segmentation task. Therefore, it is hard to reach a situation where all the demasked segmentations agree on some pixel, as reflected in Table 6.
Table 6: Comparison of demasked smoothing with and without the demasking step. mIoU - mean intersection over union, mR - mean recall, cmR - certified mean recall, %C - mean percentage of certified and correct pixels in the image. We use the Swin model on 200 ADE20K images with column masking for certified detection and certified recovery. We compare masking the columns with solid black color (no demasking) to ZITS demasking.

<table><tr><td rowspan="2">mode</td><td rowspan="2">patch size</td><td rowspan="2">demasking</td><td rowspan="2">mIoU</td><td colspan="2">big</td><td colspan="2">all</td><td rowspan="2">%C</td></tr><tr><td>mR</td><td>cmR</td><td>mR</td><td>cmR</td></tr><tr><td>detection</td><td>1.0%</td><td>✓ ✘</td><td>38.56</td><td>67.25</td><td>58.85 19.49</td><td>53.37</td><td>23.35 3.09</td><td>62.89 21.19</td></tr><tr><td rowspan="2">recovery</td><td rowspan="2">0.5%</td><td>✓</td><td>19.09</td><td>66.03</td><td>52.71</td><td>23.02</td><td>12.66</td><td>47.05</td></tr><tr><td>✘</td><td>1.10</td><td>15.09</td><td>7.71</td><td>1.79</td><td>0.72</td><td>18.59</td></tr></table>
Table 7: Comparison of our two demasking methods: ZITS and GIN. mIoU - mean intersection over union, mR - mean recall, cmR - certified mean recall, %C - mean percentage of certified and correct pixels in the image. We use the Swin model on 200 ADE20K images.

<table><tr><td rowspan="2">demasking</td><td rowspan="2">trained on</td><td rowspan="2">mIoU</td><td colspan="2">big</td><td colspan="2">all</td><td rowspan="2">%C</td></tr><tr><td>mR</td><td>cmR</td><td>mR</td><td>cmR</td></tr><tr><td>ZITS [29]</td><td>Places2</td><td>19.09</td><td>66.03</td><td>52.71</td><td>23.02</td><td>12.66</td><td>47.05</td></tr><tr><td>GIN [43]</td><td>ADE20K</td><td>5.46</td><td>32.27</td><td>19.05</td><td>7.62</td><td>3.52</td><td>32.08</td></tr></table>
Figure 12: Comparison between GIN and ZITS inpainting.
## H Comparison to simplified Derandomized Smoothing

Derandomized Smoothing (DRS) [9] was proposed for certified recovery; therefore, in this section we focus on this task. A direct adaptation of derandomized smoothing to the semantic segmentation task requires training a model that is able to predict the full image segmentation from a small visible region. Since it is not immediately clear to us what architectural design and training procedure would be needed to train such a model, we consider a simplified version of DRS that we call DRS-S. In this version, we take an off-the-shelf semantic segmentation model and evaluate how it performs with column masking from DRS. Accordingly, we do not encode the masked regions with the special 'NULL' value as in DRS but use black color instead, since an off-the-shelf model cannot work with 'NULL' values.
We run our experiments on the ADE20K dataset. We consider the DRS parameters from the recent SOTA version of Derandomized Smoothing by Salman et al. [15]. They use column width $b = {19}$ and stride $s = {10}$ for certified classification of ${224} \times {224}$ ImageNet images. To account for the fact that ADE20K images have a larger resolution than ImageNet, we scale the parameters to column width $b = {42}$ and stride $s = {22}$. To make the comparison consistent with the rest of our results, we use a patch occupying ${0.5}\%$ of the image.
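The rescaling can be made explicit. The helper below is our illustrative sketch (the paper does not spell out the exact rescaling formula or target resolution, so both are assumptions here):

```python
import math

def scale_drs_params(b: int, s: int, src_res: int, dst_res: int):
    """Scale the DRS column width b and stride s proportionally to the
    image resolution, rounding to the nearest integer."""
    r = dst_res / src_res
    return round(b * r), round(s * r)

def num_column_positions(width: int, stride: int) -> int:
    """Number of column-mask positions a DRS-style scheme evaluates."""
    return math.ceil(width / stride)

# Doubling the resolution doubles width and stride.
print(scale_drs_params(19, 10, 224, 448))  # -> (38, 20)
print(num_column_positions(224, 10))       # -> 23
```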
From Table 8 we can see that DRS-S performs poorly on the semantic segmentation task. The reason for this is illustrated in Figure 13. Processing the column region in 13c would probably be sufficient for a classification model to classify the image into the class "house". But it is clearly not sufficient to reconstruct the whole segmentation map 13e, as can be seen in Figure 13g. Whether this would be possible with a model specifically trained to reconstruct the segmentation map from a very small visible region is, to the best of our knowledge, an open research question.

We point out that the value $\% \mathrm{C}$ of certified and correctly classified pixels in Table 8 is still surprisingly high for DRS-S compared to the other metrics. We attribute this to the fact that the solid black regions are usually treated as a wall by the segmentation model, so the images are usually segmented as a wall by the DRS majority voting. The wall is a common part of both indoor and outdoor scenes in ADE20K, as can be inferred from the Table 9 list of "big" ADE20K classes. Therefore, always classifying the output as a wall provides a decent fraction of correctly classified pixels because of the skewed class distribution.
However, to provide a better comparison with DRS, we emulate a model which is able to reconstruct the whole segmentation map from the column masking proposed in DRS. We do this by applying the demasking approach proposed in this work: we first try to reconstruct the whole image from one column and then segment it with an off-the-shelf model, as we did with the masks proposed in this paper. We call this approach DRS-E; the results can be found in Table 8.
Figure 13
Table 8: Comparison of our method with simplified Derandomized Smoothing [9]. mIoU - mean intersection over union, mR - mean recall, cmR - certified mean recall, %C - mean percentage of certified and correct pixels in the image. We use the Swin model on 200 ADE20K images.

<table><tr><td rowspan="2">method</td><td rowspan="2">mIoU</td><td colspan="2">big</td><td colspan="2">all</td><td rowspan="2">%C</td></tr><tr><td>mR</td><td>cmR</td><td>mR</td><td>cmR</td></tr><tr><td>Demasked (our)</td><td>19.09</td><td>66.03</td><td>52.71</td><td>23.02</td><td>12.66</td><td>47.05</td></tr><tr><td>DRS-S</td><td>0.42</td><td>11.35</td><td>9.08</td><td>1.04</td><td>0.83</td><td>28.01</td></tr><tr><td>DRS-E</td><td>9.12</td><td>54.67</td><td>41.78</td><td>11.04</td><td>7.86</td><td>45.03</td></tr></table>
## I A list of big classes

In Section C, we suggest another perspective on the evaluation of our DEMASKED SMOOTHING by specifically considering its performance on "big" semantic classes. Objects of these classes occupy on average more than ${20}\%$ of the images in which they appear, and correctly segmenting them is important for understanding the scene. In Tables 9 and 10, we provide the full list of such classes in ADE20K [31] and COCO-Stuff-10K [39], respectively, together with the average fraction of pixels that they occupy in the images in which they are present. We point out that for COCO-Stuff-10K, some typically smaller classes such as "sandwich" or "fruit" are included in the list of big classes because of macro-scale images in which they occupy a big part of the scene.
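The selection rule can be written down directly; the snippet below is our illustration with made-up per-image fractions:

```python
def big_classes(per_image_fractions, threshold=20.0):
    """Classes whose average occupied fraction (in %), taken over the
    images in which the class appears, exceeds the threshold."""
    return sorted(
        name for name, fracs in per_image_fractions.items()
        if fracs and sum(fracs) / len(fracs) > threshold
    )

# Toy fractions: "sky" averages 21.5% and just clears the 20% threshold.
fractions = {"wall": [30.0, 22.0], "sky": [25.0, 18.0], "car": [5.0, 6.0]}
print(big_classes(fractions))  # -> ['sky', 'wall']
```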
## J Complexity analysis and parallelization

In DEMASKED SMOOTHING, we propose a set of $K$ masks that are applied to the original image (denote the cost of applying a single masking by $M$). As illustrated in Figure 1c, the masked images are demasked (denote the cost of demasking an image by $D$) and segmented (denote the cost of segmenting an image by $S$); thereupon, the per-mask segmentations are aggregated into a final segmentation and certification (with aggregation cost proportional to $K$). Asymptotically, compute thus grows as $O\left( {K\left( {M + D + S}\right) + K}\right)$, while the cost of a standard segmentation is $O\left( S\right)$. Thus, for large $K$ or $M + D \gg S$, real-time applicability would be impractical. However, we note that:
1. $M + D$ is roughly of the same size as $S$ for typical DL-based inpainting and segmentation models.
2. For certified recovery, we operate in a setting where $K$ is small ($K \in \{5,7,9\}$) and does not grow with the image resolution. This is unlike Derandomized Smoothing and its derivatives, where the number of masks in the recovery task grows with the image resolution (or randomized smoothing with thousands of samples per input). This small value of $K$ benefits our method in time-sensitive applications. For certified detection, we can adjust the number of masks for computational speed by using strided masking as suggested in Section 4.1.

3. Moreover, masking, demasking, and segmenting for different masks do not use any shared data and can thus be fully parallelized if sufficiently powerful hardware is available. Only the aggregation step requires the results of all the previous stages. However, the aggregation time is small compared to the other stages. In terms of latency, a fully parallelized version of our procedure would thus have a latency proportional to $O\left( {M + D + S + K}\right)$. For small $K$ and $M + D \approx S$, application to real-time video can be facilitated by means of parallelization.

Table 9: The list of 19 "big" classes for ADE20K [31] (out of 150 classes in total) with their average fraction of occupied pixels in the images where they are present (%) and index in the list of dataset classes. We define a class to be "big" if it occupies on average more than ${20}\%$ of the pixels in the images in which this class appears.

<table><tr><td>#</td><td>index</td><td>name</td><td>fraction</td><td>#</td><td>index</td><td>name</td><td>fraction</td></tr><tr><td>1</td><td>0</td><td>wall</td><td>25.88</td><td>11</td><td>79</td><td>hovel</td><td>25.93</td></tr><tr><td>2</td><td>1</td><td>building</td><td>32.36</td><td>12</td><td>88</td><td>booth</td><td>23.91</td></tr><tr><td>3</td><td>2</td><td>sky</td><td>21.54</td><td>13</td><td>96</td><td>escalator</td><td>20.96</td></tr><tr><td>4</td><td>7</td><td>bed</td><td>21.25</td><td>14</td><td>103</td><td>ship</td><td>26.81</td></tr><tr><td>5</td><td>21</td><td>water</td><td>22.10</td><td>15</td><td>104</td><td>fountain</td><td>28.81</td></tr><tr><td>6</td><td>29</td><td>field</td><td>22.97</td><td>16</td><td>107</td><td>washer</td><td>22.07</td></tr><tr><td>7</td><td>46</td><td>sand</td><td>21.22</td><td>17</td><td>109</td><td>swimming pool</td><td>28.87</td></tr><tr><td>8</td><td>48</td><td>skyscraper</td><td>42.92</td><td>18</td><td>114</td><td>tent</td><td>34.57</td></tr><tr><td>9</td><td>54</td><td>runway</td><td>28.05</td><td>19</td><td>128</td><td>lake</td><td>34.57</td></tr><tr><td>10</td><td>55</td><td>case</td><td>37.57</td><td/><td/><td/><td/></tr></table>

Table 10: The list of 21 "big" classes for COCO-Stuff-10K [39] (out of 171 classes in total) with their average fraction of occupied pixels in the images where they are present (%) and index in the list of dataset classes. We define a class to be "big" if it occupies on average more than ${20}\%$ of the pixels in the images in which this class appears.

<table><tr><td>#</td><td>index</td><td>name</td><td>fraction</td><td>#</td><td>index</td><td>name</td><td>fraction</td></tr><tr><td>1</td><td>6</td><td>bus</td><td>21.46</td><td>11</td><td>105</td><td>floor-stone</td><td>20.10</td></tr><tr><td>2</td><td>7</td><td>train</td><td>23.11</td><td>12</td><td>111</td><td>fruit</td><td>20.48</td></tr><tr><td>3</td><td>20</td><td>cow</td><td>24.17</td><td>13</td><td>113</td><td>grass</td><td>23.25</td></tr><tr><td>4</td><td>21</td><td>elephant</td><td>28.50</td><td>14</td><td>134</td><td>playingfield</td><td>38.64</td></tr><tr><td>5</td><td>49</td><td>sandwich</td><td>23.99</td><td>15</td><td>137</td><td>river</td><td>40.01</td></tr><tr><td>6</td><td>51</td><td>broccoli</td><td>20.18</td><td>16</td><td>143</td><td>sand</td><td>26.37</td></tr><tr><td>7</td><td>54</td><td>pizza</td><td>25.86</td><td>17</td><td>144</td><td>sea</td><td>36.51</td></tr><tr><td>8</td><td>60</td><td>bed</td><td>36.86</td><td>18</td><td>146</td><td>sky-other</td><td>22.94</td></tr><tr><td>9</td><td>61</td><td>dining table</td><td>21.71</td><td>19</td><td>148</td><td>snow</td><td>51.60</td></tr><tr><td>10</td><td>95</td><td>clouds</td><td>24.11</td><td>20</td><td>159</td><td>vegetable</td><td>20.35</td></tr><tr><td/><td/><td/><td/><td>21</td><td>167</td><td>water-other</td><td>21.67</td></tr></table>
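The parallelization argument can be sketched as follows (our illustration with placeholder stage functions; a real deployment would substitute the actual masking, inpainting, and segmentation models):

```python
from concurrent.futures import ThreadPoolExecutor

def demasked_smoothing(image, masks, demask, segment, aggregate):
    """Run the per-mask branch (mask -> demask -> segment) for all K masks
    in parallel; only the final aggregation needs all K results."""
    def branch(apply_mask):
        return segment(demask(apply_mask(image)))
    with ThreadPoolExecutor(max_workers=len(masks)) as pool:
        results = list(pool.map(branch, masks))
    return aggregate(results)

# Toy numeric stages standing in for the real models:
masks = [lambda img, d=d: img + d for d in range(3)]
out = demasked_smoothing(1, masks, demask=lambda x: 2 * x,
                         segment=lambda x: x + 1, aggregate=sum)
print(out)  # -> 15
```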
## K Used data

In this work, we only use datasets published under formal licenses: ADE20K [31] and COCO-Stuff-10K [39]. To the best of our knowledge, the data used in this project does not contain any personally identifiable information or offensive content. The models ZITS [29] and Swin [26] are published under the Apache-2.0 license. The text of the license for PSPNet [24] can be found here: https://github.com/hszhao/PSPNet/blob/master/LICENSE
## L Demasked Smoothing Visualization

In this section, we provide additional illustrations of our method (Figures 14, 15, 16, 17). Similarly to Table 1, we certify against a 1% patch for the detection task and against a 0.5% patch for the recovery task. For each mask type, we illustrate all the stages summarized in Figure 1c. We also provide examples of certification maps for certified recovery and certified detection with different images (Figures 18 and 19).
Figure 14: DEMASKED SMOOTHING detection column masking illustration for an image from ADE20K [31]. We illustrate five masks out of twenty.
Figure 15: DEMASKED SMOOTHING recovery column masking illustration for an image from ADE20K [31].
Figure 16: DEMASKED SMOOTHING recovery masking for $T = 3, K = 7$ masks (Section 4.1), illustrated for an image from ADE20K.
Figure 17: DEMASKED SMOOTHING recovery masking for $T = 4, K = 9$ masks (Section 4.1), illustrated for an image from ADE20K [31].
Figure 18: Certification map examples on ADE20K [31] with ZITS [29] and Swin [26].
Figure 19: Certification map examples on ADE20K [31] with ZITS [29] and Swin [26].
NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/VFGgG8XpFLu/Initial_manuscript_tex/Initial_manuscript.tex
ADDED

@@ -0,0 +1,197 @@
§ CERTIFIED DEFENCES AGAINST ADVERSARIAL PATCH ATTACKS ON SEMANTIC SEGMENTATION

Anonymous Author(s)

Affiliation

Address

email
§ ABSTRACT

Adversarial patch attacks are an emerging security threat for real-world deep learning applications. We present DEMASKED SMOOTHING, the first approach (to the best of our knowledge) to certify the robustness of semantic segmentation models against this threat model. Previous work on certifiably defending against patch attacks has mostly focused on the image classification task and often required changes in the model architecture and additional training, which is undesirable and computationally expensive. In DEMASKED SMOOTHING, any segmentation model can be applied without particular training, fine-tuning, or restriction of the architecture. Using different masking strategies, DEMASKED SMOOTHING can be applied both for certified detection and certified recovery. In extensive experiments we show that DEMASKED SMOOTHING can on average certify ${63}\%$ of the pixel predictions for a $1\%$ patch in the detection task and 46% against a 0.5% patch for the recovery task on the ADE20K dataset.
§ 1 INTRODUCTION

Physically realizable adversarial attacks are a threat for safety-critical (semi-)autonomous systems such as self-driving cars or robots. Adversarial patches [1, 2] are the most prominent example of such an attack. Their realizability has been demonstrated repeatedly, for instance by Lee and Kolter [3]: an attacker places a printed version of an adversarial patch in the physical world to fool a deep learning system. While empirical defenses [4-7] may offer robustness against known attacks, they do not provide any guarantees against unknown future attacks [8]. Thus, certified defenses for the patch threat model, which guarantee robustness against all possible attacks within the given threat model, are crucial for safety-critical applications.

Research on certifiable defenses against adversarial patches can be broadly categorized into certified recovery and certified detection. Certified recovery [8-16] has the objective of making a correct prediction on an input even in the presence of an adversarial patch. In contrast, certified detection [17-20] provides a weaker guarantee by only aiming at detecting inputs containing adversarial patches. While certified recovery is more desirable in principle, it typically comes at the high cost of reduced performance on clean data. In practice, certified detection might be preferable because it allows maintaining high clean performance. Most existing certifiable defenses against patches focus on image classification; DetectorGuard [21] and ObjectSeeker [22] certifiably defend against patch hiding attacks on object detectors. Moreover, existing defences are not easily applicable to arbitrary downstream models, because they assume either that the downstream model is trained explicitly for being certifiably robust [9, 12], or that the model has a certain network architecture such as BagNet [10, 12, 11] or a vision transformer [15, 20]. PatchCleanser [14] can be combined with arbitrary downstream models but is restricted to image classification. Adversarial patch attacks have also been proposed for the image segmentation problem [23], mostly for attacking CNN-based models that use a localized receptive field [24]. However, recently self-attention based vision transformers [25] have achieved a new state of the art in the image segmentation task [26, 27]. Their output may become more vulnerable to adversarial patches if these manage to manipulate the global self-attention [28]. We demonstrate how significant parts of the segmentation output can be affected by a small patch for the Swin transformer [26] in Figure 1a. Full details on the attack and on defending against it with our method are available in Appendix E. We point out that preventive certified defences are important because newly developed attacks can immediately be used to compromise safety-critical applications unless these are properly defended.
Figure 1: (a) A simple patch attack on the Swin transformer [26] manages to switch the prediction for a big part of the image. (b) Masking the patch. (c) A sketch of DEMASKED SMOOTHING for certified image segmentation. First, we generate a set of masked versions of the image such that each possible patch can only affect a certain number of masked images. Then we use image inpainting to partially recover the information lost during masking and apply an arbitrary segmentation method. The output is obtained by aggregating the segmentations pixelwise.
In this work, we propose the novel framework DEMASKED SMOOTHING (Figure 1c) to obtain the first (up to our knowledge) certified defences against patch attacks on semantic segmentation models. Similarly to previous work [9], we mask different parts of the input (Figure 1b) and provide guarantees with respect to every possible patch that is not larger than a certain pre-defined size. While prior work required the classification model to deal with such masked inputs, we leverage recent progress in image inpainting [29] to reconstruct the input before passing it to the downstream model. This decoupling of image demasking from the segmentation task allows us to support arbitrary downstream models. Moreover, we can leverage state of the art methods for image inpainting. We also propose different masking schemes tailored for the segmentation task that provide the dense input allowing the demasking model to understand the scene but still satisfy the guarantees with respect to the adversarial patch. We summarize our contributions as follows:
|
| 26 |
+
|
| 27 |
+
* We propose DEMASKED SMOOTHING which is the first (to the best of our knowledge) certified recovery or certified detection based defence against adversarial patch attacks on semantic segmentation models (Section 4).
* DEMASKED SMOOTHING can perform certified detection and recovery with any off-the-shelf segmentation model, without requiring fine-tuning or any other adaptation.
* We implement DEMASKED SMOOTHING and evaluate it for different certification objectives and masking schemes (Section 5). We can certify ${63}\%$ of all pixels in certified detection for a 1% patch and 46% in certified recovery for a 0.5% patch for the BEiT-B [30] segmentation model on the ADE20K [31] dataset.
§ 2 RELATED WORK
Certified recovery. The first certified recovery defence against patches was proposed by Chiang et al. [8] for classification models. De-Randomized Smoothing (DRS) [9] significantly improved certified accuracy. Models with small receptive fields such as BagNets [32] were adopted for this task either by combining them with some fixed postprocessing [10, 11] or by training them end-to-end for certified recovery [12]. DRS was also applied [15] to Vision Transformers (ViTs) [25]. In contrast to these works, our DEMASKED SMOOTHING can be applied to models with arbitrary architecture. PatchCleanser [14] has this property as well, but it is limited to image classification. Certified recovery against patches has also been extended to object detection to defend against patch hiding attacks [18, 22]. Randomized smoothing [33] has been applied to certify semantic segmentation models against ${\ell }_{2}$ -norm bounded adversarial attacks [34]. However, to the best of our knowledge, no certified defence against patch attacks for semantic segmentation has been proposed so far.
Certified detection. In this alternative to certified recovery, an adversarial patch is allowed to change the model prediction. However, if it succeeds in doing so, the attack is certifiably detected. Minority Reports [17] was the first certified detection method against patches. PatchGuard++ [18] significantly improved the inference time. ScaleCert [19] uses "superficial important neurons" to detect an attack. Lastly, PatchVeto [20] implements masking by removing certain input patches of the ViT. In this work, we propose a novel method for certified detection in semantic segmentation.
Image reconstruction. The problem of learning to reconstruct the full image from masked inputs was pioneered by Vincent et al. [35]. It recently attracted attention as a proxy task for self-supervised pre-training, especially for ViTs [30, 36]. Recent approaches to this problem use Fourier convolutions [37] and ViTs [29]. SPG-Net [38] trains a subnetwork to reconstruct the full semantic segmentation directly from the masked input as part of its image inpainting pipeline. In this work, we use the state-of-the-art ZITS [29] inpainting method.
§ 3 PROBLEM SETUP
Semantic segmentation. In this work, we focus on the semantic segmentation task. Let $\mathcal{X}$ be a set of rectangular images. Let $x \in \mathcal{X}$ be an image with height $H$ , width $W$ and $C$ channels. We denote by $\mathcal{Y}$ a finite label set. The goal is to find the segmentation map $s \in {\mathcal{Y}}^{H \times W}$ for $x$ . For each pixel ${x}_{i,j}$ , the corresponding label ${s}_{i,j}$ denotes the class of the object to which ${x}_{i,j}$ belongs. We denote by $\mathbb{S}$ the set of segmentation maps and by $f : \mathcal{X} \rightarrow \mathbb{S}$ a segmentation model.
Threat model. Let us consider an untargeted adversarial patch attack on a segmentation model. Consider an image $x \in {\left\lbrack 0,1\right\rbrack }^{H \times W \times C}$ and its ground truth segmentation map $s$ . Assume that the attacker can modify an arbitrary rectangular region of the image $x$ which has a size of ${H}^{\prime } \times {W}^{\prime }$ . We refer to this modification as a patch. Let $l \in \{ 0,1{\} }^{H \times W}$ be a binary mask that defines the patch location in the image, in which ones denote the pixels belonging to the patch. Let $\mathcal{L}$ be the set of all possible patch locations for a given image $x$ . Let $p \in {\left\lbrack 0,1\right\rbrack }^{H \times W \times C}$ be the modification itself. We define an operator $A\left( {x,p,l}\right) = \left( {1 - l}\right) \odot x + l \odot p$ , where $\odot$ is the element-wise product. The operator $A$ applies the ${H}^{\prime } \times {W}^{\prime }$ subregion of $p$ defined by the binary mask $l$ to the image $x$ while keeping the rest of the image unchanged. We denote $\mathcal{P} \mathrel{\text{ := }} {\left\lbrack 0,1\right\rbrack }^{H \times W \times C} \times \mathcal{L}$ to be the set of all possible patch configurations $\left( {p,l}\right)$ that define an ${H}^{\prime } \times {W}^{\prime }$ patch. Let $Q\left( {f\left( x\right) ,s}\right)$ be some quality metric. The attacker’s goal is to find $\left( {{p}^{ \star },{l}^{ \star }}\right) = \arg \mathop{\min }\limits_{{\left( {p,l}\right) \in \mathcal{P}}}Q\left( {f\left( {A\left( {x,p,l}\right) }\right) ,s}\right)$ . In this paper, we propose certified defences against any possible attack from $\mathcal{P}$ , including $\left( {{p}^{ \star },{l}^{ \star }}\right)$ . We consider two robustness objectives.
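The operator $A$ is straightforward to state in code. Below is a minimal NumPy sketch (the array shapes and names are ours, for illustration only):

```python
import numpy as np

def apply_patch(x, p, l):
    """A(x, p, l) = (1 - l) * x + l * p, element-wise; l is the binary
    H x W location mask, broadcast over the channel axis."""
    l3 = l[..., None]
    return (1 - l3) * x + l3 * p

# Hypothetical 4x4 image with 3 channels and a 2x2 patch of ones.
x = np.zeros((4, 4, 3))
p = np.ones((4, 4, 3))
l = np.zeros((4, 4))
l[1:3, 1:3] = 1
x_adv = apply_patch(x, p, l)  # modified only inside the patch region
```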
Certified recovery. For a pixel ${x}_{i,j}$ , our goal is to verify that the following statement holds:

$$
\forall \left( {p,l}\right) \in \mathcal{P} : f{\left( A\left( x,p,l\right) \right) }_{i,j} = f{\left( x\right) }_{i,j} \tag{1}
$$

Certified detection. We define a verification function $v : \mathcal{X} \rightarrow \{ 0,1{\} }^{H \times W}$ . If $v{\left( x\right) }_{i,j} = 1$ , then the adversarial patch attack on ${x}_{i,j}$ can be detected by applying $v$ to the attacked image ${x}^{\prime } = A\left( {x,p,l}\right)$ .

$$
v{\left( x\right) }_{i,j} = 1 \Rightarrow \left\lbrack {\forall \left( {p,l}\right) \in \mathcal{P} : v{\left( A\left( x,p,l\right) \right) }_{i,j} = 1 \rightarrow f{\left( A\left( x,p,l\right) \right) }_{i,j} = f{\left( x\right) }_{i,j}}\right\rbrack \tag{2}
$$

$v{\left( {x}^{\prime }\right) }_{i,j} = 0$ means an alert on pixel ${x}_{i,j}^{\prime }$ . However, if ${x}^{\prime }$ is not an adversarial example, then this is a false alert. In that case, the fraction of pixels for which we return a false alert is called the false alert ratio (FAR). The secondary objective is to keep the FAR as small as possible.
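The false alert ratio can be computed directly from a verification map. A minimal NumPy sketch (the toy arrays below are hypothetical):

```python
import numpy as np

def false_alert_ratio(v, pred, gt):
    """FAR: among pixels the model classifies correctly on a clean image,
    the fraction for which the verifier raises an alert (v == 0)."""
    correct = pred == gt
    if correct.sum() == 0:
        return 0.0
    return float((v[correct] == 0).mean())

pred = np.array([[1, 2], [3, 3]])   # hypothetical clean prediction
gt   = np.array([[1, 2], [3, 0]])   # bottom-right pixel is misclassified
v    = np.array([[1, 0], [1, 1]])   # one correct pixel carries an alert
far  = false_alert_ratio(v, pred, gt)  # 1 alert among 3 correct pixels
```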
Depending on the objective, our goal is to certify condition (1) or (2) for each pixel ${x}_{i,j}$ . This provides an upper bound on an attacker’s effectiveness under any adversarial patch attack from $\mathcal{P}$ .
§ 4 DEMASKED SMOOTHING
DEMASKED SMOOTHING (Figure 1c) consists of several steps. First, we apply a predefined set of masks with specific properties to the input image to obtain a set of masked images. Then we reconstruct the masked regions of each image based on the available information with an inpainting model $g$ . After that, we apply a segmentation model $f$ to the demasked results. Finally, we aggregate the segmentation outcomes and draw a conclusion for the original image with respect to statement (1) or (2). See Algorithm 1 in Appendix B.
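The steps above can be sketched as follows; `inpaint` and `segment` are hypothetical stand-ins for the inpainting model $g$ and segmentation model $f$ , and the pixelwise aggregation of Section 4.2 is left out:

```python
import numpy as np

def demasked_smoothing(x, masks, inpaint, segment):
    """Sketch of the DEMASKED SMOOTHING pipeline: mask the input,
    reconstruct each masked image, segment each reconstruction."""
    segmentations = []
    for m in masks:                                   # m has shape (H, W)
        masked = np.where(m[..., None] == 1, x, 0.0)  # 0.0 stands in for '*'
        segmentations.append(segment(inpaint(masked, m)))
    return np.stack(segmentations)  # aggregated pixelwise downstream

# Dummy stand-ins: identity "inpainting" and a thresholding "segmentation".
x = np.random.rand(4, 4, 3)
masks = np.ones((2, 4, 4))
S = demasked_smoothing(x, masks, lambda im, m: im,
                       lambda im: (im.mean(-1) > 0.5).astype(int))
```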
§ 4.1 INPUT MASKING
Motivation. Like previous work (Section 2), we apply masking patterns to the input image and use predictions on masked images to aggregate a robust result. If an adversarial patch is completely masked, it has no effect on further processing. However, in semantic segmentation we predict not a single whole-image label as in classification, but a separate label for each pixel. Thus, a prediction on a masked image must also provide labels for the masked pixels.
Preliminaries. Consider an image $x \in {\left\lbrack 0,1\right\rbrack }^{H \times W \times C}$ . We define "*" to be a special masking symbol that does not correspond to any pixel value and has the property $\forall z \in \mathbb{R} : z \times * = *$ . Note that $*$ needs to be different from 0 since 0 is a valid pixel value in unmasked inputs. Let $m \in \{ * ,1{\} }^{H \times W}$ be a mask. We call the element-wise product $x \odot m$ a masking of $x$ . In a masking, a subset of pixels becomes $*$ and the rest remains unchanged. We consider patches of size at most ${H}^{\prime } \times {W}^{\prime }$ .
Certified recovery. We break $m$ into an array $B$ of non-intersecting blocks, each having the same size ${H}^{\prime } \times {W}^{\prime }$ as the adversarial patch. We index the blocks as $B\left\lbrack {q,r}\right\rbrack ,1 \leq q \leq \left\lceil \frac{H}{{H}^{\prime }}\right\rceil ,1 \leq r \leq \left\lceil \frac{W}{{W}^{\prime }}\right\rceil$ . We say that the block $B\left\lbrack {q,r}\right\rbrack$ is visible in a mask $m$ if $\forall \left( {i,j}\right) \in B\left\lbrack {q,r}\right\rbrack : {m}_{i,j} = 1$ . Consider an array $M$ of $K$ masks. We define each mask $M\left\lbrack k\right\rbrack$ by the set of blocks that are visible in it. Each block is visible in exactly one mask and masked in the others. We say that a mask $m$ is affected by a patch $\left( {p,l}\right)$ if $A\left( {x,p,l}\right) \odot m \neq x \odot m$ . We define $T\left( M\right) = \mathop{\max }\limits_{{\left( {p,l}\right) \in \mathcal{P}}}\left| {\{ m \in M \mid A\left( {x,p,l}\right) \odot m \neq x \odot m\} }\right|$ . That is, $T\left( M\right)$ is the largest number of masks affected by some patch. If $M$ is clear from context, we refer to the value $T\left( M\right)$ as $T$ for simplicity. We define column masking, for which $T = 2$ : we assign every $k$ -th block column to be visible in the mask $M\left\lbrack k\right\rbrack$ (Figure 2b). Any $\left( {p,l}\right) \in \mathcal{P}$ can intersect at most two adjacent columns since $\left( {p,l}\right)$ has the same width as a column. Thus, it can affect at most two masks (Figure 2b). A similar scheme can be proposed for rows. Due to the block size, the patch $\left( {p,l}\right)$ cannot intersect more than four blocks at once. We define a mask set that we call 3-mask such that for any four adjacent blocks, two are visible in the same mask (Figure 2c). Hence, a patch can affect no more than three masks of the 3-mask, $T = 3$ . To achieve $T = 4$ , any assignment of visible blocks to the masks works. We consider the 4-mask, which allows uniform coverage of the visible blocks in the image (Figure 2d). See details in Appendix B.
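Column masking and the quantity $T(M)$ can be illustrated with a small NumPy sketch (dimensions are hypothetical). It builds block-column masks and checks by brute force that no ${H}^{\prime } \times {W}^{\prime }$ window overlaps visible pixels of more than two masks:

```python
import numpy as np
from itertools import product

def column_masks(H, W, Hp, Wp, K):
    """Mask k keeps every block column c with c % K == k visible (value 1);
    all other pixels are masked (0 stands in for the '*' symbol)."""
    n_cols = -(-W // Wp)  # ceil(W / W')
    masks = np.zeros((K, H, W), dtype=int)
    for c in range(n_cols):
        masks[c % K, :, c * Wp:(c + 1) * Wp] = 1
    return masks

def worst_case_T(masks, Hp, Wp):
    """Brute-force T(M): the maximum number of masks with a visible pixel
    inside some H' x W' patch location (a mask can only be affected by a
    patch that overlaps its visible pixels)."""
    K, H, W = masks.shape
    T = 0
    for i, j in product(range(H - Hp + 1), range(W - Wp + 1)):
        window = masks[:, i:i + Hp, j:j + Wp].reshape(K, -1)
        T = max(T, int((window.max(axis=1) == 1).sum()))
    return T

masks = column_masks(H=8, W=20, Hp=4, Wp=4, K=5)
# A 4x4 patch can span at most two adjacent block columns, hence T = 2.
```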
Certified detection. We define ${M}_{d}$ to be a set of masks for certified detection (we use the subscript $d$ for distinction). ${M}_{d}$ should have the property $\forall \left( {p,l}\right) \in \mathcal{P}\;\exists m \in {M}_{d} : A\left( {x,p,l}\right) \odot m = x \odot m$ , i.e. for every patch there exists at least one mask not affected by it. See details in Appendix B.
§ 4.2 CERTIFICATION
Certified recovery. For the threat model $\mathcal{P}$ , consider a set $M$ of $K$ masks. We define a function $h : \mathcal{X} \rightarrow \mathbb{S}$ that assigns a class to each pixel ${x}_{i,j}$ via majority voting over the class predictions of the reconstructed segmentations in $S$ . The class predicted by the largest number of segmentations is assigned; ties are broken by choosing the class with the smaller index.
Figure 2: certified recovery: column mask (b), 3-mask (c), 4-mask (d); certified detection (e, f).
Figure 3: Reconstructing the masked images with ZITS [29]
Theorem 1. If the number of masks $K$ satisfies $K \geq {2T}\left( M\right) + 1$ and for a pixel ${x}_{i,j}$ we have

$$
\forall S\left\lbrack k\right\rbrack \in S : S{\left\lbrack k\right\rbrack }_{i,j} = h{\left( x\right) }_{i,j}
$$

(i.e. all the votes agree), then $\forall \left( {p,l}\right) \in \mathcal{P} : h{\left( A\left( x,p,l\right) \right) }_{i,j} = h{\left( x\right) }_{i,j}$ .
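The majority vote $h$ and the per-pixel certificate of Theorem 1 can be sketched as follows (toy arrays, NumPy; this is our illustration, not the paper's implementation):

```python
import numpy as np

def majority_vote(S):
    """Pixelwise majority vote h over K segmentations S of shape (K, H, W).
    np.argmax breaks ties toward the smaller class index, as in the paper."""
    n_classes = int(S.max()) + 1
    votes = np.stack([(S == c).sum(axis=0) for c in range(n_classes)])
    return votes.argmax(axis=0)

def recovery_certificate(S, T):
    """A pixel is certified (Theorem 1) iff all K votes agree and K >= 2T + 1."""
    K = S.shape[0]
    unanimous = (S == S[0]).all(axis=0)
    return unanimous if K >= 2 * T + 1 else np.zeros_like(unanimous)

# K = 5 segmentations of a 1x2 "image"; only the first pixel is unanimous.
S = np.array([[[0, 1]], [[0, 2]], [[0, 1]], [[0, 1]], [[0, 1]]])
h = majority_vote(S)
cert = recovery_certificate(S, T=2)
```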
Certified detection. Consider ${M}_{d} = {\left\{ {M}_{d}\left\lbrack k\right\rbrack \right\} }_{k = 1}^{K}$ . For a set of demasked segmentations $S$ , we define the verification map $v{\left( x\right) }_{i,j} \mathrel{\text{ := }} \left\lbrack {f{\left( x\right) }_{i,j} = S{\left\lbrack 1\right\rbrack }_{i,j} = \ldots = S{\left\lbrack K\right\rbrack }_{i,j}}\right\rbrack$ , i.e. the original segmentation agrees with all segmentations on the masked-and-demasked inputs, including the one in which the potential patch was completely masked.
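The verification map can be sketched as (toy arrays, NumPy):

```python
import numpy as np

def verification_map(f_x, S):
    """v(x)_{ij} = 1 iff the prediction on the original image agrees with
    every demasked segmentation S[k] at pixel (i, j)."""
    agree = np.ones(f_x.shape, dtype=bool)
    for k in range(S.shape[0]):
        agree &= (S[k] == f_x)
    return agree.astype(int)

f_x = np.array([[1, 2]])              # prediction on the original image
S = np.array([[[1, 2]], [[1, 3]]])    # two demasked segmentations
v = verification_map(f_x, S)          # second pixel triggers an alert
```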
Theorem 2. Assume that $v{\left( x\right) }_{i,j} = 1$ . Then

$$
\forall \left( {p,l}\right) \in \mathcal{P} : v{\left( A\left( x,p,l\right) \right) }_{i,j} = 1 \Rightarrow f{\left( A\left( x,p,l\right) \right) }_{i,j} = f{\left( x\right) }_{i,j}
$$

See the proofs of both theorems in Appendix A. For a given image $x$ , the verification map $v\left( x\right)$ is complementary to the model segmentation output $f\left( x\right)$ , which stays unchanged. Thus, there is no drop in clean performance; however, the verification map $v$ may contain some false positive alerts in the clean setting.
§ 5 EXPERIMENTS
In this section, we evaluate DEMASKED SMOOTHING with the masking schemes proposed in Section 4, compare our approach with the direct application of Derandomized Smoothing [9] to the segmentation task and evaluate the performance on different datasets and models. Certified recovery and certified detection provide certificates of different strength (Section 4) which are not comparable. We evaluate them separately for different patch sizes.
Experimental Setup. We evaluate DEMASKED SMOOTHING on two challenging semantic segmentation datasets: ADE20K [31] (150 classes, 2000 validation images) and COCO-Stuff-10K [39] (171 classes, 1000 validation images). For demasking we use the ZITS [29] inpainting model with the checkpoint trained on Places2 [40] from the official paper repository¹. As a segmentation model $f$ we use BEiT [30], Swin [26], PSPNet [24] and DeepLab v3 [41]. We use the model implementations provided in the mmsegmentation framework [42]. An illustration of the image reconstruction and respective segmentation can be found in Figure 3.
Evaluation. We compute mIoU, mean recall (mR) and certified mean recall (cmR). See a detailed explanation of these metrics in Appendix C. In certified detection, we additionally consider the false alert ratio (FAR), which is the fraction of correctly classified pixels for which we return an alert on a clean image. A smaller FAR is preferable. Due to our threat model, certifying small objects in the scene can be difficult because they can be partially or completely covered by an adversarial patch. To provide an additional perspective on our methods, we also evaluate mR and cmR specifically for the "big" classes, which occupy on average more than ${20}\%$ of the images in which they appear. These are, for example, road, building, train, and sky, which are important for understanding the scene. The full list is provided in the appendix. We run the evaluation in parallel on 5 Nvidia Tesla V100-32GB GPUs. The certification for the whole ADE20K validation set with ZITS and BEiT-B takes around 1.2 hours for certified recovery and 2 hours for certified detection (due to a larger number of masks).
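For reference, the following is a minimal per-image version of IoU and recall averaged over the classes present in the ground truth (this sketch is ours; the paper's exact protocol is in Appendix C):

```python
import numpy as np

def miou_and_recall(pred, gt, n_classes):
    """Per-class IoU and recall, averaged over classes present in gt."""
    ious, recalls = [], []
    for c in range(n_classes):
        p, g = pred == c, gt == c
        if g.sum() == 0:
            continue  # skip classes absent from the ground truth
        inter = (p & g).sum()
        ious.append(inter / (p | g).sum())
        recalls.append(inter / g.sum())
    return float(np.mean(ious)), float(np.mean(recalls))

pred = np.array([[0, 0], [1, 1]])
gt   = np.array([[0, 1], [1, 1]])
miou, mr = miou_and_recall(pred, gt, n_classes=2)
```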
Discussion. In Table 1, we compare different masking schemes proposed in Section 4.1. Evaluation of all the models with all the masking schemes is consistent with these results and can be found in
¹ https://github.com/DQiaole/ZITS_inpainting
Table 1: Comparison of the masking schemes proposed in Section 4.1. mIoU: mean intersection over union; mR: mean recall; cmR: certified mean recall; %C: mean percentage of certified and correct pixels in the image. For detection, we provide clean mIoU, since the output is unaffected, and the mean false alert rate (FAR; lower is better). See additional results in Appendix F.

| dataset | segm | mode | mask | mIoU | mR (big) | cmR (big) | mR (all) | cmR (all) | %C | FAR ↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ADE20K | BEiT-B | detection, 1% patch | column | 53.08 | 70.92 | 57.33 | 64.45 | 32.55 | 63.55 | 20.04 |
| ADE20K | BEiT-B | detection, 1% patch | row | X | X | 50.05 | X | 26.65 | 58.34 | 25.24 |
| ADE20K | BEiT-B | recovery, 0.5% patch | column | 24.92 | 60.77 | 41.26 | 29.84 | 12.98 | 46.22 | N/A |
| ADE20K | BEiT-B | recovery, 0.5% patch | row | 16.33 | 46.91 | 16.72 | 19.51 | 4.83 | 31.71 | N/A |
| ADE20K | BEiT-B | recovery, 0.5% patch | 3-mask | 19.90 | 56.90 | 26.51 | 23.86 | 7.54 | 38.64 | N/A |
| ADE20K | BEiT-B | recovery, 0.5% patch | 4-mask | 18.82 | 52.96 | 23.75 | 22.56 | 5.87 | 34.36 | N/A |

Table 2: Demasked Smoothing results with column masking for different models

| mode | dataset | segm | mIoU | mR (big) | cmR (big) | mR (all) | cmR (all) | %C | FAR ↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| detection, 1% patch | ADE20K | BEiT-B | 53.08 | 70.92 | 57.33 | 64.45 | 32.55 | 63.55 | 20.04 |
| detection, 1% patch | ADE20K | PSPNet | 44.39 | 61.83 | 50.02 | 54.74 | 26.37 | 60.57 | 20.08 |
| detection, 1% patch | ADE20K | Swin-B | 48.13 | 68.51 | 55.45 | 59.13 | 29.06 | 61.44 | 20.31 |
| detection, 1% patch | COCO10K | PSPNet | 37.76 | 71.71 | 56.86 | 49.65 | 26.80 | 47.09 | 21.43 |
| detection, 1% patch | COCO10K | DeepLab v3 | 37.81 | 72.52 | 56.54 | 49.98 | 26.86 | 46.55 | 21.89 |
| recovery, 0.5% patch | ADE20K | BEiT-B | 24.92 | 60.77 | 41.26 | 29.84 | 12.98 | 46.22 | N/A |
| recovery, 0.5% patch | ADE20K | PSPNet | 19.17 | 51.90 | 34.11 | 23.66 | 10.76 | 44.90 | N/A |
| recovery, 0.5% patch | ADE20K | Swin-B | 22.43 | 59.75 | 34.88 | 27.09 | 11.70 | 46.14 | N/A |
| recovery, 0.5% patch | COCO10K | PSPNet | 21.94 | 61.56 | 36.67 | 29.94 | 11.13 | 29.51 | N/A |
| recovery, 0.5% patch | COCO10K | DeepLab v3 | 23.12 | 62.60 | 33.84 | 31.59 | 11.55 | 28.71 | N/A |

Appendix F. We see that column masking achieves better results in both certification modes. We attribute the effectiveness of column masking to the fact that most images in the datasets have a clear horizon line; a visible column therefore provides a slice of the image that intersects most of the background objects in the scene.
In Table 2, we evaluate our method with column masking on different models. For certified detection, we can certify more than ${60}\%$ of the pixels with all models on ADE20K and more than ${46}\%$ on COCO10K. The false alert ratio on correctly classified pixels is around 20%. In certified recovery, we certify more than ${44}\%$ of the pixels on ADE20K and more than ${28}\%$ on COCO10K. See the comparison with DRS [9] adapted for segmentation in Appendix A. We evaluate the performance of our method for different patch sizes in Appendix C. Ablations with respect to inpainting can be found in Appendix G. Illustrations of the DEMASKED SMOOTHING procedure are provided in Appendix D.
§ 6 CONCLUSION
In this work, we propose DEMASKED SMOOTHING, the first (to the best of our knowledge) certified defence framework against patch attacks on segmentation models. Due to its novel design based on masking schemes and image demasking, DEMASKED SMOOTHING is compatible with any segmentation model and can on average certify ${63}\%$ of the pixel predictions for a 1% patch in the detection task and 46% against a 0.5% patch in the recovery task on the ADE20K dataset.
Ethical and Societal Impact This work contributes to the field of certified defences against physically realizable adversarial attacks. The proposed approach makes it possible to certify the robustness of safety-critical applications such as medical imaging or autonomous driving. The defence might also be used to improve the robustness of systems deployed for malicious purposes, such as (semi-)autonomous weaponry or unauthorized surveillance. This danger may be mitigated, e.g., by using a system of 5 sparsely distributed patches, which makes certifying the image more challenging. All activities in our organization are carbon neutral, so our experiments do not leave any carbon dioxide footprint.
NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/VhBtAHeIUaB/Initial_manuscript_md/Initial_manuscript.md
# Provable Re-Identification Privacy
Anonymous Author(s)
Affiliation
Address
email
## Abstract
In applications involving sensitive data, such as finance and healthcare, the necessity for preserving data privacy can be a significant barrier to machine learning model development. Differential privacy (DP) has emerged as one canonical standard for provable privacy. However, DP's strong theoretical guarantees often come at the cost of a large drop in its utility for machine learning, and DP guarantees themselves can be difficult to interpret. As a result, standard DP has encountered deployment challenges in practice. In this work, we propose a different privacy notion, re-identification privacy (RIP), to address these challenges. RIP guarantees are easily interpretable in terms of the success rate of membership inference attacks. We give a precise characterization of the relationship between RIP and DP, and show that RIP can be achieved using less randomness than required for guaranteeing DP, leading to a smaller drop in utility. Our theoretical results also give rise to a simple algorithm for guaranteeing RIP which can be used as a wrapper around any algorithm with a continuous output, including parametric model training.
## 1 Introduction
As the popularity and efficacy of machine learning (ML) have increased, the number of domains in which ML is applied has also expanded greatly. Some of these domains, such as finance or healthcare, involve machine learning on sensitive data which cannot be publicly shared due to regulatory or ethical concerns (Assefa et al., 2020; Office for Civil Rights, 2002). In these instances, maintaining data privacy is of paramount importance and must be considered at every stage of the machine learning process, from model development to deployment. In development, even sharing data in-house while retaining the appropriate level of privacy can be a barrier to model development (Assefa et al., 2020). After deployment, the trained model itself can leak information about the training data if appropriate precautions are not taken (Shokri et al., 2017; Carlini et al., 2021a).
Differential privacy (DP) (Dwork et al., 2014) has emerged as the gold standard for provable privacy in the academic literature. Training methods for DP use randomized algorithms applied on databases of points, and DP stipulates that the algorithm's random output cannot change much depending on the presence or absence of one individual point in the database. These guarantees in turn give information-theoretic protection against the maximum amount of information that an adversary can obtain about any particular sample in the database, regardless of that adversary's prior knowledge or computational power, making DP an attractive method for guaranteeing privacy. However, DP's strong theoretical guarantees often come at the cost of a large drop in utility for many algorithms. In addition, DP guarantees themselves are difficult for non-experts to interpret. For instance, there is a precise definition for what it means for an algorithm to satisfy DP with $\varepsilon = {10}$ , but it is not a priori clear what this definition guarantees in terms of practical questions that a user could have, the most basic of which might be whether an attacker can determine if that user's information was included in the algorithm's input. These issues hinder the widespread adoption of DP in practice.
In this paper, we propose a novel privacy notion, re-identification privacy (RIP), to address these challenges. RIP is based on re-identification, also called membership inference. Re-identification measures privacy via a game played between the algorithm designer and an adversary or attacker. The adversary is presented with the algorithm’s output and a "target" sample ${\mathbf{x}}^{ * }$ , which may or may not have been included in the algorithm's input set. The adversary's goal is to determine whether or not the target sample was included in the algorithm's input. If the adversary can succeed with probability much higher than random guessing, then the algorithm must be leaking information about its input. This measure of privacy is one of the simplest for the attacker; thus, provably protecting against it is a strong privacy guarantee. Furthermore, RIP is easily interpretable, as it is measured with respect to a simple quantity, namely the maximum success rate of an attacker. In summary, our contributions are as follows:
- We propose a novel privacy notion, which we dub re-identification privacy (RIP).
- We characterize the relationship between RIP and differential privacy (DP).
- We introduce algorithms for generating RIP synthetic data.
- We demonstrate that certifying RIP can allow for much higher utility than certifying DP, and never results in worse utility.
## 2 Related Work
Privacy attacks in ML The study of privacy attacks has recently gained popularity in the machine learning community as the importance of data privacy has become more apparent. In a membership inference or re-identification attack (Shokri et al., 2017), an attacker is presented with a particular sample and the output of the algorithm to be attacked. The attacker's goal is to determine whether or not the presented sample was included in the training data. If the attacker can determine the membership of the sample with a probability significantly greater than random guessing, this indicates that the algorithm is leaking information about its training data. Obscuring whether or not a given individual belongs to the private dataset is the core promise of private data sharing, and the main reason that we focus on membership inference as the privacy measure. Membership inference attacks against predictive models have been studied extensively (Shokri et al., 2017; Baluta et al., 2022; Hu et al., 2022; Liu et al., 2022; He et al., 2022; Carlini et al., 2021a), and recent work has also developed membership inference attacks against synthetic data (Stadler et al., 2022; Chen et al., 2020).
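As a toy illustration of the membership-inference game, the following simulation (entirely hypothetical: a "mean release" algorithm and a thresholding attacker) measures the attacker's success rate and shows how output noise pushes it back toward random guessing:

```python
import random

def mi_game_success_rate(noise, trials=20000, seed=0):
    """Toy re-identification game: the 'algorithm' releases the mean of
    10 Bernoulli(0.5) records, plus the target record when b = 1, plus
    uniform output noise. The attacker guesses b = 1 when the release is
    large. Returns the attacker's empirical success probability."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        b = rng.randint(0, 1)                    # secret membership bit
        data = [rng.randint(0, 1) for _ in range(10)]
        if b:
            data.append(1)                       # target's (known) value is 1
        released = sum(data) / len(data) + rng.uniform(-noise, noise)
        guess = 1 if released > 0.5 else 0
        wins += (guess == b)
    return wins / trials

# Without noise the attacker beats random guessing; with heavy noise
# the success rate returns to roughly 1/2.
```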
In a reconstruction attack, the attacker is not presented with a real sample to classify as belonging to the training set or not, but rather has to create samples belonging to the training set based only on the algorithm's output. Reconstruction attacks have been successfully conducted against large language models (Carlini et al., 2021b). At present, these attacks require the attacker to have a great deal of auxiliary information to succeed. For our purposes, we are interested in privacy attacks as a way to measure the privacy of an algorithm, and such a granular task may place too high a burden on the attacker to accurately detect "small" amounts of privacy leakage.
In an attribute inference attack (Bun et al., 2021; Stadler et al., 2022), the attacker tries to infer a sensitive attribute from a particular sample, based on its non-sensitive attributes and the attacked algorithm output. It has been argued that attribute inference is really the entire goal of statistical learning, and therefore should not be considered a privacy violation (Bun et al., 2021; Jayaraman & Evans, 2022).
Differential privacy (DP) DP (Dwork et al., 2014) and its variants (Mironov, 2017; Dwork & Rothblum, 2016) offer strong, information-theoretic privacy guarantees. A DP (probabilistic) algorithm is one in which the probability law of its output does not change much if one sample in its input is changed. That is, if $D$ and ${D}^{\prime }$ are two datasets (collections of $n$ records) which differ in exactly one element, then the algorithm $\mathcal{A}$ is $\varepsilon$ -DP if
$$
\mathbb{P}\left( {\mathcal{A}\left( D\right) \in S}\right) \leq {e}^{\varepsilon }\mathbb{P}\left( {\mathcal{A}\left( {D}^{\prime }\right) \in S}\right)
$$
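As a concrete illustration of this inequality, the Laplace mechanism (a standard DP wrapper discussed below) can be checked pointwise: its output densities on two adjacent datasets differ by a factor of at most ${e}^{\varepsilon }$. The following is a minimal sketch on a toy counting query; the query outputs and test points are our own illustrative choices, not from the paper:

```python
import math

def laplace_pdf(x, mu, b):
    """Density of the Laplace(mu, scale b) distribution at x."""
    return math.exp(-abs(x - mu) / b) / (2 * b)

# Toy query: a count on a dataset. Changing one record moves the
# true answer by at most sensitivity = 1.
eps = 0.5
sensitivity = 1.0
b = sensitivity / eps          # Laplace mechanism scale

count_D, count_Dprime = 41.0, 42.0   # outputs on two adjacent datasets

# At every point, the density ratio is bounded by e^eps -- this is
# exactly the eps-DP inequality for the Laplace mechanism.
for x in (-10.0, 0.0, 41.5, 100.0):
    ratio = laplace_pdf(x, count_D, b) / laplace_pdf(x, count_Dprime, b)
    assert ratio <= math.exp(eps) + 1e-12
```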
for any subset $S$ of the output space. DP has many desirable properties, such as the ability to compose DP methods or post-process the output without losing guarantees. Many simple "wrapper" methods are also available for certifying DP. Among the simplest, the Laplace mechanism adds Laplace noise to the algorithm's output. The noise level must generally depend on the sensitivity of the base algorithm, which measures how much a single input sample can change the algorithm's output. The method we propose in this work is very similar to the Laplace mechanism, but we show that the amount of noise needed can be reduced drastically. Abadi et al. (2016) introduced DP-SGD, a powerful tool enabling DP to be combined with deep learning methods with only a small modification to the standard gradient descent training procedure. However, as previously mentioned, enforcing DP does not come without a cost. Enforcing DP with high levels of privacy (small $\varepsilon$ ) often comes with sharp decreases in algorithm utility (Tao et al., 2021; Stadler et al., 2022). DP is also difficult to audit; it must be proven mathematically for a given algorithm implementation. Checking it empirically is generally computationally intractable (Gilbert & McMillan, 2018). The difficulty of checking DP has led to widespread implementation bugs (and even errors due to finite machine precision), which invalidate the guarantees of DP (Jagielski et al., 2020).
The independent work of Thudi et al. (2022) specifically applies DP to bound re-identification rates, and our results in Section 3.4 complement theirs on the relationship between re-identification and DP. However, our results show that DP is not required to prevent re-identification; it is merely one option, and we give alternative methods for defending against membership inference.
Auditing methods and metrics Another important component of synthetic data is privacy and utility auditing. This is especially crucial in regulated environments where users may be required to prove compliance of their tools with privacy regulations. Recent works (Alaa et al., 2022; Meehan et al., 2020) have proposed heuristics for measuring both synthetic data privacy and utility. Utility metrics are often based on statistical measures of similarity between the synthesized and real data (Yoon et al., 2020). Privacy metrics try to capture the notion of whether or not a generative model has "memorized" its training data, typically by looking at distances of the synthetic data to training data vs. some held-out data. Most of the proposed distance-based heuristics fall victim to simple counterexamples in which the proposed synthetic data scores perfectly on the privacy metric, but clearly does not preserve the privacy of the training data. On the other hand, RIP lends itself to useful empirical measurement, as the success rate of any existing membership inference attack method gives a lower bound on the best achievable privacy.
## 3 Re-Identification Privacy (RIP)
### 3.1 Notation
We make use of the following notation. We will always use $\mathcal{D}$ to refer to our entire dataset, which we assume consists of $n$ samples all of which must remain private. We will use $\mathbf{x} \in \mathcal{D}$ or ${\mathbf{x}}^{ * } \in \mathcal{D}$ to refer to a particular sample. ${\mathcal{D}}_{\text{train }} \subseteq \mathcal{D}$ refers to a size- $k$ subset of our private data. We will assume ${\mathcal{D}}_{\text{train }}$ is selected randomly, so ${\mathcal{D}}_{\text{train }}$ is a random variable. The remaining data $\mathcal{D} \smallsetminus {\mathcal{D}}_{\text{train }}$ will be referred to as the holdout data. We denote by $\mathbb{D}$ the set of all size- $k$ subsets of $\mathcal{D}$ (i.e., all possible training sets), and we will typically use $D \in \mathbb{D}$ to refer to a particular realization of the random variable ${\mathcal{D}}_{\text{train }}$ . Finally, given a particular sample ${\mathbf{x}}^{ * } \in \mathcal{D}$ , ${\mathbb{D}}^{\text{in }}$ (resp. ${\mathbb{D}}^{\text{out }}$ ) will refer to those sets $D \in \mathbb{D}$ for which ${\mathbf{x}}^{ * } \in D$ (resp. ${\mathbf{x}}^{ * } \notin D$ ).
### 3.2 Theoretical Motivation
The implicit assumption behind the public release of any statistical algorithm-be it a generative or predictive ML model, or even the release of simple population statistics-is that it is acceptable for statistical information about the modeled data to be released publicly. In the context of membership inference, this poses a potential problem: if the population we are modeling is significantly different from the "larger" population, then if our algorithm's output contains any useful information whatsoever, it should be possible for an attacker to infer whether or not a given record could have plausibly come from our training data.
We illustrate this concept with an example. Suppose we wish to publish a model which predicts a patient's blood pressure from several biomarkers, specifically for patients who suffer from a particular chronic disease. To do this, we collect a dataset of individuals with confirmed cases of the disease, and use this data to train a linear regression model with coefficients $\widehat{\theta }$ . Formally, we let $\mathbf{x} \in {\mathbb{R}}^{d}$ denote the features (e.g. biomarker values), $z \in \mathbb{R}$ denote the patient’s blood pressure, and $y = \mathbb{1}\{$ patient has the chronic disease in question $\}$ . In this case, the private dataset ${\mathcal{D}}_{\text{train }}$ contains only the patients with $y = 1$ . Assume that in the general populace, patient features are drawn from a mixture model:
$$
y \sim \operatorname{Bernoulli}\left( p\right) ,\;\mathbf{x} \sim \mathcal{N}\left( {0, I}\right) ,\;z \mid \mathbf{x}, y \sim {\theta }_{y}^{\top }\mathbf{x},\;{\theta }_{0} \neq {\theta }_{1}.
$$
In the re-identification attack scenario, an adversary observes a data point $\left( {{\mathbf{x}}^{ * },{z}^{ * }}\right)$ and the model $\widehat{\theta }$ , and tries to determine whether or not $\left( {{\mathbf{x}}^{ * },{z}^{ * }}\right) \in {\mathcal{D}}_{\text{train }}$ . If ${\theta }_{0}$ and ${\theta }_{1}$ are well-separated, then an adversary can train an effective classifier to determine the corresponding label $\mathbb{1}\left\{ {\left( {{\mathbf{x}}^{ * },{z}^{ * }}\right) \in {\mathcal{D}}_{\text{train }}}\right\}$ for $\left( {{\mathbf{x}}^{ * },{z}^{ * }}\right)$ by checking whether or not ${z}^{ * } \approx {\widehat{\theta }}^{\top }{\mathbf{x}}^{ * }$ . Since only data with $y = 1$ belong to ${\mathcal{D}}_{\text{train }}$ , this provides a signal to the adversary as to whether or not ${\mathbf{x}}^{ * }$ could have belonged to ${\mathcal{D}}_{\text{train }}$ or not. The point is that in this setting, this outcome is unavoidable if $\widehat{\theta }$ is to provide any utility whatsoever. In other words:
In order to preserve utility, re-identification privacy must be measured with respect to the distribution from which the private data are drawn.
The example above motivates the following theoretical ideal for our synthetic data. Let $\mathcal{D} = {\left\{ {\mathbf{x}}_{i}\right\} }_{i = 1}^{n}$ be the private dataset and suppose that ${\mathbf{x}}_{i}\overset{\text{ i.i.d. }}{ \sim }\mathcal{P}$ for some probability distribution $\mathcal{P}$ . (Note: Here, ${\mathbf{x}}^{ * }$ corresponds to the complete datapoint $\left( {{\mathbf{x}}^{ * },{z}^{ * }}\right)$ in the example above.) Let $\mathcal{A}$ be our (randomized) algorithm, and denote its output by $\theta = \mathcal{A}\left( \mathcal{D}\right)$ . We generate a test point based on:
$$
{y}^{ * } \sim \operatorname{Bernoulli}\left( {1/2}\right) ,\;{\mathbf{x}}^{ * } \mid {y}^{ * } \sim {y}^{ * }\operatorname{Unif}\left( {\mathcal{D}}_{\text{train }}\right) + \left( {1 - {y}^{ * }}\right) \mathcal{P},
$$
i.e. ${\mathbf{x}}^{ * }$ is a fresh draw from $\mathcal{P}$ or a random element of the private training data with equal probability. Let $\mathcal{I}$ denote any re-identification algorithm which takes as input ${\mathbf{x}}^{ * }$ and the algorithm’s output $\theta$ . The notion of privacy we wish to enforce is that $\mathcal{I}$ cannot do much better to ascertain the membership of ${\mathbf{x}}^{ * }$ than guessing randomly:
$$
{\mathbb{P}}_{\mathcal{A},{\mathcal{D}}_{\text{train }}}\left( {\mathcal{I}\left( {{\mathbf{x}}^{ * },\theta }\right) = {y}^{ * }}\right) \leq 1/2 + \eta ,\;\eta \ll 1/2. \tag{1}
$$
### 3.3 Practical Definition
In reality, we do not have access to the underlying distribution $\mathcal{P}$ . Instead, we propose to use a bootstrap sampling approach to approximate fresh draws from $\mathcal{P}$ .
Definition 1 (Re-Identification Privacy (RIP)). Fix $k \leq n$ and let ${\mathcal{D}}_{\text{train }} \subseteq \mathcal{D}$ be a size- $k$ subset chosen uniformly at random from the elements in $\mathcal{D}$ . For ${\mathbf{x}}^{ * } \in \mathcal{D}$ , let ${y}^{ * } = \mathbb{1}\left\{ {{\mathbf{x}}^{ * } \in {\mathcal{D}}_{\text{train }}}\right\}$ . An algorithm $\mathcal{A}$ is $\eta$ -RIP with respect to $\mathcal{D}$ if for any identification algorithm $\mathcal{I}$ and for every ${\mathbf{x}}^{ * } \in \mathcal{D}$ , we have
$$
\mathbb{P}\left( {\mathcal{I}\left( {{\mathbf{x}}^{ * },\mathcal{A}\left( {\mathcal{D}}_{\text{train }}\right) }\right) = {y}^{ * }}\right) \leq \max \left\{ {\frac{k}{n},1 - \frac{k}{n}}\right\} + \eta .
$$
Here, the probability is taken over the uniformly random size- $k$ subset ${\mathcal{D}}_{\text{train }} \subseteq \mathcal{D}$ , as well as any randomness in $\mathcal{A}$ and $\mathcal{I}$ .
Definition 1 states that given the output of $\mathcal{A}$ , an adversary cannot determine whether a given point was in the holdout set or training set with probability more than $\eta$ better than always guessing the a priori more likely outcome. In the remainder of the paper, we will set $k = n/2$ , so that $\mathcal{A}$ is $\eta$ -RIP if an attacker cannot have average accuracy greater than $\left( {1/2 + \eta }\right)$ . This gives the largest a priori entropy for the attacker's classification task, which creates the highest ceiling on how much of an advantage an attacker can possibly gain from the algorithm's output, and consequently the most accurate measurement of privacy leakage. The choice $k = n/2$ also keeps us as close as possible to the theoretical motivation in the previous subsection. We note that analogues of all of our results apply for general $k$ .
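Definition 1 also lends itself to direct Monte Carlo estimation: repeatedly draw random half-subsets, run the mechanism, and score a fixed attacker. The sketch below does this for a hypothetical noisy-mean mechanism and a simple distance-based attacker (the dataset, mechanism, and attacker are all our own illustrative assumptions); note that averaging over ${\mathbf{x}}^{ * }$ only lower-bounds the worst-case $\eta$ in the definition:

```python
import random

random.seed(0)
data = [1.0, 2.0, 3.0, 4.0, 10.0, 11.0, 12.0, 13.0]
n, k = len(data), len(data) // 2

def mechanism(train, noise_scale=1.0):
    # Base query: the training-set mean, plus symmetric (Laplace) noise.
    mean = sum(train) / len(train)
    return mean + random.choice([-1, 1]) * random.expovariate(1.0 / noise_scale)

def attacker(x_star, output, grand_mean):
    # Heuristic: guess "member" when the released mean is pulled toward x*.
    return int(abs(output - x_star) < abs(grand_mean - x_star))

grand_mean = sum(data) / n
trials, hits = 20000, 0
for _ in range(trials):
    train = random.sample(data, k)          # uniform size-n/2 subset
    x_star = random.choice(data)
    y_star = int(x_star in train)
    hits += int(attacker(x_star, mechanism(train), grand_mean) == y_star)

# Estimated advantage over random guessing: a lower bound on eta.
eta_hat = hits / trials - 0.5
```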
The definition of RIP is phrased with respect to any classifier (whose randomness is independent of the randomness in $\mathcal{A}$ ; if the adversary knows our algorithm and our random seed, we are doomed). While this definition is compelling in that it shows a bound on what any attacker can hope to accomplish, the need to consider all possible attack algorithms makes it difficult to work with technically. The following proposition shows that RIP is equivalent to a simpler definition which does not need to simultaneously consider all identification algorithms $\mathcal{I}$ .
Proposition 2. Let $\mathbb{A} = \operatorname{Range}\left( \mathcal{A}\right)$ and let $\mu$ denote the probability law of $\mathcal{A}\left( {\mathcal{D}}_{\text{train }}\right)$ . Then $\mathcal{A}$ is $\eta$ -RIP if and only if
$$
{\int }_{\mathbb{A}}\max \left\{ {\mathbb{P}\left( {{\mathbf{x}}^{ * } \in {\mathcal{D}}_{\text{train }} \mid \mathcal{A}\left( {\mathcal{D}}_{\text{train }}\right) = A}\right) ,\mathbb{P}\left( {{\mathbf{x}}^{ * } \notin {\mathcal{D}}_{\text{train }} \mid \mathcal{A}\left( {\mathcal{D}}_{\text{train }}\right) = A}\right) }\right\} {d\mu }\left( A\right) \leq \frac{1}{2} + \eta .
$$
Furthermore, the optimal adversary is given by
$$
\mathcal{I}\left( {{\mathbf{x}}^{ * }, A}\right) = \mathbb{1}\left\{ {\mathbb{P}\left( {{\mathbf{x}}^{ * } \in {\mathcal{D}}_{\text{train }} \mid \mathcal{A}\left( {\mathcal{D}}_{\text{train }}\right) = A}\right) \geq 1/2}\right\} .
$$
Proposition 2 makes precise the intuition that the optimal attacker should guess the more likely of ${\mathbf{x}}^{ * } \in {\mathcal{D}}_{\text{train }}$ or ${\mathbf{x}}^{ * } \notin {\mathcal{D}}_{\text{train }}$ conditional on the output of $\mathcal{A}$ . The optimal attacker’s overall accuracy is then computed by marginalizing this conditional statement.
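For a small discrete example, this optimal adversary can be computed exactly by enumerating all size- $k$ subsets and conditioning on the output. The sketch below uses a deliberately leaky toy query (the dataset and query are our own choices, not from the paper):

```python
from itertools import combinations
from collections import Counter

data = (1, 2, 3, 4, 5, 6)
k = len(data) // 2
subsets = list(combinations(data, k))     # all possible training sets

def mechanism(train):
    return max(train)   # a deterministic, very leaky toy query

def optimal_accuracy(x_star):
    # Proposition 2: integrate max(P(in | A), P(out | A)) over outputs.
    by_output = Counter(mechanism(D) for D in subsets)
    in_by_output = Counter(mechanism(D) for D in subsets if x_star in D)
    acc = 0.0
    for a, total in by_output.items():
        p_in = in_by_output[a] / total     # posterior P(x* in D_train | A = a)
        acc += (total / len(subsets)) * max(p_in, 1 - p_in)
    return acc

# The largest element is fully leaked by max(): the output equals 6
# exactly when 6 was in the training set, so the adversary is perfect.
assert abs(optimal_accuracy(6) - 1.0) < 1e-9
# Smaller elements are only partially leaked.
assert abs(optimal_accuracy(1) - 0.6) < 1e-9
```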
Finally, RIP also satisfies a post-processing inequality similar to the classical result in DP (Dwork et al., 2014). This states that applying any local function to a RIP algorithm's output cannot degrade the privacy guarantee.
Theorem 3. Suppose that $\mathcal{A}$ is $\eta$ -RIP, and let $f$ be any (potentially randomized, with randomness independent of ${\mathcal{D}}_{\text{train }}$ ) function. Then $f \circ \mathcal{A}$ is also $\eta$ -RIP.
For example, Theorem 3 is important for the application of RIP to generative model training: if we can guarantee that our generative model is $\eta$ -RIP, then any output produced by it is $\eta$ -RIP as well.
### 3.4 Relation to Differential Privacy
In this section, we make precise the relationship between RIP and the most common theoretical formulation of privacy: differential privacy (DP). We provide proof sketches for most of our results here; detailed proofs can be found in the Appendix. Our first theorem shows that DP is at least as strong as RIP.
Theorem 4. Let $\mathcal{A}$ be $\varepsilon$ -DP. Then $\mathcal{A}$ is $\eta$ -RIP with $\eta = \frac{1}{1 + {e}^{-\varepsilon }} - \frac{1}{2}$ . Furthermore, this bound is tight, i.e. for any $\varepsilon > 0$ , there exists an $\varepsilon$ -DP algorithm against which the optimal attacker has accuracy $\frac{1}{1 + {e}^{-\varepsilon }}$ .
To help interpret this result, we remark that for $\varepsilon \approx 0$ , we have $\frac{1}{1 + {e}^{-\varepsilon }} - \frac{1}{2} \approx \varepsilon /4$ . Thus in the regime where strong privacy guarantees are required ( $\eta \approx 0$ ), we have $\eta \approx \varepsilon /4$ .
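The correspondence of Theorem 4 and its inverse are straightforward to compute; a minimal sketch:

```python
import math

def dp_to_rip_eta(eps):
    """Theorem 4: an eps-DP algorithm is eta-RIP with this eta."""
    return 1.0 / (1.0 + math.exp(-eps)) - 0.5

def rip_to_dp_eps(eta):
    """Inverse map: the eps needed, via Theorem 4, to certify eta-RIP."""
    return math.log((0.5 + eta) / (0.5 - eta))

# Sanity checks: perfect DP gives zero attacker advantage, and for
# small eps the advantage is approximately eps / 4.
assert dp_to_rip_eta(0.0) == 0.0
assert abs(dp_to_rip_eta(0.01) - 0.01 / 4) < 1e-6
```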
In fact, it is the case that DP is strictly stronger than RIP, which we make precise with the following theorem.
Theorem 5. For any $\eta > 0$ , there exists an algorithm $\mathcal{A}$ which is $\eta$ -RIP but not $\varepsilon$ -DP for any $\varepsilon < \infty$ .
In order to better understand the difference between DP and RIP, let us again examine Proposition 2. Recall that this proposition showed that, marginally over the output of $\mathcal{A}$ , the conditional probability that ${\mathbf{x}}^{ * } \in {\mathcal{D}}_{\text{train }}$ given the algorithm's output should not differ too much from the unconditional probability that ${\mathbf{x}}^{ * } \in {\mathcal{D}}_{\text{train }}$ . The following proposition shows that DP requires this condition to hold for every output of $\mathcal{A}\left( {\mathcal{D}}_{\text{train }}\right)$ .
Proposition 6. If $\mathcal{A}$ is an $\varepsilon$ -DP synthetic data generation algorithm, then for any ${\mathbf{x}}^{ * }$ , we have
$$
\frac{\mathbb{P}\left( {{\mathbf{x}}^{ * } \notin {\mathcal{D}}_{\text{train }} \mid \mathcal{A}\left( {\mathcal{D}}_{\text{train }}\right) }\right) }{\mathbb{P}\left( {{\mathbf{x}}^{ * } \in {\mathcal{D}}_{\text{train }} \mid \mathcal{A}\left( {\mathcal{D}}_{\text{train }}\right) }\right) } \leq {e}^{\varepsilon }\frac{\mathbb{P}\left( {{\mathbf{x}}^{ * } \notin {\mathcal{D}}_{\text{train }}}\right) }{\mathbb{P}\left( {{\mathbf{x}}^{ * } \in {\mathcal{D}}_{\text{train }}}\right) }.
$$
Proposition 6 can be thought of as an extension of the Bayesian interpretation of DP explained by Jordon et al. (2022). Namely, the definition of DP immediately implies that, for any two adjacent sets $D$ and ${D}^{\prime }$ ,
$$
\frac{\mathbb{P}\left( {{\mathcal{D}}_{\text{train }} = D \mid \mathcal{A}\left( {\mathcal{D}}_{\text{train }}\right) }\right) }{\mathbb{P}\left( {{\mathcal{D}}_{\text{train }} = {D}^{\prime } \mid \mathcal{A}\left( {\mathcal{D}}_{\text{train }}\right) }\right) } \leq {e}^{\varepsilon }\frac{\mathbb{P}\left( {{\mathcal{D}}_{\text{train }} = D}\right) }{\mathbb{P}\left( {{\mathcal{D}}_{\text{train }} = {D}^{\prime }}\right) }.
$$
## 4 Guaranteeing RIP via Noise Addition
There are a number of mechanisms for guaranteeing DP which operate via simple noise addition (the Laplace mechanism) or sampling (the exponential mechanism) (Dwork et al., 2014). More recently, Abadi et al. (2016) showed how to make a small modification to the standard deep neural network training procedure to guarantee DP. In this section, we show that a small modification to standard training procedures can be used to guarantee RIP as well.
Suppose that $\mathcal{A}$ takes as input a data set $D$ and produces output $\theta \in {\mathbb{R}}^{d}$ . For instance, $\mathcal{A}$ may compute a simple statistical query on $D$ , such as mean estimation, but our results apply equally well in the case that e.g. $\mathcal{A}\left( D\right)$ are the weights of a neural network trained on $D$ . If $\theta$ are the weights of a generative model and we can guarantee RIP for $\theta$ , then by the data processing inequality (Theorem 3), any output of the generative model is private as well.
The distribution over training data (in our case, the uniform distribution over size- $n/2$ subsets of our complete dataset $\mathcal{D}$ ) induces a distribution over the output $\theta$ . The idea is the following: what is the smallest amount of noise we can add to $\theta$ which will guarantee RIP? If we add noise on the order of $\mathop{\max }\limits_{{D \sim {D}^{\prime } \subseteq \mathcal{D}}}\begin{Vmatrix}{\mathcal{A}\left( D\right) - \mathcal{A}\left( {D}^{\prime }\right) }\end{Vmatrix}$ , then we can adapt the standard proof for guaranteeing DP in terms of algorithm sensitivity to show that a restricted version of DP (only with respect to subsets of $\mathcal{D}$ ) holds in this case, which in turn guarantees RIP. On the other hand, it seems possible that we should be able to reduce the amount of noise even further. Recall that by Propositions 2 and 6, RIP is only asking for a marginal guarantee on the change in the posterior probability of $D$ given $A$ , whereas DP is asking for a conditional guarantee on the posterior. So while the maximum (i.e., the sensitivity) seems necessary for a conditional guarantee, the moments of $\theta$ should be sufficient for a marginal guarantee. Theorem 7 shows that this intuition is correct.
Theorem 7. Let $\parallel \cdot \parallel$ be any norm, and let ${\sigma }^{M} \geq \mathbb{E}\parallel \theta - \mathbb{E}\theta {\parallel }^{M}$ be an upper bound on the $M$ -th central moment of $\theta$ with respect to this norm over the randomness in ${\mathcal{D}}_{\text{train }}$ and $\mathcal{A}$ . Let $X$ be a random variable with density proportional to $\exp \left( {-\frac{1}{c\sigma }\parallel X\parallel }\right)$ with $c = {\left( {7.5}/\eta \right) }^{1 + \frac{2}{M}}$ . Finally, let $\widehat{\theta } = \theta + X$ . Then $\widehat{\theta }$ is $\eta$ -RIP, i.e., for any adversary $\mathcal{I}$ ,
$$
\mathbb{P}\left( {\mathcal{I}\left( {{\mathbf{x}}^{ * },\widehat{\theta }}\right) = {y}^{ * }}\right) \leq 1/2 + \eta .
$$
At first glance, Theorem 7 may appear to be adding noise of equal magnitude to all of the coordinates of $\theta$ , regardless of how much each contributes to the central moment $\sigma$ . However, by carefully selecting the norm $\parallel \cdot \parallel$ , we can add non-isotropic noise to $\theta$ such that the marginal noise level reflects the variability of each specific coordinate of $\theta$ . This is the content of Corollary 8.
Corollary 8. Let ${\sigma }_{i}^{2} \geq \mathbb{E}{\left| {\theta }_{i} - \mathbb{E}{\theta }_{i}\right| }^{2}$ , and define $\parallel x{\parallel }_{\sigma ,2} = {\left( \mathop{\sum }\limits_{{i = 1}}^{d}\frac{{\left| {x}_{i}\right| }^{2}}{d{\sigma }_{i}^{2}}\right) }^{1/2}$ . Generate ${Y}_{i} \sim \mathcal{N}\left( {0,{\sigma }_{i}^{2}}\right)$ , set $U = Y/\parallel Y{\parallel }_{\sigma ,2}$ , and draw $r \sim \operatorname{Laplace}\left( {\left( \frac{6.16}{\eta }\right) }^{2}\right)$ . Finally, set $X = {rU}$ and return $\widehat{\theta } = \theta + X$ . Then $\widehat{\theta }$ is $\eta$ -RIP.
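A direct implementation of the sampler in Corollary 8 might look as follows. This is a sketch only: the moment bounds $\sigma_i$ , the parameter vector, and the choice of $\eta$ are assumed values for illustration.

```python
import numpy as np

def rip_noise(sigma, eta, rng):
    """Sample the non-isotropic noise X of Corollary 8.

    sigma: per-coordinate standard-deviation bounds sigma_i (assumed known).
    """
    sigma = np.asarray(sigma, dtype=float)
    d = sigma.size
    y = rng.normal(0.0, sigma)                     # Y_i ~ N(0, sigma_i^2)
    norm = np.sqrt(np.sum(y**2 / (d * sigma**2)))  # ||Y||_{sigma,2}
    u = y / norm                                   # direction with unit sigma-norm
    r = rng.laplace(0.0, (6.16 / eta) ** 2)        # radial Laplace magnitude
    return r * u

rng = np.random.default_rng(0)
theta = np.array([10.0, -2.0, 0.5])      # hypothetical algorithm output
sigma = np.array([1.0, 0.1, 0.01])       # assumed per-coordinate moment bounds
theta_hat = theta + rip_noise(sigma, eta=0.1, rng=rng)
assert theta_hat.shape == theta.shape
```

Coordinates with small $\sigma_i$ receive proportionally smaller marginal noise, which is exactly the point of the weighted norm.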
When does RIP improve over DP? By Theorem 4, any DP algorithm gives rise to a RIP algorithm, so to guarantee RIP we never need to add more noise than the amount required to guarantee DP. However, Theorem 7 shows that RIP affords an advantage over DP when the variance of our algorithm’s output (over subsets of size $n/2$ ) is much smaller than its sensitivity $\Delta$ , which is defined as the maximum change in the algorithm's output when evaluated on two datasets which differ in only one element. For instance, applying the Laplace mechanism from DP requires noise which scales like $\Delta /\varepsilon$ to guarantee $\varepsilon$ -DP. It is easy to construct examples where the variance is much smaller than the sensitivity if the output of our "algorithm" is allowed to be completely arbitrary as a function of the input. However, it is more interesting to ask if there are any natural settings in which this occurs. Proposition 9 answers this question in the affirmative.
Proposition 9. For any finite $D \subseteq \mathbb{R}$ , define $\mathcal{A}\left( D\right) = \frac{1}{\mathop{\sum }\limits_{{x \in D}}x}$ . Given a dataset $\mathcal{D}$ of size $n$ , define $\mathbb{D} = \{ D \subseteq \mathcal{D} : \left| D\right| = \lfloor n/2\rfloor \}$ , and define
$$
{\sigma }^{2} = \operatorname{Var}\left( {\mathcal{A}\left( D\right) }\right) ,\;\Delta = \mathop{\max }\limits_{{D \sim {D}^{\prime } \in \mathbb{D}}}\left| {\mathcal{A}\left( D\right) - \mathcal{A}\left( {D}^{\prime }\right) }\right| .
$$
Here the variance is taken over $D \sim \operatorname{Unif}\left( \mathbb{D}\right)$ . Then for all $n$ , there exists a dataset $\left| \mathcal{D}\right| = n$ such that ${\sigma }^{2} = O\left( 1\right)$ but $\Delta = \Omega \left( {2}^{n/3}\right)$ .
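The variance-versus-sensitivity gap can be observed by brute force even on a tiny dataset. The example below is our own toy choice (not the exponential-gap construction used in the proof): one outlier makes the reciprocal-sum query very sensitive while leaving the subset variance comparatively small.

```python
from itertools import combinations

def query(subset):
    # The query of Proposition 9: reciprocal of the subset sum.
    return 1.0 / sum(subset)

data = (1, 2, 3, 4, 5, 100)   # toy dataset; the outlier drives sensitivity
k = len(data) // 2
outs = {D: query(D) for D in combinations(data, k)}

# Variance over a uniformly random size-k subset.
vals = list(outs.values())
mean = sum(vals) / len(vals)
variance = sum((v - mean) ** 2 for v in vals) / len(vals)

# Sensitivity: max change between subsets differing in exactly one element.
sensitivity = max(
    abs(outs[D] - outs[Dp])
    for D in outs for Dp in outs
    if len(set(D) & set(Dp)) == k - 1
)

# Even here, the sensitivity clearly exceeds the standard deviation.
assert sensitivity > 2 * variance ** 0.5
```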
We remark that similar results should hold for e.g. subset precision matrix queries, perhaps even without such a carefully constructed $\mathcal{D}$ if the size of the subset is comparable to the dimension of the data.
Algorithm 1 RIP via noise addition

---

Require: Private dataset $\mathcal{D}$ , $\sigma$ estimation budget $B$ , RIP parameter $\eta$

${\mathcal{D}}_{\text{train }} \leftarrow \operatorname{RANDOMSPLIT}\left( {\mathcal{D},1/2}\right)$

\# Estimate $\sigma$ if an a priori bound is not known

for $i = 1,\ldots , B$ do

$\quad {\mathcal{D}}_{\text{train }}^{\left( i\right) } \leftarrow \operatorname{RANDOMSPLIT}\left( {{\mathcal{D}}_{\text{train }},1/2}\right)$

$\quad {\theta }^{\left( i\right) } \leftarrow \mathcal{A}\left( {\mathcal{D}}_{\text{train }}^{\left( i\right) }\right)$

end for

$\bar{\theta } \leftarrow \frac{1}{B}\mathop{\sum }\limits_{{i = 1}}^{B}{\theta }^{\left( i\right) }$

${\sigma }^{2} \leftarrow \frac{1}{B - 1}\mathop{\sum }\limits_{{i = 1}}^{B}{\begin{Vmatrix}{\theta }^{\left( i\right) } - \bar{\theta }\end{Vmatrix}}^{2}$

\# Add appropriate noise to the base algorithm's output

$U \leftarrow \operatorname{Unif}\left( \left\{ {u \in {\mathbb{R}}^{d} : \parallel u\parallel = 1}\right\} \right)$

$r \leftarrow \operatorname{Laplace}\left( {{\left( \frac{7.5}{\eta }\right) }^{2}\sigma }\right)$

$X \leftarrow {rU}$

return $\mathcal{A}\left( {\mathcal{D}}_{\text{train }}\right) + X$

---
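The steps of Algorithm 1 can be sketched in code as follows, using a mean query as the base algorithm. The base algorithm, dataset, and parameter choices are illustrative assumptions, not prescriptions from the paper.

```python
import numpy as np

def rip_wrap(base_algorithm, data, budget, eta, rng):
    """Sketch of Algorithm 1: noise addition calibrated to certify eta-RIP.

    base_algorithm maps a 2-D data array to a parameter vector.
    """
    rng.shuffle(data)
    train = data[: len(data) // 2]            # D_train <- RandomSplit(D, 1/2)

    # Estimate sigma by re-running the base algorithm on random
    # half-splits of the training data (no a priori moment bound known).
    thetas = []
    for _ in range(budget):
        sub = train[rng.permutation(len(train))[: len(train) // 2]]
        thetas.append(base_algorithm(sub))
    thetas = np.stack(thetas)
    sigma2 = np.sum((thetas - thetas.mean(axis=0)) ** 2) / (budget - 1)

    # Add radial Laplace noise of scale (7.5 / eta)^2 * sigma in a
    # uniformly random direction.
    d = thetas.shape[1]
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)                    # uniform point on the sphere
    r = rng.laplace(0.0, (7.5 / eta) ** 2 * np.sqrt(sigma2))
    return base_algorithm(train) + r * u

rng = np.random.default_rng(0)
data = rng.normal(size=(64, 3))
theta_hat = rip_wrap(lambda D: D.mean(axis=0), data, budget=10, eta=0.2, rng=rng)
assert theta_hat.shape == (3,)
```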
## 5 Simulation Results
To illustrate our theoretical results, we plot the noise level needed to guarantee RIP vs. the corresponding level of DP (with the correspondence given by Theorem 4) for the example in Proposition 9.
Refer to Fig. 1. Dotted lines refer to DP, while the solid line is for RIP. The $x$ -axis gives the best possible bound on the attacker's improvement in accuracy over random guessing, i.e., the parameter $\eta$ for an $\eta$ -RIP method, according to that method's guarantees. For DP, the value along the $x$ -axis is given by the (tight) correspondence in Theorem 4, namely $\eta = \frac{1}{1 + {e}^{-\varepsilon }} - \frac{1}{2}$ . Here $\eta = 0$ corresponds to perfect privacy (the attacker cannot do any better than random guessing), while $\eta = \frac{1}{2}$ corresponds to no privacy (the attacker can determine membership with perfect accuracy). The $y$ -axis denotes the amount of noise that must be added to the non-private algorithm's output, as measured by the scale parameter of the Laplace noise that must be added. For RIP, by Theorem 7, this is ${\left( {6.16}/\eta \right) }^{2}\sigma$ where $\sigma$ is an upper bound on the variance of the base algorithm over random subsets, and for DP this is $\frac{\Delta }{\log \frac{1 + {2\eta }}{1 - {2\eta }}}$ . (This comes from solving $\eta = \frac{1}{1 + {e}^{-\varepsilon }} - \frac{1}{2}$ for $\varepsilon$ , then using the fact that $\operatorname{Laplace}\left( {\Delta /\varepsilon }\right)$ noise must be added to guarantee $\varepsilon$ -DP.) For DP, the amount of noise necessary changes with the size $n$ of the private dataset. For RIP, the amount of noise does not change, so there is only one line.
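The two families of curves in Fig. 1 can be reproduced directly from these formulas. The sketch below takes $\sigma = 1$ (an assumed $O(1)$ variance bound) and $\Delta = 2^{n/3}$ (the growth rate from Proposition 9) as illustrative values:

```python
import math

def rip_noise_scale(eta, sigma=1.0):
    # Laplace scale for RIP in Fig. 1; constant in the dataset size n.
    # sigma = 1 is an assumed O(1) variance bound (cf. Proposition 9).
    return (6.16 / eta) ** 2 * sigma

def dp_noise_scale(eta, delta):
    # Laplace scale Delta / eps, with eps = log((1 + 2*eta)/(1 - 2*eta))
    # obtained by inverting the Theorem 4 correspondence.
    return delta / math.log((1 + 2 * eta) / (1 - 2 * eta))

# For the query of Proposition 9 the sensitivity Delta grows like 2^(n/3),
# so the DP curves shift upward with n while the RIP curve stays fixed.
for n in (36, 48, 60):
    delta = 2.0 ** (n / 3)
    assert dp_noise_scale(0.05, delta) > rip_noise_scale(0.05)
```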
The results show that even for small datasets ( $n \geq {36}$ ) and for $\eta \geq {0.01}$ , direct noise accounting for RIP gives a large advantage over guaranteeing RIP via DP. In practice, datasets are rarely this small. As $n$ grows beyond this modest range, the noise-reduction advantage of RIP over DP quickly becomes many orders of magnitude and is not visible on the plot. (Refer to Proposition 9: the noise required for DP grows exponentially in $n$ , while it remains constant in $n$ for RIP.)
## 6 Conclusion
In this work, we proposed a novel privacy property, re-identification privacy (RIP), and explained its properties and relationship with differential privacy (DP). The RIP property is more readily interpretable than the guarantees offered by DP. RIP also requires a smaller amount of noise to guarantee than DP does, and therefore can retain greater utility in practice. We proposed a simple "wrapper" method for guaranteeing RIP, which can be implemented with a minor modification to both simple statistical queries and more complicated tasks such as the training procedure for parametric machine learning models.

Figure 1: Noise level vs. privacy guarantee for RIP and DP. For datasets with at least $n = {36}$ points and for almost all values of $\eta$ , RIP allows us to add much less noise than what would be required by naively applying DP. For $n > {48}$ , the amount of noise required by DP is so large that it will not appear on the plot.
Limitations As the example used to prove Theorem 5 shows, there are cases where apparently non-private algorithms can satisfy RIP. Thus, algorithms which satisfy RIP may require post-processing to ensure that the output is not one of the low-probability events in which data privacy is leaked. In addition, because RIP is determined with respect to a holdout set still drawn from $\mathcal{D}$ , an adversary may be able to determine with high probability whether or not a given sample was contained in $\mathcal{D}$ , rather than just in ${\mathcal{D}}_{\text{train }}$ , if $\mathcal{D}$ is sufficiently different from the rest of the population.
Future work Theorem 4 shows that DP implies RIP in general. However, Theorem 7 shows that a finer-grained analysis of a standard DP mechanism (the Laplace mechanism) is possible, allowing us to guarantee RIP with less noise. It seems plausible that a similar analysis can be undertaken for other DP mechanisms. In addition to these "wrapper" type methods which can be applied on top of existing algorithms, bespoke algorithms for guaranteeing RIP in particular applications (such as synthetic data generation) are also of interest. Noise addition is a simple and effective way to enforce privacy, but other classes of mechanisms may also be possible: for instance, is it possible to directly regularize a probabilistic model using Proposition 2? The connections between RIP and other theoretical notions of privacy (Renyi DP (Mironov, 2017), concentrated DP (Dwork & Rothblum, 2016), etc.) are also of interest. Finally, this paper focused on developing the theoretical principles and guarantees of RIP, but systematic empirical evaluation is an important direction for future work. Practical membership inference attacks, particularly those against synthetic data and generative models rather than predictive models, still exhibit a gap between practical efficacy and the theoretical upper bounds. It is likely that this gap can be closed through a combination of improved privacy accounting and improved practical attacks. For the "shadow model" approach used by Stadler et al. (2022), improved computational efficiency is also of interest for improving membership inference attacks. These improved attacks will in turn allow model developers to better audit the empirical privacy limitations of their methods.
## References

Martin Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp. 308-318, 2016.

Ahmed Alaa, Boris Van Breugel, Evgeny S Saveliev, and Mihaela van der Schaar. How faithful is your synthetic data? Sample-level metrics for evaluating and auditing generative models. In International Conference on Machine Learning, pp. 290-306. PMLR, 2022.

Samuel A Assefa, Danial Dervovic, Mahmoud Mahfouz, Robert E Tillman, Prashant Reddy, and Manuela Veloso. Generating synthetic data in finance: opportunities, challenges and pitfalls. In Proceedings of the First ACM International Conference on AI in Finance, pp. 1-8, 2020.

Teodora Baluta, Shiqi Shen, S Hitarth, Shruti Tople, and Prateek Saxena. Membership inference attacks and generalization: A causal perspective. ACM SIGSAC Conference on Computer and Communications Security, 2022.

Mark Bun, Damien Desfontaines, Cynthia Dwork, Moni Naor, Kobbi Nissim, Aaron Roth, Adam Smith, Thomas Steinke, Jonathan Ullman, and Salil Vadhan. Statistical inference is not a privacy violation. DifferentialPrivacy.org, June 2021. https://differentialprivacy.org/inference-is-not-a-privacy-violation/.

Nicholas Carlini, Steve Chien, Milad Nasr, Shuang Song, Andreas Terzis, and Florian Tramer. Membership inference attacks from first principles. arXiv preprint arXiv:2112.03570, 2021a.

Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, et al. Extracting training data from large language models. In 30th USENIX Security Symposium (USENIX Security 21), pp. 2633-2650, 2021b.

Dingfan Chen, Ning Yu, Yang Zhang, and Mario Fritz. GAN-leaks: A taxonomy of membership inference attacks against generative models. In Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security, pp. 343-362, 2020.

Cynthia Dwork and Guy N Rothblum. Concentrated differential privacy. arXiv preprint arXiv:1603.01887, 2016.

Cynthia Dwork, Aaron Roth, et al. The algorithmic foundations of differential privacy. Foundations and Trends® in Theoretical Computer Science, 9(3-4):211-407, 2014.

Anna C Gilbert and Audra McMillan. Property testing for differential privacy. In 2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton), pp. 249-258. IEEE, 2018.

Xinlei He, Zheng Li, Weilin Xu, Cory Cornelius, and Yang Zhang. Membership-Doctor: Comprehensive assessment of membership inference against machine learning models. arXiv preprint arXiv:2208.10445, 2022.

Pingyi Hu, Zihan Wang, Ruoxi Sun, Hu Wang, and Minhui Xue. M^4I: Multi-modal models membership inference. Advances in Neural Information Processing Systems, 2022.

Matthew Jagielski, Jonathan Ullman, and Alina Oprea. Auditing differentially private machine learning: How private is private SGD? Advances in Neural Information Processing Systems, 33:22205-22216, 2020.

Bargav Jayaraman and David Evans. Are attribute inference attacks just imputation? ACM SIGSAC Conference on Computer and Communications Security, 2022.

James Jordon, Lukasz Szpruch, Florimond Houssiau, Mirko Bottarelli, Giovanni Cherubin, Carsten Maple, Samuel N Cohen, and Adrian Weller. Synthetic data - what, why and how? arXiv preprint arXiv:2205.03257, 2022.

Yiyong Liu, Zhengyu Zhao, Michael Backes, and Yang Zhang. Membership inference attacks by exploiting loss trajectory. ACM SIGSAC Conference on Computer and Communications Security, 2022.

Casey Meehan, Kamalika Chaudhuri, and Sanjoy Dasgupta. A non-parametric test to detect data-copying in generative models. In International Conference on Artificial Intelligence and Statistics, 2020.

Ilya Mironov. Rényi differential privacy. In 2017 IEEE 30th Computer Security Foundations Symposium (CSF), pp. 263-275. IEEE, 2017.

HHS Office for Civil Rights. Standards for privacy of individually identifiable health information. Final rule. Federal Register, 67(157):53181-53273, 2002.

Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. Membership inference attacks against machine learning models. In 2017 IEEE Symposium on Security and Privacy (SP), pp. 3-18. IEEE, 2017.

Theresa Stadler, Bristena Oprisanu, and Carmela Troncoso. Synthetic data - anonymisation groundhog day. In 31st USENIX Security Symposium (USENIX Security 22). USENIX Association, 2022.

Yuchao Tao, Ryan McKenna, Michael Hay, Ashwin Machanavajjhala, and Gerome Miklau. Benchmarking differentially private synthetic data generation algorithms. arXiv preprint arXiv:2112.09238, 2021.

Anvith Thudi, Ilia Shumailov, Franziska Boenisch, and Nicolas Papernot. Bounding membership inference. arXiv preprint arXiv:2202.12232, 2022.

Jinsung Yoon, Lydia N Drumright, and Mihaela Van Der Schaar. Anonymization through data synthesis using generative adversarial networks (ADS-GAN). IEEE Journal of Biomedical and Health Informatics, 24(8):2378-2388, 2020.
## A Deferred Proofs

For the reader's convenience, we restate all lemmas, theorems, etc. here.

Proposition 2. Let $\mathbb{A} = \operatorname{Range}\left( \mathcal{A}\right)$ and let $\mu$ denote the probability law of $\mathcal{A}\left( {\mathcal{D}}_{\text{train }}\right)$. Then $\mathcal{A}$ is $\eta$-RIP if and only if
$$
{\int }_{\mathbb{A}}\max \left\{ {\mathbb{P}\left( {{\mathbf{x}}^{ * } \in {\mathcal{D}}_{\text{train }} \mid \mathcal{A}\left( {\mathcal{D}}_{\text{train }}\right) = A}\right) ,\mathbb{P}\left( {{\mathbf{x}}^{ * } \notin {\mathcal{D}}_{\text{train }} \mid \mathcal{A}\left( {\mathcal{D}}_{\text{train }}\right) = A}\right) }\right\} {d\mu }\left( A\right) \leq \frac{1}{2} + \eta .
$$
Furthermore, the optimal adversary is given by

$$
\mathcal{I}\left( {{\mathbf{x}}^{ * }, A}\right) = \mathbb{1}\left\{ {\mathbb{P}\left( {{\mathbf{x}}^{ * } \in {\mathcal{D}}_{\text{train }} \mid \mathcal{A}\left( {\mathcal{D}}_{\text{train }}\right) = A}\right) \geq 1/2}\right\} .
$$
Proof. We will show that the re-identification algorithm $\mathcal{I}\left( {{\mathbf{x}}^{ * }, A}\right) = \mathbb{1}\left\{ {\mathbb{P}\left( {{\mathbf{x}}^{ * } \in {\mathcal{D}}_{\text{train }} \mid \mathcal{A}\left( {\mathcal{D}}_{\text{train }}\right) = A}\right) \geq 1/2}\right\}$ is optimal, then compute the resulting probability of re-identification. We have
$$
\mathbb{P}\left( {\mathcal{I}\left( {{\mathbf{x}}^{ * },\mathcal{A}\left( {\mathcal{D}}_{\text{train }}\right) }\right) = {y}^{ * }}\right) = \mathop{\sum }\limits_{{{\mathcal{D}}_{\text{train }} \subseteq \mathcal{D}}}{\left( \begin{array}{l} n \\ k \end{array}\right) }^{-1}\mathop{\sum }\limits_{{A \in \mathbb{A}}}\mathbb{P}\left( {\mathcal{A}\left( {\mathcal{D}}_{\text{train }}\right) = A}\right) \cdot \mathbb{P}\left( {\mathcal{I}\left( {{\mathbf{x}}^{ * }, A}\right) = \mathbb{1}\left\{ {{\mathbf{x}}^{ * } \in {\mathcal{D}}_{\text{train }}}\right\} }\right)
$$

$$
= {\left( \begin{array}{l} n \\ k \end{array}\right) }^{-1}\mathop{\sum }\limits_{{A \in \mathbb{A}}}\left\lbrack {\mathop{\sum }\limits_{{D \in {\mathbb{D}}^{\text{in }}}}\mathbb{P}\left( {\mathcal{A}\left( D\right) = A}\right) \cdot \mathbb{P}\left( {\mathcal{I}\left( {{\mathbf{x}}^{ * }, A}\right) = 1}\right) }\right.
$$

$$
\left. {+\mathop{\sum }\limits_{{D \in {\mathbb{D}}^{\text{out }}}}\mathbb{P}\left( {\mathcal{A}\left( D\right) = A}\right) \cdot \left( {1 - \mathbb{P}\left( {\mathcal{I}\left( {{\mathbf{x}}^{ * }, A}\right) = 1}\right) }\right) }\right\rbrack
$$

$$
= {\left( \begin{array}{l} n \\ k \end{array}\right) }^{-1}\mathop{\sum }\limits_{{A \in \mathbb{A}}}\left\lbrack {\left( {\mathop{\sum }\limits_{{D \in {\mathbb{D}}^{\text{in }}}}\mathbb{P}\left( {\mathcal{A}\left( D\right) = A}\right) - \mathop{\sum }\limits_{{D \in {\mathbb{D}}^{\text{out }}}}\mathbb{P}\left( {\mathcal{A}\left( D\right) = A}\right) }\right) \mathbb{P}\left( {\mathcal{I}\left( {{\mathbf{x}}^{ * }, A}\right) = 1}\right) }\right.
$$

$$
\left. {+\mathop{\sum }\limits_{{D \in {\mathbb{D}}^{\text{out }}}}\mathbb{P}\left( {\mathcal{A}\left( D\right) = A}\right) }\right\rbrack \text{.}
$$
The choice of algorithm $\mathcal{I}$ just specifies the value of $\mathbb{P}\left( {\mathcal{I}\left( {{\mathbf{x}}^{ * }, A}\right) = 1}\right)$ for each sample ${\mathbf{x}}^{ * }$ and each $A \in \mathbb{A}$. We see that the maximum re-identification probability is obtained when
$$
\mathbb{P}\left( {\mathcal{I}\left( {{\mathbf{x}}^{ * }, A}\right) = 1}\right) = \mathbb{1}\left\{ {\mathop{\sum }\limits_{{D \in {\mathbb{D}}^{\text{in }}}}\mathbb{P}\left( {\mathcal{A}\left( D\right) = A}\right) - \mathop{\sum }\limits_{{D \in {\mathbb{D}}^{\text{out }}}}\mathbb{P}\left( {\mathcal{A}\left( D\right) = A}\right) \geq 0}\right\} , \tag{2}
$$
which implies that
$$
\mathbb{P}\left( {\mathcal{I}\left( {{\mathbf{x}}^{ * },\mathcal{A}\left( {\mathcal{D}}_{\text{train }}\right) }\right) = {y}^{ * }}\right) \leq {\left( \begin{array}{l} n \\ k \end{array}\right) }^{-1}\mathop{\sum }\limits_{{A \in \mathbb{A}}}\max \left\{ {\mathop{\sum }\limits_{{D \in {\mathbb{D}}^{\text{in }}}}\mathbb{P}\left( {\mathcal{A}\left( D\right) = A}\right) ,\mathop{\sum }\limits_{{D \in {\mathbb{D}}^{\text{out }}}}\mathbb{P}\left( {\mathcal{A}\left( D\right) = A}\right) }\right\} . \tag{3}
$$
To conclude, observe that
$$
\mathbb{P}\left( {{\mathbf{x}}^{ * } \in D \mid \mathcal{A}\left( D\right) = A}\right) = \frac{\mathbb{P}\left( {{\mathbf{x}}^{ * } \in D \land \mathcal{A}\left( D\right) = A}\right) }{\mathbb{P}\left( {\mathcal{A}\left( D\right) = A}\right) } = \frac{\mathop{\sum }\limits_{{D \in {\mathbb{D}}^{\text{in }}}}{\left( \begin{array}{l} n \\ k \end{array}\right) }^{-1}{\mathbb{P}}_{\mathcal{A}}\left( {\mathcal{A}\left( D\right) = A}\right) }{{\mathbb{P}}_{\mathcal{A}, D}\left( {\mathcal{A}\left( D\right) = A}\right) }. \tag{4}
$$
The result follows by rearranging the expression (4) and plugging it into (2) and (3).

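As a sanity check on Proposition 2, the following sketch (our own illustration; the toy mechanism, which releases one uniformly random training element, is not from the paper) evaluates the quantity $\sum_{A} \max \{ \mathbb{P}(A, {\mathbf{x}}^{*} \in {\mathcal{D}}_{\text{train}}), \mathbb{P}(A, {\mathbf{x}}^{*} \notin {\mathcal{D}}_{\text{train}}) \}$ by exact enumeration and confirms that it matches a brute-force search over all deterministic re-identification adversaries:

```python
from itertools import combinations, product
from fractions import Fraction

n, k = 4, 2
x_star = 0

# Hypothetical toy mechanism: A(D) releases one uniformly random element of D.
def output_dist(D):
    return {a: Fraction(1, len(D)) for a in D}

datasets = list(combinations(range(n), k))
prior = Fraction(1, len(datasets))  # D_train is a uniform k-subset

# Joint probabilities P(A(D)=a, x* in D) and P(A(D)=a, x* not in D).
joint_in, joint_out = {}, {}
for D in datasets:
    for a, p in output_dist(D).items():
        tgt = joint_in if x_star in D else joint_out
        tgt[a] = tgt.get(a, Fraction(0)) + prior * p

outputs = sorted(set(joint_in) | set(joint_out))
zero = Fraction(0)

# Proposition 2's expression: sum over outputs of max{joint_in, joint_out}.
prop2_acc = sum(max(joint_in.get(a, zero), joint_out.get(a, zero)) for a in outputs)

# Brute force: best accuracy over every deterministic adversary output -> {0, 1}.
best = zero
for guesses in product([0, 1], repeat=len(outputs)):
    acc = sum(
        (joint_in.get(a, zero) if g == 1 else joint_out.get(a, zero))
        for a, g in zip(outputs, guesses)
    )
    best = max(best, acc)

print(prop2_acc, best)  # the two quantities coincide
```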
Theorem 10. Suppose that $\mathcal{A}$ is $\eta$-RIP, and let $f$ be any (potentially randomized, with randomness independent of ${\mathcal{D}}_{\text{train }}$) function. Then $f \circ \mathcal{A}$ is also $\eta$-RIP.

Proof. Let ${\mathcal{I}}_{f}$ be any re-identification algorithm for $f \circ \mathcal{A}$. Define ${\mathcal{I}}_{\mathcal{A}}\left( {{\mathbf{x}}^{ * },\mathcal{A}\left( {\mathcal{D}}_{\text{train }}\right) }\right) = {\mathcal{I}}_{f}\left( {{\mathbf{x}}^{ * }, f\left( {\mathcal{A}\left( {\mathcal{D}}_{\text{train }}\right) }\right) }\right)$. Since $\mathcal{A}$ is $\eta$-RIP, we have
$$
\frac{1}{2} + \eta \geq \mathbb{P}\left( {{\mathcal{I}}_{\mathcal{A}}\left( {{\mathbf{x}}^{ * },\mathcal{A}\left( {\mathcal{D}}_{\text{train }}\right) }\right) = {y}^{ * }}\right) = \mathbb{P}\left( {{\mathcal{I}}_{f}\left( {{\mathbf{x}}^{ * }, f\left( {\mathcal{A}\left( {\mathcal{D}}_{\text{train }}\right) }\right) }\right) = {y}^{ * }}\right) .
$$
Thus, $f \circ \mathcal{A}$ is $\eta$-RIP by Definition 1.

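Theorem 10 can likewise be checked by exact enumeration: pushing the output of a mechanism through any function $f$ never increases the optimal adversary's accuracy. A minimal sketch (the toy mechanism and the coarsening map are our own illustrative choices, not from the paper):

```python
from itertools import combinations
from fractions import Fraction

n, k, x_star = 4, 2, 0
datasets = list(combinations(range(n), k))
prior = Fraction(1, len(datasets))

def optimal_accuracy(mech):
    """Sum over outputs A of max{P(A, x* in D_train), P(A, x* not in D_train)}."""
    joint = {}
    for D in datasets:
        for a, p in mech(D).items():
            rec = joint.setdefault(a, [Fraction(0), Fraction(0)])
            rec[0 if x_star in D else 1] += prior * p
    return sum(max(rec) for rec in joint.values())

def postprocess(mech, f):
    """The mechanism f composed with mech: push the output distribution through f."""
    def g(D):
        out = {}
        for a, p in mech(D).items():
            out[f(a)] = out.get(f(a), Fraction(0)) + p
        return out
    return g

# Toy mechanism: release one uniformly random element of the training set.
base = lambda D: {a: Fraction(1, len(D)) for a in D}
acc_base = optimal_accuracy(base)
acc_post = optimal_accuracy(postprocess(base, lambda a: a // 2))  # coarsened output
```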
In what follows, we will assume WLOG that $k \geq n/2$. The proofs in the case $k < n/2$ are almost identical and can be obtained by simply swapping ${\mathbb{D}}^{\text{in }} \leftrightarrow {\mathbb{D}}^{\text{out }}$ and $k \leftrightarrow n - k$.

Lemma 11. Fix ${\mathbf{x}}^{ * } \in \mathcal{D}$ and let ${\mathbb{D}}^{\text{in }} = \left\{ {D \in \mathcal{D} \mid {\mathbf{x}}^{ * } \in D}\right\}$ and ${\mathbb{D}}^{\text{out }} = \left\{ {D \in \mathcal{D} \mid {\mathbf{x}}^{ * } \notin D}\right\}$. If $k \geq n/2$ then there is an injective function $f : {\mathbb{D}}^{\text{out }} \rightarrow {\mathbb{D}}^{\text{in }}$ such that $D \sim f\left( D\right)$ for all $D \in {\mathbb{D}}^{\text{out }}$.

Proof. We define a bipartite graph $G$ on nodes ${\mathbb{D}}^{\text{in }}$ and ${\mathbb{D}}^{\text{out }}$. There is an edge between ${D}^{\text{in }} \in {\mathbb{D}}^{\text{in }}$ and ${D}^{\text{out }} \in {\mathbb{D}}^{\text{out }}$ if ${D}^{\text{out }}$ can be obtained from ${D}^{\text{in }}$ by removing ${\mathbf{x}}^{ * }$ from ${D}^{\text{in }}$ and replacing it with another element, i.e. if ${D}^{\text{in }} \sim {D}^{\text{out }}$. To prove the lemma, it suffices to show that there is a matching on $G$ which covers ${\mathbb{D}}^{\text{out }}$. We will show this via Hall's marriage theorem.

First, observe that $G$ is a $\left( {k, n - k}\right)$-biregular graph. Each ${D}^{\text{in }} \in {\mathbb{D}}^{\text{in }}$ has $n - k$ neighbors, which are obtained from ${D}^{\text{in }}$ by selecting which of the remaining $n - k$ elements to replace ${\mathbf{x}}^{ * }$ with; each ${D}^{\text{out }} \in {\mathbb{D}}^{\text{out }}$ has $k$ neighbors, which are obtained by selecting which of the $k$ elements in ${D}^{\text{out }}$ to replace with ${\mathbf{x}}^{ * }$.

Let $W \subseteq {\mathbb{D}}^{\text{out }}$ and let $N\left( W\right) \subseteq {\mathbb{D}}^{\text{in }}$ denote the neighborhood of $W$. We have the following:
$$
\left| {N\left( W\right) }\right| = \mathop{\sum }\limits_{{{D}^{\text{in }} \in N\left( W\right) }}\frac{\mathop{\sum }\limits_{{{D}^{\text{out }} \in W}}\mathbb{1}\left\{ {{D}^{\text{out }} \sim {D}^{\text{in }}}\right\} }{\mathop{\sum }\limits_{{{D}^{\text{out }} \in W}}\mathbb{1}\left\{ {{D}^{\text{out }} \sim {D}^{\text{in }}}\right\} }
$$

$$
\geq \mathop{\sum }\limits_{{{D}^{\text{in }} \in N\left( W\right) }}\frac{\mathop{\sum }\limits_{{{D}^{\text{out }} \in W}}\mathbb{1}\left\{ {{D}^{\text{out }} \sim {D}^{\text{in }}}\right\} }{\mathop{\sum }\limits_{{{D}^{\text{out }} \in {\mathbb{D}}^{\text{out }}}}\mathbb{1}\left\{ {{D}^{\text{out }} \sim {D}^{\text{in }}}\right\} }
$$

$$
= \frac{1}{n - k}\mathop{\sum }\limits_{{{D}^{\text{out }} \in W}}\mathop{\sum }\limits_{{{D}^{\text{in }} \in N\left( W\right) }}\mathbb{1}\left\{ {{D}^{\text{out }} \sim {D}^{\text{in }}}\right\} \tag{5}
$$

$$
= \frac{k}{n - k}\left| W\right| \text{.} \tag{6}
$$
Equation (5) holds since each ${D}^{\text{in }}$ has degree $n - k$ and by exchanging the order of summation. Similarly, (6) holds since each ${D}^{\text{out }}$ has degree $k$. When $k \geq n/2$, we thus have $\left| {N\left( W\right) }\right| \geq \left| W\right|$ for every $W \subseteq {\mathbb{D}}^{\text{out }}$ and the result follows by Hall's marriage theorem.

Theorem 4. Let $\mathcal{A}$ be $\varepsilon$-DP. Then $\mathcal{A}$ is $\eta$-RIP with $\eta = \frac{1}{1 + {e}^{-\varepsilon }} - \frac{1}{2}$. Furthermore, this bound is tight, i.e. for any $\varepsilon > 0$, there exists an $\varepsilon$-DP algorithm against which the optimal attacker has accuracy $\frac{1}{1 + {e}^{-\varepsilon }}$.

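The tightness claim can be made concrete with randomized response on the membership bit, a standard construction (our own sketch; the uniform prior on the bit corresponds to $k = n/2$). Enumerating the (bit, output) pairs recovers the optimal attacker accuracy $\frac{1}{1 + {e}^{-\varepsilon }}$:

```python
import math

def optimal_accuracy(eps):
    """Exact optimal attacker accuracy against eps-DP randomized response on
    the membership bit, by enumerating (bit, output) pairs."""
    flip = 1 / (1 + math.exp(eps))  # probability the reported bit is flipped
    prior = 0.5                     # k = n/2, so P(x* in D_train) = 1/2
    acc = 0.0
    for out in (0, 1):
        # joint probabilities P(bit = b, output = out)
        joint = [prior * ((1 - flip) if out == b else flip) for b in (0, 1)]
        acc += max(joint)           # Bayes-optimal guess for this output
    return acc

print(optimal_accuracy(1.0))  # matches 1 / (1 + e^{-1})
```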
Proof. Let $f : {\mathbb{D}}^{\text{out }} \rightarrow {\mathbb{D}}^{\text{in }}$ denote the injection guaranteed by Lemma 11. We have
$$
\mathbb{P}\left( {\mathcal{I}\left( {{\mathbf{x}}^{ * },\mathcal{A}\left( D\right) }\right) = {y}^{ * }}\right) = \frac{1}{\left( \begin{array}{l} n \\ k \end{array}\right) }\left\lbrack {\mathop{\sum }\limits_{{{D}^{\text{in }} \in {\mathbb{D}}^{\text{in }}}}\mathbb{P}\left( {\mathcal{I}\left( {{\mathbf{x}}^{ * },\mathcal{A}\left( {D}^{\text{in }}\right) }\right) = 1}\right) + \mathop{\sum }\limits_{{{D}^{\text{out }} \in {\mathbb{D}}^{\text{out }}}}\mathbb{P}\left( {\mathcal{I}\left( {{\mathbf{x}}^{ * },\mathcal{A}\left( {D}^{\text{out }}\right) }\right) = 0}\right) }\right\rbrack
$$

$$
\leq \frac{1}{\left( \begin{matrix} n \\ k \end{matrix}\right) }\left\lbrack {\mathop{\sum }\limits_{{{D}^{\text{in }} \in {\mathbb{D}}^{\text{in }}}}\mathbb{P}\left( {\mathcal{I}\left( {{\mathbf{x}}^{ * },\mathcal{A}\left( {D}^{\text{in }}\right) }\right) = 1}\right) + \mathop{\sum }\limits_{{{D}^{\text{out }} \in {\mathbb{D}}^{\text{out }}}}\left( {{e}^{\varepsilon }\mathbb{P}\left( {\mathcal{I}\left( {{\mathbf{x}}^{ * },\mathcal{A}\left( {f\left( {D}^{\text{out }}\right) }\right) }\right) = 0}\right) + \delta }\right) }\right\rbrack
$$

$$
\leq \frac{1}{\left( \begin{matrix} n \\ k \end{matrix}\right) }\left\lbrack {\mathop{\sum }\limits_{{{D}^{\text{in }} \in {\mathbb{D}}^{\text{in }}}}\left( {\mathbb{P}\left( {\mathcal{I}\left( {{\mathbf{x}}^{ * },\mathcal{A}\left( {D}^{\text{in }}\right) }\right) = 1}\right) + {e}^{\varepsilon }\mathbb{P}\left( {\mathcal{I}\left( {{\mathbf{x}}^{ * },\mathcal{A}\left( {D}^{\text{in }}\right) }\right) = 0}\right) }\right) + \delta \left( \begin{matrix} n - 1 \\ k \end{matrix}\right) }\right\rbrack \tag{7}
$$

$$
\leq \frac{1}{\left( \begin{matrix} n \\ k \end{matrix}\right) }\left\lbrack {{e}^{\varepsilon }\mathop{\sum }\limits_{{{D}^{\text{in }} \in {\mathbb{D}}^{\text{in }}}}\left( {\mathbb{P}\left( {\mathcal{I}\left( {{\mathbf{x}}^{ * },\mathcal{A}\left( {D}^{\text{in }}\right) }\right) = 1}\right) + \mathbb{P}\left( {\mathcal{I}\left( {{\mathbf{x}}^{ * },\mathcal{A}\left( {D}^{\text{in }}\right) }\right) = 0}\right) }\right) + \delta \left( \begin{matrix} n - 1 \\ k \end{matrix}\right) }\right\rbrack
$$

$$
= \frac{1}{\left( \begin{array}{l} n \\ k \end{array}\right) }\left\lbrack {{e}^{\varepsilon }\left( \begin{array}{l} n - 1 \\ k - 1 \end{array}\right) + \delta \left( \begin{matrix} n - 1 \\ k \end{matrix}\right) }\right\rbrack = {e}^{\varepsilon }\frac{k}{n} + \delta \frac{n - k}{n}.
$$
Here inequality (7) critically uses the fact that $f$ is injective, so at most one term from the sum over ${\mathbb{D}}^{\text{out }}$ is added to each term in the sum over ${\mathbb{D}}^{\text{in }}$. This completes the proof.

Theorem 5. For any $\eta > 0$, there exists an algorithm $\mathcal{A}$ which is $\eta$-RIP but not $\varepsilon$-DP for any $\varepsilon < \infty$.

Proof. Let $\mathcal{A}$ be defined as follows. Given a training set $D$, $\mathcal{A}\left( D\right)$ outputs a random subset of $D$ in which each element is included independently with probability $p$. It is obvious that such an algorithm is not $\left( {\varepsilon ,0}\right)$-DP for any $\varepsilon < \infty$: if $\mathbf{x} \in D$, then $\mathcal{A}\left( D\right) = \left\{ \mathbf{x}\right\}$ with positive probability. But if we replace $\mathbf{x}$ with ${\mathbf{x}}^{\prime } \neq \mathbf{x}$ and call this adjacent dataset ${D}^{\prime }$ (so that $\mathbf{x} \notin {D}^{\prime }$), then $\mathcal{A}\left( {D}^{\prime }\right) = \left\{ \mathbf{x}\right\}$ with probability 0. Thus $\mathcal{A}$ is not differentially private for any $p > 0$.

We now claim that $\mathcal{A}$ is $\eta$-RIP for any $\eta > 0$, provided that $p$ is small enough. To see this, observe the following. For any identification algorithm $\mathcal{I}$,
$$
\mathbb{P}\left( {\mathcal{I}\left( {{\mathbf{x}}^{ * },\mathcal{A}\left( D\right) }\right) = {y}^{ * }}\right) = \mathop{\sum }\limits_{{\mathcal{A}\left( D\right) }}\left\lbrack {\mathbb{P}\left( {{\mathbf{x}}^{ * } \in D}\right) \cdot \mathbb{P}\left( {\mathcal{A}\left( D\right) \mid {\mathbf{x}}^{ * } \in D}\right) \cdot \mathbb{P}\left( {\mathcal{I}\left( {{\mathbf{x}}^{ * },\mathcal{A}\left( D\right) }\right) = 1}\right) }\right.
$$

$$
\left. {+\mathbb{P}\left( {{\mathbf{x}}^{ * } \notin D}\right) \cdot \mathbb{P}\left( {\mathcal{A}\left( D\right) \mid {\mathbf{x}}^{ * } \notin D}\right) \cdot \left( {1 - \mathbb{P}\left( {\mathcal{I}\left( {{\mathbf{x}}^{ * },\mathcal{A}\left( D\right) }\right) = 1}\right) }\right) }\right\rbrack
$$

$$
= \mathop{\sum }\limits_{{\mathcal{A}\left( D\right) }}\left\lbrack {\left( {\frac{k}{n}\mathbb{P}\left( {\mathcal{A}\left( D\right) \mid {\mathbf{x}}^{ * } \in D}\right) + \left( {1 - \frac{k}{n}}\right) \mathbb{P}\left( {\mathcal{A}\left( D\right) \mid {\mathbf{x}}^{ * } \notin D}\right) }\right) \mathbb{P}\left( {\mathcal{I}\left( {{\mathbf{x}}^{ * },\mathcal{A}\left( D\right) }\right) = 1}\right) }\right.
$$

$$
\left. {+\left( {1 - \frac{k}{n}}\right) \mathbb{P}\left( {\mathcal{A}\left( D\right) \mid {\mathbf{x}}^{ * } \notin D}\right) }\right\rbrack
$$

$$
\leq \mathop{\sum }\limits_{{\mathcal{A}\left( D\right) }}\max \left\{ {\frac{k}{n}\mathbb{P}\left( {\mathcal{A}\left( D\right) \mid {\mathbf{x}}^{ * } \in D}\right) ,\left( {1 - \frac{k}{n}}\right) \mathbb{P}\left( {\mathcal{A}\left( D\right) \mid {\mathbf{x}}^{ * } \notin D}\right) }\right\}
$$

$$
\leq {\left( 1 - p\right) }^{k}\max \left\{ {\frac{k}{n},1 - \frac{k}{n}}\right\} + \mathop{\sum }\limits_{{\mathcal{A}\left( D\right) \neq \varnothing }}\left( {1 - {\left( 1 - p\right) }^{k}}\right) \max \left\{ {\frac{k}{n},1 - \frac{k}{n}}\right\} \tag{8}
$$

$$
= \max \left\{ {\frac{k}{n},1 - \frac{k}{n}}\right\} \left\lbrack {{\left( 1 - p\right) }^{k} + {C}_{n, k}\left( {1 - {\left( 1 - p\right) }^{k}}\right) }\right\rbrack . \tag{9}
$$
Inequality (8) holds because $\mathcal{A}\left( D\right) = \varnothing$ with probability ${\left( 1 - p\right) }^{k}$ regardless of whether or not ${\mathbf{x}}^{ * } \in D$, and therefore the probability of any particular $\mathcal{A}\left( D\right) \neq \varnothing$ is at most $1 - {\left( 1 - p\right) }^{k}$ (again regardless of whether or not ${\mathbf{x}}^{ * } \in D$). The constant ${C}_{n, k}$ simply counts the number of possible outputs $\mathcal{A}\left( D\right) \neq \varnothing$, which depends only on $n$ and $k$ but not $p$. Thus, as $p \rightarrow 0$, the bracketed term in (9) tends to 1, so the bound tends to $\max \left\{ {k/n,1 - k/n}\right\}$. This completes the proof.

The proof of Theorem 5 emphasizes that the re-identification privacy guarantee is marginal over the output of $\mathcal{A}$. Conditional on a particular output, an adversary may be able to determine whether or not ${\mathbf{x}}^{ * } \in D$ with arbitrarily high precision. This is in contrast with the result of Proposition 6, which shows that even conditionally on a particular output of the synthetic data algorithm, a DP algorithm cannot help the adversary too much.

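The subset-release mechanism from this proof can also be examined numerically. The sketch below (our own illustration) enumerates it exactly for $n = 4$, $k = 2$ and shows the optimal attacker accuracy decreasing toward $\max \{ k/n, 1 - k/n\} = 1/2$ as $p \rightarrow 0$:

```python
from itertools import combinations
from fractions import Fraction

n, k, x_star = 4, 2, 0
datasets = list(combinations(range(n), k))
prior = Fraction(1, len(datasets))

def optimal_accuracy(p):
    """Optimal attacker accuracy against A(D) = {each element of D kept
    independently with probability p}, computed by exact enumeration."""
    joint = {}  # output subset -> [P(out, x* in D), P(out, x* not in D)]
    for D in datasets:
        for r in range(k + 1):
            for S in combinations(D, r):
                pS = prior * p**r * (1 - p)**(k - r)
                rec = joint.setdefault(frozenset(S), [Fraction(0), Fraction(0)])
                rec[0 if x_star in D else 1] += pS
    return sum(max(rec) for rec in joint.values())

accs = [optimal_accuracy(Fraction(1, m)) for m in (2, 10, 100, 1000)]
# accuracy decreases toward max{k/n, 1 - k/n} = 1/2 as p -> 0
```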
Proposition 6. If $\mathcal{A}$ is an $\varepsilon$-DP synthetic data generation algorithm, then for any ${\mathbf{x}}^{ * }$, we have
$$
\frac{\mathbb{P}\left( {{\mathbf{x}}^{ * } \notin {\mathcal{D}}_{\text{train }} \mid \mathcal{A}\left( {\mathcal{D}}_{\text{train }}\right) }\right) }{\mathbb{P}\left( {{\mathbf{x}}^{ * } \in {\mathcal{D}}_{\text{train }} \mid \mathcal{A}\left( {\mathcal{D}}_{\text{train }}\right) }\right) } \leq {e}^{\varepsilon }\frac{\mathbb{P}\left( {{\mathbf{x}}^{ * } \notin {\mathcal{D}}_{\text{train }}}\right) }{\mathbb{P}\left( {{\mathbf{x}}^{ * } \in {\mathcal{D}}_{\text{train }}}\right) }.
$$
Proof. Using expression (4) (and the corresponding expression for ${\mathbf{x}}^{ * } \notin {\mathcal{D}}_{\text{train }}$), we have
$$
\frac{\mathbb{P}\left( {{\mathbf{x}}^{ * } \notin {\mathcal{D}}_{\text{train }} \mid \mathcal{A}\left( {\mathcal{D}}_{\text{train }}\right) = A}\right) }{\mathbb{P}\left( {{\mathbf{x}}^{ * } \in {\mathcal{D}}_{\text{train }} \mid \mathcal{A}\left( {\mathcal{D}}_{\text{train }}\right) = A}\right) } = \frac{\mathop{\sum }\limits_{{D \in {\mathbb{D}}^{\text{out }}}}{\mathbb{P}}_{\mathcal{A}}\left( {\mathcal{A}\left( D\right) = A}\right) }{\mathop{\sum }\limits_{{D \in {\mathbb{D}}^{\text{in }}}}{\mathbb{P}}_{\mathcal{A}}\left( {\mathcal{A}\left( D\right) = A}\right) }
$$

$$
\leq \frac{{e}^{\varepsilon }\mathop{\sum }\limits_{{D \in {\mathbb{D}}^{\text{out }}}}\mathop{\min }\limits_{{{D}^{\prime } \in {\mathbb{D}}^{\text{in }},{D}^{\prime } \sim D}}{\mathbb{P}}_{\mathcal{A}}\left( {\mathcal{A}\left( {D}^{\prime }\right) = A}\right) }{\mathop{\sum }\limits_{{D \in {\mathbb{D}}^{\text{in }}}}{\mathbb{P}}_{\mathcal{A}}\left( {\mathcal{A}\left( D\right) = A}\right) }.
$$
We now analyze this latter expression. We refer again to the biregular graph $G$ defined in Lemma 11. For $D \in {\mathbb{D}}^{\text{out }}$, $N\left( D\right) \subseteq {\mathbb{D}}^{\text{in }}$ denotes the neighbors of $D$ in $G$, and recall that $\left| {N\left( D\right) }\right| = k$ for all $D \in {\mathbb{D}}^{\text{out }}$. Note that since each ${D}^{\prime } \in {\mathbb{D}}^{\text{in }}$ has $n - k$ neighbors, we have
$$
\mathop{\sum }\limits_{{D \in {\mathbb{D}}^{\text{out }}}}\mathop{\sum }\limits_{{{D}^{\prime } \in N\left( D\right) }}\mathbb{P}\left( {\mathcal{A}\left( {D}^{\prime }\right) = A}\right) = \left( {n - k}\right) \mathop{\sum }\limits_{{{D}^{\prime } \in {\mathbb{D}}^{\text{in }}}}\mathbb{P}\left( {\mathcal{A}\left( {D}^{\prime }\right) = A}\right) .
$$
Using this equality, we have
$$
\frac{\mathop{\sum }\limits_{{D \in {\mathbb{D}}^{\text{out }}}}\mathop{\min }\limits_{{{D}^{\prime } \in N\left( D\right) }}{\mathbb{P}}_{\mathcal{A}}\left( {\mathcal{A}\left( {D}^{\prime }\right) = A}\right) }{\mathop{\sum }\limits_{{D \in {\mathbb{D}}^{\text{in }}}}{\mathbb{P}}_{\mathcal{A}}\left( {\mathcal{A}\left( D\right) = A}\right) } = \frac{\mathop{\sum }\limits_{{D \in {\mathbb{D}}^{\text{out }}}}\mathop{\min }\limits_{{{D}^{\prime } \in N\left( D\right) }}{\mathbb{P}}_{\mathcal{A}}\left( {\mathcal{A}\left( {D}^{\prime }\right) = A}\right) }{\frac{1}{n - k}\mathop{\sum }\limits_{{D \in {\mathbb{D}}^{\text{out }}}}\mathop{\sum }\limits_{{{D}^{\prime } \in N\left( D\right) }}{\mathbb{P}}_{\mathcal{A}}\left( {\mathcal{A}\left( {D}^{\prime }\right) = A}\right) }
$$

$$
\leq \frac{n - k}{k}.
$$
Since $\mathbb{P}\left( {{\mathbf{x}}^{ * } \notin {\mathcal{D}}_{\text{train }}}\right) = \left( \begin{matrix} n - 1 \\ k \end{matrix}\right) /\left( \begin{array}{l} n \\ k \end{array}\right)$ and $\mathbb{P}\left( {{\mathbf{x}}^{ * } \in {\mathcal{D}}_{\text{train }}}\right) = \left( \begin{matrix} n - 1 \\ k - 1 \end{matrix}\right) /\left( \begin{array}{l} n \\ k \end{array}\right)$, we have $\frac{\mathbb{P}\left( {{\mathbf{x}}^{ * } \notin {\mathcal{D}}_{\text{train }}}\right) }{\mathbb{P}\left( {{\mathbf{x}}^{ * } \in {\mathcal{D}}_{\text{train }}}\right) } = \frac{n - k}{k}$. This completes the proof.

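Proposition 6's posterior-odds bound can be sanity-checked on randomized response over the membership bit (an illustrative sketch with our own function names; equality is attained for one of the two outputs):

```python
import math

def posterior_odds(eps, prior_in, bit):
    """Posterior odds P(x* not in D_train | output) / P(x* in D_train | output)
    for eps-DP randomized response on the membership bit."""
    flip = 1 / (1 + math.exp(eps))                # probability the bit is flipped
    like_in = (1 - flip) if bit == 1 else flip    # P(output = bit | x* in D_train)
    like_out = flip if bit == 1 else (1 - flip)   # P(output = bit | x* not in D_train)
    return ((1 - prior_in) * like_out) / (prior_in * like_in)
```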
Theorem 7. Let $\parallel \cdot \parallel$ be any norm, and let ${\sigma }^{M} \geq \mathbb{E}\parallel \theta - \mathbb{E}\theta {\parallel }^{M}$ be an upper bound on the $M$-th central moment of $\theta$ with respect to this norm, over the randomness in ${\mathcal{D}}_{\text{train }}$ and $\mathcal{A}$. Let $X$ be a random variable with density proportional to $\exp \left( {-\frac{1}{c\sigma }\parallel X\parallel }\right)$ with $c = {\left( {7.5}/\eta \right) }^{1 + \frac{2}{M}}$. Finally, let $\widehat{\theta } = \theta + X$. Then $\widehat{\theta }$ is $\eta$-RIP, i.e., for any adversary $\mathcal{I}$,
$$
\mathbb{P}\left( {\mathcal{I}\left( {{\mathbf{x}}^{ * },\widehat{\theta }}\right) = {y}^{ * }}\right) \leq 1/2 + \eta .
$$
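In one dimension the noise distribution of Theorem 7 is a Laplace distribution with scale $c\sigma$, so the mechanism can be sketched as follows (function names are our own; we sample the Laplace noise as the difference of two independent exponentials):

```python
import math
import random

def rip_noise_scale(sigma, eta, M):
    """Scale c * sigma from Theorem 7, with c = (7.5 / eta)^(1 + 2/M)."""
    c = (7.5 / eta) ** (1 + 2 / M)
    return c * sigma

def release(theta, sigma, eta, M, rng=random):
    """Release theta + X, where X ~ Laplace(0, c * sigma) in one dimension
    (density proportional to exp(-|x| / (c * sigma)))."""
    b = rip_noise_scale(sigma, eta, M)
    # Difference of two independent Exp(1) variables is Laplace(0, 1).
    x = b * (rng.expovariate(1.0) - rng.expovariate(1.0))
    return theta + x
```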
Proof. We will assume that $k = n/2$ is an integer. Let $N = \left| {\mathbb{D}}^{\text{in }}\right| = \left| {\mathbb{D}}^{\text{out }}\right|$, and let ${\mathbb{D}}^{\text{in }} = \left\{ {{D}_{1},\ldots ,{D}_{N}}\right\}$ and ${\mathbb{D}}^{\text{out }} = \left\{ {{D}_{1}^{\prime },\ldots ,{D}_{N}^{\prime }}\right\}$. Define ${a}_{i} = \mathcal{A}\left( {D}_{i}\right)$ for ${D}_{i} \in {\mathbb{D}}^{\text{in }}$ and ${b}_{j} = \mathcal{A}\left( {D}_{j}^{\prime }\right)$ for ${D}_{j}^{\prime } \in {\mathbb{D}}^{\text{out }}$. Let $Z$ be a random variable which is uniformly distributed on $\left\{ {a}_{i}\right\} \cup \left\{ {b}_{j}\right\}$. We may assume WLOG that $\mathbb{E}Z = 0$. In what follows, $c,\alpha ,\beta$, and $\gamma$ are constants which we will choose later to optimize our bounds. We also make repeated use of the inequalities $1 + x \leq {e}^{x}$ for all $x$; $\frac{1}{1 + x} \geq 1 - x$ for all $x \geq 0$; and ${e}^{x} \leq 1 + {2x}$ and $\left( {1 - x}\right) \left( {1 - y}\right) \geq 1 - x - y$ for $0 \leq x, y \leq 1$. Let $X$ have density proportional to $\exp \left( {-\frac{1}{c\sigma }\parallel X\parallel }\right)$. The posterior likelihood ratio is given by
$$
f\left( \widehat{\theta }\right) \overset{\text{ def }}{ = }\frac{\mathbb{P}\left( {{\mathcal{D}}_{\text{train }} \in {\mathbb{D}}^{\text{in }} \mid \widehat{\theta }}\right) }{\mathbb{P}\left( {{\mathcal{D}}_{\text{train }} \in {\mathbb{D}}^{\text{out }} \mid \widehat{\theta }}\right) } = \frac{\mathop{\sum }\limits_{{i = 1}}^{N}\exp \left( {-\frac{1}{c\sigma }\begin{Vmatrix}{\widehat{\theta } - {a}_{i}}\end{Vmatrix}}\right) }{\mathop{\sum }\limits_{{j = 1}}^{N}\exp \left( {-\frac{1}{c\sigma }\begin{Vmatrix}{\widehat{\theta } - {b}_{j}}\end{Vmatrix}}\right) }.
$$
We claim that for all $\widehat{\theta }$ with $\parallel \widehat{\theta }\parallel \leq {\gamma \sigma c}\log c$, we have $1 - \frac{\eta }{2} \leq f\left( \widehat{\theta }\right) \leq {\left( 1 - \frac{\eta }{2}\right) }^{-1}$. First, suppose that $\parallel \widehat{\theta }\parallel \leq {c}^{\alpha }\sigma$. Then we have:
$$
f\left( \widehat{\theta }\right) \geq \frac{\mathop{\sum }\limits_{{\begin{Vmatrix}{a}_{i}\end{Vmatrix} \leq {c}^{\alpha }\sigma }}\exp \left\lbrack {-\frac{1}{c\sigma }\left( {\parallel \widehat{\theta }\parallel + \begin{Vmatrix}{a}_{i}\end{Vmatrix}}\right) }\right\rbrack }{N}
$$

$$
\geq \frac{\left( {1 - \frac{2}{{c}^{M\alpha }}}\right) N \cdot {e}^{-2{c}^{\alpha - 1}}}{N}
$$

$$
\geq 1 - 4{c}^{-\min \left( {{M\alpha },1 - \alpha }\right) }. \tag{10}
$$
|
| 518 |
+
|
| 519 |
+
Otherwise, $\parallel \widehat{\theta }\parallel \geq {c}^{\alpha }\sigma$ . We now have the following chain of inequalities:
$$
f\left( \widehat{\theta }\right) \geq \frac{\mathop{\sum }\limits_{{\begin{Vmatrix}{a}_{i}\end{Vmatrix} \leq {c}^{\alpha }\sigma }}{e}^{-\frac{1}{c\sigma }\left( {\begin{Vmatrix}\widehat{\theta }\end{Vmatrix} + \begin{Vmatrix}{a}_{i}\end{Vmatrix}}\right) }}{\mathop{\sum }\limits_{{\begin{Vmatrix}{b}_{j}\end{Vmatrix} \leq {c}^{\alpha }\sigma }}{e}^{-\frac{1}{c\sigma }\left( {\begin{Vmatrix}\widehat{\theta }\end{Vmatrix} - \begin{Vmatrix}{b}_{j}\end{Vmatrix}}\right) } + \mathop{\sum }\limits_{{{c}^{\alpha }\sigma < \begin{Vmatrix}{b}_{j}\end{Vmatrix} < \begin{Vmatrix}\widehat{\theta }\end{Vmatrix}}}{e}^{-\frac{1}{c\sigma }\left( {\begin{Vmatrix}\widehat{\theta }\end{Vmatrix} - \begin{Vmatrix}{b}_{j}\end{Vmatrix}}\right) } + \mathop{\sum }\limits_{{\begin{Vmatrix}{b}_{j}\end{Vmatrix} \geq \begin{Vmatrix}\widehat{\theta }\end{Vmatrix}}}{e}^{-\frac{1}{c\sigma }\left( {\begin{Vmatrix}{b}_{j}\end{Vmatrix} - \begin{Vmatrix}\widehat{\theta }\end{Vmatrix}}\right) }}
$$

$$
= \frac{\mathop{\sum }\limits_{{\begin{Vmatrix}{a}_{i}\end{Vmatrix} \leq {c}^{\alpha }\sigma }}{e}^{-\frac{1}{c\sigma }\begin{Vmatrix}{a}_{i}\end{Vmatrix}}}{\mathop{\sum }\limits_{{\begin{Vmatrix}{b}_{j}\end{Vmatrix} \leq {c}^{\alpha }\sigma }}{e}^{\frac{1}{c\sigma }\begin{Vmatrix}{b}_{j}\end{Vmatrix}} + \mathop{\sum }\limits_{{{c}^{\alpha }\sigma < \begin{Vmatrix}{b}_{j}\end{Vmatrix} < \begin{Vmatrix}\widehat{\theta }\end{Vmatrix}}}{e}^{\frac{1}{c\sigma }\begin{Vmatrix}{b}_{j}\end{Vmatrix}} + \mathop{\sum }\limits_{{\begin{Vmatrix}{b}_{j}\end{Vmatrix} \geq \begin{Vmatrix}\widehat{\theta }\end{Vmatrix}}}{e}^{\frac{1}{c\sigma }\left( {2\begin{Vmatrix}\widehat{\theta }\end{Vmatrix} - \begin{Vmatrix}{b}_{j}\end{Vmatrix}}\right) }}
$$

$$
\geq \frac{N\left( {1 - \frac{2}{{c}^{M\alpha }}}\right) {e}^{-{c}^{\alpha - 1}}}{N\left( {{e}^{{c}^{\alpha - 1}} + \frac{2}{{c}^{M\alpha }}{e}^{\frac{1}{c\sigma }\parallel \widehat{\theta }\parallel } + \frac{2{\sigma }^{M}}{\parallel \widehat{\theta }{\parallel }^{M}}{e}^{\frac{1}{c\sigma }\parallel \widehat{\theta }\parallel }}\right) }
$$

$$
\geq \frac{\left( {1 - \frac{2}{{c}^{M\alpha }}}\right) {e}^{-{c}^{\alpha - 1}}}{{e}^{{c}^{\alpha - 1}} + \frac{2}{{c}^{M\alpha }}{e}^{\gamma \log c} + \frac{2}{{c}^{M\alpha }}{e}^{\gamma \log c}}
$$

$$
\geq 1 - 2{c}^{-{M\alpha }} - {c}^{\alpha - 1} - 2{c}^{\alpha - 1} - 4{c}^{\gamma - {M\alpha }} \tag{11}
$$

$$
\geq 1 - 9{c}^{-\min \left( {1 - \alpha ,{M\alpha } - \gamma }\right) }.
$$
Combining this with (10) shows that $f\left( \widehat{\theta }\right) \geq 1 - 9{c}^{-\min \left( {1 - \alpha ,{M\alpha } - \gamma }\right) }$ for all $\parallel \widehat{\theta }\parallel \leq {\gamma \sigma c}\log c$ .
Next, we must measure the probability of $\parallel \widehat{\theta }\parallel \leq {\gamma \sigma c}\log c$ . We can lower bound this probability by first conditioning on the value of ${\mathcal{D}}_{\text{train }}$ :
$$
\mathbb{P}\left( {\parallel \widehat{\theta }\parallel \leq {\gamma \sigma c}\log c}\right) = \frac{1}{\left| \mathbb{D}\right| }\mathop{\sum }\limits_{{D \in \mathbb{D}}}\mathbb{P}\left( {\parallel \widehat{\theta }\parallel \leq {\gamma \sigma c}\log c \mid {\mathcal{D}}_{\text{train }} = D}\right)
$$

$$
\geq \frac{1}{\left| \mathbb{D}\right| }\mathop{\sum }\limits_{{\parallel \mathcal{A}\left( D\right) \parallel \leq {c\sigma }}}\mathbb{P}\left( {\parallel X\parallel \leq {\gamma \sigma c}\log c - \parallel \mathcal{A}\left( D\right) \parallel }\right)
$$

$$
\geq \left( {1 - \frac{1}{{c}^{M}}}\right) \left( {1 - \frac{1}{2}\exp \left( {-\frac{{\gamma \sigma c}\log c - {c\sigma }}{c\sigma }}\right) }\right)
$$

$$
= \left( {1 - \frac{1}{{c}^{M}}}\right) \left( {1 - \frac{e}{2}{c}^{-\gamma }}\right)
$$

$$
\geq 1 - {c}^{-M} - \frac{e}{2}{c}^{-\gamma }.
$$
Note that the exact same logic (reversing the roles of the ${a}_{i}$ ’s and ${b}_{j}$ ’s) shows that $f\left( \widehat{\theta }\right) \leq {\left( 1 - 9{c}^{-\min \left( {1 - \alpha ,{M\alpha } - \gamma }\right) }\right) }^{-1}$ with probability at least $1 - {c}^{-M} - \frac{e}{2}{c}^{-\gamma }$ as well.
Finally, we can invoke the result of Proposition 2. Let $\Delta = 9{c}^{-\min \left( {1 - \alpha ,{M\alpha } - \gamma }\right) }$ and note that $1 - \Delta \leq f\left( \widehat{\theta }\right) \leq {\left( 1 - \Delta \right) }^{-1}$ implies that $\max \left\{ {\mathbb{P}\left( {{\mathbf{x}}^{ * } \in {\mathcal{D}}_{\text{train }} \mid \widehat{\theta }}\right) ,\mathbb{P}\left( {{\mathbf{x}}^{ * } \notin {\mathcal{D}}_{\text{train }} \mid \widehat{\theta }}\right) }\right\} \leq \frac{1}{2} + \frac{\Delta }{2}$ . Thus we have
$$
\int \max \left\{ {\mathbb{P}\left( {{\mathbf{x}}^{ * } \in {\mathcal{D}}_{\text{train }} \mid \widehat{\theta }}\right) ,\mathbb{P}\left( {{\mathbf{x}}^{ * } \notin {\mathcal{D}}_{\text{train }} \mid \widehat{\theta }}\right) }\right\} d\mathbb{P}\left( \widehat{\theta }\right)
$$

$$
\leq \left( {\frac{1}{2} + \frac{\Delta }{2}}\right) \mathbb{P}\left( {f\left( \widehat{\theta }\right) \in \left\lbrack {1 - \Delta ,{\left( 1 - \Delta \right) }^{-1}}\right\rbrack }\right) + \mathbb{P}\left( {f\left( \widehat{\theta }\right) \notin \left\lbrack {1 - \Delta ,{\left( 1 - \Delta \right) }^{-1}}\right\rbrack }\right)
$$

$$
\leq \frac{1}{2} + \frac{9}{2}{c}^{-\min \left( {1 - \alpha ,{M\alpha } - \gamma }\right) } + 2{c}^{-M} + e{c}^{-\gamma }
$$

$$
\leq \frac{1}{2} + \left( {\frac{9}{2} + e}\right) {c}^{-\min \left( {1 - \alpha ,{M\alpha } - \gamma ,\gamma }\right) } + 2{c}^{-M} \tag{12}
$$

$$
\leq \frac{1}{2} + {7.5}{c}^{-\frac{M}{M + 2}}
$$
where the last inequality follows by setting $\gamma = 1 - \alpha = {M\alpha } - \gamma$ and solving, yielding $\gamma =$ $M/\left( {M + 2}\right)$ . Solving for $\eta = {7.5}{c}^{-\frac{M}{M + 2}}$ , we find that $c = {\left( \frac{7.5}{\eta }\right) }^{1 + 2/M}$ suffices. This completes the proof.
Corollary 12. When $M = 2$ , taking $c = {\left( {6.16}/\eta \right) }^{2}$ guarantees $\eta$ -RIP.
Proof. Let $M = 2$ and $\alpha = \gamma = 1/2$ and refer to the proof of Theorem 7. Equation (11) can be improved to
$$
1 - 2{c}^{-{M\alpha }} - {c}^{\alpha - 1} - \left( {e - 1}\right) {c}^{\alpha - 1} - 4{c}^{\gamma - {M\alpha }} = 1 - \left( {4 + e + 2{c}^{-1/2}}\right) {c}^{-1/2}
$$
using the inequality ${e}^{x} \leq 1 + \left( {e - 1}\right) x$ for $0 \leq x \leq 1$ instead of ${e}^{x} \leq 1 + {2x}$ , which was used to prove Theorem 7. With $\Delta = \left( {4 + e + 2{c}^{-1/2}}\right) {c}^{-1/2}$ , (12) becomes
$$
\frac{1}{2} + \left( {\frac{4 + e + 2{c}^{-1/2}}{2} + e + 2{c}^{-3/2}}\right) {c}^{-1/2}. \tag{13}
$$
Observe that since $\eta \leq 1/2$ , when we set $c = {\left( {6.16}/\eta \right) }^{2}$ , we always have $c \geq {\left( 2 \cdot {6.16}\right) }^{2}$ , in which case
$$
\frac{4 + e + 2{c}^{-1/2}}{2} + e + 2{c}^{-3/2} \leq {6.1597}.
$$
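The numerical bound above is easy to verify directly. Since the left-hand side is decreasing in $c$ , it suffices to check it at the smallest allowed value $c = {\left( 2 \cdot {6.16}\right) }^{2}$ :

```python
import math

# Evaluate (4 + e + 2c^{-1/2})/2 + e + 2c^{-3/2} at the smallest c permitted
# by eta <= 1/2; the expression is decreasing in c, so this is the worst case.
c = (2 * 6.16) ** 2
val = (4 + math.e + 2 * c ** -0.5) / 2 + math.e + 2 * c ** -1.5
assert val <= 6.1597
```

The check passes with a small margin, which shows how tight the constant 6.16 in Corollary 12 is.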
Thus, with $c = {\left( {6.16}/\eta \right) }^{2}$ , we have
$$
\text{(13)} \leq \frac{1}{2} + {6.1597} \cdot \frac{\eta }{6.16} \leq \frac{1}{2} + \eta .
$$
This completes the proof.
Corollary 8. Let ${\sigma }_{i}^{2} \geq \mathbb{E}{\left| {\theta }_{i} - \mathbb{E}{\theta }_{i}\right| }^{2}$ , and define $\parallel x{\parallel }_{\sigma ,2} = {\left( \mathop{\sum }\limits_{{i = 1}}^{d}\frac{{\left| {x}_{i}\right| }^{2}}{d{\sigma }_{i}^{2}}\right) }^{1/2}$ . Generate ${Y}_{i} \sim \mathcal{N}\left( {0,{\sigma }_{i}^{2}}\right)$ , set $U = Y/\parallel Y{\parallel }_{\sigma ,2}$ , and draw $r \sim \operatorname{Laplace}\left( {\left( \frac{6.16}{\eta }\right) }^{2}\right)$ . Finally, set $X = {rU}$ and return $\widehat{\theta } = \theta + X$ . Then $\widehat{\theta }$ is $\eta$ -RIP.
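A direct implementation of this mechanism can be sketched as follows (our own sketch; the function and argument names are not from the paper):

```python
import numpy as np

def rip_perturb(theta, sigmas, eta, rng=None):
    """Release theta + r*U following the scheme of Corollary 8: U is a
    direction with ||U||_{sigma,2} = 1 obtained by normalizing
    Y_i ~ N(0, sigma_i^2), and r ~ Laplace with scale (6.16/eta)^2."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta, dtype=float)
    sigmas = np.asarray(sigmas, dtype=float)
    d = theta.size
    Y = rng.normal(0.0, sigmas)                        # Y_i ~ N(0, sigma_i^2)
    U = Y / np.sqrt(np.sum(Y**2 / (d * sigmas**2)))    # ||U||_{sigma,2} = 1
    r = rng.laplace(scale=(6.16 / eta) ** 2)           # Laplace radius
    return theta + r * U
```

Here `sigmas` holds per-coordinate upper bounds on the standard deviation of $\theta$ , and smaller `eta` (stronger privacy) inflates the Laplace radius quadratically.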
Proof. We wish to apply the result of Theorem 7 with $M = 2$ and $\parallel \cdot \parallel = \parallel \cdot {\parallel }_{\sigma ,2}$ . To do this, we must bound the resulting ${\sigma }^{2}$ and show that the density of $X$ has the correct form. First, observe that
$$
{\sigma }^{2} = \mathbb{E}\parallel \theta - \mathbb{E}\theta {\parallel }^{2} = \mathop{\sum }\limits_{{i = 1}}^{d}\mathbb{E}\frac{{\left| {\theta }_{i} - \mathbb{E}{\theta }_{i}\right| }^{2}}{d{\sigma }_{i}^{2}} \leq \mathop{\sum }\limits_{{i = 1}}^{d}\frac{1}{d} = 1.
$$
It remains to show that the density has the correct form, i.e. depends on $X$ only through $\parallel X\parallel$ . This will be the case if the marginal density of $U$ is uniform. Let $p\left( U\right)$ be the density of $U$ . Observe that, for any $\parallel u\parallel = \parallel u{\parallel }_{\sigma ,2} = 1$ , we have that $Y \mapsto u$ iff $Y = {su}$ for some $s > 0$ . Thus
$$
p\left( u\right) \propto {\int }_{s = 0}^{\infty }{e}^{-c\left( {\frac{1}{{\sigma }_{1}^{2}}{s}^{2}{u}_{1}^{2} + \cdots + \frac{1}{{\sigma }_{d}^{2}}{s}^{2}{u}_{d}^{2}}\right) }{ds}
$$

$$
= {\int }_{s = 0}^{\infty }{e}^{-c{s}^{2}d\parallel u{\parallel }^{2}}{ds}
$$

$$
= {\int }_{s = 0}^{\infty }{e}^{-c{s}^{2}d}{ds}.
$$
The last equality holds because $\parallel u\parallel = 1$ is constant. Thus, the density is independent of $u$ and we can directly apply Theorem 7.
Lemma 13 (Chebyshev’s Inequality). Let $\parallel \cdot \parallel$ be any norm and $X$ be a random vector with $\mathbb{E}\parallel X - \mathbb{E}X{\parallel }^{k} \leq {\sigma }^{k}$ . Then for any $t > 0$ , we have
$$
\mathbb{P}\left( {\parallel X - \mathbb{E}X\parallel > {t\sigma }}\right) \leq 1/{t}^{k}.
$$
Proof. This follows almost directly from Markov's inequality:
$$
\mathbb{P}\left( {\parallel X - \mathbb{E}X\parallel > {t\sigma }}\right) = \mathbb{P}\left( {\parallel X - \mathbb{E}X{\parallel }^{k} > {t}^{k}{\sigma }^{k}}\right) \leq \frac{\mathbb{E}\parallel X - \mathbb{E}X{\parallel }^{k}}{{t}^{k}{\sigma }^{k}} \leq 1/{t}^{k}.
$$
NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/VhBtAHeIUaB/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,233 @@
§ PROVABLE RE-IDENTIFICATION PRIVACY
Anonymous Author(s)

Affiliation

Address

email

§ ABSTRACT
In applications involving sensitive data, such as finance and healthcare, the necessity for preserving data privacy can be a significant barrier to machine learning model development. Differential privacy (DP) has emerged as one canonical standard for provable privacy. However, DP's strong theoretical guarantees often come at the cost of a large drop in its utility for machine learning; and DP guarantees themselves can be difficult to interpret. As a result, standard DP has encountered deployment challenges in practice. In this work, we propose a different privacy notion, re-identification privacy (RIP), to address these challenges. RIP guarantees are easily interpretable in terms of the success rate of membership inference attacks. We give a precise characterization of the relationship between RIP and DP, and show that RIP can be achieved using less randomness compared to the amount required for guaranteeing DP, leading to a smaller drop in utility. Our theoretical results also give rise to a simple algorithm for guaranteeing RIP which can be used as a wrapper around any algorithm with a continuous output, including parametric model training.
§ 1 INTRODUCTION
As the popularity and efficacy of machine learning (ML) have increased, the number of domains in which ML is applied has also expanded greatly. Some of these domains, such as finance or healthcare, are based on machine learning on sensitive data which cannot be publicly shared due to regulatory or ethical concerns (Assefa et al., 2020; Office for Civil Rights, 2002). In these instances, maintaining data privacy is of paramount importance and must be considered at every stage of the machine learning process, from model development to deployment. In development, even sharing data in-house while retaining the appropriate level of privacy can be a barrier to model development (Assefa et al., 2020). After deployment, the trained model itself can leak information about the training data if appropriate precautions are not taken (Shokri et al., 2017; Carlini et al., 2021a).
Differential privacy (DP) (Dwork et al., 2014) has emerged as the gold standard for provable privacy in the academic literature. Training methods for DP use randomized algorithms applied on databases of points, and DP stipulates that the algorithm's random output cannot change much depending on the presence or absence of one individual point in the database. These guarantees in turn give information theoretic protection against the maximum amount of information that an adversary can obtain about any particular sample in the database, regardless of that adversary's prior knowledge or computational power, making DP an attractive method for guaranteeing privacy. However, DP's strong theoretical guarantees often come at the cost of a large drop in utility for many algorithms. In addition, DP guarantees themselves are difficult to interpret by non-experts. For instance, there is a precise definition for what it means for an algorithm to satisfy DP with $\varepsilon = {10}$ , but it is not a priori clear what this definition guarantees in terms of practical questions that a user could have, the most basic of which might be to ask whether or not an attacker can determine whether or not that user's information was included in the algorithm's input. These issues hinder the widespread adoption of DP in practice.
In this paper, we propose a novel privacy notion, re-identification privacy (RIP), to address these challenges. RIP is based on re-identification, also called membership inference. Re-identification measures privacy via a game played between the algorithm designer and an adversary or attacker. The adversary is presented with the algorithm’s output and a "target" sample ${\mathbf{x}}^{ * }$ , which may or may not have been included in the algorithm's input set. The adversary's goal is to determine whether or not the target sample was included in the algorithm's input. If the adversary can succeed with probability much higher than random guessing, then the algorithm must be leaking information about its input. This measure of privacy is one of the simplest for the attacker; thus, provably protecting against it is a strong privacy guarantee. Furthermore, RIP is easily interpretable, as it is measured with respect to a simple quantity, namely the maximum success rate of an attacker. In summary, our contributions are as follows:
* We propose a novel privacy notion, which we dub re-identification privacy (RIP).
* We characterize the relationship between RIP and differential privacy (DP).

* We introduce algorithms for generating RIP synthetic data.

* We demonstrate that certifying RIP can allow for much higher utility than certifying DP, and never results in worse utility.
§ 2 RELATED WORK
Privacy attacks in ML The study of privacy attacks has recently gained popularity in the machine learning community as the importance of data privacy has become more apparent. In a membership inference or re-identification attack (Shokri et al., 2017), an attacker is presented with a particular sample and the output of the algorithm to be attacked. The attacker's goal is to determine whether or not the presented sample was included in the training data. If the attacker can determine the membership of the sample with a probability significantly greater than random guessing, this indicates that the algorithm is leaking information about its training data. Obscuring whether or not a given individual belongs to the private dataset is the core promise of private data sharing, and the main reason that we focus on membership inference as the privacy measure. Membership inference attacks against predictive models have been studied extensively (Shokri et al., 2017; Baluta et al., 2022; Hu et al., 2022; Liu et al., 2022; He et al., 2022; Carlini et al., 2021a), and recent work has also developed membership inference attacks against synthetic data (Stadler et al., 2022; Chen et al., 2020).
In a reconstruction attack, the attacker is not presented with a real sample to classify as belonging to the training set or not, but rather has to create samples belonging to the training set based only on the algorithm's output. Reconstruction attacks have been successfully conducted against large language models (Carlini et al., 2021b). At present, these attacks require the attacker to have a great deal of auxiliary information to succeed. For our purposes, we are interested in privacy attacks to measure the privacy of an algorithm, and such a granular task may place too high a burden on the attacker to accurately detect "small" amounts of privacy leakage.
In an attribute inference attack (Bun et al., 2021; Stadler et al., 2022), the attacker tries to infer a sensitive attribute from a particular sample, based on its non-sensitive attributes and the attacked algorithm output. It has been argued that attribute inference is really the entire goal of statistical learning, and therefore should not be considered a privacy violation (Bun et al., 2021; Jayaraman & Evans, 2022).
Differential privacy (DP) DP (Dwork et al., 2014) and its variants (Mironov, 2017; Dwork & Rothblum, 2016) offer strong, information-theoretic privacy guarantees. A DP (probabilistic) algorithm is one in which the probability law of its output does not change much if one sample in its input is changed. That is, if $D$ and ${D}^{\prime }$ are two datasets (collections of $n$ points) which differ in exactly one element, then the algorithm $\mathcal{A}$ is $\varepsilon$ -DP if
$$
\mathbb{P}\left( {\mathcal{A}\left( D\right) \in S}\right) \leq {e}^{\varepsilon }\mathbb{P}\left( {\mathcal{A}\left( {D}^{\prime }\right) \in S}\right)
$$
for any subset $S$ of the output space. DP has many desirable properties, such as the ability to compose DP methods or post-process the output without losing guarantees. Many simple "wrapper" methods are also available for certifying DP. Among the simplest is the Laplace mechanism, which adds Laplace noise to the algorithm output. The noise level must generally depend on the sensitivity of the base algorithm, which measures how much a single input sample can change the algorithm's output. The method we propose in this work is very similar to the Laplace mechanism, but we show that the amount of noise needed can be reduced drastically. Abadi et al. (2016) introduced DP-SGD, a powerful tool enabling DP to be combined with deep learning methods with only a small modification to the standard gradient descent training procedure. However, as previously mentioned, enforcing DP does not come without a cost. Enforcing DP with high levels of privacy (small $\varepsilon$ ) often comes with sharp decreases in algorithm utility (Tao et al., 2021; Stadler et al., 2022). DP is also difficult to audit; it must be proven mathematically for a given algorithm implementation. Checking it empirically is generally computationally intractable (Gilbert & McMillan, 2018). The difficulty of checking DP has led to widespread implementation bugs (and even errors due to finite machine precision), which invalidate the guarantees of DP (Jagielski et al., 2020).
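For concreteness, the scalar Laplace mechanism discussed above can be sketched as follows (a textbook sketch of our own, not code from this paper):

```python
import numpy as np

def laplace_mechanism(query_value, sensitivity, eps, rng=None):
    """Classic eps-DP Laplace mechanism: add Laplace noise with scale
    sensitivity/eps to a numeric query result."""
    rng = np.random.default_rng() if rng is None else rng
    return query_value + rng.laplace(scale=sensitivity / eps)
```

Smaller $\varepsilon$ means a larger noise scale, which is exactly the utility cost at issue.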
The independent work of Thudi et al. (2022) specifically applies DP to bound re-identification rates, and our results in Section 3.4 complement theirs on the relationship between re-identification and DP. However, our results show that DP is not required to prevent re-identification; it is merely one option, and we give alternative methods for defending against membership inference.
Auditing methods and metrics Another important component of synthetic data is privacy and utility auditing. This is especially crucial in regulated environments where users may be required to prove compliance of their tools with privacy regulations. Recent works (Alaa et al., 2022; Meehan et al., 2020) have proposed heuristics for measuring both synthetic data privacy and utility. Utility metrics are often based on statistical measures of similarity between the synthesized and real data (Yoon et al., 2020). Privacy metrics try to capture the notion of whether or not a generative model has "memorized" its training data, typically by looking at distances of the synthetic data to training data vs. some held out data. Most of the proposed distance-based heuristics fall victim to simple counterexamples in which the proposed synthetic data scores perfectly on the privacy metric, but clearly does not preserve the privacy of the training data. On the other hand, RIP lends itself to useful empirical measurement, as the success rate of any existing membership inference attack method gives a lower bound on the best achievable privacy.
§ 3 RE-IDENTIFICATION PRIVACY (RIP)
§ 3.1 NOTATION
We make use of the following notation. We will always use $\mathcal{D}$ to refer to our entire dataset, which we assume consists of $n$ samples, all of which must remain private. We will use $\mathbf{x} \in \mathcal{D}$ or ${\mathbf{x}}^{ * } \in \mathcal{D}$ to refer to a particular sample. ${\mathcal{D}}_{\text{ train }} \subseteq \mathcal{D}$ refers to a size- $k$ subset of our private data. We will assume it is selected randomly, so ${\mathcal{D}}_{\text{ train }}$ is a random variable. The remaining data $\mathcal{D} \smallsetminus {\mathcal{D}}_{\text{ train }}$ will be referred to as the holdout data. We denote by $\mathbb{D}$ the set of all size- $k$ subsets of $\mathcal{D}$ (i.e., all possible training sets), and we will typically use $D \in \mathbb{D}$ to refer to a particular realization of the random variable ${\mathcal{D}}_{\text{ train }}$ . Finally, given a particular sample ${\mathbf{x}}^{ * } \in \mathcal{D},{\mathbb{D}}^{\text{ in }}$ (resp. ${\mathbb{D}}^{\text{ out }}$ ) will refer to those sets $D \in \mathbb{D}$ for which ${\mathbf{x}}^{ * } \in D$ (resp. ${\mathbf{x}}^{ * } \notin D$ ).
§ 3.2 THEORETICAL MOTIVATION
The implicit assumption behind the public release of any statistical algorithm-be it a generative or predictive ML model, or even the release of simple population statistics-is that it is acceptable for statistical information about the modelled data to be released publicly. In the context of membership inference, this poses a potential problem: if the population we are modeling is significantly different from the "larger" population, then if our algorithm's output contains any useful information whatsoever, it should be possible for an attacker to infer whether or not a given record could have plausibly come from our training data or not.
We illustrate this concept with an example. Suppose we wish to publish a model which predicts a patient's blood pressure from several biomarkers, specifically for patients who suffer from a particular chronic disease. To do this, we collect a dataset of individuals with confirmed cases of the disease, and use this data to train a linear regression model with coefficients $\widehat{\theta }$ . Formally, we let $\mathbf{x} \in {\mathbb{R}}^{d}$ denote the features (e.g. biomarker values), $z \in \mathbb{R}$ denote the patient’s blood pressure, and $y = \mathbb{1}\{$ patient has the chronic disease in question $\}$ . In this case, the private dataset ${\mathcal{D}}_{\text{ train }}$ contains only the patients with $y = 1$ . Assume that in the general populace, patient features are drawn from a mixture model:
$$
y \sim \operatorname{Bernoulli}\left( p\right) ,\;\mathbf{x} \sim \mathcal{N}\left( {0,I}\right) ,\;z \mid \mathbf{x},y \sim {\theta }_{y}^{\top }\mathbf{x},\;{\theta }_{0} \neq {\theta }_{1}.
$$
In the re-identification attack scenario, an adversary observes a data point $\left( {{\mathbf{x}}^{ * },{z}^{ * }}\right)$ and the model $\widehat{\theta }$ , and tries to determine whether or not $\left( {{\mathbf{x}}^{ * },{z}^{ * }}\right) \in {\mathcal{D}}_{\text{ train }}$ . If ${\theta }_{0}$ and ${\theta }_{1}$ are well-separated, then an adversary can train an effective classifier to determine the corresponding label $\mathbb{1}\left\{ {\left( {{\mathbf{x}}^{ * },{z}^{ * }}\right) \in {\mathcal{D}}_{\text{ train }}}\right\}$ for $\left( {{\mathbf{x}}^{ * },{z}^{ * }}\right)$ by checking whether or not ${z}^{ * } \approx {\widehat{\theta }}^{\top }{\mathbf{x}}^{ * }$ . Since only data with $y = 1$ belong to ${\mathcal{D}}_{\text{ train }}$ , this provides a signal to the adversary as to whether or not ${\mathbf{x}}^{ * }$ could have belonged to ${\mathcal{D}}_{\text{ train }}$ or not. The point is that in this setting, this outcome is unavoidable if $\widehat{\theta }$ is to provide any utility whatsoever. In other words:
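A toy numerical version of this example (our own construction, with hypothetical dimensions and a zero-threshold attacker) illustrates how well the residual check works when the responses are noiseless:

```python
import numpy as np

# The attacker thresholds the residual |z - theta_hat^T x| to decide whether
# a record comes from the modeled (y = 1) subpopulation.
rng = np.random.default_rng(0)
d, n = 5, 2000
theta0, theta1 = rng.normal(size=d), rng.normal(size=d)   # theta_0 != theta_1
X = rng.normal(size=(n, d))
y = rng.integers(0, 2, size=n)
z = np.where(y == 1, X @ theta1, X @ theta0)              # noiseless responses

# The published model is trained only on the y = 1 ("chronic disease") records.
theta_hat, *_ = np.linalg.lstsq(X[y == 1], z[y == 1], rcond=None)
resid = np.abs(z - X @ theta_hat)
attack_acc = ((resid < 1e-6).astype(int) == y).mean()     # near 1
```

Any useful $\widehat{\theta}$ separates the two subpopulations here, which is exactly the point: this kind of "re-identification" of the modeled population cannot be prevented without destroying utility.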
In order to preserve utility, re-identification privacy must be measured with respect to the distribution from which the private data are drawn.
The example above motivates the following theoretical ideal for our synthetic data. Let $\mathcal{D} = {\left\{ {\mathbf{x}}_{i}\right\} }_{i = 1}^{n}$ be the private dataset and suppose that ${\mathbf{x}}_{i}\overset{\text{ i.i.d. }}{ \sim }\mathcal{P}$ for some probability distribution $\mathcal{P}$ . (Note: Here, ${\mathbf{x}}^{ * }$ corresponds to the complete datapoint $\left( {{\mathbf{x}}^{ * },{z}^{ * }}\right)$ in the example above.) Let $\mathcal{A}$ be our (randomized) algorithm, and denote its output by $\theta = \mathcal{A}\left( \mathcal{D}\right)$ . We generate a test point based on:
$$
{y}^{ * } \sim \operatorname{Bernoulli}\left( {1/2}\right) ,\;{\mathbf{x}}^{ * } \mid {y}^{ * } \sim {y}^{ * }\operatorname{Unif}\left( {\mathcal{D}}_{\text{ train }}\right) + \left( {1 - {y}^{ * }}\right) \mathcal{P},
$$
i.e. ${\mathbf{x}}^{ * }$ is a fresh draw from $\mathcal{P}$ or a random element of the private training data with equal probability. Let $\mathcal{I}$ denote any re-identification algorithm which takes as input ${\mathbf{x}}^{ * }$ and the algorithm’s output $\theta$ . The notion of privacy we wish to enforce is that $\mathcal{I}$ cannot do much better to ascertain the membership of ${\mathbf{x}}^{ * }$ than guessing randomly:
$$
{\mathbb{P}}_{\mathcal{A},{\mathcal{D}}_{\text{ train }}}\left( {\mathcal{I}\left( {{\mathbf{x}}^{ * },\theta }\right) = {y}^{ * }}\right) \leq 1/2 + \eta ,\;\eta \ll 1/2. \tag{1}
$$
§ 3.3 PRACTICAL DEFINITION
In reality, we do not have access to the underlying distribution $\mathcal{P}$ . Instead, we propose to use a bootstrap sampling approach to approximate fresh draws from $\mathcal{P}$ .
Definition 1 (Re-Identification Privacy (RIP)). Fix $k \leq n$ and let ${\mathcal{D}}_{\text{ train }} \subseteq \mathcal{D}$ be a size- $k$ subset chosen uniformly at random from the elements in $\mathcal{D}$ . For ${\mathbf{x}}^{ * } \in \mathcal{D}$ , let ${y}^{ * } = \mathbb{1}\left\{ {{\mathbf{x}}^{ * } \in {\mathcal{D}}_{\text{ train }}}\right\}$ . An algorithm $\mathcal{A}$ is $\eta$ -RIP with respect to $\mathcal{D}$ if for any identification algorithm $\mathcal{I}$ and for every ${\mathbf{x}}^{ * } \in \mathcal{D}$ ,
we have
$$
\mathbb{P}\left( {\mathcal{I}\left( {{\mathbf{x}}^{ * },\mathcal{A}\left( {\mathcal{D}}_{\text{ train }}\right) }\right) = {y}^{ * }}\right) \leq \max \left\{ {\frac{k}{n},1 - \frac{k}{n}}\right\} + \eta .
$$
Here, the probability is taken over the uniformly random size- $k$ subset ${\mathcal{D}}_{\text{ train }} \subseteq \mathcal{D}$ , as well as any randomness in $\mathcal{A}$ and $\mathcal{I}$ .
Definition 1 states that given the output of $\mathcal{A}$ , an adversary cannot determine whether a given point was in the holdout set or training set with probability more than $\eta$ better than always guessing the a priori more likely outcome. In the remainder of the paper, we will set $k = n/2$ , so that $\mathcal{A}$ is $\eta$ -RIP if an attacker cannot have average accuracy greater than $\left( {1/2 + \eta }\right)$ . This gives the largest a priori entropy for the attacker's classification task, which creates the highest ceiling on how much of an advantage an attacker can possibly gain from the algorithm's output, and consequently the most accurate measurement of privacy leakage. The choice $k = n/2$ also keeps us as close as possible to the theoretical motivation in the previous subsection. We note that analogues of all of our results apply for general $k$ .
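To make the membership game behind Definition 1 concrete, here is a minimal Monte-Carlo sketch of the experiment with $k = n/2$. The mechanism (`noisy_mean`) and the threshold attacker are hypothetical stand-ins chosen for illustration, not constructions from the paper:

```python
import math
import random

def rip_experiment(data, mechanism, attacker, trials=2000, seed=0):
    """Estimate an attacker's accuracy in the Definition-1 game with k = n/2."""
    rng = random.Random(seed)
    n, k = len(data), len(data) // 2
    correct = 0
    for _ in range(trials):
        train = set(rng.sample(range(n), k))   # uniformly random size-k subset
        output = mechanism([data[i] for i in train], rng)
        i_star = rng.randrange(n)              # challenge point x* drawn from D
        y_star = 1 if i_star in train else 0
        if attacker(data[i_star], output) == y_star:
            correct += 1
    return correct / trials

def noisy_mean(train, rng, b=5.0):
    """Toy mechanism: mean of the training half plus Laplace(b) noise
    (sampled by inverse CDF)."""
    u = rng.random() - 0.5
    noise = -b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return sum(train) / len(train) + noise

# Naive attacker: guess "member" when the point lies close to the released mean.
attacker = lambda x, out: 1 if abs(x - out) < 1.0 else 0

acc = rip_experiment(list(range(20)), noisy_mean, attacker)
```

An algorithm is empirically consistent with $\eta$ -RIP only if no attacker's estimated accuracy exceeds $1/2 + \eta$; a simulation like this can refute, but never prove, the guarantee.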

The definition of RIP is phrased with respect to any classifier (whose randomness is independent of the randomness in $\mathcal{A}$; if the adversary knows our algorithm and our random seed, we are doomed). While this definition is compelling in that it bounds what any attacker can hope to accomplish, the need to consider all possible attack algorithms makes it difficult to work with technically. The following proposition shows that RIP is equivalent to a simpler definition which does not need to simultaneously consider all identification algorithms $\mathcal{I}$.

Proposition 2. Let $\mathbb{A} = \operatorname{Range}\left( \mathcal{A}\right)$ and let $\mu$ denote the probability law of $\mathcal{A}\left( {\mathcal{D}}_{\text{ train }}\right)$. Then $\mathcal{A}$ is $\eta$ -RIP if and only if

$$
{\int }_{\mathbb{A}}\max \left\{ {\mathbb{P}\left( {{\mathbf{x}}^{ * } \in {\mathcal{D}}_{\text{ train }} \mid \mathcal{A}\left( {\mathcal{D}}_{\text{ train }}\right) = A}\right) ,\mathbb{P}\left( {{\mathbf{x}}^{ * } \notin {\mathcal{D}}_{\text{ train }} \mid \mathcal{A}\left( {\mathcal{D}}_{\text{ train }}\right) = A}\right) }\right\} {d\mu }\left( A\right) \leq \frac{1}{2} + \eta .
$$

Furthermore, the optimal adversary is given by

$$
\mathcal{I}\left( {{\mathbf{x}}^{ * },A}\right) = \mathbb{1}\left\{ {\mathbb{P}\left( {{\mathbf{x}}^{ * } \in {\mathcal{D}}_{\text{ train }} \mid \mathcal{A}\left( {\mathcal{D}}_{\text{ train }}\right) = A}\right) \geq 1/2}\right\} .
$$

Proposition 2 makes precise the intuition that the optimal attacker should guess the more likely of ${\mathbf{x}}^{ * } \in {\mathcal{D}}_{\text{ train }}$ or ${\mathbf{x}}^{ * } \notin {\mathcal{D}}_{\text{ train }}$ conditional on the output of $\mathcal{A}$. The optimal attacker's overall accuracy is then computed by marginalizing this conditional statement over the output of $\mathcal{A}$.
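For a deterministic mechanism over an enumerable dataset, the integral in Proposition 2 becomes a finite sum, so the optimal attacker's accuracy can be computed exactly. The following sketch does this for a toy mechanism (releasing the sum of the training half); the dataset and mechanism are illustrative choices, not from the paper:

```python
from fractions import Fraction
from itertools import combinations

def optimal_attacker_accuracy(data, k, mech, x_star):
    """Exact optimal membership accuracy from Proposition 2 for a
    deterministic mechanism `mech` and a uniformly random size-k train set.
    `x_star` is the index of the challenge point in `data`."""
    subsets = list(combinations(range(len(data)), k))
    w = Fraction(1, len(subsets))            # uniform law mu over D_train
    post = {}                                # output a -> (P(x* in train, A=a), P(A=a))
    for s in subsets:
        a = mech([data[i] for i in s])
        pin, tot = post.get(a, (Fraction(0), Fraction(0)))
        post[a] = (pin + (w if x_star in s else Fraction(0)), tot + w)
    acc = Fraction(0)
    for a, (pin, tot) in post.items():       # integral of max(p, 1-p) d mu
        p = pin / tot
        acc += tot * max(p, 1 - p)
    return acc

# Toy example: D = {0,1,2,3}, k = 2, mechanism releases sum(D_train), x* = 0.
acc = optimal_attacker_accuracy([0, 1, 2, 3], 2, sum, x_star=0)
# acc = 5/6: the attacker identifies membership except when the sum is 3,
# so this mechanism is eta-RIP only for eta >= 1/3.
```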

Finally, RIP also satisfies a post-processing inequality similar to the classical result in DP (Dwork et al., 2014). This states that applying any local function to a RIP algorithm's output cannot degrade the privacy guarantee.

Theorem 3. Suppose that $\mathcal{A}$ is $\eta$ -RIP, and let $f$ be any (potentially randomized, with randomness independent of ${\mathcal{D}}_{\text{ train }}$) function. Then $f \circ \mathcal{A}$ is also $\eta$ -RIP.

For example, Theorem 3 is important for the application of RIP to generative model training: if we can guarantee that our generative model is $\eta$ -RIP, then any output produced by it is $\eta$ -RIP as well.

§ 3.4 RELATION TO DIFFERENTIAL PRIVACY

In this section, we make precise the relationship between RIP and the most common theoretical formulation of privacy: differential privacy (DP). We provide proof sketches for most of our results here; detailed proofs can be found in the Appendix. Our first theorem shows that DP is at least as strong as RIP.

Theorem 4. Let $\mathcal{A}$ be $\varepsilon$ -DP. Then $\mathcal{A}$ is $\eta$ -RIP with $\eta = \frac{1}{1 + {e}^{-\varepsilon }} - \frac{1}{2}$. Furthermore, this bound is tight, i.e. for any $\varepsilon > 0$, there exists an $\varepsilon$ -DP algorithm against which the optimal attacker has accuracy $\frac{1}{1 + {e}^{-\varepsilon }}$.

To help interpret this result, we remark that for $\varepsilon \approx 0$, we have $\frac{1}{1 + {e}^{-\varepsilon }} - \frac{1}{2} \approx \varepsilon /4$. Thus in the regime where strong privacy guarantees are required $\left( {\eta \approx 0}\right)$, we have $\eta \approx \varepsilon /4$.

In fact, DP is strictly stronger than RIP, which we make precise with the following theorem.

Theorem 5. For any $\eta > 0$, there exists an algorithm $\mathcal{A}$ which is $\eta$ -RIP but not $\varepsilon$ -DP for any $\varepsilon < \infty$.

In order to better understand the difference between DP and RIP, let us again examine Proposition 2. Recall that this proposition showed that, marginally over the output of $\mathcal{A}$, the conditional probability that ${\mathbf{x}}^{ * } \in {\mathcal{D}}_{\text{ train }}$ given the synthetic data should not differ too much from the unconditional probability that ${\mathbf{x}}^{ * } \in {\mathcal{D}}_{\text{ train }}$. The following proposition shows that DP requires this condition to hold for every output of $\mathcal{A}\left( {\mathcal{D}}_{\text{ train }}\right)$.

Proposition 6. If $\mathcal{A}$ is an $\varepsilon$ -DP synthetic data generation algorithm, then for any ${\mathbf{x}}^{ * }$, we have

$$
\frac{\mathbb{P}\left( {{\mathbf{x}}^{ * } \notin {\mathcal{D}}_{\text{ train }} \mid \mathcal{A}\left( {\mathcal{D}}_{\text{ train }}\right) }\right) }{\mathbb{P}\left( {{\mathbf{x}}^{ * } \in {\mathcal{D}}_{\text{ train }} \mid \mathcal{A}\left( {\mathcal{D}}_{\text{ train }}\right) }\right) } \leq {e}^{\varepsilon }\frac{\mathbb{P}\left( {{\mathbf{x}}^{ * } \notin {\mathcal{D}}_{\text{ train }}}\right) }{\mathbb{P}\left( {{\mathbf{x}}^{ * } \in {\mathcal{D}}_{\text{ train }}}\right) }.
$$

Proposition 6 can be thought of as an extension of the Bayesian interpretation of DP explained by Jordon et al. (2022). Namely, the definition of DP immediately implies that, for any two adjacent sets $D$ and ${D}^{\prime }$,

$$
\frac{\mathbb{P}\left( {{\mathcal{D}}_{\text{ train }} = D \mid \mathcal{A}\left( {\mathcal{D}}_{\text{ train }}\right) }\right) }{\mathbb{P}\left( {{\mathcal{D}}_{\text{ train }} = {D}^{\prime } \mid \mathcal{A}\left( {\mathcal{D}}_{\text{ train }}\right) }\right) } \leq {e}^{\varepsilon }\frac{\mathbb{P}\left( {{\mathcal{D}}_{\text{ train }} = D}\right) }{\mathbb{P}\left( {{\mathcal{D}}_{\text{ train }} = {D}^{\prime }}\right) }.
$$

§ 4 GUARANTEEING RIP VIA NOISE ADDITION

There are a number of mechanisms for guaranteeing DP which operate via simple noise addition (the Laplace mechanism) or sampling (the exponential mechanism) (Dwork et al., 2014). More recently, Abadi et al. (2016) showed how to make a small modification to the standard deep neural network training procedure to guarantee DP. In this section, we show that a small modification to standard training procedures can be used to guarantee RIP as well.

Suppose that $\mathcal{A}$ takes as input a data set $D$ and produces output $\theta \in {\mathbb{R}}^{d}$. For instance, $\mathcal{A}$ may compute a simple statistical query on $D$, such as mean estimation, but our results apply equally well in the case that e.g. $\mathcal{A}\left( D\right)$ are the weights of a neural network trained on $D$. If $\theta$ are the weights of a generative model and we can guarantee RIP for $\theta$, then by the post-processing inequality (Theorem 3), this guarantees privacy for any output of the generative model.

The distribution over training data (in our case, the uniform distribution over size- $n/2$ subsets of our complete dataset $\mathcal{D}$) induces a distribution over the output $\theta$. The question is the following: what is the smallest amount of noise we can add to $\theta$ which will guarantee RIP? If we add noise on the order of $\mathop{\max }\limits_{{D \sim {D}^{\prime } \subseteq \mathcal{D}}}\begin{Vmatrix}{\mathcal{A}\left( D\right) - \mathcal{A}\left( {D}^{\prime }\right) }\end{Vmatrix}$, then we can adapt the standard sensitivity-based proof for guaranteeing DP to show that a restricted version of DP (only with respect to subsets of $\mathcal{D}$) holds in this case, which in turn guarantees RIP. On the other hand, it seems possible that we should be able to reduce the amount of noise even further. Recall that by Propositions 2 and 6, RIP only asks for a marginal guarantee on the change in the posterior probability of ${\mathcal{D}}_{\text{ train }}$ given the output $A$, whereas DP asks for a conditional guarantee on the posterior. So while the maximum sensitivity seems necessary for a conditional guarantee, the moments of $\theta$ should suffice for a marginal guarantee. Theorem 7 shows that this intuition is correct.

Theorem 7. Let $\parallel \cdot \parallel$ be any norm, and let ${\sigma }^{M} \geq \mathbb{E}\parallel \theta - \mathbb{E}\theta {\parallel }^{M}$ be an upper bound on the $M$ -th central moment of $\theta$ with respect to this norm, over the randomness in ${\mathcal{D}}_{\text{ train }}$ and $\mathcal{A}$. Let $X$ be a random variable with density proportional to $\exp \left( {-\frac{1}{c\sigma }\parallel X\parallel }\right)$ with $c = {\left( {7.5}/\eta \right) }^{1 + \frac{2}{M}}$. Finally, let $\widehat{\theta } = \theta + X$. Then $\widehat{\theta }$ is $\eta$ -RIP, i.e., for any adversary $\mathcal{I}$,

$$
\mathbb{P}\left( {\mathcal{I}\left( {{\mathbf{x}}^{ * },\widehat{\theta }}\right) = {y}^{ * }}\right) \leq 1/2 + \eta .
$$

At first glance, Theorem 7 may appear to add noise of equal magnitude to all of the coordinates of $\theta$, regardless of how much each contributes to the central moment $\sigma$. However, by carefully selecting the norm $\parallel \cdot \parallel$, we can add non-isotropic noise to $\theta$ such that the marginal noise level reflects the variability of each specific coordinate of $\theta$. This is the content of Corollary 8.

Corollary 8. Let ${\sigma }_{i}^{2} \geq \mathbb{E}{\left| {\theta }_{i} - \mathbb{E}{\theta }_{i}\right| }^{2}$, and define $\parallel x{\parallel }_{\sigma ,2} = {\left( \mathop{\sum }\limits_{{i = 1}}^{d}\frac{{\left| {x}_{i}\right| }^{2}}{d{\sigma }_{i}^{2}}\right) }^{1/2}$. Generate ${Y}_{i} \sim \mathcal{N}\left( {0,{\sigma }_{i}^{2}}\right)$, set $U = Y/\parallel Y{\parallel }_{\sigma ,2}$, and draw $r \sim \operatorname{Laplace}\left( {\left( \frac{6.16}{\eta }\right) }^{2}\right)$. Finally, set $X = {rU}$ and return $\widehat{\theta } = \theta + X$. Then $\widehat{\theta }$ is $\eta$ -RIP.
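The sampling procedure in Corollary 8 is mechanical to implement. Below is a minimal sketch following the corollary as stated (Gaussian direction normalized in the $\parallel \cdot \parallel_{\sigma,2}$ norm, Laplace radius of scale $(6.16/\eta)^2$, drawn here by inverse-CDF sampling):

```python
import math
import random

def rip_noise(sigma, eta, rng=None):
    """Sample the non-isotropic noise vector X of Corollary 8.
    sigma: per-coordinate standard-deviation bounds sigma_i (length d)."""
    rng = rng or random.Random(0)
    d = len(sigma)
    y = [rng.gauss(0.0, s) for s in sigma]      # Y_i ~ N(0, sigma_i^2)
    # ||Y||_{sigma,2} = sqrt( sum_i |Y_i|^2 / (d sigma_i^2) )
    norm = math.sqrt(sum((yi / si) ** 2 for yi, si in zip(y, sigma)) / d)
    u = [yi / norm for yi in y]                 # U = Y / ||Y||_{sigma,2}
    b = (6.16 / eta) ** 2                       # Laplace scale from Corollary 8
    p = rng.random() - 0.5                      # inverse-CDF Laplace sample
    r = -b * math.copysign(1.0, p) * math.log(1.0 - 2.0 * abs(p))
    return [r * ui for ui in u]

x = rip_noise([1.0, 0.5, 2.0], eta=0.1)
```

By construction $\parallel U \parallel_{\sigma,2} = 1$, so coordinates with larger $\sigma_i$ receive proportionally larger noise.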

When does RIP improve over DP? By Theorem 4, any DP algorithm gives rise to a RIP algorithm, so to guarantee RIP we never need to add more noise than the amount required to guarantee DP. However, Theorem 7 shows that RIP affords an advantage over DP when the variance of our algorithm's output (over subsets of size $n/2$) is much smaller than its sensitivity $\Delta$, defined as the maximum change in the algorithm's output when evaluated on two datasets which differ in only one element. For instance, the Laplace mechanism from DP requires noise which scales like $\Delta /\varepsilon$ to guarantee $\varepsilon$ -DP. It is easy to construct examples where the variance is much smaller than the sensitivity if the output of our "algorithm" is allowed to be completely arbitrary as a function of the input. However, it is more interesting to ask whether there are natural settings in which this occurs. Proposition 9 answers this question in the affirmative.

Proposition 9. For any finite $D \subseteq \mathbb{R}$, define $\mathcal{A}\left( D\right) = \frac{1}{\mathop{\sum }\limits_{{x \in D}}x}$. Given a dataset $\mathcal{D}$ of size $n$, define $\mathbb{D} = \{ D \subseteq \mathcal{D} : \left| D\right| = \lfloor n/2\rfloor \}$, and define

$$
{\sigma }^{2} = \operatorname{Var}\left( {\mathcal{A}\left( D\right) }\right) ,\;\Delta = \mathop{\max }\limits_{{D \sim {D}^{\prime } \in \mathbb{D}}}\left| {\mathcal{A}\left( D\right) - \mathcal{A}\left( {D}^{\prime }\right) }\right| .
$$

Here the variance is taken over $D \sim \operatorname{Unif}\left( \mathbb{D}\right)$. Then for all $n$, there exists a dataset $\mathcal{D}$ with $\left| \mathcal{D}\right| = n$ such that ${\sigma }^{2} = O\left( 1\right)$ but $\Delta = \Omega \left( {2}^{n/3}\right)$.
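Both quantities in Proposition 9 can be computed exhaustively for small datasets. The snippet below uses a hypothetical six-element dataset (not the construction from the proof) in which one half-subset sum is nearly zero; this only illustrates a gap between $\sigma$ and $\Delta$, whereas the exponential separation claimed above requires the carefully engineered dataset from the proof:

```python
from itertools import combinations

def variance_and_sensitivity(data):
    """Exhaustively compute Var(A(D)) and the adjacent-subset sensitivity
    of A(D) = 1 / sum(D) over all size-floor(n/2) subsets of `data`."""
    k = len(data) // 2
    vals = {s: 1.0 / sum(data[i] for i in s)
            for s in combinations(range(len(data)), k)}
    mean = sum(vals.values()) / len(vals)
    var = sum((v - mean) ** 2 for v in vals.values()) / len(vals)
    delta = max(abs(vals[s] - vals[t]) for s in vals for t in vals
                if len(set(s) & set(t)) == k - 1)  # subsets differing in one element
    return var, delta

# Hypothetical dataset: the subset {1, -1, 0.001} sums to 0.001, inflating Delta.
var, delta = variance_and_sensitivity([1.0, 1.0, 1.0, -1.0, -1.0, 0.001])
```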
We remark that similar results should hold for e.g. subset precision matrix queries, perhaps even without such a carefully constructed $\mathcal{D}$ if the size of the subset is comparable to the dimension of the data.

Algorithm 1 RIP via noise addition

Require: Private dataset $\mathcal{D}$, $\sigma$ estimation budget $B$, RIP parameter $\eta$

${\mathcal{D}}_{\text{ train }} \leftarrow \operatorname{RANDOMSPLIT}\left( {\mathcal{D},1/2}\right)$

# Estimate $\sigma$ if an a priori bound is not known

for $i = 1,\ldots ,B$ do

${\mathcal{D}}_{\text{ train }}^{\left( i\right) } \leftarrow \operatorname{RANDOMSPLIT}\left( {{\mathcal{D}}_{\text{ train }},1/2}\right)$

${\theta }^{\left( i\right) } \leftarrow \mathcal{A}\left( {\mathcal{D}}_{\text{ train }}^{\left( i\right) }\right)$

end for

$\bar{\theta } \leftarrow \frac{1}{B}\mathop{\sum }\limits_{{i = 1}}^{B}{\theta }^{\left( i\right) }$

${\sigma }^{2} \leftarrow \frac{1}{B - 1}\mathop{\sum }\limits_{{i = 1}}^{B}{\begin{Vmatrix}{\theta }^{\left( i\right) } - \bar{\theta }\end{Vmatrix}}^{2}$

# Add appropriate noise to the base algorithm's output

$U \leftarrow \operatorname{Unif}\left( \left\{ {u \in {\mathbb{R}}^{d} : \parallel u\parallel = 1}\right\} \right)$

$r \leftarrow \operatorname{Laplace}\left( {{\left( \frac{7.5}{\eta }\right) }^{2}\sigma }\right)$

$X \leftarrow {rU}$

return $\mathcal{A}\left( {\mathcal{D}}_{\text{ train }}\right) + X$
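Algorithm 1 can be sketched directly for a scalar-valued base algorithm (so $d = 1$ and the unit direction $U$ is just a sign, which a symmetric Laplace draw already supplies). This is an illustrative transcription, not the paper's reference implementation:

```python
import math
import random

def rip_release(data, alg, B=50, eta=0.1, seed=0):
    """Sketch of Algorithm 1 for a scalar-valued base algorithm `alg`:
    estimate sigma by bootstrapping over random half-splits, then release
    alg(D_train) plus Laplace((7.5/eta)^2 * sigma) noise."""
    rng = random.Random(seed)
    train = rng.sample(data, len(data) // 2)            # D_train <- RandomSplit(D, 1/2)
    # sigma estimation loop over B random half-splits of D_train
    thetas = [alg(rng.sample(train, len(train) // 2)) for _ in range(B)]
    theta_bar = sum(thetas) / B
    sigma = math.sqrt(sum((t - theta_bar) ** 2 for t in thetas) / (B - 1))
    # Laplace noise via inverse-CDF sampling
    p = rng.random() - 0.5
    noise = (-(7.5 / eta) ** 2 * sigma
             * math.copysign(1.0, p) * math.log(1.0 - 2.0 * abs(p)))
    return alg(train) + noise

# Example: privately release the mean of a toy dataset.
out = rip_release(list(range(100)), alg=lambda d: sum(d) / len(d))
```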

§ 5 SIMULATION RESULTS

To illustrate our theoretical results, we plot the noise level needed to guarantee RIP vs. the corresponding level of DP (with the correspondence given by Theorem 4) for the example in Proposition 9.

Refer to Fig. 1. Dotted lines refer to DP, while the solid line is for RIP. The $x$ -axis gives the best possible bound on the attacker's improvement in accuracy over random guessing (i.e., the parameter $\eta$ for an $\eta$ -RIP method) according to that method's guarantees. For DP, the value along the $x$ -axis is given by the (tight) correspondence in Theorem 4, namely $\eta = \frac{1}{1 + {e}^{-\varepsilon }} - \frac{1}{2}$. Here $\eta = 0$ corresponds to perfect privacy (the attacker cannot do any better than random guessing), while $\eta = \frac{1}{2}$ corresponds to no privacy (the attacker can determine membership with perfect accuracy). The $y$ -axis denotes the amount of noise that must be added to the non-private algorithm's output, as measured by the scale parameter of the Laplace noise. For RIP, by Corollary 8, this is ${\left( {6.16}/\eta \right) }^{2}\sigma$ where $\sigma$ is an upper bound on the standard deviation of the base algorithm over random subsets, and for DP this is $\frac{\Delta }{\log \frac{1 + {2\eta }}{1 - {2\eta }}}$. (This comes from solving $\eta = \frac{1}{1 + {e}^{-\varepsilon }} - \frac{1}{2}$ for $\varepsilon$, then using the fact that $\operatorname{Laplace}\left( {\Delta /\varepsilon }\right)$ noise must be added to guarantee $\varepsilon$ -DP.) For DP, the amount of noise necessary changes with the size $n$ of the private dataset. For RIP, the amount of noise does not change, so there is only one line.

The results show that even for small datasets $\left( {n \geq {36}}\right)$ and for $\eta \geq {0.01}$, direct noise accounting for RIP gives a large advantage over guaranteeing RIP via DP. In practice, datasets this small are uncommon. As $n$ increases beyond this modest range, the advantage in terms of noise reduction for RIP vs. DP quickly grows to many orders of magnitude and is not visible on the plot. (Refer to Proposition 9: the noise required for DP grows exponentially in $n$, while it remains constant in $n$ for RIP.)
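The two noise curves compared above reduce to two closed-form expressions. The sketch below evaluates them under assumed orders from Proposition 9 ($\sigma = 1$, $\Delta = 2^{n/3}$); the constants are taken from the figure discussion and Theorem 4:

```python
import math

def rip_noise_scale(sigma, eta):
    """Laplace scale guaranteeing eta-RIP via direct noise accounting:
    (6.16 / eta)^2 * sigma."""
    return (6.16 / eta) ** 2 * sigma

def dp_noise_scale(delta, eta):
    """Laplace scale for the eps-DP level matching eta via Theorem 4:
    solving eta = 1/(1 + e^{-eps}) - 1/2 gives eps = log((1+2*eta)/(1-2*eta)),
    and the Laplace mechanism needs scale delta / eps."""
    eps = math.log((1 + 2 * eta) / (1 - 2 * eta))
    return delta / eps

# Assumed orders from Proposition 9: sigma = O(1), Delta = Omega(2^{n/3}).
n, eta = 48, 0.05
rip = rip_noise_scale(1.0, eta)         # independent of n
dp = dp_noise_scale(2 ** (n / 3), eta)  # grows exponentially in n
```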

§ 6 CONCLUSION

In this work, we proposed a novel privacy property, re-identification privacy (RIP), and explained its properties and relationship with differential privacy (DP). The RIP property is more readily interpretable than the guarantees offered by DP. RIP also requires a smaller amount of noise to guarantee than DP does, and therefore can retain greater utility in practice. We proposed a simple "wrapper" method for guaranteeing RIP, which can be implemented with a minor modification both to simple statistical queries and to more complicated tasks such as the training procedure for parametric machine learning models.

Figure 1: Noise level vs. privacy guarantee for RIP and DP. For datasets with at least $n = {36}$ points and for almost all values of $\eta$, RIP allows us to add much less noise than what would be required by naively applying DP. For $n > {48}$, the amount of noise required by DP is so large that it will not appear on the plot.

Limitations As the example used to prove Theorem 5 shows, there are cases where apparently non-private algorithms can satisfy RIP. Thus, algorithms which satisfy RIP may require post-processing to ensure that the output is not one of the low-probability events in which data privacy is leaked. In addition, because RIP is defined with respect to a holdout set still drawn from $\mathcal{D}$, an adversary may be able to determine with high probability whether or not a given sample was contained in $\mathcal{D}$, rather than just in ${\mathcal{D}}_{\text{ train }}$, if $\mathcal{D}$ is sufficiently different from the rest of the population.

Future work Theorem 4 shows that DP implies RIP in general. However, Theorem 7 shows that a finer-grained analysis of a standard DP mechanism (the Laplace mechanism) is possible, allowing us to guarantee RIP with less noise. It seems plausible that a similar analysis can be undertaken for other DP mechanisms. In addition to these "wrapper" type methods which can be applied on top of existing algorithms, bespoke algorithms for guaranteeing RIP in particular applications (such as synthetic data generation) are also of interest. Noise addition is a simple and effective way to enforce privacy, but other classes of mechanisms may also be possible; for instance, is it possible to directly regularize a probabilistic model using Proposition 2? The connections between RIP and other theoretical notions of privacy (Rényi DP (Mironov, 2017), concentrated DP (Dwork & Rothblum, 2016), etc.) are also of interest. Lastly, this paper focused on developing the theoretical principles and guarantees of RIP, but systematic empirical evaluation is an important direction for future work. For practical membership inference attacks, particularly those against synthetic data and generative models rather than predictive models, there is still a gap between practical efficacy and the theoretical upper bounds. It is likely that this gap can be closed through a combination of improved privacy accounting and improved practical attacks. For the "shadow model" approach used by Stadler et al. (2022), improved computational efficiency is also of interest. These improved attacks will in turn allow model developers to better audit the empirical privacy limitations of their methods.
NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/Vz_gE-nrFu9/Initial_manuscript_md/Initial_manuscript.md
ADDED
@@ -0,0 +1,244 @@
# Towards Reasoning-Aware Explainable VQA

Anonymous Author(s)

Affiliation

Address

email

## Abstract

The domain of joint vision-language understanding, especially in the context of reasoning in Visual Question Answering (VQA) models, has garnered significant attention in the recent past. While most existing VQA models focus on improving accuracy, the way models arrive at an answer is oftentimes a black box. As a step towards making the VQA task more explainable and interpretable, our method is built upon the SOTA VQA framework [1] by augmenting it with an end-to-end explanation generation module. In this paper, we investigate two network architectures, an LSTM and a Transformer decoder, as the explanation generator. Our method generates human-readable explanations while maintaining SOTA VQA accuracy on the GQA-REX (77.49%) and VQA-E (71.48%) datasets. Approximately 65.16% of the generated explanations are judged valid by humans, and roughly 60.5% are both valid and lead to the correct answer.

## 1 Introduction

Problems involving joint vision-language understanding are gaining more attention in both the Computer Vision (CV) and Natural Language Processing (NLP) communities. In recent years, complex reasoning problems in the vision-language domain have been in the spotlight. Building on classic Visual Question Answering (VQA), reasoning has become heavily involved. In [2, 3, 4], a model needs to reason over spatial and arithmetic relationships within an image-question pair. [5] incorporates spatial-temporal reasoning and domain-specific knowledge. A more challenging setting such as [6, 7] requires the capability to make use of external knowledge to perform reasoning in the vision and language domain.



Q: Is the stove light on? GT Ans: Yes. M1, M2, M3: Yes.

C: There is a stove and a bunch of knives. E: The space under the hood is brighter than the surrounding area.

Figure 1: A VQA example shows the importance of an explanation that leads to the correct answer.

Benefiting from large-scale pre-trained models in both single-modality and multi-modality settings, we have witnessed impressive improvements in VQA accuracy [8, 9, 10, 11], both in the standard setting and its variants. However, little attention is paid to how a model reaches an answer given an image-question pair. Consider Figure 1 as an illustrative example. The ground-truth answer to the question is straightforward, and the information in the image is sufficient to answer it. There are three different types of VQA models: Model type 1 predicts the correct answer without any evidence of how it was reached. Model type 2 answers the question correctly and provides a caption that summarizes the image; unfortunately, the caption fails to unveil the reasoning chain behind the correct answer. Model type 3 successfully generates a logically self-contained explanation which corresponds to the correct answer. Model types 1 and 2 cover most of the SOTA VQA models; surprisingly, very few models are similar to Model type 3. Motivated by examples like the one in Figure 1, we investigate the following two open topics in this paper: (i) whether a VQA model can generate a human-readable explanation while maintaining VQA accuracy; (ii) how good the generated explanations are and how to evaluate them. Our contribution is two-fold:

- We present easy-to-implement methods on top of a SOTA VQA framework which maintain VQA accuracy while generating human-readable textual explanations.

- We show both quantitative experimental results and human studies of the proposed explainable VQA method. Our experiments illustrate the urgency of proposing new metrics to evaluate the predicted explanations in vision-language reasoning problems such as VQA.

## 2 Related Work

Reasoning in VQA VQA [4] and its variants, as end-to-end tasks, have been well explored, and various models have kept achieving higher performance along the way. Built on top of the original setting, complex reasoning tasks are heavily involved. [4, 12] introduce arithmetic reasoning into the setting. [3, 13, 2] highlight the importance of fine spatial reasoning in the VQA problem. While the above datasets limit the reasoning domain to the image, [6, 14] propose visual language tasks that require external knowledge, sometimes even domain-specific knowledge, to answer the question. In [15], the emphasis is on the logical entailment problem. Recent methods such as [8, 9, 10] take advantage of unprecedented amounts of vision-language data and large model sizes, achieving SOTA performance on the above VQA datasets. As the reasoning problem in VQA becomes more and more complicated, it is urgent to have an interpretable way to analyze and diagnose a model and measure its reliability.

Explainable VQA and Metrics Very few of the SOTA VQA works study the explainability of the models, especially in the era of big data and big models. [16] is one of the standard datasets that focuses on explainability. A similarity score between the question and image caption is computed to check for question-relevant captions. The caption is then used to generate an explanation that is relevant to the question-answer pair. [17, 18] use attributes and captions of the image to provide a naive version of the explanation for the answer. Some works [19, 11] make use of textual knowledge from external sources to improve the interpretability of the model, but such external knowledge oftentimes cannot provide direct evidence for the answer. The neural-symbolic framework [20] has also been applied in the VQA domain, since it is naturally more interpretable than purely deep-learning-based methods. Recently, more works [21, 22] have been proposed for enhancing the explainability of the VQA problem using either natural or synthetic data. Another topic that is not well studied in explainable VQA is the evaluation of explanations. In [16, 22], conventional NLP metrics such as ROUGE and BLEU scores are used to measure the quality of the generated explanation. Different from [16], [21] does not use a caption as the explanation; instead, it uses tokens representing a bounding box in the image to replace key parts in the scene graph.

## 3 Methodology

In this section, we describe our method in detail. Please refer to Figure 2 for an overview of the model flow and architecture. The proposed method consists of two major components: (i) coarse-to-fine visual language reasoning for VQA and (ii) an explanation generation module.

### 3.1 Extracting Features and Predicates

A pre-trained Faster-RCNN model [23] is used to extract features for each Region of Interest (RoI) in the image $I$. The image features are denoted as ${f}_{I}$. Similarly, a Faster-RCNN model is also used to extract objects and attributes, which form the image predicates. We generate the GloVe embedding [24] for each word in the set of image predicates, denoted as ${p}_{I}$. The words in the question $Q$ are also encoded using GloVe embeddings. The question embeddings are then passed through



Figure 2: An overview of coarse-to-fine VQA with explanation generation.

a GRU to extract sequential features ${f}_{Q}$. In addition, question predicates are extracted by passing the question through a stop-words filter. The stop words not only consist of words from NLTK [25], but also include those words in questions that occur less frequently (threshold = 10). Each question predicate is then encoded with a GloVe embedding. Question predicates are denoted as ${p}_{Q}$.
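The predicate-extraction step described above can be sketched with plain string processing. The stop-word list below is an illustrative stand-in for the NLTK list, and the frequency threshold matches the description in the text:

```python
from collections import Counter

# Illustrative NLTK-style stop-word subset (the paper uses the full NLTK list).
STOP_WORDS = {"is", "the", "a", "on", "what", "of", "in"}

def build_stop_words(questions, base=STOP_WORDS, threshold=10):
    """Extend the base stop-word list with question words occurring
    fewer than `threshold` times, per Section 3.1."""
    counts = Counter(w for q in questions for w in q.lower().split())
    return base | {w for w, c in counts.items() if c < threshold}

def question_predicates(question, stop_words):
    """Question predicates p_Q: the non-stop-word tokens of the question."""
    return [w for w in question.lower().split() if w not in stop_words]

sw = build_stop_words(["is the stove light on"] * 12)
preds = question_predicates("is the stove light on", sw)  # ['stove', 'light']
```

Each surviving predicate would then be mapped to its GloVe embedding.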

### 3.2 Coarse-to-Fine Reasoning for VQA

VQA can generally be formulated as $\left( {I, Q}\right) \rightarrow \mathbf{a}$, where $\mathbf{a} \in \mathcal{A}$ and $\mathcal{A}$ is the set of answers. Usually, the answer set $\mathcal{A}$ is filtered by a frequency threshold from the annotated answers. The coarse-to-fine reasoning framework can be formalized as:

$$
{\mathbf{a}}^{ * } = \arg \mathop{\max }\limits_{\mathbf{a}}{\mathbf{{CFR}}}_{\theta }\left( {\mathbf{a} \mid {f}_{I},{p}_{I},{f}_{Q},{p}_{Q}}\right) \tag{1}
$$

$$
= \arg \mathop{\max }\limits_{\mathbf{a}}\mathbf{{SR}}\left( {\mathbf{a} \mid \mathbf{{IF}}\left( {{f}_{I},{p}_{I},{f}_{Q},{p}_{Q}}\right) ,\mathbf{{MM}}\left( {{f}_{I},{p}_{I},{f}_{Q},{p}_{Q}}\right) }\right)
$$

where ${\mathbf{{CFR}}}_{\theta }$ is an end-to-end module with learnable parameters $\theta$. It consists of three modules: an information filtering module $\mathbf{{IF}}$, a multimodal learning module $\mathbf{{MM}}$, and a semantic reasoning module $\mathbf{{SR}}$.
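The composition in Eq. (1) can be sketched structurally as follows. All three modules here are toy stubs standing in for the learned networks; only the data flow (filter, fuse, score, argmax over the answer set) mirrors the formulation:

```python
def cfr_predict(features, answers, IF, MM, SR):
    """argmax_a SR(a | IF(features), MM(features)) -- the structure of Eq. (1).
    `features` stands for the tuple (f_I, p_I, f_Q, p_Q)."""
    fine = IF(*features)     # filtered, fine-grained information
    coarse = MM(*features)   # coarse-grained joint representation
    return max(answers, key=lambda a: SR(a, fine, coarse))

# Toy stubs standing in for the learned modules:
pred = cfr_predict(
    features=([0.2, 0.8], ["stove"], [0.5], ["light"]),
    answers=["yes", "no"],
    IF=lambda fI, pI, fQ, pQ: sum(fI) + sum(fQ),
    MM=lambda fI, pI, fQ, pQ: len(pI) + len(pQ),
    SR=lambda a, fine, coarse: fine + coarse if a == "yes" else 0.0,
)
```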
**Information Filtering** Since the features come from pre-trained models, they may be noisy or contain incorrect information. This module removes unnecessary information and helps determine the importance of each RoI in the image for a given question.
**Multimodal Learning** Bilinear Attention Networks are used to learn features at both coarse-grained and fine-grained levels. The coarse-grained module operates on the image and question features and predicates and produces a joint representation at the coarse-grained level. The fine-grained module learns the correlation between the filtered image and question information and produces a joint representation at the fine-grained level.
**Semantic Reasoning** This module learns selective information from the outputs of both the coarse-grained and fine-grained modules. The joint embedding from this module is fed into a multi-layer perceptron for answer prediction and into the explanation module for explanation generation.

### 3.3 Explanation Generation

The joint embedding from the semantic reasoning module is used to train an explanation generator with ground-truth explanations as supervision. The VQA backbone is augmented with the explanation generation module. Two architectures are evaluated for explanation generation: (i) Long Short-Term Memory (LSTM), and (ii) Transformer Decoder. The LSTM consists of 2 layers with an input dimension of 768. The Transformer Decoder has an input dimension of 768 and 8 attention heads. In both cases, the input is the joint embedding, and the module is trained on ground-truth explanations from the dataset (discussed in the following section) using a cross-entropy loss for each word. Suppose we have an explanation $\mathbf{E} = \left( {{w}_{1},\ldots ,{w}_{i},\ldots ,{w}_{l}}\right)$ , where ${w}_{i} \in \mathbb{V}$ , the vocabulary, and $l$ is the length of the explanation. The explanation can therefore be represented as a sequence of one-hot encoded vectors. The loss function is given by:

$$
{L}_{\text{expl }} = - \frac{1}{l \cdot \left| \mathbb{V}\right| } \cdot \mathop{\sum }\limits_{{i = 1}}^{l}\mathop{\sum }\limits_{{k = 1}}^{\left| \mathbb{V}\right| }{y}_{i, k} \cdot \log \left( {p\left( {w}_{i, k}\right) }\right) \tag{2}
$$

where ${y}_{i, k}$ is the ${k}^{th}$ entry of the one-hot vector for the ${i}^{th}$ word in the ground-truth explanation and $p\left( {w}_{i, k}\right)$ is the probability of the ${k}^{th}$ word in $\mathbb{V}$ at the ${i}^{th}$ time step. We also use teacher forcing to train the explanation module with an auto-regressive cross-entropy loss.
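With one-hot targets, the double sum in Equation 2 collapses to a single log-probability per time step, normalized by $l \cdot \left| \mathbb{V}\right|$. A minimal Python sketch under that reading (function and argument names are illustrative, not the authors' code):

```python
import math

def explanation_loss(probs, target_ids):
    """Eq. 2 with one-hot targets: only the ground-truth word contributes
    at each step, so the inner sum keeps one log-prob per step; the result
    is normalized by l * |V| as in the equation.

    probs:      length-l list; probs[i] is the predicted distribution over
                the vocabulary at time step i.
    target_ids: length-l list of ground-truth word indices.
    """
    l, vocab_size = len(target_ids), len(probs[0])
    total = sum(math.log(probs[i][target_ids[i]]) for i in range(l))
    return -total / (l * vocab_size)
```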

## 4 Experiments

### 4.1 Datasets and Evaluation



Figure 3: Examples from (a) GQA-REX dataset, (b) VQA-E dataset.

As discussed in Section 2, only a limited number of datasets provide annotated explanations along with answers. Owing to their large size, we perform our experiments on the GQA-REX and VQA-E datasets, although each has its own limitations. In this section, we measure the accuracy of the predicted answer and the quality of the generated explanation. To evaluate the predicted answer, we use the VQA score as the metric. Unfortunately, accurate metrics for explanation evaluation do not exist. We therefore report qualitative results from a human study as well as quantitative results using conventional NLP metrics such as ROUGE and BLEU.
**GQA-REX** contains explanations for almost 98% of the samples in the GQA-balanced dataset. It contains around 1.04M question-answer (QA) pairs spanning 82k images, with annotated explanations (1 explanation per QA pair). However, the explanations follow the reasoning framework proposed in [21] and are therefore not completely human-readable (see Figure 3(a)). Although they can be converted to human-readable form using information from scene graphs, there are instances of grammatical inaccuracy.
**VQA-E** contains explanations for around 40% of the QA pairs in the VQA 2.0 dataset (1 explanation per QA pair). The explanations are generated by comparing similarity scores between caption candidates and the ground-truth question-answer pair. It is therefore not surprising that the explanations read more like image captions that contain the answer. Figure 3(b) illustrates two examples from the VQA-E dataset.

### 4.2 VQA Experimental Results

We use the CFRF [1] model as the backbone and augment it with an explanation module based on (i) LSTM, (ii) Transformer Decoder. The baseline model is trained without any explanations as supervision. Since our goal is to generate explanations while maintaining the VQA performance, we incorporate both the loss for the VQA answer and the supervision from the ground-truth explanation. In order to investigate the impact of the two different training signals, we design the loss function for the end-to-end training as follows: $L = \alpha {L}_{\text{ans }} + \left( {1 - \alpha }\right) {L}_{\text{expl }}$ , where ${L}_{\text{ans }}$ is the cross-entropy loss between the predicted answer and the ground-truth answer, ${L}_{\text{expl }}$ is the loss function of the explanation module, as given by Equation 2, and $\alpha \in \left\lbrack {0,1}\right\rbrack$ is the balance factor. As shown in Table 1, our methods successfully maintain the VQA scores while generating textual explanations.

<table><tr><td>Dataset</td><td>Expl. Model</td><td>$\alpha$</td><td>VQA score</td></tr><tr><td rowspan="6">VQA-E[16]</td><td>N/A</td><td>N/A</td><td>71.48</td></tr><tr><td>LSTM</td><td>0.25</td><td>71.36</td></tr><tr><td>LSTM</td><td>0.50</td><td>71.55</td></tr><tr><td>LSTM</td><td>0.75</td><td>71.53</td></tr><tr><td>LSTM</td><td>1.0</td><td>71.32</td></tr><tr><td>Transformer</td><td>0.50</td><td>71.46</td></tr><tr><td rowspan="5">GQA-REX[21]</td><td>N/A</td><td>N/A</td><td>77.49</td></tr><tr><td>LSTM</td><td>0.25</td><td>75.08</td></tr><tr><td>LSTM</td><td>0.50</td><td>77.16</td></tr><tr><td>LSTM</td><td>1.0</td><td>77.33</td></tr><tr><td>Transformer</td><td>0.50</td><td>77.06</td></tr></table>

Table 1: VQA scores of the predicted answers from our method on VQA-E and GQA-REX validation datasets.
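The combined objective described above is a convex combination of the two losses. A one-line sketch (the function name is illustrative):

```python
def total_loss(loss_ans, loss_expl, alpha):
    """Balance the answer loss and the explanation loss (Section 4.2).
    alpha = 1.0 recovers answer-only training; alpha = 0.0 trains on
    explanation supervision alone."""
    assert 0.0 <= alpha <= 1.0, "alpha is a balance factor in [0, 1]"
    return alpha * loss_ans + (1.0 - alpha) * loss_expl
```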

### 4.3 Results of Generated Explanations

**Quantitative Results** We use the explanations generated by the CFRF+LSTM model corresponding to $\alpha = {0.75}$ (see Table 1). The BLEU-1 and ROUGE scores are presented; note that the ROUGE scores are F1 scores. As shown in Table 2, although our method outperforms the baseline, the absolute scores are not satisfactory.

<table><tr><td>Dataset</td><td>Model</td><td>BLEU-1</td><td>ROUGE-1</td><td>ROUGE-2</td><td>ROUGE-L</td></tr><tr><td rowspan="2">VQA-E val</td><td>Baseline[16]</td><td>0.268</td><td>-</td><td>-</td><td>0.249</td></tr><tr><td>CFRF+LSTM</td><td>0.33</td><td>0.364</td><td>0.117</td><td>0.325</td></tr></table>

Table 2: Quantitative evaluation of the generated explanation on VQA-E validation set.

As mentioned in Section 2, there is no standard practice in the VQA domain for quantitatively evaluating generated explanations. Although both VQA-E and GQA-REX suggest using conventional NLP metrics such as ROUGE and BLEU scores to evaluate explanations, this is not ideal. These metrics are designed for string matching in the form of overlapping n-grams. Figure 4 illustrates why such metrics are unreliable in practice.
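To make the string-matching nature of these metrics concrete, here is a self-contained sketch of single-reference BLEU-1 (clipped unigram precision with a brevity penalty). It rewards word overlap regardless of whether the explanation actually supports the answer; this is an illustrative simplification, not the evaluation code used in the paper.

```python
import math
from collections import Counter

def bleu1(candidate, reference):
    """Single-reference BLEU-1: clipped unigram precision times a brevity
    penalty. Pure string matching; no notion of reasoning validity."""
    cand, ref = candidate.lower().split(), reference.lower().split()
    ref_counts = Counter(ref)
    clipped = sum(min(c, ref_counts[w]) for w, c in Counter(cand).items())
    precision = clipped / max(len(cand), 1)
    # Brevity penalty: penalize candidates shorter than the reference.
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * precision
```

A wrong explanation that happens to reuse the reference's color words can still score high on such unigram overlap, which is exactly the failure mode discussed next.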



Figure 4: Problem with using string matching metrics to evaluate generated explanations in VQA.

In Figure 4(a), our model predicts the correct answer. However, according to the string matching metrics, the quality of the explanation is poor; in fact, both the generated explanation and the ground-truth explanation in Figure 4(a) are annotated as valid by human annotators. The generated explanation in Figure 4(b) is almost identical to the ground truth, and both are approved by human subjects as valid explanations for the answer. In Figure 4(c), even though the predicted explanation is wrong, the string matching score is very high because the overlapping keywords are color words. These examples lead to the conclusion that a more reliable metric is needed to evaluate explanations in the VQA problem.
**Human Study Setup** Since no mature quantitative metrics are available, we bring humans into the loop and conduct a human subject study on Amazon Mechanical Turk (AMT). The goal of the study is to evaluate the quality of the explanations through human annotation. An example of a human intelligence task (HIT) is shown in Figure 5:



Figure 5: An example of the HIT in the human study.

Given an image-question pair from the VQA-E validation set, we design two questions for the annotators. Both questions ask whether an explanation leads to the answer, but the contexts differ. In the first question, both the explanation and the answer are generated by our model. In the second question, we provide the ground-truth explanation and answer. The subjects have the same four options in both cases: (i) Yes; (ii) No, but contains the answer;
<table><tr><td>Context</td><td>Yes</td><td>No, but contains the Ans.</td><td>$\mathbf{{No}}$</td><td>Not determined</td></tr><tr><td>predicted [total]</td><td>56.46%</td><td>9.02%</td><td>34.12%</td><td>0.4%</td></tr><tr><td>predicted [unique]</td><td>65.16 %</td><td>2.01%</td><td>32.8%</td><td>0.03%</td></tr><tr><td>ground-truth [total]</td><td>83.90%</td><td>5.12%</td><td>10.98%</td><td>0%</td></tr><tr><td>ground-truth [unique]</td><td>93.12%</td><td>0.57%</td><td>6.31%</td><td>0%</td></tr></table>
Table 3: Statistics of the raw human annotation data. It contains 4735 unique examples from the VQA-E validation set. Each job is distributed to 3 different annotators to eliminate potential bias.
(iii) No; (iv) Not determined. Annotators do not know which context is the ground truth. Specifically, option (ii) "No, but contains the answer" means the explanation contains a sub-string that matches the predicted answer but does not lead to the answer. Option (iv) "Not determined" means the explanation leads to the answer, but the reasoning chain may be contradictory.
**Human Approved Results** We randomly select 4735 unique image-question pairs from the VQA-E validation set for the human study. Each image-question pair makes up one HIT with the same setting as in Figure 5. In order to eliminate individual bias, we assign each HIT to three different workers. In total, we received ${4735} \times 3 = {14205}$ responses from 111 subjects. The raw distribution of annotations over all 14205 responses (predicted [total] and ground-truth [total]) is shown in Table 3. From the total set of responses, HITs without consensus among the three annotators (all three responses differ) are discarded (869 out of 4735). We then take the majority vote, i.e. the mode, over the remaining unique HITs. Among the 3866 unique HITs, 65.16% of the generated explanations lead to the predicted answers, while 2.01% of them contain the answers but make no sense; 32.8% of the generated explanations fail to make a connection with the predicted answers. On the other hand, 93.12% of the ground-truth explanations successfully lead to the ground-truth answers. According to [16], because the ground-truth explanations are selected by comparing the similarity between the question-ground-truth-answer pair and the caption candidates, most of them are valid.
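The consensus filtering described above can be sketched as a simple majority vote over the three annotator labels, discarding HITs where all three disagree (illustrative code, not the authors' pipeline):

```python
from collections import Counter

def majority_vote(responses):
    """Mode of three annotator labels; returns None when all three
    disagree (such HITs are discarded, 869 of 4735 in the study)."""
    label, count = Counter(responses).most_common(1)[0]
    return label if count >= 2 else None
```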
<table><tr><td rowspan="2"/><td colspan="2">Predicted</td><td colspan="2">Ground-truth</td></tr><tr><td>Valid Expl.</td><td>Invalid Expl.</td><td>Valid Expl.</td><td>Invalid Expl.</td></tr><tr><td>Correct Ans.</td><td>56.39%</td><td>23.77%</td><td>93.11%</td><td>6.88%</td></tr><tr><td>Wrong Ans.</td><td>8.77%</td><td>11.05%</td><td>-</td><td>-</td></tr></table>
Table 4: Ratio of valid/invalid explanation based on the correctness of the predicted answer.
Besides raw annotations, we provide a more direct breakdown in Table 4. Among the 3866 unique HITs, in 56.39% of the cases our model both predicts the correct answer and generates a valid explanation. 23.77% of the explanations are not valid although the predicted answers are correct; such an explanation may either make no sense or contain the answer without any significant meaning. In only 8.77% of the cases does our model generate a valid explanation yet predict a wrong answer. On the other hand, we also observe that 6.88% of the ground-truth explanations are not reasonable. Overall, our model answers questions correctly and generates valid explanations approximately 60.5% of the time.

## 5 Conclusion and Future Work

We explore the task of Explainable Visual Question Answering (Explainable-VQA). We leverage the coarse-to-fine reasoning framework as the VQA backbone and augment it with an explanation generation module using two architectures: LSTM and Transformer Decoder. Our model generates an explanation along with an answer while maintaining close-to-SOTA VQA performance. We conduct both objective experiments and a human study to evaluate the generated explanations, highlighting the urgency of proposing new metrics for explainable VQA.
**Future Work** We plan to improve the quality of the generated explanations as well as leverage them to increase VQA accuracy. We also call for proper metrics to evaluate explanations in the VQA problem.
**Acknowledgement** We thank Nguyen et al., the authors of [1], for providing us with the features and predicates for the VQA 2.0 dataset and for answering all queries in a timely manner.
## References
[1] Binh X Nguyen, Tuong Do, Huy Tran, Erman Tjiputra, Quang D Tran, and Anh Nguyen. Coarse-to-fine reasoning for visual question answering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4558-4566, 2022.
[2] Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, Michael Bernstein, and Li Fei-Fei. Visual genome: Connecting language and vision using crowdsourced dense image annotations. 2016.
[3] Drew A Hudson and Christopher D Manning. Gqa: A new dataset for real-world visual reasoning and compositional question answering. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 6700-6709, 2019.
[4] Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. Vqa: Visual question answering. In Proceedings of the IEEE international conference on computer vision, pages 2425-2433, 2015.
[5] Rowan Zellers, Yonatan Bisk, Ali Farhadi, and Yejin Choi. From recognition to cognition: Visual commonsense reasoning. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
[6] Peng Wang, Qi Wu, Chunhua Shen, Anthony Dick, and Anton Van Den Hengel. Fvqa: Fact-based visual question answering. IEEE transactions on pattern analysis and machine intelligence, 40(10):2413-2427, 2017.
[7] Kenneth Marino, Mohammad Rastegari, Ali Farhadi, and Roozbeh Mottaghi. Ok-vqa: A visual question answering benchmark requiring external knowledge. In Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
[8] Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katie Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. arXiv preprint arXiv:2204.14198, 2022.
[9] Zelin Zhao, Karan Samel, Binghong Chen, et al. Proto: Program-guided transformer for program-guided tasks. Advances in Neural Information Processing Systems, 34:17021-17036, 2021.
[10] Wenhui Wang, Hangbo Bao, Li Dong, Johan Bjorck, Zhiliang Peng, Qiang Liu, Kriti Aggarwal, Owais Khan Mohammed, Saksham Singhal, Subhojit Som, et al. Image as a foreign language: Beit pretraining for all vision and vision-language tasks. arXiv preprint arXiv:2208.10442, 2022.
[11] Feng Gao, Qing Ping, Govind Thattai, Aishwarya Reganti, Ying Nian Wu, and Prem Natarajan. A thousand words are worth more than a picture: Natural language-centric outside-knowledge visual question answering. arXiv preprint arXiv:2201.05299, 2022.
[12] Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the V in VQA matter: Elevating the role of image understanding in visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6904-6913, 2017.
[13] Justin Johnson, Bharath Hariharan, Laurens Van Der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2901-2910, 2017.
[14] Kenneth Marino, Mohammad Rastegari, Ali Farhadi, and Roozbeh Mottaghi. Ok-vqa: A visual question answering benchmark requiring external knowledge. In Proceedings of the IEEE/cvf conference on computer vision and pattern recognition, pages 3195-3204, 2019.
[15] Alane Suhr, Stephanie Zhou, Ally Zhang, Iris Zhang, Huajun Bai, and Yoav Artzi. A corpus for reasoning about natural language grounded in photographs. arXiv preprint arXiv:1811.00491, 2018.
[16] Qing Li, Qingyi Tao, Shafiq Joty, Jianfei Cai, and Jiebo Luo. Vqa-e: Explaining, elaborating, and enhancing your answers for visual questions. In Proceedings of the European Conference on Computer Vision (ECCV), pages 552-567, 2018.
[17] Qing Li, Jianlong Fu, Dongfei Yu, Tao Mei, and Jiebo Luo. Tell-and-answer: Towards explainable visual question answering using attributes and captions. arXiv preprint arXiv:1801.09041, 2018.
[18] Jialin Wu, Zeyuan Hu, and Raymond J Mooney. Generating question relevant captions to aid visual question answering. arXiv preprint arXiv:1906.00513, 2019.
[19] Jialin Wu, Liyan Chen, and Raymond J Mooney. Improving vqa and its explanations by comparing competing explanations. arXiv preprint arXiv:2006.15631, 2020.
[20] Weixin Liang, Feiyang Niu, Aishwarya Reganti, Govind Thattai, and Gokhan Tur. Lrta: a transparent neural-symbolic reasoning framework with modular supervision for visual question answering. arXiv preprint arXiv:2011.10731, 2020.
[21] Shi Chen and Qi Zhao. Rex: Reasoning-aware and grounded explanation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15586-15595, 2022.
[22] Leonard Salewski, A Koepke, Hendrik Lensch, and Zeynep Akata. Clevr-x: A visual reasoning dataset for natural language explanations. In International Workshop on Extending Explainable AI Beyond Deep Models and Classifiers, pages 69-88. Springer, 2022.
[23] Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6077-6086, 2018.
[24] Jeffrey Pennington, Richard Socher, and Christopher D Manning. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532-1543, 2014.
[25] Edward Loper and Steven Bird. Nltk: The natural language toolkit. arXiv preprint cs/0205028, 2002.
## Appendix for Towards Reasoning-Aware Explainable VQA
Anonymous Author(s) Affiliation
Address email

Figure 1: (a) 3 examples for incorrectly predicted answer but correct explanation. (b) 3 examples for incorrectly predicted answer and incorrect explanation.
<table><tr><td> <img src="https://cdn.noedgeai.com/01964319-be27-7507-868a-d3caff7e1592_9.jpg?x=355&y=274&w=325&h=325&r=0"/> Q: What type of plant is this? GT Ans: broccoli GT Expl: a broccoli plant in the ground near other plant life Pred Ans: broccoli Pred Expl: a close up of a broccoli plant with leaves</td><td> <img src="https://cdn.noedgeai.com/01964319-be27-7507-868a-d3caff7e1592_9.jpg?x=739&y=279&w=321&h=318&r=0"/> Q: What sport is being performed? GT Ans: skateboarding GT Expl: a person that is doing a skateboarding trick Pred Ans: skateboarding Pred Expl: a man is doing a trick on a skateboard</td><td> <img src="https://cdn.noedgeai.com/01964319-be27-7507-868a-d3caff7e1592_9.jpg?x=1117&y=273&w=327&h=324&r=0"/> Q: Is the horse walking on a trail? GT Ans: no GT Expl: a man is riding a horse on the beach Pred Ans: no Pred Expl: a man is riding a horse on the beach</td></tr><tr><td> <img src="https://cdn.noedgeai.com/01964319-be27-7507-868a-d3caff7e1592_9.jpg?x=354&y=856&w=328&h=318&r=0"/> Q: Is this in the forest? GT Ans: yes GT Expl: a blue bird is sitting in the branch of a tree Pred Ans: yes Pred Expl: a bird is perched on the branch of a tree</td><td> <img src="https://cdn.noedgeai.com/01964319-be27-7507-868a-d3caff7e1592_9.jpg?x=736&y=857&w=325&h=314&r=0"/> Q: Who is in the car? GT Ans: cat GT Expl: a cat is sitting in a car near the dash Pred Ans: cat Pred Expl: a cat is sitting in the driver 's seat of a car</td><td> <img src="https://cdn.noedgeai.com/01964319-be27-7507-868a-d3caff7e1592_9.jpg?x=1125&y=859&w=327&h=312&r=0"/> Q: Is this building really tall? GT Ans: yes GT Expl: there is a very tall tower that has a clock on it Pred Ans: yes Pred Expl: a tall tower with a clock on the top of it</td></tr><tr><td> <img src="https://cdn.noedgeai.com/01964319-be27-7507-868a-d3caff7e1592_9.jpg?x=355&y=1438&w=325&h=310&r=0"/> Q: What color is the bed sheet? 
GT Ans: white GT Expl: there is a bed with white sheets and pillows layer on it Pred Ans: white Pred Expl: a bed with a white comforter and pillows on it</td><td> <img src="https://cdn.noedgeai.com/01964319-be27-7507-868a-d3caff7e1592_9.jpg?x=738&y=1449&w=328&h=306&r=0"/> Q: Is the bird sitting? GT Ans: yes GT Expl: a blue bird is sitting on a tree branch Pred Ans: yes Pred Expl: a bird is sitting on the branch of a tree</td><td> <img src="https://cdn.noedgeai.com/01964319-be27-7507-868a-d3caff7e1592_9.jpg?x=1115&y=1443&w=327&h=313&r=0"/> Q: Is it cold? GT Ans: yes GT Expl: a man in skis is standing in a snowy area Pred Ans: yes Pred Expl: a man is standing in the snow with skis and ski poles</td></tr></table>
Figure 2: 9 examples for correctly predicted answer and correct explanation.

Figure 3: 9 examples for correctly predicted answer and correct explanation.

Figure 4: 9 examples for correctly predicted answer but incorrect explanation.
NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/Vz_gE-nrFu9/Initial_manuscript_tex/Initial_manuscript.tex
ADDED

@@ -0,0 +1,245 @@

§ TOWARDS REASONING-AWARE EXPLAINABLE VQA

Anonymous Author(s)

Affiliation

Address

email

§ ABSTRACT

The domain of joint vision-language understanding, especially in the context of reasoning in Visual Question Answering (VQA) models, has garnered significant attention in the recent past. While most of the existing VQA models focus on improving the accuracy of VQA, the way models arrive at an answer is oftentimes a black box. As a step towards making the VQA task more explainable and interpretable, our method is built upon the SOTA VQA framework [1] by augmenting it with an end-to-end explanation generation module. In this paper, we investigate two network architectures, including LSTM and Transformer decoder, as the explanation generator. Our method generates human-readable explanations while maintaining SOTA VQA accuracy on the GQA-REX (77.49%) and VQA-E (71.48%) datasets. Approximately 65.16% of the generated explanations are approved to be valid by humans. Roughly 60.5% of the generated explanations are valid and lead to the correct answers.

§ 1 INTRODUCTION

Problems involving joint vision-language understanding are gaining more attention in both Computer Vision (CV) and Natural Language Processing (NLP) communities. In recent years, complex reasoning problems in the vision-language domain have been in the spotlight. Based on the classic Visual Question Answering (VQA), reasoning has been highly involved. In [2, 3, 4], a model needs to reason over spatial and arithmetic relationships within an image-question pair. [5] incorporates spatial-temporal reasoning and domain-specific knowledge. A more challenging setting such as [6, 7] requires the capability to make use of external knowledge to perform reasoning in the vision and language domain.

< g r a p h i c s >

Q: Is the stovelight on? GT Ans: Yes M1, M2, M3: Yes

C: There is a stove and a bunch of knives E: The space under the hood is brighter than the surrounding area,

Figure 1: A VQA example shows the importance of an explanation that leads to the correct answer.
Benefiting from the large-scale pre-trained models in both single modality and multi-modality, we witness impressive improvements in VQA accuracy [8, 9, 10, 11] in both the stock setting and its variants. However, we barely pay attention to how a model reaches an answer given an image-question pair. Let's take a look at Figure 1 as an illustrative example. The ground-truth answer to the question is straightforward and information from the image is sufficient to answer the question. There are three different types of VQA models: Model type 1 predicts the correct answer without any evidence on how to achieve it. Model type 2 answers the question correctly and provides a caption that summarizes the image. Unfortunately, the caption fails to unveil the reasoning chain behind predicting the correct answer to the question. Model type 3 successfully generates a logically self-contained explanation which corresponds to the correct answer. Both Model type 1 and Model type 2 cover most of the SOTA VQA models. Surprisingly, very few models exist that are similar to Model type 3. Motivated by examples like in Figure 1, we investigate the following two open topics in this paper: (i) whether a VQA model can generate a human-readable explanation while maintaining the VQA accuracy; (ii) how good are the generated explanations and how to evaluate them? Our contribution is two-fold:
* We present easy to implement methods on top of a SOTA VQA framework which maintains VQA accuracy while generating human-readable textual explanations.
* We show both quantitative experimental results and human-studies of the proposed explainable VQA method. Our experiments illustrate the urgency of proposing new metrics to evaluate the predicted explanations in vision-language reasoning problems such as VQA.

§ 2 RELATED WORK

Reasoning in VQA VQA [4] and its variants, as an end-to-end task, have been well explored, and various models have kept achieving higher performance along the way. Built on top of the original setting, complex reasoning tasks are heavily involved. [4, 12] introduces arithmetic reasoning into the setting. [3, 13, 2] highlight the importance of fine spatial reasoning in the VQA problem. While the above datasets limit the reasoning domain within the image, [6, 14] propose a visual language task that requires external knowledge, sometimes even domain-specific knowledge, to answer the question. In [15], the emphasis is on the logical entailment problem. Recent methods such as [8, 9, 10] take advantage of the unprecedented amount of vision-language data and large size of models, achieving SOTA performances on the above VQA datasets. As the reasoning problem in VQA is becoming more and more complicated, it is urgent to have an interpretable way to analyze and diagnose the model and measure its reliability.
Explainable VQA and Metrics Very few among the SOTA VQA works study the explainability of the models, especially in the era of big data and big model. [16] is one of the standard datasets that focuses on explainability. A similarity score between the question and image caption is computed to check for question-relevant captions. The caption is then used to generate an explanation that is relevant to the question-answer pair. [17, 18] use attributes and captions of the image to provide a naive version of the explanation to the answer. Some works [19, 11] make use of textual knowledge from external sources to improve the interpretability of the model. But such external knowledge oftentimes cannot provide direct evidence to the answer. The neural-symbolic framework [20] is also applied in the VQA domain since it is naturally more interpretable than the pure deep learning based methods. Recently, more works [21, 22] have been proposed for enhancing the explainability of the VQA problem using either natural or synthetic data. Another topic that is not well-studied in explainable VQA is the evaluation of explanations. In [16, 22], conventional NLP metrics such as ROUGE, BLEU scores are used to measure the quality of the generated explanation. Different from [16], [21] doesn't use a caption as the explanation, instead it uses tokens representing a bounding box in the image to replace key parts in the scene graph.
§ 3 METHODOLOGY
In this section, we describe our method in detail. Please refer to Figure 2 for an overview of the model flow and architecture. The proposed method consists of two major components, (i) coarse-to-fine visual language reasoning for VQA and (ii) explanation generation module.
§ 3.1 EXTRACTING FEATURES AND PREDICATES
A pre-trained Faster-RCNN model [23] is used to extract features for each Region of Interest (RoI) in the image $I$ . The image features are denoted as ${f}_{I}$ . Similarly, a Faster-RCNN model is also used to extract objects and attributes, which form the image predicates. We generate the GloVe embedding [24] for each word in the set of image predicates, denoted as ${p}_{I}$ . The words in the question $Q$ are also encoded using GloVe embeddings. The question embeddings are then passed through
Figure 2: An overview of coarse-to-fine VQA with explanation generation.
a GRU to extract sequential features ${f}_{Q}$ . In addition, question predicates are extracted by passing the question through a stop-words filter. The stop words include not only the words from NLTK [25] but also question words that occur infrequently (threshold = 10). Each question predicate is then encoded with a GloVe embedding. Question predicates are denoted as ${p}_{Q}$ .
§ 3.2 COARSE-TO-FINE REASONING FOR VQA
VQA can generally be formulated as $\left( {I,Q}\right) \rightarrow \mathbf{a}$ , where $a \in \mathcal{A}$ and $\mathcal{A}$ is the set of answers. Usually, the answer set $\mathcal{A}$ is obtained by filtering the annotated answers with a frequency threshold. The coarse-to-fine reasoning framework can be formalized as:
$$
{\mathbf{a}}^{ * } = \arg \mathop{\max }\limits_{a}{\mathbf{{CFR}}}_{\theta }\left( {\mathbf{a} \mid {f}_{I},{p}_{I},{f}_{Q},{p}_{Q}}\right) \tag{1}
$$

$$
= \arg \mathop{\max }\limits_{a}\mathbf{{SR}}\left( {\mathbf{a} \mid \mathbf{{IF}}\left( {{f}_{I},{p}_{I},{f}_{Q},{p}_{Q}}\right) ,\mathbf{{MM}}\left( {{f}_{I},{p}_{I},{f}_{Q},{p}_{Q}}\right) }\right)
$$
where ${\mathbf{{CFR}}}_{\theta }$ is an end-to-end module with learnable parameters $\theta$ . It consists of three modules: an information filtering module $\mathbf{IF}$ , a multimodal learning module $\mathbf{MM}$ , and a semantic reasoning module $\mathbf{{SR}}$ .
Information Filtering The extracted features may be noisy and contain incorrect information, as they come from pre-trained models. This module removes unnecessary information and helps determine the importance of each image RoI for a given question.
Multimodal Learning Bilinear Attention Networks are used to learn features at both coarse-grained and fine-grained levels. The coarse-grained module works with image and question features and predicates and produces a joint representation at the coarse-grained level. The fine-grained module learns the correlation of the filtered image and question information and learns a joint representation at the fine-grained level.
Semantic Reasoning This module learns selective information from both the coarse-grained and fine-grained module outputs. The joint embedding from this module is then fed into a multi-layer perceptron to perform answer prediction and to the explanation module for explanation generation.
§ 3.3 EXPLANATION GENERATION
The joint embedding from the semantic reasoning module is used to train an explanation generator with ground-truth explanations as supervision. The VQA backbone is augmented with the explanation generation module. Two architectures are evaluated for explanation generation: (i) Long Short-Term Memory (LSTM), (ii) Transformer Decoder. The LSTM architecture consists of 2 layers, with an input dimension of 768. The Transformer Decoder architecture has an input dimension of 768 and consists of 8 attention heads. In both cases, the input is the joint embedding, and the module is trained with a cross-entropy loss for each word, using ground-truth explanations from the dataset (discussed in the following section). Suppose we have an explanation $\mathbf{E} = \left( {{w}_{1},\ldots ,{w}_{i},\ldots ,{w}_{l}}\right)$ , where ${w}_{i} \in \mathbb{V}$ (the vocabulary) and $l$ is the length of the explanation. The explanation can therefore be represented as a sequence of one-hot encoded vectors. The loss function is given by:
$$
{L}_{\text{ expl }} = - \frac{1}{l \cdot \left| \mathbb{V}\right| } \cdot \mathop{\sum }\limits_{{i = 1}}^{l}\mathop{\sum }\limits_{{k = 1}}^{\left| \mathbb{V}\right| }{y}_{i,k} \cdot \log \left( {p\left( {w}_{i,k}\right) }\right) \tag{2}
$$
where ${y}_{i,k}$ is the ${k}^{th}$ entry of the one-hot vector for the ${i}^{th}$ word in the ground-truth explanation, and $p\left( {w}_{i,k}\right)$ is the probability of the ${k}^{th}$ word in $\mathbb{V}$ at the ${i}^{th}$ time step. We also use teacher forcing to train the explanation module with an auto-regressive cross-entropy loss.
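As a concrete illustration, the per-word cross-entropy of Equation 2 can be sketched in plain Python. This is a minimal sketch, not the paper's code; note that since ${y}_{i,k}$ is one-hot, only $l$ terms of the double sum are nonzero, so this version averages over the sequence length only (Equation 2's extra $1/\left| \mathbb{V}\right|$ factor rescales the loss by a constant):

```python
import math

def explanation_loss(probs, target_ids):
    """Average negative log-likelihood of the ground-truth explanation.

    probs: one probability distribution over the vocabulary per time step
           (a list of floats summing to 1 for each word position).
    target_ids: index of the ground-truth word at each step.
    """
    assert len(probs) == len(target_ids)
    total = 0.0
    for dist, tgt in zip(probs, target_ids):
        # the one-hot y_{i,k} selects only the ground-truth word's probability
        total -= math.log(dist[tgt])
    return total / len(probs)
```

During teacher forcing, `probs` at each step is produced by the decoder conditioned on the ground-truth prefix rather than its own previous predictions.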
§ 4 EXPERIMENTS
§ 4.1 DATASETS AND EVALUATION
Figure 3: Examples from (a) GQA-REX dataset, (b) VQA-E dataset.
As discussed in section 2, there are a limited number of datasets that come with annotated explanations along with answers. Owing to their large dataset sizes, we perform our experiments on the GQA-REX and VQA-E datasets, although they have their own limitations. In this section, we measure the accuracy of the predicted answer and the quality of the generated explanation. To evaluate the predicted answer, we use the VQA score as the metric. Unfortunately, there are no accurate metrics for explanation evaluation. Therefore, we report qualitative results from a human study as well as quantitative results using conventional NLP metrics such as ROUGE and BLEU.
GQA-REX contains explanations for almost 98% of the samples in the GQA-balanced dataset. It contains around ${1.04}\mathrm{M}$ question-answer (QA) pairs spanning ${82}\mathrm{k}$ images, with annotated explanations (1 explanation per QA pair). However, the explanations are consistent with the reasoning framework proposed in [21] and are therefore not completely human-readable (see Figure 3(a)). Although the explanations can be converted to human-readable form using information from scene graphs, there are instances of grammatical inaccuracy.
VQA-E contains explanations for around ${40}\%$ of the QA pairs in the VQA2.0 dataset (1 explanation per QA pair). The explanations are generated by comparing similarity scores between caption candidates and the ground-truth question-answer pair. It is therefore not surprising that the explanations read more like image captions that contain the answer. Figure 3(b) illustrates a couple of examples from the VQA-E dataset.
§ 4.2 VQA EXPERIMENTAL RESULTS
We use the CFRF [1] model as the backbone and augment it with an explanation module based on (i) LSTM or (ii) Transformer Decoder.
| Dataset | Expl. Model | $\alpha$ | VQA score |
|---|---|---|---|
| VQA-E [16] | N/A | N/A | 71.48 |
| | LSTM | 0.25 | 71.36 |
| | LSTM | 0.50 | 71.55 |
| | LSTM | 0.75 | 71.53 |
| | LSTM | 1.0 | 71.32 |
| | Transformer | 0.50 | 71.46 |
| GQA-REX [21] | N/A | N/A | 77.49 |
| | LSTM | 0.25 | 75.08 |
| | LSTM | 0.50 | 77.16 |
| | LSTM | 1.0 | 77.33 |
| | Transformer | 0.50 | 77.06 |
Table 1: VQA scores of the predicted answers from our method on VQA-E and GQA-REX validation datasets.
The baseline model is trained without any explanations as supervision. Since our goal is to generate explanations while maintaining the VQA performance, we incorporate both the loss for the VQA answer and the supervision from the ground-truth explanation. In order to investigate the impact of the two different training signals, we design the loss function for the end-to-end training as follows: $L = \alpha {L}_{\text{ ans }} + \left( {1 - \alpha }\right) {L}_{\text{ expl }}$ , where ${L}_{\text{ans}}$ is the cross-entropy loss between the predicted answer and the ground-truth answer, ${L}_{\text{ expl }}$ is the loss function of the explanation module, as given by Equation 2, and $\alpha \in \left\lbrack {0,1}\right\rbrack$ is the balance factor. As shown in Table 1, our methods successfully maintain the VQA scores while generating textual explanations.
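The role of the balance factor can be made explicit with a one-line sketch (scalars stand in for the two loss terms; this is an illustration, not the authors' code):

```python
def combined_loss(l_ans, l_expl, alpha=0.5):
    """L = alpha * L_ans + (1 - alpha) * L_expl, with alpha in [0, 1].

    alpha = 1.0 recovers pure VQA training; alpha = 0.0 trains only
    the explanation generator.
    """
    assert 0.0 <= alpha <= 1.0
    return alpha * l_ans + (1.0 - alpha) * l_expl
```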
§ 4.3 RESULTS OF GENERATED EXPLANATIONS
Quantitative Results We use the explanations generated by the CFRF+LSTM model with $\alpha = {0.75}$ (see Table 1). We present BLEU-1 and ROUGE scores; note that the ROUGE scores are F1 scores. As shown in Table 2, although our method outperforms the baseline, the absolute scores are not satisfactory.
| Dataset | Model | BLEU-1 | ROUGE-1 | ROUGE-2 | ROUGE-L |
|---|---|---|---|---|---|
| VQA-E val | Baseline [16] | 0.268 | - | - | 0.249 |
| VQA-E val | CFRF+LSTM | 0.33 | 0.364 | 0.117 | 0.325 |
Table 2: Quantitative evaluation of the generated explanation on VQA-E validation set.
As mentioned in section 2, in the VQA domain, there is no standard common practice to quantitatively evaluate generated explanations. Although both VQA-E and GQA-REX suggest using conventional NLP metrics such as ROUGE and BLEU scores to evaluate the explanation, it is not ideal. These metrics are particularly designed for string matching in the form of overlapping n-grams. Figure 4 illustrates why such metrics are practically unreliable.
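To see why n-gram overlap can mislead, consider a ROUGE-1-style unigram F1 on two toy cases (the sentences below are illustrative, not drawn from the dataset): a paraphrase that supports the same answer scores lower than a contradiction that happens to reuse the same tokens.

```python
from collections import Counter

def rouge1_f1(candidate, reference):
    """Unigram-overlap F1 between two whitespace-tokenized strings."""
    c, r = Counter(candidate.split()), Counter(reference.split())
    overlap = sum((c & r).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(c.values())
    recall = overlap / sum(r.values())
    return 2 * precision * recall / (precision + recall)

# Same meaning, modest overlap score:
paraphrase = rouge1_f1("the man holds a black umbrella",
                       "a dark umbrella is carried by the man")
# Opposite meaning, high overlap score:
contradiction = rouge1_f1("the umbrella is black",
                          "the umbrella is not black")
```

Here `contradiction` scores higher than `paraphrase`, even though only the paraphrase is a valid explanation, which is exactly the failure mode Figure 4 illustrates.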
Figure 4: Problem with using string matching metrics to evaluate generated explanations in VQA.
In Figure 4(a), our model predicts the correct answer. However, according to the string-matching metrics, the quality of the explanation is poor. In fact, both the generated explanation and the ground-truth explanation in Figure 4(a) are annotated as valid by human annotators. On the other hand, the generated explanation in Figure 4(b) is almost identical to the ground truth, and both are approved by human subjects as valid explanations for the answer. In Figure 4(c), even though the predicted explanation is wrong, the string-matching score is very high because the keywords are all color-related. These examples lead to the conclusion that we need a more reliable metric for evaluating explanations in the VQA problem.
Human Study Setup Since no mature quantitative metrics are available, we bring humans into the loop. We conduct a human subject study using Amazon Mechanical Turk (AMT). The goal of our study is to evaluate the quality of the explanations via human annotation. One example of a human intelligence task (HIT) is shown in Figure 5:
Figure 5: An example of the HIT in the human study.
Given an image-question pair from the VQA-E validation set, we design two questions for the annotators. Both questions are the same, asking whether an explanation leads to the answer. But the contexts are different. In the first question, both explanation and answer are generated by our model. In the second question, we provide a ground-truth explanation and answer. The subjects have the same four options to choose from in both cases. They are: (i) Yes; (ii) No, but contains the answer;
| Context | Yes | No, but contains the Ans. | No | Not determined |
|---|---|---|---|---|
| predicted [total] | 56.46% | 9.02% | 34.12% | 0.4% |
| predicted [unique] | 65.16% | 2.01% | 32.8% | 0.03% |
| ground-truth [total] | 83.90% | 5.12% | 10.98% | 0% |
| ground-truth [unique] | 93.12% | 0.57% | 6.31% | 0% |
Table 3: Statistics of the raw human annotation data. It contains 4735 unique examples from the VQA-E validation set. Each job is distributed to 3 different annotators to eliminate potential bias.
(iii) No; (iv) Not determined. Annotators do not know which context is the ground truth. Specifically, option (ii) "No, but contains the answer" means the explanation contains a sub-string that matches the predicted answer, but the explanation does not lead to the answer. Option (iv) "Not determined" means the explanation leads to the answer, but the reasoning chain may be contradictory.
Human Approved Results We randomly select 4735 unique image-question pairs from the VQA-E validation set for the human study. Each image-question pair makes up one HIT with the same setting as in Figure 5. In order to eliminate individual bias, we assign each HIT to three different workers. Therefore, in total, we received ${4735} \times 3 = {14205}$ responses from 111 subjects. The raw distribution of subject annotations for all 14205 responses (predicted [total] and ground-truth [total]) is shown in Table 3. From the total set of responses, questions for which there is no consensus among the three annotators (all three responses differ) are discarded (869 out of 4735). Following this, we calculate the vote using the mode, i.e., a majority vote over the unique HITs. Among the 3866 unique HITs, 65.16% of the generated explanations lead to the predicted answers, while 2.01% of them contain the answers but make no sense. 32.8% of the generated explanations fail to make a connection with the predicted answers. On the other hand, 93.12% of the ground-truth explanations successfully lead to the ground-truth answers. According to [16], because the ground-truth explanations are selected by comparing the similarity between the question-ground-truth-answer pair and the caption candidates, most of them are valid.
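The consensus filtering described above can be sketched as follows (a sketch assuming three string responses per HIT; the helper names are our own):

```python
from collections import Counter

def majority_vote(responses):
    """Return the majority label among three annotations, or None when
    all three annotators disagree (the HIT is then discarded)."""
    assert len(responses) == 3
    label, count = Counter(responses).most_common(1)[0]
    return label if count >= 2 else None

hits = [["Yes", "Yes", "No"], ["Yes", "No", "Not determined"]]
votes = [majority_vote(h) for h in hits]
kept = [v for v in votes if v is not None]  # the second HIT is discarded
```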
| | Predicted: Valid Expl. | Predicted: Invalid Expl. | Ground-truth: Valid Expl. | Ground-truth: Invalid Expl. |
|---|---|---|---|---|
| Correct Ans. | 56.39% | 23.77% | 93.11% | 6.88% |
| Wrong Ans. | 8.77% | 11.05% | - | - |
Table 4: Ratio of valid/invalid explanation based on the correctness of the predicted answer.
Besides raw annotations, we provide a more straightforward result, shown in Table 4. Among the 3866 unique HITs, we find that in 56.39% of the cases, our model both predicts the correct answer and generates a valid explanation. ${23.77}\%$ of the explanations are not valid although the predicted answers are correct; they may either make no sense or contain the answer without any supporting reasoning. In only 8.77% of the cases does our model generate a good explanation but predict a wrong answer. On the other hand, we also observe that ${6.88}\%$ of the ground-truth explanations are not reasonable. Therefore, our model is able to answer questions correctly and also generate valid explanations approximately 60.5% of the time.
§ 5 CONCLUSION AND FUTURE WORK
We explore the task of Explainable Visual Question Answering (Explainable-VQA). We leverage the Coarse-to-Fine reasoning framework as the VQA backbone and augment it with an explanation generation module using two architectures: LSTM and Transformer Decoder. Our model generates an explanation along with an answer while also maintaining close to SOTA VQA performance. We conduct both objective experiments and a human study to evaluate the generated explanation, pointing out the urgency of proposing new metrics for explainable VQA.
Future Work We plan to improve the quality of the generated explanations as well as leverage them to increase VQA accuracy. We also call for proper metrics to evaluate explanations in the VQA problem.
Acknowledgement We thank Nguyen et al., the authors of [1] for providing us with the features and predicates for the VQA 2.0 dataset, and helping with answering all queries in a timely manner.
NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/Y8PmDhBdmv/Initial_manuscript_md/Initial_manuscript.md
# A Fair Loss Function for Network Pruning
Anonymous Author(s)
Affiliation
Address
email
## Abstract
Model pruning can enable the deployment of neural networks in environments with resource constraints. While pruning may have a small effect on the overall performance of the model, it can exacerbate existing biases in the model such that subsets of samples see significantly degraded performance. In this paper, we introduce the performance weighted loss function, a simple modified cross-entropy loss function that can be used to limit the introduction of biases during pruning. Experiments using biased classifiers for facial classification and skin-lesion classification tasks demonstrate that the proposed method is a simple and effective tool that can enable existing pruning methods to be used in fairness-sensitive contexts.
## 1 Introduction
Deep learning models are large, requiring millions of operations to make an inference [1]. Deploying large neural networks to environments with limited computational resources, such as mobile and embedded devices, may be infeasible.
Pruning is a simple and common method for reducing the size of a neural network [2]. It involves identifying parameters that do not significantly affect the model's output and removing them from the network. Pruning enables the deployment of performant neural networks in resource constrained environments [3, 4]. However, recent research has shown that while overall accuracy of the model may be maintained while the model is compressed, pruning can exacerbate existing model biases, disproportionately affecting disadvantaged groups [5]. Pruning methods that are designed to preserve overall model performance may not prioritize the preservation of parameters that are only important for a small subset of samples.
This effect has significant implications for the implementation of pruning in real-world situations. Biases have been observed in artificial intelligence systems such as those used to classify chest X-ray images [6], recognize faces [7] and screen resumes [8]. Biases in models can increase the risk of unfair outcomes, preventing the implementation of the model. If pruning exacerbated a model's biases, it could increase the risk of unfair outcomes or limit the deployment of the pruned model. It is therefore important to prune in a manner that does not aggravate a model's biases.
In this paper we propose the performance weighted loss function as a simple method for boosting the fairness of data-driven methods for pruning convolutional filters in convolutional neural network image classifiers. The goal of our method is to enable the pruning of a significant number of model parameters without significantly exacerbating existing biases. The loss function consists of two small tweaks to the standard cross-entropy loss function to prioritize the model's performance for poorly-classified samples over well-classified samples. These tweaks can be used to extend existing data-driven pruning methods without requiring explicit attribute information.
We demonstrate the effectiveness of our approach by pruning classifiers using two different pruning approaches for the CelebA [9] and Fitzpatrick 17k [10] datasets. Our results show that the performance weighted loss function can enable existing pruning methods to prune neural networks without significantly increasing model bias.
## 2 Related Work
Many different pruning approaches have been proposed to reduce the size of CNNs while minimally impacting model accuracy. Pruning methods typically involve assigning a score to each parameter or group of parameters, removing parameters based on these scores and retraining the newly pruned network to recover lost accuracy [2].
The procedure by which parameters are identified to be pruned is the primary differentiator between pruning methods. There are a wide variety of scoring approaches used to identify parameters that are unimportant or redundant and can be removed from the network. Many approaches use parameter magnitudes to identify parameters to prune [11, 12]. Other approaches use gradient information [13], Taylor estimates of parameter importance [14, 15, 16] and statistical properties of future layers [17]. Some approaches involve learning the scores via parameters that control the flow of information through the network [18, 19].
However, almost all novel pruning approaches focus on the overall accuracy of the model after pruning. There are few pruning approaches that aim to improve or maintain the fairness of a pruned model. Hooker et al. [5] propose auditing samples affected by model compression, called Compression Identified Exemplars, as an approach for identifying and managing the negative effects of model compression. Wu et al. [20] propose Fairprune, a method for improving model bias using pruning. Instead of seeking to compress a model, Fairprune prunes parameters using a saliency metric to increase model fairness [20]. Xu and Hu [21] propose the use of knowledge distillation and pruning to reduce bias in natural language models. Joseph et al. [22] propose a multi-part loss function intended to improve the alignment between the predictions of the original and pruned models. They demonstrate that their method can have beneficial effects for fairness between classes.
## 3 Method
### 3.1 Motivation
In the unfair pruning situation described by Hooker et al. [5], model performance was more significantly impacted for certain sample subgroups. The highly impacted subgroups were characterized by poor representation in the training data and worse subgroup performance by the original model when compared to unimpacted groups. The performance decrement induced by the pruning process disproportionately impacts subgroups which are underrepresented and poorly classified.
To rectify this inequality, we can design a pruning process that prioritizes maintaining the performance of samples from the impacted subgroups. However, we do not need to develop a new pruning method from scratch to achieve this objective. Many existing pruning methods use data to identify which model parameters should be removed. Some methods use parameters learned via a loss minimization process, whereas others use values derived from gradients calculated with respect to a loss function. By modifying the loss function to prioritize samples from impacted subgroups, we can boost the fairness of existing pruning methods.
### 3.2 The Performance Weighted Loss Function
We make two different modifications to the standard cross-entropy loss function to transform it into the performance weighted loss function (PW loss). We first apply sample weighting to ensure that samples from impacted groups have a larger contribution to the loss function. We then transform the sample labels to ensure that we are not reinforcing undesirable model behaviours.
As the attribute information required to identify impacted subgroups is not always readily accessible, our weighting scheme does not depend on any external information. We instead use the output of the original model to determine each sample weight. We assign larger weights to samples that the original model was not able to classify confidently. The weight assigned to the $i$ th data sample, ${w}_{i}$ , is given by the following equation:
$$
{w}_{i} = \theta + {\left( 1 - {\widehat{y}}_{i}\right) }^{\gamma } \tag{1}
$$
where ${\widehat{y}}_{i}$ is the predicted probability given by the original model for the sample’s true class, $\theta \in \left\lbrack {0,1}\right\rbrack$ is the minimum weight value and $\gamma \geq 0$ controls the shape of the relation between ${\widehat{y}}_{i}$ and ${w}_{i}$ .
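A sketch of the weighting in Equation 1 (plain Python; the default $\theta$ and $\gamma$ values here are illustrative only, as the paper selects hyperparameters on a validation set):

```python
def sample_weight(y_hat, theta=0.1, gamma=2.0):
    """w_i = theta + (1 - y_hat) ** gamma  (Eq. 1).

    y_hat: the original model's predicted probability for the true class.
    Confidently classified samples get weights near theta; poorly
    classified samples get weights near theta + 1.
    """
    assert 0.0 <= y_hat <= 1.0 and 0.0 <= theta <= 1.0 and gamma >= 0.0
    return theta + (1.0 - y_hat) ** gamma
```

Setting `gamma = 0` gives every sample the same weight `theta + 1`, recovering an (unscaled) standard loss; larger `gamma` concentrates the weight on poorly classified samples.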
We also emphasize the model performance through the use of corrected soft-labels in the cross-entropy function. Rather than using the true labels of each sample, we use the output of the original model for the loss function in the pruning process. Without this change, the preservation of an originally poorly classified sample's prediction probability would result in a greater loss value than the preservation of an originally well classified sample's prediction probability. The use of true labels implicitly prioritizes the preservation of model performance for samples that have predictions closer to their true labels. Using the model output as soft-labels alleviates this implicit prioritization.
However, since we assign higher weights to samples that the original model classifies poorly while also using the original model's output as our labels, we would consequently assign the highest weights to incorrect labels. To avoid emphasizing incorrect behaviours, we correct the soft-labels. The corrected soft-label, ${\widehat{\mathbf{y}}}_{i}^{ * }$ , is defined as:
$$
{\widehat{\mathbf{y}}}_{i}^{ * } = \left\{ \begin{array}{ll} {\widehat{\mathbf{y}}}_{i} & \text{ if }{\widehat{C}}_{i} = {C}_{i} \\ {\mathbf{y}}_{i} & \text{ otherwise } \end{array}\right. \tag{2}
$$
where ${\widehat{\mathbf{y}}}_{i}$ contains the prediction probabilities derived from the model output for the $i$ th sample, ${\mathbf{y}}_{i}$ is the true label vector of the $i$ th sample, ${\widehat{C}}_{i}$ is the predicted class of the $i$ th sample and ${C}_{i}$ is the true class of the $i$ th sample. The corrected soft-label takes on the value of the model’s prediction probabilities when the prediction is correct and the true label when the prediction is incorrect.
By applying the performance weighting scheme and corrected soft-labels to the standard cross-entropy function, the performance weighted loss function, ${\mathcal{L}}_{PW}$ , is defined as:
$$
{\mathcal{L}}_{PW} = \mathop{\sum }\limits_{{i = 1}}^{N}{w}_{i}{l}_{CE}\left( {{\widehat{\mathbf{y}}}_{i}^{ * },{\widehat{\mathbf{y}}}_{i}^{\prime }}\right) \tag{3}
$$
where ${\widehat{\mathbf{y}}}_{i}^{\prime }$ contains the prediction probabilities derived from the model output for the $i$ th sample after pruning, ${l}_{CE}\left( {{\widehat{\mathbf{y}}}_{i}^{ * },{\widehat{\mathbf{y}}}_{i}^{\prime }}\right)$ is the cross-entropy between the corrected soft-label and the prediction probabilities of the pruned model for the $i$ th sample, and $N$ is the number of samples in the batch.
By using this loss function with existing data-driven pruning methods, we can reduce the bias exaggerating effect of pruning by emphasizing samples that are more likely to be negatively affected by pruning.
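Putting Equations 1–3 together, the full loss can be sketched in plain Python (lists stand in for probability vectors; this is a sketch under the stated definitions, not the authors' implementation, and the default `theta`/`gamma` are illustrative):

```python
import math

def pw_loss(orig_probs, pruned_probs, true_ids, theta=0.1, gamma=2.0):
    """Performance weighted loss over a batch (Eqs. 1-3).

    orig_probs / pruned_probs: per-sample class-probability lists from
    the original and pruned models; true_ids: true class indices.
    """
    total = 0.0
    for p_orig, p_pruned, c in zip(orig_probs, pruned_probs, true_ids):
        y_hat = p_orig[c]                   # original confidence on true class
        w = theta + (1.0 - y_hat) ** gamma  # Eq. 1: performance weight
        if max(range(len(p_orig)), key=p_orig.__getitem__) == c:
            target = p_orig                 # Eq. 2: keep correct soft-label
        else:
            target = [1.0 if k == c else 0.0 for k in range(len(p_orig))]
        # Eq. 3: weighted cross-entropy between target and pruned output
        total += w * -sum(t * math.log(max(q, 1e-12))
                          for t, q in zip(target, p_pruned))
    return total
```

In a framework such as PyTorch, the same computation would be applied batch-wise to logits, with `orig_probs` precomputed once from the frozen original model.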
## 4 Experiments
### 4.1 Experimental Set-up
We applied the PW loss to two different pruning methods. The first method is AutoBot [18], an accuracy-preserving pruning method that uses trainable bottleneck parameters to limit the flow of information through the model. The second method uses an importance metric derived from the Taylor expansion of the loss function [14]. In both of our implementations, we pruned whole convolutional filters rather than individual neurons. As pruned filters can be fully removed from the model, rather than being set to zero, filter pruning is a simple method for directly reducing the FLOPS of a model.
In the AutoBot method, the bottlenecks are optimized by minimizing a loss function that includes the cross-entropy between the original and pruned model outputs, as well as terms that encourage the bottlenecks to limit information moving through the model, achieving a target number of FLOPS [18]. We applied the performance weighted loss function to the method by replacing the cross-entropy term in the loss function with the performance weighted loss function. Additionally, we also used the performance weighted loss function when retraining the model after pruning.

The importance metric of the Taylor expansion method is formed using the gradient of the loss function with respect to each feature map and the value of each feature map [14]. This method alternates between training the network and pruning a filter. In our implementation, a filter is pruned every five iterations. We applied the performance weighted loss function by replacing the loss functions used in the gradient calculation and model training with the performance weighted loss function. Once again, we also used the performance weighted loss function when retraining the model after pruning.
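
A first-order Taylor importance score of the kind used by [14] can be sketched as below; this is an illustrative simplification assuming flattened feature-map activations and their loss gradients, not our exact implementation:

```python
def taylor_importance(activations, gradients):
    """First-order Taylor importance of a feature map: |mean of a * g| over its
    elements. This approximates the change in loss incurred if the feature map
    were removed (set to zero), so low-scoring filters are pruned first."""
    n = len(activations)
    return abs(sum(a * g for a, g in zip(activations, gradients)) / n)

# A feature map whose activations barely interact with the loss gradient
# scores near zero and is a pruning candidate.
low = taylor_importance([0.1, -0.1, 0.05], [0.2, 0.2, -0.1])
high = taylor_importance([1.0, 2.0, 1.5], [0.5, 0.4, 0.6])
```

Because the score depends on the loss gradient, replacing the cross-entropy with the PW loss changes which filters look unimportant, which is exactly how the PW loss steers this method.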

We also evaluated a random pruning method in which filters are selected and pruned from the network until only the desired number of FLOPS remains. We use this method as a reference.

We implemented the methods using the PyTorch library [23]. The methods were implemented as three-step pipelines in which the model is first pseudo-pruned by setting parameters to zero, then fully pruned using the Torch-Pruning library [24], and finally retrained. Pseudo-pruning allows for fast pruning during the pruning process, while the full pruning step removes the unused parameters, reducing the number of operations required for prediction. Due to dependencies between parameters introduced by structures such as residual layers, the achieved theoretical speedup often differs slightly from the target theoretical speedup. All hyperparameters for the pruning methods were selected using a hold-out validation set. Hyperparameters were selected without the PW loss applied and were used for both the unmodified and PW loss variants of each method. We repeated each experiment three times. All figures displaying model performance after pruning show the average of all trials. Trials that produced degenerate models which only predict a single class were excluded.
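
The reason pseudo-pruning is a safe stand-in for full pruning can be seen in a toy example; a minimal sketch using a dense layer in place of convolutional filters (the shapes and values are purely illustrative):

```python
def linear(W, x):
    """y = W @ x for a weight matrix W (list of rows) and input vector x."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

W = [[1.0, 2.0], [3.0, -1.0], [0.5, 0.5]]
x = [2.0, 1.0]

# Pseudo-pruning: zero out row 1 (the "pruned filter") but keep the layer's shape,
# so downstream code and fast repeated pruning trials need no surgery.
W_pseudo = [row if i != 1 else [0.0] * len(row) for i, row in enumerate(W)]
y_pseudo = linear(W_pseudo, x)

# Full pruning: physically remove row 1, shrinking the layer. Only this step
# actually reduces the FLOPS required for prediction.
W_full = [row for i, row in enumerate(W) if i != 1]
y_full = linear(W_full, x)
```

The surviving outputs agree between the two variants; the zeroed channel simply emits zeros until the full prune deletes it.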

#### 4.1.1 Metrics

Our primary concern is the degradation of a model's behaviour towards different subgroups due to pruning. We therefore evaluated the models by comparing the change in the areas under the receiver operating characteristic curves (ROC-AUC) for various subgroups at five different degrees of pruning. As a threshold-agnostic performance metric, the ROC-AUC is a good measure of the model's understanding and separability for a subgroup [25]. For non-binary classification we used the one-vs-one ROC-AUC.
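
The binary ROC-AUC has an equivalent rank-based form that makes its threshold-agnostic nature explicit; a minimal illustrative implementation (library routines such as scikit-learn's would be used in practice):

```python
def roc_auc(labels, scores):
    """Threshold-agnostic ROC-AUC: the probability that a randomly chosen
    positive outscores a randomly chosen negative (ties count as half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Subgroup evaluation: compute the AUC separately on each subgroup's samples
# and compare the change before vs. after pruning.
labels = [1, 0, 1, 0]
scores = [0.9, 0.2, 0.8, 0.4]
auc = roc_auc(labels, scores)
```

The one-vs-one multi-class variant averages this binary AUC over all ordered pairs of classes.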

We measured the degree to which a model is pruned using the theoretical speedup, defined as the FLOPS of the original model divided by the FLOPS of the pruned model.

### 4.2 Evaluating Fairness and Performance

All methods were tested with and without the PW loss on two different classification tasks.

Our first task was the celebrity face classification task on the CelebA dataset [9] as outlined by Hooker et al. [5], in which a model is trained to identify faces as blonde or non-blonde. The CelebA dataset contains over 200,000 images of celebrity faces with various annotations. While blonde non-male samples make up 14.05% of the training data, blonde male samples make up only 0.85%. We used the provided data splits, with 80% of the available data used for training and the remaining data split evenly between validation and testing.

Our second task is the skin lesion classification task on the Fitzpatrick17k dataset [10]. The Fitzpatrick17k dataset consists of 16577 images of skin conditions. We trained our models to classify the samples as normal, benign or malignant. Due to missing and invalid images we were only able to use 16526 images. Each sample in the dataset is assigned a Fitzpatrick score that categorizes the skin tone of the sample. We trained our models only on samples with light skin tone scores of 1 or 2, and evaluated them on medium skin tone scores of 3 or 4 as well as dark skin tone scores of 5 or 6. We used a random 25% of the medium and dark skin tone samples as a validation set, with the remainder used as a test set.

#### 4.2.1 Pruning the CelebA Models

We trained a ResNet-18 [26] model and a VGG-16 [27] model for the CelebA task. The ROC-AUCs for the male and non-male subgroups of the ResNet-18 model were 0.9639 and 0.9794 respectively. The ROC-AUCs for the male and non-male subgroups of the VGG-16 model were 0.9679 and 0.9825 respectively. Both models were pruned using target theoretical speedups of 16, 32, 64, 128 and 256.

The change in ROC-AUC for all tested pruning methods for the ResNet-18 and VGG-16 models can be found in Figure 1. All methods were able to significantly reduce the size of both models, but most of the results without performance weighting exhibited divergent performance between the male and non-male subgroups as the theoretical speedup increased. Performance weighting was highly effective when pruning the ResNet-18 model for both the AutoBot and Taylor pruning methods.



Figure 1: Mean pruning performance with ResNet-18 and VGG-16 models on the CelebA dataset.

We see an increase in ROC-AUC at all tested theoretical speedups for both the male and non-male subgroups. The increase for the male subgroup is substantial, and the subgroup ROC-AUC scores no longer diverge as the theoretical speedup increases.

We see similar improvements when performance weighting is applied to the AutoBot method for the VGG-16 model; however, the improvements are only substantial at the lowest theoretical speedups. We do not see improvements when performance weighting is applied to the Taylor method. This is likely because the Taylor method did not exhibit significantly divergent performance for the VGG-16 model in the first place.

#### 4.2.2 Pruning the Fitzpatrick17k Models

We trained a ResNet-34 [26] model and an EfficientNet-V2 Medium [28] model for the Fitzpatrick17k task. The ROC-AUCs for the medium and dark subgroups of the ResNet-34 model were 0.8190 and 0.7329 respectively. The ROC-AUCs for the medium and dark subgroups of the EfficientNet model were 0.8516 and 0.7524 respectively.

Despite a bias against dark skin tones existing in the original models, we do not see divergent ROC-AUC scores as the theoretical speedup increases. The medium skin tone subgroup actually saw greater changes in ROC-AUC due to pruning. We see only slight benefits from using performance weighting with the Fitzpatrick17k models. Performance weighting slightly improved performance after pruning for the ResNet-34 model with AutoBot pruning, and for the EfficientNet model with AutoBot pruning at lower theoretical speedups. It had negligible or detrimental effects for Taylor pruning with both models.

These results indicate that performance weighting is not an appropriate solution for all datasets and models that exhibit bias. The lack of an increasing performance difference between subgroups may indicate that the pruning process was not introducing additional biases in the Fitzpatrick17k models. This is in contrast to the CelebA models, for which the initial bias was small but grew due to pruning. Performance weighting may therefore only mitigate biases that are introduced by the pruning process. It will not rectify biases that exist in the model before pruning.

### 4.3 Conditions for Bias

From our results in Section 4.2, we can see that utilizing the PW loss is not necessary in all circumstances. The loss appeared to be more beneficial for models which saw increasing differences in performance between subgroups as the theoretical speedup increased.



Figure 2: Mean pruning performance with ResNet-34 and EfficientNet V2 Med. models on the Fitzpatrick17k dataset.

To understand the properties of a dataset that would necessitate the use of the PW loss, we created three artificial datasets from the CelebA dataset by selecting subsets of the training data. The first subset was formed using 3.41% of the available training data such that it was fully balanced, containing an equal number of male and non-male samples as well as an equal number of blonde and non-blonde samples. The second and third subsets were formed by adding samples to the first subset, altering the class or gender balance. The second subset contained an equal number of blonde and non-blonde samples, but five times as many non-male samples as male samples. The third subset contained an equal number of male and non-male samples, but five times as many non-blonde samples as blonde samples. The entire test set was used to evaluate all subsets.
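
The balanced first subset can be built by drawing equally from each (gender, class) cell; a minimal sketch, assuming a hypothetical mapping from cell to sample ids (the imbalanced second and third subsets would add extra draws from specific cells):

```python
import random

def balanced_subset(ids_by_cell, n_per_cell, seed=0):
    """Draw n_per_cell ids from each (gender, class) cell so the subset is
    balanced on both attributes simultaneously."""
    rng = random.Random(seed)
    return [i for ids in ids_by_cell.values() for i in rng.sample(ids, n_per_cell)]

# Hypothetical data layout: cell -> list of sample ids.
cells = {
    ("male", "blonde"): list(range(0, 100)),
    ("male", "non-blonde"): list(range(100, 200)),
    ("non-male", "blonde"): list(range(200, 300)),
    ("non-male", "non-blonde"): list(range(300, 400)),
}
subset = balanced_subset(cells, 10)
```

Balancing per cell, rather than per attribute marginally, is what guarantees the gender balance and the class balance hold at the same time.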

A ResNet-18 model was trained using each subset. The ROC-AUCs for the male subgroup are 0.9562, 0.9479 and 0.9183 for the first, second and third subsets respectively. The ROC-AUCs for the non-male subgroup are 0.9713, 0.9732 and 0.9580 for the first, second and third subsets respectively. The models were pruned using the AutoBot and Taylor methods with target theoretical speedups of 8, 32 and 128. The performance after pruning for these models can be found in Figure 3.

In the results using the fully balanced subset, we do see a divergence in subgroup performance for both methods, but the divergence is smaller than was seen when the full dataset was used. In the results with the additional non-male samples, we see an increase in performance for all model/method combinations. For the AutoBot results, the increase is greater for non-male samples than for male samples. We again see a general increase in performance when we look at the subset with the additional non-blonde samples. We do see additional instability in the Taylor results, but there are no clear findings with respect to differences in performance between subgroups. A greater decrease in performance was seen for male samples for all model/method combinations, including those trained on data with a balanced gender split. These results indicate that the dataset composition influences the fairness of pruning results, but does not fully explain it.

### 4.4 Ablation

To measure the effects of the components of the PW loss independently, we pruned our ResNet-18 CelebA model using the AutoBot method with only the corrected soft-labels and with only the weighting scheme described in Equation 1. We applied the modifications to only the pruning process, and to both the pruning and retraining processes.



Figure 3: Pruning performance with ResNet-18 models trained on subsets of CelebA dataset with alternative class and gender balances.



Figure 4: Pruning performance with ResNet-18 models with CelebA dataset when elements of PW loss are applied independently to the pruning process (left), and to the pruning process as well as the post-prune retraining process (right).

The ablation results can be found in Figure 4. Both modifications were more effective when applied to both the pruning and retraining processes, indicating that modifying only the process by which parameters are selected for pruning is insufficient to mitigate the effects of bias. Furthermore, the effect of using corrected soft-labels was larger than the effect of our proposed weighting scheme. While both changes boosted performance for the male subgroup when applied to both pruning and retraining, the effect of the corrected soft-labels was almost as large as the effect of the full performance weighting method. The full method did demonstrate less bias at a target theoretical speedup of 16. Furthermore, as the AutoBot method already uses the outputs of the original model in its loss function, the improvement seen when the corrected soft-labels were only used for pruning can be attributed solely to the correction of the model outputs.

Unlike our proposed weighting scheme, the use of corrected soft-labels does not involve the selection of any parameters. In situations in which parameter selection is not possible, the use of corrected soft-labels may be a simple yet useful method for reducing the effects of algorithmic bias in pruning.

## 5 Conclusion

In this paper we demonstrate how model pruning can exacerbate biases in models, and present the performance weighted loss function as a novel method for mitigating this effect. The performance weighted loss function is a simple modification that can be applied to any pruning method that uses the cross-entropy loss. Our experimental results indicate that while the performance weighted loss function does not rectify existing model biases, it can help prevent those biases from being exaggerated by the pruning process. The performance weighted loss function is a useful tool for practitioners who seek to compress existing models without introducing new fairness concerns.

## References

[1] A. Canziani, A. Paszke, and E. Culurciello, "An analysis of deep neural network models for practical applications," arXiv preprint arXiv:1605.07678, 2016.

[2] D. Blalock, J. J. Gonzalez Ortiz, J. Frankle, and J. Guttag, "What is the state of neural network pruning?" in Proceedings of Machine Learning and Systems, I. Dhillon, D. Papailiopoulos, and V. Sze, Eds., vol. 2, 2020, pp. 129-146. [Online]. Available: https://proceedings.mlsys.org/paper/2020/file/d2ddea18f00665ce8623e36bd4e3c7c5-Paper.pdf

[3] R.-T. Wu, A. Singla, M. R. Jahanshahi, E. Bertino, B. J. Ko, and D. Verma, "Pruning deep convolutional neural networks for efficient edge computing in condition assessment of infrastructures," Computer-Aided Civil and Infrastructure Engineering, vol. 34, no. 9, pp. 774-789, 2019. [Online]. Available: https://onlinelibrary.wiley.com/doi/abs/10.1111/mice.12449

[4] M. R. Vemparala, A. Singh, A. Mzid, N. Fasfous, A. Frickenstein, F. Mirus, H.-J. Voegel, N. S. Nagaraja, and W. Stechele, "Pruning CNNs for LiDAR-based perception in resource constrained environments," in 2021 IEEE Intelligent Vehicles Symposium Workshops (IV Workshops), 2021, pp. 228-235.

[5] S. Hooker, N. Moorosi, G. Clark, S. Bengio, and E. Denton, "Characterising bias in compressed models," arXiv preprint arXiv:2010.03058, 2020.

[6] L. Seyyed-Kalantari, G. Liu, M. McDermott, I. Y. Chen, and M. Ghassemi, "CheXclusion: Fairness gaps in deep chest X-ray classifiers," in BIOCOMPUTING 2021: Proceedings of the Pacific Symposium. World Scientific, 2020, pp. 232-243.

[7] J. Snow, "Amazon's face recognition falsely matched 28 members of congress with mugshots," July 2018. [Online]. Available: https://www.aclu.org/blog/privacy-technology/surveillance-technologies/amazons-face-recognition-falsely-matched-28

[8] J. Dastin, "Amazon scraps secret AI recruiting tool that showed bias against women," Reuters, October 2018. [Online]. Available: https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G

[9] Z. Liu, P. Luo, X. Wang, and X. Tang, "Deep learning face attributes in the wild," in Proceedings of International Conference on Computer Vision (ICCV), December 2015.

[10] M. Groh, C. Harris, L. Soenksen, F. Lau, R. Han, A. Kim, A. Koochek, and O. Badri, "Evaluating deep neural networks trained on clinical images in dermatology with the Fitzpatrick 17k dataset," arXiv preprint arXiv:2104.09957, 2021.

[11] H. Li, A. Kadav, I. Durdanovic, H. Samet, and H. P. Graf, "Pruning filters for efficient convnets," 2016. [Online]. Available: https://arxiv.org/abs/1608.08710

[12] S. Han, J. Pool, J. Tran, and W. Dally, "Learning both weights and connections for efficient neural network," in Advances in Neural Information Processing Systems, C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett, Eds., vol. 28. Curran Associates, Inc., 2015. [Online]. Available: https://proceedings.neurips.cc/paper/2015/file/ae0eb3eed39d2bcef4622b2499a05fe6-Paper.pdf

[13] C. Liu and H. Wu, "Channel pruning based on mean gradient for accelerating convolutional neural networks," Signal Processing, vol. 156, pp. 84-91, 2019. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0165168418303517

[14] P. Molchanov, S. Tyree, T. Karras, T. Aila, and J. Kautz, "Pruning convolutional neural networks for resource efficient inference," arXiv preprint arXiv:1611.06440, 2016.

[15] P. Molchanov, A. Mallya, S. Tyree, I. Frosio, and J. Kautz, "Importance estimation for neural network pruning," in 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 11256-11264.

[16] H. Ide, T. Kobayashi, K. Watanabe, and T. Kurita, "Robust pruning for efficient CNNs," Pattern Recognition Letters, vol. 135, pp. 90-98, Jul. 2020.

[17] J.-H. Luo, J. Wu, and W. Lin, "ThiNet: A filter level pruning method for deep neural network compression," in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 5058-5066.

[18] T. Castells and S.-K. Yeom, "Automatic neural network pruning that efficiently preserves the model accuracy," 2021. [Online]. Available: https://arxiv.org/abs/2111.09635

[19] Z. You, K. Yan, J. Ye, M. Ma, and P. Wang, "Gate decorator: Global filter pruning method for accelerating deep convolutional neural networks," Advances in Neural Information Processing Systems, vol. 32, 2019.

[20] Y. Wu, D. Zeng, X. Xu, Y. Shi, and J. Hu, "FairPrune: Achieving fairness through pruning for dermatological disease diagnosis," 2022. [Online]. Available: https://arxiv.org/abs/2203.02110

[21] G. Xu and Q. Hu, "Can model compression improve NLP fairness," arXiv preprint arXiv:2201.08542, 2022.

[22] V. Joseph, S. A. Siddiqui, A. Bhaskara, G. Gopalakrishnan, S. Muralidharan, M. Garland, S. Ahmed, and A. Dengel, "Going beyond classification accuracy metrics in model compression," Dec. 2020.

[23] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala, "PyTorch: An imperative style, high-performance deep learning library," in Advances in Neural Information Processing Systems 32, H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, Eds. Curran Associates, Inc., 2019, pp. 8024-8035. [Online]. Available: http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf

[24] G. Fang, "Torch-Pruning," July 2022. [Online]. Available: https://github.com/VainF/Torch-Pruning

[25] D. Borkan, L. Dixon, J. Sorensen, N. Thain, and L. Vasserman, "Nuanced metrics for measuring unintended bias with real data for text classification," in Companion Proceedings of The 2019 World Wide Web Conference, ser. WWW '19. New York, NY, USA: Association for Computing Machinery, May 2019, pp. 491-500.

[26] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770-778.

[27] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.

[28] M. Tan and Q. V. Le, "EfficientNetV2: Smaller models and faster training," Jun. 2021.

[29] W. Falcon and The PyTorch Lightning team, "PyTorch Lightning," Mar. 2019. [Online]. Available: https://github.com/Lightning-AI/lightning

[30] I. Loshchilov and F. Hutter, "Decoupled weight decay regularization," arXiv preprint arXiv:1711.05101, 2017.

[31] I. Loshchilov and F. Hutter, "SGDR: Stochastic gradient descent with warm restarts," arXiv preprint arXiv:1608.03983, 2016.

## A Model Training and Pruning Parameters

To ensure transparency and enable reproducibility, all parameters and procedures used to train, prune and retrain the models can be found below. All experiments were implemented using PyTorch 1.12.1 and torchvision 0.13.1 [23]. PyTorch Lightning 1.7.1 [29] was also used to train the models.

The ResNet-18 [26] CelebA model was trained for 20 epochs using the AdamW [30] optimizer with an initial learning rate of 0.0001 and a CosineAnnealingLR learning rate scheduler with $T_{\max} = 20$ [31]. A batch size of 256 was used. The model was initialized using the provided ImageNet weights from torchvision. All parameters in layers except the final fully connected layer were frozen for the first 5 epochs, after which they were unfrozen with a learning rate equal to 0.01 times the global learning rate. Early stopping was applied such that the parameters that achieved the lowest validation loss were saved after training.
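
The CosineAnnealingLR schedule [31] used throughout these recipes follows a simple closed form; a minimal sketch (PyTorch's built-in scheduler is what we actually used):

```python
import math

def cosine_annealing_lr(base_lr, t, t_max, eta_min=0.0):
    """Cosine annealing [31]: decay from base_lr at epoch 0 to eta_min at
    epoch t_max along a half cosine."""
    return eta_min + 0.5 * (base_lr - eta_min) * (1 + math.cos(math.pi * t / t_max))

# With base_lr = 1e-4 and T_max = 20 (the ResNet-18 settings above):
lrs = [cosine_annealing_lr(1e-4, t, 20) for t in range(21)]
```

The half-cosine keeps the learning rate near its peak early in training and decays it smoothly to zero by the final epoch.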

The VGG-16 [27] CelebA model was trained for 10 epochs using the AdamW [30] optimizer with an initial learning rate of 0.0005 and a CosineAnnealingLR learning rate scheduler with $T_{\max} = 10$ [31]. A batch size of 64 was used. The model was initialized using the provided ImageNet weights from torchvision. All parameters in layers except the final fully connected layer were optimized with a learning rate equal to 0.01 times the global learning rate. Early stopping was applied such that the parameters that achieved the lowest validation loss were saved after training.

The ResNet-34 [26] Fitzpatrick17k model was trained for 30 epochs using the AdamW [30] optimizer with an initial learning rate of 0.001 and a CosineAnnealingLR learning rate scheduler with $T_{\max} = 30$ [31]. A batch size of 64 was used. The model was initialized using the provided ImageNet weights from torchvision. All parameters in layers except the final fully connected layer were frozen for the first 5 epochs, after which they were unfrozen with a learning rate equal to 0.001 times the global learning rate.

The EfficientNet V2 Medium [28] Fitzpatrick17k model was trained for 30 epochs using the AdamW [30] optimizer with an initial learning rate of 0.001 and a CosineAnnealingLR learning rate scheduler with $T_{\max} = 30$ [31]. A batch size of 32 was used. The model was initialized using the provided ImageNet weights from torchvision. All parameters in layers except the final fully connected layer were frozen for the first 5 epochs, after which they were unfrozen with a learning rate equal to 0.01 times the global learning rate.

The parameter values used for our AutoBot [18] implementation can be found in Table 1. $\beta_{AB}$ and $\gamma_{AB}$ refer to the parameters used by the AutoBot method to control the balance between the different terms of its loss function.

The parameter values used for our Taylor [14] implementation can be found in Table 2. $f_{\text{prune}}$ refers to the frequency of pruning, that is, the number of batch iterations between the pruning of filters. $N_{\text{filters}}$ refers to the number of convolutional filters pruned in each pruning instance.

The parameter values used for our PW losses can be found in Table 3. Other parameters were not changed when the PW loss was introduced.

After pruning, all models were retrained using the AdamW [30] optimizer and a CosineAnnealingLR learning rate scheduler with a $T_{\max}$ value equal to the number of epochs. The parameter values used to retrain the models can be found in Table 4.

Table 1: Parameters used for AutoBot pruning method

<table><tr><td>Dataset</td><td>Model</td><td>Learning Rate</td><td>Batch Size</td><td>Iters.</td><td>${\beta }_{AB}$</td><td>${\gamma }_{AB}$</td></tr><tr><td>CelebA</td><td>ResNet-18</td><td>0.85</td><td>64</td><td>200</td><td>2.7</td><td>0.1</td></tr><tr><td>CelebA</td><td>VGG-16</td><td>1.81</td><td>64</td><td>250</td><td>3.07</td><td>0.18</td></tr><tr><td>Fitzpatrick17k</td><td>ResNet-34</td><td>1.5</td><td>32</td><td>400</td><td>0.5</td><td>1</td></tr><tr><td>Fitzpatrick17k</td><td>EfficientNet V2 Med.</td><td>1.5</td><td>16</td><td>600</td><td>6.76</td><td>1.05</td></tr></table>

Table 2: Parameters used for Taylor pruning method

<table><tr><td>Dataset</td><td>Model</td><td>Learning Rate</td><td>Batch Size</td><td>${f}_{\text{prune }}$</td><td>${N}_{\text{filters }}$</td></tr><tr><td>CelebA</td><td>ResNet-18</td><td>0.01</td><td>64</td><td>5</td><td>1</td></tr><tr><td>CelebA</td><td>VGG-16</td><td>0.01</td><td>64</td><td>5</td><td>1</td></tr><tr><td>Fitzpatrick17k</td><td>ResNet-34</td><td>0.01</td><td>32</td><td>5</td><td>1</td></tr><tr><td>Fitzpatrick17k</td><td>EfficientNet V2 Med.</td><td>0.01</td><td>16</td><td>4</td><td>8</td></tr></table>

Table 3: Parameters used for PW loss

<table><tr><td>Dataset</td><td>Model</td><td>Base Method</td><td>$\theta$</td><td>$\gamma$</td></tr><tr><td>CelebA</td><td>ResNet-18</td><td>AutoBot</td><td>0.3</td><td>1</td></tr><tr><td>CelebA</td><td>ResNet-18</td><td>Taylor</td><td>0.8</td><td>0.5</td></tr><tr><td>CelebA</td><td>VGG-16</td><td>AutoBot</td><td>0.75</td><td>3</td></tr><tr><td>CelebA</td><td>VGG-16</td><td>Taylor</td><td>0.9</td><td>5</td></tr><tr><td>Fitzpatrick17k</td><td>ResNet-34</td><td>AutoBot</td><td>0.8</td><td>2.5</td></tr><tr><td>Fitzpatrick17k</td><td>ResNet-34</td><td>Taylor</td><td>0.95</td><td>3</td></tr><tr><td>Fitzpatrick17k</td><td>EfficientNet V2 Med.</td><td>AutoBot</td><td>0.8</td><td>2</td></tr><tr><td>Fitzpatrick17k</td><td>EfficientNet V2 Med.</td><td>Taylor</td><td>0.95</td><td>3</td></tr></table>

Table 4: Parameters used to retrain models

<table><tr><td>Dataset</td><td>Model</td><td>Learning Rate</td><td>Batch Size</td><td>Duration</td></tr><tr><td>CelebA</td><td>ResNet-18</td><td>0.0001</td><td>256</td><td>30 epochs</td></tr><tr><td>CelebA</td><td>VGG-16</td><td>0.0005</td><td>64</td><td>10 epochs</td></tr><tr><td>Fitzpatrick17k</td><td>ResNet-34</td><td>0.0001</td><td>64</td><td>30 epochs</td></tr><tr><td>Fitzpatrick17k</td><td>EfficientNet V2 Med.</td><td>0.00001</td><td>32</td><td>50 epochs</td></tr></table>

## B Detailed Results

For brevity, we only included figures displaying our results in the main body of this report. For transparency, detailed results for all experiments performed in the main body of the paper can be found in Tables 5 through 10.

Table 5: Pruning performance mean $\pm$ standard deviation with ResNet-18 with CelebA dataset
<table><tr><td rowspan="2">Pruning Method</td><td rowspan="2">FLOPS</td><td rowspan="2">Parameters</td><td rowspan="2">Accuracy</td><td colspan="3">ROC-AUC</td></tr><tr><td>All</td><td>Male</td><td>Non-Male</td></tr><tr><td>Unpruned</td><td>1508.5 M</td><td>11177.0 k</td><td>0.9824</td><td>0.9546</td><td>0.9639</td><td>0.9795</td></tr><tr><td>AutoBot</td><td>107.1 M $\pm {1.4}\mathrm{M}$</td><td>187.5 k ± 16.6 k</td><td>0.9494 $\pm {0.0066}$</td><td>0.9378 $\pm {0.002}$</td><td>0.8703 $\pm {0.0307}$</td><td>0.9487 $\pm {0.005}$</td></tr><tr><td rowspan="2">AutoBot</td><td>67.4 M</td><td>811.7 k</td><td>0.9511</td><td>0.9428</td><td>0.8364</td><td>0.953</td></tr><tr><td>$\pm {0.6}\mathrm{M}$</td><td>± 35.8 k</td><td>$\pm {0.0068}$</td><td>$\pm {0.0038}$</td><td>$\pm {0.0346}$</td><td>$\pm {0.0053}$</td></tr><tr><td>AutoBot</td><td>34.8 M $\pm {1.7}\mathrm{M}$</td><td>545.4 k $\pm {68.0}\mathrm{\;k}$</td><td>0.9364 $\pm {0.0012}$</td><td>0.9313 $\pm {0.0033}$</td><td>0.8276 $\pm {0.0532}$</td><td>0.9366 $\pm {0.0041}$</td></tr><tr><td>AutoBot</td><td>18.7 M $\pm {1.3}\mathrm{M}$</td><td>277.0 k ± 16.6 k</td><td>0.9298 $\pm {0.015}$</td><td>0.9299 $\pm {0.0054}$</td><td>0.7984 $\pm {0.054}$</td><td>0.9313 $\pm {0.0132}$</td></tr><tr><td>AutoBot</td><td>7.7 M $\pm {0.5}\mathrm{M}$</td><td>72.4 k ± 12.1 k</td><td>0.9419 $\pm {0.0048}$</td><td>0.9297 $\pm {0.0038}$</td><td>0.8326 $\pm {0.0427}$</td><td>0.9427 $\pm {0.0055}$</td></tr><tr><td>AutoBot + PW</td><td>123.9 M $\pm {1.5}\mathrm{M}$</td><td>${345.7}\mathrm{\;k}$ ± 29.2 k</td><td>0.974 $\pm {0.0013}$</td><td>0.9442 $\pm {0.0029}$</td><td>0.9454 $\pm {0.0042}$</td><td>0.9706 $\pm {0.0021}$</td></tr><tr><td>AutoBot + PW</td><td>67.6 M $\pm {0.6}\mathrm{M}$</td><td>611.2 k ± 148.8 k</td><td>0.975 $\pm {0.002}$</td><td>0.9442 $\pm {0.0015}$</td><td>0.9438 $\pm {0.0064}$</td><td>0.9721 $\pm {0.0023}$</td></tr><tr><td>AutoBot + PW</td><td>48.7 M $\pm {5.0}\mathrm{M}$</td><td>734.2 k $\pm 
{47.7}\mathrm{\;k}$</td><td>0.9646 $\pm {0.0039}$</td><td>0.936 $\pm {0.0017}$</td><td>0.9074 $\pm {0.0123}$</td><td>0.962 $\pm {0.0039}$</td></tr><tr><td>AutoBot + PW</td><td>21.5 M $\pm {1.3}\mathrm{M}$</td><td>294.8 k ± 10.6 k</td><td>0.9588 $\pm {0.0019}$</td><td>0.9317 $\pm {0.0022}$</td><td>0.899 $\pm {0.0024}$</td><td>0.9566 $\pm {0.002}$</td></tr><tr><td rowspan="2">AutoBot + PW</td><td>9.8 M</td><td>108.0 k</td><td>0.9579</td><td>0.9282</td><td>0.897</td><td>0.956</td></tr><tr><td>$\pm {1.7}\mathrm{M}$</td><td>± 16.6 k</td><td>$\pm {0.0019}$</td><td>$\pm {0.0028}$</td><td>$\pm {0.0079}$</td><td>$\pm {0.0031}$</td></tr><tr><td>Taylor</td><td>116.8 M $\pm {5.0}\mathrm{M}$</td><td>70.8 k $\pm {2.2}\mathrm{\;k}$</td><td>0.9571 $\pm {0.0128}$</td><td>0.9445 $\pm {0.001}$</td><td>0.8877 $\pm {0.05}$</td><td>0.9556 $\pm {0.01}$</td></tr><tr><td>Taylor</td><td>55.0 M $\pm {2.4}\mathrm{M}$</td><td>25.8 k $\pm {0.4}\mathrm{\;k}$</td><td>0.9785 $\pm {0.0023}$</td><td>0.9508 $\pm {0.0037}$</td><td>0.9429 $\pm {0.0114}$</td><td>0.9766 $\pm {0.0018}$</td></tr><tr><td>Taylor</td><td>21.7 M ± 1.9 M</td><td>9.8 k $\pm {0.7}\mathrm{\;k}$</td><td>0.9593 $\pm {0.0168}$</td><td>0.9517 $\pm {0.0012}$</td><td>0.8648 $\pm {0.074}$</td><td>0.9617 $\pm {0.0124}$</td></tr><tr><td>Taylor</td><td>9.6 M $\pm {0.8}\mathrm{M}$</td><td>${4.4}\mathrm{\;k}$ $\pm {0.4}\mathrm{\;k}$</td><td>0.9564 $\pm {0.0197}$</td><td>0.9476 $\pm {0.001}$</td><td>0.8626 $\pm {0.0684}$</td><td>0.9581 $\pm {0.0156}$</td></tr><tr><td>Taylor Taylor + PW</td><td>4.6 M $\pm {0.9}\mathrm{M}$ 116.7 M</td><td>1.9 k $\pm {0.7}\mathrm{\;k}$ 62.8 k</td><td>0.9388 $\pm {0.0425}$ 0.9741</td><td>0.9103 $\pm {0.0378}$ 0.9488</td><td>0.8259 $\pm {0.1451}$ 0.9323</td><td>0.9411 $\pm {0.0346}$ 0.9716</td></tr><tr><td/><td>$\pm {0.7}\mathrm{M}$</td><td>$\pm {1.9}\mathrm{k}$</td><td>$\pm {0.0031}$</td><td>$\pm {0.0023}$</td><td>$\pm {0.0087}$</td><td>$\pm {0.0032}$</td></tr><tr><td>Taylor + PW</td><td>48.5 M $\pm 
{0.6}\mathrm{M}$</td><td>21.9 k $\pm {0.6}\mathrm{\;k}$</td><td>0.9805 $\pm {0.0007}$</td><td>0.9528 $\pm {0.0026}$</td><td>0.948 $\pm {0.0046}$</td><td>0.9787 $\pm {0.0006}$</td></tr><tr><td>Taylor + PW</td><td>23.1 M $\pm {2.2}\mathrm{M}$</td><td>9.6 k $\pm {0.9}\mathrm{k}$</td><td>0.9788 $\pm {0.0009}$</td><td>0.9535 $\pm {0.0024}$</td><td>0.9409 $\pm {0.0072}$</td><td>0.9772 $\pm {0.0011}$</td></tr><tr><td>Taylor + PW</td><td>11.1 M $\pm {0.3}\mathrm{M}$</td><td>${3.9}\mathrm{\;k}$ $\pm {0.7}\mathrm{\;k}$</td><td>0.9711 $\pm {0.0036}$</td><td>0.9403 $\pm {0.0044}$</td><td>0.9356 $\pm {0.0127}$</td><td>0.966 $\pm {0.0074}$</td></tr><tr><td>Taylor + PW</td><td>5.1 M $\pm {1.8}\mathrm{M}$</td><td>1.8 k $\pm {0.2}\mathrm{\;k}$</td><td>0.9601 $\pm {0.006}$</td><td>0.9229 $\pm {0.0146}$</td><td>0.9051 $\pm {0.0176}$</td><td>0.9579 $\pm {0.0059}$</td></tr><tr><td>Random</td><td>138.2 M $\pm {2.8}\mathrm{M}$</td><td>1022.8 k ± 41.3 k</td><td>0.9542 $\pm {0.0038}$</td><td>0.9478 $\pm {0.0014}$</td><td>0.8404 $\pm {0.0236}$</td><td>0.9574 $\pm {0.0031}$</td></tr><tr><td>Random Random</td><td>66.4 M $\pm {5.6}\mathrm{M}$ ${32.4}\mathrm{M}$</td><td>496.6 k $\pm {30.6}\mathrm{\;k}$ 176.3 k</td><td>0.9497 $\pm {0.0058}$ 0.9428</td><td>0.9448 $\pm {0.0004}$ 0.9406</td><td>0.8393 $\pm {0.0281}$ 0.838</td><td>0.9522 $\pm {0.0066}$ 0.944</td></tr><tr><td>Random</td><td>$\pm {0.7}\mathrm{M}$ 16.4 M</td><td>$\pm {19.7}\mathrm{\;k}$ 120.9 k</td><td>$\pm {0.0013}$ 0.94</td><td>$\pm {0.0018}$ 0.9342</td><td>$\pm {0.007}$ 0.8406</td><td>$\pm {0.0015}$ 0.9409</td></tr><tr><td>Random</td><td>$\pm {1.7}\mathrm{M}$ 7.0 M</td><td>$\pm {5.6}\mathrm{\;k}$ 24.1 k</td><td>$\pm {0.0059}$ 0.9641</td><td>$\pm {0.0026}$ 0.942</td><td>$\pm {0.0125}$ 0.901</td><td>$\pm {0.0046}$ 0.9633</td></tr><tr><td/><td>$\pm {0.7}\mathrm{M}$</td><td>$\pm {10.9}\mathrm{\;k}$</td><td>$\pm {0.0113}$</td><td>$\pm {0.004}$</td><td>$\pm {0.0448}$</td><td>$\pm {0.0092}$</td></tr></table>
Table 6: Pruning performance (mean $\pm$ standard deviation) for VGG-16 on the CelebA dataset
<table><tr><td rowspan="2">Pruning Method</td><td rowspan="2">FLOPS</td><td rowspan="2">Parameters</td><td rowspan="2">Accuracy</td><td colspan="3">ROC-AUC</td></tr><tr><td>All</td><td>Male</td><td>Non-Male</td></tr><tr><td>Unpruned</td><td>11782.0 M</td><td>134264.6 k</td><td>0.9852</td><td>0.9586</td><td>0.9679</td><td>0.9826</td></tr><tr><td rowspan="2">AutoBot</td><td>751.9 M</td><td>22104.4 k</td><td>0.9813</td><td>0.9559</td><td>0.9516</td><td>0.979</td></tr><tr><td>$\pm {2.0}\mathrm{M}$</td><td>± 343.0 k</td><td>$\pm {0.0002}$</td><td>$\pm {0.0006}$</td><td>$\pm {0.0016}$</td><td>$\pm {0.0002}$</td></tr><tr><td rowspan="2">AutoBot*</td><td>390.5 M</td><td>21966.0 k</td><td>0.9817</td><td>0.9557</td><td>0.9547</td><td>0.9793</td></tr><tr><td>$\pm {2.6}\mathrm{M}$</td><td>± 144.5 k</td><td>$\pm {0.0}$</td><td>$\pm {0.0002}$</td><td>$\pm {0.0017}$</td><td>$\pm {0.0004}$</td></tr><tr><td rowspan="2">AutoBot*</td><td>204.7 M</td><td>21889.2 k</td><td>0.9807</td><td>0.9548</td><td>0.9518</td><td>0.9783</td></tr><tr><td>±0.6 M</td><td>± 140.7 k</td><td>$\pm {0.0013}$</td><td>$\pm {0.0012}$</td><td>$\pm {0.0055}$</td><td>$\pm {0.0011}$</td></tr><tr><td rowspan="2">AutoBot + PW*</td><td>751.2 M</td><td>21609.1 k</td><td>0.9832</td><td>0.9568</td><td>0.9629</td><td>0.9805</td></tr><tr><td>± 1.2 M</td><td>$\pm {283.8}\mathrm{\;k}$</td><td>$\pm {0.0001}$</td><td>$\pm {0.001}$</td><td>$\pm {0.0016}$</td><td>$\pm {0.0001}$</td></tr><tr><td>AutoBot + PW**</td><td>412.5 M</td><td>21652.6 k</td><td>0.9827</td><td>0.958</td><td>0.9572</td><td>0.98</td></tr><tr><td>AutoBot + PW**</td><td>102.0 M</td><td>21086.5 k</td><td>0.9775</td><td>0.9503</td><td>0.9436</td><td>0.9747</td></tr><tr><td rowspan="2">Taylor</td><td>746.1 M</td><td>19832.1 k</td><td>0.9833</td><td>0.9581</td><td>0.9592</td><td>0.9809</td></tr><tr><td>± 3.2 M</td><td>$\pm {514.9}\mathrm{\;k}$</td><td>$\pm {0.0006}$</td><td>$\pm {0.0006}$</td><td>$\pm {0.0035}$</td><td>$\pm {0.0003}$</td></tr><tr><td 
rowspan="2">Taylor</td><td>379.5 M</td><td>19171.8 k</td><td>0.9829</td><td>0.9582</td><td>0.9559</td><td>0.9808</td></tr><tr><td>$\pm {3.8}\mathrm{M}$</td><td>$\pm {421.7}\mathrm{\;k}$</td><td>$\pm {0.0002}$</td><td>$\pm {0.0009}$</td><td>$\pm {0.0016}$</td><td>$\pm {0.0004}$</td></tr><tr><td rowspan="2">Taylor</td><td>197.9 M</td><td>18949.7 k</td><td>0.9821</td><td>0.9559</td><td>0.9568</td><td>0.9795</td></tr><tr><td>$\pm {3.2}\mathrm{M}$</td><td>$\pm {230.9}\mathrm{\;k}$</td><td>$\pm {0.0008}$</td><td>$\pm {0.0015}$</td><td>$\pm {0.0032}$</td><td>$\pm {0.0009}$</td></tr><tr><td rowspan="2">Taylor</td><td>107.8 M</td><td>18805.9 k</td><td>0.9804</td><td>0.9542</td><td>0.9546</td><td>0.9776</td></tr><tr><td>$\pm {2.5}\mathrm{M}$</td><td>± 1.2 k</td><td>$\pm {0.0011}$</td><td>$\pm {0.0008}$</td><td>$\pm {0.0041}$</td><td>$\pm {0.001}$</td></tr><tr><td rowspan="2">Taylor</td><td>63.3 M</td><td>18735.3 k</td><td>0.9788</td><td>0.9524</td><td>0.951</td><td>0.9758</td></tr><tr><td>$\pm {1.1}\mathrm{M}$</td><td>$\pm {115.1}\mathrm{\;k}$</td><td>$\pm {0.0002}$</td><td>$\pm {0.0004}$</td><td>$\pm {0.0026}$</td><td>$\pm {0.0007}$</td></tr><tr><td rowspan="2">Taylor + PW</td><td>741.4 M</td><td>19416.6 k</td><td>0.9831</td><td>0.958</td><td>0.9575</td><td>0.9809</td></tr><tr><td>$\pm {3.3}\mathrm{M}$</td><td>$\pm {467.6}\mathrm{\;k}$</td><td>$\pm {0.0002}$</td><td>$\pm {0.001}$</td><td>$\pm {0.0024}$</td><td>$\pm {0.0003}$</td></tr><tr><td rowspan="2">Taylor + PW</td><td>379.1 M</td><td>19099.7 k</td><td>0.9827</td><td>0.9563</td><td>0.9582</td><td>0.9803</td></tr><tr><td>$\pm {3.3}\mathrm{M}$</td><td>± 515.6 k</td><td>$\pm {0.0009}$</td><td>$\pm {0.0012}$</td><td>$\pm {0.002}$</td><td>$\pm {0.0012}$</td></tr><tr><td rowspan="2">Taylor + PW</td><td>194.6 M</td><td>18746.9 k</td><td>0.9815</td><td>0.9549</td><td>0.9564</td><td>0.9788</td></tr><tr><td>$\pm {3.0}\mathrm{M}$</td><td>$\pm {817.5}\mathrm{\;k}$</td><td>$\pm {0.0009}$</td><td>$\pm {0.0018}$</td><td>$\pm 
{0.0024}$</td><td>$\pm {0.001}$</td></tr><tr><td rowspan="2">Taylor + PW</td><td>107.8 M</td><td>18403.9 k</td><td>0.9797</td><td>0.9534</td><td>0.9505</td><td>0.9769</td></tr><tr><td>$\pm {0.4}\mathrm{M}$</td><td>$\pm {878.4}\mathrm{\;k}$</td><td>$\pm {0.0014}$</td><td>$\pm {0.0019}$</td><td>$\pm {0.0036}$</td><td>$\pm {0.0014}$</td></tr><tr><td rowspan="2">Taylor + PW</td><td>61.7 M</td><td>18132.0 k</td><td>0.9782</td><td>0.9507</td><td>0.9515</td><td>0.975</td></tr><tr><td>± 1.4 M</td><td>$\pm {418.7}\mathrm{\;k}$</td><td>$\pm {0.0006}$</td><td>$\pm {0.0009}$</td><td>$\pm {0.0042}$</td><td>$\pm {0.0005}$</td></tr><tr><td rowspan="2">Random*</td><td>770.0 M</td><td>43645.0 k</td><td>0.9814</td><td>0.9564</td><td>0.9492</td><td>0.9795</td></tr><tr><td>$\pm {1.0}\mathrm{M}$</td><td>$\pm {773.7}\mathrm{\;k}$</td><td>$\pm {0.0004}$</td><td>$\pm {0.0011}$</td><td>$\pm {0.0002}$</td><td>$\pm {0.0004}$</td></tr><tr><td rowspan="2">Random</td><td>398.7 M</td><td>36418.8 k</td><td>0.9819</td><td>0.9552</td><td>0.9542</td><td>0.9795</td></tr><tr><td>$\pm {0.6}\mathrm{M}$</td><td>$\pm {611.7}\mathrm{\;k}$</td><td>$\pm {0.0003}$</td><td>$\pm {0.0007}$</td><td>$\pm {0.0027}$</td><td>$\pm {0.0004}$</td></tr><tr><td>Random**</td><td>213.7 M</td><td>32481.6 k</td><td>0.9799</td><td>0.9537</td><td>0.9471</td><td>0.9777</td></tr><tr><td rowspan="2">Random*</td><td>117.4 M</td><td>26650.8 k</td><td>0.9765</td><td>0.9508</td><td>0.9308</td><td>0.9747</td></tr><tr><td>$\pm {0.1}\mathrm{M}$</td><td>± 146.0 k</td><td>$\pm {0.0002}$</td><td>$\pm {0.0007}$</td><td>$\pm {0.0012}$</td><td>$\pm {0.0}$</td></tr><tr><td rowspan="2">Random*</td><td>67.9 M</td><td>22757.6 k</td><td>0.9753</td><td>0.9486</td><td>0.9333</td><td>0.973</td></tr><tr><td>$\pm {0.1}\mathrm{M}$</td><td>$\pm {422.0}\mathrm{\;k}$</td><td>$\pm {0.0001}$</td><td>$\pm {0.0008}$</td><td>$\pm {0.0039}$</td><td>$\pm {0.0004}$</td></tr></table>
* indicates one of the three trials failed; ** indicates two of the three trials failed.
Table 7: Pruning performance (mean $\pm$ standard deviation) for ResNet-34 on the Fitzpatrick17k dataset
<table><tr><td rowspan="2">Pruning Method</td><td rowspan="2">FLOPS</td><td rowspan="2">Parameters</td><td rowspan="2">Accuracy</td><td colspan="3">ROC-AUC</td></tr><tr><td>All</td><td>Medium</td><td>Dark</td></tr><tr><td>Unpruned</td><td>3682.0 M</td><td>21286.2 k</td><td>0.8023</td><td>0.7896</td><td>0.819</td><td>0.7375</td></tr><tr><td rowspan="2">AutoBot</td><td>2549.2 M</td><td>9425.2 k</td><td>0.7852</td><td>0.7852</td><td>0.8007</td><td>0.7244</td></tr><tr><td>$\pm {53.6}\mathrm{M}$</td><td>± 178.6 k</td><td>$\pm {0.0114}$</td><td>$\pm {0.0138}$</td><td>$\pm {0.0134}$</td><td>$\pm {0.007}$</td></tr><tr><td rowspan="2">AutoBot</td><td>1285.0 M</td><td>4263.8 k</td><td>0.7375</td><td>0.7535</td><td>0.7514</td><td>0.6832</td></tr><tr><td>$\pm {20.4}\mathrm{M}$</td><td>$\pm {358.1}\mathrm{\;k}$</td><td>$\pm {0.0043}$</td><td>$\pm {0.0016}$</td><td>$\pm {0.0038}$</td><td>$\pm {0.0086}$</td></tr><tr><td rowspan="2">AutoBot</td><td>654.5 M</td><td>1163.9 k</td><td>0.6565</td><td>0.6792</td><td>0.6661</td><td>0.6172</td></tr><tr><td>$\pm {36.1}\mathrm{M}$</td><td>$\pm {282.1}\mathrm{\;k}$</td><td>$\pm {0.0129}$</td><td>$\pm {0.0144}$</td><td>$\pm {0.009}$</td><td>$\pm {0.0272}$</td></tr><tr><td rowspan="2">AutoBot</td><td>375.1 M</td><td>978.9 k</td><td>0.653</td><td>0.6816</td><td>0.6638</td><td>0.6154</td></tr><tr><td>$\pm {26.5}\mathrm{M}$</td><td>$\pm {78.5}\mathrm{\;k}$</td><td>$\pm {0.0165}$</td><td>$\pm {0.0157}$</td><td>$\pm {0.019}$</td><td>$\pm {0.0061}$</td></tr><tr><td rowspan="2">AutoBot AutoBot + PW</td><td>138.3 M</td><td>${24.0}\mathrm{\;k}$</td><td>0.6127</td><td>0.7421</td><td>0.6189</td><td>0.5917</td></tr><tr><td>$\pm {6.5}\mathrm{M}$ 2327.3 M</td><td>$\pm {5.7}\mathrm{\;k}$ 7091.0 k</td><td>$\pm {0.0171}$ 0.7851</td><td>$\pm {0.0111}$ 0.7714</td><td>$\pm {0.018}$ 0.7989</td><td>$\pm {0.0159}$ 0.7354</td></tr><tr><td>AutoBot + PW</td><td>$\pm {210.8}\mathrm{M}$ 1280.8 M</td><td>$\pm {2118.5}\mathrm{\;k}$ 3944.4 k</td><td>$\pm {0.027}$ 
0.7439</td><td>$\pm {0.0171}$ 0.7522</td><td>$\pm {0.0288}$ 0.7577</td><td>$\pm {0.0212}$ 0.6944</td></tr><tr><td>AutoBot + PW</td><td>$\pm {57.0}\mathrm{M}$ 684.4 M</td><td>$\pm {773.3}\mathrm{\;k}$ 1343.4 k</td><td>$\pm {0.0096}$ 0.682</td><td>$\pm {0.0132}$ 0.7075</td><td>$\pm {0.011}$ 0.6939</td><td>$\pm {0.0091}$ 0.642</td></tr><tr><td>AutoBot + PW</td><td>$\pm {66.4}\mathrm{M}$ 390.1 M</td><td>$\pm {552.7}\mathrm{\;k}$ 1157.2 k</td><td>$\pm {0.0242}$ 0.6682</td><td>$\pm {0.0216}$ 0.6924</td><td>$\pm {0.0257}$ 0.6802</td><td>$\pm {0.0225}$ 0.6217</td></tr><tr><td/><td>$\pm {52.4}\mathrm{M}$</td><td>$\pm {187.2}\mathrm{\;k}$</td><td>$\pm {0.0149}$</td><td>$\pm {0.0166}$</td><td>$\pm {0.0162}$</td><td>$\pm {0.0149}$</td></tr><tr><td rowspan="2">AutoBot + PW</td><td>159.6 M</td><td>194.3 k</td><td>0.6491</td><td>0.7049</td><td>0.659</td><td>0.6086</td></tr><tr><td>$\pm {25.1}\mathrm{M}$</td><td>± 199.5 k</td><td>$\pm {0.0117}$</td><td>$\pm {0.0236}$</td><td>$\pm {0.0175}$</td><td>$\pm {0.0144}$</td></tr><tr><td rowspan="2">Taylor</td><td>2219.5 M</td><td>6614.4 k</td><td>0.7897</td><td>0.7831</td><td>0.8056</td><td>0.729</td></tr><tr><td>$\pm {12.1}\mathrm{M}$</td><td>± 90.9 k</td><td>$\pm {0.0137}$</td><td>$\pm {0.0144}$</td><td>$\pm {0.012}$</td><td>$\pm {0.0242}$</td></tr><tr><td rowspan="2">Taylor</td><td>1273.0 M</td><td>2271.0 k</td><td>0.7712</td><td>0.7744</td><td>0.7856</td><td>0.718</td></tr><tr><td>$\pm {14.3}\mathrm{M}$</td><td>$\pm {60.2}\mathrm{\;k}$</td><td>$\pm {0.0116}$</td><td>$\pm {0.0078}$</td><td>$\pm {0.0134}$</td><td>$\pm {0.0126}$</td></tr><tr><td rowspan="2">Taylor</td><td>732.5 M</td><td>879.8 k</td><td>0.73</td><td>0.7492</td><td>0.743</td><td>0.6836</td></tr><tr><td>$\pm {7.1}\mathrm{M}$</td><td>± 22.5 k</td><td>$\pm {0.0184}$</td><td>$\pm {0.0073}$</td><td>$\pm {0.0176}$</td><td>$\pm {0.0232}$</td></tr><tr><td rowspan="2">Taylor</td><td>381.8 
M</td><td>${333.0}\mathrm{\;k}$</td><td>0.6877</td><td>0.7061</td><td>0.7015</td><td>0.6414</td></tr><tr><td>$\pm {6.9}\mathrm{M}$</td><td>± 21.8 k</td><td>$\pm {0.0166}$</td><td>$\pm {0.0117}$</td><td>$\pm {0.018}$</td><td>$\pm {0.015}$</td></tr><tr><td rowspan="2">Taylor</td><td>178.8 M</td><td>127.9 k</td><td>0.6565</td><td>0.6961</td><td>0.6659</td><td>0.6241</td></tr><tr><td>$\pm {7.8}\mathrm{M}$</td><td>± 16.8 k</td><td>$\pm {0.0073}$</td><td>$\pm {0.0194}$</td><td>$\pm {0.0094}$</td><td>$\pm {0.0099}$</td></tr><tr><td rowspan="2">Taylor + PW</td><td>2220.1 M</td><td>6825.4 k</td><td>0.8005</td><td>0.7855</td><td>0.8169</td><td>0.7367</td></tr><tr><td>$\pm {10.2}\mathrm{M}$</td><td>± 119.2 k</td><td>$\pm {0.0134}$</td><td>$\pm {0.0074}$</td><td>$\pm {0.0134}$</td><td>$\pm {0.0146}$</td></tr><tr><td rowspan="2">Taylor + PW</td><td>1285.5 M</td><td>2394.2 k</td><td>0.758</td><td>0.753</td><td>0.7758</td><td>0.6937</td></tr><tr><td>$\pm {12.6}\mathrm{M}$</td><td>± 58.2 k</td><td>$\pm {0.0086}$</td><td>$\pm {0.0042}$</td><td>$\pm {0.0071}$</td><td>$\pm {0.0173}$</td></tr><tr><td rowspan="2">Taylor + PW</td><td>721.2 M</td><td>847.1 k</td><td>0.7063</td><td>0.7283</td><td>0.7228</td><td>0.6502</td></tr><tr><td>$\pm {9.3}\mathrm{M}$</td><td>$\pm {59.5}\mathrm{\;k}$</td><td>$\pm {0.0192}$</td><td>$\pm {0.0172}$</td><td>$\pm {0.0198}$</td><td>$\pm {0.0213}$</td></tr><tr><td rowspan="2">Taylor + PW</td><td>377.1 M</td><td>${329.9}\mathrm{\;k}$</td><td>0.6645</td><td>0.6991</td><td>0.6766</td><td>0.6204</td></tr><tr><td>$\pm {10.3}\mathrm{M}$</td><td>± 17.8 k</td><td>$\pm {0.0195}$</td><td>$\pm {0.0148}$</td><td>$\pm {0.0193}$</td><td>$\pm {0.0244}$</td></tr><tr><td rowspan="2">Taylor + PW</td><td>177.5 M</td><td>125.7 k</td><td>0.6452</td><td>0.7199</td><td>0.6571</td><td>0.6006</td></tr><tr><td>$\pm {8.7}\mathrm{M}$</td><td>± 17.7 k</td><td>$\pm {0.0198}$</td><td>$\pm {0.023}$</td><td>$\pm {0.0201}$</td><td>$\pm {0.0177}$</td></tr><tr><td 
rowspan="2">Random</td><td>2296.8 M</td><td>13762.6 k</td><td>0.79</td><td>0.7904</td><td>0.8071</td><td>0.7239</td></tr><tr><td>$\pm {33.0}\mathrm{M}$</td><td>± 114.3 k</td><td>$\pm {0.0033}$</td><td>$\pm {0.01}$</td><td>$\pm {0.005}$</td><td>$\pm {0.0086}$</td></tr><tr><td rowspan="2">Random Random</td><td>1291.3 M</td><td>7771.6 k</td><td>0.7191</td><td>0.7572</td><td>0.7332</td><td>0.6694</td></tr><tr><td>$\pm {35.8}\mathrm{\;M}$ 664.0 M</td><td>$\pm {333.6}\mathrm{\;k}$ ${4011.0}\mathrm{\;k}$</td><td>$\pm {0.0042}$ 0.6884</td><td>$\pm {0.0068}$ 0.7255</td><td>$\pm {0.0045}$ 0.6989</td><td>$\pm {0.0026}$ 0.6513</td></tr><tr><td>Random</td><td>$\pm {36.7}\mathrm{M}$ 350.3 M</td><td>$\pm {254.9}\mathrm{\;k}$ 2122.7 k</td><td>$\pm {0.0125}$ 0.6419</td><td>$\pm {0.0213}$ 0.7147</td><td>$\pm {0.0141}$ 0.6505</td><td>$\pm {0.0209}$ 0.6113</td></tr><tr><td/><td>$\pm {10.1}\mathrm{M}$</td><td>$\pm {113.5}\mathrm{\;k}$</td><td>$\pm {0.0068}$</td><td>$\pm {0.0151}$</td><td>$\pm {0.0075}$</td><td>$\pm {0.0027}$</td></tr><tr><td rowspan="2">Random</td><td>173.6 M</td><td>925.2 k</td><td>0.6236</td><td>0.6652</td><td>0.6344</td><td>0.583</td></tr><tr><td>$\pm {2.8}\mathrm{M}$</td><td>$\pm {86.2}\mathrm{\;k}$</td><td>$\pm {0.0184}$</td><td>$\pm {0.0156}$</td><td>$\pm {0.0182}$</td><td>$\pm {0.0202}$</td></tr></table>
Table 8: Pruning performance (mean $\pm$ standard deviation) for EfficientNet V2 Medium on the Fitzpatrick17k dataset
<table><tr><td rowspan="2">Pruning Method</td><td rowspan="2">FLOPS</td><td rowspan="2">Parameters</td><td rowspan="2">Accuracy</td><td colspan="3">ROC-AUC</td></tr><tr><td>All</td><td>Medium</td><td>Dark</td></tr><tr><td>Unpruned</td><td>5464.7 M</td><td>52862.2 k</td><td>0.831</td><td>0.8218</td><td>0.8516</td><td>0.7524</td></tr><tr><td rowspan="2">AutoBot</td><td>4892.1 M</td><td>41883.3 k</td><td>0.8202</td><td>0.8168</td><td>0.8405</td><td>0.7424</td></tr><tr><td>$\pm {4.0}\mathrm{M}$</td><td>$\pm {939.3}\mathrm{\;k}$</td><td>$\pm {0.0062}$</td><td>$\pm {0.0086}$</td><td>$\pm {0.0082}$</td><td>$\pm {0.0053}$</td></tr><tr><td rowspan="2">AutoBot</td><td>4125.9 M</td><td>41931.5 k</td><td>0.7196</td><td>0.755</td><td>0.7345</td><td>0.6632</td></tr><tr><td>$\pm {254.6}\mathrm{M}$</td><td>$\pm {3481.1}\mathrm{\;k}$</td><td>$\pm {0.0794}$</td><td>$\pm {0.038}$</td><td>$\pm {0.0822}$</td><td>$\pm {0.0709}$</td></tr><tr><td rowspan="2">AutoBot</td><td>3385.7 M</td><td>42110.8 k</td><td>0.7253</td><td>0.7577</td><td>0.7383</td><td>0.6779</td></tr><tr><td>$\pm {91.6}\mathrm{M}$</td><td>± 523.8 k</td><td>$\pm {0.0292}$</td><td>$\pm {0.0175}$</td><td>$\pm {0.0323}$</td><td>$\pm {0.0164}$</td></tr><tr><td rowspan="2">AutoBot</td><td>1771.0 M</td><td>20118.8 k</td><td>0.65</td><td>0.7041</td><td>0.6617</td><td>0.604</td></tr><tr><td>$\pm {94.8}\mathrm{\;M}$</td><td>$\pm {1568.1}\mathrm{\;k}$</td><td>$\pm {0.021}$</td><td>$\pm {0.0192}$</td><td>$\pm {0.0254}$</td><td>$\pm {0.0196}$</td></tr><tr><td rowspan="2">AutoBot AutoBot + PW</td><td>985.7 M</td><td>13211.3 k</td><td>0.6357</td><td>0.7123</td><td>0.6446</td><td>0.6033</td></tr><tr><td>$\pm {33.3}\mathrm{\;M}$ 4946.1 M</td><td>$\pm {278.8}\mathrm{\;k}$ 41855.0 k</td><td>$\pm {0.0259}$ 0.8251</td><td>$\pm {0.0101}$ 0.8164</td><td>$\pm {0.0284}$ 0.8441</td><td>$\pm {0.012}$ 0.7513</td></tr><tr><td>AutoBot + PW</td><td>$\pm {65.8}\mathrm{\;M}$ 4314.5 M</td><td>$\pm {896.3}\mathrm{\;k}$ 40312.4 k</td><td>$\pm {0.0024}$ 
0.7969</td><td>$\pm {0.0019}$ 0.7972</td><td>$\pm {0.0028}$ 0.8116</td><td>$\pm {0.001}$ 0.7402</td></tr><tr><td/><td>$\pm {214.6}\mathrm{M}$</td><td>$\pm {4882.1}\mathrm{\;k}$</td><td>$\pm {0.0353}$</td><td>$\pm {0.018}$</td><td>$\pm {0.0406}$</td><td>$\pm {0.0157}$</td></tr><tr><td rowspan="2">AutoBot + PW</td><td>3393.3 M</td><td>42535.4 k</td><td>0.7235</td><td>0.7577</td><td>0.7359</td><td>0.6772</td></tr><tr><td>$\pm {94.2}\mathrm{M}$</td><td>$\pm {937.1}\mathrm{\;k}$</td><td>$\pm {0.0409}$</td><td>$\pm {0.0296}$</td><td>$\pm {0.0406}$</td><td>$\pm {0.0403}$</td></tr><tr><td rowspan="2">AutoBot + PW</td><td>1961.2 M</td><td>21341.5 k</td><td>0.6474</td><td>0.7007</td><td>0.6579</td><td>0.6079</td></tr><tr><td>$\pm {74.7}\mathrm{M}$</td><td>$\pm {1960.4}\mathrm{\;k}$</td><td>$\pm {0.0177}$</td><td>$\pm {0.0081}$</td><td>$\pm {0.0204}$</td><td>$\pm {0.0115}$</td></tr><tr><td rowspan="2">AutoBot + PW</td><td>913.7 M</td><td>11856.2 k</td><td>0.6411</td><td>0.7315</td><td>0.6544</td><td>0.5939</td></tr><tr><td>$\pm {161.7}\mathrm{M}$</td><td>$\pm {2435.1}\mathrm{\;k}$</td><td>$\pm {0.0094}$</td><td>$\pm {0.0124}$</td><td>$\pm {0.0115}$</td><td>$\pm {0.0059}$</td></tr><tr><td rowspan="2">Taylor</td><td>4826.1 M</td><td>33486.3 k</td><td>0.8366</td><td>0.8277</td><td>0.8567</td><td>0.7586</td></tr><tr><td>$\pm {1.5}\mathrm{M}$</td><td>$\pm {46.3}\mathrm{\;k}$</td><td>$\pm {0.0029}$</td><td>$\pm {0.0015}$</td><td>$\pm {0.0039}$</td><td>$\pm {0.0004}$</td></tr><tr><td rowspan="2">Taylor</td><td>3907.0 M</td><td>17731.0 k</td><td>0.8274</td><td>0.8168</td><td>0.8442</td><td>0.7617</td></tr><tr><td>$\pm {0.8}\mathrm{M}$</td><td>$\pm {33.4}\mathrm{\;k}$</td><td>$\pm {0.0018}$</td><td>$\pm {0.0026}$</td><td>$\pm {0.0011}$</td><td>$\pm {0.0029}$</td></tr><tr><td rowspan="2">Taylor</td><td>3057.6 M</td><td>8477.2 k</td><td>0.8071</td><td>0.7909</td><td>0.8267</td><td>0.7324</td></tr><tr><td>$\pm {2.0}\mathrm{M}$</td><td>± 61.2 k</td><td>$\pm {0.0048}$</td><td>$\pm 
{0.0049}$</td><td>$\pm {0.0073}$</td><td>$\pm {0.0061}$</td></tr><tr><td rowspan="2">Taylor</td><td>1515.9 M</td><td>1448.1 k</td><td>0.705</td><td>0.696</td><td>0.7143</td><td>0.6745</td></tr><tr><td>$\pm {13.0}\mathrm{M}$</td><td>± 13.3 k</td><td>$\pm {0.0495}$</td><td>$\pm {0.0498}$</td><td>$\pm {0.051}$</td><td>$\pm {0.0404}$</td></tr><tr><td rowspan="2">Taylor</td><td>899.2 M</td><td>668.7 k</td><td>0.6369</td><td>0.7275</td><td>0.6466</td><td>0.6002</td></tr><tr><td>$\pm {21.1}\mathrm{M}$</td><td>± 19.1 k</td><td>$\pm {0.0155}$</td><td>$\pm {0.0028}$</td><td>$\pm {0.0188}$</td><td>$\pm {0.0069}$</td></tr><tr><td rowspan="2">Taylor + PW</td><td>4836.8 M</td><td>33734.0 k</td><td>0.8346</td><td>0.8275</td><td>0.855</td><td>0.757</td></tr><tr><td>$\pm {1.5}\mathrm{M}$</td><td>$\pm {55.8}\mathrm{\;k}$</td><td>$\pm {0.0068}$</td><td>$\pm {0.0041}$</td><td>$\pm {0.0062}$</td><td>$\pm {0.0083}$</td></tr><tr><td rowspan="2">Taylor + PW</td><td>3911.8 M</td><td>17826.1 k</td><td>0.8247</td><td>0.809</td><td>0.8437</td><td>0.7523</td></tr><tr><td>$\pm {2.1}\mathrm{M}$</td><td>$\pm {21.9}\mathrm{\;k}$</td><td>$\pm {0.0019}$</td><td>$\pm {0.0079}$</td><td>$\pm {0.0028}$</td><td>$\pm {0.0024}$</td></tr><tr><td rowspan="2">Taylor + PW</td><td>3051.9 M</td><td>8429.1 k</td><td>0.8021</td><td>0.7592</td><td>0.8207</td><td>0.7305</td></tr><tr><td>$\pm {4.4}\mathrm{M}$</td><td>$\pm {60.3}\mathrm{\;k}$</td><td>$\pm {0.0054}$</td><td>$\pm {0.0149}$</td><td>$\pm {0.0061}$</td><td>$\pm {0.0082}$</td></tr><tr><td rowspan="2">Taylor + PW</td><td>1529.8 M</td><td>${1460.9}\mathrm{\;k}$</td><td>0.7216</td><td>0.747</td><td>0.7356</td><td>0.6676</td></tr><tr><td>$\pm {8.6}\mathrm{M}$</td><td>± 14.9 k</td><td>$\pm {0.0071}$</td><td>$\pm {0.0136}$</td><td>$\pm {0.0047}$</td><td>$\pm {0.0137}$</td></tr><tr><td rowspan="2">Taylor + PW</td><td>890.6 M</td><td>672.0 k</td><td>0.6113</td><td>0.7539</td><td>0.621</td><td>0.5738</td></tr><tr><td>$\pm {12.8}\mathrm{M}$</td><td>$\pm 
{8.8}\mathrm{\;k}$</td><td>$\pm {0.0084}$</td><td>$\pm {0.0016}$</td><td>$\pm {0.0069}$</td><td>$\pm {0.0184}$</td></tr><tr><td rowspan="2">Random</td><td>4078.8 M</td><td>39661.8 k</td><td>0.7794</td><td>0.7838</td><td>0.7952</td><td>0.7207</td></tr><tr><td>$\pm {157.9}\mathrm{M}$</td><td>± 539.2 k</td><td>$\pm {0.0424}$</td><td>$\pm {0.0238}$</td><td>$\pm {0.0441}$</td><td>$\pm {0.0351}$</td></tr><tr><td rowspan="2">Random</td><td>3305.1 M</td><td>31406.7 k</td><td>0.7427</td><td>0.7647</td><td>0.7573</td><td>0.6881</td></tr><tr><td>$\pm {6.9}\mathrm{M}$</td><td>$\pm {869.5}\mathrm{\;k}$</td><td>$\pm {0.0202}$</td><td>$\pm {0.012}$</td><td>$\pm {0.0202}$</td><td>$\pm {0.0206}$</td></tr><tr><td rowspan="2">Random</td><td>2819.2 M</td><td>26370.3 k</td><td>0.7185</td><td>0.7451</td><td>0.7321</td><td>0.6692</td></tr><tr><td>$\pm {28.5}\mathrm{M}$</td><td>$\pm {515.0}\mathrm{\;k}$</td><td>$\pm {0.0053}$</td><td>$\pm {0.0142}$</td><td>$\pm {0.007}$</td><td>$\pm {0.0041}$</td></tr><tr><td rowspan="2">Random</td><td>1369.1 M</td><td>13060.2 k</td><td>0.6602</td><td>0.7172</td><td>0.6685</td><td>0.6286</td></tr><tr><td>$\pm {33.3}\mathrm{\;M}$</td><td>$\pm {157.0}\mathrm{\;k}$</td><td>$\pm {0.0192}$</td><td>$\pm {0.0098}$</td><td>$\pm {0.0208}$</td><td>$\pm {0.0123}$</td></tr><tr><td rowspan="2">Random</td><td>681.8 M</td><td>6646.2 k</td><td>0.6241</td><td>0.7162</td><td>0.6362</td><td>0.5767</td></tr><tr><td>$\pm {6.3}\mathrm{M}$</td><td>$\pm {44.0}\mathrm{\;k}$</td><td>$\pm {0.0153}$</td><td>$\pm {0.0501}$</td><td>$\pm {0.0142}$</td><td>$\pm {0.0153}$</td></tr></table>
Table 9: Pruning performance (mean $\pm$ standard deviation) for ResNet-18 on subsets of CelebA
<table><tr><td rowspan="2">Subset</td><td rowspan="2">Pruning Method</td><td rowspan="2">FLOPS</td><td rowspan="2">Parameters</td><td colspan="2">ROC-AUC</td></tr><tr><td>Male</td><td>Non-Male</td></tr><tr><td>Fully Balanced</td><td>Unpruned</td><td>1508.5 M</td><td>11177.0 k</td><td>0.9562</td><td>0.9713</td></tr><tr><td rowspan="2">Fully Balanced Fully Balanced</td><td rowspan="2">AutoBot AutoBot</td><td>285.5 M</td><td>497.9 k</td><td>0.8922</td><td>0.9396</td></tr><tr><td>± 1.7 M 66.9 M</td><td>± 86.7 k 864.8 k</td><td>$\pm {0.031}$ 0.8148</td><td>$\pm {0.0048}$ 0.8863</td></tr><tr><td/><td/><td>$\pm {1.5}\mathrm{M}$</td><td>± 30.9 k</td><td>$\pm {0.0275}$</td><td>$\pm {0.0212}$</td></tr><tr><td>Fully Balanced</td><td>AutoBot</td><td>17.2 M $\pm {3.0}\mathrm{M}$</td><td>196.0 k ± 35.5 k</td><td>0.8188 $\pm {0.0137}$</td><td>0.8764 $\pm {0.0337}$</td></tr><tr><td>Fully Balanced</td><td>Taylor</td><td>247.2 M $\pm {6.6}\mathrm{M}$</td><td>192.7 k ± 15.5 k</td><td>0.9451 $\pm {0.0026}$</td><td>0.9601 $\pm {0.0009}$</td></tr><tr><td>Fully Balanced</td><td>Taylor</td><td>55.7 M $\pm {1.8}\mathrm{M}$</td><td>${31.4}\mathrm{\;k}$ $\pm {2.7}\mathrm{\;k}$</td><td>0.9265 $\pm {0.0097}$</td><td>0.9576 $\pm {0.004}$</td></tr><tr><td rowspan="2">Fully Balanced</td><td>Taylor</td><td>12.4 M</td><td>5.7 k</td><td>0.8973</td><td>0.9469</td></tr><tr><td/><td>$\pm {0.4}\mathrm{M}$</td><td>$\pm {0.4}\mathrm{\;k}$</td><td>$\pm {0.0201}$</td><td>$\pm {0.008}$</td></tr><tr><td>Unequal Male/Non- Male Split</td><td>Unpruned</td><td>1508.5 M</td><td>11177.0 k</td><td>0.9479</td><td>0.9732</td></tr><tr><td>Unequal Male/Non- Male Split</td><td>AutoBot</td><td>243.4 M $\pm {1.6}\mathrm{M}$</td><td>1868.5 k ±80.0 k</td><td>0.9098 $\pm {0.0102}$</td><td>0.952 $\pm {0.0019}$</td></tr><tr><td>Unequal Male/Non-</td><td rowspan="2">AutoBot</td><td>66.5 M</td><td>923.6 k</td><td>0.838</td><td>0.9308</td></tr><tr><td>Male Split</td><td>$\pm {3.0}\mathrm{M}$</td><td>± 16.3 k</td><td>$\pm 
{0.0226}$</td><td>$\pm {0.007}$</td></tr><tr><td>Unequal Male/Non-</td><td rowspan="2">AutoBot</td><td>16.0 M</td><td>174.6 k</td><td>0.8433</td><td>0.9255</td></tr><tr><td>Male Split</td><td>± 1.2 M</td><td>± 14.6 k</td><td>$\pm {0.0209}$</td><td>$\pm {0.0057}$</td></tr><tr><td>Unequal Male/Non-</td><td>Taylor</td><td>252.9 M</td><td>205.8 k</td><td>0.9246</td><td>0.9611</td></tr><tr><td>Male Split</td><td/><td>$\pm {4.4}\mathrm{M}$</td><td>± 20.1 k</td><td>$\pm {0.0086}$</td><td>$\pm {0.0065}$</td></tr><tr><td>Unequal Male/Non- Male Split Unequal Male/Non-</td><td>Taylor Taylor</td><td>56.8 M ± 1.8 M 11.5 M</td><td>${30.8}\mathrm{\;k}$ $\pm {0.3}\mathrm{\;k}$ 5.9 k</td><td>0.9453 $\pm {0.0083}$ 0.9178</td><td>0.971 $\pm {0.0021}$ 0.9675</td></tr><tr><td>Male Split</td><td/><td>$\pm {1.8}\mathrm{M}$</td><td>$\pm {0.1}\mathrm{k}$</td><td>$\pm {0.0099}$</td><td>$\pm {0.0045}$</td></tr><tr><td>Unequal Label Split</td><td>Unpruned</td><td>1508.5 M</td><td>11177.0 k</td><td>0.9183</td><td>0.958</td></tr><tr><td>Unequal Label Split Unequal Label Split</td><td>AutoBot AutoBot</td><td>281.8 M $\pm {3.4}\mathrm{M}$ 70.4 M</td><td>723.8 k $\pm {38.7}\mathrm{\;k}$ ${915.0}\mathrm{\;k}$</td><td>0.8722 $\pm {0.0117}$ 0.8255</td><td>0.9418 $\pm {0.0065}$ 0.9151</td></tr><tr><td/><td/><td>$\pm {4.9}\mathrm{M}$</td><td>$\pm {95.9}\mathrm{\;k}$</td><td>$\pm {0.052}$</td><td>$\pm {0.0123}$</td></tr><tr><td rowspan="2">Unequal Label Split</td><td>AutoBot</td><td>17.1 M</td><td>208.4 k</td><td>0.8109</td><td>0.8994</td></tr><tr><td/><td>$\pm {1.1}\mathrm{M}$</td><td>± 44.6 k</td><td>$\pm {0.0147}$</td><td>$\pm {0.0102}$</td></tr><tr><td rowspan="2">Unequal Label Split Unequal Label Split</td><td>Taylor</td><td>235.8 M</td><td>187.5 k</td><td>0.8551</td><td>0.9015</td></tr><tr><td>Taylor</td><td>$\pm {3.1}\mathrm{M}$ 53.7 M</td><td>± 6.2 k 22.5 k</td><td>$\pm {0.0242}$ 0.9075</td><td>$\pm {0.0189}$ 0.9576</td></tr><tr><td/><td/><td>$\pm {4.9}\mathrm{M}$</td><td>$\pm 
{0.7}\mathrm{\;k}$</td><td>$\pm {0.057}$</td><td>$\pm {0.0262}$</td></tr><tr><td rowspan="2">Unequal Label Split</td><td rowspan="2">Taylor</td><td>9.8 M</td><td>4.1 k</td><td>0.8143</td><td>0.9193</td></tr><tr><td>$\pm {1.0}\mathrm{M}$</td><td>$\pm {0.5}\mathrm{k}$</td><td>$\pm {0.0758}$</td><td>$\pm {0.037}$</td></tr></table>
Table 10: Pruning performance (mean $\pm$ standard deviation) for ResNet-18 on CelebA when elements of the PW loss are applied independently
<table><tr><td rowspan="2">Area of Mod- ification</td><td rowspan="2">Pruning Method</td><td rowspan="2">FLOPS</td><td rowspan="2">Parameters</td><td colspan="2">ROC-AUC</td></tr><tr><td>Male</td><td>Non-Male</td></tr><tr><td rowspan="2">Pruning Only</td><td rowspan="2">AutoBot + Weights</td><td>111.9 M</td><td>188.2 k</td><td>0.8547</td><td>0.9521</td></tr><tr><td>$\pm {3.5}\mathrm{M}$</td><td>$\pm {59.2}\mathrm{\;k}$</td><td>$\pm {0.0354}$</td><td>$\pm {0.0041}$</td></tr><tr><td>Pruning Only</td><td>AutoBot + Weights</td><td>47.6 M</td><td>278.5 k</td><td>0.846</td><td>0.9481</td></tr><tr><td>Pruning Only</td><td>AutoBot + Weights</td><td>$\pm {6.2}\mathrm{M}$ 49.3 M</td><td>$\pm {62.1}\mathrm{\;k}$ 729.4 k</td><td>$\pm {0.025}$ 0.8381</td><td>$\pm {0.0089}$ 0.9312</td></tr><tr><td/><td/><td>$\pm {3.3}\mathrm{M}$</td><td>± 41.3 k</td><td>$\pm {0.0071}$</td><td>$\pm {0.0022}$</td></tr><tr><td>Pruning Only</td><td>AutoBot + Weights</td><td>23.0 M $\pm {12.4}\mathrm{M}$</td><td>350.8 k $\pm {219.1}\mathrm{\;k}$</td><td>0.811 $\pm {0.0327}$</td><td>0.9256 $\pm {0.0183}$</td></tr><tr><td>Pruning Only Pruning Only</td><td>AutoBot + Weights AutoBot + Corr.</td><td>30.2 M $\pm {1.9}\mathrm{M}$ 108.1 M</td><td>474.6 k $\pm {30.9}\mathrm{\;k}$ ${215.0}\mathrm{\;k}$</td><td>0.7873 $\pm {0.0246}$ 0.9056</td><td>0.9343 $\pm {0.0019}$ 0.9625</td></tr><tr><td/><td>Soft-Labels</td><td>$\pm {1.0}\mathrm{M}$</td><td>± 50.5 k</td><td>$\pm {0.0438}$</td><td>$\pm {0.0089}$</td></tr><tr><td>Pruning Only</td><td>AutoBot + Corr.</td><td>67.7 M</td><td>916.2 k</td><td>0.8812</td><td>0.9566</td></tr><tr><td/><td>Soft-Labels</td><td>$\pm {2.8}\mathrm{M}$</td><td>$\pm {25.4}\mathrm{\;k}$</td><td>$\pm {0.0587}$</td><td>$\pm {0.0115}$</td></tr><tr><td>Pruning Only</td><td>AutoBot + Corr.</td><td>34.9 M</td><td>476.0 k</td><td>0.8655</td><td>0.9517</td></tr><tr><td/><td>Soft-Labels</td><td>$\pm {2.4}\mathrm{M}$</td><td>$\pm {33.7}\mathrm{\;k}$</td><td>$\pm {0.0471}$</td><td>$\pm 
{0.0115}$</td></tr><tr><td>Pruning Only</td><td>AutoBot + Corr.</td><td>17.2 M</td><td>199.0 k</td><td>0.8506</td><td>0.9454</td></tr><tr><td/><td>Soft-Labels</td><td>$\pm {1.4}\mathrm{M}$</td><td>± 11.0 k</td><td>$\pm {0.0513}$</td><td>$\pm {0.0163}$</td></tr><tr><td>Pruning Only</td><td>AutoBot + Corr.</td><td>8.9 M</td><td>84.7 k</td><td>0.8746</td><td>0.9475</td></tr><tr><td/><td>Soft-Labels</td><td>$\pm {1.1}\mathrm{M}$</td><td>$\pm {8.5}\mathrm{\;k}$</td><td>$\pm {0.0202}$</td><td>$\pm {0.011}$</td></tr><tr><td>Pruning and</td><td>AutoBot + Weights</td><td>112.2 M</td><td>194.8 k</td><td>0.9035</td><td>0.9649</td></tr><tr><td>Retraining</td><td/><td>$\pm {2.7}\mathrm{M}$</td><td>$\pm {47.8}\mathrm{\;k}$</td><td>$\pm {0.058}$</td><td>$\pm {0.0142}$</td></tr><tr><td>Pruning and</td><td>AutoBot + Weights</td><td>46.7 M</td><td>255.4 k</td><td>0.8992</td><td>0.9626</td></tr><tr><td>Retraining</td><td/><td>$\pm {4.9}\mathrm{M}$</td><td>± 50.8 k</td><td>$\pm {0.0605}$</td><td>$\pm {0.0168}$</td></tr><tr><td>Pruning and</td><td>AutoBot + Weights</td><td>47.7 M</td><td>730.3 k</td><td>0.8738</td><td>0.9483</td></tr><tr><td>Retraining</td><td/><td>$\pm {2.9}\mathrm{M}$</td><td>± 29.5 k</td><td>$\pm {0.04}$</td><td>$\pm {0.019}$</td></tr><tr><td>Pruning and</td><td>AutoBot + Weights</td><td>31.5 M</td><td>486.7 k</td><td>0.8663</td><td>0.9462</td></tr><tr><td>Retraining</td><td/><td>$\pm {12.2}\mathrm{M}$</td><td>$\pm {205.6}\mathrm{\;k}$</td><td>$\pm {0.064}$</td><td>$\pm {0.0254}$</td></tr><tr><td>Pruning and</td><td>AutoBot + Weights</td><td>25.0 M</td><td>387.2 k</td><td>0.8556</td><td>0.9492</td></tr><tr><td>Retraining</td><td/><td>$\pm {9.5}\mathrm{M}$</td><td>$\pm {167.9}\mathrm{\;k}$</td><td>$\pm {0.0765}$</td><td>$\pm {0.0164}$</td></tr><tr><td>Pruning and</td><td>AutoBot + Corr.</td><td>108.0 M</td><td>222.1 k</td><td>0.9441</td><td>0.9705</td></tr><tr><td>Retraining</td><td>Soft-Labels</td><td>$\pm {1.3}\mathrm{M}$</td><td>$\pm 
{57.6}\mathrm{\;k}$</td><td>$\pm {0.0025}$</td><td>$\pm {0.0007}$</td></tr><tr><td>Pruning and</td><td>AutoBot + Corr.</td><td>68.3 M</td><td>928.8 k</td><td>0.9213</td><td>0.9647</td></tr><tr><td>Retraining</td><td>Soft-Labels</td><td>$\pm {4.0}\mathrm{M}$</td><td>± 16.8 k</td><td>$\pm {0.0022}$</td><td>$\pm {0.0007}$</td></tr><tr><td>Pruning and</td><td>AutoBot + Corr.</td><td>35.2 M</td><td>474.5 k</td><td>0.9069</td><td>0.9617</td></tr><tr><td>Retraining</td><td>Soft-Labels</td><td>$\pm {2.9}\mathrm{M}$</td><td>± 19.8 k</td><td>$\pm {0.0079}$</td><td>$\pm {0.0031}$</td></tr><tr><td>Pruning and</td><td>AutoBot + Corr.</td><td>17.5 M</td><td>199.9 k</td><td>0.8937</td><td>0.9588</td></tr><tr><td>Retraining</td><td>Soft-Labels</td><td>$\pm {0.2}\mathrm{M}$</td><td>± 10.5 k</td><td>$\pm {0.0114}$</td><td>$\pm {0.0026}$</td></tr><tr><td>Pruning and</td><td>AutoBot + Corr.</td><td>9.0 M</td><td>85.3 k</td><td>0.888</td><td>0.9568</td></tr><tr><td>Retraining</td><td>Soft-Labels</td><td>$\pm {1.6}\mathrm{M}$</td><td>± 11.7 k</td><td>$\pm {0.016}$</td><td>$\pm {0.0063}$</td></tr></table>
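The tables above report ROC-AUC separately for each subgroup (e.g. Male vs. Non-Male, or Medium vs. Dark skin tone). As an illustration of how such group-wise metrics can be computed, the sketch below implements ROC-AUC via the rank-based Mann-Whitney U statistic; the helper names (`roc_auc`, `groupwise_auc`) are illustrative and not part of the paper's evaluation code.

```python
import numpy as np

def roc_auc(y_true, y_score):
    """ROC-AUC via the rank-based Mann-Whitney U statistic."""
    y_true = np.asarray(y_true, dtype=bool)
    y_score = np.asarray(y_score, dtype=float)
    # Assign midranks so that tied scores share the average rank.
    order = np.argsort(y_score)
    ranks = np.empty(len(y_score), dtype=float)
    i = 0
    while i < len(y_score):
        j = i
        while j + 1 < len(y_score) and y_score[order[j + 1]] == y_score[order[i]]:
            j += 1
        ranks[order[i:j + 1]] = (i + j) / 2.0 + 1.0
        i = j + 1
    n_pos = y_true.sum()
    n_neg = len(y_true) - n_pos
    return (ranks[y_true].sum() - n_pos * (n_pos + 1) / 2.0) / (n_pos * n_neg)

def groupwise_auc(y_true, y_score, groups):
    """ROC-AUC computed separately for each subgroup label."""
    groups = np.asarray(groups)
    return {g: roc_auc(np.asarray(y_true)[groups == g],
                       np.asarray(y_score)[groups == g])
            for g in np.unique(groups)}
```

Comparing the per-group values returned by `groupwise_auc` before and after pruning is one way to surface the disparate impact these tables measure.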
NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/Y8PmDhBdmv/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,157 @@
| 1 |
+
§ A FAIR LOSS FUNCTION FOR NETWORK PRUNING
|
| 2 |
+
|
| 3 |
+
Anonymous Author(s)
|
| 4 |
+
|
| 5 |
+
Affiliation
|
| 6 |
+
|
| 7 |
+
Address
|
| 8 |
+
|
| 9 |
+
email
|
| 10 |
+
|
| 11 |
+
§ ABSTRACT
|
| 12 |
+
|
| 13 |
+
Model pruning can enable the deployment of neural networks in environments with resource constraints. While pruning may have only a small effect on the overall performance of a model, it can exacerbate existing biases in the model such that subsets of samples see significantly degraded performance. In this paper, we introduce the performance weighted loss function, a simple modified cross-entropy loss function that can be used to limit the introduction of biases during pruning. Experiments using biased classifiers for facial classification and skin-lesion classification tasks demonstrate that the proposed method is a simple and effective tool that can enable existing pruning methods to be used in fairness-sensitive contexts.
|
| 14 |
+
|
| 15 |
+
§ 1 INTRODUCTION
|
| 16 |
+
|
| 17 |
+
Deep learning models are large, requiring millions of operations to make an inference [1]. Deploying large neural networks to environments with limited computational resources, such as mobile and embedded devices, may be infeasible.
|
| 18 |
+
|
| 19 |
+
Pruning is a simple and common method for reducing the size of a neural network [2]. It involves identifying parameters that do not significantly affect the model's output and removing them from the network. Pruning enables the deployment of performant neural networks in resource-constrained environments [3, 4]. However, recent research has shown that while the overall accuracy of a model may be maintained as it is compressed, pruning can exacerbate existing model biases, disproportionately affecting disadvantaged groups [5]. Pruning methods that are designed to preserve overall model performance may not prioritize the preservation of parameters that are only important for a small subset of samples.
|
| 20 |
+
|
| 21 |
+
This effect has significant implications for the implementation of pruning in real-world situations. Biases have been observed in artificial intelligence systems such as those used to classify chest X-ray images [6], recognize faces [7] and screen resumes [8]. Biases in models can increase the risk of unfair outcomes, preventing the implementation of the model. If pruning exacerbated a model's biases, it could increase the risk of unfair outcomes or limit the deployment of the pruned model. It is therefore important to prune in a manner that does not aggravate a model's biases.
|
| 22 |
+
|
| 23 |
+
In this paper we propose the performance weighted loss function as a simple method for boosting the fairness of data-driven methods for pruning convolutional filters in convolutional neural network image classifiers. The goal of our method is to enable the pruning of a significant number of model parameters without significantly exacerbating existing biases. The loss function consists of two small tweaks to the standard cross-entropy loss function to prioritize the model's performance for poorly-classified samples over well-classified samples. These tweaks can be used to extend existing data-driven pruning methods without requiring explicit attribute information.
|
| 24 |
+
|
| 25 |
+
We demonstrate the effectiveness of our approach by pruning classifiers using two different pruning approaches for the CelebA [9] and Fitzpatrick 17k [10] datasets. Our results show that the performance weighted loss function can enable existing pruning methods to prune neural networks without significantly increasing model bias.
|
| 26 |
+
|
| 27 |
+
§ 2 RELATED WORK
|
| 28 |
+
|
| 29 |
+
Many different pruning approaches have been proposed to reduce the size of CNNs while minimally impacting model accuracy. Pruning methods typically involve assigning a score to each parameter or group of parameters, removing parameters based on these scores and retraining the newly pruned network to recover lost accuracy [2].
|
| 30 |
+
|
| 31 |
+
The procedure by which parameters are identified to be pruned is the primary differentiator between pruning methods. There are a wide variety of scoring approaches used to identify parameters that are unimportant or redundant and can be removed from the network. Many approaches use parameter magnitudes to identify parameters to prune [11, 12]. Other approaches use gradient information [13], Taylor estimates of parameter importance [14, 15, 16] and statistical properties of future layers [17]. Some approaches involve learning the scores via parameters that control the flow of information through the network [18, 19].
|
| 32 |
+
|
| 33 |
+
However, almost all novel pruning approaches focus on the overall accuracy of the model after pruning. There are few pruning approaches that aim to improve or maintain the fairness of a pruned model. Hooker et al. [5] propose auditing samples affected by model compression, called Compression Identified Exemplars, as an approach for identifying and managing the negative effects of model compression. Wu et al. [20] propose Fairprune, a method for improving model bias using pruning. Instead of seeking to compress a model, Fairprune prunes parameters using a saliency metric to increase model fairness [20]. Xu and Hu [21] propose the use of knowledge distillation and pruning to reduce bias in natural language models. Joseph et al. [22] propose a multi-part loss function intended to improve the alignment between the predictions of the original and pruned models. They demonstrate that their method can have beneficial effects for fairness between classes.
|
| 34 |
+
|
| 35 |
+
§ 3 METHOD
|
| 36 |
+
|
| 37 |
+
§ 3.1 MOTIVATION
|
| 38 |
+
|
| 39 |
+
In the unfair pruning situation described by Hooker et al. [5], model performance was more significantly impacted for certain sample subgroups. The highly impacted subgroups were characterized by poor representation in the training data and worse subgroup performance by the original model when compared to unimpacted groups. The performance decrement induced by the pruning process disproportionately impacts subgroups which are underrepresented and poorly classified.
|
| 40 |
+
|
| 41 |
+
To rectify this inequality, we can design a pruning process that prioritizes maintaining the performance of samples from the impacted subgroups. However, we do not need to develop a new pruning method from scratch to achieve this objective. Many existing pruning methods use data to identify which model parameters should be removed. Some methods use parameters learned via a loss minimization process, whereas others use values derived from gradients calculated with respect to a loss function. By modifying the loss function to prioritize samples from impacted subgroups, we can boost the fairness of existing pruning methods.
|
| 42 |
+
|
| 43 |
+
§ 3.2 THE PERFORMANCE WEIGHTED LOSS FUNCTION
|
| 44 |
+
|
| 45 |
+
We make two different modifications to the standard cross-entropy loss function to transform it into the performance weighted loss function (PW loss). We first apply sample weighting to ensure that samples from impacted groups have a larger contribution to the loss function. We then transform the sample labels to ensure that we are not reinforcing undesirable model behaviours.
|
| 46 |
+
|
| 47 |
+
As the attribute information required to identify impacted subgroups is not always readily accessible, our weighting scheme does not depend on any external information. We instead use the output of the original model to determine each sample weight. We assign larger weights to samples that the original model was not able to confidently classify. The weight assigned to the $i$ th data sample, ${w}_{i}$ , is given by the following equation:
|
| 48 |
+
|
| 49 |
+
$$
|
| 50 |
+
{w}_{i} = \theta + {\left( 1 - {\widehat{y}}_{i}\right) }^{\gamma } \tag{1}
|
| 51 |
+
$$
|
| 52 |
+
|
| 53 |
+
where ${\widehat{y}}_{i}$ is the predicted probability given by the original model for the sample’s true class, $\theta \in \left\lbrack {0,1}\right\rbrack$ is the minimum weight value and $\gamma \geq 0$ controls the shape of the relation between ${\widehat{y}}_{i}$ and ${w}_{i}$ .
|
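The weighting scheme of Equation 1 is straightforward to implement. Below is a minimal PyTorch sketch; the function name and the $\theta = 0.1$, $\gamma = 2$ defaults are illustrative choices, not values taken from the paper:

```python
import torch

def sample_weights(y_hat_true_class, theta=0.1, gamma=2.0):
    """Eq. (1): w_i = theta + (1 - y_hat_i)^gamma.

    y_hat_true_class: predicted probability of each sample's true class
    under the *original* (unpruned) model.  theta sets a minimum weight;
    gamma shapes how sharply poorly classified samples are up-weighted.
    (Defaults are hypothetical, for illustration only.)
    """
    return theta + (1.0 - y_hat_true_class) ** gamma
```

A confidently correct sample ($\widehat{y}_{i} = 1$) receives only the floor weight $\theta$, while a sample the original model is unsure about receives a weight approaching $\theta + 1$.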
| 54 |
+
|
| 55 |
+
We also emphasize the model performance through the use of corrected soft-labels in the cross-entropy function. Rather than using the true labels of each sample, we use the output of the original model for the loss function in the pruning process. Without this change, the preservation of an originally poorly classified sample's prediction probability would result in a greater loss value than the preservation of an originally well classified sample's prediction probability. The use of true labels implicitly prioritizes the preservation of model performance for samples that have predictions closer to their true labels. Using the model output as soft-labels alleviates this implicit prioritization.
|
| 56 |
+
|
| 57 |
+
However, as we are assigning higher weights to samples that are poorly classified by the original model while also using the original model's output as our labels, we would consequently assign the highest weights to incorrect labels. To avoid emphasizing incorrect behaviours we correct the soft-labels. The corrected soft-label, ${\widehat{\mathbf{y}}}_{i}^{ * }$ , is defined as:
|
| 58 |
+
|
| 59 |
+
$$
|
| 60 |
+
{\widehat{\mathbf{y}}}_{i}^{ * } = \left\{ \begin{array}{ll} {\widehat{\mathbf{y}}}_{i} & \text{ if }{\widehat{C}}_{i} = {C}_{i} \\ {\mathbf{y}}_{i} & \text{ otherwise } \end{array}\right. \tag{2}
|
| 61 |
+
$$
|
| 62 |
+
|
| 63 |
+
where ${\widehat{\mathbf{y}}}_{i}$ contains the prediction probabilities derived from the model output for the $i$ th sample, ${\mathbf{y}}_{i}$ is the true label vector of the $i$ th sample, ${\widehat{C}}_{i}$ is the predicted class of the $i$ th sample and ${C}_{i}$ is the true class of the $i$ th sample. The corrected soft-label takes on the value of the model’s prediction probabilities when the prediction is correct and the true label when the prediction is incorrect.
|
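Equation 2 amounts to a per-sample switch between the original model's probabilities and the one-hot true label. A minimal PyTorch sketch (the function name is hypothetical):

```python
import torch

def corrected_soft_labels(probs_orig, y_true):
    """Eq. (2): use the original model's probabilities as soft-labels
    when its prediction is correct, and fall back to the one-hot true
    label when it is wrong, so confident mistakes are not reinforced.

    probs_orig: (N, C) prediction probabilities of the original model
    y_true:     (N,) integer class labels
    """
    n_classes = probs_orig.shape[1]
    one_hot = torch.nn.functional.one_hot(y_true, n_classes).to(probs_orig.dtype)
    correct = probs_orig.argmax(dim=1) == y_true   # \hat{C}_i == C_i
    return torch.where(correct.unsqueeze(1), probs_orig, one_hot)
```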
| 64 |
+
|
| 65 |
+
By the application of the performance weighted scheme and corrected soft-labels onto the standard cross-entropy function, the performance weighted loss function, ${\mathcal{L}}_{PW}$ , is defined by:
|
| 66 |
+
|
| 67 |
+
$$
|
| 68 |
+
{\mathcal{L}}_{PW} = \mathop{\sum }\limits_{{i = 1}}^{N}{w}_{i}{l}_{CE}\left( {{\widehat{\mathbf{y}}}_{i}^{ * },{\widehat{\mathbf{y}}}_{i}^{\prime }}\right) \tag{3}
|
| 69 |
+
$$
|
| 70 |
+
|
| 71 |
+
where ${\widehat{\mathbf{y}}}_{i}^{\prime }$ contains the prediction probabilities derived from the model output for the $i$ th sample after pruning, ${l}_{CE}\left( {{\widehat{\mathbf{y}}}_{i}^{ * },{\widehat{\mathbf{y}}}_{i}^{\prime }}\right)$ is the cross-entropy between the corrected soft-label and the prediction probabilities of the pruned model for the $i$ th sample, and $N$ is the number of samples in the batch.
|
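Putting Equations 1 through 3 together, the full PW loss can be sketched as follows in PyTorch. The function name and the $\theta$/$\gamma$ defaults are illustrative; the paper does not publish reference code, so this is one plausible implementation, not the authors' own:

```python
import torch

def pw_loss(probs_pruned, probs_orig, y_true, theta=0.1, gamma=2.0):
    """Eq. (3): performance weighted cross-entropy between the corrected
    soft-labels of the original model and the pruned model's output.

    probs_pruned, probs_orig: (N, C) prediction probabilities;
    y_true: (N,) integer labels.  theta/gamma defaults are hypothetical.
    """
    n_classes = probs_orig.shape[1]
    one_hot = torch.nn.functional.one_hot(y_true, n_classes).to(probs_orig.dtype)
    correct = probs_orig.argmax(dim=1) == y_true
    targets = torch.where(correct.unsqueeze(1), probs_orig, one_hot)   # eq. (2)
    y_hat_true = probs_orig.gather(1, y_true.unsqueeze(1)).squeeze(1)
    w = theta + (1.0 - y_hat_true) ** gamma                            # eq. (1)
    ce = -(targets * torch.log(probs_pruned.clamp_min(1e-12))).sum(dim=1)
    return (w * ce).sum()                                              # eq. (3)
```

A pruned model whose output matches the original's (correct) predictions incurs a much smaller loss than one that flips them.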
| 72 |
+
|
| 73 |
+
By using this loss function with existing data-driven pruning methods, we can reduce the bias exaggerating effect of pruning by emphasizing samples that are more likely to be negatively affected by pruning.
|
| 74 |
+
|
| 75 |
+
§ 4 EXPERIMENTS
|
| 76 |
+
|
| 77 |
+
§ 4.1 EXPERIMENTAL SET-UP
|
| 78 |
+
|
| 79 |
+
We applied the PW loss to two different pruning methods. The first method is AutoBot [18], an accuracy-preserving pruning method that uses trainable bottleneck parameters that limit the flow of information through the model. The second method uses an importance metric derived from the Taylor expansion of the loss function [14]. In both of our implementations, we pruned whole convolutional filters rather than individual neurons. As pruned filters can be fully removed from the model, rather than being set to zero, filter pruning is a simple method for directly reducing the FLOPS of a model.
|
| 80 |
+
|
| 81 |
+
In the AutoBot method, the bottlenecks are optimized by minimizing a loss function that includes the cross-entropy between the original and pruned model outputs, as well as terms that encourage the bottlenecks to limit information moving through the model, achieving a target number of FLOPS [18]. We applied the performance weighted loss function to the method by replacing the cross-entropy term in the loss function with the performance weighted loss function. Additionally, we also used the performance weighted loss function when retraining the model after pruning.
|
| 82 |
+
|
| 83 |
+
The importance metric of the Taylor expansion method is formed using the gradient of the loss function with respect to each feature map and the value of each feature map [14]. This method alternates between training the network and pruning a filter. In our implementation, a filter is pruned every five iterations. We applied the performance weighted loss function by replacing the loss functions used in the gradient calculation and model training with the performance weighted loss function. Once again, we also used the performance weighted loss function when retraining the model after pruning.
|
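One common variant of this Taylor importance criterion scores each filter by the absolute mean of each feature map multiplied by its gradient. A simplified sketch, assuming the feature maps and their gradients have already been captured (e.g. via forward/backward hooks), is given below; the exact normalization used in [14] may differ:

```python
import torch

def taylor_filter_importance(activations, grads):
    """First-order Taylor importance score per convolutional filter:
    |mean over batch and spatial dims of activation * gradient|.

    activations: (N, C, H, W) feature maps of a conv layer
    grads:       (N, C, H, W) gradients of the loss w.r.t. those maps
    Returns a (C,) tensor; low-scoring filters are pruning candidates.
    """
    contrib = (activations * grads).mean(dim=(0, 2, 3))
    return contrib.abs()
```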
| 84 |
+
|
| 85 |
+
We also evaluated a random pruning method in which filters are selected and pruned from the network until only the desired number of FLOPS remain. We use this method as a reference.
|
| 86 |
+
|
| 87 |
+
We implemented the methods using the PyTorch library [23]. The methods were implemented as three-step pipelines in which the model is first pseudo-pruned by setting parameters to zero, fully pruned using the Torch-Pruning library [24] and retrained. Pseudo-pruning allows for fast pruning during the pruning process, while the full pruning step removes the unused parameters, reducing the number of operations required for prediction. Due to dependencies between parameters introduced by structures such as residual layers, the achieved theoretical speedup often slightly differs from the target theoretical speedup. All hyperparameters for the pruning methods were selected using a hold-out validation set. Hyperparameters were selected without the PW loss applied and were used for both the unmodified and PW loss method variants. We repeated each experiment three times. All figures displaying model performance after pruning show the average over all trials. Trials that produced degenerate models that only predict a single class were excluded.
|
| 88 |
+
|
| 89 |
+
§ 4.1.1 METRICS
|
| 90 |
+
|
| 91 |
+
Our primary concern is the degradation of a model's behaviour towards different subgroups due to pruning. We therefore evaluated the models by comparing the change in the areas under the receiver operating characteristic curves (ROC-AUC) for various subgroups at five different degrees of pruning. As it is a threshold-agnostic performance metric, the ROC-AUC is a good measure of the model's separability for a subgroup [25]. For non-binary classification we used the one-vs-one ROC-AUC.
|
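A subgroup ROC-AUC comparison of this kind can be sketched in a few lines of numpy for the binary case, using the Mann-Whitney formulation of the AUC (in practice a library routine such as scikit-learn's `roc_auc_score` would typically be used; the function names here are illustrative):

```python
import numpy as np

def binary_auc(y, p):
    """Binary ROC-AUC: probability that a random positive sample is
    scored above a random negative one (ties count half)."""
    diff = p[y == 1][:, None] - p[y == 0][None, :]
    return float((diff > 0).mean() + 0.5 * (diff == 0).mean())

def subgroup_auc_change(y_true, p_orig, p_pruned, group):
    """Change in ROC-AUC per subgroup after pruning.

    y_true: (N,) 0/1 labels; p_orig, p_pruned: (N,) positive-class
    probabilities before/after pruning; group: (N,) subgroup ids
    (e.g. male/non-male or skin-tone bins)."""
    return {g: binary_auc(y_true[group == g], p_pruned[group == g])
               - binary_auc(y_true[group == g], p_orig[group == g])
            for g in np.unique(group)}
```

A large negative delta for one subgroup only is exactly the divergence the PW loss is meant to prevent.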
| 92 |
+
|
| 93 |
+
We measured the degree to which a model is pruned using the theoretical speedup, defined as the FLOPS of the original model divided by the FLOPS of the pruned model.
|
| 94 |
+
|
| 95 |
+
§ 4.2 EVALUATING FAIRNESS AND PERFORMANCE
|
| 96 |
+
|
| 97 |
+
All methods were tested with and without the PW loss on two different classification tasks.
|
| 98 |
+
|
| 99 |
+
Our first task was the celebrity face classification task using the CelebA dataset [9] as outlined by Hooker et al. [5], in which a model is trained to identify faces as blonde or non-blonde. The CelebA dataset contains over 200,000 images of celebrity faces with various annotations. While blonde non-male samples make up 14.05% of the training data, blonde male samples make up only 0.85% of the training data. We used the provided data splits, with ${80}\%$ of the available data being used for training and the remaining data split evenly between validation and testing.
|
| 100 |
+
|
| 101 |
+
Our second task is the skin lesion classification task using the Fitzpatrick17k dataset [10]. The Fitzpatrick 17k dataset consists of 16577 images of skin conditions. We trained our models to classify the samples as normal, benign or malignant. Due to missing and invalid images we were only able to use 16526 images. Each sample in the dataset is assigned a Fitzpatrick score that categorizes the skin tone of the sample. We trained our models on only samples with light skin tone scores of 1 or 2, and evaluated the model on medium skin tone scores of 3 or 4 as well as dark skin tone scores of 5 or 6. We used a random 25% of the medium and dark skin tones as a validation set with the remainder used as a test set.
|
| 102 |
+
|
| 103 |
+
§ 4.2.1 PRUNING THE CELEBA MODELS
|
| 104 |
+
|
| 105 |
+
We trained a Resnet-18 [26] model and a VGG-16 [27] model for the CelebA task. The ROC-AUCs for the male and non-male subgroups of the Resnet-18 model were 0.9639 and 0.9794 respectively. The ROC-AUCs for the male and non-male subgroups of the VGG-16 model were 0.9679 and 0.9825 respectively. Both models were pruned using target theoretical speedups of 16, 32, 64, 128 and 256.
|
| 106 |
+
|
| 107 |
+
The change in ROC-AUC for all tested pruning methods for the Resnet-18 and VGG-16 models can be found in Figure 1. All methods were able to significantly reduce the size of both models, but most of the results without performance weighting exhibited divergent performance between the male and non-male subgroups as the theoretical speedup increases. Performance weighting was highly effective when pruning the Resnet-18 model for both the AutoBot and Taylor pruning methods.
|
| 108 |
+
|
| 109 |
+
|
| 110 |
+
|
| 111 |
+
Figure 1: Mean pruning performance with Resnet-18 and VGG-16 models with CelebA dataset.
|
| 112 |
+
|
| 113 |
+
We see an increase in ROC-AUC at all tested theoretical speedups for both the male and non-male subgroups. The increase for the male subgroup is substantial and the subgroup ROC-AUC scores no longer diverge as the theoretical speedup increases.
|
| 114 |
+
|
| 115 |
+
We see similar improvements when performance weighting is applied to the AutoBot method for the VGG-16 model; however, the improvements are only substantial at the lowest theoretical speedups. We do not see improvements when performance weighting is applied to the Taylor method. This is likely because the Taylor method did not exhibit significantly divergent performance for the VGG-16 model in the first place.
|
| 116 |
+
|
| 117 |
+
§ 4.2.2 PRUNING THE FITZPATRICK17K MODELS
|
| 118 |
+
|
| 119 |
+
We trained a Resnet-34 [26] model and an EfficientNet-V2 Medium [28] model for the Fitzpatrick17k task. The ROC-AUCs for the medium and dark subgroups of the Resnet-34 model were 0.8190 and 0.7329 respectively. The ROC-AUCs for the medium and dark subgroups of the EfficientNet model were 0.8516 and 0.7524 respectively.
|
| 120 |
+
|
| 121 |
+
Despite a bias against dark skin tones existing in the original models, we do not see divergent ROC-AUC scores as the theoretical speedup increases. The medium skin tone subgroup actually saw greater changes in ROC-AUC due to pruning. We see only slight benefits from using performance weighting with the Fitzpatrick17k models. Performance weighting slightly improved performance after pruning with the AutoBot method for the ResNet-34 model and, at lower theoretical speedups, for the EfficientNet model. It had negligible or detrimental effects for Taylor pruning with both models.
|
| 122 |
+
|
| 123 |
+
These results indicate that performance weighting is not an appropriate solution for all datasets and models that exhibit bias. The lack of an increasing performance difference between subgroups may indicate that the pruning process was not introducing additional biases in the Fitzpatrick 17k models. This is in contrast to the CelebA models for which the initial bias was small but grew due to pruning. Performance weighting may therefore only mitigate biases that are introduced from the pruning process. It will not rectify biases that exist in the model before pruning.
|
| 124 |
+
|
| 125 |
+
§ 4.3 CONDITIONS FOR BIAS
|
| 126 |
+
|
| 127 |
+
From our results in Section 4.2, we can see that utilizing the PW loss is not necessary in all circumstances. The loss appeared to be more beneficial for models which saw increasing differences in performance between subgroups as the theoretical speedup increased.
|
| 128 |
+
|
| 129 |
+
|
| 130 |
+
|
| 131 |
+
Figure 2: Mean pruning performance with Resnet-34 and EfficientNet V2 Med. models with Fitzpatrick 17k dataset.
|
| 132 |
+
|
| 133 |
+
To understand the properties of a dataset that would necessitate the use of the PW loss, we created three artificial datasets from the CelebA dataset by selecting subsets of the training data. The first subset was formed using 3.41% of the available training data such that it was fully balanced, containing an equal number of male and non-male samples as well as an equal number of blonde and non-blonde samples. The second and third subsets were formed by adding additional samples to the first subset, altering the class or gender balance. The second subset contained an equal number of blonde and non-blonde samples, but five times as many non-male samples as male samples. The third subset contained an equal number of male and non-male samples, but five times as many non-blonde samples as blonde samples. The entire test set was used to evaluate all subsets.
|
| 134 |
+
|
| 135 |
+
A ResNet-18 model was trained using each subset. The ROC-AUCs for the male subgroup are 0.9562, 0.9479 and 0.9183 for the first, second and third subsets respectively. The ROC-AUCs for the non-male subgroup are 0.9713, 0.9732 and 0.9580 for the first, second and third subsets respectively. The models were pruned using the AutoBot and Taylor methods with target theoretical speedups of 8, 32 and 128. The performance after pruning for these models can be found in Figure 3.
|
| 136 |
+
|
| 137 |
+
In the results using the fully balanced subset, we do see a divergence in subgroup performance for both methods, but the divergence is smaller than was seen when the full dataset was used. In the results with the additional non-male samples, we see an increase in performance for all model/method combinations. For the AutoBot results, the increase is greater for non-male samples than it is for male samples. We see a similar increase in performance when we look at the subset with the additional non-blonde samples. We do see additional instability in the Taylor results, but there are no clear findings with respect to differences in performance between subgroups. A greater decrease in performance was seen for male samples for all model/method combinations, including those that were trained on data with a balanced gender split. These results indicate that the dataset composition influences the fairness of pruning results, but it does not fully explain it.
|
| 138 |
+
|
| 139 |
+
§ 4.4 ABLATION
|
| 140 |
+
|
| 141 |
+
To measure the effects of the components of the PW loss independently, we pruned our ResNet-18 CelebA model using the AutoBot method with only the corrected soft-labels and with only the weighting scheme described in Equation 1. We applied the modifications to only the pruning process, and to both the pruning and retraining processes.
|
| 142 |
+
|
| 143 |
+
|
| 144 |
+
|
| 145 |
+
Figure 3: Pruning performance with ResNet-18 models trained on subsets of CelebA dataset with alternative class and gender balances.
|
| 146 |
+
|
| 147 |
+
|
| 148 |
+
|
| 149 |
+
Figure 4: Pruning performance with ResNet-18 models with CelebA dataset when elements of PW loss are applied independently to the pruning process (left), and to the pruning process as well as the post-prune retraining process (right).
|
| 150 |
+
|
| 151 |
+
The ablation results can be found in Figure 4. Both modifications were more effective when applied to both the pruning and retraining processes, indicating that simply modifying the process by which parameters are selected for pruning is insufficient to mitigate the effects of bias. Furthermore, the effect of using corrected soft-labels was larger than the effect of using our proposed weighting scheme. While both changes boosted performance for the male subgroup when applied to both pruning and retraining, the effect of the corrected soft-labels was almost as large as the effect of the full performance weighting method. The full method did demonstrate less bias at a target theoretical speedup of 16. Furthermore, as the AutoBot method already uses the outputs of the original model in its loss function, the improvement seen when the corrected soft-labels were only used for pruning can solely be attributed to the correction of the model outputs.
|
| 152 |
+
|
| 153 |
+
Unlike our proposed weighting scheme, the use of corrected soft-labels does not involve the selection of any parameters. In situations in which parameter selection is not possible, the use of corrected soft-labels may be a simple yet useful method for reducing the effects of algorithmic bias in pruning.
|
| 154 |
+
|
| 155 |
+
§ 5 CONCLUSION
|
| 156 |
+
|
| 157 |
+
In this paper we demonstrate how model pruning can exacerbate biases in models and present the performance weighted loss function as a novel method for mitigating this effect. The performance weighted loss function is a simple modification that can be applied to any pruning method that uses the cross-entropy loss. Our experimental results indicate that while the performance weighted loss function does not rectify model biases, it can help prevent those biases from becoming exaggerated by the pruning process. The performance weighted loss function is a useful tool for practitioners who seek to compress existing models without introducing new fairness concerns.
|
NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/YzPaQcK2Ko4/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,205 @@
|
| 1 |
+
# On the Feasibility of Compressing Certifiably Robust Neural Networks
|
| 2 |
+
|
| 3 |
+
Anonymous Author(s)
|
| 4 |
+
|
| 5 |
+
Affiliation
|
| 6 |
+
|
| 7 |
+
Address
|
| 8 |
+
|
| 9 |
+
email
|
| 10 |
+
|
| 11 |
+
## Abstract
|
| 12 |
+
|
| 13 |
+
Knowledge distillation is a popular approach to compress high-performance neural networks for use in resource-constrained environments. However, the threat of adversarial machine learning poses the question: Is it possible to compress adversarially robust networks and achieve similar or better adversarial robustness as the original network? In this paper, we explore this question with respect to certifiable robustness defenses, in which the defense establishes a formal robustness guarantee irrespective of the adversarial attack methodology. We present our preliminary findings answering two main questions: 1) Is the traditional knowledge distillation sufficient to compress certifiably robust neural networks? and 2) What aspects of the transfer process can we modify to improve the compression effectiveness? Our work represents the first study of the interaction between machine learning model compression and certifiable robustness.
|
| 14 |
+
|
| 15 |
+
## 1 Introduction
|
| 16 |
+
|
| 17 |
+
The existence of adversarial inputs [8, 20], i.e., imperceptibly perturbed inputs that reliably cause erroneous outputs, has heightened concerns regarding the use of neural networks in sensitive real-world settings. In response, several methods have been proposed to harden neural networks against such inputs, enhancing their reliability in adversarial environments [15, 13, 19, 5]. One such category of methods focuses on computing the size of the largest neighborhood around a given input within which a classifier's output remains constant. Such a classifier is said to be certifiably robust for the given input inside this neighborhood. Since these methods provide a guarantee of robustness, they are highly desirable in safety- and privacy-critical applications such as payment systems, access control systems, self-driving cars, and security surveillance [14]. However, the expanded usage of neural networks in resource-limited computing platforms, such as IoT systems, has introduced a new challenge in this space: when training to generalize over adversarial data rather than the standard data distribution, larger networks are necessary [4, 16], making robust networks more resource-demanding than their non-robust counterparts.
|
| 18 |
+
|
| 19 |
+
Knowledge distillation (KD) [3, 9] is a technique which uses a teacher-student training pipeline to compress the performance of larger networks into a smaller architecture. On the standard (i.e., non-adversarial) classification task, KD improves the performance of the smaller student network and often results in performance comparable to the large teacher network. Thus, a question arises: can knowledge distillation be used in an adversarial context to compress highly robust neural networks? Several works have explored this question with respect to empirical robustness, in which adversarial robustness is measured with respect to a specific attack algorithm [7, 1, 22, 23]. We note, though, that no work has explored this question with respect to certifiable robustness, which is more desirable given that it establishes a security guarantee irrespective of the attack methodology. To address this gap in the literature, we perform the first study on compressing certifiably robust networks.
|
| 20 |
+
|
| 21 |
+
In this paper, we make the following contributions:
|
| 22 |
+
|
| 23 |
+
- We study the various strategies used for KD in the context of certifiable robustness. We discover shortfalls in the naive application of KD in these settings.
|
| 24 |
+
|
| 25 |
+
- We propose a more effective strategy for distilling certified robustness into small networks, which allows us to bridge the gap between the robustness of the student and the teacher.
|
| 26 |
+
|
| 27 |
+
- We identify that distilling certifiable robustness imposes stricter requirements than distilling standard performance or empirical robustness. Specifically, a smaller size gap between the student and the teacher networks might be required for effective distillation.
## 2 Background and Related Work

In this work, we study the certifiable robustness of neural-network-based image classifiers trained using knowledge distillation (KD). We use this section to define our problem, provide relevant background, and discuss related prior work.

### 2.1 Preliminaries
Consider a neural network classifier $f$ parameterized by $\theta$ (denoted as ${f}_{\theta }$ ) that is trained to map a given image $x \in {\mathbb{R}}^{d}$ to a set of discrete labels $\mathcal{Y}$ using a set of i.i.d. samples $\mathcal{S} = \left\{ {\left( {{x}_{1},{y}_{1}}\right) ,\left( {{x}_{2},{y}_{2}}\right) ,\cdots ,\left( {{x}_{n},{y}_{n}}\right) }\right\}$ drawn from some data distribution. The output of the classifier can be written as ${f}_{\theta }\left( x\right) = {\operatorname{argmax}}_{c \in \mathcal{Y}}{z}_{\theta }^{c}\left( x\right)$ . Here ${z}_{\theta }\left( x\right)$ is the softmax output of the classifier and ${z}_{\theta }^{c}\left( x\right)$ denotes the probability that image $x$ belongs to class $c$ . Training the classifier involves minimizing the cross-entropy loss on the standard data distribution, which is approximated through $\mathcal{S}$ .
### 2.2 Adversarial Robustness

In the ${\ell }_{p}$ -norm space, an adversarial perturbation is defined as any perturbation $\delta \in {\mathbb{R}}^{d}$ with $\parallel \delta {\parallel }_{p} \leq \epsilon$ that an adversary can use to change the classifier’s output, i.e., ${f}_{\theta }\left( x\right) \neq {f}_{\theta }\left( {x + \delta }\right)$ . Here, $\epsilon$ defines the adversarial perturbation budget in terms of the maximum allowed magnitude of the perturbation vector.
The adversarial robustness of a neural network is often characterized by its empirical or certified robustness. The empirical adversarial robustness of a neural network is the accuracy of the network against adversarial samples generated by a given attack algorithm. Many early proposed defenses that were thought to be effective based on their reported empirical robustness were later found to have poor true robustness [2]. In contrast, the certified adversarial robustness of a neural network represents the true worst-case performance and is measured by the lower bound accuracy of the network against all adversarial samples within a given $\epsilon$ neighborhood.
#### 2.2.1 Certified Robustness

The certified robustness of a network can be measured by its lower bound accuracy within a predetermined neighborhood. Alternatively, it can be measured based on the radius of the largest neighborhood within which a classifier’s output remains (correct and) constant. For a given input $x$ , the classifier ${f}_{\theta }$ is said to be certifiably robust if ${f}_{\theta }\left( x\right)$ is provably constant within some large neighborhood around $x$ . The radius of this neighborhood (or robust radius) is defined using the ${\ell }_{p}$ -norm metric as follows:
$$
R\left( {{f}_{\theta };x, y}\right) = \begin{cases} \mathop{\inf }\limits_{{{f}_{\theta }\left( {x}^{\prime }\right) \neq {f}_{\theta }\left( x\right) }}{\begin{Vmatrix}{x}^{\prime } - x\end{Vmatrix}}_{p}, & \text{when }{f}_{\theta }\left( x\right) = y \\ 0, & \text{when }{f}_{\theta }\left( x\right) \neq y \end{cases} \tag{1}
$$
Intuitively, the robust radius $R\left( {{f}_{\theta };x, y}\right)$ establishes a region within which the classifier’s prediction remains constant, ensuring that an adversary with a budget $\epsilon \leq R$ cannot succeed. Therefore, training networks to maximize the robust radius would harden a classifier against manipulations of all sorts, including adversarial ones. However, as computing the robust radius of a neural network for a given input $x$ is NP-hard [12], recent certified training methods instead propose computing a lower bound of the robust radius, known as the certified radius.
Randomized Smoothing. Cohen et al. [6] presented a scalable method for creating certifiably robust image classifiers using randomized smoothing. This involves converting a given classifier (termed base classifier) into a smooth classifier.
Definition 2.1. For a given (base) classifier ${f}_{\theta }$ and $\sigma > 0$ , the corresponding smooth classifier ${g}_{\theta }$ is defined as follows:
$$
{g}_{\theta }\left( x\right) = \mathop{\operatorname{argmax}}\limits_{{c \in \mathcal{Y}}}{P}_{\eta \sim \mathcal{N}\left( {0,{\sigma }^{2}I}\right) }\left( {{f}_{\theta }\left( {x + \eta }\right) = c}\right) \tag{2}
$$
Simply put, ${g}_{\theta }$ returns the class $c$ , which has the highest probability mass under the Gaussian distribution $\mathcal{N}\left( {x,{\sigma }^{2}I}\right)$ . The authors provide theoretical proof for the certified robustness of a smooth classifier. This theoretical work can be summarized using Theorem 2.2.
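As an illustration, the smooth classifier of Definition 2.1 can be approximated by Monte Carlo sampling: draw Gaussian-perturbed copies of the input and take the majority vote of the base classifier. The sketch below is a minimal, hypothetical example (a toy one-dimensional base classifier, with function names of our own choosing); it is not the full certification procedure of Cohen et al. [6], which additionally bounds the class probabilities with confidence intervals:

```python
import random
from collections import Counter

def base_classifier(x):
    # Toy 1-D base classifier f: class 1 for positive inputs, class 0 otherwise.
    return 1 if x > 0 else 0

def smooth_classify(x, sigma=0.25, n_samples=1000, seed=0):
    # Monte Carlo estimate of g(x) = argmax_c P(f(x + eta) = c), eta ~ N(0, sigma^2).
    rng = random.Random(seed)
    votes = Counter(base_classifier(x + rng.gauss(0.0, sigma))
                    for _ in range(n_samples))
    return votes.most_common(1)[0][0]

# Far from the decision boundary, the noisy majority vote agrees with f(x).
print(smooth_classify(1.0))   # 1
print(smooth_classify(-1.0))  # 0
```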
Theorem 2.2. Let ${f}_{\theta } : {\mathbb{R}}^{d} \mapsto \mathcal{Y}$ be a classifier and ${g}_{\theta }$ be its smoothed version (as defined in Equation 2). For a given input $x \in {\mathbb{R}}^{d}$ and corresponding ground truth output $y \in \mathcal{Y}$ , if ${g}_{\theta }$ correctly classifies $x$ as $y$ such that
$$
{P}_{\eta }\left( {{f}_{\theta }\left( {x + \eta }\right) = y}\right) \geq \mathop{\max }\limits_{{{y}^{\prime } \neq y}}{P}_{\eta }\left( {{f}_{\theta }\left( {x + \eta }\right) = {y}^{\prime }}\right) \tag{3}
$$
then ${g}_{\theta }$ is provably robust at $x$ within the certified radius $R$ given by:
$$
{CR}\left( {{g}_{\theta };x, y}\right) = \frac{\sigma }{2}\left\lbrack {{\Phi }^{-1}\left( {{P}_{\eta }\left( {{f}_{\theta }\left( {x + \eta }\right) = y}\right) }\right) - {\Phi }^{-1}\left( {\mathop{\max }\limits_{{{y}^{\prime } \neq y}}{P}_{\eta }\left( {{f}_{\theta }\left( {x + \eta }\right) = {y}^{\prime }}\right) }\right) }\right\rbrack \tag{4}
$$
where $\Phi$ is the c.d.f. of the standard Gaussian distribution.

Furthermore, the authors demonstrate that training the base classifier to minimize cross-entropy loss on inputs perturbed with Gaussian noise (Gaussian data augmentation) increases the certified radius of the smooth classifier. More generally, improving the base classifier's robustness to Gaussian noise is a successful strategy for increasing the certified radius and has been utilized by several other prior works [18, 21, 11].
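Given estimates of the two class probabilities in Theorem 2.2, the certified radius of Equation 4 is a one-line computation. The sketch below uses Python's standard library for $\Phi^{-1}$; in practice, Cohen et al. [6] plug in confidence bounds on these probabilities rather than raw point estimates:

```python
from statistics import NormalDist

def certified_radius(p_top, p_runner_up, sigma):
    # Equation 4: CR = (sigma / 2) * (Phi^-1(p_top) - Phi^-1(p_runner_up)).
    phi_inv = NormalDist().inv_cdf
    return 0.5 * sigma * (phi_inv(p_top) - phi_inv(p_runner_up))

# A more confident smooth classifier certifies a larger l2 ball.
print(round(certified_radius(0.99, 0.01, sigma=0.25), 3))  # 0.582
print(round(certified_radius(0.60, 0.40, sigma=0.25), 3))  # 0.063
```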
### 2.3 Knowledge Distillation (KD)

For the standard classification task, small neural networks are able to learn classification functions similar to those of large ones through a process known as knowledge distillation [3, 9]. Traditional KD involves training the small network (student) to mimic the outputs of a much larger network (teacher) for a given task (e.g., classification). The student's training objective, also referred to as the distillation objective, is formalized as follows:
$$
\mathop{\min }\limits_{\theta }{\mathbb{E}}_{\left( {x, y}\right) \sim \mathcal{S}}\left\lbrack {\left( {1 - \alpha }\right) {\mathcal{L}}_{CE}\left( {{z}_{\theta }\left( x\right) , y}\right) + \alpha {t}^{2}{\mathcal{L}}_{M}\left( {{z}_{\theta }^{t}\left( x\right) ,{z}_{\phi }^{t}\left( x\right) }\right) }\right\rbrack \tag{5}
$$
where ${z}_{\theta }\left( x\right)$ and ${z}_{\phi }\left( x\right)$ are the softmax outputs of the student ${S}_{\theta }$ and teacher ${T}_{\phi }$ , respectively; ${\mathcal{L}}_{CE}$ is the cross-entropy loss; ${\mathcal{L}}_{M}$ is the "mimic loss" (e.g., KL-divergence or Euclidean distance); $\alpha$ is a hyperparameter used to weigh the two loss terms; and $t$ is the softmax temperature. The value of $\alpha$ is usually set to 1, implying that the student is solely trained to mimic the teacher and, in the process, learns to perform well on the task. Training with the supervision of a teacher improves the student's performance compared to training it independently, because the small student benefits from the inter-class relationships learned by the large teacher (which has higher modelling capacity). By distilling the knowledge of a large network into a small network, we essentially perform model compression, as the small network encodes the performance of the large network but with fewer parameters. This, in turn, allows for the use of high-performance networks in resource-restricted devices.
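For concreteness, the mimic term of Equation 5 with the common choice $\alpha = 1$ and a KL-divergence mimic loss can be sketched in a few lines of pure Python (variable names are our own; real implementations operate on logit tensors in a framework such as PyTorch):

```python
import math

def softmax(logits, t=1.0):
    # Temperature-scaled softmax; a larger t yields a softer distribution.
    exps = [math.exp(z / t) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_mimic_loss(student_logits, teacher_logits, t=4.0):
    # Equation 5 with alpha = 1: t^2 * KL(teacher || student) on softened outputs.
    p_t = softmax(teacher_logits, t)
    p_s = softmax(student_logits, t)
    kl = sum(p * math.log(p / q) for p, q in zip(p_t, p_s))
    return t * t * kl

# A student that reproduces the teacher's logits incurs zero mimic loss.
print(kd_mimic_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0]))  # 0.0
```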
### 2.4 Adversarial Robustness and Knowledge Distillation

Using the lessons learned from KD, several successful attempts have been made at improving the adversarial robustness of small networks by training them under the supervision of larger, robust networks. Goldblum et al. [7] propose Adversarially Robust Distillation (ARD), which combines KD with Adversarial Training [15]: the student is trained to match the teacher's outputs on adversarial inputs. ARD improves the robustness of small networks against gradient-based attacks relative to standalone adversarial training. Zi et al. [23] propose Robust Soft Label Adversarial Distillation (RSLAD), which improves upon ARD by using soft labels from a robust teacher rather than hard labels in all supervision loss terms. An important commonality between these prior works is that they limit their experimentation to adversarial training, which is an empirical robustness method. Therefore, the robustness claims made by them can potentially be invalidated by a future adversary.
Table 1: The mimic loss used in different distillation objectives proposed by prior works. ${z}_{\theta }$ and ${z}_{\phi }$ are softmax outputs of the student and teacher network respectively, and $t$ represents the temperature parameter. Note that since $\alpha$ is usually set to 1 (see Equation 5), we only report the loss terms that the student is actually trained with.

<table><tr><td>Method</td><td>${\mathcal{L}}_{\mathcal{M}}$</td></tr><tr><td>KD [3, 9]</td><td>KL-DIV $\left( {{z}_{\theta }^{t}\left( x\right) ,{z}_{\phi }^{t}\left( x\right) }\right) \;$ OR $\;{\begin{Vmatrix}{z}_{\theta }\left( x\right) - {z}_{\phi }\left( x\right) \end{Vmatrix}}_{2}$</td></tr><tr><td>ARD [7] ${}^{ * }$</td><td>KL-DIV $\left( {{z}_{\theta }^{t}\left( {x + \delta }\right) ,{z}_{\phi }^{t}\left( x\right) }\right)$</td></tr><tr><td>RSLAD [23]</td><td>$\operatorname{KL-Div}\left( {{z}_{\theta }\left( x\right) ,{z}_{\phi }\left( x\right) }\right) + \operatorname{KL-Div}\left( {{z}_{\theta }\left( {x + \delta }\right) ,{z}_{\phi }\left( x\right) }\right)$</td></tr></table>

* For $\delta$ we use Gaussian noise instead of adversarial noise.
## 3 Distilling Certified Robustness

In order to promote the deployment of safe machine learning models in resource-limited settings, it is important to study whether certifiably robust neural networks can be effectively compressed. Knowledge distillation (KD) is one of the most effective approaches for doing this. Therefore, in this section we examine the effectiveness of KD towards compressing certifiably robust neural networks. We begin by describing our experimental setup in Section 3.1. In Section 3.2, we study the effectiveness of existing distillation objectives in distilling certified robustness. In Section 3.3, we propose a distillation objective using the traditional KD strategy that improves the distillation process by addressing the shortcomings of existing objectives.
### 3.1 Experimental Setup

In our experiments, we focus on the certified robustness of image classifiers in the ${\ell }_{2}$ -space. Following the work by Cohen et al. [6], we use randomized smoothing to achieve certifiably robust classifiers. All our experiments are conducted using the CIFAR-10 dataset. To measure certified robustness, we follow prior works and report certified accuracy (i.e., the prediction accuracy of the smooth classifier) at different ${\ell }_{2}$ radii [6, 18, 11]. The certified accuracy at radius $= 0$ is equivalent to the prediction accuracy of the smooth classifier on clean inputs. Additionally, we report the average certified radius (ACR) computed over the entire test set [21]. Our code is implemented in PyTorch [17] and is publicly available at [REDACTED] ${}^{1}$
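The ACR metric can be read directly off Equation 1: misclassified points contribute a radius of zero, and the certified radii are averaged over the whole test set. A minimal sketch, using hypothetical per-sample results for illustration:

```python
def average_certified_radius(results):
    # results: list of (certified_radius, correctly_classified) pairs.
    # Misclassified samples contribute 0, per the robust-radius definition (Eq. 1).
    return sum(r if ok else 0.0 for r, ok in results) / len(results)

# Hypothetical per-sample certification outcomes:
per_sample = [(0.6, True), (0.4, True), (0.8, False), (0.2, True)]
print(round(average_certified_radius(per_sample), 3))  # 0.3
```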
### 3.2 Can existing distillation objectives be used to distill certified robustness?

Knowledge distillation is effectively a method for "transferring" the knowledge of one network to another. Traditionally, this is achieved by training one network to mimic the other using some sort of mimic loss (i.e., ${\mathcal{L}}_{\mathcal{M}}$ from Equation 5). Prior works on distilling adversarial robustness [7, 23, 22] also follow the traditional KD strategy, but propose different versions of ${\mathcal{L}}_{\mathcal{M}}$ (see Table 1). In this section, we evaluate whether these existing distillation objectives can be used to distill certified robustness. Note that since ARD and RSLAD were designed for adversarial training [15], they are not compatible with the certified robustness methods that we wish to study. To make them compatible, we use Gaussian noise in place of the adversarial noise term $\left( \delta \right)$ in their distillation objectives.
For experimentation, we use a ResNet-110 network as the teacher and train it on the CIFAR-10 dataset. To obtain non-trivial certified robustness for this network, we train it using the Gaussian data augmentation method proposed by Cohen et al. [6]. We then distill its robustness to a much smaller ResNet-20 network using the different existing distillation objectives. The results are summarized in Table 2. For comparison, we also report the student and teacher network's robustness when trained independently using Gaussian data augmentation. The first observation we make is that traditional distillation (KD) completely fails at distilling certified robustness as the student exhibits trivial ACR and certified accuracy for all values of $r$ . The student network trained with RSLAD appears to have non-trivial certified robustness, however, the distilled ResNet-20 network exhibits poorer robustness than the one trained independently. This implies that the RSLAD objective is also unsuitable for our use case. Only the ARD objective seems to be successful at distilling certified robustness as the distilled ResNet-20 exhibits higher robustness than a ResNet-20 trained independently.
---

${}^{1}$ To maintain anonymity, we have temporarily redacted the URL to the code repository.

${}^{2}$ For certification, we borrow code from Cohen et al. [6]: https://github.com/locuslab/smoothing

---
Table 2: Comparing the robustness of a student network trained using different variants of knowledge distillation. We denote the distillation process as "teacher $\xrightarrow[]{\text{method}}$ student". Among all the methods, only ARD is able to successfully distill certified robustness, presenting higher robustness than the independently-trained student.

<table><tr><td/><td>$\mathbf{{ACR}}$</td><td>0.00</td><td>0.25</td><td>0.50</td><td>0.75</td></tr><tr><td>RESNET-110</td><td>0.486</td><td>81.41</td><td>67.75</td><td>49.67</td><td>32.37</td></tr><tr><td>RESNET-20</td><td>0.451</td><td>79.62</td><td>63.78</td><td>45.65</td><td>28.01</td></tr><tr><td>RESNET-110 $\xrightarrow[]{\mathrm{{KD}}}$ RESNET-20</td><td>0.090</td><td>10.93</td><td>10.16</td><td>9.86</td><td>9.03</td></tr><tr><td>RESNET-110 $\xrightarrow[]{\text{ RSLAD }}$ RESNET-20</td><td>0.431</td><td>77.46</td><td>61.98</td><td>43.57</td><td>25.62</td></tr><tr><td>RESNET-110 $\xrightarrow[]{\mathrm{{ARD}}}$ RESNET-20</td><td>0.456</td><td>76.50</td><td>62.80</td><td>46.87</td><td>30.29</td></tr></table>
Table 3: Evaluating the effectiveness of CRD in distilling certified robustness. CRD performs better than the distillation objectives proposed by prior works. Furthermore, the CRD-trained student exhibits robustness comparable to its teacher (Table 2, ${1}^{st}$ row).

<table><tr><td/><td>$\mathbf{{ACR}}$</td><td>0.00</td><td>0.25</td><td>0.50</td><td>0.75</td></tr><tr><td>RESNET-110 $\xrightarrow[]{\mathrm{{CRD}}}$ RESNET-20</td><td>0.483</td><td>80.06</td><td>66.19</td><td>49.62</td><td>32.74</td></tr></table>
### 3.3 Adapting the traditional KD strategy to distill certified robustness

From the results in Table 2, we observe that even for the best performing objective (i.e., ARD), there exists a gap between the teacher's and the student's robustness. In this section, we explore whether it is possible to bridge this gap while using the traditional KD strategy of mimicking outputs. We note that the formulation of ${\mathcal{L}}_{M}$ used by prior works on distilling adversarial robustness was motivated by wanting the student to learn a similar output distribution for clean and adversarial inputs (generated using some attack). This motivation, however, does not translate to our use case very well. In the randomized smoothing paradigm, higher certified robustness comes from higher robustness to Gaussian noise. In fact, Jeong et al. [10] note that there is a direct correlation between the robustness of a smooth classifier and its prediction confidence (tied to the confidence of the base classifier on inputs perturbed with Gaussian noise). Based on this, we propose the following ${\mathcal{L}}_{M}$ , which is tailored for certified robustness distillation:
$$
\mathop{\min }\limits_{\theta }{\mathbb{E}}_{\left( {x, y}\right) \sim \mathcal{S};\delta \sim \mathcal{N}\left( {0,{\sigma }^{2}I}\right) }\left\lbrack {\left( {1 - \alpha }\right) {\mathcal{L}}_{CE}\left( {{z}_{\theta }\left( {x + \delta }\right) , y}\right) + \alpha {t}^{2}{\begin{Vmatrix}{z}_{\theta }^{t}\left( {x + \delta }\right) - {z}_{\phi }^{t}\left( {x + \delta }\right) \end{Vmatrix}}_{2}}\right\rbrack \tag{6}
$$
We refer to this distillation objective as Certified Robust Distillation (CRD). Simply put, we are training the student to mimic the teacher’s output not only at the given input $x$ , but also in the Gaussian neighborhood around it. The robustness of a ResNet-20 distilled from a ResNet-110 using CRD is reported in Table 3. Comparing with the results in Table 2, we observe that CRD is successful at bridging the gap between the robustness of the student and the teacher. This makes CRD a more successful objective for distilling certified robustness than objectives proposed by prior works.
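A single stochastic evaluation of the CRD mimic term in Equation 6 can be sketched as follows. This is pure Python for illustration only: `student_fn` and `teacher_fn` are hypothetical placeholders for networks mapping an input vector to logits, and in practice the loss is averaged over minibatches and noise draws:

```python
import math
import random

def softmax(logits, t=1.0):
    # Temperature-scaled softmax over a list of logits.
    exps = [math.exp(z / t) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def crd_mimic_loss(student_fn, teacher_fn, x, sigma=0.25, t=1.0, seed=0):
    # Equation 6 with alpha = 1: sample delta ~ N(0, sigma^2 I) once and compute
    # t^2 * || z_student(x + delta) - z_teacher(x + delta) ||_2.
    rng = random.Random(seed)
    noisy = [xi + rng.gauss(0.0, sigma) for xi in x]
    z_s = softmax(student_fn(noisy), t)
    z_t = softmax(teacher_fn(noisy), t)
    return t * t * math.sqrt(sum((a - b) ** 2 for a, b in zip(z_s, z_t)))

# If the student already matches the teacher, the loss is zero at any noise draw.
teacher = lambda v: [sum(v), -sum(v)]
print(crd_mimic_loss(teacher, teacher, [0.3, -0.1]))  # 0.0
```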
## 4 Limitations of CRD

In this section, we address the existing limitations of CRD. It is known that robustness training requires the network to learn more complicated functions than standard training [4, 16]. Therefore, we suspect that it might be more difficult to distill networks with high robustness than to distill networks with high standard performance (e.g., accuracy on the test set). To investigate this, we repeat the experiment from the previous section, this time training the teacher to be more robust by using training methods that outperform Gaussian data augmentation. Specifically, we use MACER [21] and SmoothMix [10]. The results are reported in Table 4. For comparison, we also report the robustness of student and teacher networks independently trained using MACER and SmoothMix. Overall, we observe that for all teacher training methods used in this section, the process of distillation is not as effective as it was in the previous section. In all cases, there is a large gap between the robustness of the student and the teacher networks. Furthermore, only in one case (i.e., MACER) do we observe that the distilled ResNet-20 has higher robustness than the independently trained ResNet-20.
Table 4: Evaluating the effectiveness of CRD in distilling certified robustness from ResNet-110 teachers with progressively higher robustness. It is harder for a student to mimic a more robust teacher.

<table><tr><td/><td>$\mathbf{{ACR}}$</td><td>0.00</td><td>0.25</td><td>0.50</td><td>0.75</td></tr><tr><td colspan="6">MACER [21]</td></tr><tr><td>RESNET-110</td><td>0.531</td><td>79.11</td><td>68.39</td><td>55.90</td><td>40.61</td></tr><tr><td>RESNET-20</td><td>0.507</td><td>76.44</td><td>65.81</td><td>52.87</td><td>38.75</td></tr><tr><td>RESNET-110 $\xrightarrow[]{\mathrm{{CRD}}}$ RESNET-20</td><td>0.508</td><td>78.30</td><td>66.80</td><td>53.15</td><td>37.75</td></tr></table>
<table><tr><td/><td>$\mathbf{{ACR}}$</td><td>0.00</td><td>0.25</td><td>0.50</td><td>0.75</td></tr><tr><td colspan="6">SmoothMix [10]</td></tr><tr><td>RESNET-110</td><td>0.550</td><td>76.89</td><td>68.25</td><td>57.42</td><td>46.26</td></tr><tr><td>RESNET-20</td><td>0.522</td><td>75.55</td><td>65.53</td><td>54.72</td><td>42.62</td></tr><tr><td>RESNET-110 $\xrightarrow[]{\mathrm{{CRD}}}$ RESNET-20</td><td>0.514</td><td>76.33</td><td>65.85</td><td>53.83</td><td>40.28</td></tr></table>
Table 5: Testing the network capacity limitation of CRD using SmoothMix [10]. Networks of larger sizes are required to effectively mimic teachers possessing high certified robustness.

<table><tr><td/><td>$\mathbf{{ACR}}$</td><td>0.00</td><td>0.25</td><td>0.50</td><td>0.75</td></tr><tr><td>RESNET-110</td><td>0.550</td><td>76.89</td><td>68.25</td><td>57.42</td><td>46.26</td></tr><tr><td>RESNET-32</td><td>0.537</td><td>76.44</td><td>67.17</td><td>56.19</td><td>44.00</td></tr><tr><td>RESNET-110 $\xrightarrow[]{\mathrm{{CRD}}}$ RESNET-32</td><td>0.530</td><td>76.81</td><td>67.57</td><td>55.52</td><td>42.48</td></tr><tr><td>RESNET-44</td><td>0.545</td><td>76.55</td><td>67.33</td><td>57.18</td><td>45.85</td></tr><tr><td>RESNET-110 $\xrightarrow[]{\mathrm{{CRD}}}$ RESNET-44</td><td>0.541</td><td>77.34</td><td>68.11</td><td>56.84</td><td>43.92</td></tr><tr><td>RESNET-56</td><td>0.545</td><td>77.01</td><td>68.17</td><td>56.89</td><td>45.05</td></tr><tr><td>RESNET-110 $\xrightarrow[]{\mathrm{{CRD}}}$ RESNET-56</td><td>0.547</td><td>77.60</td><td>68.24</td><td>57.72</td><td>44.97</td></tr></table>
We run additional experiments to further investigate this student network capacity limitation of CRD. Starting with a ResNet-110 teacher trained using SmoothMix, we distill it into networks of various sizes using CRD. The results for this experiment are reported in Table 5. We observe that as we increase the size of the student network, the effectiveness of the distillation process improves. For ResNet-56, which is about half the size of ResNet-110, we observe that CRD succeeds at achieving comparable robustness between the student and the teacher. The gap between the robustness of the student and the teacher gets progressively worse as we reduce the size of the student. These results further corroborate that CRD, in its current form, is unable to distill the complicated functions learnt by state-of-the-art robustness methods into networks below a certain size. This result is unlike what prior works have reported in the context of distilling both standard performance and adversarial robustness, where distillation can be successfully performed between student and teacher networks with larger size differences than the ones we use (e.g., WideResNet-34-10 to ResNet-20) [3, 7, 23].
## 5 Conclusion & Future Work
In this paper, we presented the first study of knowledge distillation (KD) in the context of certified robustness. We tested different existing distillation objectives that were designed to distill standard performance or (empirical) adversarial robustness in terms of how effective they are at distilling certified robustness. Based on these results, we proposed a distillation objective (CRD) tailored for distilling certified robustness. However, CRD suffers from a network capacity limitation which makes it impractical to use with state-of-the-art certified training methods; further research is needed to address this shortcoming. We believe our preliminary investigation will serve as a useful starting point for future works on compressing certifiably robust machine learning models.
## References
[1] Elahe Arani, Fahad Sarfraz, and Bahram Zonooz. Noise as a resource for learning in knowledge distillation. In IEEE/CVF Winter Conference on Applications of Computer Vision, 2021.

[2] Anish Athalye, Nicholas Carlini, and David A. Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In International Conference on Machine Learning, 2018.

[3] Lei Jimmy Ba and Rich Caruana. Do deep nets really need to be deep? In Advances in Neural Information Processing Systems, 2014.

[4] Sébastien Bubeck, Yin Tat Lee, Eric Price, and Ilya Razenshteyn. Adversarial examples from computational constraints. In International Conference on Machine Learning, 2019.

[5] Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian J. Goodfellow, Aleksander Madry, and Alexey Kurakin. On evaluating adversarial robustness. CoRR, abs/1902.06705, 2019.

[6] Jeremy Cohen, Elan Rosenfeld, and Zico Kolter. Certified adversarial robustness via randomized smoothing. In International Conference on Machine Learning, 2019.

[7] Micah Goldblum, Liam Fowl, Soheil Feizi, and Tom Goldstein. Adversarially robust distillation. In AAAI Conference on Artificial Intelligence, 2020.

[8] Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In International Conference on Learning Representations, 2014.

[9] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network, 2015. arXiv:1503.02531.

[10] Jongheon Jeong, Sejun Park, Minkyu Kim, Heung-Chang Lee, Do-Guk Kim, and Jinwoo Shin. SmoothMix: Training confidence-calibrated smoothed classifiers for certified robustness. In Advances in Neural Information Processing Systems, 2021.

[11] Jongheon Jeong and Jinwoo Shin. Consistency regularization for certified robustness of smoothed classifiers. In Advances in Neural Information Processing Systems, 2020.

[12] Guy Katz, Clark Barrett, David L Dill, Kyle Julian, and Mykel J Kochenderfer. Reluplex: An efficient SMT solver for verifying deep neural networks. In International Conference on Computer Aided Verification, pages 97-117. Springer, 2017.

[13] Mathias Lecuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel Hsu, and Suman Jana. Certified robustness to adversarial examples with differential privacy. In IEEE Symposium on Security and Privacy, 2019.

[14] Linyi Li, Xiangyu Qi, Tao Xie, and Bo Li. SoK: Certified robustness for deep neural networks. CoRR, abs/2009.04131, 2020.

[15] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations, 2018.

[16] Preetum Nakkiran. Adversarial robustness may be at odds with simplicity. arXiv preprint arXiv:1901.00532, 2019.

[17] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, 2019.

[18] Hadi Salman, Greg Yang, Jerry Li, Pengchuan Zhang, Huan Zhang, Ilya Razenshteyn, and Sébastien Bubeck. Provably robust deep learning via adversarially trained smoothed classifiers. In Advances in Neural Information Processing Systems, 2019.

[19] Lukas Schott, Jonas Rauber, Matthias Bethge, and Wieland Brendel. Towards the first adversarially robust neural network model on MNIST. In International Conference on Learning Representations, 2019.

[20] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In International Conference on Learning Representations, 2014.

[21] Runtian Zhai, Chen Dan, Di He, Huan Zhang, Boqing Gong, Pradeep Ravikumar, Cho-Jui Hsieh, and Liwei Wang. MACER: Attack-free and scalable robust training via maximizing certified radius. In International Conference on Learning Representations, 2020.

[22] Jianing Zhu, Jiangchao Yao, Bo Han, Jingfeng Zhang, Tongliang Liu, Gang Niu, Jingren Zhou, Jianliang Xu, and Hongxia Yang. Reliable adversarial distillation with unreliable teachers. arXiv preprint arXiv:2106.04928, 2021.

[23] Bojia Zi, Shihao Zhao, Xingjun Ma, and Yu-Gang Jiang. Revisiting adversarial robustness distillation: Robust soft labels make student better. In IEEE/CVF International Conference on Computer Vision, 2021.
§ ON THE FEASIBILITY OF COMPRESSING CERTIFIABLY ROBUST NEURAL NETWORKS
Anonymous Author(s)
Affiliation
Address
email
§ ABSTRACT
Knowledge distillation is a popular approach to compress high-performance neural networks for use in resource-constrained environments. However, the threat of adversarial machine learning poses the question: is it possible to compress adversarially robust networks and achieve adversarial robustness similar to or better than that of the original network? In this paper, we explore this question with respect to certifiable robustness defenses, in which the defense establishes a formal robustness guarantee irrespective of the adversarial attack methodology. We present our preliminary findings answering two main questions: 1) Is traditional knowledge distillation sufficient to compress certifiably robust neural networks? and 2) What aspects of the transfer process can we modify to improve the compression effectiveness? Our work represents the first study of the interaction between machine learning model compression and certifiable robustness.
|
| 14 |
+
|
| 15 |
+
§ 1 INTRODUCTION
|
| 16 |
+
|
| 17 |
+
The existence of adversarial inputs [8, 20], i.e., imperceptibly perturbed inputs that reliably cause erroneous outputs, has heightened concerns regarding the use of neural networks in sensitive real-world settings. In response, several methods have been proposed to harden neural networks against such inputs, enhancing their reliability in adversarial environments [15, 13, 19, 5]. One such category of methods focuses on computing the size of the largest neighborhood around a given input within which a classifier's output remains constant. Such a classifier is said to be certifiably robust for the given input inside this neighborhood. Since these methods provide a guarantee of robustness, they are highly desirable in safety- and privacy-critical applications such as payment systems, access control systems, self-driving cars, and security surveillance [14]. However, the expanded usage of neural networks in resource-limited computing platforms, such as IoT systems, has introduced a new challenge in this space: when training to generalize over adversarial data rather than the standard data distribution, larger networks are necessary [4, 16], making robust networks more resource-demanding than their non-robust counterparts.
|
| 18 |
+
|
| 19 |
+
Knowledge distillation (KD) [3, 9] is a technique which uses a teacher-student training pipeline to compress the performance of larger networks into a smaller architecture. On the standard (i.e., non-adversarial) classification task, KD improves the performance of the smaller student network and often results in performance comparable to the large teacher network. Thus, a question arises: can knowledge distillation be used in an adversarial context to compress highly robust neural networks? Several works have explored this question with respect to empirical robustness, in which adversarial robustness is measured with respect to a specific attack algorithm [7, 1, 22, 23]. We note, though, that no work has explored this question with respect to certifiable robustness, which is more desirable given that it establishes a security guarantee irrespective of the attack methodology. To address this gap in the literature, we perform the first study on compressing certifiably robust networks.
|
| 20 |
+
|
| 21 |
+
§ IN THIS PAPER, WE MAKE THE FOLLOWING CONTRIBUTIONS:
|
| 22 |
+
|
| 23 |
+
* We study the various strategies used for KD in the context of certifiable robustness. We discover shortfalls in the naive application of KD in these settings.
|
| 24 |
+
|
| 25 |
+
* We propose a more effective strategy for distilling certified robustness into small networks, which allows us to bridge the gap between the robustness of the student and the teacher.
|
| 26 |
+
|
| 27 |
+
* We identify that distilling certifiable robustness imposes stricter requirements than distilling standard performance or empirical robustness. Specifically, a smaller size gap between the student and the teacher networks might be required for effective distillation.
|
| 28 |
+
|
| 29 |
+
§ 2 BACKGROUND AND RELATED WORK
|
| 30 |
+
|
| 31 |
+
In this work, we study the certifiable robustness of neural network based image classifiers trained using knowledge distillation (KD). We use this section to define our problem, provide relevant background, and discuss related prior works.
|
| 32 |
+
|
| 33 |
+
§ 2.1 PRELIMINARIES
|
| 34 |
+
|
| 35 |
+
Consider a neural network classifier $f$ parameterized by $\theta$ (denoted as ${f}_{\theta }$) that is trained to map a given image $x \in {\mathbb{R}}^{d}$ to a set of discrete labels $\mathcal{Y}$ using a set of i.i.d. samples $\mathcal{S} = \left\{ {\left( {{x}_{1},{y}_{1}}\right) ,\left( {{x}_{2},{y}_{2}}\right) ,\cdots ,\left( {{x}_{n},{y}_{n}}\right) }\right\}$ drawn from some data distribution. The output of the classifier can be written as ${f}_{\theta }\left( x\right) = {\operatorname{argmax}}_{c \in \mathcal{Y}}{z}_{\theta }^{c}\left( x\right)$. Here ${z}_{\theta }\left( x\right)$ is the softmax output of the classifier and ${z}_{\theta }^{c}\left( x\right)$ denotes the probability that image $x$ belongs to class $c$. Training the classifier involves minimizing the cross-entropy loss on the standard data distribution, which is approximated through $\mathcal{S}$.
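For concreteness, the decision rule ${f}_{\theta }\left( x\right) = {\operatorname{argmax}}_{c \in \mathcal{Y}}{z}_{\theta }^{c}\left( x\right)$ amounts to the following (an illustrative stdlib-only sketch over raw logits; the helper names are our own, not from the paper):

```python
import math

def softmax(logits):
    """z_theta(x): normalized class probabilities computed from logits."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def predict(logits):
    """f_theta(x) = argmax_c z_theta^c(x); identical to taking the largest logit."""
    z = softmax(logits)
    return max(range(len(z)), key=z.__getitem__)
```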
|
| 36 |
+
|
| 37 |
+
§ 2.2 ADVERSARIAL ROBUSTNESS
|
| 38 |
+
|
| 39 |
+
In the ${\ell }_{p}$ -norm space, an adversarial perturbation is defined as any perturbation $\delta \in {\mathbb{R}}^{d}$ with $\parallel \delta {\parallel }_{p} \leq \epsilon$ that an adversary can use to change the classifier's output, i.e., ${f}_{\theta }\left( x\right) \neq {f}_{\theta }\left( {x + \delta }\right)$. Here, $\epsilon$ defines the adversarial perturbation budget in terms of the maximum allowed magnitude of the perturbation vector.
|
| 40 |
+
|
| 41 |
+
The adversarial robustness of a neural network is often characterized by its empirical or certified robustness. The empirical adversarial robustness of a neural network is the accuracy of the network against adversarial samples generated by a given attack algorithm. Many early proposed defenses that were thought to be effective based on their reported empirical robustness were later found to have poor true robustness [2]. In contrast, the certified adversarial robustness of a neural network represents the true worst-case performance and is measured by the lower bound accuracy of the network against all adversarial samples within a given $\epsilon$ neighborhood.
|
| 42 |
+
|
| 43 |
+
§ 2.2.1 CERTIFIED ROBUSTNESS
|
| 44 |
+
|
| 45 |
+
The certified robustness of a network can be measured by its lower bound accuracy within a predetermined neighborhood. Alternatively, it can be measured based on the radius of the largest neighborhood within which a classifier’s output remains (correct and) constant. For a given input $x$ , the classifier ${f}_{\theta }$ is said to be certifiably robust if ${f}_{\theta }\left( x\right)$ is provably constant within some large neighborhood around $x$ . The radius of this neighborhood (or, robust radius) is defined using the ${\ell }_{p}$ -norm metric as follows:
|
| 46 |
+
|
| 47 |
+
$$
|
| 48 |
+
R\left( {{f}_{\theta };x,y}\right) = \left\{ \begin{matrix} \mathop{\inf }\limits_{{{f}_{\theta }\left( {x}^{\prime }\right) \neq {f}_{\theta }\left( x\right) }}{\begin{Vmatrix}{x}^{\prime } - x\end{Vmatrix}}_{p} & ,\text{ when }{f}_{\theta }\left( x\right) = y \\ 0 & ,\text{ when }{f}_{\theta }\left( x\right) \neq y \end{matrix}\right. \tag{1}
|
| 49 |
+
$$
|
| 50 |
+
|
| 51 |
+
Intuitively, the robust radius $R\left( {{f}_{\theta };x,y}\right)$ establishes a region within which the classifier’s prediction remains constant, ensuring that an adversary with a budget $\epsilon \leq R$ can not succeed. Therefore, training networks to maximize the robust radius would harden a classifier against manipulations of all sorts, including adversarial ones. However, as computing the robust radius of a neural network for a given input $x$ is NP-hard [12], recent certified training methods instead propose computing a lower bound of the robust radius, known as the certified radius.
|
| 52 |
+
|
| 53 |
+
Randomized Smoothing. Cohen et al. [6] presented a scalable method for creating certifiably robust image classifiers using randomized smoothing. This involves converting a given classifier (termed base classifier) into a smooth classifier.
|
| 54 |
+
|
| 55 |
+
Definition 2.1. For a given (base) classifier ${f}_{\theta }$ and $\sigma > 0$ , the corresponding smooth classifier ${g}_{\theta }$ is defined as follows:
|
| 56 |
+
|
| 57 |
+
$$
|
| 58 |
+
{g}_{\theta }\left( x\right) = \mathop{\operatorname{argmax}}\limits_{{c \in \mathcal{Y}}}{P}_{\eta \sim \mathcal{N}\left( {0,{\sigma }^{2}I}\right) }\left( {{f}_{\theta }\left( {x + \eta }\right) = c}\right) \tag{2}
|
| 59 |
+
$$
|
| 60 |
+
|
| 61 |
+
Simply put, ${g}_{\theta }$ returns the class $c$ , which has the highest probability mass under the Gaussian distribution $\mathcal{N}\left( {x,{\sigma }^{2}I}\right)$ . The authors provide theoretical proof for the certified robustness of a smooth classifier. This theoretical work can be summarized using Theorem 2.2.
|
| 62 |
+
|
| 63 |
+
Theorem 2.2. Let ${f}_{\theta } : {\mathbb{R}}^{d} \mapsto \mathcal{Y}$ be a classifier and ${g}_{\theta }$ be its smoothed version (as defined in Equation 2). For a given input $x \in {\mathbb{R}}^{d}$ and corresponding ground truth output $y \in \mathcal{Y}$ , if ${g}_{\theta }$ correctly classifies $x$ as $y$ such that
|
| 64 |
+
|
| 65 |
+
$$
|
| 66 |
+
{P}_{\eta }\left( {{f}_{\theta }\left( {x + \eta }\right) = y}\right) \geq \mathop{\max }\limits_{{{y}^{\prime } \neq y}}{P}_{\eta }\left( {{f}_{\theta }\left( {x + \eta }\right) = {y}^{\prime }}\right) \tag{3}
|
| 67 |
+
$$
|
| 68 |
+
|
| 69 |
+
then ${g}_{\theta }$ is provably robust at $x$ within the certified radius $R$ given by:
|
| 70 |
+
|
| 71 |
+
$$
|
| 72 |
+
{CR}\left( {{g}_{\theta };x,y}\right) = \frac{\sigma }{2}\left\lbrack {{\Phi }^{-1}\left( {{P}_{\eta }\left( {{f}_{\theta }\left( {x + \eta }\right) = y}\right) }\right) - {\Phi }^{-1}\left( {\mathop{\max }\limits_{{{y}^{\prime } \neq y}}{P}_{\eta }\left( {{f}_{\theta }\left( {x + \eta }\right) = {y}^{\prime }}\right) }\right) }\right\rbrack \tag{4}
|
| 73 |
+
$$
|
| 74 |
+
|
| 75 |
+
where, $\Phi$ is the c.d.f. of the standard Gaussian distribution.
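As a concrete illustration, the certified radius of Equation 4 can be computed from estimated class probabilities in a few lines of Python (a hypothetical helper, not the paper's implementation):

```python
from statistics import NormalDist

def certified_radius(p_top: float, p_runner_up: float, sigma: float) -> float:
    """Certified L2 radius of Theorem 2.2: (sigma/2) * (Phi^-1(pA) - Phi^-1(pB)).

    p_top: probability that the base classifier labels x + eta with the true class y.
    p_runner_up: the largest probability among all other classes.
    Returns 0 when the smooth classifier does not favor the true class.
    """
    if p_top <= p_runner_up:
        return 0.0
    phi_inv = NormalDist().inv_cdf  # inverse c.d.f. of the standard Gaussian
    return (sigma / 2.0) * (phi_inv(p_top) - phi_inv(p_runner_up))
```

In practice, Cohen et al. estimate these probabilities via Monte Carlo sampling and use high-confidence bounds in place of point estimates; the sketch above skips that step.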
|
| 76 |
+
|
| 77 |
+
Furthermore, the authors demonstrate that training the base classifier to minimize cross-entropy loss on inputs perturbed with Gaussian noise (Gaussian data augmentation) increases the certified radius of the smooth classifier. More generally, improving the base classifier's robustness to Gaussian noise is a successful strategy for increasing certified radius and has been utilized by several other prior works [18, 21, 11].
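To make the smoothing operation concrete, the prediction of ${g}_{\theta }$ in Equation 2 can be approximated by Monte Carlo sampling. Below is a toy, stdlib-only sketch (the base classifier and all names are illustrative assumptions, not the paper's code):

```python
import random

def smooth_predict(base_classifier, x, sigma, n_samples=1000,
                   num_classes=10, rng=random.Random(0)):
    """Approximate g(x): perturb x with Gaussian noise N(0, sigma^2 I)
    n_samples times and return the base classifier's majority-vote class."""
    counts = [0] * num_classes
    for _ in range(n_samples):
        noisy = [xi + rng.gauss(0.0, sigma) for xi in x]
        counts[base_classifier(noisy)] += 1
    return max(range(num_classes), key=counts.__getitem__)

# Toy base classifier on 1-D inputs: class 1 iff the coordinate is positive.
f = lambda x: 1 if x[0] > 0 else 0
```

For inputs far from the decision boundary relative to $\sigma$, the vote is nearly unanimous, which is exactly what yields a large certified radius.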
|
| 78 |
+
|
| 79 |
+
§ 2.3 KNOWLEDGE DISTILLATION (KD)
|
| 80 |
+
|
| 81 |
+
For the standard classification task, small neural networks are able to learn similar classification functions as large ones through a process known as knowledge distillation [3, 9]. Traditional KD involves training the small network (student) to mimic the outputs of a much larger network (teacher) for a given task (e.g., classification). The student's training objective, also referred to as the distillation objective, is formalized as follows:
|
| 82 |
+
|
| 83 |
+
$$
|
| 84 |
+
\mathop{\min }\limits_{\theta }{\mathbb{E}}_{\left( {x,y}\right) \sim \mathcal{S}}\left\lbrack {\left( {1 - \alpha }\right) {\mathcal{L}}_{CE}\left( {{z}_{\theta }\left( x\right) ,y}\right) + \alpha {t}^{2}{\mathcal{L}}_{M}\left( {{z}_{\theta }^{t}\left( x\right) ,{z}_{\phi }^{t}\left( x\right) }\right) }\right\rbrack \tag{5}
|
| 85 |
+
$$
|
| 86 |
+
|
| 87 |
+
where ${z}_{\theta }\left( x\right)$ and ${z}_{\phi }\left( x\right)$ are the softmax outputs of the student ${S}_{\theta }$ and teacher ${T}_{\phi }$, respectively; ${\mathcal{L}}_{CE}$ is the cross-entropy loss; ${\mathcal{L}}_{M}$ is the "mimic loss" (e.g., KL-divergence or Euclidean distance); $\alpha$ is a hyperparameter used to weigh the two loss terms; and $t$ is the softmax temperature. The value of $\alpha$ is usually set to 1, implying that the student is solely trained to mimic the teacher and, in the process, learns to perform well on the task. Training with the supervision of a teacher improves the student's performance compared to training it independently, because the small student benefits from the inter-class relationships learned by the large teacher (with higher modeling capacity). By distilling the knowledge of a large network into a small network, we essentially perform model compression, as the small network encodes the performance of the large network with fewer parameters. This, in turn, allows for the use of high-performance networks on resource-restricted devices.
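A per-example sketch of the distillation objective in Equation 5, using KL-divergence as the mimic loss (plain Python; the helper names are our own):

```python
import math

def softmax(logits, t=1.0):
    """Temperature-scaled softmax z^t(x)."""
    m = max(logits)
    exps = [math.exp((l - m) / t) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kd_loss(student_logits, teacher_logits, y, alpha=1.0, t=4.0):
    """Equation 5 for one example: (1 - alpha) * CE(student, y)
    + alpha * t^2 * KL(teacher_t || student_t), with softmax temperature t."""
    zs = softmax(student_logits, t)
    zt = softmax(teacher_logits, t)
    ce = -math.log(softmax(student_logits)[y])
    kl = sum(p * math.log(p / q) for p, q in zip(zt, zs))
    return (1 - alpha) * ce + alpha * t * t * kl
```

With $\alpha = 1$ only the mimic term remains, matching the usual KD setup described above.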
|
| 88 |
+
|
| 89 |
+
§ 2.4 ADVERSARIAL ROBUSTNESS AND KNOWLEDGE DISTILLATION
|
| 90 |
+
|
| 91 |
+
Using the lessons learned from KD, several successful attempts have been made at improving the adversarial robustness of small networks by training them under the supervision of larger, robust networks. Goldblum et al. [7] propose Adversarially Robust Distillation (ARD), which combines KD with Adversarial Training [15]: the student is trained to match the teacher's outputs on adversarial inputs. ARD improves the robustness of small networks against gradient-based attacks relative to standalone adversarial training. Zi et al. [23] propose Robust Soft Label Adversarial Distillation (RSLAD), which improves upon ARD by using soft labels from a robust teacher rather than hard labels in all supervision loss terms. An important commonality among these prior works is that they limit their experimentation to adversarial training, which is an empirical robustness method. Therefore, the robustness claims they make can potentially be invalidated by a future adversary.
|
| 92 |
+
|
| 93 |
+
Table 1: The mimic loss used in different distillation objectives proposed by prior works. ${z}_{\theta }$ and ${z}_{\phi }$ are softmax outputs of the student and teacher network respectively, and $t$ represents the temperature parameter. Note that since $\alpha$ is usually set to 1 (see Equation 5), we only report the loss terms that the student is actually trained with.
|
| 94 |
+
|
| 95 |
+
| Method | ${\mathcal{L}}_{\mathcal{M}}$ |
| --- | --- |
| KD [3, 9] | KL-Div $\left( {{z}_{\theta }^{t}\left( x\right) ,{z}_{\phi }^{t}\left( x\right) }\right)$ or ${\begin{Vmatrix}{z}_{\theta }\left( x\right) - {z}_{\phi }\left( x\right) \end{Vmatrix}}_{2}$ |
| ARD [7] ${}^{ * }$ | KL-Div $\left( {{z}_{\theta }^{t}\left( {x + \delta }\right) ,{z}_{\phi }^{t}\left( x\right) }\right)$ |
| RSLAD [23] | KL-Div $\left( {{z}_{\theta }\left( x\right) ,{z}_{\phi }\left( x\right) }\right) +$ KL-Div $\left( {{z}_{\theta }\left( {x + \delta }\right) ,{z}_{\phi }\left( x\right) }\right)$ |

${}^{ * }$ For $\delta$ we use Gaussian noise instead of adversarial noise.
|
| 111 |
+
|
| 112 |
+
§ 3 DISTILLING CERTIFIED ROBUSTNESS
|
| 113 |
+
|
| 114 |
+
In order to promote the deployment of safe machine learning models in resource-limited settings, it is important to study whether certifiably robust neural networks can be effectively compressed. Knowledge distillation (KD) is one of the most effective approaches for doing this. Therefore, in this section we examine the effectiveness of KD towards compressing certifiably robust neural networks. We begin by describing our experimental setup in Section 3.1. In Section 3.2, we study the effectiveness of existing distillation objectives in distilling certified robustness. In Section 3.3, we propose a distillation objective using the traditional KD strategy that improves the distillation process by addressing the shortcomings of existing objectives.
|
| 115 |
+
|
| 116 |
+
§ 3.1 EXPERIMENTAL SETUP
|
| 117 |
+
|
| 118 |
+
In our experiments, we focus on certified robustness of image classifiers in the ${\ell }_{2}$ -space. Following the work by Cohen et al. [6], we use randomized smoothing to achieve certifiably robust classifiers. All our experiments are conducted using the CIFAR-10 dataset. To measure certified robustness, we follow prior works and report certified accuracy (i.e., the prediction accuracy of the smooth classifier) at different ${\ell }_{2}$ radii [6, 18, 11]. The certified accuracy at radius $= 0$ is equivalent to the prediction accuracy of the smooth classifier on clean inputs. Additionally, we report the average certified radius (ACR) computed over the entire test set [21]. Our code is implemented in PyTorch [17] and is publicly available at [REDACTED]${}^{1}$
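The two reported metrics can be sketched as follows (our own helper names; `radii` holds per-input certified radii and `correct` marks whether the smooth classifier's prediction is correct):

```python
def certified_accuracy(radii, correct, r):
    """Fraction of test inputs that are correctly classified AND certified
    at an L2 radius >= r; at r = 0 this reduces to clean accuracy."""
    n = len(radii)
    return sum(1 for cr, ok in zip(radii, correct) if ok and cr >= r) / n

def average_certified_radius(radii, correct):
    """ACR: mean certified radius over the test set, counting
    misclassified inputs as radius 0."""
    return sum(cr if ok else 0.0 for cr, ok in zip(radii, correct)) / len(radii)
```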
|
| 119 |
+
|
| 120 |
+
§ 3.2 CAN EXISTING DISTILLATION OBJECTIVES BE USED TO DISTILL CERTIFIED ROBUSTNESS?
|
| 121 |
+
|
| 122 |
+
Knowledge distillation is effectively a method for "transferring" the knowledge of one network to another. Traditionally, this is achieved by training one network to mimic the other using some sort of mimic loss (i.e., ${\mathcal{L}}_{\mathcal{M}}$ from Equation 5). Prior works on distilling adversarial robustness [7, 23, 22] also follow the traditional KD strategy, but propose different versions of ${\mathcal{L}}_{\mathcal{M}}$ (see Table 1). In this section, we evaluate whether these existing distillation objectives can be used to distill certified robustness. Note that since ARD and RSLAD were designed for adversarial training [15], they are not compatible with the certified robustness methods that we wish to study. To make them compatible, we use Gaussian noise in place of the adversarial noise term $\left( \delta \right)$ in their distillation objective.
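For instance, the ARD mimic loss from Table 1 with a Gaussian $\delta$ can be sketched as below (a one-sample estimate; the function names and the KL direction are our own assumptions):

```python
import math
import random

def softmax(logits, t=1.0):
    """Temperature-scaled softmax."""
    m = max(logits)
    exps = [math.exp((l - m) / t) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def ard_gaussian_mimic(student, teacher, x, sigma, t=4.0, rng=random.Random(0)):
    """KL-Div(z_student^t(x + delta), z_teacher^t(x)) with delta ~ N(0, sigma^2 I).
    `student` and `teacher` map an input vector to class logits."""
    noisy = [xi + rng.gauss(0.0, sigma) for xi in x]
    zs = softmax(student(noisy), t)
    zt = softmax(teacher(x), t)
    return sum(p * math.log(p / q) for p, q in zip(zt, zs))
```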
|
| 123 |
+
|
| 124 |
+
For experimentation, we use a ResNet-110 network as the teacher and train it on the CIFAR-10 dataset. To obtain non-trivial certified robustness for this network, we train it using the Gaussian data augmentation method proposed by Cohen et al. [6]. We then distill its robustness to a much smaller ResNet-20 network using the different existing distillation objectives. The results are summarized in Table 2. For comparison, we also report the student and teacher network's robustness when trained independently using Gaussian data augmentation. The first observation we make is that traditional distillation (KD) completely fails at distilling certified robustness, as the student exhibits trivial ACR and certified accuracy for all values of $r$ . The student network trained with RSLAD appears to have non-trivial certified robustness; however, the distilled ResNet-20 network exhibits poorer robustness than the one trained independently. This implies that the RSLAD objective is also unsuitable for our use case. Only the ARD objective seems to be successful at distilling certified robustness, as the distilled ResNet-20 exhibits higher robustness than a ResNet-20 trained independently.
|
| 125 |
+
|
| 126 |
+
${}^{1}$ To maintain anonymity, we have temporarily redacted the URL to the code repository.
|
| 127 |
+
|
| 128 |
+
${}^{2}$ For certification, we borrow code from Cohen et al. [6]: https://github.com/locuslab/smoothing
|
| 129 |
+
|
| 130 |
+
Table 2: Comparing the robustness of a student network trained using different variants of knowledge distillation. We denote the distillation process as "teacher $\xrightarrow[]{\text{method}}$ student". Among all the methods, only ARD is able to successfully distill certified robustness, presenting higher robustness than the independently-trained student.
|
| 131 |
+
|
| 132 |
+
| Model | **ACR** | $r = {0.00}$ | $r = {0.25}$ | $r = {0.50}$ | $r = {0.75}$ |
| --- | --- | --- | --- | --- | --- |
| ResNet-110 | 0.486 | 81.41 | 67.75 | 49.67 | 32.37 |
| ResNet-20 | 0.451 | 79.62 | 63.78 | 45.65 | 28.01 |
| ResNet-110 $\xrightarrow[]{\mathrm{{KD}}}$ ResNet-20 | 0.090 | 10.93 | 10.16 | 9.86 | 9.03 |
| ResNet-110 $\xrightarrow[]{\text{RSLAD}}$ ResNet-20 | 0.431 | 77.46 | 61.98 | 43.57 | 25.62 |
| ResNet-110 $\xrightarrow[]{\mathrm{{ARD}}}$ ResNet-20 | 0.456 | 76.50 | 62.80 | 46.87 | 30.29 |
|
| 152 |
+
|
| 153 |
+
Table 3: Evaluating the effectiveness of CRD in distilling certified robustness. CRD performs better than the distillation objectives proposed by prior works. Furthermore, the CRD-trained student exhibits robustness comparable to its teacher (Table 2, ${1}^{st}$ row).
|
| 154 |
+
|
| 155 |
+
| Model | **ACR** | $r = {0.00}$ | $r = {0.25}$ | $r = {0.50}$ | $r = {0.75}$ |
| --- | --- | --- | --- | --- | --- |
| ResNet-110 $\xrightarrow[]{\mathrm{{CRD}}}$ ResNet-20 | 0.483 | 80.06 | 66.19 | 49.62 | 32.74 |
|
| 163 |
+
|
| 164 |
+
§ 3.3 ADAPTING TRADITIONAL KD STRATEGY TO DISTILL CERTIFIED ROBUSTNESS
|
| 165 |
+
|
| 166 |
+
From the results in Table 2, we observe that even for the best performing objective (i.e., ARD), there exists a gap between the teacher's and the student's robustness. In this section, we explore whether it is possible to bridge this gap while using the traditional KD strategy of mimicking outputs. We note that the formulation of ${\mathcal{L}}_{M}$ used by prior works on distilling adversarial robustness was motivated by wanting the student to learn a similar output distribution for clean and adversarial inputs (generated using some attack). This motivation, however, does not translate well to our use case. In the randomized smoothing paradigm, higher certified robustness comes from higher robustness to Gaussian noise. In fact, Jeong et al. [10] note that there is a direct correlation between the robustness of a smooth classifier and its prediction confidence (tied to the confidence of the base classifier on inputs perturbed with Gaussian noise). Based on this, we propose the following ${\mathcal{L}}_{M}$ which is tailored for certified robustness distillation:
|
| 167 |
+
|
| 168 |
+
$$
|
| 169 |
+
\mathop{\min }\limits_{\theta }{\mathbb{E}}_{\left( {x,y}\right) \sim \mathcal{S};\delta \sim \mathcal{N}\left( {0,{\sigma }^{2}I}\right) }\left\lbrack {\left( {1 - \alpha }\right) {\mathcal{L}}_{CE}\left( {{z}_{\theta }\left( {x + \delta }\right) ,y}\right) + \alpha {t}^{2}{\begin{Vmatrix}{z}_{\theta }^{t}\left( x + \delta \right) - {z}_{\phi }^{t}\left( x + \delta \right) \end{Vmatrix}}_{2}}\right\rbrack \tag{6}
|
| 170 |
+
$$
|
| 171 |
+
|
| 172 |
+
We refer to this distillation objective as Certified Robust Distillation (CRD). Simply put, we are training the student to mimic the teacher’s output not only at the given input $x$ , but also in the Gaussian neighborhood around it. The robustness of a ResNet-20 distilled from a ResNet-110 using CRD is reported in Table 3. Comparing with the results in Table 2, we observe that CRD is successful at bridging the gap between the robustness of the student and the teacher. This makes CRD a more successful objective for distilling certified robustness than objectives proposed by prior works.
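A one-sample sketch of the mimic term in Equation 6 (the $\alpha = 1$ case; plain Python with our own names, not the released implementation):

```python
import math
import random

def softmax(logits, t=1.0):
    """Temperature-scaled softmax."""
    m = max(logits)
    exps = [math.exp((l - m) / t) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def crd_mimic(student, teacher, x, sigma, t=1.0, rng=random.Random(0)):
    """t^2 * || z_student^t(x + delta) - z_teacher^t(x + delta) ||_2 for a
    single delta ~ N(0, sigma^2 I). Both networks see the SAME perturbed
    input, unlike the ARD/RSLAD objectives of Table 1."""
    noisy = [xi + rng.gauss(0.0, sigma) for xi in x]
    zs = softmax(student(noisy), t)
    zt = softmax(teacher(noisy), t)
    return t * t * math.sqrt(sum((a - b) ** 2 for a, b in zip(zs, zt)))
```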
|
| 173 |
+
|
| 174 |
+
§ 4 LIMITATIONS OF CRD
|
| 175 |
+
|
| 176 |
+
In this section, we address the existing limitations of CRD. It is known that robustness training requires the network to learn more complicated functions than standard training [4, 16]. Therefore, we suspect that it might be more difficult to distill networks with high robustness than to distill networks with high standard performance (e.g., accuracy on the test set). To investigate this, we repeat the experiment from the previous section, this time training the teacher to be more robust using training methods stronger than Gaussian data augmentation. Specifically, we use MACER [21] and SmoothMix [10]. The results are reported in Table 4. For comparison, we also report the robustness of student and teacher networks independently trained using MACER and SmoothMix. Overall, we observe that for all teacher training methods used in this section, the distillation process is not as effective as it was in the previous section. In all cases, there is a large gap between the robustness of the student and the teacher networks. Furthermore, in only one case (i.e., MACER) does the distilled ResNet-20 have higher robustness than the independently trained ResNet-20.
|
| 177 |
+
|
| 178 |
+
Table 4: Evaluating the effectiveness of CRD in distilling certified robustness from ResNet-110 teachers with progressively higher robustness. It is harder for a student to mimic a more robust teacher.
|
| 179 |
+
|
| 180 |
+
**MACER [21]**

| Model | **ACR** | $r = {0.00}$ | $r = {0.25}$ | $r = {0.50}$ | $r = {0.75}$ |
| --- | --- | --- | --- | --- | --- |
| ResNet-110 | 0.531 | 79.11 | 68.39 | 55.90 | 40.61 |
| ResNet-20 | 0.507 | 76.44 | 65.81 | 52.87 | 38.75 |
| ResNet-110 $\xrightarrow[]{\mathrm{{CRD}}}$ ResNet-20 | 0.508 | 78.30 | 66.80 | 53.15 | 37.75 |

**SmoothMix [10]**

| Model | **ACR** | $r = {0.00}$ | $r = {0.25}$ | $r = {0.50}$ | $r = {0.75}$ |
| --- | --- | --- | --- | --- | --- |
| ResNet-110 | 0.550 | 76.89 | 68.25 | 57.42 | 46.26 |
| ResNet-20 | 0.522 | 75.55 | 65.53 | 54.72 | 42.62 |
| ResNet-110 $\xrightarrow[]{\mathrm{{CRD}}}$ ResNet-20 | 0.514 | 76.33 | 65.85 | 53.83 | 40.28 |
|
| 212 |
+
|
| 213 |
+
Table 5: Testing the network capacity limitation of CRD using SmoothMix [10]. Networks of larger sizes are required to effectively mimic teachers possessing high certified robustness.
|
| 214 |
+
|
| 215 |
+
| Model | **ACR** | $r = {0.00}$ | $r = {0.25}$ | $r = {0.50}$ | $r = {0.75}$ |
| --- | --- | --- | --- | --- | --- |
| ResNet-110 | 0.550 | 76.89 | 68.25 | 57.42 | 46.26 |
| ResNet-32 | 0.537 | 76.44 | 67.17 | 56.19 | 44.00 |
| ResNet-110 $\xrightarrow[]{\mathrm{{CRD}}}$ ResNet-32 | 0.530 | 76.81 | 67.57 | 55.52 | 42.48 |
| ResNet-44 | 0.545 | 76.55 | 67.33 | 57.18 | 45.85 |
| ResNet-110 $\xrightarrow[]{\mathrm{{CRD}}}$ ResNet-44 | 0.541 | 77.34 | 68.11 | 56.84 | 43.92 |
| ResNet-56 | 0.545 | 77.01 | 68.17 | 56.89 | 45.05 |
| ResNet-110 $\xrightarrow[]{\mathrm{{CRD}}}$ ResNet-56 | 0.547 | 77.60 | 68.24 | 57.72 | 44.97 |
|
| 241 |
+
|
| 242 |
+
We run additional experiments to further investigate this student network capacity limitation of CRD. Starting with a ResNet-110 teacher trained using SmoothMix, we distill it into networks of various sizes using CRD. The results for this experiment are reported in Table 5. We observe that as we increase the size of the student network, the effectiveness of the distillation process improves. For ResNet-56, which is about half the size of ResNet-110, CRD succeeds at achieving comparable robustness between the student and the teacher. The gap between the robustness of the student and the teacher grows progressively worse as we reduce the size of the student. These results further corroborate that CRD, in its current form, is unable to distill the complicated functions learned by state-of-the-art robustness methods into networks below a certain size. This result is unlike what prior works have reported in the context of distilling both standard performance and adversarial robustness, where distillation can be successfully performed between student and teacher networks with larger size differences than the ones we use (e.g., WideResNet-34-10 to ResNet-20) [3, 7, 23].
|
| 243 |
+
|
| 244 |
+
§ 5 CONCLUSION & FUTURE WORK
|
| 245 |
+
|
| 246 |
+
In this paper, we presented the first study of knowledge distillation (KD) in the context of certified robustness. We tested different existing distillation objectives, designed to distill standard performance or (empirical) adversarial robustness, in terms of how effective they are at distilling certified robustness. Based on these results, we proposed a distillation objective (CRD) tailored for distilling certified robustness. However, CRD suffers from a network capacity limitation which makes it impractical to use with state-of-the-art certified training methods; further research is needed to address this shortcoming. We believe our preliminary investigation will serve as a useful starting point for future works on compressing certifiably robust machine learning models.
|
NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/Z31SloFrp7/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,503 @@
| 1 |
+
# Private Data Leakage via Exploiting Access Patterns of Sparse Features in Deep Learning-based Recommendation Systems
|
| 2 |
+
|
| 3 |
+
Anonymous Author(s)
|
| 4 |
+
|
| 5 |
+
Affiliation
|
| 6 |
+
|
| 7 |
+
Address
|
| 8 |
+
|
| 9 |
+
email
|
| 10 |
+
|
| 11 |
+
## Abstract
|
| 12 |
+
|
| 13 |
+
Deep Learning-based Recommendation models use sparse and dense features of a user to predict an item that the user may like. Because these features carry the users' private information, service providers often protect their values with memory encryption (e.g., with hardware such as Intel's SGX). However, even with such protection, an attacker may still learn which entries of the sparse features are nonzero through the embedding table access pattern. In this work, we show that leaking only the positions of the sparse features' nonzero entries can pose a significant threat to privacy. Using the embedding table access pattern, we show that it is possible to identify or re-identify a user, or to extract sensitive attributes of a user. We subsequently show that applying a hash function to anonymize the access pattern is not a solution, as it can be reverse-engineered in many cases.
|
| 14 |
+
|
| 15 |
+
## 12 1 Introduction
|
| 16 |
+
|
| 17 |
+
Deep learning-based personalized recommendation models empower modern Internet services. These models exploit different types of information, including user attributes, user preferences, user behavior, social interaction, and other contextual information Erkin et al. (2010) to provide personalized recommendations relevant to a given user. They drive ${35}\%$ of Amazon’s revenue Gupta et al. (2020) and influence ${80}\%$ of the videos streamed on Netflix Gomez-Uribe and Hunt (2015).
Figure 1: Left: DLRM. Right: example of an embedding lookup.
Deep learning-based recommendation models use dense (continuous) and sparse (categorical) features of a user as input to a deep neural network to predict an item that the user may like (Figure 1, left). The features may include both static features that do not change frequently (e.g., age or gender) and dynamic features that change frequently (e.g., a user's recent behavior history). Both kinds of features can hold sensitive information and must be kept private. Private user features are often encrypted in memory, using hardware such as a trusted execution environment (TEE), e.g., Intel SGX team (2022). However, even when using a TEE, the information of which entries of the sparse features are nonzero can leak. This is because sparse features must be projected into a lower-dimensional space through an embedding table, where the indices of the nonzero entries are used as indices for embedding table lookups (Figure 1, right). In this paper, we show that this information leakage is a sufficient threat to privacy. We first show that it is possible to (1) identify a user, (2) extract sensitive attributes of a user, or (3) re-identify a user, by only looking at the embedding table access pattern, even when the data is fully encrypted. We subsequently show that applying a hash function to randomize the access pattern is not a general solution, by demonstrating a set of hash-inversion attacks. Specifically, we show that the following attacks are possible by only observing the embedding table access patterns in modern deep learning recommendation models:

- Identification attack. We demonstrate that it is possible to identify a user by only observing the sparse features' embedding table access pattern.

- Sensitive attribute attack. We show that it is possible to extract sensitive attributes of a user (e.g., demographics) from seemingly unrelated sparse features, such as dynamic user behavior history.

- Re-identification attack. We show that it is possible to identify whether two queries come from the same user by only looking at seemingly innocuous sparse features, such as the user's recent purchase history.

- Hash inversion with frequency-based attack. We show that hiding the accesses using a hash is not a defense against these attacks, by demonstrating a hash-inversion attack based on access frequencies. Our attack can invert sophisticated private hash functions as well as the simple hash functions mainly used by industry today.
## 2 Background and Threat Model

Deep learning-based recommendation models Zhou et al. (2018, 2019); Naumov et al. (2019); Ishkhanov et al. (2020); Cheng et al. (2016) use dense and sparse features of a user and an item to predict whether the user is likely to interact with the item (e.g., click an ad or purchase an item). Figure 1 shows the operation of a representative recommendation model, DLRM Naumov et al. (2019). In DLRM, the dense features go through a bottom MLP layer, while the sparse features go through an embedding table layer and are converted into lower-dimensional dense features. The two outputs then go through a feature interaction layer (e.g., pairwise dot product) and a top MLP layer to predict the likelihood of an interaction. Other modern recommendation models work similarly Zhou et al. (2018, 2019); Ishkhanov et al. (2020); Cheng et al. (2016). Embedding tables convert a sparse feature into a dense representation by using the indices of the nonzero entries in the sparse feature as indices for lookups into a large table (Figure 1, right). Even when the entire dense and sparse features are fully encrypted and processed in a secure environment (e.g., using Intel SGX Costan and Devadas (2016), hardware that encrypts memory contents and protects computation), it is possible to learn which indices hold nonzero entries by observing the table access pattern.

Threat Model: We assume a scenario where users share their private features with the service provider to get recommendations from the model. We assume that the values of the dense and sparse features of a user are fully protected from the attacker, e.g., with Intel SGX team (2022), but the access pattern of the embedding table is revealed, essentially revealing which entries of the sparse features are nonzero. In the real world, an honest-but-curious service provider running model inference on Intel SGX falls into this category. Figure 2 summarizes our threat model.

Figure 2: Our threat model assumes only the access pattern to the embedding table is revealed.
## 3 Identification Attack with Static User Features

A single user's inference request contains a series of sparse features, each of which in isolation carries limited user information. However, multiple sparse features together can form a distinctive fingerprint for personal identification. User profile attributes (e.g., gender, city) are usually static; in other words, they do not change, or they change extremely infrequently. We categorize this type of feature into two subcategories: identifiable features and unidentifiable features. Because of strict regulations in many domains, most recommendation systems do not collect and use identifiable features. The question is whether unidentifiable features such as age, gender, education, and shopping history can provide sufficient information to identify a user.

Table 1: The number of users with anonymity level below K in the identification attacks (out of 1.14 million users).

<table><tr><td>1-anonymity</td><td>2-anonymity</td><td>3-anonymity</td><td>4-anonymity</td><td>5-anonymity</td><td>6-anonymity</td><td>7-anonymity</td><td>8-anonymity</td><td>9-anonymity</td><td>10-anonymity</td></tr><tr><td>56</td><td>154</td><td>256</td><td>380</td><td>480</td><td>606</td><td>739</td><td>867</td><td>984</td><td>1104</td></tr></table>
Evaluation Setup: To answer this question, we analyzed an open-source dataset released by Alibaba. This dataset contains static user features including user ID (1.14M), micro group ID (97), group ID (13), gender (2), age group (7), consumption grade/plevel (4), shopping depth (3), occupation/is college student (2), and city level (5). More details about the datasets are in Appendix A.

Attack Method: In this set of features, the only directly identifying feature associated with a single user is the user ID. After removing the user ID, the collection of all other features provides 2.1 million possible combinations. Hence, after removing the user ID, a user may mistakenly think that he or she is anonymous and that revealing any of the other features to the attacker will not reveal the user's identity. However, based on the user profile information from more than 1 million users, we observe that in the real world only 1120 combinations of these static feature values actually occur. We refer to these 1120 combinations as user buckets. We plot the histogram of users in these 1120 buckets in Figure 3. The x-axis indicates the bucket number ([1-1120]) and the y-axis shows the percentage of users per bucket. The histogram shows that the user distribution follows a long-tail pattern. In particular, there are only a few users in buckets 600 to 1120. In fact, there are only 989 users on average across these buckets, and the last 56 buckets contain only 1 user each. Consequently, observing the full combination of seemingly innocuous features from each user may allow an attacker to launch an identification attack and extract the unique user ID with very high certainty.

Figure 3: Percentage of the users belonging to each user bucket.

Evaluation Metric: For our analysis, we use a well-known property from information security and privacy known as $K$-anonymity. It describes a scenario in which, if a user's bucket number is revealed and there are $K$ users in the same bucket, the probability of finding the user is $\frac{1}{K}$. For instance, 1-anonymity for a user means that this is the only user with this particular set of feature values.

Evaluation Result: As shown in Table 1, 56 of the user buckets contain only one user with the specific combination of static features, which implies that an attacker can identify these users with 1-anonymity by observing this combination of feature values. Moreover, for more than 1,000 users, the anonymity level is 10 or below.
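As a concrete illustration of the bucket analysis, the short sketch below computes each user's anonymity level from a toy table of static features. All user IDs and feature values here are made up for illustration; the real analysis runs over the 1.14M-user Alibaba dataset.

```python
from collections import Counter

def anonymity_levels(users):
    """Map each user to their anonymity level K: the number of users
    that share the exact same combination of static feature values."""
    bucket_sizes = Counter(users.values())  # feature combination -> bucket size
    return {uid: bucket_sizes[feats] for uid, feats in users.items()}

# Toy data: (gender, age group, city level), values are illustrative only.
users = {
    1: ("F", "18-24", "city2"),
    2: ("F", "18-24", "city2"),  # same bucket as user 1 -> 2-anonymity
    3: ("M", "25-34", "city1"),  # unique combination -> 1-anonymity
}
levels = anonymity_levels(users)

def users_below(K):
    """Number of users whose anonymity level is K or lower (as in Table 1)."""
    return sum(1 for k in levels.values() if k <= K)

print(users_below(1), users_below(2))  # 1 3
```

User 3's feature combination is unique, so revealing it identifies that user exactly, mirroring the 56 single-user buckets found in the real dataset.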
## 4 Sensitive Attribute Attack by Dynamic User Features

In this section, the question is: when a user withholds their static features, can sensitive attributes still leak through other, nonsensitive features? For instance, a user may provide no age information and may feel that withholding their static features protects more of their private data. However, we demonstrate that even when a user hides their sensitive static features, adversaries can still extract those attributes through cross-correlations with user-item interaction data.

Evaluation Setup: For evaluation, we use dynamic sparse features that include user-item interactions Zhao et al. (2019) in the Alibaba Ads Display dataset. This dataset contains 723,268,134 tuples collected over three weeks. Each tuple includes a user ID (1.14M), a btag (4: browse, cart, favor, buy), a category ID (12K), and a brand (379K).

Figure 4: Different brands are popular among different customer age groups.

Figure 5: Using the accessed brands, ambiguity about (A) user buckets (defined in the previous section), (B) user age groups, and (C) user gender groups.

Attack Method: Figure 4 depicts an example of how items of different brands are accessed by different user groups. The user-item interactions are depicted as a graph in which each edge weight represents the fraction of the total interactions with that specific item that come from the corresponding age group. In real-world datasets, there are certain brands that users from just a single age group interact with; in this example, Legoland. A user who wants to protect their age group may not provide their age, but the adversary can deduce it with high probability if the user interacted with Legoland. While this simple illustration highlights the extreme case (only one age group interacting with an item), the approach generalizes: the attacker uses their prior knowledge of the popularity of items among different demographic groups and, based on this prior, links a query to the demographic group that generated most of the accesses to that item.

Evaluation Metric: We employ a metric called ambiguity to determine the likelihood that an adversary fails to predict a user's static sparse feature by viewing only their interactions with items. We define the ambiguity for each item $i$ as $\text{ambiguity}_i = 100\% - \max(\text{frequency}_i)$, where $\text{frequency}_i$ is the distribution vector of all accesses to brand $i$ by different user groups. Using Figure 4 as an example, $\text{frequency}_{\text{Apple}} = [0, 0, 20\%, 50\%, 30\%, 0, 0]$ and, as a result, $\text{ambiguity}_{\text{Apple}} = 50\%$, meaning that if a user has interacted with item $i$ (Apple), the attacker can predict the static feature (age group) correctly for $50\%$ of such users. With this definition, $\text{ambiguity}_i = 0$ indicates that if a user has interacted with item $i$, the attacker can always determine the user's sparse feature.
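The ambiguity metric above can be sketched in a few lines; the brand names and group labels below are invented for illustration, with Apple reproducing the frequency vector from the text.

```python
import numpy as np

def ambiguity(interactions):
    """Per-brand ambiguity: 100% minus the share of the dominant user group.

    `interactions` maps brand -> list of group labels (small ints),
    one label per observed interaction with that brand.
    """
    out = {}
    for brand, groups in interactions.items():
        counts = np.bincount(groups)
        freq = counts / counts.sum()             # the frequency_i vector
        out[brand] = 100.0 * (1.0 - freq.max())  # ambiguity_i
    return out

# Apple-like example from the text: age groups 2, 3, 4 make up
# 20%, 50%, 30% of the accesses; Legoland is accessed by one group only.
amb = ambiguity({"Apple": [2] * 2 + [3] * 5 + [4] * 3, "Legoland": [1] * 4})
print(amb)  # {'Apple': 50.0, 'Legoland': 0.0}
```

Legoland's ambiguity of 0 captures the extreme case in Figure 4: one interaction with it pins down the user's age group.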
Evaluation Result: As shown in Figure 5, we quantify the ambiguity of predicting a user's sparse features, such as age and gender, using their item (brand) interaction history alone. The x-axis shows the percentage of ambiguity, where a value of 0 indicates that there is no ambiguity, i.e., the brand is always accessed by only one user bucket. Higher values indicate more ambiguity, so brands with higher values on the x-axis are popular across multiple user buckets. We plot both the probability density function (PDF) and the cumulative distribution function (CDF) of the ambiguity of different brands. In Figure 5(A), we observe that more than 17% of brands are accessed by only 1 user bucket, represented by the leftmost tall bar of the PDF, meaning the attacker can determine the user bucket from interactions with those brands. As shown in the CDF curve in Figure 5(A), for 38% of the brands, the attacker can predict the user bucket with a success rate greater than 50%. We present age group and gender group versus ambiguity in Figure 5(B) and Figure 5(C), respectively.
## 5 Re-Identification Attack

In a re-identification attack, the goal of the attacker is to identify the same user over time by observing only their interaction history. Studies have shown that the majority of users prefer not to be tracked, even anonymously Teltzrow and Kobsa (2004). In this section, we first study whether the purchase history of a user can be used as a tracking identifier, i.e., whether the purchase history is unique to each user. Second, we study whether an attacker can re-identify the same user across queries sent over time by tracking only their purchase history, with no access to the static sparse features.

Evaluation Setup: For evaluation, we used the Taobao dataset, which has more than 723 million user-item interactions. From these, we separated about 9 million purchase interactions. We then pre-processed and formatted the data into a time-series structure (the user history data structure) shown below:
$$
\begin{aligned}
\text{user}_1 &: (\text{time}_1, \text{item}_1), (\text{time}_4, \text{item}_{10}), (\text{time}_{500}, \text{item}_{20})\\
\text{user}_2 &: (\text{time}_3, \text{item}_{100}), (\text{time}_{20}, \text{item}_{100})\\
&\;\vdots\\
\text{user}_X &: (\text{time}_5, \text{item}_{75}), (\text{time}_{20}, \text{item}_{50}), (\text{time}_{100}, \text{item}_{75}), (\text{time}_{400}, \text{item}_1), (\text{time}_{420}, \text{item}_{10})
\end{aligned}
$$
Second, for each set of consecutive items purchased by any user, we create a list of the users who have the same set of consecutive purchases in exactly that order. We refer to these sets of consecutive recent purchases as keys. Multiple users may have the same key in their history; therefore, each key keeps a list of all the users that generated that key and the duration of time during which they had it. An example of the recent item purchase history, when we consider the two most recent purchases, is shown below. Each key consists of a pair of items. For instance, the first line shows that item 1 and item 10 were the most recent purchases of user 1 from time 4 to time 500.
key : list of values

$$
\begin{aligned}
\left[\text{item}_1, \text{item}_{10}\right] &: \left[\text{user}_1, \text{time}_4, \text{time}_{500}\right], \left[\text{user}_X, \text{time}_{420}, \text{Current}\right]\\
\left[\text{item}_{10}, \text{item}_{20}\right] &: \left[\text{user}_1, \text{time}_{1000}, \text{Current}\right]\\
\left[\text{item}_{100}, \text{item}_{100}\right] &: \left[\text{user}_2, \text{time}_{20}, \text{Current}\right]\\
&\;\vdots\\
\left[\text{item}_{75}, \text{item}_{50}\right] &: \left[\text{user}_X, \text{time}_{20}, \text{time}_{100}\right]\\
\left[\text{item}_{50}, \text{item}_{75}\right] &: \left[\text{user}_X, \text{time}_{100}, \text{time}_{400}\right]\\
\left[\text{item}_{75}, \text{item}_1\right] &: \left[\text{user}_X, \text{time}_{400}, \text{time}_{420}\right]
\end{aligned}
$$
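This key index can be built with a short sketch. The toy histories below mirror the user_1/user_X example; in the paper this structure is built over roughly 9 million purchase interactions.

```python
from collections import defaultdict

def build_key_index(histories, m=2):
    """Index of the m most recent purchases per user over time.

    `histories` maps user -> list of (time, item) sorted by time.
    Each key (tuple of m consecutive items) maps to (user, start, end)
    intervals during which that key was the user's most recent purchases.
    """
    index = defaultdict(list)
    for user, events in histories.items():
        for i in range(m - 1, len(events)):
            key = tuple(item for _, item in events[i - m + 1 : i + 1])
            start = events[i][0]
            # The key stays current until the user's next purchase.
            end = events[i + 1][0] if i + 1 < len(events) else "Current"
            index[key].append((user, start, end))
    return index

histories = {
    "user_1": [(1, "item_1"), (4, "item_10"), (500, "item_20")],
    "user_X": [(5, "item_75"), (20, "item_50"), (100, "item_75"),
               (400, "item_1"), (420, "item_10")],
}
idx = build_key_index(histories, m=2)
print(idx[("item_1", "item_10")])
# [('user_1', 4, 500), ('user_X', 420, 'Current')]
```

A key shared by several users (here, [item_1, item_10]) is exactly the collision case that produces false positives in the attack evaluated below.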
The goal of this attack is to use only the $m$ ($m = 2$ in the example above) most recent purchases of a user to track the user across different interaction sessions, which are separated by timestamps. To evaluate this attack:
1. We randomly select a timestamp and a user.

2. For the selected user, we check the $m$ most recent purchases of the user at the selected timestamp and form a key $=$ [recent purchase 1, recent purchase 2, ..., recent purchase $m$].

3. We look up this key in the recent item purchase history dataset. If the same sequence of $m$ most recent items appears for another user in the same time window, these recent purchases are not unique to that specific user at that time and cannot be used as a fingerprint of a single user.

4. On the other hand, if the $m$-item purchase history belongs only to that specific user, we extract the duration of time during which this key formed the most recent purchases of the user.

5. This experiment is repeated for many random timestamps and users to obtain 200,000 samples.

As depicted in Figure 6(A), we observe that even the two most recent purchases can serve as a unique identifier for 98% of our samples. In other words, at a random point in time, the two most recent purchases of a user are unique for 98% of randomly selected users. We found that the three, four, and five most recent purchases uniquely identify users with 99% probability.
Attack Method: The most recent items purchased by a user usually do not change with very high frequency. For the period of time during which these recent purchases remain the same, every query sent by the user carries the same list of recent purchases, and the attacker exploits this to launch the attack. To do so, the attacker first selects a time threshold, which is used to decide whether two queries come from the same user: if two distinct queries received by the cloud have the same most recent purchases and the time difference between receiving them is less than the time threshold, the attacker predicts that they come from the same user. Otherwise, the queries are assumed to come from two different users.
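The attacker's linking rule reduces to a simple decision function; the sketch below is illustrative, with the key being the tuple of the $m$ most recent purchases exposed by each query.

```python
def same_user(key_a, time_a, key_b, time_b, threshold):
    """Attacker's linking rule: attribute two queries to the same user
    iff they expose the same m most recent purchases (the key) and
    arrive within `threshold` time units of each other."""
    return key_a == key_b and abs(time_a - time_b) <= threshold

# Two queries with identical recent-purchase keys, 3600 seconds apart,
# linked under a 2-hour threshold; a differing key is never linked.
print(same_user(("item_1", "item_10"), 100, ("item_1", "item_10"), 3700, 7200))  # True
print(same_user(("item_1", "item_10"), 100, ("item_5", "item_10"), 3700, 7200))  # False
```

Raising `threshold` links more true pairs (higher recall) but also links distinct users who happen to generate the same key within the window (lower precision), which is exactly the trade-off evaluated next.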
Evaluation Metric: To measure the accuracy of this attack, we use precision and recall as defined in Buckland and Gey (1994), shown in Eq. (1):

$$
\text{Precision} = \frac{TP}{TP + FP}, \qquad \text{Recall} = \frac{TP}{TP + FN}, \tag{1}
$$

where TP stands for true positives, FP for false positives, and FN for false negatives. Precision indicates what percentage of positive predictions are accurate, and recall indicates what percentage of actual positives are detected.
Evaluation Result: To evaluate the precision/recall trade-off, we start from a very small time threshold and increase it gradually. As expected, with low time thresholds, precision is high with few false positives. But as the attacker increases the time threshold and identifies more of the actual positives (higher recall), the false positives increase as well, which reduces precision. The reason for more false positives with a large threshold is that, over a longer period of time, other users may generate the same key. Table 2 shows that when the 2 most recent purchases are used, there are around 4.5 million keys, but the total number of occurrences of these keys is around 8 million. This means that for a fraction of the keys, the same keys are generated by different users at different times. These repeated keys are the source of false positives in our experiments.

Figure 6: A) Uniqueness of most recent purchases of users. B and C) Precision/recall trade-off based on different time threshold values.

The choice of the right threshold depends on whether the attacker prefers higher recall or higher precision. Figure 6 shows this trade-off for different time threshold values. We gradually increase the time threshold from 1 second to 277 hours (11.5 days). As shown in this figure, by increasing the time threshold to 11 days, recall reaches 1.0 with an almost 0.02 drop in precision. This means the attacker can correctly link all the queries that come from the same users, at the cost of mispredicting 2% of the queries that do not come from the same user and merely generate the same key at some point in their purchase history. These high precision and recall values indicate how an attacker can track users who send queries to the recommendation model over time.

Table 2: Re-identification attack statistics about the number of keys and repeated keys.

<table><tr><td>Number of recent purchases</td><td>Number of users</td><td>Number of keys</td><td>Total occurrences of keys</td></tr><tr><td>2</td><td>898,803</td><td>4,476,760</td><td>8,114,860</td></tr><tr><td>3</td><td>799,475</td><td>5,679,087</td><td>7,216,057</td></tr><tr><td>4</td><td>705,888</td><td>5,587,578</td><td>6,416,582</td></tr><tr><td>5</td><td>620,029</td><td>5,197,043</td><td>5,710,694</td></tr></table>
## 6 Hash Inversion with Frequency-Based Attack

Applying a hash to the indices before the embedding table lookup is an important performance optimization (more details about the data pipeline in production-scale recommendation systems and different hashing schemes can be found in Appendix B). Here, we analyze how hashing impacts information leakage. This section studies how an attacker can recover the raw values of sparse features even when hashing is used for the embedding indices. Through a hash function, users' raw data are remapped to post-hash values for indexing the embedding tables, as shown in Figure 7.

Figure 7: The frequency-based attack tries to reverse-engineer the hash based on the access frequencies.

Evaluation Setup: For evaluation, we used the Taobao, Kaggle, and Criteo datasets. For each dataset, we selected two disjoint random sets: a training set and a test set. The training set samples form the prior distribution, and the test samples are used for the evaluation.

Attack Method: An adversary can launch the attack by collecting the frequency of the observed indices, using prior knowledge about the distribution of feature values, and finding the mapping between the input and output of the hash. Here we show how an attacker can compromise a system with hashed input values where the hash function is $\text{output} = (\text{input} + \text{mask}_{\text{add}}) \bmod P$, where $P$ is the hash size. We denote the frequencies of the $N$ possible inputs to the hash function by $x_1, x_2, \ldots, x_N$, and the output frequencies by $y_1, y_2, \ldots, y_P$ for a hash of size $P$. We form the matrix $M \in \mathbb{R}^{P \times P}$ in which each column represents a different value of the mask ($[0, P-1]$): for each value of the mask, we compute the frequencies of the outcomes and record them as a column. Increasing the mask value by 1 shifts the column values, so $M$ is a Toeplitz matrix. Since the matrix consists of a single column that is shifted and repeated, forming it takes $O(P)$ time.

Table 3: Accuracy of hash inversion for the frequency-based attack on the Taobao dataset.

<table><tr><td>Number of Samples used for Learning Distribution</td><td>Number of Samples for Evaluation</td><td>Top 1</td><td>Top 2</td><td>Top 3</td><td>Top 4</td><td>Top 5</td><td>Top 6</td><td>Top 7</td><td>Top 8</td><td>Top 9</td><td>Top 10</td></tr><tr><td>1,000,000</td><td>1,000</td><td>0.64</td><td>0.76</td><td>0.83</td><td>0.87</td><td>0.89</td><td>0.90</td><td>0.91</td><td>0.92</td><td>0.93</td><td>0.94</td></tr><tr><td>1,000,000</td><td>100,000</td><td>0.61</td><td>0.75</td><td>0.82</td><td>0.86</td><td>0.88</td><td>0.90</td><td>0.92</td><td>0.92</td><td>0.93</td><td>0.93</td></tr><tr><td>2,000,000</td><td>100,000</td><td>0.62</td><td>0.76</td><td>0.82</td><td>0.86</td><td>0.89</td><td>0.91</td><td>0.92</td><td>0.93</td><td>0.93</td><td>0.94</td></tr><tr><td>2,000,000</td><td>1,000,000</td><td>0.62</td><td>0.76</td><td>0.82</td><td>0.86</td><td>0.89</td><td>0.91</td><td>0.92</td><td>0.93</td><td>0.93</td><td>0.94</td></tr></table>
$$
\mathbf{M} = {\begin{bmatrix} y_1 & y_P & \cdots & y_2 \\ y_2 & y_1 & \cdots & y_3 \\ \vdots & \vdots & \ddots & \vdots \\ y_P & y_{P-1} & \cdots & y_1 \end{bmatrix}}_{P \times P} \tag{2}
$$
The attacker's goal is to invert the hash using the known input distribution and the observed output distribution. Note that the dataset used to learn the input distribution and the observed output dataset should be independent. We define $\mathbf{a}_t$ as the distribution of embedding table accesses (post-hash) at time $t$. To reverse-engineer the mask, the attacker has to find out which mask is used by the hash function by solving the optimization problem in Eq. (3):
$$
\min_i {\left\| \mathbf{m}_i - \mathbf{a}_t \right\|}^2 = \min_i \left( {\left\| \mathbf{m}_i \right\|}^2 + {\left\| \mathbf{a}_t \right\|}^2 - 2\,\mathbf{m}_i^{\top}\mathbf{a}_t \right) \tag{3}
$$
In Eq. (3), $\mathbf{m}_i$ represents the vector containing the frequencies of the output values when mask $i$ is used, so its norm $\left\| \mathbf{m}_i \right\|$ is a constant across all $i$; the same holds for $\left\| \mathbf{a}_t \right\|$. As a result, the optimization problem can be simplified to Eq. (4):
$$
\bar{P} = \underset{i}{\arg \max }\left( \mathbf{m}_i^{\top}\mathbf{a}_t \right) \;\text{ for }\; i \in \left[ 0, P-1 \right] \;\Rightarrow\; \bar{P} = \underset{i}{\arg \max }\left( \mathbf{M}^{\top}\mathbf{a}_t \right) \tag{4}
$$
Computing such a matrix-vector product takes $O(P^2)$ time in general. However, because $\mathbf{M}$ is a Toeplitz matrix, this matrix-vector product can be computed in $O(P \log P)$ time Strang (1986). To implement this attack, we created two disjoint sets: the first set is used to extract the distribution (the known distribution), and the second set is used for frequency matching and evaluating the frequency-based attack. First, the attacker reverse-engineers the hash function and finds the key based on frequency matching, using the method described above. Next, the attacker maps the post-hash indices back to the values of the raw sparse features: after finding the key of the hash, the attacker maps each post-hash value to the most frequent pre-hash values based on the input distribution.
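A minimal sketch of this frequency-matching step follows, assuming the additive-mask hash above and a known input distribution `x`. For this hash, $\mathbf{M}$ is circulant (a special Toeplitz matrix), so all $P$ inner products $\mathbf{m}_i^{\top}\mathbf{a}_t$ can be computed at once as a circular cross-correlation via the FFT, in $O(P \log P)$.

```python
import numpy as np

def recover_mask(x, a):
    """Recover the additive mask of out = (in + mask) mod P.

    x: known prior distribution of pre-hash values (length P).
    a: observed distribution of embedding-table accesses (length P).
    Returns argmax_i m_i^T a over all P candidate masks, computed as
    c[i] = sum_j x[(j - i) mod P] * a[j] via the FFT.
    """
    c = np.fft.ifft(np.conj(np.fft.fft(x)) * np.fft.fft(a)).real
    return int(np.argmax(c))

rng = np.random.default_rng(0)
P, mask = 1024, 321
x = rng.dirichlet(np.ones(P))  # skewed input distribution (prior)
a = np.roll(x, mask)           # observed post-hash access frequencies
print(recover_mask(x, a))      # 321
```

The peak of the cross-correlation lands at the true mask because a distribution's circular autocorrelation is maximized at zero lag (Cauchy-Schwarz), provided the input distribution is not periodic.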
Evaluation Metric: Accuracy here is the probability that the attacker correctly identifies the raw input value from the post-hash value. Let the function $g(y)$ be the attacker's estimate of the input given the output query $y$: $g(y) = \arg \max_x \operatorname{Prob}(x)$ s.t. $\widehat{h}(x) = y$, where $\widehat{h}(x)$ is the attacker's estimate of the hash function. Using this definition, accuracy is defined as:
$$
\text{Accuracy} = \operatorname{Prob}_{x \sim \mathcal{P}_X}\left( x = g\left( h\left( x \right) \right) \right), \tag{5}
$$
where $h(x)$ is the true hash function, and the probability is over the distribution of the input query. We also use the notion of top-$K$ accuracy in this section. Essentially, top-$K$ accuracy is the probability of the input query being among the attacker's top guesses. To formally define this, we first denote the set $\widehat{\mathcal{S}}(y) = \{ x \mid \widehat{h}(x) = y \}$, which is the set of all possible inputs, given an output query $y$, based on the attacker's estimate of the hash function. We then define the set $g_K(y)$ to be the $K$ members of $\widehat{\mathcal{S}}(y)$ with the largest probability: $g_K(y) = \{ x \in \widehat{\mathcal{S}}(y) \mid \operatorname{Prob}(x) \text{ is among the top } K \text{ probabilities} \}$. This means $g_K(y)$ is the set of the attacker's top $K$ guesses of the input query. We can now use $g_K(y)$ to formally define top-$K$ accuracy:
$$
{\text{Accuracy}}_{\text{top } K} = \operatorname{Prob}_{x \sim \mathcal{P}_X}\left( x \in g_K\left( h\left( x \right) \right) \right), \tag{6}
$$
where $h(x)$ is the true hash function, and the probability is over the distribution of the input query.

Evaluation Result: As shown in Table 3, we vary the number of interactions in the test sets to measure the accuracy of hash inversion; the attacker achieves up to 0.94 top-10 accuracy on the Taobao dataset. Results on the Kaggle and Criteo datasets are reported in Appendix C. The key observation is that, by observing the frequency of queries, an attacker who knows the distribution of the pre-hash values and the type of the hash function can reconstruct the values of the raw features with high accuracy. We also extend this attack to a general attack on more complex hash functions using OMP; the details of this machine learning-based attack are explained in Appendix D. In Appendix B, we discuss why none of the current solutions addresses all of these issues.
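The definitions of $g_K(y)$ and top-$K$ accuracy in Eqs. (5)-(6) can be sketched directly. The hash, prior, and samples below are toy values chosen for illustration, with the attacker assumed to know both exactly.

```python
def top_k_guesses(y, prob, h_hat, K):
    """g_K(y): the K most probable pre-hash values x with h_hat(x) == y.

    prob: dict x -> Prob(x), the attacker's prior over inputs;
    h_hat: the attacker's estimate of the hash function.
    """
    candidates = [x for x in prob if h_hat(x) == y]
    return sorted(candidates, key=prob.get, reverse=True)[:K]

def top_k_accuracy(samples, prob, h, h_hat, K):
    """Fraction of queries x whose true value is among the top-K guesses."""
    hits = sum(x in top_k_guesses(h(x), prob, h_hat, K) for x in samples)
    return hits / len(samples)

# Toy example: the hash is x mod 4; inputs 0 and 4 (and 1 and 5) collide.
prob = {0: 0.4, 4: 0.1, 1: 0.3, 5: 0.2}
h = h_hat = lambda x: x % 4
samples = [0, 0, 4, 1, 5]
print(top_k_accuracy(samples, prob, h, h_hat, K=1))  # 0.6
```

With $K = 1$ the attacker only guesses the most frequent pre-image of each bucket, so the rarer colliding inputs (4 and 5) are missed; raising $K$ to the bucket size drives the accuracy to 1.0, mirroring the growth across the Top-1 through Top-10 columns of Table 3.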
## 7 Conclusion
In this work, we shed light on the information leakage through sparse features in deep learning-based recommendation systems. While prior investigations focused on dense feature protection, our work pivots to the unprotected access patterns of sparse features. The new insight from this work is that even the access patterns can pose a serious threat to privacy.
## References
Bilge Acun, Matthew Murphy, Xiaodong Wang, Jade Nie, Carole-Jean Wu, and Kim Hazelwood. 2021. Understanding training efficiency of deep learning recommendation models at scale. In 2021 IEEE International Symposium on High-Performance Computer Architecture (HPCA). IEEE, 802-814.
Charu C Aggarwal and Philip S Yu. 2007. On privacy-preservation of text and sparse binary data with sketches. In Proceedings of the 2007 SIAM International Conference on Data Mining. SIAM, 57-67.
Naveed Akhtar and Ajmal Mian. 2018. Threat of adversarial attacks on deep learning in computer vision: A survey. Ieee Access 6 (2018), 14410-14430.
Ghazaleh Beigi and Huan Liu. 2020. A survey on privacy in social media: Identification, mitigation, and applications. ACM Transactions on Data Science 1, 1 (2020), 1-38.
Vincent Bindschaedler, Paul Grubbs, David Cash, Thomas Ristenpart, and Vitaly Shmatikov. 2017. The tao of inference in privacy-protected databases. Cryptology ePrint Archive (2017).
Michael Buckland and Fredric Gey. 1994. The relationship between recall and precision. Journal of the American society for information science 45, 1 (1994), 12-19.
Joseph A Calandrino, Ann Kilzer, Arvind Narayanan, Edward W Felten, and Vitaly Shmatikov. 2011. "You might also like:" Privacy risks of collaborative filtering. In 2011 IEEE symposium on security and privacy. IEEE, 231-246.
Abdelberi Chaabane, Gergely Acs, Mohamed Ali Kaafar, et al. 2012. You are what you like! information leakage through users' interests. In Proceedings of the 19th annual network & distributed system security symposium (NDSS). Citeseer.
David Chaum. 1985. Security without identification: Transaction systems to make big brother obsolete. Commun. ACM 28, 10 (1985), 1030-1044.
Heng-Tze Cheng, Levent Koc, Jeremiah Harmsen, Tal Shaked, Tushar Chandra, Hrishi Aradhye, Glen Anderson, Greg Corrado, Wei Chai, Mustafa Ispir, et al. 2016. Wide & deep learning for recommender systems. In Proceedings of the 1st workshop on deep learning for recommender systems. 7-10.
Christopher A Choquette-Choo, Florian Tramer, Nicholas Carlini, and Nicolas Papernot. 2021. Label-only membership inference attacks. In International Conference on Machine Learning. PMLR, 1964-1974.
Victor Costan and Srinivas Devadas. 2016. Intel SGX Explained. IACR Cryptology ePrint Archive 2016, 086 (2016), 1-118.
Thomas M Cover. 1999. Elements of information theory. John Wiley & Sons.
Paul Cuff and Lanqing Yu. 2016. Differential privacy as a mutual information constraint. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. 43-54.
Zekeriya Erkin, Michael Beye, Thijs Veugen, and Reginald L Lagendijk. 2010. Privacy enhanced recommender system. In Thirty-first symposium on information theory in the Benelux. 35-42.
Gabriel Ghinita, Yufei Tao, and Panos Kalnis. 2008. On the anonymization of sparse high-dimensional data. In 2008 IEEE 24th International Conference on Data Engineering. IEEE, 715-724.
Oded Goldreich. 1998. Secure multi-party computation. Manuscript. Preliminary version 78 (1998), 110.
Oded Goldreich and Rafail Ostrovsky. 1996. Software protection and simulation on oblivious RAMs. Journal of the ACM (JACM) 43, 3 (1996), 431-473.
Carlos A Gomez-Uribe and Neil Hunt. 2015. The netflix recommender system: Algorithms, business value, and innovation. ACM Transactions on Management Information Systems (TMIS) 6, 4 (2015), 1-19.
Paul Grubbs, Marie-Sarah Lacharité, Brice Minaud, and Kenneth G Paterson. 2019. Learning to reconstruct: Statistical learning theory and encrypted database attacks. In 2019 IEEE Symposium on Security and Privacy (SP). IEEE, 1067-1083.
Chuan Guo, Awni Hannun, Brian Knott, Laurens van der Maaten, Mark Tygert, and Ruiyu Zhu. 2020. Secure multiparty computations in floating-point arithmetic. arXiv preprint arXiv:2001.03192 (2020).
Udit Gupta, Carole-Jean Wu, Xiaodong Wang, Maxim Naumov, Brandon Reagen, David Brooks, Bradford Cottel, Kim Hazelwood, Mark Hempstead, Bill Jia, et al. 2020. The architectural implications of facebook's dnn-based personalized recommendation. In 2020 IEEE International Symposium on High Performance Computer Architecture (HPCA). IEEE, 488-501.
Jialiang Han, Yun Ma, Qiaozhu Mei, and Xuanzhe Liu. 2021. Deeprec: On-device deep learning for privacy-preserving sequential recommendation in mobile commerce. In Proceedings of the Web Conference 2021. 900-911.
Kim Hazelwood, Sarah Bird, David Brooks, Soumith Chintala, Utku Diril, Dmytro Dzhulgakov, Mohamed Fawzy, Bill Jia, Yangqing Jia, Aditya Kalro, et al. 2018. Applied machine learning at facebook: A datacenter infrastructure perspective. In 2018 IEEE International Symposium on High Performance Computer Architecture (HPCA). IEEE, 620-629.
Tigran Ishkhanov, Maxim Naumov, Xianjie Chen, Yan Zhu, Yuan Zhong, Alisson Gusatti Azzolini, Chonglin Sun, Frank Jiang, Andrey Malevich, and Liang Xiong. 2020. Time-based sequence model for personalization and recommendation systems. arXiv preprint arXiv:2008.11922 (2020).
Kousha Kalantari, Lalitha Sankar, and Oliver Kosut. 2017. On information-theoretic privacy with general distortion cost functions. In 2017 ieee international symposium on information theory (isit). IEEE, 2865-2869.
Wang-Cheng Kang, Derek Zhiyuan Cheng, Ting Chen, Xinyang Yi, Dong Lin, Lichan Hong, and Ed H Chi. 2020. Learning multi-granular quantized embeddings for large-vocab categorical features in recommender systems. In Companion Proceedings of the Web Conference 2020. 562-566.
Criteo AI Lab. 2018a. Criteo 1 TB click log. https://ailab.criteo.com/ressources/. [Online; accessed 31-August-2022].
Criteo AI Lab. 2018b. Kaggle display advertising dataset. https://ailab.criteo.com/ressources/. [Online; accessed 31-August-2022].
Zheng Li and Yang Zhang. 2021. Membership leakage in label-only exposures. In Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security. 880-895.
Jiachun Liao, Oliver Kosut, Lalitha Sankar, and Flavio P Calmon. 2017. A general framework for information leakage.
Siyi Liu, Chen Gao, Yihong Chen, Depeng Jin, and Yong Li. 2020. Learnable Embedding sizes for Recommender Systems. In International Conference on Learning Representations.
Michael Lui, Yavuz Yetim, Özgür Özkan, Zhuoran Zhao, Shin-Yeh Tsai, Carole-Jean Wu, and Mark Hempstead. 2021. Understanding capacity-driven scale-out neural recommendation inference. In 2021 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS). IEEE, 162-171.
Fatemehsadat Mireshghallah, Mohammadkazem Taram, Ali Jalali, Ahmed Taha Elthakeb, Dean Tullsen, and Hadi Esmaeilzadeh. 2020. Not All Features Are Equal: Discovering Essential Features for Preserving Prediction Privacy. arXiv preprint arXiv:2003.12154 (2020).
Dheevatsa Mudigere, Yuchen Hao, Jianyu Huang, Andrew Tulloch, Srinivas Sridharan, Xing Liu, Mustafa Ozdal, Jade Nie, Jongsoo Park, Liang Luo, et al. 2021. High-performance, distributed training of large-scale deep learning recommendation models. arXiv e-prints (2021), arXiv-2104.
Maxim Naumov, Dheevatsa Mudigere, Hao-Jun Michael Shi, Jianyu Huang, Narayanan Sundaraman, Jongsoo Park, Xiaodong Wang, Udit Gupta, Carole-Jean Wu, Alisson G Azzolini, et al. 2019. Deep learning recommendation model for personalization and recommendation systems. arXiv preprint arXiv:1906.00091 (2019).
Chaoyue Niu, Fan Wu, Shaojie Tang, Lifeng Hua, Rongfei Jia, Chengfei Lv, Zhihua Wu, and Guihai Chen. 2020. Billion-scale federated learning on mobile clients: A submodel design with tunable privacy. In Proceedings of the 26th Annual International Conference on Mobile Computing and Networking. 1-14.
Ivan V Oseledets. 2011. Tensor-train decomposition. SIAM Journal on Scientific Computing 33, 5 (2011), 2295-2317.
Rachit Rajat, Yongqin Wang, and Murali Annavaram. 2021. Look Ahead ORAM: Obfuscating Addresses in Recommendation Model Training. arXiv preprint arXiv:2107.08094 (2021).
Mehrnoosh Raoufi, Youtao Zhang, and Jun Yang. 2022. IR-ORAM: Path Access Type Based Memory Intensity Reduction for Path-ORAM. In 2022 IEEE International Symposium on High-Performance Computer Architecture (HPCA). IEEE, 360-372.
Ling Ren, Christopher W Fletcher, Albert Kwon, Emil Stefanov, Elaine Shi, Marten van Dijk, and Srinivas Devadas. 2014. Ring ORAM: Closing the Gap Between Small and Large Client Storage Oblivious RAM. IACR Cryptol. ePrint Arch. 2014 (2014), 997.
Jim Salter. 2021. Containerize all the things! Arm v9 takes security seriously. https://blog.openmined.org/pysyft-pytorch-intel-sgx/. [Online; accessed 18-October-2021].
Geet Sethi, Bilge Acun, Niket Agarwal, Christos Kozyrakis, Caroline Trippel, and Carole-Jean Wu. 2022. RecShard: Statistical Feature-Based Memory Optimization for Industry-Scale Neural Recommendation. arXiv preprint arXiv:2201.10095 (2022).
Qinfeng Shi, James Petterson, Gideon Dror, John Langford, Alex Smola, and SVN Vishwanathan. 2009. Hash kernels for structured data. Journal of Machine Learning Research 10, 11 (2009).
Erez Shmueli and Tamir Tassa. 2017. Secure multi-party protocols for item-based collaborative filtering. In Proceedings of the eleventh ACM conference on recommender systems. 89-97.
Congzheng Song and Ananth Raghunathan. 2020. Information leakage in embedding models. In Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security. 377-390.
Emil Stefanov, Marten Van Dijk, Elaine Shi, T-H Hubert Chan, Christopher Fletcher, Ling Ren, Xiangyao Yu, and Srinivas Devadas. 2018. Path ORAM: an extremely simple oblivious RAM protocol. Journal of the ACM (JACM) 65, 4 (2018), 1-26.
Gilbert Strang. 1986. A proposal for Toeplitz matrix calculations. Studies in Applied Mathematics 74, 2 (1986), 171-176.
Latanya Sweeney. 2002. k-anonymity: A model for protecting privacy. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 10, 05 (2002), 557-570.
Intel SGX team. 2022. Intel® Software Guard Extensions. https://www.intel.com/content/www/us/en/developer/tools/software-guard-extensions/overview.html. [Online; accessed 04-October-2022].
Taobao Team. 2018. Ad Display Click Data on Taobao.com. https://tianchi.aliyun.com/dataset/dataDetail?dataId=56&lang=en-us. [Online; accessed 31-August-2022].
Maximilian Teltzrow and Alfred Kobsa. 2004. Impacts of user privacy preferences on personalized systems. In Designing personalized user experiences in eCommerce. Springer, 315-332.
Joel A Tropp and Anna C Gilbert. 2007. Signal recovery from random measurements via orthogonal matching pursuit. IEEE Transactions on information theory 53, 12 (2007), 4655-4666.
Wenhao Wang, Guoxing Chen, Xiaorui Pan, Yinqian Zhang, XiaoFeng Wang, Vincent Bindschaedler, Haixu Tang, and Carl A Gunter. 2017. Leaky cauldron on the dark land: Understanding memory side-channel hazards in SGX. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. 2421-2434.
Liu Yang, Ben Tan, Vincent W Zheng, Kai Chen, and Qiang Yang. 2020. Federated recommendation systems. In Federated Learning. Springer, 225-239.
Jiangchao Yao, Feng Wang, Kunyang Jia, Bo Han, Jingren Zhou, and Hongxia Yang. 2021. Device-cloud collaborative learning for recommendation. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining. 3865-3874.
Chunxing Yin, Bilge Acun, Carole-Jean Wu, and Xing Liu. 2021. Tt-rec: Tensor train compression for deep learning recommendation models. Proceedings of Machine Learning and Systems 3 (2021), 448-462.
Caojin Zhang, Yicun Liu, Yuanpu Xie, Sofia Ira Ktena, Alykhan Tejani, Akshay Gupta, Pranay Kumar Myana, Deepak Dilipkumar, Suvadip Paul, Ikuhiro Ihara, et al. 2020. Model size reduction using frequency based double hashing for recommender systems. In Fourteenth ACM Conference on Recommender Systems. 521-526.
Kunpeng Zhang, Shaokun Fan, and Harry Jiannan Wang. 2018. An efficient recommender system using locality sensitive hashing. In Proceedings of the 51st Hawaii International Conference on System Sciences.
Minxing Zhang, Zhaochun Ren, Zihan Wang, Pengjie Ren, Zhunmin Chen, Pengfei Hu, and Yang Zhang. 2021. Membership Inference Attacks Against Recommender Systems. In Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security. 864-879.
Qian Zhao, Martijn C Willemsen, Gediminas Adomavicius, F Maxwell Harper, and Joseph A Konstan. 2019. From preference into decision making: modeling user interactions in recommender systems. In Proceedings of the 13th ACM Conference on Recommender Systems. 29-33.
Guorui Zhou, Na Mou, Ying Fan, Qi Pi, Weijie Bian, Chang Zhou, Xiaoqiang Zhu, and Kun Gai. 2019. Deep interest evolution network for click-through rate prediction. In Proceedings of the AAAI conference on artificial intelligence, Vol. 33. 5941-5948.
Guorui Zhou, Xiaoqiang Zhu, Chenru Song, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, and Kun Gai. 2018. Deep interest network for click-through rate prediction. In Proceedings of the 24th ACM SIGKDD international conference on knowledge discovery & data mining. 1059-1068.
## A Data sets
For studying the attacks in the following sections, we use multiple open source datasets such as Taobao Ads Display, Kaggle Ads Display, and Criteo Display. In this section, we briefly explain the content of these datasets, and in each of the following sections, we explain more about the dataset characteristics that we used.
Taobao Ads Display Team (2018): This dataset contains user static features for 1,140,000 users, with 10 static features per user including their user IDs. Other features represent a user's profile, e.g., age, gender, occupation level, living city, education level, etc. Another file contains user behavior data with seven hundred million records of users' past behaviors, covering shopping behavior over 22 days. Each row of this file indicates an interaction between a user (represented by user ID) and an item (represented by item brand ID and category ID), along with the type of interaction (buy, browse, fav, cart) and the time stamp of the interaction.
Kaggle Ads Display Lab (2018b): CriteoLabs shared a week's worth of data for developing models that predict ads' click-through rates (CTR). This dataset contains three data files, including a training file and test files. The training file consists of a portion of Criteo's traffic over a period of 7 days. Each row corresponds to a display ad served by Criteo. Positive (clicked) and negative (non-clicked) examples have both been subsampled at different rates to reduce the dataset size. Each row contains 13 dense features and 26 sparse features that form embedding table accesses. The semantics of these features are not released. The test set is computed in the same way as the training set, but for events on the day following the training period.
Criteo Ads Display Lab (2018a): This dataset is similar to Kaggle, but it is much larger, containing 24 data files collected over 24 days with a different subsampling ratio.
For the identification attack, sensitive attribute attack, re-identification attack, and OMP-based frequency attack, our analysis requires user IDs, static profile features, or user past behaviors in the same dataset. Hence, for these attacks we used the Taobao dataset, which is the only public dataset containing all of these features. The frequency-based attack requires less information, so all three datasets meet its requirements, and we evaluate all of them in the hash information leakage study and the frequency-based attack.
## B Data Pipeline in Production-Scale Recommendation Systems
As mentioned earlier, exposing raw values of sparse features can leak sensitive information of a user. In this section, we discuss the current production-scale data pipeline for sparse feature processing and how such real system designs may impact the information leak.
One challenge in designing efficient embedding tables is that the values of sparse features may be unbounded, resulting in very large embedding tables. Consider news articles as a dynamic sparse feature that a user may interact with: thousands of news articles are produced around the world in a single day, and creating an embedding for each news item would make the embedding table impractically large. For instance, the DLRM recommendation model in 2021 needs 16x more memory than the one used in 2017 Lui et al. (2021); Sethi et al. (2022). Furthermore, 99% of model parameters belong to embedding tables Gupta et al. (2020). That is why production-scale models demand 10s of TB of memory capacity Mudigere et al. (2021); Sethi et al. (2022). One common solution for converting high-dimensional data to a low-level representation is hashing Shi et al. (2009). Using hashing for recommendation systems was first suggested in Zhang et al. (2018). In addition to bounding sparse features to a fixed size, hashing helps with responding to rare inputs that have not been seen before Acun et al. (2021); Kang et al. (2020). Furthermore, using high-cardinality features may cause over-fitting due to over-parameterization Liu et al. (2020); Kang et al. (2020). For all these reasons, sparse feature inputs in production-scale models are hashed prior to embedding look-ups.
In Appendix B.1, we briefly explain how different hashing schemes work and then analyze how hashing impacts information leakage. Recall that all the information leakage discussed in the prior sections stems from an adversary seeing the raw values of embedding table indices. Our analysis demonstrates that embedding table hashing in recommendation systems, which was not designed for protecting data privacy, does not help reduce information leakage.
### B.1 Hash Functions
There are multiple ways of reducing the embedding table size using hash functions, and they all have trade-offs. We explain some of the most common hashing schemes here.
Embedding table as a hash-map: With a hash-map, embedding table entries are combined based on their similarity to form a smaller embedding table. However, a hash map must be maintained to track the merged entries. This is the most accurate but also the most expensive method in practice. In a previous study Zhang et al. (2018), the authors suggested that locality-sensitive hashing can approximately preserve similarities of data while significantly reducing data dimensions. Frequency hashing Zhang et al. (2020) also keeps a separate map of hot items and carefully maps only hot items to distinct entries in the table. This ensures that hot items do not collide, while items that are accessed less frequently may in fact be mapped to the same entry.
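The hot/cold split of frequency hashing can be sketched in a few lines. This is a minimal illustration under our own assumptions (the slot layout, the helper name `build_frequency_hash`, and the toy counts are ours, not the scheme from Zhang et al. (2020)):

```python
def build_frequency_hash(counts, num_hot, table_size):
    """Sketch of frequency hashing: the `num_hot` most frequent items get
    dedicated, collision-free slots; the remaining (cold) items share the
    rest of the table via a modulo hash and may collide."""
    hot = [v for v, _ in sorted(counts.items(), key=lambda kv: -kv[1])[:num_hot]]
    hot_slot = {v: i for i, v in enumerate(hot)}   # hot item -> unique slot
    cold_slots = table_size - num_hot              # shared slots for cold items

    def slot(value):
        if value in hot_slot:
            return hot_slot[value]
        return num_hot + value % cold_slots        # cold items may collide

    return slot

# Toy access counts: items 10 and 20 are hot, 3 and 7 are cold.
counts = {10: 100, 20: 60, 3: 3, 7: 2}
slot = build_frequency_hash(counts, num_hot=2, table_size=4)
```

In this toy table the two hot items keep unique slots while the cold items fall into the shared region and can collide, which is exactly the trade-off the scheme accepts.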
Modulo hashing: This is the cheapest and simplest hash to implement. It performs modulo division based on the pre-defined size of the hash table: for hash size $P$, the hash function is simply input $\bmod P$. Though simple, it has the disadvantage that two completely different entities might collide.
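As a one-line sketch (the table size $P$ below is an illustrative value of ours):

```python
P = 1_000  # pre-defined hash table size (illustrative)

def modulo_hash(feature_id: int) -> int:
    # Maps any non-negative feature id into [0, P); cheap, but ids that
    # differ by a multiple of P collide in the same table entry.
    return feature_id % P

assert modulo_hash(42) == modulo_hash(42 + P) == 42  # unrelated ids collide
```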
Cryptographic hashing: This approach is a one-way cryptographic algorithm that maps an input of any size to a unique output of a fixed length of bits. A small change in the input drastically changes the output. Cryptographic hashing is a deterministic hashing mechanism.
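These properties are easy to see with Python's standard `hashlib`; the paper does not prescribe a specific algorithm, so SHA-256 here is just an example of ours:

```python
import hashlib

def crypto_hash(value: str) -> str:
    # One-way, deterministic, fixed 256-bit output regardless of input size.
    return hashlib.sha256(value.encode()).hexdigest()

a = crypto_hash("item_12345")
b = crypto_hash("item_12346")          # one-character change in the input
assert len(a) == len(b) == 64          # fixed-length hex digest (256 bits)
assert a != b                          # a small input change alters the digest
assert crypto_hash("item_12345") == a  # deterministic mapping
```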
### B.2 Statistical Analysis on Information Leakage After Hashing
In this section, we analyze whether the amount of randomization created by hashing has any effect on reducing data leakage. In the following, we report our analysis on the entropy of pre-hash and
Table 4: Entropy and mutual information analysis of pre-hash and post-hash embedding table indices.
<table><tr><td>Dataset</td><td>Table Name</td><td>Original Table Size</td><td>Post-Hash Table Size</td><td>Pre-Hash Entropy</td><td>Post-Hash Entropy</td><td>MI</td></tr><tr><td>Taobao</td><td>Brands</td><td>379,353</td><td>37,935</td><td>9.91</td><td>9.28</td><td>9.28</td></tr><tr><td>Taobao</td><td>Categories</td><td>12,124</td><td>1,212</td><td>6.19</td><td>5.72</td><td>5.72</td></tr><tr><td>Kaggle</td><td>C3</td><td>1,761,917</td><td>176,191</td><td>10.15</td><td>9.41</td><td>9.41</td></tr><tr><td>Kaggle</td><td>C18</td><td>4,836</td><td>483</td><td>5.92</td><td>5.27</td><td>5.27</td></tr><tr><td>Kaggle</td><td>C24</td><td>110,946</td><td>11,094</td><td>6.57</td><td>6.28</td><td>6.28</td></tr><tr><td>Criteo</td><td>C7</td><td>6,593</td><td>659</td><td>7.63</td><td>5.84</td><td>5.84</td></tr><tr><td>Criteo</td><td>C12</td><td>159,619</td><td>15,961</td><td>7.20</td><td>6.85</td><td>6.58</td></tr><tr><td>Criteo</td><td>C20</td><td>11,568,963</td><td>1,156,896</td><td>7.37</td><td>7.18</td><td>7.18</td></tr></table>
post-hash indices as well as the mutual information analysis. Given a discrete random variable $X$, with possible outcomes ${x}_{1},\ldots ,{x}_{n}$ which occur with probability $p\left( {x}_{1}\right) ,\ldots , p\left( {x}_{n}\right)$, the entropy is formally defined as Cover (1999):
$$
H\left( X\right) = - \mathop{\sum }\limits_{{i = 1}}^{n}p\left( {x}_{i}\right) \times \log \left( {p\left( {x}_{i}\right) }\right) \tag{7}
$$
The binary (base-2) logarithm gives the unit of bits (or "shannons"). Entropy is often used as a rough measure of unpredictability. In this part we measure the entropy of the input and output of the hash function. In our evaluation, we first estimate the probabilities in Eq. (7) from the frequency of each pre-hash outcome. We used the modulo hash function for compressing the values and measured the post-hash frequencies. Finally, applying Eq. (7), we obtain the amount of uncertainty in each of these values. As shown in Table 4, the pre-hash entropy of the brand table in the Taobao dataset is almost 10 bits. Even after reducing the table size 10x with hashing, the amount of information in the post-hash values is not reduced significantly. For the category table, the amount of information was 6 bits, and it remains nearly the same after a 10x reduction in table size. For Kaggle, we selected three embedding tables of different sizes: C3 is the largest embedding table with 1,761,917 entries; C18 represents the small tables with 4,836 entries; and C24 represents the moderate tables with 110,946 entries. As shown in the table, the entropy of the sparse features varies between 6 and 10 bits depending on the feature, and it is not reduced significantly in the post-hash values. Finally, we evaluate the Criteo dataset. Note that since this dataset is hashed differently, feature names differ from the Kaggle dataset. Here, C7 is the smallest table with 6,593 entries, while C12 and C20 are the average-size and largest embedding tables with 159,619 and 11,568,963 entries, respectively. The details about embedding table sizes are reported in Appendix A. An important observation is that the entropy of the indices is not reduced significantly after hashing, implying that the post-hash indices hold almost the same amount of information as the pre-hash indices.
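The measurement behind Table 4 reduces to a frequency count plus Eq. (7). A minimal sketch (the function name and the toy index traces are ours, not the datasets):

```python
import math
from collections import Counter

def entropy(samples) -> float:
    """Empirical Shannon entropy in bits (Eq. 7), with the probabilities
    p(x_i) estimated from observed frequencies."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# The Table 4 measurement amounts to comparing these two quantities.
pre_hash = [7, 7, 7, 7, 3, 3, 9, 1]      # toy pre-hash index trace
post_hash = [x % 5 for x in pre_hash]    # the same trace after a modulo hash
h_pre, h_post = entropy(pre_hash), entropy(post_hash)
# No two distinct ids collide in this toy trace, so the post-hash trace
# retains the full entropy of the pre-hash trace (h_post == h_pre).
```

In the real tables of Table 4, collisions do occur, but the skew of production sparse features keeps the post-hash entropy close to the pre-hash entropy.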
Mutual Information (MI) Analysis: In probability and information theory, the mutual information of two random variables is a measure of the mutual dependence between them. More specifically, it quantifies the "amount of information" obtained about one random variable by observing the other. The mutual information between two random variables $X$ and $Y$ is measured by Cover (1999):
$$
I\left( {X;Y}\right) = H\left( X\right) - H\left( {X \mid Y}\right) = H\left( Y\right) - H\left( {Y \mid X}\right) \tag{8}
$$
Many prior works used MI as a measure of privacy guarantee Cuff and Yu (2016); Kalantari et al. (2017); Liao et al. (2017); Guo et al. (2020); Mireshghallah et al. (2020). In our example, we compute the mutual information between the pre-hash indices (X) and the post-hash indices (Y). Based on Eq. (8), the mutual information between post-hash and pre-hash indices is equal to the entropy of the post-hash indices $\left( {H\left( Y\right) }\right)$ minus the conditional entropy of the post-hash indices given the pre-hash indices $\left( {H\left( {Y \mid X}\right) }\right)$. With deterministic hash functions, the post-hash index is fully determined by a given pre-hash index, so there is no ambiguity in the conditional entropy. Therefore $H\left( {Y \mid X}\right)$ in Eq. (8) is equal to zero and MI is equal to the entropy of the post-hash indices. Our empirical results in Table 4 validate this point. Based on this observation, the mutual information between the input and output of the hash is almost equal to the entropy of the hash input. This means that an adversary with unlimited computational power can recover almost all the information in the pre-hash indices just by observing the post-hash indices.
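The claim that a deterministic hash gives $I(X;Y) = H(Y)$ can be checked numerically. A small sketch of ours, using the equivalent identity $I(X;Y) = H(X) + H(Y) - H(X,Y)$ on a toy modulo hash:

```python
import math
from collections import Counter

def entropy_from_counts(counts) -> float:
    # Empirical Shannon entropy in bits from a Counter of outcomes.
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def mutual_information(pairs) -> float:
    """I(X;Y) = H(X) + H(Y) - H(X,Y), estimated from observed (x, y) pairs."""
    hx = entropy_from_counts(Counter(x for x, _ in pairs))
    hy = entropy_from_counts(Counter(y for _, y in pairs))
    hxy = entropy_from_counts(Counter(pairs))
    return hx + hy - hxy

# Deterministic hash: y is a function of x, so H(Y|X) = 0 and I(X;Y) = H(Y).
xs = [0, 1, 2, 3, 4, 5, 0, 1, 0]
pairs = [(x, x % 3) for x in xs]
h_y = entropy_from_counts(Counter(x % 3 for x in xs))
assert abs(mutual_information(pairs) - h_y) < 1e-9
```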
## C Frequency Based Attack: Kaggle and Criteo Datasets
In Table 5, we show the accuracy of this attack model for the Kaggle dataset. As demonstrated in this table, for small embedding tables (represented by C18), even a small sample of the prior distribution and of online queries observed by an attacker can lead to high inversion accuracy, while for large tables (represented by C3) more accurate distributions are needed. The evaluation for the Criteo dataset is reported in Table 6. In this dataset, C7 is the smallest table, C12 is the average-size table, and C20 is the largest embedding table (more details about embedding table sizes are reported in Appendix A). The Criteo dataset validates the same observation as the previous datasets.
Table 5: Accuracy of hash inversion for the frequency-based attack for Kaggle dataset.
<table><tr><td>Number of Samples used for Learning Distribution</td><td>Number of Samples for Evaluation</td><td>Feature</td><td>Top 1</td><td>Top 2</td><td>Top 3</td><td>Top 4</td><td>Top 5</td><td>Top 6</td><td>Top 7</td><td>Top 8</td><td>Top 9</td><td>Top 10</td></tr><tr><td>100,000</td><td>1,000</td><td>C3</td><td>0.55</td><td>0.55</td><td>0.55</td><td>0.55</td><td>0.55</td><td>0.55</td><td>0.55</td><td>0.55</td><td>0.55</td><td>0.55</td></tr><tr><td>100,000</td><td>1,000</td><td>C18</td><td>0.74</td><td>0.90</td><td>0.95</td><td>0.96</td><td>0.98</td><td>0.98</td><td>0.98</td><td>0.98</td><td>0.98</td><td>0.98</td></tr><tr><td>100,000</td><td>1,000</td><td>C24</td><td>0.87</td><td>0.92</td><td>0.92</td><td>0.92</td><td>0.93</td><td>0.93</td><td>0.93</td><td>0.93</td><td>0.93</td><td>0.93</td></tr><tr><td>1,000,000</td><td>10,000</td><td>C3</td><td>0.63</td><td>0.64</td><td>0.65</td><td>0.65</td><td>0.65</td><td>0.65</td><td>0.65</td><td>0.65</td><td>0.65</td><td>0.65</td></tr><tr><td>1,000,000</td><td>10,000</td><td>C18</td><td>0.75</td><td>0.89</td><td>0.94</td><td>0.96</td><td>0.98</td><td>0.98</td><td>0.98</td><td>0.99</td><td>0.99</td><td>0.99</td></tr><tr><td>1,000,000</td><td>10,000</td><td>C24</td><td>0.90</td><td>0.95</td><td>0.96</td><td>0.97</td><td>0.97</td><td>0.97</td><td>0.97</td><td>0.97</td><td>0.97</td><td>0.97</td></tr><tr><td>4,000,000</td><td>100,000</td><td>C3</td><td>0.68</td><td>0.71</td><td>0.71</td><td>0.72</td><td>0.72</td><td>0.73</td><td>0.73</td><td>0.73</td><td>0.74</td><td>0.74</td></tr><tr><td>4,000,000</td><td>100,000</td><td>C18</td><td>0.78</td><td>0.91</td><td>0.95</td><td>0.97</td><td>0.98</td><td>0.99</td><td>0.99</td><td>0.99</td><td>0.99</td><td>0.99</td></tr><tr><td>4,000,000</td><td>100,000</td><td>C24</td><td>0.91</td><td>0.95</td><td>0.97</td><td>0.97</td><td>0.98</td><td>0.98</td><td>0.98</td><td>0.98</td><td>0.98</td><td>0.98</td></tr></table>
Table 6: Accuracy of hash inversion for the frequency-based attack for Criteo dataset.
<table><tr><td>Number of Samples used for Learning Distribution</td><td>Number of Samples for Evaluation</td><td>Feature</td><td>Top 1</td><td>Top 2</td><td>Top 3</td><td>Top 4</td><td>Top 5</td><td>Top 6</td><td>Top 7</td><td>Top 8</td><td>Top 9</td><td>Top 10</td></tr><tr><td>3,000,000</td><td>200,000</td><td>C7</td><td>0.33</td><td>0.48</td><td>0.61</td><td>0.68</td><td>0.74</td><td>0.80</td><td>0.84</td><td>0.88</td><td>0.91</td><td>0.93</td></tr><tr><td>3,000,000</td><td>200,000</td><td>C12</td><td>0.89</td><td>0.96</td><td>0.98</td><td>0.99</td><td>0.99</td><td>0.99</td><td>0.99</td><td>0.99</td><td>0.99</td><td>0.99</td></tr><tr><td>3,000,000</td><td>200,000</td><td>C20</td><td>0.93</td><td>0.98</td><td>0.99</td><td>0.99</td><td>1</td><td>1</td><td>1</td><td>1</td><td>1</td><td>1</td></tr><tr><td>30,000,000</td><td>2,000,000</td><td>C7</td><td>0.33</td><td>0.48</td><td>0.58</td><td>0.65</td><td>0.73</td><td>0.80</td><td>0.85</td><td>0.88</td><td>0.92</td><td>0.93</td></tr><tr><td>30,000,000</td><td>2,000,000</td><td>C12</td><td>0.89</td><td>0.96</td><td>0.98</td><td>0.98</td><td>0.99</td><td>0.99</td><td>0.99</td><td>0.99</td><td>0.99</td><td>0.99</td></tr><tr><td>30,000,000</td><td>2,000,000</td><td>C20</td><td>0.85</td><td>0.88</td><td>0.91</td><td>0.94</td><td>0.96</td><td>0.98</td><td>0.99</td><td>0.99</td><td>0.99</td><td>0.99</td></tr><tr><td>400,000,000</td><td>4,000,000</td><td>C7</td><td>0.33</td><td>0.48</td><td>0.58</td><td>0.65</td><td>0.73</td><td>0.80</td><td>0.83</td><td>0.88</td><td>0.90</td><td>0.93</td></tr><tr><td>400,000,000</td><td>4,000,000</td><td>C12</td><td>0.89</td><td>0.96</td><td>0.98</td><td>0.98</td><td>0.99</td><td>0.99</td><td>0.99</td><td>0.99</td><td>0.99</td><td>0.99</td></tr><tr><td>400,000,000</td><td>4,000,000</td><td>C20</td><td>0.84</td><td>0.88</td><td>0.90</td><td>0.92</td><td>0.95</td><td>0.97</td><td>0.98</td><td>0.99</td><td>0.99</td><td>0.99</td></tr></table>
## D Is Private Hash a Solution?
Note that hash functions are currently used to reduce the sizes of embedding tables rather than for privacy purposes. But if a private hash function is employed, can it guarantee zero information leakage? In other words, if the hash is an arbitrary random mapping between inputs and outputs that the attacker does not know, can the attacker still recover the mapping just by observing the frequency of the accesses? To answer this question, we first use a simple greedy attack to demonstrate the leakage of information. Then we use a more sophisticated machine-learning-based optimization exploiting sequences of accesses to show how an attacker can achieve a high hash-inversion accuracy even when the hash function is unknown.
We first design a greedy attack that maps inputs to outputs by matching their frequencies, without any further information about the hash function. The only knowledge the attacker has is the prior distribution of pre-hash accesses and the observed post-hash accesses to the embedding table. We analyzed the category table of 12,000+ pre-hash entries and 1,200 post-hash entries ($P = 0.1N$). We randomly mapped each of the 12,000 inputs to an output, then launched the frequency-based attack without providing any information about this mapping to the attacker. This simple attack successfully recovered the correct mapping for 23% of the accesses. This analysis shows that although a private hash can reduce the amount of information leakage, it does not eliminate the leakage completely and is still susceptible to this type of attack. We now take a step further to show how this attack can achieve an even higher inversion accuracy.
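The greedy frequency-matching idea can be sketched as follows. This is a minimal illustration with hypothetical names and toy data; the paper does not spell out the exact matching rule, so the rank-based round-robin assignment below is an assumption, and accuracy is frequency-weighted to match the "fraction of accesses" reported above.

```python
def greedy_frequency_match(pre_hash_counts, post_hash_counts, true_mapping):
    """Guess the input->output mapping by aligning frequency ranks.

    pre_hash_counts:  dict raw_index -> prior access count (attacker's prior)
    post_hash_counts: dict hashed_index -> observed access count
    true_mapping:     dict raw_index -> hashed_index (ground truth, for scoring)
    """
    # Sort both sides by descending access frequency.
    pre_sorted = sorted(pre_hash_counts, key=pre_hash_counts.get, reverse=True)
    post_sorted = sorted(post_hash_counts, key=post_hash_counts.get, reverse=True)

    # Round-robin over the smaller output space by frequency rank
    # (an assumed heuristic, not the paper's exact procedure).
    guess = {raw: post_sorted[rank % len(post_sorted)]
             for rank, raw in enumerate(pre_sorted)}

    # Fraction of accesses (frequency-weighted) whose mapping is recovered.
    total = sum(pre_hash_counts.values())
    correct = sum(c for raw, c in pre_hash_counts.items()
                  if guess[raw] == true_mapping[raw])
    return correct / total
```

When the hash roughly preserves frequency ranks, as in the toy example below, this recovers the full mapping; on real tables the paper reports 23% of accesses.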
Evaluation Setup: As explained in the previous sections, the user shares their most recent behaviors with the recommendation system to receive accurate suggestions. In this section, we show that the combination of a user's past shopping behaviors within one query can help attackers launch more sophisticated attacks. Hence, for evaluating this attack we use the Taobao dataset, which provides these shopping behaviors. We evaluated both the Category and Brand tables, with more than 12K and 379K raw entries, respectively.
Attack Method: Assume that $N$ is the size of the input, $P$ is the size of the output, and the hash function $\mathbf{h}(\cdot)$ maps the input to the output. Thus, $\mathbf{h}\left\lbrack i\right\rbrack = j$ means that the hash function maps input index $i$ to output index $j$. We impose no assumptions on the hash function in this part. Assume that the joint distributions of the indices of the input and the output are given by the matrices $\mathbf{X} \in {\mathbb{R}}^{N \times N}$ and $\mathbf{Y} \in {\mathbb{R}}^{P \times P}$, respectively. This means that the probability of the pair $\left( {i}_{1},{i}_{2}\right)$ in the input is ${\mathbf{X}}_{{i}_{1},{i}_{2}}$ and the probability of $\left( {j}_{1},{j}_{2}\right)$ in the output is ${\mathbf{Y}}_{{j}_{1},{j}_{2}}$. Also assume that the matrix $\mathbf{B} \in {\mathbb{R}}^{P \times N}$ is the one-hot representation of the hash function $\mathbf{h}(\cdot)$, such that
$$
{\mathbf{B}}_{j, i} = \left\{ \begin{array}{ll} 1 & \mathbf{h}\left( i\right) = j \\ 0 & \text{otherwise} \end{array}\right. \tag{9}
$$
Using these notations, we can show that,
$$
\mathbf{Y} = \mathbf{B}\mathbf{X}{\mathbf{B}}^{T}. \tag{10}
$$
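The identity in Eq. (10) can be checked numerically for an arbitrary hash function. The NumPy sketch below is illustrative only (all names are ours): it builds the one-hot matrix $\mathbf{B}$ of Eq. (9), pushes a random joint distribution $\mathbf{X}$ through the hash directly, and compares against $\mathbf{B}\mathbf{X}\mathbf{B}^{T}$.

```python
import numpy as np

N, P = 6, 3
rng = np.random.default_rng(0)
h = rng.integers(0, P, size=N)          # an arbitrary hash h: [N] -> [P]

# One-hot matrix B (P x N) with B[j, i] = 1 iff h(i) = j, as in Eq. (9).
B = np.zeros((P, N))
B[h, np.arange(N)] = 1.0

# Any joint distribution X over input index pairs.
X = rng.random((N, N))
X /= X.sum()

# Push X through the hash directly ...
Y_direct = np.zeros((P, P))
for i1 in range(N):
    for i2 in range(N):
        Y_direct[h[i1], h[i2]] += X[i1, i2]

# ... and via the matrix identity of Eq. (10); the two must agree.
Y_matrix = B @ X @ B.T
assert np.allclose(Y_direct, Y_matrix)
```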
To prove this, note that
$$
{\mathbf{Y}}_{{i}_{1},{i}_{2}} = \mathop{\sum }\limits_{{{j}_{1},{j}_{2}}}{\mathbb{1}}_{\mathbf{h}\left( {j}_{1}\right) = {i}_{1}}\,{\mathbb{1}}_{\mathbf{h}\left( {j}_{2}\right) = {i}_{2}}\,{\mathbf{X}}_{{j}_{1},{j}_{2}} = \mathop{\sum }\limits_{{{j}_{1},{j}_{2}}}{\mathbf{B}}_{{i}_{1},{j}_{1}}{\mathbf{X}}_{{j}_{1},{j}_{2}}{\left( {\mathbf{B}}^{T}\right) }_{{j}_{2},{i}_{2}}, \tag{11}
$$
where ${\mathbb{1}}_{\mathcal{E}}$ is the indicator function of the event $\mathcal{E}$; therefore ${\mathbb{1}}_{\mathbf{h}\left( {j}_{1}\right) = {i}_{1}} = {\mathbf{B}}_{{i}_{1},{j}_{1}}$. Eq (11) yields (10). Now, to estimate $\mathbf{B}$, we would ideally solve the following optimization:
$$
\widehat{\mathbf{B}} = \arg \mathop{\min }\limits_{{\mathbf{B} \in \mathcal{B}}}{\begin{Vmatrix}\mathbf{Y} - \mathbf{B}\mathbf{X}{\mathbf{B}}^{T}\end{Vmatrix}}_{F}^{2}, \tag{12}
$$
where $\parallel \mathbf{X}{\parallel }_{F}^{2} = \mathop{\sum }\limits_{{i, j}}{\mathbf{X}}_{i, j}^{2}$ is the squared Frobenius norm and $\mathcal{B}$ is the space of all matrices $\mathbf{B}$ that represent a hash function. Optimization (12) is an integer program and NP-hard, due to the constraint in the minimization. To solve it approximately, we use Orthogonal Matching Pursuit (OMP) Tropp and Gilbert (2007). The idea behind OMP is to find one column of the matrix $\mathbf{B}$ in each iteration, such that the new column satisfies the constraint on $\mathbf{B}$ and reduces the loss function in (12) the most (compared to any other feasible column). In each iteration of our algorithm, we ensure that the matrix $\mathbf{B}$ can still represent a hash function. The matrix $\mathbf{B}$ can grow large with the embedding table size; since it is sparse, our implementation stores it in CSR format.
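A dense, brute-force sketch of this greedy column-selection idea is shown below. It is illustrative only: the real attack operates on large sparse matrices in CSR format, while this toy version re-evaluates the full loss of (12) for every candidate column, which is feasible only for tiny $N$ and $P$.

```python
import numpy as np

def greedy_hash_inversion(X, Y, P):
    """Build B one column at a time so that B X B^T approaches Y.

    Each iteration assigns one still-unassigned input index i to an
    output index j, choosing the (i, j) pair that lowers the residual
    ||Y - B X B^T||_F^2 the most. Every input ends up assigned exactly
    once, so the result always represents a hash function.
    """
    N = X.shape[0]
    B = np.zeros((P, N))
    unassigned = list(range(N))
    for _ in range(N):
        best = None
        for i in unassigned:
            for j in range(P):
                B_try = B.copy()
                B_try[j, i] = 1.0
                loss = np.linalg.norm(Y - B_try @ X @ B_try.T) ** 2
                if best is None or loss < best[0]:
                    best = (loss, i, j)
        _, i, j = best
        B[j, i] = 1.0
        unassigned.remove(i)
    return B
```

On a toy problem where $\mathbf{Y} = \mathbf{X}$ (the hash is the identity), the greedy procedure recovers the identity assignment.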
Evaluation Metric: Accuracy is the probability that the attacker correctly identifies a raw input value from its post-hash value. We use top-1 accuracy as defined in Eq (5).
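As a hedged illustration of the metric, top-k inversion accuracy for a frequency-based attack could be computed as below (hypothetical helper, not the paper's code): for each hashed index, the attacker's guess list is simply the k most frequent raw values that map to it under the known hash.

```python
def topk_inversion_accuracy(prior_counts, hash_fn, queries, k=1):
    """Fraction of queried raw values recovered within the top-k guesses.

    prior_counts: dict raw_value -> prior access count (attacker's prior)
    hash_fn:      the known hash function raw_value -> hashed index
    queries:      list of true raw values behind the observed accesses
    """
    # Group raw values by hash bucket, keeping their prior frequencies.
    buckets = {}
    for raw, cnt in prior_counts.items():
        buckets.setdefault(hash_fn(raw), []).append((cnt, raw))

    # Top-k most frequent raw values per bucket.
    topk = {h: [r for _, r in sorted(vals, reverse=True)[:k]]
            for h, vals in buckets.items()}

    hits = sum(1 for raw in queries if raw in topk[hash_fn(raw)])
    return hits / len(queries)
```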
Evaluation Result: To evaluate this attack, we measure the accuracy of hash inversion while varying the hash size. Figure 8 shows the hash-inversion accuracy of this optimization for the Taobao category table, with the hash table size ranging from 0.05 of the original table size ($P = 0.05N$) to 0.80. Accuracy increases over iterations until it saturates. For large hash sizes, $P = 0.8N$, accuracy reaches 94%, meaning this attack can recover the raw values from the hashed values for 94% of accesses. Since the embedding table for the Brand table is large, we used a Compressed Sparse Row (CSR) implementation to optimize the attacker's memory usage. This allowed us to run the same attack on the brand embedding table with 379,353 raw entries. Figure 9 shows how different hash sizes change the attacker's hash-inversion accuracy for the brand table. The key takeaway is that even an unknown private hash cannot eliminate the information leakage: an attacker can use this frequency-based machine learning optimization to recover the raw feature values with high accuracy.
## E Implications for Private Recommendation Systems
Our threat model is based on the common practices employed by the industry's recommendation systems. They are typically deployed in the cloud for inference serving Niu et al. (2020). In such a

Figure 8: Hash-inversion accuracy increases with more optimization iterations and larger hash sizes (Category table).

Figure 9: Hash-inversion accuracy increases with more optimization iterations and larger hash sizes (Brand table).
setting, a pre-trained model is hosted by a cloud server. The interaction history of each end user is kept in a user's local web browser or on a merchant's site where the merchant is precluded from sharing these data with other platforms without users' consent. This assumption is particularly important as it reflects the growing awareness in protecting personal data privacy.
There are various techniques that protect computations on cloud systems, including fully homomorphic encryption (FHE) Shmueli and Tassa (2017), multi-party computation (MPC) Goldreich (1998), and trusted execution environments (TEEs) Costan and Devadas (2016); Salter (2021). However, none of these techniques protect the privacy of memory access patterns. For example, while Intel SGX protects computational confidentiality and integrity, it has been shown to be vulnerable to side-channel attacks via memory access pattern leakage Wang et al. (2017). This paper shows that the information leakage through embedding table accesses may be used to extract private user information, suggesting that memory access patterns need to be protected if strong privacy protection is necessary for recommendation systems in the cloud.
Table 7 summarizes the attacks introduced in this paper. Each has a different goal. In all of them, an attacker launches the attack by exploiting and analyzing the access patterns they observe; in some, the attacker also uses prior knowledge gleaned from the distribution of the accesses. We also define a different metric to evaluate each attack. The high success rate of these attacks highlights the importance of access-pattern protection in cloud-based recommendation systems.
## F Related Work
The risk of information leakage in recommendation systems has been explored in prior work. However, most research in this area focused on other models (e.g., content filtering) or on dense features. Access-pattern privacy in recommendation systems is a new topic.
Table 7: Attack summary.
<table><tr><td>Attack</td><td>Goal</td><td>Assumption</td><td>Evaluation Metric</td></tr><tr><td>Identification</td><td>Finding the identity of users</td><td>Attacker observes accesses; has prior knowledge about distribution of accesses</td><td>K-anonymity</td></tr><tr><td>Sensitive Attribute</td><td>Extracting sensitive user features</td><td>Attacker observes accesses; has prior knowledge about distribution of accesses</td><td>Ambiguity</td></tr><tr><td>Re-Identification</td><td>Tracking users over time</td><td>Attacker observes accesses</td><td>Precision and Recall</td></tr><tr><td>Frequency-based attack</td><td>Finding users' raw feature values</td><td>Attacker observes accesses; has prior knowledge about distribution of accesses; knows hash function; does not know secret key for hash</td><td>Inversion Accuracy</td></tr><tr><td>OMP-based frequency attack for private hash</td><td>Finding users' raw feature values</td><td>Attacker observes accesses; has prior knowledge about distribution of accesses; no information about hash</td><td>Inversion Accuracy</td></tr></table>
Federated learning and Oblivious RAM schemes have shortcomings when it comes to DNN-based recommendation systems as we discuss here.
The study in Zhang et al. (2021) designed a membership inference attack against a recommendation system to infer the training data of a content filtering model. Abdelberi et al. used a statistical learning model to find a connection between users' interests and the demographic information that users are not willing to share Chaabane et al. (2012). Previous studies also investigated the risk of cross-system information exposure Chaum (1985); Sweeney (2002). For instance, a former Massachusetts Governor was identified in voter registration records by the combination of a zip code, a birth date, and gender; using this information, researchers were able to identify him in a supposedly anonymous medical record dataset Sweeney (2002). Most of the prior research in this domain focused on information leakage through dense features Akhtar and Mian (2018); Choquette-Choo et al. (2021); Li and Zhang (2021); Calandrino et al. (2011); Beigi and Liu (2020). There are also prior works investigating sparse feature leakage in other domains Ghinita et al. (2008); Aggarwal and Yu (2007); however, these leakages are through sparse feature values and not through embedding table accesses. Sparse features' information leakage through embedding table accesses was explored for NLP models Song and Raghunathan (2020); Aggarwal and Yu (2007); that attack aimed to disclose the embedding tables' input values based on their outputs, which differs from our threat model. Access pattern attacks are also investigated in database research Grubbs et al. (2019); Bindschaedler et al. (2017). However, those attacks and defense schemes are fundamentally different from the ones in recommendation systems: in database attacks, the goal is to recover the values of encrypted database entries based on range queries or correlations between rows.
Using federated learning to train centralized recommendation models has gained attention recently Yao et al. (2021); Yang et al. (2020). One of the problems of using federated learning for recommendation systems is the large size of embedding tables. These schemes usually use decomposition techniques such as tensor train to fit embedding tables on edge devices Oseledets (2011). However, because of the accuracy drop, the compression ratio cannot be high, which makes them incompatible with edge devices. TT-Rec mitigates the performance degradation of tensor decomposition by initializing the weight tensors from a Gaussian distribution Yin et al. (2021). Niu et al. proposed an FL framework to perform secure federated sub-model training Niu et al. (2020), employing Bloom filters, secure aggregation, and randomized response to protect users' private information. However, inference solutions are not discussed in these federated learning approaches. DeepRec Han et al. (2021) proposed an on-device recommendation model for RNNs, in which a global model is trained on public data available from before the GDPR; each device downloads this global model and re-trains the last layer with its own data. The problem with this model is that it depends on pre-GDPR public data, whereas new models come with new features that were not collected before. Thus, this scheme cannot be relied on for future models.
One approach to obfuscating the embedding table access pattern is to use Oblivious RAM (ORAM) Goldreich and Ostrovsky (1996); Stefanov et al. (2018); Ren et al. (2014). At a high level, for each read or write operation, the ORAM controller reads and writes not only the requested block but also many random blocks. In this way, ORAM hides the information about real blocks from the attacker. However, the overhead of ORAM is unlikely to be acceptable for real-time applications such as recommendation system inference due to Service Level Agreements (SLAs) Hazelwood et al. (2018). Even the most optimized versions of ORAM suffer from 8-10x performance overhead Raoufi et al. (2022). A previous study Rajat et al. (2021) optimizes ORAM for recommendation system training, but the scheme relies on a pre-determined sequence of accesses in training and is not applicable to inference. In our future work, we plan to investigate low-latency protection schemes for embedding table accesses.
NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/Z31SloFrp7/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
§ PRIVATE DATA LEAKAGE VIA EXPLOITING ACCESS PATTERNS OF SPARSE FEATURES IN DEEP LEARNING-BASED RECOMMENDATION SYSTEMS
Anonymous Author(s)
Affiliation
Address
email
§ ABSTRACT
Deep learning-based recommendation models use sparse and dense features of a user to predict an item that the user may like. Because these features carry the users' private information, service providers often protect their values by memory encryption (e.g., with hardware such as Intel's SGX). However, even with such protection, an attacker may still learn which entries of the sparse features are nonzero through the embedding table access pattern. In this work, we show that leaking only the positions of the sparse features' nonzero entries can be a serious threat to privacy. Using the embedding table access pattern, we show that it is possible to identify or re-identify a user, or to extract sensitive attributes of a user. We subsequently show that applying a hash function to anonymize the access pattern is not a solution, as it can be reverse-engineered in many cases.
§ 1 INTRODUCTION
Deep learning-based personalized recommendation models empower modern Internet services. These models exploit different types of information, including user attributes, user preferences, user behavior, social interactions, and other contextual information Erkin et al. (2010), to provide personalized recommendations relevant to a given user. They drive 35% of Amazon's revenue Gupta et al. (2020) and influence 80% of the videos streamed on Netflix Gomez-Uribe and Hunt (2015).
Figure 1: Left: DLRM. Right: an example of an embedding lookup.
Deep learning-based recommendation models use dense (continuous) and sparse (categorical) features of a user as input to a deep neural network to predict an item that the user may like (Figure 1, left). The features may include both static features that do not change frequently (e.g., age or gender) and dynamic features that change frequently (e.g., a user's recent behavior history). Both kinds can hold sensitive information and must be kept private. Private user features are often encrypted in memory, using hardware such as a trusted execution environment (TEE), e.g., Intel SGX team (2022). However, even when using hardware like a TEE, the information of which entries of the sparse features are nonzero can leak. This is because sparse features must be projected into a lower-dimensional space through an embedding table, where the indices of the nonzero entries are used to perform the embedding table lookup (Figure 1, right). In this paper, we show that this information leakage alone is a serious threat to privacy. We first show that it is possible to (1) identify a user, (2) extract sensitive attributes of a user, or (3) re-identify a user, by only looking at the embedding table access pattern, even when the data is fully encrypted. We subsequently show that applying a hash function to randomize the access pattern cannot be a general solution, by demonstrating a set of hash-inversion attacks. Specifically, we show that the following attacks are possible by only observing the embedding table access patterns in modern deep learning recommendation models:
* Identification attack. We demonstrate it is possible to identify a user by only observing the sparse features' embedding table access pattern.
* Sensitive attribute attack. We show it is possible to extract sensitive attributes of a user (e.g., demographics) from seemingly unrelated sparse features, such as dynamic user behavior history.
* Re-identification attack. We show it is possible to tell whether two queries are from the same user by only looking at seemingly innocuous sparse features, such as the user's recent purchase history.

* Hash inversion with frequency-based attack. We show that hiding the accesses using a hash cannot defend against these attacks, by demonstrating a hash-inversion attack based on access frequency. Our hash-inversion attack can invert sophisticated private hash functions as well as the simple hash functions mainly used by industry today.
§ 2 BACKGROUND AND THREAT MODEL
Deep learning-based recommendation models Zhou et al. (2018, 2019); Naumov et al. (2019); Ishkhanov et al. (2020); Cheng et al. (2016) use dense and sparse features of a user and an item to predict whether the user is likely to interact with the item (e.g., click an ad or purchase an item). Figure 1 shows the operation of a representative recommendation model, DLRM Naumov et al. (2019). In DLRM, the dense features go through a bottom MLP layer, while the sparse features go through an embedding table layer and are converted into lower-dimensional dense features. The two outputs then go through a feature interaction layer (e.g., a pairwise dot product) followed by a top MLP layer to predict the likelihood of an interaction. Other modern recommendation models work similarly Zhou et al. (2018, 2019); Ishkhanov et al. (2020); Cheng et al. (2016). Embedding tables convert a sparse feature into a dense representation by using the indices of the nonzero entries in the sparse feature to look up rows of a large table (Figure 1, right). Even when the entire dense and sparse features are fully encrypted and processed in a secure environment (e.g., using Intel SGX Costan and Devadas (2016), hardware that encrypts memory contents and protects computations), it is possible to learn which indices hold nonzero entries by observing the table access pattern.
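The lookup described above can be sketched in a few lines. This is illustrative NumPy code, not the DLRM implementation; the point is that the row indices themselves are the leaked signal.

```python
import numpy as np

# Minimal sketch of an embedding-table lookup (Figure 1, right).
# A sparse categorical feature is represented by the indices of its
# nonzero entries; those indices select rows of the table.
vocab_size, dim = 10, 4
rng = np.random.default_rng(0)
table = rng.standard_normal((vocab_size, dim))

nonzero_indices = [2, 7]          # e.g. two categories the user interacted with
vectors = table[nonzero_indices]  # the memory accesses an attacker observes
pooled = vectors.sum(axis=0)      # sum-pooled dense representation

# Even if `table` and `pooled` are encrypted, the row indices [2, 7]
# leak through the access pattern.
assert pooled.shape == (dim,)
```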
Threat Model We assume a scenario where users share their private features with the service provider to get recommendations from the model. We assume that the values of the dense and sparse features of a user are fully protected from the attacker, e.g., with Intel SGX team (2022), but the access pattern of the embedding table is revealed, essentially revealing which entries of the sparse features are nonzero. In the real world, an honest-but-curious service provider running model inference on Intel SGX falls into this category. Figure 2 summarizes our threat model.
Figure 2: Our threat model assumes only the access pattern to the embedding table is revealed.
§ 3 IDENTIFICATION ATTACK WITH STATIC USER FEATURES
A single user's inference request contains a series of sparse features, each of which in isolation carries limited user information. However, multiple sparse features together can form a distinctive fingerprint for personal identification. User profile attributes (e.g., gender, city) are usually static; in other words, they do not change, or change extremely rarely. We categorize this type of feature into two subcategories: identifiable features and unidentifiable features. Because of strict regulations in many domains, most recommendation systems do not collect or use identifiable features. The question is whether unidentifiable features such as age, gender, education,
Table 1: The number of users with anonymity level below K in the identification attack (out of 1.14 million users).
Anonymity level (K):    1    2    3    4    5    6    7    8    9    10
Users with level <= K: 56  154  256  380  480  606  739  867  984  1104
and shopping history can provide sufficient information to identify a user.
Evaluation Setup: To answer this question, we analyzed an open-source dataset released by Alibaba. This dataset contains static user features including user ID (1.14M), micro group ID (97), group ID (13), gender (2), age group (7), consumption grade/plevel (4), shopping depth (3), occupation/is college student (2), and city level (5). More details about the datasets are in Appendix A.
Attack Method In this set of features, the only directly identifying feature associated with a single user is the user ID. After removing the user ID, the collection of all other features allows 2.1 million possible combinations. Hence, after removing the user ID, a user may mistakenly think that he or she is anonymous, and that revealing any one of the other features to the attacker will not reveal their identity. However, based on the user profile information from more than 1 million users, we observe that in the real world only 1,120 combinations of these static feature values actually occur. We refer to these 1,120 combinations as user buckets. We plot the histogram of users across these 1,120 buckets in Figure 3; the x-axis indicates the bucket number ([1-1120]) and the y-axis shows the percentage of users per bucket. The histogram is quite illuminating in how the user distribution follows a long-tail pattern. In particular, there are only a few users in buckets 600 to 1120: there are only 989 users on average across these buckets, and the last 56 buckets have only 1 user each. Consequently, observing the full combination of seemingly innocuous features may allow an attacker to launch an identification attack and extract the unique user ID with very high certainty.
Figure 3: Percentage of users belonging to each user bucket.
Evaluation Metric: For our analysis, we used K-anonymity, a well-known property in information security and privacy. It describes a scenario in which, if a user's bucket number is revealed and there are K users in the same bucket, the probability of finding the user is $\frac{1}{K}$. For instance, 1-anonymity for a user means that this is the only user having this particular set of feature values.
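A small sketch of how the Table 1 counts could be tallied from a table of static features (hypothetical helper names; a "bucket" is a distinct combination of feature values, and the count for K is the number of users sitting in buckets of size at most K):

```python
from collections import Counter

def k_anonymity_counts(user_features, k_max=10):
    """For each K in 1..k_max, count users whose static-feature bucket
    contains at most K users (i.e. anonymity level <= K)."""
    bucket_sizes = Counter(tuple(f) for f in user_features)
    return {k: sum(size for size in bucket_sizes.values() if size <= k)
            for k in range(1, k_max + 1)}
```

For example, with buckets of sizes 2, 1, and 3, exactly one user is 1-anonymous, three users have anonymity level at most 2, and all six have level at most 3.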
Evaluation Result: As shown in Table 1, 56 of the user buckets contain only one user with the specific combination of static features, which implies that an attacker can identify these users with 1-anonymity if they observe this combination of feature values. Moreover, for more than 1,000 users, the anonymity level is 10 or below.
§ 4 SENSITIVE ATTRIBUTE ATTACK BY DYNAMIC USER FEATURES
In this section, the question is: when a user removes the static features, can sensitive attributes still leak through other, nonsensitive features? For instance, a user may provide no age information, believing that withholding static features protects more of their private data. However, we demonstrate that even when a user hides their sensitive static features, adversaries are still able to extract the sensitive attributes through cross-correlations with user-item interaction data. Evaluation Setup: For evaluation, we use dynamic sparse features that include user-item interactions Zhao et al. (2019) in the Alibaba Ads Display dataset. This dataset contains 723,268,134 tuples collected over three weeks. Each tuple includes a user ID (1.14M), a btag (4: browse, cart, favor, buy), a category ID (12K), and a brand (379K).
Figure 4: Different brands are popular among different customer age groups.
Figure 5: Using the accessed brands, ambiguity about A) user buckets (defined in previous section), B) user age groups, and C) user gender groups.
Attack Method: Figure 4 depicts an example of how different brands are accessed by different user groups. The user-item interactions are depicted as graphs where each edge weight represents the fraction of the total interactions with that specific item coming from the corresponding age group. In real-world datasets, there are certain brands that users from just a single age group interact with, in this example Legoland. A user who wants to protect their age group may not provide their age, but the adversary may deduce it with high probability if the user interacted with Legoland. While this simple illustration highlights an extreme case (only one age group interacting with an item), the approach generalizes: the attacker uses prior knowledge of the popularity of items among different demographic groups and links the query to the demographic group that formed most of the accesses to that item.
Evaluation Metric: In this part, we employ a metric called ambiguity to measure the likelihood that an adversary fails to predict a user's static sparse feature by only viewing their interactions with items. We define ambiguity for each item $i$ as $\text{ambiguity}_i = 100\% - \max(\text{frequency}_i)$, where $\text{frequency}_i$ is the distribution vector of all accesses to brand $i$ by different user groups. Using Figure 4 as an example, $\text{frequency}_{\text{Apple}} = [0, 0, 20\%, 50\%, 30\%, 0, 0]$ and as a result $\text{ambiguity}_{\text{Apple}} = 50\%$, meaning that if a user has interacted with Apple, the attacker can predict the static feature (age group) successfully for $50\%$ of the users. With this definition, $\text{ambiguity}_i = 0$ indicates that if a user has interacted with item $i$, the attacker can always determine the user's sparse feature.
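The ambiguity metric is straightforward to compute. A minimal sketch follows (hypothetical function name; the first access-count vector reproduces the Apple example from Figure 4):

```python
def ambiguity(access_counts):
    """ambiguity_i = 100% - max(frequency_i), where frequency_i is the
    distribution of accesses to item i over the user groups."""
    total = sum(access_counts)
    freqs = [c / total for c in access_counts]
    return 100.0 * (1.0 - max(freqs))

# The Apple example from Figure 4: access shares per age group.
assert ambiguity([0, 0, 20, 50, 30, 0, 0]) == 50.0
# An item accessed by a single group has zero ambiguity:
assert ambiguity([0, 100, 0]) == 0.0
```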
Evaluation Result: As shown in Figure 5, we quantify the ambiguity of predicting a user's sparse feature, such as age and gender, using their item (brand) interaction history alone. The x-axis shows the percentage of ambiguity, where a value of 0 indicates that there is no ambiguity and the brand is always accessed by only one user bucket; higher values indicate more ambiguity, so brands with higher values on the x-axis are popular across multiple user buckets. We plot both the probability density function (PDF) and the cumulative distribution function (CDF) of the ambiguity of different brands. In Figure 5(A), we observe that more than 17% of brands are accessed by only 1 user bucket (the leftmost tall bar of the PDF), meaning the attacker can determine the user bucket from interactions with those brands. As the CDF curve in Figure 5(A) shows, for 38% of the brands, the attacker can predict the user bucket with a success rate greater than 50%. We present the corresponding results for age and gender groups in Figure 5(B) and Figure 5(C), respectively.
|
| 87 |
+
|
| 88 |
+
§ 5 RE-IDENTIFICATION ATTACK
|
| 89 |
+
|
| 90 |
+
In a re-identification attack, the goal of an attacker is to identify the same user over time just by observing their interaction history. Studies have shown that the majority of users prefer not to be tracked, even anonymously Teltzrow and Kobsa (2004). In this section, we first study whether the purchase history of a user can serve as a tracking identifier, i.e., whether the purchase history is unique to each user. Second, we study whether an attacker can re-identify the same user across queries sent over time by tracking only their purchase history, with no access to the static sparse features. Evaluation Setup: For evaluation we used the Taobao dataset, which has more than 723 million user-item interactions. From these, we separated about 9 million purchase interactions. We then pre-processed and formatted the data into the time-series structure (user history data structure) shown below:
|
| 91 |
+
|
| 92 |
+
$$
|
| 93 |
+
{\text{ user }}_{1} : \left( {{\text{ time }}_{1},{\text{ item }}_{1}}\right) ,\left( {{\text{ time }}_{4},{\text{ item }}_{10}}\right) ,\left( {{\text{ time }}_{500},{\text{ item }}_{20}}\right)
|
| 94 |
+
$$
|
| 95 |
+
|
| 96 |
+
$$
|
| 97 |
+
{\text{ user }}_{2} : \left( {{\text{ time }}_{3},{\text{ item }}_{100}}\right) ,\left( {{\text{ time }}_{20},{\text{ item }}_{100}}\right)
|
| 98 |
+
$$
|
| 99 |
+
|
| 100 |
+
$\vdots$
|
| 101 |
+
|
| 102 |
+
$$
{\text{ user }}_{X} : \left( {{\text{ time }}_{5},{\text{ item }}_{75}}\right) ,\left( {{\text{ time }}_{20},{\text{ item }}_{50}}\right) ,\left( {{\text{ time }}_{100},{\text{ item }}_{75}}\right) ,\left( {{\text{ time }}_{400},{\text{ item }}_{1}}\right) ,\left( {{\text{ time }}_{420},{\text{ item }}_{10}}\right)
$$
|
| 109 |
+
|
| 110 |
+
Second, for each set of consecutive items purchased by any user, we create a list of the users who have the same set of consecutive purchases in exactly that order. We refer to these sets of consecutive recent purchases as keys. Multiple users may have the same key in their history; that is why each key keeps a list of all the users that generated it, together with the duration of time they held it. An example of the recent item purchase history, considering the two most recent purchases, is shown below. Each key consists of a pair of items. For instance, the first line shows that item 1 and item 10 were the most recent purchases of user 1 from time 4 to time 500.
|
| 111 |
+
|
| 112 |
+
key : list of values
|
| 113 |
+
|
| 114 |
+
$$
\left\lbrack {{\text{ item }}_{1},{\text{ item }}_{10}}\right\rbrack : \left\lbrack {{\text{ user }}_{1},{\text{ time }}_{4},{\text{ time }}_{500}}\right\rbrack ,\left\lbrack {{\text{ user }}_{X},{\text{ time }}_{420},\text{ Current }}\right\rbrack
$$
|
| 121 |
+
|
| 122 |
+
$$
|
| 123 |
+
\left\lbrack {{ite}{m}_{10},{ite}{m}_{20}}\right\rbrack : \left\lbrack {{use}{r}_{1},{tim}{e}_{1000},{Current}}\right\rbrack
|
| 124 |
+
$$
|
| 125 |
+
|
| 126 |
+
$$
|
| 127 |
+
\left\lbrack {{ite}{m}_{100},{ite}{m}_{100}}\right\rbrack : \left\lbrack {{use}{r}_{2},{tim}{e}_{20},{Current}}\right\rbrack
|
| 128 |
+
$$
|
| 129 |
+
|
| 130 |
+
$\vdots$
|
| 131 |
+
|
| 132 |
+
$$
|
| 133 |
+
\left\lbrack {{\text{ item }}_{75},{\text{ item }}_{50}}\right\rbrack : \left\lbrack {{\text{ user }}_{X},{\text{ time }}_{20},{\text{ time }}_{100}}\right\rbrack
|
| 134 |
+
$$
|
| 135 |
+
|
| 136 |
+
$$
|
| 137 |
+
\left\lbrack {{\text{ item }}_{50},{\text{ item }}_{75}}\right\rbrack : \left\lbrack {{\text{ user }}_{X},{\text{ time }}_{100},{\text{ time }}_{400}}\right\rbrack
|
| 138 |
+
$$
|
| 139 |
+
|
| 140 |
+
$$
|
| 141 |
+
\left\lbrack {{\text{ item }}_{75},{\text{ item }}_{1}}\right\rbrack : \left\lbrack {{\text{ user }}_{X},{\text{ time }}_{400},{\text{ time }}_{420}}\right\rbrack
|
| 142 |
+
$$
|
| 143 |
+
|
| 144 |
+
The goal of this attack is to use only the $m$ ($m = 2$ in the example above) most recent purchases of a user to track the user across different interaction sessions, which are separated by timestamps. To evaluate this attack:
|
| 145 |
+
|
| 146 |
+
1. We randomly select a timestamp and a user.
|
| 147 |
+
|
| 148 |
+
2. For the selected user, we check the $m$ most recent purchases of the user at the selected timestamp and form a key $=$ [recent purchase 1, recent purchase 2, ..., recent purchase $m$].
|
| 149 |
+
|
| 150 |
+
3. We look up this key in the recent item purchase history dataset. If the same sequence of the $m$ most recent items appears for another user in the same time window, these recent purchases are not unique to that specific user at that time and cannot be used as a fingerprint of a single user.
|
| 151 |
+
|
| 152 |
+
4. On the other hand, if the $m$-item purchase history belongs only to that specific user, we extract the duration of time during which this key formed the most recent purchases of the user.
|
| 153 |
+
|
| 154 |
+
5. This experiment is repeated for many random timestamps and users to obtain 200,000 samples. As depicted in Figure 6(A), we observe that even the two most recent purchases can serve as a unique identifier for $98\%$ of our samples. In other words, at a random point in time, the two most recent purchases of a user are unique for $98\%$ of randomly selected users. We found that the three, four, and five most recent purchases uniquely identify users with $99\%$ probability.
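The key-extraction and uniqueness check from the steps above can be sketched as follows; the data structures and names are illustrative, assuming each history is a time-sorted list of (timestamp, item) pairs as in the examples earlier:

```python
def recent_key(history, t, m):
    """Tuple of the m most recent items purchased at or before time t
    (oldest first); None if the user has fewer than m purchases by t."""
    items = [item for (time, item) in history if time <= t]
    return tuple(items[-m:]) if len(items) >= m else None

def is_unique(histories, user, t, m):
    """Step 3: do the user's m most recent purchases at time t form a key
    that no other user holds at the same time?"""
    key = recent_key(histories[user], t, m)
    if key is None:
        return None
    return all(recent_key(h, t, m) != key
               for u, h in histories.items() if u != user)

# Illustrative histories mirroring the example structure above.
histories = {
    "user1": [(1, "item1"), (4, "item10"), (500, "item20")],
    "user2": [(3, "item100"), (20, "item100")],
}
print(is_unique(histories, "user1", 450, 2))  # True: (item1, item10) is unique
```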
|
| 155 |
+
|
| 156 |
+
Attack Method: The most recent items purchased by a user usually do not change with high frequency. For the period of time during which these recent purchases remain the same, every query sent by the user carries the same list of recent purchases, and the attacker can exploit this to launch the attack. To do so, the attacker first selects a time threshold, which is used to decide whether two queries come from the same user: if two distinct queries received by the cloud have the same most recent purchases and the time difference between receiving them is less than the time threshold, the attacker predicts that they come from the same user. Otherwise, the queries are assumed to come from two different users.
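The linking rule can be sketched as a minimal model; the query representation (a timestamp plus the tuple of recent purchases) and the function name are our illustrative assumptions:

```python
def same_user(query_a, query_b, time_threshold):
    """Attacker's linking rule: predict 'same user' iff two observed queries
    carry the same m most recent purchases and arrive within the threshold.
    A query is modeled as (timestamp, tuple_of_recent_purchases)."""
    (t_a, key_a), (t_b, key_b) = query_a, query_b
    return key_a == key_b and abs(t_a - t_b) <= time_threshold

q1 = (100, ("item1", "item10"))
q2 = (160, ("item1", "item10"))
print(same_user(q1, q2, time_threshold=120))  # True
print(same_user(q1, q2, time_threshold=30))   # False
```

A larger threshold links more true query pairs (higher recall) but also matches coincidentally repeated keys from other users (lower precision), which is exactly the trade-off evaluated below.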
|
| 157 |
+
|
| 158 |
+
Evaluation Metric: To measure the accuracy of this attack, we use precision and recall, defined in Buckland and Gey (1994) as shown in Eq. (1).
|
| 159 |
+
|
| 160 |
+
$$
|
| 161 |
+
\text{ Precision } = \frac{TP}{\left( TP + FP\right) },\;\text{ Recall } = \frac{TP}{\left( TP + FN\right) }, \tag{1}
|
| 162 |
+
$$
|
| 163 |
+
|
| 164 |
+
where TP stands for True Positives, FP for False Positives, and FN for False Negatives. Precision indicates what percentage of positive predictions are correct, and recall indicates what percentage of actual positives are detected.
|
| 165 |
+
|
| 166 |
+
Evaluation Result: To evaluate the precision/recall tradeoff, we start from a very small time threshold and increase it gradually. As expected, with low time thresholds, precision is high with few false positives. But as the attacker increases the time threshold and identifies more of the actual positives (higher recall), the false positives increase as well, which reduces precision. The reason for having more false positives with a large threshold is that, over a longer period of time, other users may generate the same key. Table 2 shows that when the 2 most recent purchases are used, there are around 4.5 million keys, but these keys occur around 8 million times in total. This means that for a fraction of the keys, the same keys are generated by different users at different
|
| 167 |
+
|
| 168 |
+
|
| 169 |
+
|
| 170 |
+
Figure 6: A) Uniqueness of most recent purchases of users. B and C) Precision/recall trade-off based on different time threshold values.
|
| 171 |
+
|
| 172 |
+
times. These repeated keys are the source of false positives in our experiments. The choice of the right threshold depends on whether the attacker prefers higher recall or higher precision. Figure 6 shows this trade-off for different time threshold values. We gradually increase the time threshold from 1 second to 277 hours (11.5 days). As shown in this figure, by increasing the time threshold to 11 days, recall reaches 1.0 while precision drops by almost 0.02. This means the attacker can correctly link all the queries that come from the same users, at the cost of mis-predicting $2\%$ of the queries that do not come from the same user but happen to generate the same key at some point in their purchase history. These high precision and recall values indicate how an attacker can track users who send queries to the recommendation model over time.
|
| 173 |
+
|
| 174 |
+
Table 2: Re-identification attack statistics about the number of keys and repeated keys.
|
| 175 |
+
|
| 176 |
+
| Number of recent purchases | Number of users | Number of keys | Total occurrences of keys |
| --- | --- | --- | --- |
| 2 | 898,803 | 4,476,760 | 8,114,860 |
| 3 | 799,475 | 5,679,087 | 7,216,057 |
| 4 | 705,888 | 5,587,578 | 6,416,582 |
| 5 | 620,029 | 5,197,043 | 5,710,694 |
|
| 196 |
+
|
| 197 |
+
§ 6 HASH INVERSION WITH FREQUENCY-BASED ATTACK
|
| 198 |
+
|
| 199 |
+
Applying a hash to the indices before the embedding table lookup is an important performance optimization (more details about the data pipeline in production-scale recommendation systems and different hashing schemes can be found in Appendix B). Here, we analyze how hashing impacts information leakage. This section studies how an attacker can recover the raw values of sparse features even when hashing is used for embedding indices. Through a hash function, users' raw data are remapped to post-hash values for indexing the embedding tables, as shown in Fig. 7.
|
| 200 |
+
|
| 201 |
+
|
| 202 |
+
|
| 203 |
+
Figure 7: The frequency-based attack tries to reverse-engineer the hash based on the observed frequencies.
|
| 204 |
+
|
| 205 |
+
Evaluation Setup: For evaluation, we used the Taobao, Kaggle, and Criteo datasets. For each dataset we selected two disjoint random sets: a training set and a test set. The training samples form the prior distribution and the test samples are used for the evaluation.
|
| 206 |
+
|
| 207 |
+
Attack Method: An adversary can launch attacks by collecting the frequency of observed indices,
|
| 208 |
+
|
| 209 |
+
Table 3: Accuracy of hash inversion for the frequency-based attack for Taobao dataset.
|
| 210 |
+
|
| 211 |
+
| Number of Samples used for Learning Distribution | Number of Samples for Evaluation | Top 1 | Top 2 | Top 3 | Top 4 | Top 5 | Top 6 | Top 7 | Top 8 | Top 9 | Top 10 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1,000,000 | 1,000 | 0.64 | 0.76 | 0.83 | 0.87 | 0.89 | 0.90 | 0.91 | 0.92 | 0.93 | 0.94 |
| 1,000,000 | 100,000 | 0.61 | 0.75 | 0.82 | 0.86 | 0.88 | 0.90 | 0.92 | 0.92 | 0.93 | 0.93 |
| 2,000,000 | 100,000 | 0.62 | 0.76 | 0.82 | 0.86 | 0.89 | 0.91 | 0.92 | 0.93 | 0.93 | 0.94 |
| 2,000,000 | 1,000,000 | 0.62 | 0.76 | 0.82 | 0.86 | 0.89 | 0.91 | 0.92 | 0.93 | 0.93 | 0.94 |
|
| 228 |
+
|
| 229 |
+
use prior knowledge about the distribution of feature values, and find the mapping between the input and output of the hash. Here we show how an attacker can compromise a system with hashed input values where the hash function is output $= \left( {\text{ input } + {\text{ mask }}_{\text{ add }}}\right) {\;\operatorname{mod}\;P}$ and $P$ is the hash size. We denote the frequency of the possible inputs to the hash function by ${x}_{1},{x}_{2},\ldots ,{x}_{N}$ for $N$ possible values, and the output frequency by ${y}_{1},{y}_{2},\ldots ,{y}_{P}$ for a hash of size $P$. We form the matrix $M \in {\mathbb{R}}^{P \times P}$ in which each column represents a different value of the mask (in $\left\lbrack {0,P - 1}\right\rbrack$): for each value of the mask, we compute the frequency of outcomes and store it as a column. Increasing the value of the mask by 1 shifts the column values; hence the matrix $M$ is a Toeplitz matrix. Since a single column is shifted and repeated, forming this matrix takes $O\left( P\right)$ time.
|
| 230 |
+
|
| 231 |
+
$$
|
| 232 |
+
\mathbf{M} = {\left\lbrack \begin{matrix} {y}_{1} & {y}_{P - 1} & \cdots & {y}_{2} \\ {y}_{2} & {y}_{1} & \cdots & {y}_{3} \\ \vdots & \vdots & \ddots & \vdots \\ {y}_{P} & {y}_{P - 2} & \cdots & {y}_{1} \end{matrix}\right\rbrack }_{P \times P} \tag{2}
|
| 233 |
+
$$
|
| 234 |
+
|
| 235 |
+
The attacker's goal is to invert the hash using the input distribution and its observation of the output distribution. Note that the input dataset and the output dataset should be independent. We define ${\mathbf{a}}_{t}$ as the distribution of embedding table accesses (post-hash) at time $t$. To reverse-engineer the mask, the attacker has to find out which mask is used by the hash function, i.e., solve the optimization problem in Eq. (3).
|
| 236 |
+
|
| 237 |
+
$$
|
| 238 |
+
\mathop{\min }\limits_{i}{\begin{Vmatrix}\left( {\mathbf{m}}_{i} - {\mathbf{a}}_{t}\right) \end{Vmatrix}}^{2} = \mathop{\min }\limits_{i}\left( {{\begin{Vmatrix}{\mathbf{m}}_{i}\end{Vmatrix}}^{2} + {\begin{Vmatrix}{\mathbf{a}}_{t}\end{Vmatrix}}^{2} - 2{\mathbf{m}}_{i}^{\top }{\mathbf{a}}_{t}}\right) \tag{3}
|
| 239 |
+
$$
|
| 240 |
+
|
| 241 |
+
In Eq. (3), ${\mathbf{m}}_{i}$ represents the vector containing the frequencies of output values when mask $i$ is used, so its norm $\begin{Vmatrix}{\mathbf{m}}_{i}\end{Vmatrix}$ is a constant independent of $i$; the same holds for $\begin{Vmatrix}{\mathbf{a}}_{t}\end{Vmatrix}$. As a result, the optimization problem can be simplified to Eq. (4).
|
| 242 |
+
|
| 243 |
+
$$
|
| 244 |
+
\bar{P} = \underset{i}{\arg \max }\left( {{\mathbf{m}}_{\mathbf{i}}^{\top }{\mathbf{a}}_{t}}\right) \;\text{ for }\;i \in \left\lbrack {0,P - 1}\right\rbrack \Rightarrow \bar{P} = \underset{i}{\arg \max }\left( {{\mathbf{M}}^{\top }{\mathbf{a}}_{t}}\right) \tag{4}
|
| 245 |
+
$$
|
| 246 |
+
|
| 247 |
+
The cost of computing such a matrix-vector product is $O\left( {P}^{2}\right)$ in general. However, because $\mathbf{M}$ is a Toeplitz matrix, this matrix-vector computation can be done with time complexity $O\left( {P\log P}\right)$ Strang (1986). To implement this attack, we created two disjoint sets: the first is used to extract the known distribution, and the second is used for frequency matching and evaluating the frequency-based attack. First, the attacker tries to reverse-engineer the hash function and find the key based on frequency matching; using the method described above, the attacker was able to recover the key. Next, the attacker tries to invert the post-hash indices and recover the values of the raw sparse features: having found the key of the hash, the attacker maps each post-hash value back to the most frequent pre-hash values according to the input distribution.
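Under the stated assumption that the hash is $(\text{input} + \text{mask}) \bmod P$, the frequency-matching step of Eq. (4) can be sketched with NumPy; a single FFT-based circular correlation evaluates all $P$ inner products ${\mathbf{m}}_{i}^{\top }{\mathbf{a}}_{t}$ at once in $O(P\log P)$. The prior distribution below is illustrative:

```python
import numpy as np

def recover_mask(prior, observed):
    """Recover the additive mask of h(x) = (x + mask) mod P by maximizing
    m_i^T a_t over all P candidate masks (Eq. 4). Because M is circulant
    (Toeplitz), all P inner products reduce to one circular correlation,
    computable via FFT in O(P log P) instead of O(P^2)."""
    scores = np.fft.ifft(np.conj(np.fft.fft(prior)) * np.fft.fft(observed)).real
    return int(np.argmax(scores))

P = 7
prior = np.array([0.40, 0.20, 0.15, 0.10, 0.08, 0.05, 0.02])  # pre-hash frequencies
observed = np.roll(prior, 3)  # post-hash frequencies the attacker observes (mask = 3)
print(recover_mask(prior, observed))  # 3
```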
|
| 248 |
+
|
| 249 |
+
Evaluation Metric: Accuracy in this case is the probability that the attacker correctly identifies the raw input value from the post-hash value. Let the function $g\left( y\right)$ be the attacker's estimate of the input given the output query $y$: $g\left( y\right) = \arg \mathop{\max }\limits_{x}\operatorname{Prob}\left( x\right)$ s.t. $\widehat{h}\left( x\right) = y$, where $\widehat{h}\left( x\right)$ is the attacker's estimate of the hash function. Using this definition, accuracy is defined as:
|
| 250 |
+
|
| 251 |
+
$$
|
| 252 |
+
\text{ Accuracy } = {\operatorname{Prob}}_{x \sim {\mathcal{P}}_{X}}\left( {x = g\left( {h\left( x\right) }\right) }\right) , \tag{5}
|
| 253 |
+
$$
|
| 254 |
+
|
| 255 |
+
where $h\left( x\right)$ is the true hash function, and the probability is over the distribution of the input query. We also use the notion of top-$K$ accuracy in this section: the probability of the input query being among the attacker's top guesses. To formally define this, we first denote the set $\widehat{\mathcal{S}}\left( y\right) = \{ x \mid \widehat{h}\left( x\right) = y\}$, the set of all possible inputs given an output query $y$, based on the attacker's estimate of the hash function. We then define ${g}_{K}\left( y\right)$ as the $K$ members of $\widehat{\mathcal{S}}\left( y\right)$ with the largest probability, ${g}_{K}\left( y\right) = \{ x \in \widehat{\mathcal{S}}\left( y\right) \mid \operatorname{Prob}\left( x\right)$ is among the top $K$ probabilities $\}$. That is, ${g}_{K}\left( y\right)$ is the set of the attacker's top $K$ guesses for the input query. We can now use ${g}_{K}\left( y\right)$ to formally define the top-$K$ accuracy,
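The attacker's top-$K$ guess set can be sketched as follows; the prior distribution, mask value, and domain sizes are illustrative, assuming the raw-value domain ($N = 10$) is larger than the hash size ($P = 5$) so each post-hash value has multiple preimages:

```python
def top_k_guesses(y, prior, hash_fn, k):
    """Attacker's top-k guesses g_K(y) for the raw input, given post-hash
    value y: the k most probable inputs (under the prior) that hash to y."""
    candidates = [x for x in prior if hash_fn(x) == y]
    return sorted(candidates, key=lambda x: prior[x], reverse=True)[:k]

P = 5  # hash size; raw values live in [0, 10)
prior = {0: 0.02, 1: 0.03, 2: 0.05, 3: 0.20, 4: 0.05,
         5: 0.05, 6: 0.10, 7: 0.15, 8: 0.25, 9: 0.10}
h = lambda x: (x + 2) % P  # (input + mask) mod P with mask = 2

# Preimages of y = h(3) = 0 are {3, 8}; ranked by prior: 8 (0.25), 3 (0.20).
print(top_k_guesses(h(3), prior, h, k=2))  # [8, 3]
```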
|
| 256 |
+
|
| 257 |
+
$$
|
| 258 |
+
{\text{ Accuracy }}_{\text{ top }K} = {\operatorname{Prob}}_{x \sim {\mathcal{P}}_{X}}\left( {x \in {g}_{K}\left( {h\left( x\right) }\right) }\right) , \tag{6}
|
| 259 |
+
$$
|
| 260 |
+
|
| 261 |
+
where $h\left( x\right)$ is the true hash function, and the probability is over the distribution of the input query. Evaluation Result: As shown in Table 3, we vary the number of interactions in the test sets to measure the accuracy of hash inversion; the attacker achieves up to 0.94 top-10 accuracy on the Taobao dataset. Results on the Kaggle and Criteo datasets are reported in Appendix C. The key observation is that, by observing the frequency of queries, an attacker who knows the distribution of the pre-hash values and the type of hash function can reconstruct the values of raw features with high accuracy. We also extend this attack to a general attack on more complex hash functions using OMP; the details of this machine-learning-based attack are explained in Appendix D. In Appendix B we discuss why none of the current solutions addresses all of these issues.
|
| 262 |
+
|
| 263 |
+
§ 7 CONCLUSION
|
| 264 |
+
|
| 265 |
+
In this work, we shed light on the information leakage through sparse features in deep learning-based recommendation systems. Our work pivots the focus of prior investigations from dense feature protection to the unprotected access patterns of sparse features. The new insight from this work demonstrates that even access patterns can be a serious threat to privacy.
|
NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/_-gZhHVnI3e/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,374 @@
| 1 |
+
# Certified Training: Small Boxes are All You Need
|
| 2 |
+
|
| 3 |
+
Anonymous Author(s)
|
| 4 |
+
|
| 5 |
+
Affiliation
|
| 6 |
+
|
| 7 |
+
Address
|
| 8 |
+
|
| 9 |
+
email
|
| 10 |
+
|
| 11 |
+
## Abstract
|
| 12 |
+
|
| 13 |
+
We propose the novel certified training method, SABR, which outperforms existing methods across perturbation magnitudes on MNIST, CIFAR-10, and TINY-IMAGENET, in terms of both standard and certifiable accuracies. The key insight behind SABR is that propagating interval bounds for a small but carefully selected subset of the adversarial input region is sufficient to approximate the worst-case loss over the whole region while significantly reducing approximation errors. SABR not only establishes a new state-of-the-art in all commonly used benchmarks but, more importantly, points to a new class of certified training methods promising to overcome the robustness-accuracy trade-off.
|
| 14 |
+
|
| 15 |
+
## 1 Introduction
|
| 16 |
+
|
| 17 |
+
As neural networks are increasingly deployed in safety-critical domains, formal robustness guarantees against adversarial examples (Biggio et al., 2013; Szegedy et al., 2014) are more important than ever. However, despite significant progress, specialized training methods that improve certifiability at the cost of severely reduced accuracies are still required to obtain deterministic guarantees.
|
| 18 |
+
|
| 19 |
+
Generally, both training and certification methods compute a network's reachable set given an input region defined by an adversary specification and a concrete input, by propagating a symbolic over-approximation of this region through the network (Singh et al., 2018, 2019; Gowal et al., 2018a). Depending on the method used for propagation, both the computational complexity and tightness of this approximation can vary widely. For certified training, an over-approximation of the worst-case loss is computed from this reachable set and then optimized (Mirman et al., 2018; Zhang et al., 2020; Wong et al., 2018). Surprisingly, the least precise propagation methods yield the highest certified accuracies as more precise methods induce significantly harder optimization problems (Jovanovic et al., 2021). However, the large approximation errors incurred by these imprecise methods lead to over-regularization and thus poor accuracy. Combining precise worst-case loss approximations and a tractable optimization problem is thus the core challenge of certified training.
|
| 20 |
+
|
| 21 |
+
In this work, we tackle this challenge and propose a novel certified training method, SABR, Small Adversarial Bounding Regions, based on the following key insight: by propagating small but carefully selected subsets of the adversarial input region with imprecise methods (i.e., Box), we can obtain both well behaved optimization problems and precise approximations of the worst case loss. This yields networks with complex neuron interactions, enabling higher standard and certified accuracies, while pointing to a new class of certified training methods with significantly reduced regularization. SABR, thus, achieves state-of-the-art standard and certified accuracies across all commonly used settings on the MNIST, CIFAR-10, and TINYIMAGENET datasets.
|
| 22 |
+
|
| 23 |
+
Main Contributions Our main contributions are:
|
| 24 |
+
|
| 25 |
+
- A novel certified training method, SABR, reducing over-regularization to improve both standard and certified accuracy (§3).
|
| 26 |
+
|
| 27 |
+
- A theoretical investigation motivating SABR by deriving new insights into the growth of BOX relaxations during propagation (§4).
|
| 28 |
+
|
| 29 |
+
- An extensive empirical evaluation demonstrating that SABR outperforms all state-of-the-art certified training methods in terms of both standard and certifiable accuracies on MNIST, CIFAR-10, and TINYIMAGENET (§5).
|
| 30 |
+
|
| 31 |
+
## 2 Background
|
| 32 |
+
|
| 33 |
+
In this section, we provide the necessary background for SABR.
|
| 34 |
+
|
| 35 |
+
Adversarial Robustness Consider a classification model $\mathbf{h} : {\mathbb{R}}^{{d}_{\text{in }}} \mapsto {\mathbb{R}}^{c}$ that, given an input $\mathbf{x} \in$ $\mathcal{X} \subseteq {\mathbb{R}}^{{d}_{\text{in }}}$ , predicts numerical scores $\mathbf{y} \mathrel{\text{:=}} \mathbf{h}\left( \mathbf{x}\right)$ for every class. We say that $\mathbf{h}$ is adversarially robust on an ${\ell }_{p}$ -norm ball ${\mathcal{B}}_{p}^{{\epsilon }_{p}}\left( \mathbf{x}\right)$ of radius ${\epsilon }_{p}$ , if it consistently predicts the target class $t$ for all perturbed inputs ${\mathbf{x}}^{\prime } \in {\mathcal{B}}_{p}^{{\epsilon }_{p}}\left( \mathbf{x}\right)$ . More formally, we define adversarial robustness as:
|
| 36 |
+
|
| 37 |
+
$$
|
| 38 |
+
\underset{j}{\arg \max }h{\left( {\mathbf{x}}^{\prime }\right) }_{j} = t,\;\forall {\mathbf{x}}^{\prime } \in {\mathcal{B}}_{p}^{{\epsilon }_{p}}\left( \mathbf{x}\right) \mathrel{\text{:=}} \left\{ {{\mathbf{x}}^{\prime } \in \mathcal{X} \mid {\begin{Vmatrix}\mathbf{x} - {\mathbf{x}}^{\prime }\end{Vmatrix}}_{p} \leq {\epsilon }_{p}}\right\} . \tag{1}
|
| 39 |
+
$$
|
| 40 |
+
|
| 41 |
+
Neural Network Verification To verify that a neural network $\mathbf{h}$ is adversarially robust, several verification techniques have been proposed.
|
| 42 |
+
|
| 43 |
+
A simple but effective such method is verification with the BOX relaxation (Mirman et al., 2018), also called interval bound propagation (IBP) (Gowal et al., 2018b). Conceptually, we propagate the input region ${\mathcal{B}}_{p}^{{\epsilon }_{p}}\left( \mathbf{x}\right)$ in form of a hyper-box relaxation (each dimension is described as an interval) through the network to compute an over-approximation of its reachable set and then check whether all included outputs yield the correct classification. Given an input region ${\mathcal{B}}_{p}^{{\epsilon }_{p}}\left( \mathbf{x}\right)$ , we over-approximate it as a hyper-box, centered at ${\bar{x}}^{0} \mathrel{\text{:=}} x$ and with radius ${\delta }^{0} \mathrel{\text{:=}} {\epsilon }_{p}$ , such that we have the ${i}^{\text{th }}$ dimension of the input ${\mathbf{x}}_{i}^{0} \in \left\lbrack {{\bar{x}}_{i}^{0} - {\delta }_{i}^{0},{\bar{x}}_{i}^{0} + {\delta }_{i}^{0}}\right\rbrack$ . Given a linear layer ${\mathbf{f}}_{i}\left( {\mathbf{x}}^{i - 1}\right) = \mathbf{W}{\mathbf{x}}^{i - 1} + \mathbf{b} = : {\mathbf{x}}^{i}$ , we obtain the hyper-box relaxation of its output defined by center ${\overline{\mathbf{x}}}^{i} = \mathbf{W}{\overline{\mathbf{x}}}^{i - 1} + \mathbf{b}$ and radius ${\mathbf{\delta }}^{i} = \left| \mathbf{W}\right| {\mathbf{\delta }}^{i - 1}$ , where $\left| \cdot \right|$ denotes the elementwise absolute value. A ReLU activation $\operatorname{ReLU}\left( {\mathbf{x}}^{i - 1}\right) \mathrel{\text{:=}} \max \left( {0,{\mathbf{x}}^{i - 1}}\right)$ can be relaxed by propagating the lower and upper bound separately, resulting in the output hyper-box with ${\bar{x}}^{i} = \frac{{u}^{i} + {l}^{i}}{2}$ and ${\delta }^{i} = \frac{{u}^{i} - {l}^{i}}{2}$ where ${\mathbf{l}}^{i} = \operatorname{ReLU}\left( {{\overline{\mathbf{x}}}^{i - 1} - {\mathbf{\delta }}^{i - 1}}\right)$ and ${\mathbf{u}}^{i} = \operatorname{ReLU}\left( {{\overline{\mathbf{x}}}^{i - 1} + {\mathbf{\delta }}^{i - 1}}\right)$ . 
We can now prove robustness if the upper bound on the logit differences ${y}_{i}^{\Delta } \mathrel{\text{:=}} {y}_{i} - {y}_{t}$ is smaller than 0 for all $i \neq t$.
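The Box propagation rules above can be sketched as follows; the two-layer network and its weights are illustrative, not from the paper:

```python
import numpy as np

def box_linear(center, radius, W, b):
    """Propagate a hyper-box through a linear layer Wx + b:
    new center = W c + b, new radius = |W| delta."""
    return W @ center + b, np.abs(W) @ radius

def box_relu(center, radius):
    """Propagate a hyper-box through ReLU by relaxing the lower and
    upper bounds separately, then re-centering."""
    lower = np.maximum(center - radius, 0.0)
    upper = np.maximum(center + radius, 0.0)
    return (upper + lower) / 2, (upper - lower) / 2

# Toy network: 2 -> 2 -> 1 (weights chosen for illustration only).
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.1, -0.2])
W2, b2 = np.array([[2.0, 1.0]]), np.array([0.0])

c, r = np.array([0.5, 0.5]), np.array([0.1, 0.1])  # input box around x
c, r = box_linear(c, r, W1, b1)
c, r = box_relu(c, r)
c, r = box_linear(c, r, W2, b2)
print(c - r, c + r)  # sound lower/upper bounds on the output (~[0.8], [1.9])
```

Any concrete input in the original box is guaranteed to produce an output inside these bounds, which is what makes the (imprecise) relaxation sound.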
|
| 44 |
+
|
| 45 |
+
Beyond Box, more precise verification approaches track more relational information at the cost of increased computational complexity (Palma et al., 2022; Wang et al., 2021; Ferrari et al., 2022).
|
| 46 |
+
|
| 47 |
+
Training for Robustness For neural networks to be certifiably robust, special training is necessary. Given a data distribution $\left( {\mathbf{x}, t}\right) \sim \mathcal{D}$ , standard training generally aims to find a network parametrization $\theta$ that minimizes the expected cross-entropy loss:
|
| 48 |
+
|
| 49 |
+
$$
|
| 50 |
+
{\theta }_{\mathrm{{std}}} = \underset{\theta }{\arg \min }{\mathbb{E}}_{\mathcal{D}}\left\lbrack {{\mathcal{L}}_{\mathrm{{CE}}}\left( {{\mathbf{h}}_{\mathbf{\theta }}\left( \mathbf{x}\right) , t}\right) }\right\rbrack ,\;\text{ with }\;{\mathcal{L}}_{\mathrm{{CE}}}\left( {\mathbf{y}, t}\right) = \ln \left( {1 + \mathop{\sum }\limits_{{i \neq t}}\exp \left( {{y}_{i} - {y}_{t}}\right) }\right) . \tag{2}
|
| 51 |
+
$$
|
| 52 |
+
|
| 53 |
+
When training for robustness, we, instead, wish to minimize the expected worst case loss around the data distribution, leading to the min-max optimization problem:
|
| 54 |
+
|
| 55 |
+
$$
|
| 56 |
+
{\theta }_{\mathrm{{rob}}} = \underset{\theta }{\arg \min }{\mathbb{E}}_{\mathcal{D}}\left\lbrack {\mathop{\max }\limits_{{{\mathbf{x}}^{\prime } \in {\mathcal{B}}_{p}^{{\epsilon }_{p}}\left( \mathbf{x}\right) }}{\mathcal{L}}_{\mathrm{{CE}}}\left( {{\mathbf{h}}_{\mathbf{\theta }}\left( {\mathbf{x}}^{\prime }\right) , t}\right) }\right\rbrack . \tag{3}
|
| 57 |
+
$$
|
| 58 |
+
|
| 59 |
+
Unfortunately, solving the inner maximization problem is generally intractable. Therefore, it is commonly under- or over-approximated, yielding adversarial and certified training, respectively.
|
| 60 |
+
|
| 61 |
+
Adversarial Training Adversarial training optimizes a lower bound on the inner optimization objective in Eq. (3) by first computing concrete examples ${\mathbf{x}}^{\prime } \in {\mathcal{B}}_{p}^{{\epsilon }_{p}}\left( \mathbf{x}\right)$ maximizing the loss term and then optimizing the network parameters $\mathbf{\theta }$ for these samples. While networks trained this way typically exhibit good empirical robustness, they remain hard to formally verify and sometimes also vulnerable to stronger or different attacks (Tramèr et al., 2020; Croce & Hein, 2020).
|
| 62 |
+
|
| 63 |
+

|
| 64 |
+
|
| 65 |
+
Figure 1: Illustration of SABR training. Instead of propagating a Box approximation (dashed box) of the whole input region (red and green shapes in input space), SABR propagates a small subset of this region (solid box), selected to contain the adversarial example (black ×) and thus the misclassified region (red). The smaller Box accumulates much fewer approximation errors during propagation, leading to a significantly smaller output relaxation, which induces much less regularization (medium blue arrow) than training with the full region (large blue arrow), but more than training with just the adversarial example (small blue arrow).
|
| 66 |
+
|
| 67 |
+
Certified Training Certified training optimizes an upper bound on the inner maximization objective in Eq. (3), obtained via a bound propagation method. These methods compute an upper bound ${\mathbf{u}}_{{\mathbf{y}}^{\Delta }}$ on the logit differences ${\mathbf{y}}^{\Delta } \mathrel{\text{:=}} \mathbf{y} - {y}_{t}\mathbf{1}$ to obtain the robust cross-entropy loss ${\mathcal{L}}_{\mathrm{{CE}},\operatorname{rob}}\left( {{\mathcal{B}}_{p}^{{\epsilon }_{p}}\left( \mathbf{x}\right) , t}\right) = {\mathcal{L}}_{\mathrm{{CE}}}\left( {{\mathbf{u}}_{{\mathbf{y}}^{\Delta }}, t}\right)$ . Surprisingly, using the imprecise Box relaxation (Mirman et al., 2018; Gowal et al., 2018b) (denoted IBP) consistently produces better results than methods based on tighter abstractions (Zhang et al., 2020; Balunovic & Vechev, 2020; Wong et al., 2018). Jovanovic et al. (2021) trace this back to the optimization problems induced by the more precise methods becoming intractable to solve. While the heavily regularized, certifiably trained networks are amenable to certification, they suffer from severely reduced (standard) accuracies. Overcoming this robustness-accuracy trade-off remains a key challenge of robust machine learning.

## 3 Method - Small Regions for Certified Training

We address this challenge by proposing a novel certified training method, SABR (Small Adversarial Bounding Regions), yielding networks that are amenable to certification and retain comparatively high standard accuracies. We leverage the key insight that computing an over-approximation of the worst-case loss for a small but carefully selected subset of the input region ${\mathcal{B}}_{p}^{{\epsilon }_{p}}\left( \mathbf{x}\right)$ often still captures the actual worst-case loss, while significantly reducing approximation errors.

We illustrate this in Fig. 1. Existing certified training methods propagate the whole input region (dashed box in the input panel), yielding quickly growing approximation errors. The resulting imprecise over-approximations of the worst-case loss (compare the red and green regions to the dashed box in the output panel) cause significant over-regularization (large blue arrow). Adversarial training methods, in contrast, only consider individual points ($\times$ in Fig. 1) and fail to capture the worst-case loss, leading to insufficient regularization (small blue arrow in the output panel). We tackle this problem by propagating small, adversarially chosen subsets of the input region (solid box $\square$ in the input panel), which we call propagation regions. This yields significantly reduced approximation errors and thus a more precise, although not necessarily sound, over-approximation of the loss (see the solid box $\square$ in the output panel). The resulting intermediate level of regularization (medium blue arrow) allows us to train networks that are both robust and accurate.

We observe that, depending on the size of the propagated region, SABR can be seen as a continuous interpolation between adversarial training for infinitesimally small regions and standard certified training for the full input region.

Selecting the Propagation Region We parametrize the propagation region as an ${\ell }_{p}$-norm ball ${\mathcal{B}}_{p}^{{\tau }_{p}}\left( {\mathbf{x}}^{\prime }\right)$ with center ${\mathbf{x}}^{\prime }$ and radius ${\tau }_{p} \leq {\epsilon }_{p} - {\left\| \mathbf{x} - {\mathbf{x}}^{\prime }\right\| }_{p}$, ensuring that we indeed propagate a subset of the original region ${\mathcal{B}}_{p}^{{\epsilon }_{p}}\left( \mathbf{x}\right)$. For notational clarity, we drop the subscript $p$. We first choose $\tau = \lambda\epsilon$ by scaling the original perturbation radius $\epsilon$ with the subselection ratio $\lambda \in (0,1\rbrack$. We then select ${\mathbf{x}}^{\prime }$ by first conducting a PGD attack, yielding the preliminary center ${\mathbf{x}}^{ * }$, and then ensuring that the obtained region is fully contained in the original one by projecting ${\mathbf{x}}^{ * }$ onto ${\mathcal{B}}^{\epsilon - \tau }\left( \mathbf{x}\right)$ to obtain ${\mathbf{x}}^{\prime }$. We show this in Fig. 2.
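For the ${\ell }_{\infty}$ case, this selection procedure can be sketched as follows; the function name `select_propagation_region` and the example values are hypothetical illustrations, not the paper's implementation:

```python
import numpy as np

def select_propagation_region(x, x_star, eps, lam):
    """Select SABR's propagation region for the l_inf case (a sketch).

    `x_star` is assumed to be the result of a PGD attack within B_eps(x).
    Returns the center x' and radius tau of the small box B_tau(x'),
    which is guaranteed to lie inside the original region B_eps(x).
    """
    tau = lam * eps                                              # tau = lambda * eps
    x_prime = np.clip(x_star, x - (eps - tau), x + (eps - tau))  # project onto B_{eps-tau}(x)
    return x_prime, tau

x = np.array([0.5, 0.5])
x_star = np.array([0.62, 0.38])  # hypothetical adversarial example for eps = 0.1
x_prime, tau = select_propagation_region(x, x_star, eps=0.1, lam=0.4)
```

Because the center is projected onto ${\mathcal{B}}^{\epsilon - \tau}(\mathbf{x})$ before the radius $\tau$ is added, the resulting small box is always a subset of the original region.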

Figure 2: Illustration of SABR's propagation region selection process.

Propagation Method While SABR can be instantiated with any certified training method, we choose Box propagation (DIFFAI (Mirman et al., 2018) or IBP (Gowal et al., 2018b)) to obtain well-behaved optimization problems (Jovanovic et al., 2021).

## 4 Understanding SABR: Robust Loss and Growth of Small Boxes

In this section, we aim to uncover the reasons behind SABR's success. Towards this, we first analyze the relationship between robust loss and over-approximation size before investigating the growth of the Box approximation with propagation region size.

Robust Loss Analysis Certified training typically optimizes an over-approximation of the worst-case cross-entropy loss ${\mathcal{L}}_{\mathrm{CE},\mathrm{rob}}$, computed via the softmax of the upper bound on the logit differences ${\mathbf{y}}^{\Delta } \mathrel{\text{:=}} \mathbf{y} - {y}_{t}\mathbf{1}$. When training with the Box relaxation and assuming w.l.o.g. the target class $t = 1$, we obtain ${\mathbf{y}}^{\Delta } \in \left\lbrack {\overline{\mathbf{y}}}^{\Delta } - {\mathbf{\delta }}^{\Delta },{\overline{\mathbf{y}}}^{\Delta } + {\mathbf{\delta }}^{\Delta }\right\rbrack$ and the robust cross-entropy loss ${\mathcal{L}}_{\mathrm{CE},\mathrm{rob}}\left( \mathbf{x}\right) = \ln \left( 1 + \mathop{\sum }\limits_{{i = 2}}^{n}{e}^{{\bar{y}}_{i}^{\Delta } + {\delta }_{i}^{\Delta }}\right)$. Further, we note that the Box relaxations of many functions preserve the box centers, i.e., ${\overline{\mathbf{x}}}^{i} = \mathbf{f}\left( {\overline{\mathbf{x}}}^{i - 1}\right)$. Only unstable ReLUs, i.e., ReLUs containing 0 in their input bounds, introduce a slight shift. However, these are empirically few in certifiably trained networks (see Table 5). We can thus decompose the logit differences determining the robust loss into an accuracy term ${\overline{\mathbf{y}}}^{\Delta }$, corresponding to the misclassification margin of the adversarial example ${\mathbf{x}}^{\prime }$ at the center of the propagation region, and a robustness term ${\mathbf{\delta }}^{\Delta }$, bounding the difference to the actual worst-case logits. As these terms generally represent conflicting objectives, robustness and accuracy are balanced to minimize the robust optimization objective. Consequently, reducing the regularization induced by the robustness term will bias the optimization process towards standard accuracy.
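The robust cross-entropy loss above can be evaluated directly from Box bounds on the logit differences; the following sketch (with a hypothetical helper and made-up bounds) illustrates the formula:

```python
import numpy as np

def robust_ce_loss(y_delta_center, y_delta_radius, t):
    """Robust cross-entropy from Box bounds on the logit differences y^D = y - y_t * 1.

    Upper-bounds every non-target entry by center + radius; the target entry
    is identically 0 and contributes e^0 = 1, recovering
    L = ln(1 + sum_{i != t} exp(y_bar_i + delta_i)).
    """
    upper = y_delta_center + y_delta_radius       # worst-case logit difference per class
    mask = np.arange(len(upper)) != t
    return float(np.log1p(np.sum(np.exp(upper[mask]))))

# Hypothetical Box over logit differences for target class t = 0:
center = np.array([0.0, -3.0, -2.0])
radius = np.array([0.0, 0.5, 1.0])
loss = robust_ce_loss(center, radius, t=0)
```

With the decomposition from the text, `center` plays the role of the accuracy term ${\overline{\mathbf{y}}}^{\Delta}$ and `radius` the role of the robustness term ${\mathbf{\delta}}^{\Delta}$; shrinking the radii shrinks the loss towards the standard loss at the box center.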

Box Growth We investigate the growth of Box relaxations for an $L$-layer network with linear layers ${\mathbf{f}}_{i}$ and ReLU activation functions $\mathbf{\sigma }$. Given a Box input with radius ${\delta }^{i - 1}$ and center distribution ${\bar{x}}^{i - 1} \sim \mathcal{D}$, we define the per-layer growth rate ${\kappa }^{i} = \frac{{\mathbb{E}}_{\mathcal{D}}\left\lbrack {\delta }^{i}\right\rbrack }{{\delta }^{i - 1}}$ as the ratio of expected output radius to input radius.

Figure 3: Input distribution for last ReLU layer depending on training method.

For linear layers with weight matrix $\mathbf{W}$, we obtain an output radius ${\delta }^{i} = \left| \mathbf{W}\right| {\delta }^{i - 1}$ and thus a constant growth rate ${\kappa }^{i}$, corresponding to the row-wise ${\ell }_{1}$-norm of the weight matrix ${\left\| {\mathbf{W}}_{j, \cdot }\right\| }_{1}$. Empirically, we find most linear and convolutional layers to exhibit growth rates between 10 and 100.
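This Box propagation through a linear layer can be sketched as follows (an illustrative helper, not the paper's implementation); note how a uniform input radius is scaled by the row-wise ${\ell }_{1}$-norms of $\mathbf{W}$:

```python
import numpy as np

def linear_box(W, b, center, radius):
    """Propagate a Box (center, per-dimension radius) through y = W x + b."""
    return W @ center + b, np.abs(W) @ radius  # center maps exactly; radius via |W|

W = np.array([[1.0, -2.0],
              [0.5,  0.5]])
b = np.zeros(2)
c_out, r_out = linear_box(W, b, center=np.array([1.0, 1.0]), radius=np.array([0.1, 0.1]))
# With a uniform input radius of 0.1, each output radius equals the row-wise
# l1 norm of W times the input radius: (1 + 2) * 0.1 = 0.3 and (0.5 + 0.5) * 0.1 = 0.1.
```

The box center is propagated exactly through the affine map; only the radii grow, which is why the growth rate of a linear layer is a constant property of its weights.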

For ReLU layers ${\mathbf{x}}^{i} = \sigma \left( {\mathbf{x}}^{i - 1}\right)$, the growth rate depends on the location and size of the inputs. Shi et al. (2021) assume the input Box centers ${\overline{\mathbf{x}}}^{i - 1}$ to be symmetrically distributed around 0, i.e., ${P}_{\mathcal{D}}\left( {\bar{x}}^{i - 1}\right) = {P}_{\mathcal{D}}\left( -{\bar{x}}^{i - 1}\right)$, and obtain a constant growth rate of ${\kappa }^{i} = {0.5}$. While this assumption holds at initialization, trained networks tend to have more inactive than active ReLUs (see Table 5), indicating asymmetric distributions with more negative inputs (see also Fig. 3). We investigate this more realistic setting. When input radii ${\delta }^{i - 1} \approx 0$, active neurons will stay stably active, yielding ${\delta }^{i} = {\delta }^{i - 1}$, and inactive neurons will stay stably inactive, yielding ${\delta }^{i} = 0$. Thus, we obtain a growth rate equal to the portion of active neurons. In the other extreme ${\delta }^{i - 1} \rightarrow \infty$, all neurons will become unstable with $\left| {\bar{x}}^{i - 1}\right| \ll {\delta }^{i - 1}$, yielding ${\delta }^{i} \approx {0.5}{\delta }^{i - 1}$ and thus a constant growth rate of ${\kappa }^{i} = {0.5}$. Assuming pointwise asymmetry favouring negative inputs, i.e., $p\left( {\bar{x}}^{i - 1} = - z\right) > p\left( {\bar{x}}^{i - 1} = z\right), \forall z \in {\mathbb{R}}^{ > 0}$, we show that between those two extremes, output radii grow strictly super-linearly in the input radius:

Figure 4: Actual (purple) mean output size growth and a linear approximation (orange) for a ReLU layer with $\bar{x} \sim \mathcal{N}(\mu = -{1.0}, \sigma = \sqrt{0.5})$.

Theorem 4.1 (Hyper-Box Growth). Let $y \mathrel{\text{:=}} \sigma \left( x\right) = \max \left( {0, x}\right)$ be a ReLU function and consider box inputs with radius ${\delta }_{x}$ and asymmetrically distributed centers $\bar{x} \sim \mathcal{D}$ such that ${P}_{\mathcal{D}}\left( \bar{x} = - z\right) > {P}_{\mathcal{D}}\left( \bar{x} = z\right), \forall z \in {\mathbb{R}}^{ > 0}$. Then the mean output radius ${\delta }_{y}$ will grow super-linearly in the input radius ${\delta }_{x}$. More formally:

$$
\forall {\delta }_{x},{\delta }_{x}^{\prime } \in {\mathbb{R}}^{ \geq 0} : \;{\delta }_{x}^{\prime } > {\delta }_{x} \Rightarrow {\mathbb{E}}_{\mathcal{D}}\left\lbrack {\delta }_{y}^{\prime }\right\rbrack > {\mathbb{E}}_{\mathcal{D}}\left\lbrack {\delta }_{y}\right\rbrack + \left( {{\delta }_{x}^{\prime } - {\delta }_{x}}\right) \frac{\partial }{\partial {\delta }_{x}}{\mathbb{E}}_{\mathcal{D}}\left\lbrack {\delta }_{y}\right\rbrack . \tag{4}
$$

We defer a proof to App. A and illustrate this behavior in Fig. 4. Multiplying all layer-wise growth rates, we obtain the overall growth rate $\kappa = \mathop{\prod }\limits_{{i = 2}}^{L}{\kappa }^{i}$, which is exponential in network depth and super-linear in input radius. When not specifically training with the Box relaxation, we empirically observe that the large growth factors of linear layers dominate the shrinking effect of the ReLU layers, leading to quick exponential growth in network depth. Further, for both SABR and IBP trained networks, the super-linear growth in input radius empirically manifests as exponential behavior (see Figs. 7 and 8). Using SABR, we thus expect the regularization induced by the robustness term to decrease super-linearly, and empirically even exponentially, with the subselection ratio $\lambda$, explaining the significantly higher accuracies compared to IBP.
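The super-linear growth described here can be checked numerically; the following Monte-Carlo sketch (our own illustration, using the center distribution from Fig. 4) estimates the growth ratio ${\mathbb{E}}_{\mathcal{D}}\left\lbrack {\delta }_{y}\right\rbrack / {\delta }_{x}$ at increasing input radii:

```python
import numpy as np

rng = np.random.default_rng(0)
# Asymmetric ReLU input centers, as in Fig. 4: x_bar ~ N(-1.0, sqrt(0.5)).
centers = rng.normal(loc=-1.0, scale=np.sqrt(0.5), size=200_000)

def mean_output_radius(delta_x):
    """Mean ReLU Box output radius E[delta_y] for input radius delta_x (Eq. 7)."""
    lower, upper = centers - delta_x, centers + delta_x
    delta_y = np.where(upper <= 0, 0.0,                       # stably inactive: radius 0
               np.where(lower >= 0, delta_x, upper / 2.0))    # stably active / unstable
    return delta_y.mean()

# Super-linear growth means the ratio E[delta_y] / delta_x increases with delta_x.
ratios = [mean_output_radius(d) / d for d in (0.1, 0.5, 2.0)]
```

Under this asymmetric center distribution, the estimated growth rate increases monotonically with the input radius, consistent with the strictly positive curvature established in Theorem 4.1.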

## 5 Evaluation

In this section, we compare SABR to existing certified training methods in the challenging ${\ell }_{\infty }$ setting, deferring a detailed description of the experimental setup to App. B.

Main Results We compare SABR to state-of-the-art certified training methods in Table 2 and Fig. 5, reporting the best results achieved with a given method on any architecture.

Figure 5: Certified over standard accuracy for different certified training methods. The upper right-hand corner is best.

In Fig. 5, we show certified over standard accuracy (the upper right-hand corner is best) and observe that SABR dominates all other methods, achieving both the highest certified and standard accuracy across all settings. Methods striving to balance accuracy and regularization by bridging the gap between provable and adversarial training (Balunovic & Vechev, 2020; Palma et al., 2022) perform only slightly worse than SABR at small perturbation radii, but much worse at large radii, e.g., attaining only ${27.5}\%$ and ${27.9}\%$ certified accuracy for CIFAR-10 at $\epsilon = 8/{255}$ compared to SABR's ${35.25}\%$. Similarly, methods focusing only on certified accuracy by directly optimizing over-approximations of the worst-case loss (Gowal et al., 2018b; Zhang et al., 2020) tend to perform well at large perturbation radii, but poorly at small perturbation radii, e.g., on CIFAR-10 at $\epsilon = 2/{255}$, SABR improves certified accuracy to ${62.6}\%$ up from ${52.9}\%$ and ${54.0}\%$.

In contrast to certified training, Zhang et al. (2021) propose an architecture with inherent ${\ell }_{\infty }$-robustness properties. While it attains higher certified accuracies on CIFAR-10 at $\epsilon = 8/{255}$, its training is notoriously hard (Zhang et al., 2021, 2022), yielding low standard accuracies of, e.g., only ${60.6}\%$ compared to SABR's ${79.52}\%$ at $\epsilon = 2/{255}$. Further, robustness can only be obtained against one perturbation type at a time.

Table 1: Comparison of standard (Std.) and certified (Cert.) accuracy [%] to the ${\ell }_{\infty }$-distance Net (Zhang et al., 2022).
<table><tr><td rowspan="2">Dataset</td><td rowspan="2">$\epsilon$</td><td colspan="2">${\ell }_{\infty }$ -distance Net</td><td colspan="2">SABR (ours)</td></tr><tr><td>Std.</td><td>Cert.</td><td>Std.</td><td>Cert.</td></tr><tr><td rowspan="2">MNIST</td><td>0.1</td><td>98.93</td><td>97.95</td><td>99.25</td><td>98.06</td></tr><tr><td>0.3</td><td>98.56</td><td>93.20</td><td>98.82</td><td>93.38</td></tr><tr><td rowspan="2">CIFAR-10</td><td>2/255</td><td>60.61</td><td>54.12</td><td>79.52</td><td>62.57</td></tr><tr><td>8/255</td><td>54.30</td><td>40.06</td><td>52.00</td><td>35.25</td></tr></table>

Certification Method and Propagation Region Size To analyze the interaction between certification method precision and propagation region size, we train a range of models with subselection ratios $\lambda$ varying from 0.0125 to 1.0 and analyze them with verification methods of increasing precision (Box, DEEPPOLY, MN-BAB) and a 50-step PGD attack (Madry et al., 2018) with 5 random restarts and the targeted logit margin loss (Carlini & Wagner, 2017). We illustrate results in Fig. 6 and observe that standard and adversarial accuracies increase with decreasing $\lambda$, as regularization decreases. For $\lambda = 1$, i.e., IBP training, we observe little difference between the verification methods. However, as we decrease $\lambda$, the Box-verified accuracy decreases quickly, despite Box relaxations being used during training.

Figure 6: Standard, adversarial, and certified accuracy depending on the certification method for 1000 CIFAR-10 samples at $\epsilon = 2/{255}$.

Table 2: Comparison of the standard (Acc.) and certified (Cert. Acc.) accuracy for different certified training methods on the full MNIST, CIFAR-10, and TINYIMAGENET test sets. We use MN-BAB (Ferrari et al., 2022) for certification and report other results from the relevant literature.
<table><tr><td>Dataset</td><td>${\epsilon }_{\infty }$</td><td>Training Method</td><td>Source</td><td>Acc. [%]</td><td>Cert. Acc. [%]</td></tr><tr><td rowspan="8">MNIST</td><td rowspan="4">0.1</td><td>COLT</td><td>Balunovic & Vechev (2020)</td><td>99.2</td><td>97.1</td></tr><tr><td>CROWN-IBP</td><td>Zhang et al. (2020)</td><td>98.83</td><td>97.76</td></tr><tr><td>IBP</td><td>Shi et al. (2021)</td><td>98.84</td><td>97.95</td></tr><tr><td>SABR</td><td>this work</td><td>99.25</td><td>98.06</td></tr><tr><td rowspan="4">0.3</td><td>COLT</td><td>Balunovic & Vechev (2020)</td><td>97.3</td><td>85.7</td></tr><tr><td>CROWN-IBP</td><td>Zhang et al. (2020)</td><td>98.18</td><td>92.98</td></tr><tr><td>IBP</td><td>Shi et al. (2021)</td><td>97.67</td><td>93.10</td></tr><tr><td>SABR</td><td>this work</td><td>98.82</td><td>93.38</td></tr><tr><td rowspan="10">CIFAR-10</td><td rowspan="5">2/255</td><td>COLT</td><td>Balunovic & Vechev (2020)</td><td>78.4</td><td>60.5</td></tr><tr><td>CROWN-IBP</td><td>Zhang et al. (2020)</td><td>71.52</td><td>53.97</td></tr><tr><td>IBP</td><td>Shi et al. (2021)</td><td>66.84</td><td>52.85</td></tr><tr><td>IBP-R</td><td>Palma et al. (2022)</td><td>78.19</td><td>61.97</td></tr><tr><td>SABR</td><td>this work</td><td>79.52</td><td>62.57</td></tr><tr><td rowspan="5">8/255</td><td>COLT</td><td>Balunovic & Vechev (2020)</td><td>51.7</td><td>27.5</td></tr><tr><td>CROWN-IBP</td><td>Xu et al. (2020)</td><td>46.29</td><td>33.38</td></tr><tr><td>IBP</td><td>Shi et al. (2021)</td><td>48.94</td><td>34.97</td></tr><tr><td>IBP-R</td><td>Palma et al. (2022)</td><td>51.43</td><td>27.87</td></tr><tr><td>SABR</td><td>this work</td><td>52.00</td><td>35.25</td></tr><tr><td rowspan="3">TINYIMAGENET</td><td rowspan="3">1/255</td><td>CROWN-IBP</td><td>Shi et al. (2021)</td><td>25.62</td><td>17.93</td></tr><tr><td>IBP</td><td>Shi et al. (2021)</td><td>25.92</td><td>17.87</td></tr><tr><td>SABR</td><td>this work</td><td>28.64</td><td>20.34</td></tr></table>

In contrast, using the most precise method, MN-BAB, we initially observe increasing certified accuracies, as the reduced regularization yields more accurate networks, before the level of regularization becomes insufficient for certification. While DEEPPOLY loses precision less quickly than Box, it cannot benefit from more accurate networks. This indicates that the increased accuracy, enabled by the reduced regularization, may rely on complex neuron interactions, captured only by MN-BAB.

This qualitatively different behavior depending on the precision of the certification method highlights the importance of recent advances in neural network verification for certified training. Even more importantly, these results clearly show that provably robust networks do not necessarily require the level of regularization introduced by IBP training.

Loss Analysis In Fig. 7, we compare the robust loss of a SABR and an IBP trained network across different propagation region sizes (all centered around the original sample), depending on the bound propagation method used. When propagating the full input region $\left( \lambda = 1\right)$, the SABR trained network yields a much higher robust loss than the IBP trained one. However, when comparing the respective training subselection ratios, $\lambda = {0.05}$ for SABR and $\lambda = {1.0}$ for IBP, SABR yields significantly smaller training losses, allowing the SABR trained network to reach a much lower standard loss. Finally, we observe the losses to clearly grow super-linearly with increasing propagation region sizes (note the logarithmic scaling of the y-axis), agreeing well with our theoretical results in §4.

Figure 7: Standard and robust cross-entropy loss, computed with Box (Box) and DEEPPOLY (DP) bounds, for an IBP and a SABR trained network over subselection ratio $\lambda$.

## 6 Conclusion

We introduced a novel certified training method called SABR (Small Adversarial Bounding Regions), based on the key insight that propagating small but carefully selected subsets of the input region combines small approximation errors, and thus little regularization, with well-behaved optimization problems. This allows SABR trained networks to outperform all existing certified training methods on all commonly used benchmarks in terms of both standard and certified accuracy. Even more importantly, SABR lays the foundation for a new class of certified training methods promising to overcome the robustness-accuracy trade-off and enabling the training of networks that are both accurate and certifiably robust.

## References

Mislav Balunovic and Martin T. Vechev. Adversarial training and provable defenses: Bridging the gap. In Proc. of ICLR, 2020.

Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Srndic, Pavel Laskov, Giorgio Giacinto, and Fabio Roli. Evasion attacks against machine learning at test time. In Machine Learning and Knowledge Discovery in Databases - European Conference, ECML PKDD 2013, Prague, Czech Republic, September 23-27, 2013, Proceedings, Part III, volume 8190, 2013. doi: 10.1007/978-3-642-40994-3_25.

Nicholas Carlini and David A. Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy, SP 2017, San Jose, CA, USA, May 22-26, 2017. doi: 10.1109/SP.2017.49.

Francesco Croce and Matthias Hein. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. In Proc. of ICML, volume 119, 2020.

Claudio Ferrari, Mark Niklas Müller, Nikola Jovanovic, and Martin T. Vechev. Complete verification via multi-neuron relaxation guided branch-and-bound. In Proc. of ICLR, 2022.

Sven Gowal, Krishnamurthy Dvijotham, Robert Stanforth, Rudy Bunel, Chongli Qin, Jonathan Uesato, Relja Arandjelovic, Timothy A. Mann, and Pushmeet Kohli. On the effectiveness of interval bound propagation for training verifiably robust models. ArXiv preprint, abs/1810.12715, 2018a.

Sven Gowal, Krishnamurthy Dvijotham, Robert Stanforth, Rudy Bunel, Chongli Qin, Jonathan Uesato, Relja Arandjelovic, Timothy A. Mann, and Pushmeet Kohli. On the effectiveness of interval bound propagation for training verifiably robust models. ArXiv preprint, abs/1810.12715, 2018b.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proc. of ICML, volume 37, 2015.

Nikola Jovanovic, Mislav Balunovic, Maximilian Baader, and Martin T. Vechev. Certified defenses: Why tighter relaxations may hurt training? ArXiv preprint, abs/2102.06700, 2021.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proc. of ICLR, 2015.

Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.

Ya Le and Xuan S. Yang. Tiny imagenet visual recognition challenge. CS 231N, 7(7), 2015.

Yann LeCun, Corinna Cortes, and CJ Burges. Mnist handwritten digit database. ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist, 2, 2010.

Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In Proc. of ICLR, 2018.

Matthew Mirman, Timon Gehr, and Martin T. Vechev. Differentiable abstract interpretation for provably robust neural networks. In Proc. of ICML, volume 80, 2018.

Alessandro De Palma, Rudy Bunel, Krishnamurthy Dvijotham, M. Pawan Kumar, and Robert Stanforth. IBP regularization for verified adversarial robustness via branch-and-bound. ArXiv preprint, abs/2206.14772, 2022.

Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, 2019.

Zhouxing Shi, Yihan Wang, Huan Zhang, Jinfeng Yi, and Cho-Jui Hsieh. Fast certified robust training via better initialization and shorter warmup. ArXiv preprint, abs/2103.17268, 2021.

Gagandeep Singh, Timon Gehr, Matthew Mirman, Markus Püschel, and Martin T. Vechev. Fast and effective robustness certification. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, 2018.

Gagandeep Singh, Timon Gehr, Markus Püschel, and Martin T. Vechev. An abstract domain for certifying neural networks. Proc. ACM Program. Lang., 3(POPL), 2019. doi: 10.1145/3290354.

Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In Proc. of ICLR, 2014.

Florian Tramèr, Nicholas Carlini, Wieland Brendel, and Aleksander Madry. On adaptive attacks to adversarial example defenses. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.

Shiqi Wang, Huan Zhang, Kaidi Xu, Xue Lin, Suman Jana, Cho-Jui Hsieh, and J. Zico Kolter. Beta-crown: Efficient bound propagation with per-neuron split constraints for neural network robustness verification. In Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, 2021.

Eric Wong, Frank R. Schmidt, Jan Hendrik Metzen, and J. Zico Kolter. Scaling provable adversarial defenses. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, 2018.

Kaidi Xu, Zhouxing Shi, Huan Zhang, Yihan Wang, Kai-Wei Chang, Minlie Huang, Bhavya Kailkhura, Xue Lin, and Cho-Jui Hsieh. Automatic perturbation analysis for scalable certified robustness and beyond. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.

Bohang Zhang, Tianle Cai, Zhou Lu, Di He, and Liwei Wang. Towards certifying l-infinity robustness using neural networks with l-inf-dist neurons. In Proc. of ICML, volume 139, 2021.

Bohang Zhang, Du Jiang, Di He, and Liwei Wang. Boosting the certified robustness of l-infinity distance nets. In Proc. of ICLR, 2022.

Huan Zhang, Hongge Chen, Chaowei Xiao, Sven Gowal, Robert Stanforth, Bo Li, Duane S. Boning, and Cho-Jui Hsieh. Towards stable and efficient training of verifiably robust neural networks. In Proc. of ICLR, 2020.
## A Deferred Proofs

In this section, we provide the proof for Theorem 4.1. Let us first consider the following lemma:

Lemma A.1 (Hyper-Box Growth). Let $y \mathrel{\text{:=}} \sigma \left( x\right) = \max \left( {0, x}\right)$ be a ReLU function and consider box inputs with radius ${\delta }_{x}$ and centers $\bar{x} \sim \mathcal{D}$. Then the mean radius ${\mathbb{E}}_{\mathcal{D}}\left\lbrack {\delta }_{y}\right\rbrack$ of the output boxes will satisfy:

$$
\frac{\partial }{\partial {\delta }_{x, i}}{\mathbb{E}}_{\mathcal{D}}\left\lbrack {\delta }_{y, i}\right\rbrack = \frac{1}{2}{P}_{\mathcal{D}}\left\lbrack {-{\delta }_{x, i} < {\bar{x}}_{i} < {\delta }_{x, i}}\right\rbrack + {P}_{\mathcal{D}}\left\lbrack {{\bar{x}}_{i} > {\delta }_{x, i}}\right\rbrack > 0, \tag{5}
$$

and

$$
\frac{{\partial }^{2}}{\partial {\delta }_{x, i}^{2}}{\mathbb{E}}_{\mathcal{D}}\left\lbrack {\delta }_{y, i}\right\rbrack = \frac{1}{2}\left( {P}_{\mathcal{D}}\left\lbrack {\bar{x}}_{i} = - {\delta }_{x, i}\right\rbrack - {P}_{\mathcal{D}}\left\lbrack {\bar{x}}_{i} = {\delta }_{x, i}\right\rbrack \right) . \tag{6}
$$

Proof. Recall that given an input box with center $\overline{\mathbf{x}}$ and radius ${\delta }_{\mathbf{x}}$, the output relaxation of a ReLU layer is defined by:

$$
{\bar{y}}_{i} = \begin{cases} 0, & \text{if } {\bar{x}}_{i} + {\delta }_{x, i} \leq 0 \\ \frac{{\bar{x}}_{i} + {\delta }_{x, i}}{2}, & \text{elif } {\bar{x}}_{i} - {\delta }_{x, i} \leq 0 \\ {\bar{x}}_{i}, & \text{else} \end{cases} \qquad {\delta }_{y, i} = \begin{cases} 0, & \text{if } {\bar{x}}_{i} + {\delta }_{x, i} \leq 0 \\ \frac{{\bar{x}}_{i} + {\delta }_{x, i}}{2}, & \text{elif } {\bar{x}}_{i} - {\delta }_{x, i} \leq 0 \\ {\delta }_{x, i}, & \text{else} \end{cases} \tag{7}
$$
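Since the output box in Eq. (7) is exactly the interval image of the ReLU, it can be computed from the input bounds directly; the following sketch (our own illustration, not part of the paper) checks all three cases:

```python
import numpy as np

def relu_box(x_center, x_radius):
    """ReLU Box relaxation: the output box is the exact interval image of ReLU."""
    lower, upper = x_center - x_radius, x_center + x_radius
    out_lower = np.maximum(lower, 0.0)
    out_upper = np.maximum(upper, 0.0)
    return (out_lower + out_upper) / 2.0, (out_upper - out_lower) / 2.0

# Three cases of Eq. (7): stably inactive, unstable (crossing 0), stably active.
y_center, y_radius = relu_box(np.array([-2.0, 0.5, 3.0]), np.array([1.0, 1.0, 1.0]))
```

The unstable case reproduces Eq. (7) exactly: for the input box $[-0.5, 1.5]$, both the output center and radius equal $\frac{\bar{x} + \delta_x}{2} = 0.75$.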

We thus obtain the expectation

$$
\begin{aligned}
{\mathbb{E}}_{\mathcal{D}}\left\lbrack {\delta }_{y, i}\right\rbrack &= {\int }_{-{\delta }_{x, i}}^{{\delta }_{x, i}}\frac{{\bar{x}}_{i} + {\delta }_{x, i}}{2}\,{p}_{\mathcal{D}}\left( {\bar{x}}_{i}\right) d{\bar{x}}_{i} + {\int }_{{\delta }_{x, i}}^{\infty }{\delta }_{x, i}\,{p}_{\mathcal{D}}\left( {\bar{x}}_{i}\right) d{\bar{x}}_{i} \\
&= \frac{{\delta }_{x, i}}{2}{P}_{\mathcal{D}}\left\lbrack {-{\delta }_{x, i} < {\bar{x}}_{i} < {\delta }_{x, i}}\right\rbrack + {\delta }_{x, i}{P}_{\mathcal{D}}\left\lbrack {{\bar{x}}_{i} > {\delta }_{x, i}}\right\rbrack + {\int }_{-{\delta }_{x, i}}^{{\delta }_{x, i}}\frac{{\bar{x}}_{i}}{2}\,{p}_{\mathcal{D}}\left( {\bar{x}}_{i}\right) d{\bar{x}}_{i},
\end{aligned} \tag{8}
$$

its derivative

$$
\begin{aligned}
\frac{\partial }{\partial {\delta }_{x, i}}{\mathbb{E}}_{\mathcal{D}}\left\lbrack {\delta }_{y, i}\right\rbrack &= \frac{1}{2}{P}_{\mathcal{D}}\left\lbrack {-{\delta }_{x, i} < {\bar{x}}_{i} < {\delta }_{x, i}}\right\rbrack + \frac{{\delta }_{x, i}}{2}\left( {P}_{\mathcal{D}}\left\lbrack {\bar{x}}_{i} = - {\delta }_{x, i}\right\rbrack + {P}_{\mathcal{D}}\left\lbrack {\bar{x}}_{i} = {\delta }_{x, i}\right\rbrack \right) \\
&\quad + {P}_{\mathcal{D}}\left\lbrack {{\bar{x}}_{i} > {\delta }_{x, i}}\right\rbrack - {\delta }_{x, i}{P}_{\mathcal{D}}\left\lbrack {\bar{x}}_{i} = {\delta }_{x, i}\right\rbrack \\
&\quad + \frac{{\delta }_{x, i}}{2}\left( {P}_{\mathcal{D}}\left\lbrack {\bar{x}}_{i} = {\delta }_{x, i}\right\rbrack - {P}_{\mathcal{D}}\left\lbrack {\bar{x}}_{i} = - {\delta }_{x, i}\right\rbrack \right) \\
&= \frac{1}{2}{P}_{\mathcal{D}}\left\lbrack {-{\delta }_{x, i} < {\bar{x}}_{i} < {\delta }_{x, i}}\right\rbrack + {P}_{\mathcal{D}}\left\lbrack {{\bar{x}}_{i} > {\delta }_{x, i}}\right\rbrack > 0,
\end{aligned} \tag{9}
$$

and its curvature

$$
\begin{aligned}
\frac{{\partial }^{2}}{\partial {\delta }_{x, i}^{2}}{\mathbb{E}}_{\mathcal{D}}\left\lbrack {\delta }_{y, i}\right\rbrack &= \frac{1}{2}\left( {P}_{\mathcal{D}}\left\lbrack {\bar{x}}_{i} = - {\delta }_{x, i}\right\rbrack + {P}_{\mathcal{D}}\left\lbrack {\bar{x}}_{i} = {\delta }_{x, i}\right\rbrack \right) - {P}_{\mathcal{D}}\left\lbrack {\bar{x}}_{i} = {\delta }_{x, i}\right\rbrack \\
&= \frac{1}{2}\left( {P}_{\mathcal{D}}\left\lbrack {\bar{x}}_{i} = - {\delta }_{x, i}\right\rbrack - {P}_{\mathcal{D}}\left\lbrack {\bar{x}}_{i} = {\delta }_{x, i}\right\rbrack \right) .
\end{aligned} \tag{10}
$$
Now, we can easily prove Theorem 4.1, restated below for convenience.

Theorem 4.1 (Hyper-Box Growth). Let $y \mathrel{\text{:=}} \sigma \left( x\right) = \max \left( {0, x}\right)$ be a ReLU function and consider box inputs with radius ${\delta }_{x}$ and asymmetrically distributed centers $\bar{x} \sim \mathcal{D}$ such that ${P}_{\mathcal{D}}\left( \bar{x} = - z\right) > {P}_{\mathcal{D}}\left( \bar{x} = z\right), \forall z \in {\mathbb{R}}^{ > 0}$. Then the mean output radius ${\delta }_{y}$ will grow super-linearly in the input radius ${\delta }_{x}$. More formally:
$$
\forall {\delta }_{x},{\delta }_{x}^{\prime } \in {\mathbb{R}}^{ \geq 0} : \;{\delta }_{x}^{\prime } > {\delta }_{x} \Rightarrow {\mathbb{E}}_{\mathcal{D}}\left\lbrack {\delta }_{y}^{\prime }\right\rbrack > {\mathbb{E}}_{\mathcal{D}}\left\lbrack {\delta }_{y}\right\rbrack + \left( {{\delta }_{x}^{\prime } - {\delta }_{x}}\right) \frac{\partial }{\partial {\delta }_{x}}{\mathbb{E}}_{\mathcal{D}}\left\lbrack {\delta }_{y}\right\rbrack . \tag{4}
$$

Proof. We apply Lemma A.1 by substituting an asymmetric center distribution $\mathcal{D}$ , satisfying ${P}_{\mathcal{D}}\left( {\bar{x} = - z}\right) > {P}_{\mathcal{D}}\left( {\bar{x} = z}\right) ,\forall z \in {\mathbb{R}}^{ > 0}$ , into Eq. (6) to obtain:

$$
\frac{{\partial }^{2}}{\partial {\delta }_{x, i}^{2}}{\mathbb{E}}_{\mathcal{D}}\left\lbrack {\delta }_{y, i}\right\rbrack = \frac{1}{2}\left( {{P}_{\mathcal{D}}\left\lbrack {{\bar{x}}_{i} = - {\delta }_{x, i}}\right\rbrack - {P}_{\mathcal{D}}\left\lbrack {{\bar{x}}_{i} = {\delta }_{x, i}}\right\rbrack }\right) > 0.
$$

The theorem then follows directly from this strictly positive curvature.

Example for a Piecewise Uniform Distribution Let us assume the centers $\bar{x} \sim \mathcal{D}$ are distributed according to:

$$
{P}_{\mathcal{D}}\left\lbrack {\bar{x} = z}\right\rbrack = \left\{ {\begin{array}{ll} a, & \text{ if } - l \leq z < 0 \\ b, & \text{ if } 0 \leq z \leq l \\ 0, & \text{ else } \end{array},\;l = \frac{1}{a + b}}\right. \tag{11}
$$

where $a \geq 0$ and $b \geq 0$ denote the two density levels. Then we have by Lemma A.1

$$
{\mathbb{E}}_{\mathcal{D}}\left\lbrack {\delta }_{y}\right\rbrack = \frac{{\delta }_{x}}{2}{P}_{\mathcal{D}}\left\lbrack {-{\delta }_{x} < \bar{x} < {\delta }_{x}}\right\rbrack + {\delta }_{x}{P}_{\mathcal{D}}\left\lbrack {\bar{x} > {\delta }_{x}}\right\rbrack + {\int }_{-{\delta }_{x}}^{{\delta }_{x}}\frac{\bar{x}}{2}p\left\lbrack \bar{x}\right\rbrack d\bar{x} \tag{12}
$$

$$
= \frac{{\delta }_{x}^{2}}{2}\left( {a + b}\right) + b{\delta }_{x}\left( {l - {\delta }_{x}}\right) + \frac{{\delta }_{x}^{2}}{4}\left( {b - a}\right) \tag{13}
$$

$$
= {\delta }_{x}^{2}\frac{a - b}{4} + {\delta }_{x}\frac{b}{a + b}. \tag{14}
$$

We observe quadratic growth for $a > b$ and recover the symmetric special case of ${\mathbb{E}}_{\mathcal{D}}\left\lbrack {\delta }_{y}\right\rbrack = {0.5}{\delta }_{x}$ for $a = b$ .
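As a sanity check of Eq. (14), the closed form can be compared against a direct Monte-Carlo estimate of the expected output radius. The sketch below (function names are ours; the closed form assumes $\delta_x \leq l$) implements both:

```python
import random

def expected_out_radius_mc(a, b, delta_x, n=200_000, seed=0):
    """Monte-Carlo estimate of E[delta_y] for a box ReLU input with radius
    delta_x and centers drawn from the piecewise-uniform density of Eq. (11):
    density a on [-l, 0), density b on [0, l], with l = 1/(a+b)."""
    rng = random.Random(seed)
    l = 1.0 / (a + b)
    mass_neg = a * l  # probability mass of the negative branch
    total = 0.0
    for _ in range(n):
        if rng.random() < mass_neg:
            x = -l * rng.random()      # uniform on (-l, 0]
        else:
            x = l * rng.random()       # uniform on [0, l)
        lo, hi = x - delta_x, x + delta_x
        if hi <= 0:                    # stably inactive ReLU
            dy = 0.0
        elif lo >= 0:                  # stably active ReLU
            dy = delta_x
        else:                          # unstable ReLU: output radius u/2
            dy = hi / 2
        total += dy
    return total / n

def expected_out_radius_closed(a, b, delta_x):
    """Closed form of Eq. (14): delta_x^2 (a-b)/4 + delta_x b/(a+b)."""
    return delta_x**2 * (a - b) / 4 + delta_x * b / (a + b)
```

For $a = b$ the quadratic term vanishes and the estimate approaches the symmetric rate of $0.5\delta_x$.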
## B Additional Experimental Details
In this section, we provide detailed information on the exact experimental setup.

Experimental Setup We implement SABR in PyTorch (Paszke et al., 2019) and use MN-BAB (Ferrari et al., 2022) for certification. We conduct experiments on MNIST (LeCun et al., 2010), CIFAR-10 (Krizhevsky et al., 2009), and TINYIMAGENET (Le & Yang, 2015) for the challenging ${\ell }_{\infty }$ perturbations, using the same 7-layer convolutional architecture CNN7 as prior work (Shi et al., 2021) unless indicated otherwise (see App. B for more details). We choose similar training hyper-parameters as prior work (Shi et al., 2021) and provide more detailed information in App. B.
Datasets We conduct experiments on the MNIST (LeCun et al., 2010), CIFAR-10 (Krizhevsky et al., 2009), and TINYIMAGENET (Le & Yang, 2015) datasets. For TINYIMAGENET and CIFAR-10 we follow Shi et al. (2021) and use random horizontal flips and random cropping as data augmentation during training and normalize inputs after applying perturbations. Following prior work (Xu et al., 2020; Shi et al., 2021), we evaluate CIFAR-10 and MNIST on their test sets and TINYIMAGENET on its validation set, as test set labels are unavailable. Following Xu et al. (2020) and in contrast to Shi et al. (2021), we train and evaluate TINYIMAGENET with images cropped to ${56} \times {56}$ .

Training Hyperparameters We mostly follow the hyperparameter choices from Shi et al. (2021), including their weight initialization and warm-up regularization ${}^{1}$ , and use ADAM (Kingma & Ba, 2015) with an initial learning rate of $5 \times {10}^{-4}$ , decayed twice by a factor of 0.2 . For CIFAR-10 we train 160 and 180 epochs for $\epsilon = 2/{255}$ and $\epsilon = 8/{255}$ , respectively, decaying the learning rate after 120 and 140, and 140 and 160 epochs, respectively. For TINYIMAGENET at $\epsilon = 1/{255}$ we use the same settings as for CIFAR-10 at $\epsilon = 8/{255}$ . For MNIST we train 70 epochs, decaying the learning rate after 50 and 60 epochs. We use gradient norm clipping to an ${\ell }_{2}$ threshold of 10 and choose a batch size of 128 for CIFAR-10 and TINYIMAGENET, and 256 for MNIST. We use ${\ell }_{1}$ regularization with factors according to Table 3. For all datasets, we perform one epoch of standard training $\left( {\epsilon = 0}\right)$ before annealing $\epsilon$ from 0 to its final value over 80 epochs for CIFAR-10 and TINYIMAGENET and over 20 epochs for MNIST. We use an $n = 8$ step PGD attack with an initial step size of $\alpha = {0.5}$ , decayed by a factor of 0.1 after the ${4}^{\text{th }}$ and ${7}^{\text{th }}$ step, to select the center of the propagation region. We use a constant subselection ratio $\lambda$ with values shown in Table 3. For CIFAR-10 at $\epsilon = 2/{255}$ we use shrinking with ${c}_{s} = {0.8}$ (see below).

Table 3: Hyperparameters for the experiments shown in Table 2.
<table><tr><td>Dataset</td><td>$\epsilon$</td><td>${\ell }_{1}$</td><td>$\lambda$</td></tr><tr><td rowspan="2">MNIST</td><td>0.1</td><td>${10}^{-5}$</td><td>0.4</td></tr><tr><td>0.3</td><td>${10}^{-6}$</td><td>0.6</td></tr><tr><td rowspan="2">CIFAR-10</td><td>2/255</td><td>${10}^{-6}$</td><td>0.1</td></tr><tr><td>8/255</td><td>0</td><td>0.7</td></tr><tr><td>TINYIMAGENET</td><td>1/255</td><td>${10}^{-6}$</td><td>0.4</td></tr></table>
---
${}^{1}$ For the ReLU warm-up regularization, the bounds of the small boxes are considered.
---

Figure 8: Standard (Std.) and robust cross-entropy loss, computed with Box (Box) bounds for an adversarially (left) and IBP (right) trained network over subselection ratio $\lambda$ . Note the logarithmic y-scale and different axes.
ReLU-Transformer with Shrinking In addition to standard SABR, outlined in §3, we propose to amplify the Box growth rate reduction (see §4) induced by smaller propagation regions by adapting the ReLU transformer as follows:

$$
{\bar{y}}_{i} = \left\{ {\begin{array}{ll} 0, & \text{ if }{\bar{x}}_{i} + {\delta }_{x, i} \leq 0 \\ {c}_{s}\frac{{\bar{x}}_{i} + {\delta }_{x, i}}{2}, & \text{ elif }{\bar{x}}_{i} - {\delta }_{x, i} \leq 0 \\ {\bar{x}}_{i}, & \text{ else } \end{array},\;{\delta }_{y, i} = \left\{ \begin{array}{ll} 0, & \text{ if }{\bar{x}}_{i} + {\delta }_{x, i} \leq 0 \\ {c}_{s}\frac{{\bar{x}}_{i} + {\delta }_{x, i}}{2}, & \text{ elif }{\bar{x}}_{i} - {\delta }_{x, i} \leq 0 \\ {\delta }_{x, i}, & \text{ else } \end{array}\right. }\right. \tag{15}
$$

We call ${c}_{s}$ the shrinking coefficient, as the output radius of unstable ReLUs is shrunk by multiplication with this factor. We note that we only use this transformer for the CIFAR-10 $\epsilon = 2/{255}$ network discussed in Table 2.

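A minimal sketch of the shrunk ReLU transformer of Eq. (15), operating on lists of box centers and radii (the function name and list-based representation are ours):

```python
def shrunk_relu_box(center, radius, c_s=0.8):
    """Box ReLU transformer with shrinking, following Eq. (15): for unstable
    neurons, output center and radius are both scaled by the shrinking
    coefficient c_s; stable neurons are handled as in standard IBP."""
    out_c, out_r = [], []
    for x, d in zip(center, radius):
        if x + d <= 0:            # stably inactive: output is exactly 0
            out_c.append(0.0)
            out_r.append(0.0)
        elif x - d <= 0:          # unstable: upper bound u = x + d crosses 0
            out_c.append(c_s * (x + d) / 2)
            out_r.append(c_s * (x + d) / 2)
        else:                     # stably active: box passes through unchanged
            out_c.append(x)
            out_r.append(d)
    return out_c, out_r
```

With `c_s = 1` this reduces to the standard Box ReLU transformer described in §2.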
Architectures Similar to prior work (Shi et al., 2021), we consider a 7-layer convolutional architecture, CNN7. The first 5 layers are convolutional layers with filter sizes [64, 64, 128, 128, 128], kernel size 3, strides $\left\lbrack {1,1,2,1,1}\right\rbrack$ , and padding 1. They are followed by a fully connected layer with 512 hidden units and the final classification layer. All but the last layer are followed by batch normalization (Ioffe & Szegedy, 2015) and ReLU activations. For the BN layers, we train using the statistics of the unperturbed data, similar to Shi et al. (2021). During the PGD attack, we use the BN layers in evaluation mode. We further consider a narrower version, CNN7-narrow, which is identical to CNN7 except for using the filter sizes $\left\lbrack {{32},{32},{64},{64},{64}}\right\rbrack$ and a fully connected layer with 216 hidden units.

Hardware and Timings We train and certify all networks using a single NVIDIA RTX 2080Ti, 3090, Titan RTX, or A6000. Training takes roughly 3 and 7 hours for MNIST and CIFAR-10, respectively, with TINYIMAGENET taking roughly two and a half days on a single NVIDIA RTX 2080Ti. For more details see Table 4. Verification with MN-BAB takes around 34 h for MNIST, 28 h for CIFAR-10, and 2 h for TINYIMAGENET on an NVIDIA Titan RTX.

Table 4: SABR training times on a single NVIDIA RTX 2080Ti.
<table><tr><td>Dataset</td><td>$\epsilon$</td><td>Time</td></tr><tr><td rowspan="2">MNIST</td><td>0.1</td><td>3 h 23 min</td></tr><tr><td>0.3</td><td>3 h 20 min</td></tr><tr><td rowspan="2">CIFAR-10</td><td>2/255</td><td>7 h 6 min</td></tr><tr><td>8/255</td><td>7 h 20 min</td></tr><tr><td>TINYIMAGENET</td><td>1/255</td><td>57 h 24 min</td></tr></table>

## C Additional Experimental Results

Figure 9: Comparison of the robust cross-entropy losses computed with Box (Box) centered around unperturbed and adversarial examples for an IBP and SABR trained network over subselection ratio $\lambda$ .
Loss Analysis In Fig. 8, we show the error growth of an adversarially trained (left) and IBP trained (right) model over increasing subselection ratios $\lambda$ . We observe that errors grow only slightly super-linearly rather than exponentially for the adversarially trained network. We trace this back to the large portion of crossing ReLUs (Table 5), especially in later layers, leading to the layer-wise growth being only linear. For the IBP trained model, in contrast, we observe exponential growth across a wide range of propagation region sizes, as the heavy regularization leads to a small portion of active and unstable ReLUs. In Fig. 9, we compare Box errors around the unperturbed sample and the center computed with an adversarial attack, as described in §3. We observe that while the loss is larger around the adversarial centers, especially for small propagation regions, this effect is small compared to the difference between training or certification methods.

ReLU Activation States The portion of ReLU activations which are (stably) active, inactive, or unstable has been identified as an important characteristic of certifiably trained networks (Shi et al., 2021). We evaluate these metrics for IBP, SABR, and adversarially (PGD) trained networks on CIFAR-10 at $\epsilon = 2/{255}$ , using the Box relaxation to compute intermediate bounds, and report the average over all layers and test set samples in Table 5. We observe that, when evaluated on concrete points, the SABR trained network has around 37% more active ReLUs than the IBP trained one and almost as many as the PGD trained one, indicating a significantly smaller level of regularization. While the SABR trained network has around 3-times as many unstable ReLUs as the IBP trained network, when evaluated on the whole input region, it has 20-times fewer than the PGD trained one, highlighting the improved certifiability.
Table 5: Average percentage of active, inactive, and unstable ReLUs for concrete points and boxes depending on training method.
<table><tr><td rowspan="2">Method</td><td colspan="2">Point</td><td colspan="3">Whole Region</td></tr><tr><td>Act</td><td>Inact</td><td>Unst</td><td>Act</td><td>Inact</td></tr><tr><td>IBP</td><td>26.2</td><td>73.8</td><td>1.18</td><td>25.6</td><td>73.2</td></tr><tr><td>SABR</td><td>35.9</td><td>64.1</td><td>3.67</td><td>34.3</td><td>62.0</td></tr><tr><td>PGD</td><td>36.5</td><td>63.5</td><td>65.5</td><td>15.2</td><td>19.3</td></tr></table>
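The activation-state statistics reported in Table 5 follow directly from the pre-activation box bounds of each neuron; a minimal sketch of the classification rule (function name ours):

```python
def relu_states(lower, upper):
    """Classify each ReLU input interval [l, u] as stably active (l > 0),
    stably inactive (u < 0), or unstable (the interval contains 0)."""
    states = []
    for l, u in zip(lower, upper):
        if l > 0:
            states.append("active")
        elif u < 0:
            states.append("inactive")
        else:
            states.append("unstable")
    return states
```

Evaluating on concrete points corresponds to the degenerate case `lower == upper`, where no neuron can be unstable.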
Gradient Alignment To analyze whether SABR training is actually more aligned with standard accuracy and empirical robustness, as indicated by our theory in $§4$ , we conduct the following experiment for CIFAR-10 and $\epsilon = 2/{255}$ : We train one network using SABR with $\lambda = {0.05}$ and one with IBP, corresponding to $\lambda = {1.0}$ . For both, we now compute the gradients ${\nabla }_{\theta }$ of their respective robust training losses ${\mathcal{L}}_{\text{rob }}$ and the cross-entropy loss ${\mathcal{L}}_{\mathrm{{CE}}}$ applied to unperturbed (Std.) and adversarial (Adv.) samples. We then report the mean cosine similarity between these gradients across the whole test set in Table 6. We clearly observe that the SABR loss is much better aligned with both the cross-entropy loss of unperturbed and adversarial samples, corresponding to standard accuracy and empirical robustness, respectively.
Table 6: Cosine similarity between ${\nabla }_{\theta }{\mathcal{L}}_{\text{rob }}$ for IBP and SABR and ${\nabla }_{\theta }{\mathcal{L}}_{\mathrm{{CE}}}$ for adversarial (Adv.) and unperturbed (Std.) examples.
<table><tr><td>Loss</td><td>IBP</td><td>SABR</td></tr><tr><td>Std.</td><td>0.5586</td><td>0.8071</td></tr><tr><td>Adv.</td><td>0.8047</td><td>0.9062</td></tr></table>
NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/_-gZhHVnI3e/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,239 @@
§ CERTIFIED TRAINING: SMALL BOXES ARE ALL YOU NEED

Anonymous Author(s)

Affiliation

Address

email

§ ABSTRACT

We propose the novel certified training method, SABR, which outperforms existing methods across perturbation magnitudes on MNIST, CIFAR-10, and TINYIMAGENET, in terms of both standard and certifiable accuracies. The key insight behind SABR is that propagating interval bounds for a small but carefully selected subset of the adversarial input region is sufficient to approximate the worst-case loss over the whole region while significantly reducing approximation errors. SABR does not only establish a new state-of-the-art on all commonly used benchmarks but, more importantly, points to a new class of certified training methods promising to overcome the robustness-accuracy trade-off.

§ 1 INTRODUCTION

As neural networks are increasingly deployed in safety-critical domains, formal robustness guarantees against adversarial examples (Biggio et al., 2013; Szegedy et al., 2014) are more important than ever. However, despite significant progress, specialized training methods that improve certifiability at the cost of severely reduced accuracies are still required to obtain deterministic guarantees.
Generally, both training and certification methods compute a network's reachable set given an input region defined by an adversary specification and a concrete input, by propagating a symbolic over-approximation of this region through the network (Singh et al., 2018, 2019; Gowal et al., 2018a). Depending on the method used for propagation, both the computational complexity and tightness of this approximation can vary widely. For certified training, an over-approximation of the worst-case loss is computed from this reachable set and then optimized (Mirman et al., 2018; Zhang et al., 2020; Wong et al., 2018). Surprisingly, the least precise propagation methods yield the highest certified accuracies as more precise methods induce significantly harder optimization problems (Jovanovic et al., 2021). However, the large approximation errors incurred by these imprecise methods lead to over-regularization and thus poor accuracy. Combining precise worst-case loss approximations and a tractable optimization problem is thus the core challenge of certified training.
In this work, we tackle this challenge and propose a novel certified training method, SABR, Small Adversarial Bounding Regions, based on the following key insight: by propagating small but carefully selected subsets of the adversarial input region with imprecise methods (i.e., Box), we can obtain both well behaved optimization problems and precise approximations of the worst case loss. This yields networks with complex neuron interactions, enabling higher standard and certified accuracies, while pointing to a new class of certified training methods with significantly reduced regularization. SABR, thus, achieves state-of-the-art standard and certified accuracies across all commonly used settings on the MNIST, CIFAR-10, and TINYIMAGENET datasets.
Main Contributions Our main contributions are:
* A novel certified training method, SABR, reducing over-regularization to improve both standard and certified accuracy (§3).
* A theoretical investigation motivating SABR by deriving new insights into the growth of BOX relaxations during propagation (§4).
* An extensive empirical evaluation demonstrating that SABR outperforms all state-of-the-art certified training methods in terms of both standard and certifiable accuracies on MNIST, CIFAR-10, and TINYIMAGENET (§5).
§ 2 BACKGROUND
In this section, we provide the necessary background for SABR.
Adversarial Robustness Consider a classification model $\mathbf{h} : {\mathbb{R}}^{{d}_{\text{ in }}} \mapsto {\mathbb{R}}^{c}$ that, given an input $\mathbf{x} \in$ $\mathcal{X} \subseteq {\mathbb{R}}^{{d}_{\text{ in }}}$ , predicts numerical scores $\mathbf{y} \mathrel{\text{ := }} \mathbf{h}\left( \mathbf{x}\right)$ for every class. We say that $\mathbf{h}$ is adversarially robust on an ${\ell }_{p}$ -norm ball ${\mathcal{B}}_{p}^{{\epsilon }_{p}}\left( \mathbf{x}\right)$ of radius ${\epsilon }_{p}$ , if it consistently predicts the target class $t$ for all perturbed inputs ${\mathbf{x}}^{\prime } \in {\mathcal{B}}_{p}^{{\epsilon }_{p}}\left( \mathbf{x}\right)$ . More formally, we define adversarial robustness as:
$$
\underset{j}{\arg \max }h{\left( {\mathbf{x}}^{\prime }\right) }_{j} = t,\;\forall {\mathbf{x}}^{\prime } \in {\mathcal{B}}_{p}^{{\epsilon }_{p}}\left( \mathbf{x}\right) \mathrel{\text{ := }} \left\{ {{\mathbf{x}}^{\prime } \in \mathcal{X} \mid {\begin{Vmatrix}\mathbf{x} - {\mathbf{x}}^{\prime }\end{Vmatrix}}_{p} \leq {\epsilon }_{p}}\right\} . \tag{1}
$$
Neural Network Verification To verify that a neural network $\mathbf{h}$ is adversarially robust, several verification techniques have been proposed.
A simple but effective such method is verification with the BOX relaxation (Mirman et al., 2018), also called interval bound propagation (IBP) (Gowal et al., 2018b). Conceptually, we propagate the input region ${\mathcal{B}}_{p}^{{\epsilon }_{p}}\left( \mathbf{x}\right)$ in form of a hyper-box relaxation (each dimension is described as an interval) through the network to compute an over-approximation of its reachable set and then check whether all included outputs yield the correct classification. Given an input region ${\mathcal{B}}_{p}^{{\epsilon }_{p}}\left( \mathbf{x}\right)$ , we over-approximate it as a hyper-box, centered at ${\bar{x}}^{0} \mathrel{\text{ := }} x$ and with radius ${\delta }^{0} \mathrel{\text{ := }} {\epsilon }_{p}$ , such that we have the ${i}^{\text{ th }}$ dimension of the input ${\mathbf{x}}_{i}^{0} \in \left\lbrack {{\bar{x}}_{i}^{0} - {\delta }_{i}^{0},{\bar{x}}_{i}^{0} + {\delta }_{i}^{0}}\right\rbrack$ . Given a linear layer ${\mathbf{f}}_{i}\left( {\mathbf{x}}^{i - 1}\right) = \mathbf{W}{\mathbf{x}}^{i - 1} + \mathbf{b} = : {\mathbf{x}}^{i}$ , we obtain the hyper-box relaxation of its output defined by center ${\overline{\mathbf{x}}}^{i} = \mathbf{W}{\overline{\mathbf{x}}}^{i - 1} + \mathbf{b}$ and radius ${\mathbf{\delta }}^{i} = \left| \mathbf{W}\right| {\mathbf{\delta }}^{i - 1}$ , where $\left| \cdot \right|$ denotes the elementwise absolute value. A ReLU activation $\operatorname{ReLU}\left( {\mathbf{x}}^{i - 1}\right) \mathrel{\text{ := }} \max \left( {0,{\mathbf{x}}^{i - 1}}\right)$ can be relaxed by propagating the lower and upper bound separately, resulting in the output hyper-box with ${\bar{x}}^{i} = \frac{{u}^{i} + {l}^{i}}{2}$ and ${\delta }^{i} = \frac{{u}^{i} - {l}^{i}}{2}$ where ${\mathbf{l}}^{i} = \operatorname{ReLU}\left( {{\overline{\mathbf{x}}}^{i - 1} - {\mathbf{\delta }}^{i - 1}}\right)$ and ${\mathbf{u}}^{i} = \operatorname{ReLU}\left( {{\overline{\mathbf{x}}}^{i - 1} + {\mathbf{\delta }}^{i - 1}}\right)$ . 
We can now prove robustness by showing that the upper bound on each logit difference ${y}_{i}^{\Delta } \mathrel{\text{ := }} {y}_{i} - {y}_{t}$ , $\forall i \neq t$ , is smaller than 0 .

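The Box propagation rules above can be sketched as follows; the list-based helpers are our own illustration, not the implementation used in the paper:

```python
def box_linear(center, radius, W, b):
    """Propagate a hyper-box through a linear layer x -> Wx + b:
    new center is W c + b; new radius is |W| r (elementwise absolute value)."""
    n_out = len(W)
    n_in = len(center)
    c_out = [sum(W[i][j] * center[j] for j in range(n_in)) + b[i] for i in range(n_out)]
    r_out = [sum(abs(W[i][j]) * radius[j] for j in range(n_in)) for i in range(n_out)]
    return c_out, r_out

def box_relu(center, radius):
    """Propagate a hyper-box through ReLU by clamping both bounds:
    l' = ReLU(c - r), u' = ReLU(c + r), then re-center the output box."""
    lo = [max(0.0, c - r) for c, r in zip(center, radius)]
    hi = [max(0.0, c + r) for c, r in zip(center, radius)]
    c_out = [(u + l) / 2 for l, u in zip(lo, hi)]
    r_out = [(u - l) / 2 for l, u in zip(lo, hi)]
    return c_out, r_out
```

Alternating these two transformers over the layers yields the IBP reachable set whose logit-difference upper bounds are checked for certification.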
Beyond Box, more precise verification approaches track more relational information at the cost of increased computational complexity (Palma et al., 2022; Wang et al., 2021; Ferrari et al., 2022).
Training for Robustness For neural networks to be certifiably robust, special training is necessary. Given a data distribution $\left( {\mathbf{x},t}\right) \sim \mathcal{D}$ , standard training generally aims to find a network parametrization $\theta$ that minimizes the expected cross-entropy loss:
$$
{\theta }_{\mathrm{{std}}} = \underset{\theta }{\arg \min }{\mathbb{E}}_{\mathcal{D}}\left\lbrack {{\mathcal{L}}_{\mathrm{{CE}}}\left( {{\mathbf{h}}_{\mathbf{\theta }}\left( \mathbf{x}\right) ,t}\right) }\right\rbrack ,\;\text{ with }\;{\mathcal{L}}_{\mathrm{{CE}}}\left( {\mathbf{y},t}\right) = \ln \left( {1 + \mathop{\sum }\limits_{{i \neq t}}\exp \left( {{y}_{i} - {y}_{t}}\right) }\right) . \tag{2}
$$
When training for robustness, we, instead, wish to minimize the expected worst case loss around the data distribution, leading to the min-max optimization problem:
$$
{\theta }_{\mathrm{{rob}}} = \underset{\theta }{\arg \min }{\mathbb{E}}_{\mathcal{D}}\left\lbrack {\mathop{\max }\limits_{{{\mathbf{x}}^{\prime } \in {\mathcal{B}}_{p}^{{\epsilon }_{p}}\left( \mathbf{x}\right) }}{\mathcal{L}}_{\mathrm{{CE}}}\left( {{\mathbf{h}}_{\mathbf{\theta }}\left( {\mathbf{x}}^{\prime }\right) ,t}\right) }\right\rbrack . \tag{3}
$$
Unfortunately, solving the inner maximization problem is generally intractable. Therefore, it is commonly under- or over-approximated, yielding adversarial and certified training, respectively.
Adversarial Training Adversarial training optimizes a lower bound on the inner optimization objective in Eq. (3) by first computing concrete examples ${\mathbf{x}}^{\prime } \in {\mathcal{B}}_{p}^{{\epsilon }_{p}}\left( \mathbf{x}\right)$ maximizing the loss term and then optimizing the network parameters $\mathbf{\theta }$ for these samples. While networks trained this way typically exhibit good empirical robustness, they remain hard to formally verify and are sometimes also vulnerable to stronger or different attacks (Tramèr et al., 2020; Croce & Hein, 2020).

Figure 1: Illustration of SABR training. Instead of propagating a Box approximation (dashed box) of the whole input region (red and green shapes in input space), SABR propagates a small subset of this region (solid box), selected to contain the adversarial example (black $\times$ ) and thus the misclassified region (red). The smaller Box accumulates much fewer approximation errors during propagation, leading to a significantly smaller output relaxation, which induces much less regularization (medium blue arrow) than training with the full region (large blue arrow), but more than training with just the adversarial example (small blue arrow).

Certified Training Certified training optimizes an upper bound on the inner maximization objective in Eq. (3), obtained via a bound propagation method. These methods compute an upper bound ${\mathbf{u}}_{{\mathbf{y}}^{\Delta }}$ on the logit differences ${\mathbf{y}}^{\Delta } \mathrel{\text{ := }} \mathbf{y} - {y}_{t}\mathbf{1}$ to obtain the robust cross-entropy loss ${\mathcal{L}}_{\mathrm{{CE}},\operatorname{rob}}\left( {{\mathcal{B}}_{p}^{{\epsilon }_{p}}\left( \mathbf{x}\right) ,t}\right) = {\mathcal{L}}_{\mathrm{{CE}}}\left( {{\mathbf{u}}_{{\mathbf{y}}^{\Delta }},t}\right)$ . Surprisingly, using the imprecise Box relaxation (Mirman et al., 2018; Gowal et al., 2018b) (denoted IBP) consistently produces better results than methods based on tighter abstractions (Zhang et al., 2020; Balunovic & Vechev, 2020; Wong et al., 2018). Jovanovic et al. (2021) trace this back to the optimization problems induced by the more precise methods becoming intractable to solve. While the heavily regularized, certifiably trained networks are amenable to certification, they suffer from severely reduced (standard) accuracies. Overcoming this robustness-accuracy trade-off remains a key challenge of robust machine learning.

§ 3 METHOD - SMALL REGIONS FOR CERTIFIED TRAINING
We address this challenge by proposing a novel certified training method, SABR (Small Adversarial Bounding Regions), yielding networks that are amenable to certification and retain relatively high standard accuracies. We leverage the key insight that computing an over-approximation of the worst-case loss for a small but carefully selected subset of the input region ${\mathcal{B}}_{p}^{{\epsilon }_{p}}\left( \mathbf{x}\right)$ often still captures the actual worst-case loss, while significantly reducing approximation errors.

We illustrate this in Fig. 1. Existing certified training methods propagate the whole input region (dashed box in the input panel), yielding quickly growing approximation errors. The resulting imprecise over-approximations of the worst-case loss (compare the red and green regions to the dashed box in the output panel) cause significant over-regularization (large blue arrow). Adversarial training methods, in contrast, only consider individual points ( $\times$ in Fig. 1) and fail to capture the worst-case loss, leading to insufficient regularization (small blue arrow in the output panel). We tackle this problem by propagating small, adversarially chosen subsets of the input region (solid box in the input panel), which we call propagation regions. This yields significantly reduced approximation errors and thus a more precise, although not necessarily sound, over-approximation of the loss (see the solid box in the output panel). The resulting intermediate level of regularization (medium blue arrow) allows us to train networks that are both robust and accurate.

We observe that, depending on the size of the propagated region, SABR can be seen as a continuous interpolation between adversarial training for infinitesimally small regions and standard certified training for the full input region.
Selecting the Propagation Region We parametrize the propagation region as an ${\ell }_{p}$ -norm ball ${\mathcal{B}}_{p}^{{\tau }_{p}}\left( {\mathbf{x}}^{\prime }\right)$ with center ${\mathbf{x}}^{\prime }$ and radius ${\tau }_{p} \leq {\epsilon }_{p} - {\begin{Vmatrix}\mathbf{x} - {\mathbf{x}}^{\prime }\end{Vmatrix}}_{p}$ , ensuring that we indeed propagate a subset of the original region ${\mathcal{B}}_{p}^{{\epsilon }_{p}}\left( \mathbf{x}\right)$ . For notational clarity, we drop the subscript $p$ . We first choose $\tau = {\lambda \epsilon }$ by scaling the original perturbation radius $\epsilon$ with the subselection ratio $\lambda \in (0,1\rbrack$ . We then select ${\mathbf{x}}^{\prime }$ by first conducting a PGD attack, yielding the preliminary center ${\mathbf{x}}^{ * }$ , and then ensuring that the obtained region is fully contained in the original one by projecting ${\mathbf{x}}^{ * }$ onto ${\mathcal{B}}^{\epsilon - \tau }\left( \mathbf{x}\right)$ to obtain ${\mathbf{x}}^{\prime }$ . We show this in Fig. 2.

< g r a p h i c s >
Figure 2: Illustration of SABR's propagation region selection process.
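For $\ell_\infty$ perturbations, the projection onto ${\mathcal{B}}^{\epsilon - \tau}(\mathbf{x})$ reduces to elementwise clipping; a minimal sketch of the selection step (function name ours; the PGD attack producing the preliminary center is assumed to be run separately):

```python
def select_propagation_region(x, x_adv, eps, lam):
    """SABR propagation-region selection for l_inf balls: shrink the radius to
    tau = lam * eps and clip the PGD-found center x_adv into B^{eps - tau}(x)
    so that the small box stays inside the original perturbation region."""
    tau = lam * eps
    shift = eps - tau  # maximum allowed displacement of the new center
    center = [min(max(xa, xi - shift), xi + shift) for xi, xa in zip(x, x_adv)]
    return center, tau
```

By construction, `center[i] ± tau` never leaves `x[i] ± eps`, so the small box is a genuine subset of the adversarial input region.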
Propagation Method While SABR can be instantiated with any certified training method, we choose Box propagation (DIFFAI (Mirman et al., 2018) or IBP (Gowal et al., 2018b)) to obtain well-behaved optimization problems (Jovanovic et al., 2021).

§ 4 UNDERSTANDING SABR: ROBUST LOSS AND GROWTH OF SMALL BOXES
In this section, we aim to uncover the reasons behind SABR's success. Towards this, we first analyze the relationship between robust loss and over-approximation size before investigating the growth of the BOX approximation with propagation region size.
Robust Loss Analysis Certified training typically optimizes an over-approximation of the worst-case cross-entropy loss ${\mathcal{L}}_{\text{ CE.rob }}$ , computed via the softmax of the upper-bound on the logit differences ${\mathbf{y}}^{\Delta } \mathrel{\text{ := }} \mathbf{y} - {y}_{t}$ . When training with the BOX relaxation and assuming the target class $t$ , w.l.o.g., we obtain ${\mathbf{y}}^{\Delta } \in \left\lbrack {{\overline{\mathbf{y}}}^{\Delta } - {\mathbf{\delta }}^{\Delta },{\overline{\mathbf{y}}}^{\Delta } + {\mathbf{\delta }}^{\Delta }}\right\rbrack$ and the robust cross entropy loss ${\mathcal{L}}_{\mathrm{{CE}},\text{ rob }}\left( \mathbf{x}\right) =$ $\ln \left( {1 + \mathop{\sum }\limits_{{i = 2}}^{n}{e}^{{\bar{y}}_{i}^{\Delta } + {\delta }_{i}^{\Delta }}}\right)$ . Further, we note that the BOX relaxations of many functions preserve the box centers, i.e., ${\overline{\mathbf{x}}}^{i} = \mathbf{f}\left( {\overline{\mathbf{x}}}^{i - 1}\right)$ . Only unstable ReLUs, i.e., ReLUs containing 0 in their input bounds, introduce a slight shift. However, these are empirically few in certifiably trained networks (see Table 5). We can thus decompose the logit differences determining the robust loss into an accuracy term ${\overline{\mathbf{y}}}^{\Delta }$ , corresponding to the misclassification margin of the adversarial example ${\mathbf{x}}^{\prime }$ at the center of the propagation region, and a robustness term ${\delta }^{\Delta }$ , bounding the difference to the actual worst-case logits. As these terms generally represent conflicting objectives, robustness and accuracy are balanced to minimize the robust optimization objective. Consequently, reducing the regularization induced by the robustness term will bias the optimization process towards standard accuracy.
Box Growth We investigate the growth of BOX relaxations for an $L$-layer network with linear layers ${\mathbf{f}}_{i}$ and ReLU activation functions $\mathbf{\sigma }$. Given a BOX input with radius ${\delta }^{i - 1}$ and center distribution ${\bar{x}}^{i - 1} \sim \mathcal{D}$, we define the per-layer growth rate ${\kappa }^{i} = \frac{{\mathbb{E}}_{\mathcal{D}}\left\lbrack {\delta }^{i}\right\rbrack }{{\delta }^{i - 1}}$ as the ratio of the expected output radius to the input radius.
Figure 3: Input distribution for last ReLU layer depending on training method.
For linear layers with weight matrix $\mathbf{W}$, we obtain an output radius ${\delta }^{i} = \left| \mathbf{W}\right| {\delta }^{i - 1}$ and thus a constant growth rate ${\kappa }^{i}$, corresponding to the row-wise ${\ell }_{1}$ norm $\| \mathbf{W}_{j,\cdot} \|_{1}$ of the weight matrix. Empirically, we find most linear and convolutional layers to exhibit growth rates between 10 and 100.
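A small numpy sketch illustrates this: propagating a box through a dense layer, the per-neuron growth rate is exactly the row-wise $\ell_1$ norm of $\mathbf{W}$ (the layer size and Gaussian weights below are arbitrary choices for illustration):

```python
import numpy as np

def box_linear(center, radius, W, b):
    """Propagate the box [center - radius, center + radius] through
    x -> W x + b: the output center is W c + b, the radius |W| @ radius."""
    return W @ center + b, np.abs(W) @ radius

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))
_, r_out = box_linear(np.zeros(64), np.ones(64), W, np.zeros(64))
growth = r_out  # input radius is 1, so this is the per-neuron growth rate
```

For this 64-wide Gaussian layer the mean growth rate already lands well inside the 10-100 range quoted above.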
For ReLU layers ${\mathbf{x}}^{i} = \sigma \left( {\mathbf{x}}^{i - 1}\right)$, the growth rate depends on the location and size of the inputs. Shi et al. (2021) assume the input box centers ${\bar{\mathbf{x}}}^{i - 1}$ to be symmetrically distributed around 0, i.e., ${P}_{\mathcal{D}}\left( {\bar{x}}^{i - 1}\right) = {P}_{\mathcal{D}}\left( {-{\bar{x}}^{i - 1}}\right)$, and obtain a constant growth rate of ${\kappa }^{i} = {0.5}$. While this assumption holds at initialization, trained networks tend to have more inactive than active ReLUs (see Table 5), indicating asymmetric distributions with more negative inputs (see also Fig. 3). We investigate this more realistic setting. When input radii ${\delta }^{i - 1} \approx 0$, active neurons will stay stably active, yielding ${\delta }^{i} = {\delta }^{i - 1}$, and inactive neurons will stay stably inactive, yielding ${\delta }^{i} = 0$. We thus obtain a growth rate equal to the portion of active neurons. In the other extreme, ${\delta }^{i - 1} \rightarrow \infty$, all neurons become unstable with ${\bar{x}}^{i - 1} \ll {\delta }^{i - 1}$, yielding ${\delta }^{i} \approx {0.5}{\delta }^{i - 1}$ and thus a constant growth rate of ${\kappa }^{i} = {0.5}$. Assuming pointwise asymmetry favouring negative inputs, i.e., $p\left( {\bar{x}}^{i - 1} = -z \right) > p\left( {\bar{x}}^{i - 1} = z \right), \forall z \in {\mathbb{R}}^{ > 0}$, we show that between these two extremes, output radii grow strictly super-linearly in the input size:
Figure 4: Actual (purple) mean output size growth and a linear approximation (orange) for a ReLU layer with $\bar{x} \sim \mathcal{N}(\mu = -1.0, \sigma = \sqrt{0.5})$.
Theorem 4.1 (Hyper-Box Growth). Let $y \mathrel{\text{:=}} \sigma(x) = \max(0, x)$ be a ReLU function and consider box inputs with radius ${\delta }_{x}$ and asymmetrically distributed centers $\bar{x} \sim \mathcal{D}$ such that ${P}_{\mathcal{D}}\left( \bar{x} = -z \right) > {P}_{\mathcal{D}}\left( \bar{x} = z \right), \forall z \in {\mathbb{R}}^{ > 0}$. Then the mean output radius ${\delta }_{y}$ will grow super-linearly in the input radius ${\delta }_{x}$. More formally:
$$
\forall {\delta }_{x},{\delta }_{x}^{\prime } \in {\mathbb{R}}^{ \geq 0} : \;{\delta }_{x}^{\prime } > {\delta }_{x} \Rightarrow {\mathbb{E}}_{\mathcal{D}}\left\lbrack {\delta }_{y}^{\prime }\right\rbrack > {\mathbb{E}}_{\mathcal{D}}\left\lbrack {\delta }_{y}\right\rbrack + \left( {{\delta }_{x}^{\prime } - {\delta }_{x}}\right) \frac{\partial }{\partial {\delta }_{x}}{\mathbb{E}}_{\mathcal{D}}\left\lbrack {\delta }_{y}\right\rbrack . \tag{4}
$$
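Theorem 4.1 can be checked empirically by Monte Carlo simulation with the center distribution of Fig. 4; the sample size and radii below are arbitrary choices. Super-linearity of $\mathbb{E}_{\mathcal{D}}[\delta_y]$ implies that the secant slope $\mathbb{E}_{\mathcal{D}}[\delta_y]/\delta_x$ increases with $\delta_x$:

```python
import numpy as np

rng = np.random.default_rng(0)
# asymmetric center distribution favouring negative inputs (as in Fig. 4)
x_bar = rng.normal(-1.0, np.sqrt(0.5), size=1_000_000)

def mean_output_radius(dx):
    """ReLU([l, u]) = [max(l, 0), max(u, 0)]; the radius is half the width."""
    return np.mean(np.maximum(x_bar + dx, 0) - np.maximum(x_bar - dx, 0)) / 2

# secant slopes E[delta_y] / delta_x for increasing input radii
slopes = [mean_output_radius(dx) / dx for dx in (0.5, 1.0, 2.0, 4.0)]
```

The slopes increase monotonically towards the limit growth rate of 0.5, matching the strictly super-linear regime between the two extremes discussed above.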
We defer a proof to App. A and illustrate this behavior in Fig. 4. Multiplying all layer-wise growth rates, we obtain the overall growth rate $\kappa = \mathop{\prod }\limits_{{i = 2}}^{L}{\kappa }^{i}$ , which is exponential in network depth and super-linear in input radius. When not specifically training with the BOX relaxation, we empirically observe that the large growth factors of linear layers dominate the shrinking effect of the ReLU layers, leading to a quick exponential growth in network depth. Further, for both SABR and IBP trained networks, the super-linear growth in input radius empirically manifests as exponential behavior (see Figs. 7 and 8). Using SABR, we thus expect the regularization induced by the robustness term to decrease super-linearly, and empirically even exponentially, with subselection ratio $\lambda$ , explaining the significantly higher accuracies compared to IBP.
§ 5 EVALUATION
In this section, we compare SABR to existing certified training methods in the challenging ${\ell }_{\infty }$ perturbation setting, deferring a detailed description of the experimental setup to App. B.
Main Results We compare SABR to state-of-the-art certified training methods in Table 2 and Fig. 5, reporting the best results achieved with a given method on any architecture.
Figure 5: Certified over standard accuracy for different certified training methods. The upper right-hand corner is best.
In Fig. 5, we show certified over standard accuracy (upper right-hand corner is best) and observe that SABR dominates all other methods, achieving both the highest certified and standard accuracy across all settings. Methods striving to balance accuracy and regularization by bridging the gap between provable and adversarial training (Balunovic & Vechev, 2020; Palma et al., 2022) perform only slightly worse than SABR at small perturbation radii, but much worse at large radii, e.g., attaining only ${27.5}\%$ and ${27.9}\%$ certified accuracy for CIFAR-10 at $\epsilon = 8/{255}$ compared to ${35.25}\%$ for SABR. Similarly, methods focusing only on certified accuracy by directly optimizing over-approximations of the worst-case loss (Gowal et al., 2018b; Zhang et al., 2020) tend to perform well at large perturbation radii, but poorly at small perturbation radii, e.g., on CIFAR-10 at $\epsilon = 2/{255}$, SABR improves certified accuracy to ${62.6}\%$, up from ${52.9}\%$ and ${54.0}\%$.
In contrast to certified training, Zhang et al. (2021) propose an architecture with inherent ${\ell }_{\infty }$-robustness properties. While they attain higher certified accuracies on CIFAR-10 at $\epsilon = 8/{255}$, their training is notoriously hard (Zhang et al., 2021; 2022), yielding low standard accuracies of, e.g., only ${60.6}\%$ compared to ${79.52}\%$ at $\epsilon = 2/{255}$. Further, robustness can only be obtained against one perturbation type at a time.
Table 1: Comparison of standard (Std.) and certified (Cert.) accuracy [%] with the ${\ell }_{\infty }$-distance Net (Zhang et al., 2022).

| Dataset | $\epsilon$ | $\ell_\infty$-distance Net Std. | $\ell_\infty$-distance Net Cert. | SABR Std. | SABR Cert. |
|---|---|---|---|---|---|
| MNIST | 0.1 | 98.93 | 97.95 | 99.25 | 98.06 |
| MNIST | 0.3 | 98.56 | 93.20 | 98.82 | 93.38 |
| CIFAR-10 | 2/255 | 60.61 | 54.12 | 79.52 | 62.57 |
| CIFAR-10 | 8/255 | 54.30 | 40.06 | 52.00 | 35.25 |
Certification Method and Propagation Region Size To analyze the interaction between certification method precision and propagation region size, we train a range of models with subselection ratios $\lambda$ varying from 0.0125 to 1.0 and analyze them with verification methods of increasing precision (BOX, DEEPPOLY, MN-BAB) and a 50-step PGD attack (Madry et al., 2018) with 5 random restarts and the targeted logit margin loss (Carlini & Wagner, 2017). We illustrate results in Fig. 6 and observe that standard and adversarial accuracies increase with decreasing $\lambda$, as regularization decreases. For $\lambda = 1$, i.e., IBP training, we observe little difference between the verification methods. However, as we decrease $\lambda$, the BOX verified accuracy decreases quickly, despite BOX relaxations being used during training.
Figure 6: Standard, adversarial, and certified accuracy depending on the certification method for 1000 CIFAR-10 samples at $\epsilon = 2/{255}$ .
Table 2: Comparison of the standard (Acc.) and certified (Cert. Acc.) accuracy for different certified training methods on the full MNIST, CIFAR-10, and TINYIMAGENET test sets. We use MN-BAB (Ferrari et al., 2022) for certification and report other results from the relevant literature.

| Dataset | ${\epsilon }_{\infty }$ | Training Method | Source | Acc. [%] | Cert. Acc. [%] |
|---|---|---|---|---|---|
| MNIST | 0.1 | COLT | Balunovic & Vechev (2020) | 99.2 | 97.1 |
| MNIST | 0.1 | CROWN-IBP | Zhang et al. (2020) | 98.83 | 97.76 |
| MNIST | 0.1 | IBP | Shi et al. (2021) | 98.84 | 97.95 |
| MNIST | 0.1 | SABR | this work | 99.25 | 98.06 |
| MNIST | 0.3 | COLT | Balunovic & Vechev (2020) | 97.3 | 85.7 |
| MNIST | 0.3 | CROWN-IBP | Zhang et al. (2020) | 98.18 | 92.98 |
| MNIST | 0.3 | IBP | Shi et al. (2021) | 97.67 | 93.10 |
| MNIST | 0.3 | SABR | this work | 98.82 | 93.38 |
| CIFAR-10 | 2/255 | COLT | Balunovic & Vechev (2020) | 78.4 | 60.5 |
| CIFAR-10 | 2/255 | CROWN-IBP | Zhang et al. (2020) | 71.52 | 53.97 |
| CIFAR-10 | 2/255 | IBP | Shi et al. (2021) | 66.84 | 52.85 |
| CIFAR-10 | 2/255 | IBP-R | Palma et al. (2022) | 78.19 | 61.97 |
| CIFAR-10 | 2/255 | SABR | this work | 79.52 | 62.57 |
| CIFAR-10 | 8/255 | COLT | Balunovic & Vechev (2020) | 51.7 | 27.5 |
| CIFAR-10 | 8/255 | CROWN-IBP | Xu et al. (2020) | 46.29 | 33.38 |
| CIFAR-10 | 8/255 | IBP | Shi et al. (2021) | 48.94 | 34.97 |
| CIFAR-10 | 8/255 | IBP-R | Palma et al. (2022) | 51.43 | 27.87 |
| CIFAR-10 | 8/255 | SABR | this work | 52.00 | 35.25 |
| TINYIMAGENET | 1/255 | CROWN-IBP | Shi et al. (2021) | 25.62 | 17.93 |
| TINYIMAGENET | 1/255 | IBP | Shi et al. (2021) | 25.92 | 17.87 |
| TINYIMAGENET | 1/255 | SABR | this work | 28.64 | 20.34 |
In contrast, using the most precise method, MN-BAB, we initially observe increasing certified accuracies, as the reduced regularization yields more accurate networks, before the level of regularization becomes insufficient for certification. While DEEPPOLY loses precision less quickly than BOX, it cannot benefit from more accurate networks. This indicates that the increased accuracy, enabled by the reduced regularization, may rely on complex neuron interactions only captured by MN-BAB.
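The PGD attack with logit-margin loss used for the adversarial accuracies above can be sketched as follows; for transparency we attack a linear classifier so that the margin gradient is available in closed form (the model, function name, and data are illustrative stand-ins, not the paper's setup, and real image attacks additionally clamp to the valid pixel range):

```python
import numpy as np

def pgd_margin_attack(W, b, x, y, eps, steps=50, restarts=5):
    """l_inf PGD maximizing the (Carlini & Wagner style) logit-margin
    loss, here against a linear classifier with logits = W x + b."""
    alpha = 2.5 * eps / steps
    rng = np.random.default_rng(0)
    best = x.copy()
    for _ in range(restarts):
        delta = rng.uniform(-eps, eps, size=x.shape)  # random restart
        for _ in range(steps):
            logits = W @ (x + delta) + b
            margins = logits - logits[y]
            margins[y] = -np.inf
            j = int(np.argmax(margins))  # strongest wrong class
            grad = W[j] - W[y]           # gradient of logit_j - logit_y
            delta = np.clip(delta + alpha * np.sign(grad), -eps, eps)
        if np.argmax(W @ (x + delta) + b) != y:
            best = x + delta             # keep a successful restart
    return best

W, b = np.eye(2), np.zeros(2)
adv = pgd_margin_attack(W, b, np.array([1.0, 0.6]), y=0, eps=0.3)
```

Here the clean margin (0.4) exceeds neither perturbation budget direction alone, but the combined $\ell_\infty$ perturbation flips the prediction.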
This qualitatively different behavior depending on the precision of the certification method highlights the importance of recent advances in neural network verification for certified training. Even more importantly, these results clearly show that provably robust networks do not necessarily require the level of regularization introduced by IBP training.
Loss Analysis In Fig. 7, we compare the robust loss of a SABR and an IBP trained network across different propagation region sizes (all centered around the original sample), depending on the bound propagation method used. When propagating the full input region ($\lambda = 1$), the SABR trained network yields a much higher robust loss than the IBP trained one. However, when comparing the respective training subselection ratios, $\lambda = 0.05$ for SABR and $\lambda = 1.0$ for IBP, SABR yields significantly smaller training losses, allowing the SABR trained network to reach a much lower standard loss. Finally, we observe that the losses clearly grow super-linearly with increasing propagation region size (note the logarithmic scaling of the y-axis), agreeing well with our theoretical results in §4.
Figure 7: Standard and robust cross-entropy loss, computed with BOX (Box) and DEEPPOLY (DP) bounds, for an IBP and a SABR trained network over subselection ratio $\lambda$.
§ 6 CONCLUSION
We introduced a novel certified training method called SABR (Small Adversarial Bounding Regions), based on the key insight that propagating small but carefully selected subsets of the input region combines small approximation errors, and thus little regularization, with well-behaved optimization problems. This allows SABR trained networks to outperform all existing certified training methods on all commonly used benchmarks in terms of both standard and certified accuracy. Even more importantly, SABR lays the foundation for a new class of certified training methods promising to overcome the robustness-accuracy trade-off and enabling the training of networks that are both accurate and certifiably robust.
NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/_Fl5G8NCA2/Initial_manuscript_md/Initial_manuscript.md
# Assessing Performance and Fairness Metrics in Face Recognition - Bootstrap Methods
Anonymous Author(s)
Affiliation
Address
email
## Abstract
The ROC curve is the major tool for assessing not only the performance but also the fairness properties of a similarity scoring function in Face Recognition. In order to draw reliable conclusions based on empirical ROC analysis, it is necessary to accurately evaluate the uncertainty related to statistical versions of the ROC curves of interest. For this purpose, we explain in this paper that, because the True/False Acceptance Rates are of the form of U-statistics in the case of similarity scoring, the naive bootstrap approach is not valid here and a dedicated recentering technique must be used instead. This is illustrated on real face image data for several ROC-based metrics, including popular fairness metrics.
## 1 Face Recognition - Performance & Fairness
The deployment of Face Recognition (FR) systems brings with it a pressing demand for methodological tools to assess their trustworthiness. The reliability of FR systems concerns their estimated performance of course, but also their properties regarding fairness: ideally, the system should exhibit approximately the same performance independently of the sensitive group (determined by e.g. gender, age group, race) to which it is applied. While the benchmarking of FR systems has until now essentially been reduced to an ad-hoc evaluation of performance metrics (i.e. ROC analysis) on a reference face image dataset, the purpose of this paper is to explain, and illustrate on real data, how the bootstrap methodology can be used to quantify the uncertainty/variability of the performance metrics, as well as that of some popular fairness metrics. Hopefully, this paves the way for a more valuable and trustworthy comparative analysis of the merits and drawbacks of FR systems.
In FR, the usual objective is to learn an encoder function $f : {\mathbb{R}}^{h \times w \times c} \rightarrow {\mathbb{R}}^{d}$ that embeds the images in a way that brings same identities closer together. Each image is of size $(h, w)$, while $c$ corresponds to the color channel dimension. It is worth noting that a pre-processing detection step (finding a face within an image) is required to make all face images have the same size $(h, w)$. For an image $x \in {\mathbb{R}}^{h \times w \times c}$, its latent representation $f(x) \in {\mathbb{R}}^{d}$ is called the face embedding of $x$.
Since the advent of deep learning, the encoder $f$ is a deep Convolutional Neural Network (CNN) whose parameters are learned on a huge FR dataset, made of face images and identity labels. In brief, the training consists in taking all images ${x}_{i}^{k}$, labelled with identity $k$, computing their embeddings $f({x}_{i}^{k})$ and adjusting the parameters of $f$ so that those embeddings are as close as possible to each other (for a given similarity measure) and as far as possible from the embeddings of any identity $l \neq k$. The usual similarity measure is the cosine similarity, defined as $s({x}_{i},{x}_{j}) \mathrel{\text{:=}} f{({x}_{i})}^{\top }f({x}_{j}) / \left( \| f({x}_{i}) \| \cdot \| f({x}_{j}) \| \right)$ for two images ${x}_{i},{x}_{j}$, with $\parallel \cdot \parallel$ standing for the usual Euclidean norm. In some early works [Schroff et al., 2015], the Euclidean metric $\| f({x}_{i}) - f({x}_{j}) \|$ was also used. In the rest of this document, we discard the notation $f$ for the encoder and only use the similarity $s$ (which subsumes the encoder), as we are not interested in the encoder training.
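The cosine similarity above is straightforward to compute from two embeddings; a minimal numpy sketch with toy vectors in place of real face embeddings:

```python
import numpy as np

def cosine_similarity(e1, e2):
    """s(x_i, x_j) computed on two face embeddings f(x_i), f(x_j)."""
    return float(e1 @ e2 / (np.linalg.norm(e1) * np.linalg.norm(e2)))

# collinear embeddings score 1, orthogonal embeddings score 0
same = cosine_similarity(np.array([1.0, 2.0]), np.array([2.0, 4.0]))
diff = cosine_similarity(np.array([1.0, 0.0]), np.array([0.0, 1.0]))
```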
### 1.1 Performance Evaluation in Face Recognition
There are generally two FR use cases: identification, which consists in finding the specific identity of a probe face among several previously enrolled identities, and verification (which we focus on throughout this paper), which aims at deciding whether two face images correspond to the same identity or not. In practice, the evaluation of a trained FR model is achieved using an evaluation dataset, where all possible pairs $\left( {{x}_{i},{x}_{j}}\right)$ of face images are considered. Then, an operating point $t \in \left\lbrack {-1,1}\right\rbrack$ (threshold of acceptance) is chosen to classify the pair $\left( {{x}_{i},{x}_{j}}\right)$ as genuine (same identity) if $s\left( {{x}_{i},{x}_{j}}\right) > t$ and impostor (distinct identities) otherwise. In the following, we describe the statistical measures for evaluating a FR model, given an evaluation dataset.
Assuming that there are $K$ distinct identities, the evaluation dataset can be modeled by a random variable $(X, y) \in {\mathbb{R}}^{h \times w \times c} \times \{ 1,\ldots , K\}$. We denote by $\mathbf{P}$ the corresponding probability law. For $1 \leq k \leq K$, we assume that the identities are equiprobable, i.e. $\mathbf{P}(y = k) = \frac{1}{K}$. $X$ is determined by its conditional distributions ${X}^{k} \mathrel{\text{:=}} (X \mid y = k) \sim {\mathcal{I}}_{k}$ and we consider that ${X}^{k},{X}^{l}$ are independent if $k \neq l$.
Let $\left( {{X}_{1},{y}_{1}}\right)$ and $\left( {{X}_{2},{y}_{2}}\right)$ be two independent random variables with law $\mathbf{P}$ . We distinguish between the False Negative Rate (FNR) and the True Negative Rate (TNR), respectively defined by:
$$
F\left( t\right) = \mathbf{P}\left( {s\left( {{X}_{1},{X}_{2}}\right) \leq t \mid {y}_{1} = {y}_{2}}\right) \;\text{ and }\;G\left( t\right) = \mathbf{P}\left( {s\left( {{X}_{1},{X}_{2}}\right) \leq t \mid {y}_{1} \neq {y}_{2}}\right) .
$$
With these notations, the ROC curve is defined as the graph of the mapping
$$
\mathrm{{ROC}} : \alpha \mapsto \operatorname{ROC}\left( \alpha \right) = F \circ {G}^{-1}\left( {1 - \alpha }\right) \;\text{ with }\;\alpha \in \left\lbrack {0,1}\right\rbrack .
$$
Note that by $\operatorname{ROC}\left( \alpha \right)$ , one usually means $1 - F \circ {G}^{-1}\left( {1 - \alpha }\right)$ in machine learning and statistical literature but the FR community favors the DET curve $\left( {1 - \operatorname{ROC}\left( \alpha \right) }\right)$ , which we will call $\mathrm{{ROC}}$ curve in the following.
In practice, those metrics are not computable since we only have a finite dataset. We denote by ${n}_{k}$ the number of face images of identity $k$ , for $1 \leq k \leq K$ , within the evaluation dataset. The images of identity $k$ are modeled by random variables ${\left( {X}_{i}^{k}\right) }_{1 \leq i \leq {n}_{k}}$ , independent copies of ${X}^{k}$ . The empirical approximations ${F}_{N}$ and ${G}_{N}$ of $F$ and $G$ are:
$$
{F}_{N}\left( t\right) = \frac{1}{K}\mathop{\sum }\limits_{{k = 1}}^{K}\frac{1}{\binom{{n}_{k}}{2}}\mathop{\sum }\limits_{{1 \leq i < j \leq {n}_{k}}}{\mathbf{1}}_{s\left( {{X}_{i}^{k},{X}_{j}^{k}}\right) \leq t}
$$
and
$$
{G}_{N}\left( t\right) = \frac{1}{\binom{K}{2}}\mathop{\sum }\limits_{{1 \leq k < l \leq K}}\frac{1}{{n}_{k}{n}_{l}}\mathop{\sum }\limits_{\substack{{1 \leq i \leq {n}_{k}} \\ {1 \leq j \leq {n}_{l}}}}{\mathbf{1}}_{s\left( {{X}_{i}^{k},{X}_{j}^{l}}\right) \leq t}.
$$
The empirical ROC curve is naturally:
$$
{\operatorname{ROC}}_{N} : \alpha \mapsto {\operatorname{ROC}}_{N}\left( \alpha \right) = {F}_{N} \circ {G}_{N}^{-1}\left( {1 - \alpha }\right) \;\text{ with }\;\alpha \in \left\lbrack {0,1}\right\rbrack . \tag{1}
$$
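The empirical rates ${F}_{N}(t)$ and ${G}_{N}(t)$ can be computed directly as averages of the indicated U-statistics; a numpy sketch under the assumption of $\ell_2$-normalized embeddings (so that $s$ reduces to a dot product), with random toy data in place of real face embeddings. ${\operatorname{ROC}}_{N}(\alpha)$ then follows by evaluating ${F}_{N}$ at an empirical quantile of the impostor scores:

```python
import numpy as np
from itertools import combinations

def empirical_rates(embeddings_by_id, t):
    """Empirical F_N(t) and G_N(t). embeddings_by_id holds one
    (n_k, d) array of l2-normalized embeddings per identity."""
    genuine = []  # one mono-sample U-statistic of degree 2 per identity
    for E in embeddings_by_id:
        iu = np.triu_indices(len(E), k=1)
        genuine.append(np.mean((E @ E.T)[iu] <= t))
    impostor = []  # two-sample U-statistics over pairs of identities
    for Ek, El in combinations(embeddings_by_id, 2):
        impostor.append(np.mean(Ek @ El.T <= t))
    return float(np.mean(genuine)), float(np.mean(impostor))

# toy evaluation set: 3 identities with 4 embeddings each (random data)
rng = np.random.default_rng(0)
data = [rng.normal(size=(4, 8)) for _ in range(3)]
data = [E / np.linalg.norm(E, axis=1, keepdims=True) for E in data]
F_N, G_N = empirical_rates(data, t=0.0)
```

Note the two averaging levels, which mirror the equations above: within-identity pairs are first averaged per identity, while impostor pairs are averaged per identity pair.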
### 1.2 Fairness Metrics in Face Recognition
To be consistent with the FR community, we change our previous notations (only for addressing fairness metrics) and define the False Rejection Rate (FRR) and the False Acceptance Rate (FAR) respectively as $\operatorname{FRR}\left( t\right) \mathrel{\text{:=}} {F}_{N}\left( t\right)$ and $\operatorname{FAR}\left( t\right) \mathrel{\text{:=}} 1 - {G}_{N}\left( t\right)$ . Both are error rates that should be minimized, one more than the other depending on the use case. With those notations, the empirical ROC curve is ${\operatorname{ROC}}_{N}\left( \alpha \right) = \operatorname{FRR}\left( {t}_{\alpha }\right)$ with $\operatorname{FAR}\left( {t}_{\alpha }\right) = \alpha$ .
In order to inspect fairness issues in FR, one should look at differentials in performance amongst several subgroups of the population. Those subgroups are distinguishable by a sensitive attribute (e.g. gender, race, age, ...). For a given discrete sensitive attribute that can take $A > 1$ different values, we enrich our previous model and consider a random variable $(X, y, a)$ where $a \in \mathcal{A} = \{ 0,1,\ldots , A - 1\}$. With a slight abuse of notation, we still denote by $\mathbf{P}$ the corresponding probability law and, for every fixed value $a$, we can further define
$$
{F}^{a}\left( t\right) \mathrel{\text{:=}} \mathbf{P}\left( {s\left( {{X}_{1},{X}_{2}}\right) \leq t \mid {y}_{1} = {y}_{2},{a}_{1} = {a}_{2} = a}\right)
$$
$$
{G}^{a}\left( t\right) \mathrel{\text{:=}} \mathbf{P}\left( {s\left( {{X}_{1},{X}_{2}}\right) \leq t \mid {y}_{1} \neq {y}_{2},{a}_{1} = {a}_{2} = a}\right) .
$$
The empirical approximations of ${F}^{a}(t)$ and $(1 - {G}^{a}(t))$ are denoted respectively by ${\mathrm{FRR}}_{a}(t)$ and ${\mathrm{FAR}}_{a}(t)$. In the following, we list several popular FR fairness metrics. All of them are used by the U.S. National Institute of Standards and Technology (NIST) in their FRVT report [Grother, 2022]. Those fairness metrics attempt to quantify the differentials in ${\left( {\mathrm{FAR}}_{a}\left( t\right) \right) }_{a \in \mathcal{A}}$ and ${\left( {\mathrm{FRR}}_{a}\left( t\right) \right) }_{a \in \mathcal{A}}$. Since each fairness metric has two versions (one for the differentials in terms of FAR, the other in terms of FRR), we only present its FAR version. All metrics depend here on the threshold ${t}_{\alpha }$ which satisfies ${\mathrm{FAR}}_{\text{total}}\left( {t}_{\alpha }\right) = \alpha \in \left\lbrack {0,1}\right\rbrack$, meaning that the threshold is set so that it achieves a FAR equal to $\alpha$ for the global population, and not for some specific subgroup.
Max-min ratio. This metric has also been introduced by Conti et al. [2022], but for another choice of threshold ${t}_{\alpha }$. Its advantage is to be very interpretable, but it is sensitive to low values in the denominator.
$$
{\operatorname{FAR}}_{\min }^{\max }\left( \alpha \right) = \frac{\mathop{\max }\limits_{{a \in \{ 0,1\} }}{\operatorname{FAR}}_{a}\left( {t}_{\alpha }\right) }{\mathop{\min }\limits_{{a \in \{ 0,1\} }}{\operatorname{FAR}}_{a}\left( {t}_{\alpha }\right) }.
$$
Max-geomean ratio. This metric replaces the previous minimum by the geometric mean ${\mathrm{{FAR}}}^{ \dagger }\left( {t}_{\alpha }\right)$ of the values ${\left( {\mathrm{{FAR}}}_{a}\left( {t}_{\alpha }\right) \right) }_{a \in \mathcal{A}}$ , in order to be less sensitive to low values in the denominator.
$$
{\operatorname{FAR}}_{\text{geomean }}^{\max }\left( \alpha \right) = \frac{\mathop{\max }\limits_{{a \in \{ 0,1\} }}{\operatorname{FAR}}_{a}\left( {t}_{\alpha }\right) }{{\operatorname{FAR}}^{ \dagger }\left( {t}_{\alpha }\right) }.
$$
Log-geomean sum. It is a sum of normalized logarithms.
$$
{\operatorname{FAR}}_{\text{geomean }}^{\log }\left( \alpha \right) = \mathop{\sum }\limits_{{a \in \mathcal{A}}}\left| {{\log }_{10}\frac{{\operatorname{FAR}}_{a}\left( {t}_{\alpha }\right) }{{\operatorname{FAR}}^{ \dagger }\left( {t}_{\alpha }\right) }}\right| .
$$
Gini coefficient. The Gini coefficient is a measure of inequality in a population. It ranges from a minimum value of zero, when all individuals are equal, to a theoretical maximum of one in an infinite population in which every individual except one has a size of zero.
$$
{\operatorname{FAR}}_{\operatorname{Gini}}\left( \alpha \right) = \frac{\left| \mathcal{A}\right| }{\left| \mathcal{A}\right| - 1}\frac{\mathop{\sum }\limits_{{a \in \mathcal{A}}}\mathop{\sum }\limits_{{b \in \mathcal{A}}}\left| {{\operatorname{FAR}}_{a}\left( {t}_{\alpha }\right) - {\operatorname{FAR}}_{b}\left( {t}_{\alpha }\right) }\right| }{2{\left| \mathcal{A}\right| }^{2}{\operatorname{FAR}}^{ \dagger }\left( {t}_{\alpha }\right) }.
$$
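Given the per-group values ${\operatorname{FAR}}_{a}(t_\alpha)$ at a common threshold, the four FAR-based fairness metrics above reduce to a few lines of numpy (the function name and the input values below are illustrative, not from the NIST report):

```python
import numpy as np

def fairness_metrics(far_by_group):
    """Max-min ratio, max-geomean ratio, log-geomean sum, and Gini
    coefficient of per-group FAR_a(t_alpha) values."""
    far = np.asarray(far_by_group, dtype=float)
    A = far.size
    geomean = float(np.exp(np.log(far).mean()))  # FAR^dagger
    max_min = float(far.max() / far.min())
    max_geo = float(far.max() / geomean)
    log_geo = float(np.abs(np.log10(far / geomean)).sum())
    gini = float(A / (A - 1)
                 * np.abs(far[:, None] - far[None, :]).sum()
                 / (2 * A**2 * geomean))
    return max_min, max_geo, log_geo, gini

metrics = fairness_metrics([0.01, 0.04])  # hypothetical per-group FARs
```

For equal per-group FARs all four metrics reach their minimum (ratios of 1, sums of 0), while growing differentials inflate them.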
Conti et al. [2022] argue that the choice of a threshold ${t}_{\alpha }$ achieving a global ${\mathrm{{FAR}}}_{\text{total }} = \alpha$ is not entirely relevant since it depends on the relative proportions of each sensitive attribute value $a$ within the evaluation dataset together with the relative proportion of intra-group impostors. They propose instead a threshold ${t}_{\alpha }$ such that each group $a$ satisfies ${\mathrm{{FAR}}}_{a}\left( {t}_{\alpha }\right) \leq \alpha$ . Since we are dealing with a unique evaluation dataset, we do not use such a threshold choice, to be consistent with the last three fairness metrics. Other fairness metrics exist in the literature such as the maximum difference in the values ${\left( {\mathrm{{FAR}}}_{a}\left( {t}_{\alpha }\right) \right) }_{a \in \mathcal{A}}$ used by Alasadi et al. [2019], Dhar et al. [2021]. They have the disadvantage of not being normalized and are thus not interpretable, especially when comparing their values at different levels $\alpha$ .
## 2 Assessing the Uncertainty of Face Recognition Metrics through Bootstrap
As previously explained, the ROC curves (and their related scalar summaries) of a similarity scoring function $s(x, x')$ (determined in practice by an encoder function to which the cosine similarity is applied) provide the main tool to assess performance and fairness in face recognition. We now investigate how to bootstrap these functional criteria, in order to evaluate the uncertainty/variability inherent in their estimation based on (supposedly i.i.d.) sampling observations drawn from the statistical populations under study. Indeed, this evaluation is crucial to judge whether the similarity scoring function candidate meets the performance/fairness requirements in a trustworthy manner, as will be discussed on real examples in the next section.
Bootstrapping the ROC curve of a similarity scoring function. Extending the limit results of Hsieh and Turnbull [1996], the consistency of the empirical ROC curve (1) of a similarity scoring function $s(x, x')$ can be classically established, as well as its asymptotic Gaussianity (under additional hypotheses, involving in particular the absolute continuity of the distributions $F$ and $G$), in a standard multi-sample asymptotic framework, i.e. stipulating that, for all $k \in \{ 1,\ldots , K\}$, ${n}_{k}/N \rightarrow {\lambda }_{k} > 0$ as $N \rightarrow + \infty$. Indeed, under appropriate mild technical assumptions, one may prove that the sequence of stochastic processes
$$
{\left\{ \sqrt{N}\left( {\operatorname{ROC}}_{N}\left( \alpha \right) - \operatorname{ROC}\left( \alpha \right) \right) \right\} }_{\alpha \in \left( {0,1}\right) }
$$
|
| 114 |
+
|
| 115 |
+
converges in distribution to a Gaussian law as $N \rightarrow \infty$ . However, this limit law can hardly be used to build (asymptotic) confidence bands for the true ROC curve (or confidence intervals for scalar summary ROC-based metrics) in practice, due to its great complexity (the limit law, depending on the unknown densities of $F$ and $G$ is built from Brownian bridges and its approximate numerical simulation is a considerable challenge). Resampling techniques must be used instead, in order to mimic the random fluctuations of ${\operatorname{ROC}}_{N}\left( \alpha \right) - \operatorname{ROC}\left( \alpha \right)$ . Application of the (smoothed) bootstrap methodology to ROC analysis has been investigated at length in the bipartite ranking context, i.e. for binary classification data [Bertail et al., 2008]. In the classification framework, bootstrap versions of the empirical ROC curve are simply obtained by means of uniform sampling with replacement in the two statistical populations (positive and negative). In this case, the empirical true/false positive rates are of the form of i.i.d. averages, which greatly differs from the present situation, where ${F}_{N}\left( t\right)$ is an average of independent mono-sample $U$ -statistics of degree 2, while ${G}_{N}\left( t\right)$ is a multi-sample $U$ - statistic of degree(1,1). As will be shown below and illustrated in Appendix B, the pairwise nature of the statistical quantity ${F}_{N}\left( t\right)$ computed is of great consequence, insofar as a ’naive’ implementation of the bootstrap completely fails to reproduce ${\mathrm{{ROC}}}_{N}$ ’s variability when applied to the latter. Indeed, it systematically leads to a serious underestimation of ${F}_{N}\left( t\right)$ , and consequently to an underestimation of ${\mathrm{{ROC}}}_{N}$ uniformly on(0,1). 
For simplicity’s sake, we describe the reason behind this phenomenon by considering the problem of bootstrapping the statistic ${F}_{N}\left( t\right)$ and explain next how to remedy this problem.
|
| 116 |
+
|
| 117 |
+
For all $1 \leq k \leq K$ , consider $\left( {{X}_{1 * }^{k},\ldots ,{X}_{{n}_{k} * }^{k}}\right)$ a bootstrap sample related to identity $k$ , drawn by simple sampling with replacement from original data $\left\{ \left( {{X}_{1}^{k},\ldots ,{X}_{{n}_{k}}^{k}}\right) \right\}$ . Recall that the original statistic is of the form:
|
| 118 |
+
|
| 119 |
+
$$
|
| 120 |
+
{F}_{N}\left( t\right) = \frac{1}{K}\mathop{\sum }\limits_{{k = 1}}^{K}{F}_{N}^{k}\left( t\right) \;\text{ with }\;{F}_{N}^{k}\left( t\right) = \frac{1}{\left( \begin{matrix} {n}_{k} \\ 2 \end{matrix}\right) }\mathop{\sum }\limits_{{1 \leq i < j \leq {n}_{k}}}{\mathbf{1}}_{s\left( {{X}_{i}^{k},{X}_{j}^{k}}\right) \leq t}.
|
| 121 |
+
$$
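A direct sketch of this estimator, with a toy cosine similarity in place of a trained encoder (all names below are ours, not the paper's code):

```python
import numpy as np
from itertools import combinations

def frr_hat(score, images_by_identity, t):
    """Empirical F_N(t): average over identities k of the pairwise U-statistic
    (1 / C(n_k, 2)) * sum_{i<j} 1{score(X_i^k, X_j^k) <= t} -- no diagonal terms."""
    per_identity = []
    for imgs in images_by_identity:
        per_identity.append(np.mean([float(score(x, y) <= t)
                                     for x, y in combinations(imgs, 2)]))
    return float(np.mean(per_identity))

cos = lambda x, y: float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))
rng = np.random.default_rng(1)
# 3 hypothetical identities, each with 4 images encoded as random 8-d embeddings.
data = [rng.normal(size=(4, 8)) for _ in range(3)]
value = frr_hat(cos, data, t=0.0)
```

Note that `combinations(imgs, 2)` ranges over the $\binom{n_k}{2}$ unordered pairs only, matching the $1 \leq i < j \leq n_k$ summation.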

Using the previous bootstrap sample, we can compute a bootstrap version of ${F}_{N}^{k}\left( t\right)$:

$$
{F}_{N * }^{k}\left( t\right) = \frac{1}{\left( \begin{matrix} {n}_{k} \\ 2 \end{matrix}\right) }\mathop{\sum }\limits_{{1 \leq i < j \leq {n}_{k}}}{\mathbf{1}}_{s\left( {{X}_{i * }^{k},{X}_{j * }^{k}}\right) \leq t}.
$$

${F}_{N}^{k}\left( t\right)$ is a (non-degenerate) $U$-statistic of degree 2 (an average over all pairs) with symmetric kernel ${\mathbf{1}}_{s\left( {x,{x}^{\prime }}\right) \leq t}$, and thus involves no 'diagonal' terms of type ${\mathbf{1}}_{s\left( {{X}_{i}^{k},{X}_{i}^{k}}\right) \leq t}$. Indeed, evaluating the similarity of an image with itself brings no information (it is naturally equal to 1 when considering cosine similarity). By contrast, it is shown in Janssen [1997] that the bootstrap version ${F}_{N * }^{k}\left( t\right)$ of ${F}_{N}^{k}\left( t\right)$ is in expectation equal to its $V$-statistic version, i.e. the version obtained by incorporating the diagonal terms in the average. In detail, denoting by ${\mathbf{E}}^{ * }\left\lbrack {\cdot \mid {X}_{1}^{k},\ldots ,{X}_{{n}_{k}}^{k}}\right\rbrack$ the conditional expectation with respect to $\left( {{X}_{1}^{k},\ldots ,{X}_{{n}_{k}}^{k}}\right)$ (i.e. the expectation related to the randomness induced by the resampling), we have that:

$$
{\mathbf{E}}^{ * }\left\lbrack {{F}_{N * }^{k}\left( t\right) \mid {X}_{1}^{k},\ldots ,{X}_{{n}_{k}}^{k}}\right\rbrack = \frac{1}{{n}_{k}^{2}}\mathop{\sum }\limits_{{1 \leq i, j \leq {n}_{k}}}{\mathbf{1}}_{s\left( {{X}_{i}^{k},{X}_{j}^{k}}\right) \leq t}.
$$
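This identity — the bootstrapped $U$-statistic is centered on the $V$-statistic, diagonal included — can be checked numerically. The sketch below uses a toy scalar 'similarity' $-(a-b)^2$, which, like cosine similarity, is maximal on the diagonal, so the diagonal indicators are always 0; the kernel and function names are our assumptions.

```python
import numpy as np

def u_stat(vals, h):
    # Pairwise U-statistic: average of h over the C(n, 2) unordered pairs.
    n = len(vals)
    return sum(h(vals[i], vals[j]) for i in range(n) for j in range(i + 1, n)) / (n * (n - 1) / 2)

def v_stat(vals, h):
    # V-statistic: average of h over ALL n^2 ordered pairs, diagonal included.
    n = len(vals)
    return sum(h(a, b) for a in vals for b in vals) / n**2

rng = np.random.default_rng(2)
x = rng.normal(size=8)
# Toy kernel standing in for 1{s(x, x') <= t}: the 'similarity' -(a-b)^2 is
# maximal (zero) on the diagonal, so diagonal terms contribute 0, as in the paper.
h = lambda a, b: float(-(a - b) ** 2 <= -0.5)
# Monte Carlo estimate of E*[U-statistic of a resample | data].
boot_mean = np.mean([u_stat(rng.choice(x, size=len(x), replace=True), h)
                     for _ in range(4000)])
```

With zero diagonal terms the $V$-statistic is strictly below the $U$-statistic, and the bootstrap mean tracks the former: exactly the underestimation described above.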

Grouping all $K$ identities, we can compute a bootstrap version of ${F}_{N}\left( t\right)$:

$$
{F}_{N * }\left( t\right) = \frac{1}{K}\mathop{\sum }\limits_{{k = 1}}^{K}\frac{1}{\left( \begin{matrix} {n}_{k} \\ 2 \end{matrix}\right) }\mathop{\sum }\limits_{{1 \leq i < j \leq {n}_{k}}}{\mathbf{1}}_{s\left( {{X}_{i * }^{k},{X}_{j * }^{k}}\right) \leq t}
$$

whose conditional expectation given the data is:

$$
{\bar{F}}_{N * }\left( t\right) \mathrel{\text{:=}} {\mathbf{E}}^{ * }\left\lbrack {{F}_{N * }\left( t\right) \mid {X}_{1}^{1},\ldots ,{X}_{{n}_{K}}^{K}}\right\rbrack = \frac{1}{K}\mathop{\sum }\limits_{{k = 1}}^{K}\frac{1}{{n}_{k}^{2}}\mathop{\sum }\limits_{{1 \leq i, j \leq {n}_{k}}}{\mathbf{1}}_{s\left( {{X}_{i}^{k},{X}_{j}^{k}}\right) \leq t}.
$$

This means that bootstrapping ${F}_{N}\left( t\right)$ would result in many values ${F}_{N * }\left( t\right)$ that are not centered around ${F}_{N}\left( t\right)$, but around ${\bar{F}}_{N * }\left( t\right)$. From Janssen [1997] (Theorem 3), we find that $\mathbf{P}\left\lbrack \sqrt{N}\left( {F}_{N * }\left( t\right) - {\bar{F}}_{N * }\left( t\right) \right) \leq x \mid {X}_{1}^{1},\ldots ,{X}_{{n}_{K}}^{K}\right\rbrack$ is a uniformly consistent estimator of $\mathbf{P}\left\lbrack \sqrt{N}\left( {F}_{N}\left( t\right) - F\left( t\right) \right) \leq x\right\rbrack$. As a consequence, we can build confidence intervals for $F\left( t\right)$ in the following way: from the bootstrap samples, build a confidence interval for $\left( {{F}_{N * }\left( t\right) - {\bar{F}}_{N * }\left( t\right) }\right)$ and shift it by ${F}_{N}\left( t\right)$.
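A minimal sketch of this recentered percentile construction, reusing the toy scalar similarity from before (the data layout and all names are our assumptions):

```python
import numpy as np

def recentered_ci(vals_by_identity, t, B=300, level=0.95, seed=0):
    """Recentered bootstrap CI: quantiles of (F_{N*} - Fbar_{N*}) shifted by F_N.

    `vals_by_identity[k]` holds the 'images' of identity k (here 1-d values
    standing in for embeddings); the kernel below mimics 1{s(x, x') <= t}."""
    rng = np.random.default_rng(seed)
    h = lambda a, b: float(-(a - b) ** 2 <= t)

    def u_stat(v):                                # pairwise U-statistic, no diagonal
        n = len(v)
        return sum(h(v[i], v[j]) for i in range(n) for j in range(i + 1, n)) / (n * (n - 1) / 2)

    def v_stat(v):                                # V-statistic, diagonal included
        n = len(v)
        return sum(h(a, b) for a in v for b in v) / n**2

    f_n = np.mean([u_stat(v) for v in vals_by_identity])
    f_bar = np.mean([v_stat(v) for v in vals_by_identity])   # = E*[F_{N*} | data]
    gaps = [np.mean([u_stat(rng.choice(v, size=len(v), replace=True))
                     for v in vals_by_identity]) - f_bar for _ in range(B)]
    lo, hi = np.quantile(gaps, [(1 - level) / 2, (1 + level) / 2])
    return f_n + lo, f_n + hi, f_n

rng = np.random.default_rng(1)
data = [rng.normal(size=6) for _ in range(10)]    # 10 identities, 6 'images' each
lo, hi, f_n = recentered_ci(data, t=-0.5)
```

The key line is the subtraction of `f_bar` (not `f_n`) inside the loop: the bootstrap gaps are centered where the resampling actually centers them.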

By contrast, a naive bootstrap method, involving no recentering, can be applied to:

$$
{G}_{N}\left( t\right) = \frac{1}{\left( \begin{matrix} K \\ 2 \end{matrix}\right) }\mathop{\sum }\limits_{{1 \leq k < l \leq K}}{G}_{N}^{k, l}\left( t\right) \;\text{ with }\;{G}_{N}^{k, l}\left( t\right) = \frac{1}{{n}_{k}{n}_{l}}\mathop{\sum }\limits_{\substack{{1 \leq i \leq {n}_{k}} \\ {1 \leq j \leq {n}_{l}} }}{\mathbf{1}}_{s\left( {{X}_{i}^{k},{X}_{j}^{l}}\right) \leq t}.
$$
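The impostor-side estimator can be sketched in the same style as the genuine-side one (names ours; `score` is any similarity function):

```python
import numpy as np
from itertools import combinations

def g_hat(score, images_by_identity, t):
    """Empirical G_N(t): average over identity pairs (k, l) of the two-sample
    U-statistic (1 / (n_k * n_l)) * sum_{i,j} 1{score(X_i^k, X_j^l) <= t}.
    Every cross-identity pair appears; there is no diagonal to worry about."""
    per_pair = []
    for imgs_k, imgs_l in combinations(images_by_identity, 2):
        per_pair.append(np.mean([float(score(x, y) <= t)
                                 for x in imgs_k for y in imgs_l]))
    return float(np.mean(per_pair))

cos = lambda x, y: float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))
e = np.eye(4)
# Two hypothetical identities with mutually orthogonal embeddings: every
# cross-identity score is 0, so G_N(0.5) = 1 and FAR(0.5) = 1 - G_N(0.5) = 0.
groups = [[e[0], 2 * e[0]], [e[1], 3 * e[1]]]
g_val = g_hat(cos, groups, t=0.5)
```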

Since the two-sample $U$-statistic ${G}_{N}^{k, l}\left( t\right)$ is of degree (1,1), i.e. of degree 1 in each sample, we have:

$$
{\mathbf{E}}^{ * }\left\lbrack {{G}_{N * }^{k, l}\left( t\right) \mid {X}_{1}^{k},\ldots ,{X}_{{n}_{l}}^{l}}\right\rbrack = {G}_{N}^{k, l}\left( t\right) \;\text{ and }\;{\bar{G}}_{N * }\left( t\right) = {G}_{N}\left( t\right) .
$$

The previous confidence interval method still works here: build a confidence interval for $\left( {{G}_{N * }\left( t\right) - {\bar{G}}_{N * }\left( t\right) }\right) = \left( {{G}_{N * }\left( t\right) - {G}_{N}\left( t\right) }\right)$ and shift it by ${G}_{N}\left( t\right)$. In this case, however, the bootstrap values ${G}_{N * }\left( t\right)$ are already centered around ${G}_{N}\left( t\right)$. This method for confidence interval construction extends naturally to the bootstrap of the empirical quantile function ${G}_{N}^{-1}$. In theory, a smoothed version of the bootstrap, which consists in sampling from smoothed versions of the empirical distributions (by means of, e.g., a Gaussian kernel), should be used for bootstrapping quantiles. However, given the very large size of the pooled dataset here, smoothing can be ignored in practice.

Finally, we combine the bootstrap of ${F}_{N}\left( t\right)$ and of ${G}_{N}^{-1}\left( \alpha \right)$ to present the bootstrap of the empirical ROC curve ${\operatorname{ROC}}_{N}\left( \alpha \right) = {F}_{N} \circ {G}_{N}^{-1}\left( {1 - \alpha }\right)$. Using many bootstrap samples, a confidence interval is built for $\left( {{\operatorname{ROC}}_{N * }\left( \alpha \right) - {\overline{\operatorname{ROC}}}_{N * }\left( \alpha \right) }\right) = \left( {{F}_{N * } \circ {G}_{N * }^{-1}\left( {1 - \alpha }\right) - {\bar{F}}_{N * } \circ {G}_{N}^{-1}\left( {1 - \alpha }\right) }\right)$, then shifted by ${\operatorname{ROC}}_{N}\left( \alpha \right)$. Pseudo-code for building the confidence interval for ${\operatorname{ROC}}_{N}\left( \alpha \right)$ at level ${\alpha }_{CI} \in \left\lbrack {0,1}\right\rbrack$ is summarized in Algorithm 1. We highlight the significance of the recentering step, and why a naive bootstrap does not work, in Appendix B.
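A minimal score-level sketch of Algorithm 1, under two simplifying assumptions of ours: all identities have equally many images (so $F_N$ and $G_N$ reduce to pooled empirical distribution functions), and a toy scalar similarity replaces cosine similarity of encoder embeddings. All function names are ours, not the paper's code.

```python
import numpy as np
from itertools import combinations

sim = lambda a, b: -abs(a - b)          # toy similarity: 0 is maximal, like cosine's 1

def genuine_scores(groups, diagonal=False):
    """All within-identity scores; with diagonal=True the self-pairs are kept,
    which yields the V-statistic centre Fbar_{N*} (equal n_k assumed)."""
    out = []
    for g in groups:
        pairs = [(a, b) for a in g for b in g] if diagonal else list(combinations(g, 2))
        out += [sim(a, b) for a, b in pairs]
    return np.array(out)

def impostor_scores(groups):
    return np.array([sim(a, b)
                     for g1, g2 in combinations(groups, 2) for a in g1 for b in g2])

def roc_point(genuine, impostor, alpha):
    t = np.quantile(impostor, 1.0 - alpha)       # G_N^{-1}(1 - alpha): FAR = alpha
    return float((genuine <= t).mean())          # F_N(t): FRR at that threshold

def bootstrap_roc_ci(groups, alpha, B=200, alpha_ci=0.05, seed=0):
    rng = np.random.default_rng(seed)
    imp = impostor_scores(groups)
    # V-statistic centre: diagonal included on the genuine side, as in Algorithm 1.
    roc_bar = roc_point(genuine_scores(groups, diagonal=True), imp, alpha)
    gaps = []
    for _ in range(B):
        star = [rng.choice(g, size=len(g), replace=True) for g in groups]
        gaps.append(roc_point(genuine_scores(star), impostor_scores(star), alpha) - roc_bar)
    lo, hi = np.quantile(gaps, [alpha_ci / 2, 1 - alpha_ci / 2])
    roc_n = roc_point(genuine_scores(groups), imp, alpha)
    return roc_n + lo, roc_n + hi                # recentered, then shifted by ROC_N

rng = np.random.default_rng(3)
groups = [rng.normal(loc=i, scale=0.3, size=8) for i in range(6)]   # 6 identities
gsc, isc = genuine_scores(groups), impostor_scores(groups)
lo, hi = bootstrap_roc_ci(groups, alpha=0.1)
```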

Bootstrapping fairness metrics. We apply the same bootstrap algorithm to all fairness metrics, since they are functions of ${F}_{N}$ and ${G}_{N}$. For instance, consider a fairness measure that depends on ${\mathrm{{FRR}}}_{a}\left( t\right)$. ${\mathrm{{FRR}}}_{a}\left( t\right)$ would be computed as ${F}_{N}\left( t\right)$ for the classic fairness measure (the equivalent of ${\mathrm{{ROC}}}_{N}$ above), as ${F}_{N * }\left( t\right)$ for the bootstrap fairness measure (equivalent of ${\mathrm{{ROC}}}_{N * }$) and as ${\bar{F}}_{N * }\left( t\right)$ for the V-statistic fairness measure (equivalent of ${\overline{\mathrm{{ROC}}}}_{N * }$), using pairs of attribute $a$ in all three cases.

## 3 Numerical Experiments - Discussion

We use as encoder the trained ArcFace model${}^{1}$ [Deng et al., 2019a], whose CNN architecture is a ResNet100 [Han et al., 2017]. It has been trained on the MS1M-RetinaFace dataset, introduced by [Deng et al., 2019b] in the ICCV 2019 Lightweight Face Recognition Challenge. We choose the RFW dataset [Wang et al., 2019] as evaluation dataset. It is composed of ${40}\mathrm{k}$ face images from ${11}\mathrm{k}$ distinct identities. This dataset is also provided with ground-truth race labels (the four available labels are: African, Asian, Caucasian, Indian) and is widely used for fairness evaluation since it is equally distributed among the race subgroups, in terms of images and identities. The official RFW protocol only considers a few matching pairs among all the possible pairs within the whole RFW dataset. This number of pairs is typically not enough to get good estimates of our fairness metrics at low FAR. To overcome this, we consider all possible same-race matching pairs within the whole RFW dataset. All images are pre-processed by the RetinaFace detector [Deng et al., 2019c] and are of size ${112} \times {112}$ pixels.

Our first experiment is the computation of the confidence bands at 95% confidence level $\left( {{\alpha }_{CI} = {0.05}}\right)$ for each intra-group ROC, i.e. the ROC corresponding to each race label. This is the output of our Algorithm 1 using $B = {100}$ bootstrap samples and the result is displayed in Figure 1. It can be observed that Caucasians have a better performance than the other races and that the uncertainty makes all races potentially indistinguishable in terms of performance at high FAR levels. Notice that the uncertainty increases when either of the error rates FAR, FRR is low, which happens when only a few matching pairs are incorrectly classified, making the error rates highly sensitive to those pairs. To better quantify the uncertainty in the estimation of ${\operatorname{ROC}}_{N}\left( \alpha \right)$, we compute the standard deviation of the $B = {100}$ bootstrap ROC curves ${\operatorname{ROC}}_{N * }\left( \alpha \right)$, for each race label. For a fair comparison, we normalize this standard deviation by ${\operatorname{ROC}}_{N}\left( \alpha \right)$ (classic evaluation). The result is provided as a function of $\alpha$ in Figure 2. This normalized standard deviation is a natural proxy measure for the uncertainty in the estimation of the ROC of each race label. It is worth noting that the highest uncertainty is attained for Asians and Indians at low FAR levels and for Caucasians at high FAR levels. Note that Caucasians have the best performance at low FAR levels and, at the same time, the lowest uncertainty about it among all race labels.

---
${}^{1}$ https://github.com/deepinsight/insightface/tree/master/recognition/arcface_torch
---

Figure 1: Confidence bands at ${95}\%$ confidence level for the ROC of each race label. $B = {100}$ bootstrap samples are used. The classic intra-group ROC curves are depicted as solid lines.

Figure 2: Normalized standard deviation of $B = {100}$ intra-group bootstrap ROC curves, for each race label. The renormalization factor is the classic intra-group ROC curve.
Then, we investigate the uncertainty related to certain possible fairness measures. The race label is used here as the sensitive attribute $a$. We compute the previous normalized standard deviation for the considered fairness metrics, in the same way as for Figure 2. For each metric, we take $B = {100}$ bootstrap samples, giving 100 fairness values at each ${\mathrm{{FAR}}}_{\text{total }} = \alpha$ level. For each $\alpha$, the standard deviation of those values is computed, and then normalized by the classic fairness measure at this level $\alpha$, for a fair comparison. As illustrated in Figure 3, the Gini coefficient and the log-geomean sum fairness metrics show similarly high uncertainty. The max-geomean ratio metric displays the lowest uncertainty, both in terms of FAR and FRR, which makes it particularly suitable for fairness evaluation. In addition, the max-geomean (and the max-min) ratio metrics have the significant advantage of being interpretable.
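The normalized-standard-deviation proxy used in Figures 2 and 3 is straightforward to compute; the sketch below is ours, with a hypothetical reference curve and synthetic bootstrap curves.

```python
import numpy as np

def normalized_std(boot_curves, reference_curve):
    """Point-wise standard deviation of B bootstrap curves, normalized by the
    classic (non-bootstrap) curve -- the uncertainty proxy of Figures 2 and 3."""
    boot_curves = np.asarray(boot_curves, dtype=float)       # shape (B, n_alpha)
    return boot_curves.std(axis=0, ddof=1) / np.asarray(reference_curve, dtype=float)

rng = np.random.default_rng(4)
ref = np.linspace(0.2, 0.8, 5)                       # hypothetical classic curve on 5 FAR levels
boots = ref + rng.normal(0, 0.02, size=(100, 5))     # 100 hypothetical bootstrap curves
ns = normalized_std(boots, ref)
```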

Figure 3: Normalized standard deviation of $B = {100}$ bootstrap fairness curves (as functions of ${\mathrm{{FAR}}}_{\text{total }}$), for each fairness metric. The renormalization factor is the classic fairness measure.

While the gold standard by which fairness will be evaluated in the future is not fixed yet, we believe that it should definitely incorporate uncertainty measures, since ignoring them could lead to wrong conclusions. The bootstrap approach is simple and fast, and yet it has not been explored by the FR community.

## References

Jamal Alasadi, Ahmed Al Hilli, and Vivek K Singh. Toward fairness in face matching algorithms. In Proceedings of the 1st International Workshop on Fairness, Accountability, and Transparency in MultiMedia, pages 19-25, 2019.

P. Bertail, S. Clémençon, and N. Vayatis. On bootstrapping the ROC curve. In Advances in Neural Information Processing Systems 21, pages 137-144, 2008.

Jean-Rémy Conti, Nathan Noiry, Stephan Clémençon, Vincent Despiegel, and Stéphane Gentric. Mitigating gender bias in face recognition using the von Mises-Fisher mixture model. In International Conference on Machine Learning, pages 4344-4369. PMLR, 2022.

Jiankang Deng, Jia Guo, Niannan Xue, and Stefanos Zafeiriou. Arcface: Additive angular margin loss for deep face recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4690-4699, 2019a.

Jiankang Deng, Jia Guo, Debing Zhang, Yafeng Deng, Xiangju Lu, and Song Shi. Lightweight face recognition challenge. In 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pages 2638-2646, 2019b. doi: 10.1109/ICCVW.2019.00322.

Jiankang Deng, Jia Guo, Yuxiang Zhou, Jinke Yu, Irene Kotsia, and Stefanos Zafeiriou. Retinaface: Single-stage dense face localisation in the wild. arXiv preprint arXiv:1905.00641, 2019c.

Prithviraj Dhar, Joshua Gleason, Aniket Roy, Carlos D Castillo, and Rama Chellappa. Pass: Protected attribute suppression system for mitigating bias in face recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 15087-15096, 2021.

Patrick Grother. Face recognition vendor test (FRVT) part 8: Summarizing demographic differentials. 2022.

Dongyoon Han, Jiwhan Kim, and Junmo Kim. Deep pyramidal residual networks. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jul 2017. doi: 10.1109/cvpr.2017.668. URL http://dx.doi.org/10.1109/CVPR.2017.668.

F. Hsieh and B. Turnbull. Nonparametric and semiparametric estimation of the receiver operating characteristic curve. The Annals of Statistics, 24:25-40, 1996.

Paul Janssen. Bootstrapping U-statistics. South African Statistical Journal, 31(2):185-216, 1997.

Florian Schroff, Dmitry Kalenichenko, and James Philbin. Facenet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 815-823, 2015.

Mei Wang, Weihong Deng, Jiani Hu, Xunqiang Tao, and Yaohai Huang. Racial faces in the wild: Reducing racial bias by information maximization adaptation network. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 692-702, 2019.

Algorithm 1 Bootstrap of ${\operatorname{ROC}}_{N}\left( \alpha \right)$

---

Input: $K \geq 0$, images $\left( {{X}_{1}^{1},\ldots ,{X}_{{n}_{K}}^{K}}\right)$, encoder $f$

Require: $\alpha \in \left\lbrack {0,1}\right\rbrack$, $B \geq 0$, ${\alpha }_{CI} \in \left\lbrack {0,1}\right\rbrack$

Output: ${\mathrm{{CI}}}_{ - }$, ${\mathrm{{CI}}}_{ + }$, bounds of the confidence interval for ${\mathrm{{ROC}}}_{N}\left( \alpha \right)$ at level ${\alpha }_{CI}$

${\overline{\mathrm{{ROC}}}}_{N * } \leftarrow {\bar{F}}_{N * } \circ {G}_{N}^{-1}\left( {1 - \alpha }\right)$

gap $\leftarrow \varnothing$

for $b \leftarrow 1, B$ do

${X}_{\left( b\right) } \leftarrow \varnothing$

for $k \leftarrow 1, K$ do

${X}_{\left( b\right) }^{k} \leftarrow$ sample with replacement ${n}_{k}$ images among $\left( {{X}_{1}^{k},\ldots ,{X}_{{n}_{k}}^{k}}\right)$

${X}_{\left( b\right) } \leftarrow {X}_{\left( b\right) } \cup {X}_{\left( b\right) }^{k}$

end for

${\operatorname{ROC}}_{N,\left( b\right) } \leftarrow {F}_{N * } \circ {G}_{N * }^{-1}\left( {1 - \alpha }\right)$ computed on bootstrap sample ${X}_{\left( b\right) }$

${\operatorname{gap}}_{\left( b\right) } \leftarrow {\operatorname{ROC}}_{N,\left( b\right) } - {\overline{\operatorname{ROC}}}_{N * }$

gap $\leftarrow$ gap $\cup {\operatorname{gap}}_{\left( b\right) }$

end for

${\mathrm{{CI}}}_{ - } \leftarrow \frac{{\alpha }_{CI}}{2}$-th quantile of gap

${\mathrm{{CI}}}_{ + } \leftarrow \left( {1 - \frac{{\alpha }_{CI}}{2}}\right)$-th quantile of gap

${\operatorname{ROC}}_{N} \leftarrow {F}_{N} \circ {G}_{N}^{-1}\left( {1 - \alpha }\right)$

${\mathrm{{CI}}}_{ - } \leftarrow {\mathrm{{ROC}}}_{N} + {\mathrm{{CI}}}_{ - }$

${\mathrm{{CI}}}_{ + } \leftarrow {\mathrm{{ROC}}}_{N} + {\mathrm{{CI}}}_{ + }$

---

## B Visualization of the recentering step

In this section, we underline the significance of the recentering step of Algorithm 1. For the sake of simplicity, we perform the bootstrap of the ROC curve for the global population, and not for specific subgroups.

Suppose that a naive bootstrap is performed, that is, we draw some bootstrap image samples and, for each of them, compute the bootstrap version ${\mathrm{{ROC}}}_{N * }$ of ${\mathrm{{ROC}}}_{N}$. If the naive bootstrap were valid, the bootstrap versions ${\operatorname{ROC}}_{N * }$ (over many bootstrap samples) would be centered around ${\mathrm{{ROC}}}_{N}$, and taking quantiles of ${\mathrm{{ROC}}}_{N * }\left( \alpha \right)$ at a given FAR level $\alpha$ would yield the confidence interval at this level. However, as illustrated in Figure 4 and Figure 5, this is not the case. The theoretical reasons have been detailed in Section 2. Briefly, since $\left( {{\operatorname{ROC}}_{N * }\left( \alpha \right) - {\overline{\operatorname{ROC}}}_{N * }\left( \alpha \right) }\right)$ is a good estimator of $\left( {{\operatorname{ROC}}_{N}\left( \alpha \right) - \operatorname{ROC}\left( \alpha \right) }\right)$, we can obtain confidence intervals for the latter from confidence intervals for the former.

Figure 4: Bootstrap versions ${\mathrm{{ROC}}}_{N * }$ of the ROC curve for the global population of the RFW dataset. $B = {100}$ bootstrap samples are considered. The classic version ${\mathrm{{ROC}}}_{N}$ is depicted as a dark-blue solid line while its V-statistic version ${\overline{\mathrm{{ROC}}}}_{N * }$ is depicted as a red solid line.

Figure 5: Confidence bands at ${95}\%$ confidence level for the bootstrap versions ${\mathrm{{ROC}}}_{N * }$ of the ROC curve for the global population of the RFW dataset. $B = {100}$ bootstrap samples are considered. The classic version ${\mathrm{{ROC}}}_{N}$ is depicted as a dark-blue solid line while its V-statistic version ${\overline{\mathrm{{ROC}}}}_{N * }$ is depicted as a red solid line.

---

NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/_Fl5G8NCA2/Initial_manuscript_tex/Initial_manuscript.tex

---
§ ASSESSING PERFORMANCE AND FAIRNESS METRICS IN FACE RECOGNITION - BOOTSTRAP METHODS

Anonymous Author(s)

Affiliation

Address

email

§ ABSTRACT

The ROC curve is the major tool for assessing not only the performance but also the fairness properties of a similarity scoring function in Face Recognition. In order to draw reliable conclusions based on empirical ROC analysis, accurately evaluating the uncertainty related to statistical versions of the ROC curves of interest is necessary. For this purpose, we explain in this paper that, because the True/False Acceptance Rates take the form of U-statistics in the case of similarity scoring, the naive bootstrap approach is not valid here, and that a dedicated recentering technique must be used instead. This is illustrated on real face image data, for several ROC-based metrics including popular fairness metrics.

§ 1 FACE RECOGNITION - PERFORMANCE & FAIRNESS

The deployment of Face Recognition (FR) systems brings with it a pressing demand for methodological tools to assess their trustworthiness. The reliability of FR systems concerns their estimated performance of course, but also their properties regarding fairness: ideally, the system should exhibit approximately the same performance, independently of the sensitive group (determined by e.g. gender, age group, race) to which it is applied. While, until now, the benchmarking of FR systems has essentially been reduced to an ad-hoc evaluation of performance metrics (i.e. ROC analysis) on a reference face image dataset, the purpose of this paper is to explain, and illustrate using real data, how the bootstrap methodology can be used to quantify the uncertainty/variability of the performance metrics, as well as that of some popular fairness metrics. Hopefully, this paves the way for a more valuable and trustworthy comparative analysis of the merits and drawbacks of FR systems.

In FR, the usual objective is to learn an encoder function $f : {\mathbb{R}}^{h \times w \times c} \rightarrow {\mathbb{R}}^{d}$ that embeds the images in a way that brings same identities closer together. Each image is of size (h, w), while $c$ corresponds to the color channel dimension. It is worth noting that a pre-processing detection step (finding a face within an image) is required to make all face images have the same size (h, w). For an image $x \in {\mathbb{R}}^{h \times w \times c}$, its latent representation $f\left( x\right) \in {\mathbb{R}}^{d}$ is called the face embedding of $x$.

Since the advent of deep learning, the encoder $f$ is a deep Convolutional Neural Network (CNN) whose parameters are learned on a huge FR dataset, made of face images and identity labels. In brief, the training consists in taking all images ${x}_{i}^{k}$, labelled with identity $k$, computing their embeddings $f\left( {x}_{i}^{k}\right)$ and adjusting the parameters of $f$ so that those embeddings are as close as possible to each other (for a given similarity measure) and as far as possible from the embeddings of identities $l \neq k$. The usual similarity measure is the cosine similarity, defined as $s\left( {{x}_{i},{x}_{j}}\right) \mathrel{\text{ := }} f{\left( {x}_{i}\right) }^{\top }f\left( {x}_{j}\right) /\left( {\begin{Vmatrix}{f\left( {x}_{i}\right) }\end{Vmatrix} \cdot \begin{Vmatrix}{f\left( {x}_{j}\right) }\end{Vmatrix}}\right)$ for two images ${x}_{i},{x}_{j}$, with $\parallel \cdot \parallel$ standing for the usual Euclidean norm. In some early works [Schroff et al., 2015], the Euclidean metric $\begin{Vmatrix}{f\left( {x}_{i}\right) - f\left( {x}_{j}\right) }\end{Vmatrix}$ was also used. In the rest of this document, we discard the notation $f$ for the encoder, and only use the similarity $s$ (which contains the encoder), as we are not interested in the encoder training.
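The cosine similarity above is a one-liner; the sketch below uses toy vectors standing in for the output of a trained encoder $f$ (the function name is ours).

```python
import numpy as np

def cosine_similarity(f_xi, f_xj):
    """s(x_i, x_j) = f(x_i)^T f(x_j) / (||f(x_i)|| * ||f(x_j)||), a value in [-1, 1]."""
    return float(f_xi @ f_xj / (np.linalg.norm(f_xi) * np.linalg.norm(f_xj)))

# Toy embeddings standing in for the output of a trained encoder f.
a = np.array([1.0, 0.0, 1.0])
b = np.array([2.0, 0.0, 2.0])     # same direction as a -> similarity ~ 1
c = np.array([0.0, 1.0, 0.0])     # orthogonal to a -> similarity 0
```

Since the similarity depends on direction only, scaling an embedding leaves all scores unchanged, which is why FR pipelines often $\ell_2$-normalize embeddings once and work with plain dot products afterwards.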

§ 1.1 PERFORMANCE EVALUATION IN FACE RECOGNITION

There are generally two FR use cases: identification, which consists in finding the specific identity of a probe face among several previously enrolled identities, and verification (which we focus on throughout this paper), which aims at deciding whether two face images correspond to the same identity or not. In practice, the evaluation of a trained FR model is achieved using an evaluation dataset, where all possible pairs $\left( {{x}_{i},{x}_{j}}\right)$ of face images are considered. Then, an operating point $t \in \left\lbrack {-1,1}\right\rbrack$ (threshold of acceptance) is chosen to classify the pair $\left( {{x}_{i},{x}_{j}}\right)$ as genuine (same identity) if $s\left( {{x}_{i},{x}_{j}}\right) > t$ and impostor (distinct identities) otherwise. In the following, we describe the statistical measures for evaluating a FR model, given an evaluation dataset.

Assuming that there are $K$ distinct identities, the evaluation dataset can be modeled by a random variable $\left( {X,y}\right) \in {\mathbb{R}}^{h \times w \times c} \times \{ 1,\ldots ,K\}$. We denote by $\mathbf{P}$ the corresponding probability law. For $1 \leq k \leq K$, we assume that the identities are equiprobable, i.e. $\mathbf{P}\left( {y = k}\right) = \frac{1}{K}$. $X$ is determined by its conditional distributions ${X}^{k} \mathrel{\text{ := }} \left( {X \mid y = k}\right) \sim {\mathcal{I}}_{k}$ and we consider that ${X}^{k}$ and ${X}^{l}$ are independent if $k \neq l$.

Let $\left( {{X}_{1},{y}_{1}}\right)$ and $\left( {{X}_{2},{y}_{2}}\right)$ be two independent random variables with law $\mathbf{P}$. We distinguish between the False Negative Rate (FNR) and the True Negative Rate (TNR), respectively defined by:

$$
F\left( t\right) = \mathbf{P}\left( {s\left( {{X}_{1},{X}_{2}}\right) \leq t \mid {y}_{1} = {y}_{2}}\right) \;\text{ and }\;G\left( t\right) = \mathbf{P}\left( {s\left( {{X}_{1},{X}_{2}}\right) \leq t \mid {y}_{1} \neq {y}_{2}}\right) .
$$

With these notations, the ROC curve is defined as the graph of the mapping

$$
\mathrm{{ROC}} : \alpha \mapsto \operatorname{ROC}\left( \alpha \right) = F \circ {G}^{-1}\left( {1 - \alpha }\right) \;\text{ with }\;\alpha \in \left\lbrack {0,1}\right\rbrack .
$$

Note that by $\operatorname{ROC}\left( \alpha \right)$, one usually means $1 - F \circ {G}^{-1}\left( {1 - \alpha }\right)$ in the machine learning and statistics literature, but the FR community favors the DET curve $\left( {1 - \operatorname{ROC}\left( \alpha \right) }\right)$, which we will call the $\mathrm{{ROC}}$ curve in the following.

In practice, those metrics are not computable since we only have a finite dataset. We denote by ${n}_{k}$ the number of face images of identity $k$, for $1 \leq k \leq K$, within the evaluation dataset. The images of identity $k$ are modeled by random variables ${\left( {X}_{i}^{k}\right) }_{1 \leq i \leq {n}_{k}}$, independent copies of ${X}^{k}$. The empirical approximations ${F}_{N}$ and ${G}_{N}$ of $F$ and $G$ are:

$$
{F}_{N}\left( t\right) = \frac{1}{K}\mathop{\sum }\limits_{{k = 1}}^{K}\frac{1}{\left( \begin{matrix} {n}_{k} \\ 2 \end{matrix}\right) }\mathop{\sum }\limits_{{1 \leq i < j \leq {n}_{k}}}{\mathbf{1}}_{s\left( {{X}_{i}^{k},{X}_{j}^{k}}\right) \leq t}
$$

and

$$
{G}_{N}\left( t\right) = \frac{1}{\left( \begin{matrix} K \\ 2 \end{matrix}\right) }\mathop{\sum }\limits_{{1 \leq k < l \leq K}}\frac{1}{{n}_{k}{n}_{l}}\mathop{\sum }\limits_{\substack{{1 \leq i \leq {n}_{k}} \\ {1 \leq j \leq {n}_{l}} }}{\mathbf{1}}_{s\left( {{X}_{i}^{k},{X}_{j}^{l}}\right) \leq t}.
$$

The empirical ROC curve is naturally:

$$
{\operatorname{ROC}}_{N} : \alpha \mapsto {\operatorname{ROC}}_{N}\left( \alpha \right) = {F}_{N} \circ {G}_{N}^{-1}\left( {1 - \alpha }\right) \;\text{ with }\;\alpha \in \left\lbrack {0,1}\right\rbrack . \tag{1}
$$
|
| 60 |
+
|
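In code, these empirical quantities can be computed directly from pairwise similarity scores. Below is a minimal NumPy sketch (the function names and the toy two-identity dataset are hypothetical; the paper's evaluation uses cosine similarity between face embeddings, and the quantile ${G}_{N}^{-1}$ is approximated on a threshold grid):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity s(x, x')."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def empirical_F(embeddings_by_id, t):
    """F_N(t): mean over identities k of the fraction of within-identity
    pairs (i < j) with s(X_i^k, X_j^k) <= t."""
    vals = []
    for X in embeddings_by_id:
        n = len(X)
        vals.append(np.mean([cosine(X[i], X[j]) <= t
                             for i in range(n) for j in range(i + 1, n)]))
    return float(np.mean(vals))

def empirical_G(embeddings_by_id, t):
    """G_N(t): mean over identity pairs k < l of the fraction of
    cross-identity pairs with s(X_i^k, X_j^l) <= t."""
    K = len(embeddings_by_id)
    vals = [np.mean([cosine(u, v) <= t
                     for u in embeddings_by_id[k] for v in embeddings_by_id[l]])
            for k in range(K) for l in range(k + 1, K)]
    return float(np.mean(vals))

def empirical_roc(embeddings_by_id, alpha, grid):
    """ROC_N(alpha) = F_N(G_N^{-1}(1 - alpha)): FRR at the smallest
    threshold t on `grid` at which the empirical G reaches 1 - alpha."""
    for t in grid:
        if empirical_G(embeddings_by_id, t) >= 1 - alpha:
            return empirical_F(embeddings_by_id, t)
    return empirical_F(embeddings_by_id, grid[-1])

# Two identities, two images each (toy 2-D "embeddings")
data = [[np.array([1.0, 0.0]), np.array([0.9, 0.1])],
        [np.array([0.0, 1.0]), np.array([0.1, 0.9])]]
print(empirical_F(data, 1.0), empirical_G(data, -1.1))  # -> 1.0 0.0
```

This direct double loop is quadratic in the number of images per identity; at the scale of the datasets used later, the indicator sums would be vectorized instead.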
| 61 |
+
§ 1.2 FAIRNESS METRICS IN FACE RECOGNITION
|
| 62 |
+
|
| 63 |
+
To be consistent with the FR community, we change our previous notations (only for addressing fairness metrics) and define the False Rejection Rate (FRR) and the False Acceptance Rate (FAR) respectively as $\operatorname{FRR}\left( t\right) \mathrel{\text{ := }} {F}_{N}\left( t\right)$ and $\operatorname{FAR}\left( t\right) \mathrel{\text{ := }} 1 - {G}_{N}\left( t\right)$ . Both are error rates that should be minimized, one more than the other depending on the use case. With those notations, the empirical ROC curve is ${\operatorname{ROC}}_{N}\left( \alpha \right) = \operatorname{FRR}\left( {t}_{\alpha }\right)$ with $\operatorname{FAR}\left( {t}_{\alpha }\right) = \alpha$ .
|
| 64 |
+
|
| 65 |
+
In order to inspect fairness issues in FR, one should look at differentials in performance amongst several subgroups of the population. Those subgroups are distinguished by a sensitive attribute (e.g., gender, race, age). For a given discrete sensitive attribute that can take $A > 1$ different values, we enrich our previous model and consider a random variable $\left( {X, y, a}\right)$ where $a \in \mathcal{A} = \{ 0,1,\ldots ,A - 1\}$ . With a slight abuse of notation, we still denote by $\mathbf{P}$ the corresponding probability law and, for every fixed value $a$ , we can further define
|
| 66 |
+
|
| 67 |
+
$$
|
| 68 |
+
{F}^{a}\left( t\right) \mathrel{\text{ := }} \mathbf{P}\left( {s\left( {{X}_{1},{X}_{2}}\right) \leq t \mid {y}_{1} = {y}_{2},{a}_{1} = {a}_{2} = a}\right)
|
| 69 |
+
$$
|
| 70 |
+
|
| 71 |
+
$$
|
| 72 |
+
{G}^{a}\left( t\right) \mathrel{\text{ := }} \mathbf{P}\left( {s\left( {{X}_{1},{X}_{2}}\right) \leq t \mid {y}_{1} \neq {y}_{2},{a}_{1} = {a}_{2} = a}\right) .
|
| 73 |
+
$$
|
| 74 |
+
|
| 75 |
+
The empirical approximations of ${F}^{a}\left( t\right)$ and $\left( {1 - {G}^{a}\left( t\right) }\right)$ are denoted respectively by ${\mathrm{{FRR}}}_{a}\left( t\right)$ and ${\mathrm{{FAR}}}_{a}\left( t\right)$ . In the following, we list several popular FR fairness metrics. All of them are used by the U.S. National Institute of Standards and Technology (NIST) in their FRVT report [Grother, 2022]. Those fairness metrics attempt to quantify the differentials in ${\left( {\mathrm{{FAR}}}_{a}\left( t\right) \right) }_{a \in \mathcal{A}}$ and ${\left( {\mathrm{{FRR}}}_{a}\left( t\right) \right) }_{a \in \mathcal{A}}$ . Since each fairness metric has two versions (one for the differentials in terms of FAR, the other in terms of FRR), we only present its FAR version. All metrics depend here on the threshold ${t}_{\alpha }$ which satisfies ${\mathrm{{FAR}}}_{\text{ total }}\left( {t}_{\alpha }\right) = \alpha \in \left\lbrack {0,1}\right\rbrack$ , meaning that the threshold is set so that it achieves a FAR equal to $\alpha$ for the global population, and not for some specific subgroup.
|
| 76 |
+
|
| 77 |
+
Max-min ratio. This metric has also been introduced by Conti et al. [2022], albeit for another choice of threshold ${t}_{\alpha }$ . Its advantage is that it is highly interpretable, but it is sensitive to low values in the
|
| 78 |
+
|
| 79 |
+
denominator.
|
| 80 |
+
|
| 81 |
+
$$
|
| 82 |
+
{\operatorname{FAR}}_{\min }^{\max }\left( \alpha \right) = \frac{\mathop{\max }\limits_{{a \in \mathcal{A}}}{\operatorname{FAR}}_{a}\left( {t}_{\alpha }\right) }{\mathop{\min }\limits_{{a \in \mathcal{A}}}{\operatorname{FAR}}_{a}\left( {t}_{\alpha }\right) }.
|
| 83 |
+
$$
|
| 84 |
+
|
| 85 |
+
Max-geomean ratio. This metric replaces the previous minimum by the geometric mean ${\mathrm{{FAR}}}^{ \dagger }\left( {t}_{\alpha }\right)$ of the values ${\left( {\mathrm{{FAR}}}_{a}\left( {t}_{\alpha }\right) \right) }_{a \in \mathcal{A}}$ , in order to be less sensitive to low values in the denominator.
|
| 86 |
+
|
| 87 |
+
$$
|
| 88 |
+
{\operatorname{FAR}}_{\text{ geomean }}^{\max }\left( \alpha \right) = \frac{\mathop{\max }\limits_{{a \in \mathcal{A}}}{\operatorname{FAR}}_{a}\left( {t}_{\alpha }\right) }{{\operatorname{FAR}}^{ \dagger }\left( {t}_{\alpha }\right) }.
|
| 89 |
+
$$
|
| 90 |
+
|
| 91 |
+
Log-geomean sum. It is a sum of the absolute values of normalized logarithms.
|
| 92 |
+
|
| 93 |
+
$$
|
| 94 |
+
{\operatorname{FAR}}_{\text{ geomean }}^{\log }\left( \alpha \right) = \mathop{\sum }\limits_{{a \in \mathcal{A}}}\left| {{\log }_{10}\frac{{\operatorname{FAR}}_{a}\left( {t}_{\alpha }\right) }{{\operatorname{FAR}}^{ \dagger }\left( {t}_{\alpha }\right) }}\right| .
|
| 95 |
+
$$
|
| 96 |
+
|
| 97 |
+
Gini coefficient. The Gini coefficient is a measure of inequality in a population. It ranges from a minimum value of zero, when all individuals are equal, to a theoretical maximum of one in an infinite population in which every individual except one has a size of zero.
|
| 98 |
+
|
| 99 |
+
$$
|
| 100 |
+
{\operatorname{FAR}}_{\operatorname{Gini}}\left( \alpha \right) = \frac{\left| \mathcal{A}\right| }{\left| \mathcal{A}\right| - 1}\frac{\mathop{\sum }\limits_{{a \in \mathcal{A}}}\mathop{\sum }\limits_{{b \in \mathcal{A}}}\left| {{\operatorname{FAR}}_{a}\left( {t}_{\alpha }\right) - {\operatorname{FAR}}_{b}\left( {t}_{\alpha }\right) }\right| }{2{\left| \mathcal{A}\right| }^{2}{\operatorname{FAR}}^{ \dagger }\left( {t}_{\alpha }\right) }.
|
| 101 |
+
$$
|
| 102 |
+
|
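All four metrics are simple functions of the per-group rates. As a sketch, given a mapping from each attribute value $a$ to its empirical ${\mathrm{{FAR}}}_{a}\left( {t}_{\alpha }\right)$ (the function and variable names below are hypothetical):

```python
import math

def fairness_metrics(far_by_group):
    """far_by_group: dict mapping attribute value a -> FAR_a(t_alpha) > 0.
    Returns the four FAR-based fairness metrics defined above."""
    vals = list(far_by_group.values())
    A = len(vals)
    geomean = math.prod(vals) ** (1.0 / A)  # FAR^dagger: geometric mean
    return {
        "max_min": max(vals) / min(vals),
        "max_geomean": max(vals) / geomean,
        "log_geomean": sum(abs(math.log10(v / geomean)) for v in vals),
        "gini": (A / (A - 1))
        * sum(abs(u - w) for u in vals for w in vals)
        / (2 * A**2 * geomean),
    }

# Two groups with FARs 1% and 4%: geomean is 2%, so the max-min ratio
# is 4, the max-geomean ratio is 2, and the Gini coefficient is 0.75.
print(fairness_metrics({"group0": 0.01, "group1": 0.04}))
```

Note that, following the formula above, the Gini coefficient here is normalized by the geometric mean ${\mathrm{{FAR}}}^{ \dagger }$ rather than the arithmetic mean used in the classical definition.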
| 103 |
+
Conti et al. [2022] argue that the choice of a threshold ${t}_{\alpha }$ achieving a global ${\mathrm{{FAR}}}_{\text{ total }} = \alpha$ is not entirely relevant since it depends on the relative proportions of each sensitive attribute value $a$ within the evaluation dataset, together with the relative proportion of intra-group impostors. They propose instead a threshold ${t}_{\alpha }$ such that each group $a$ satisfies ${\mathrm{{FAR}}}_{a}\left( {t}_{\alpha }\right) \leq \alpha$ . Since we are dealing with a single evaluation dataset, we do not use such a threshold choice, to be consistent with the last three fairness metrics. Other fairness metrics exist in the literature, such as the maximum difference in the values ${\left( {\mathrm{{FAR}}}_{a}\left( {t}_{\alpha }\right) \right) }_{a \in \mathcal{A}}$ used by Alasadi et al. [2019] and Dhar et al. [2021]. They have the disadvantage of not being normalized and are thus not interpretable, especially when comparing their values at different levels $\alpha$ .
|
| 104 |
+
|
| 105 |
+
§ 2 ASSESSING THE UNCERTAINTY OF FACE RECOGNITION METRICS THROUGH BOOTSTRAP
|
| 106 |
+
|
| 107 |
+
As previously explained, the ROC curves (and their related scalar summaries) of a similarity scoring function $s\left( {x,{x}^{\prime }}\right)$ (determined in practice by an encoder function to which cosine similarity is applied) provide the main tool to assess performance and fairness in face recognition. We now investigate how to bootstrap these functional criteria, in order to evaluate the uncertainty/variability inherent in their estimation based on (supposedly i.i.d.) sampling observations drawn from the statistical populations under study. Indeed, this evaluation is crucial to judge whether a candidate similarity scoring function meets the performance/fairness requirements in a trustworthy manner, as will be discussed on real examples in the next section.
|
| 108 |
+
|
| 109 |
+
Bootstrapping the ROC curve of a similarity scoring function. Extending the limit results in Hsieh and Turnbull [1996], the consistency of the empirical ROC curve (1) of a similarity scoring function $s\left( {x,{x}^{\prime }}\right)$ can be classically established, as well as its asymptotic Gaussianity (under additional hypotheses, involving the absolute continuity of distributions $F$ and $G$ in particular), in a standard multi-sample asymptotic framework, i.e. stipulating that, for all $k \in \{ 1,\ldots ,K\}$ , ${n}_{k}/N \rightarrow {\lambda }_{k} > 0$ as $N \rightarrow + \infty$ . Indeed, under appropriate mild technical assumptions, one may prove that the sequence of stochastic processes
|
| 110 |
+
|
| 111 |
+
$$
|
| 112 |
+
{\left\{ \sqrt{N}\left( {\operatorname{ROC}}_{N}\left( \alpha \right) - \operatorname{ROC}\left( \alpha \right) \right) \right\} }_{\alpha \in \left( {0,1}\right) }
|
| 113 |
+
$$
|
| 114 |
+
|
| 115 |
+
converges in distribution to a Gaussian law as $N \rightarrow \infty$ . However, this limit law can hardly be used to build (asymptotic) confidence bands for the true ROC curve (or confidence intervals for scalar summary ROC-based metrics) in practice, due to its great complexity (the limit law, depending on the unknown densities of $F$ and $G$ , is built from Brownian bridges and its approximate numerical simulation is a considerable challenge). Resampling techniques must be used instead, in order to mimic the random fluctuations of ${\operatorname{ROC}}_{N}\left( \alpha \right) - \operatorname{ROC}\left( \alpha \right)$ . Application of the (smoothed) bootstrap methodology to ROC analysis has been investigated at length in the bipartite ranking context, i.e. for binary classification data [Bertail et al., 2008]. In the classification framework, bootstrap versions of the empirical ROC curve are simply obtained by means of uniform sampling with replacement in the two statistical populations (positive and negative). In this case, the empirical true/false positive rates take the form of i.i.d. averages, which greatly differs from the present situation, where ${F}_{N}\left( t\right)$ is an average of independent mono-sample $U$ -statistics of degree 2, while ${G}_{N}\left( t\right)$ is a multi-sample $U$ -statistic of degree $\left( {1,1}\right)$ . As will be shown below and illustrated in Appendix B, the pairwise nature of the statistical quantity ${F}_{N}\left( t\right)$ is of great consequence, insofar as a 'naive' implementation of the bootstrap completely fails to reproduce ${\mathrm{{ROC}}}_{N}$ 's variability when applied to the latter. Indeed, it systematically leads to a serious underestimation of ${F}_{N}\left( t\right)$ , and consequently to an underestimation of ${\mathrm{{ROC}}}_{N}$ uniformly on $\left( {0,1}\right)$ .
For simplicity's sake, we describe the reason behind this phenomenon by considering the problem of bootstrapping the statistic ${F}_{N}\left( t\right)$ and explain next how to remedy it.
|
| 116 |
+
|
| 117 |
+
For all $1 \leq k \leq K$ , consider $\left( {{X}_{1 * }^{k},\ldots ,{X}_{{n}_{k} * }^{k}}\right)$ a bootstrap sample related to identity $k$ , drawn by simple sampling with replacement from original data $\left\{ \left( {{X}_{1}^{k},\ldots ,{X}_{{n}_{k}}^{k}}\right) \right\}$ . Recall that the original statistic is of the form:
|
| 118 |
+
|
| 119 |
+
$$
|
| 120 |
+
{F}_{N}\left( t\right) = \frac{1}{K}\mathop{\sum }\limits_{{k = 1}}^{K}{F}_{N}^{k}\left( t\right) \;\text{ with }\;{F}_{N}^{k}\left( t\right) = \frac{1}{\left( \begin{matrix} {n}_{k} \\ 2 \end{matrix}\right) }\mathop{\sum }\limits_{{1 \leq i < j \leq {n}_{k}}}{\mathbf{1}}_{s\left( {{X}_{i}^{k},{X}_{j}^{k}}\right) \leq t}.
|
| 121 |
+
$$
|
| 122 |
+
|
| 123 |
+
Using the previous bootstrap sample, we can compute a bootstrap version of ${F}_{N}^{k}\left( t\right)$ :
|
| 124 |
+
|
| 125 |
+
$$
|
| 126 |
+
{F}_{N * }^{k}\left( t\right) = \frac{1}{\left( \begin{matrix} {n}_{k} \\ 2 \end{matrix}\right) }\mathop{\sum }\limits_{{1 \leq i < j \leq {n}_{k}}}{\mathbf{1}}_{s\left( {{X}_{i * }^{k},{X}_{j * }^{k}}\right) \leq t}.
|
| 127 |
+
$$
|
| 128 |
+
|
| 129 |
+
${F}_{N}^{k}\left( t\right)$ is a (non-degenerate) $U$ -statistic of order 2 (an average over all pairs) with symmetric kernel ${\mathbf{1}}_{s\left( {x,{x}^{\prime }}\right) \leq t}$ , and thus involves no 'diagonal' terms of type ${\mathbf{1}}_{s\left( {{X}_{i}^{k},{X}_{i}^{k}}\right) \leq t}$ . Indeed, evaluating the similarity of an image with itself brings no information (it is identically equal to 1 when considering cosine similarity). By contrast, it is shown in Janssen [1997] that the bootstrap version ${F}_{N * }^{k}\left( t\right)$ of ${F}_{N}^{k}\left( t\right)$ is in expectation equal to its $V$ -statistic version, i.e. the version obtained by incorporating the diagonal terms in the average. In detail, denoting by ${\mathbf{E}}^{ * }\left\lbrack {\cdot \mid {X}_{1}^{k},\ldots ,{X}_{{n}_{k}}^{k}}\right\rbrack$ the conditional expectation with respect to $\left( {{X}_{1}^{k},\ldots ,{X}_{{n}_{k}}^{k}}\right)$ (i.e. the expectation over the randomness induced by the resampling), we have that:
|
| 130 |
+
|
| 131 |
+
$$
|
| 132 |
+
{\mathbf{E}}^{ * }\left\lbrack {{F}_{N * }^{k}\left( t\right) \mid {X}_{1}^{k},\ldots ,{X}_{{n}_{k}}^{k}}\right\rbrack = \frac{1}{{n}_{k}^{2}}\mathop{\sum }\limits_{{1 \leq i,j \leq {n}_{k}}}{\mathbf{1}}_{s\left( {{X}_{i}^{k},{X}_{j}^{k}}\right) \leq t}.
|
| 133 |
+
$$
|
| 134 |
+
|
| 135 |
+
Grouping all $K$ identities, we can compute a bootstrap version of ${F}_{N}\left( t\right)$ :
|
| 136 |
+
|
| 137 |
+
$$
|
| 138 |
+
{F}_{N * }\left( t\right) = \frac{1}{K}\mathop{\sum }\limits_{{k = 1}}^{K}\frac{1}{\left( \begin{matrix} {n}_{k} \\ 2 \end{matrix}\right) }\mathop{\sum }\limits_{{1 \leq i < j \leq {n}_{k}}}{\mathbf{1}}_{s\left( {{X}_{i * }^{k},{X}_{j * }^{k}}\right) \leq t}
|
| 139 |
+
$$
|
| 140 |
+
|
| 141 |
+
whose expectation is:
|
| 142 |
+
|
| 143 |
+
$$
|
| 144 |
+
{\bar{F}}_{N * }\left( t\right) \mathrel{\text{ := }} {\mathbf{E}}^{ * }\left\lbrack {{F}_{N * }\left( t\right) \mid {X}_{1}^{1},\ldots ,{X}_{{n}_{K}}^{K}}\right\rbrack = \frac{1}{K}\mathop{\sum }\limits_{{k = 1}}^{K}\frac{1}{{n}_{k}^{2}}\mathop{\sum }\limits_{{1 \leq i,j \leq {n}_{k}}}{\mathbf{1}}_{s\left( {{X}_{i}^{k},{X}_{j}^{k}}\right) \leq t}.
|
| 145 |
+
$$
|
| 146 |
+
|
| 147 |
+
This means that bootstrapping ${F}_{N}\left( t\right)$ yields values ${F}_{N * }\left( t\right)$ that are not centered around ${F}_{N}\left( t\right)$ , but around ${\bar{F}}_{N * }\left( t\right)$ . From Janssen [1997] (Theorem 3), we find that $\mathbf{P}\left\lbrack {\sqrt{N}\left( {{F}_{N * }\left( t\right) - {\bar{F}}_{N * }\left( t\right) }\right) \leq x \mid {X}_{1}^{1},\ldots ,{X}_{{n}_{K}}^{K}}\right\rbrack$ is a uniformly consistent estimator of $\mathbf{P}\left\lbrack {\sqrt{N}\left( {{F}_{N}\left( t\right) - F\left( t\right) }\right) \leq x}\right\rbrack$ . As a consequence, we can build confidence intervals for ${F}_{N}\left( t\right)$ in the following way: from the bootstrap samples, build a confidence interval for $\left( {{F}_{N * }\left( t\right) - {\bar{F}}_{N * }\left( t\right) }\right)$ and shift it by ${F}_{N}\left( t\right)$ .
|
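The effect of these hidden diagonal terms is easy to reproduce numerically. The following toy experiment (with a hand-built similarity matrix for a single identity; all values are illustrative) resamples the images with replacement and checks that the bootstrap mean of the pairwise statistic tracks the $V$ -statistic rather than ${F}_{N}^{k}\left( t\right)$ :

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Similarity matrix S[i, j] = s(X_i^k, X_j^k) for one identity with n = 8
# images; the diagonal is 1 (an image compared with itself, cosine case).
n, t = 8, 0.5
S = np.where((np.add.outer(range(n), range(n)) % 2) == 0, 0.1, 0.9)
np.fill_diagonal(S, 1.0)

def u_stat(idx):
    """Average of 1{S[i, j] <= t} over pairs of positions i < j in idx."""
    return float(np.mean([S[i, j] <= t
                          for i, j in itertools.combinations(idx, 2)]))

F_N = u_stat(range(n))        # U-statistic: off-diagonal pairs only -> 12/28
V_N = float(np.mean(S <= t))  # V-statistic: diagonal included      -> 24/64

# Naive bootstrap: resampling indices with replacement lets a "pair"
# repeat the same index, silently injecting diagonal terms.
boots = [u_stat(rng.integers(0, n, size=n)) for _ in range(5000)]
print(F_N, V_N, float(np.mean(boots)))  # bootstrap mean tracks V_N, not F_N
```

Since the diagonal indicators are all zero for $t < 1$ , the bootstrap replicates are biased downward, which is exactly the underestimation phenomenon described above.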
| 148 |
+
|
| 149 |
+
By contrast a naive bootstrap method, involving no recentering, can be applied to:
|
| 150 |
+
|
| 151 |
+
$$
|
| 152 |
+
{G}_{N}\left( t\right) = \frac{1}{\left( \begin{matrix} K \\ 2 \end{matrix}\right) }\mathop{\sum }\limits_{{1 \leq k < l \leq K}}{G}_{N}^{k,l}\left( t\right) \;\text{ with }\;{G}_{N}^{k,l}\left( t\right) = \frac{1}{{n}_{k}{n}_{l}}\mathop{\sum }\limits_{\substack{{1 \leq i \leq {n}_{k}} \\ {1 \leq j \leq {n}_{l}} }}{\mathbf{1}}_{s\left( {{X}_{i}^{k},{X}_{j}^{l}}\right) \leq t}.
|
| 153 |
+
$$
|
| 154 |
+
|
| 155 |
+
Since the two-sample $U$ -statistic ${G}_{N}^{k,l}\left( t\right)$ is of degree $\left( {1,1}\right)$ , i.e. of degree one in each sample, we have:
|
| 156 |
+
|
| 157 |
+
$$
|
| 158 |
+
{\mathbf{E}}^{ * }\left\lbrack {{G}_{N * }^{k,l}\left( t\right) \mid {X}_{1}^{k},\ldots ,{X}_{{n}_{k}}^{k},{X}_{1}^{l},\ldots ,{X}_{{n}_{l}}^{l}}\right\rbrack = {G}_{N}^{k,l}\left( t\right) \;\text{ and }\;{\bar{G}}_{N * }\left( t\right) = {G}_{N}\left( t\right) .
|
| 159 |
+
$$
|
| 160 |
+
|
| 161 |
+
The previous confidence interval method still works here: build a confidence interval for $\left( {{G}_{N * }\left( t\right) - {\bar{G}}_{N * }\left( t\right) }\right) = \left( {{G}_{N * }\left( t\right) - {G}_{N}\left( t\right) }\right)$ and shift it by ${G}_{N}\left( t\right)$ ; in this case, however, the bootstrap values ${G}_{N * }\left( t\right)$ are indeed centered around ${G}_{N}\left( t\right)$ . This construction extends naturally to the bootstrap of the empirical quantile function ${G}_{N}^{-1}$ . In theory, a smoothed version of the bootstrap, consisting of sampling from smoothed versions of the empirical distributions (by means of, e.g., a Gaussian kernel), should be used for bootstrapping quantiles. However, given the very large size of the pooled dataset here, smoothing can be ignored in practice.
|
| 162 |
+
|
| 163 |
+
Finally, we regroup the bootstrap of ${F}_{N}\left( t\right)$ and of ${G}_{N}^{-1}\left( \alpha \right)$ to present the bootstrap of the empirical ROC curve ${\operatorname{ROC}}_{N}\left( \alpha \right) = {F}_{N} \circ {G}_{N}^{-1}\left( {1 - \alpha }\right)$ . Using many bootstrap samples, a confidence interval is found for $\left( {{\operatorname{ROC}}_{N * }\left( \alpha \right) - {\overline{\operatorname{ROC}}}_{N * }\left( \alpha \right) }\right) = \left( {{F}_{N * } \circ {G}_{N * }^{-1}\left( {1 - \alpha }\right) - {\bar{F}}_{N * } \circ {G}_{N}^{-1}\left( {1 - \alpha }\right) }\right)$ , then shifted by ${\operatorname{ROC}}_{N}\left( \alpha \right)$ . A pseudo-code for building the confidence interval for ${\operatorname{ROC}}_{N}\left( \alpha \right)$ at level ${\alpha }_{CI} \in \left\lbrack {0,1}\right\rbrack$ is summarized in Algorithm 1. We highlight the significance of the recentering step and why a naive bootstrap does not work in Appendix B.
|
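A compact version of this confidence-interval construction can be sketched as follows. For readability, this sketch resamples precomputed scores directly and recenters the bootstrap replicates around the point estimate; the paper's Algorithm 1 instead resamples images within each identity and recenters around the $V$ -statistic versions ${\bar{F}}_{N * }$ and ${\bar{G}}_{N * }$ discussed above. All names and the weighting scheme (uniform over pooled impostor scores) are simplifying assumptions.

```python
import numpy as np

def roc_point(genuine_by_id, impostor_scores, alpha):
    """ROC_N(alpha) = F_N(G_N^{-1}(1 - alpha)), with uniform weights over
    the pooled impostor scores (a simplification of the paper's weighting
    by identity pair)."""
    t = np.quantile(impostor_scores, 1 - alpha)  # threshold t_alpha
    return float(np.mean([np.mean(s <= t) for s in genuine_by_id]))

def roc_confidence_interval(genuine_by_id, impostor_scores,
                            alpha, B=100, level=0.05, seed=0):
    """Percentile bootstrap CI for ROC_N(alpha) at confidence 1 - level:
    build a CI for (ROC_{N*} - center) and shift it by ROC_N."""
    rng = np.random.default_rng(seed)
    roc_n = roc_point(genuine_by_id, impostor_scores, alpha)
    deltas = []
    for _ in range(B):
        # resample scores with replacement, within identity and overall
        gen_b = [rng.choice(s, size=len(s)) for s in genuine_by_id]
        imp_b = rng.choice(impostor_scores, size=len(impostor_scores))
        deltas.append(roc_point(gen_b, imp_b, alpha) - roc_n)
    lo, hi = np.quantile(deltas, [level / 2, 1 - level / 2])
    return roc_n - float(hi), roc_n - float(lo)
```

Repeating this for every group-restricted pair population yields the per-race confidence bands of the next section.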
| 164 |
+
|
| 165 |
+
Bootstrapping fairness metrics. We apply the same bootstrap algorithm for all fairness metrics since they are functions of ${F}_{N}$ and ${G}_{N}$ . For instance, consider a fairness measure that depends on ${\mathrm{{FRR}}}_{a}\left( t\right)$ . ${\mathrm{{FRR}}}_{a}\left( t\right)$ would be computed as ${F}_{N}\left( t\right)$ for the classic fairness measure (equivalent of ${\mathrm{{ROC}}}_{N}$ above), as ${F}_{N * }\left( t\right)$ for the bootstrap fairness (equivalent of ${\mathrm{{ROC}}}_{N * }$ ), and as ${\bar{F}}_{N * }$ for the $V$ -statistic fairness (equivalent of ${\overline{\mathrm{{ROC}}}}_{N * }$ ), using pairs of attribute $a$ in all three cases.
|
| 166 |
+
|
| 167 |
+
§ 3 NUMERICAL EXPERIMENTS - DISCUSSION
|
| 168 |
+
|
| 169 |
+
We use as encoder the trained model${}^{1}$ ArcFace [Deng et al., 2019a], whose CNN architecture is a ResNet100 [Han et al., 2017]. It has been trained on the MS1M-RetinaFace dataset, introduced by Deng et al. [2019b] in the ICCV 2019 Lightweight Face Recognition Challenge. We choose the RFW dataset [Wang et al., 2019] as evaluation dataset. It is composed of ${40}\mathrm{k}$ face images from ${11}\mathrm{k}$ distinct identities. This dataset is also provided with ground-truth race labels (the four available labels are: African, Asian, Caucasian, Indian) and is widely used for fairness evaluation since it is equally distributed among the race subgroups, in terms of both images and identities. The official RFW protocol only considers a few matching pairs among all the possible pairs within the whole RFW dataset; this number of pairs is typically not enough to get good estimates of our fairness metrics at low FAR. To overcome this, we consider all possible same-race matching pairs among the whole RFW dataset. All images are pre-processed by the RetinaFace detector [Deng et al., 2019c] and are of size ${112} \times {112}$ pixels.
|
| 170 |
+
|
| 171 |
+
Our first experiment is the computation of the confidence bands at the ${95}\%$ confidence level $\left( {{\alpha }_{CI} = {0.05}}\right)$ for each intra-group ROC, i.e. the ROC corresponding to each race label. This is the output of our Algorithm 1 using $B = {100}$ bootstrap samples and the result is displayed in Figure 1. It can be observed that Caucasians have a better performance than other races and that the uncertainty makes all races potentially indistinguishable in terms of performance at high FAR levels. Notice that the uncertainty increases when either of the error rates FAR, FRR is low, which happens when only a few matching pairs are incorrectly classified, making the error rates highly sensitive to those pairs. To better quantify the uncertainty in the estimation of ${\operatorname{ROC}}_{N}\left( \alpha \right)$ , we compute the standard deviation of the $B = {100}$ bootstrap ROC curves ${\operatorname{ROC}}_{N * }\left( \alpha \right)$ , for each race label. For a fair comparison, we normalize this standard deviation by ${\operatorname{ROC}}_{N}\left( \alpha \right)$ (classic evaluation). The result is provided as a function of $\alpha$ in Figure 2. This normalized standard deviation is a natural proxy for the uncertainty in the estimation of the ROC of each race label. It is worth noting that the highest uncertainty is attained by Asians and Indians at low FAR levels and by Caucasians at high FAR levels. Note that Caucasians have the best performance at low FAR levels and, at the same time, the lowest uncertainty about it among all race labels.
|
| 172 |
+
|
| 173 |
+
${}^{1}$ https://github.com/deepinsight/insightface/tree/master/recognition/arcface_torch
|
| 174 |
+
|
| 175 |
+
|
| 176 |
+
|
| 177 |
+
Figure 1: Confidence bands at ${95}\%$ confidence level for the ROC of each race label. $B = {100}$ bootstrap samples are used. The classic intragroup ROC curves are depicted as solid lines.
|
| 178 |
+
|
| 179 |
+
|
| 180 |
+
|
| 181 |
+
Figure 2: Normalized standard deviation of $B = {100}$ intra-group bootstrap ROC curves, for each race label. The renormalization factor is the classic intra-group ROC curve.
|
| 182 |
+
|
| 183 |
+
Then, we investigate the uncertainty related to the fairness measures introduced above. The race label is used here as the sensitive attribute $a$ . We compute the previous normalized standard deviation for the considered fairness metrics, in the same way as for Figure 2. For each metric, we take $B = {100}$ bootstrap samples, giving 100 fairness values at each ${\mathrm{{FAR}}}_{\text{ total }} = \alpha$ level. For each $\alpha$ , the standard deviation of those values is computed, and then normalized by the classic fairness measure at this level $\alpha$ , for a fair comparison. As illustrated in Figure 3, the Gini coefficient and the log-geomean sum fairness metrics show similarly high uncertainty. The max-geomean ratio metric displays the lowest uncertainty, both in terms of FAR and FRR, which makes it particularly suitable for fairness evaluation. In addition, the max-geomean (and the max-min) ratio metrics have the significant advantage of being interpretable.
|
| 184 |
+
|
| 185 |
+
|
| 186 |
+
|
| 187 |
+
Figure 3: Normalized standard deviation of $B = {100}$ bootstrap fairness curves (as functions of ${\mathrm{{FAR}}}_{\text{ total }}$ ), for each fairness metric. The renormalization factor is the classic fairness measure.
|
| 188 |
+
|
| 189 |
+
While the gold standard by which fairness will be evaluated in the future is not fixed yet, we believe that it should definitely incorporate uncertainty measures, since ignoring them could lead to wrong conclusions. The bootstrap approach is simple and fast, and yet it has not been explored by the FR community.
|
NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/_QcreQjxHi/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,386 @@
|
| 1 |
+
# But Are You Sure? Quantifying Uncertainty in Model Explanations
|
| 2 |
+
|
| 3 |
+
Anonymous Author(s) Affiliation Address email
|
| 4 |
+
|
| 5 |
+
## Abstract
|
| 6 |
+
|
| 7 |
+
Even when a black-box model makes accurate predictions (e.g., whether it will rain tomorrow), it is difficult to extract principles from the model that improve human understanding (e.g., what set of atmospheric conditions best predict rainfall). Model explanations via explainability methods (e.g., LIME, Shapley values) can help by highlighting interpretable aspects of the model, such as the data features to which the model is most sensitive. However, these methods can be unstable and inconsistent, which often results in unreliable insights. Moreover, under the existence of many near-optimal models, there is no guarantee that explanations for a single model will agree with explanations from the true model that generated the data. In this work, instead of explaining a single best-fitting model, we develop principled methods to construct an uncertainty set for the "true explanation": the explanation from the (unknown) true model that generated the data. We give finite-sample guarantees that the uncertainty set we return includes the explanation for the true model with high probability. We show through synthetic experiments that our uncertainty sets have high fidelity to the explanations of the true model. We then report our findings on real-world data.
|
| 8 |
+
|
| 9 |
+
## 1 Introduction
|
| 10 |
+
|
| 11 |
+
Data is now collected at a much faster rate than can be processed directly by humans. Thus, machine learning is used to synthesize complex datasets into predictive models. For example, machine learning models can predict the 3D structure of proteins from their amino acid sequences [1] and forecast supply chain demand $\left\lbrack {2,3}\right\rbrack$ . However, modern models are often black boxes, meaning that even when they make accurate predictions, it is difficult to extract interpretable principles and intuitions. Whereas human experts can share their reasoning with other humans, predictive models typically lack this ability to communicate principles.
|
| 12 |
+
|
| 13 |
+
In response to this challenge, there has been growing interest in model explanations: human-interpretable descriptions of model predictions [4-10]. The explanations highlight aspects of the model that are particularly relevant for some downstream goal, such as calibrating trust in a model or identifying patterns in complex data. Popular explanations include Shapley values, LIME, integrated gradients, TCAV, and counterfactual explanations.
|
| 14 |
+
|
| 15 |
+
Use cases for model explanations can be organized around two goals: model auditing and scientific inquiry. In model auditing, the goal is to validate or debug the predictions of a trained model. For example, we might ask "Does this model rely on income to predict this individual's credit worthiness?". In contrast, in scientific inquiry the object of interest is the data generating distribution itself. An analogous question for scientific inquiry would be "Is income truly correlated with credit worthiness for people similar to this individual?" Explanations used for model auditing give insights about the model, while explanations for scientific inquiry give insights about the world. Note that a single explanation (e.g., the local relationship between income and credit worthiness) is usually well-defined for both a trained model and for a data generating distribution. Therefore, most explanations can be used for both model auditing and scientific inquiry. In this work, we focus on using explanations for scientific inquiry.
|
| 16 |
+
|
| 17 |
+

|
| 18 |
+
|
| 19 |
+
Figure 1: Expected absolute Shapley value ${\phi }_{i}^{\text{shap }}\left( f\right)$ explanations for the three features of a linear model. The true labels were sampled from the linear model ${y}^{\left( i\right) } = \left\lbrack {1,0,0}\right\rbrack \cdot {x}^{\left( i\right) } + {\epsilon }^{\left( i\right) }$ where ${\epsilon }^{\left( i\right) }$ is i.i.d. Gaussian noise. The red stars indicate the explanations for this true model. Features 1 and 2 are highly correlated $\left( {{\rho }_{12} = {0.99}}\right)$ , so Feature 2 predicts the label even though the true model assigns Feature 2 no weight. We fit linear models using ridge regression. Left: Uncertainty sets constructed by resampling the training data to estimate the distribution of the explanation of the ridge regression model. Note that the naive uncertainty sets do not include the true explanation. Center: Uncertainty sets constructed using uniform convergence results. The uncertainty sets are valid but wider due to their stronger guarantees. Right: Uncertainty sets constructed using conformal inference. The intervals are tighter but only guarantee coverage in expectation over the true model.
|
| 20 |
+
|
| 21 |
+
Explanations are already being used for scientific inquiry in many domains, such as materials discovery [11], genomics [12, 13], motor vehicle collisions [14], environmental science [15], and finance [16, 17]. Usually, a practitioner chooses a single "best-fitting" model and treats explanations of that model as representative of the data generating distribution. However, model explanations are known to be unstable (i.e., sensitive to small perturbations in the data) [18-23] and inconsistent (i.e., random variations in training algorithms can lead models trained on the same data to give different explanations) [24]. The problem is worsened by the phenomenon of model multiplicity: the existence of distinct models with comparable performance [25-27]. If there exist competing models, each of which provides a different explanation of the data-generating distribution, how can we tell which explanation is correct? [28] These issues threaten the applicability of existing explainability procedures for scientific inquiry. Given that explanations are known to vary widely among even near-optimal models [29], we cannot assume explanations from a model with good performance are representative of the data generating distribution. For example, in Figure 1 (left panel), trained models consistently disagree with the true explanation of a well-specified linear model.
|
| 22 |
+
|
| 23 |
+
In this work, we aim to develop simple and broadly-applicable procedures to use explanations for valid scientific inquiry. Instead of computing the explanation for a single best-fitting model, we give an uncertainty set containing plausible explanations for the (unknown) data generating distribution. This uncertainty set is guaranteed to include the correct explanation with high probability. Our main contributions include:
|
| 24 |
+
|
| 25 |
+
- We give simple examples where existing explainability procedures fail to recover the explanation of the data generating distribution.
|
| 26 |
+
|
| 27 |
+
- We propose three simple algorithms for rigorously inferring the explanation of the data generating distribution. One algorithm applies to tractable Bayesian models, one to intractable Bayesian models, and one to frequentist models.
|
| 28 |
+
|
| 29 |
+
- We show that all three algorithms give uncertainty sets that include the correct explanation with guaranteed high probability.
|
| 30 |
+
|
| 31 |
+
## 2 Framework

We consider the task of using features $x \in \mathcal{X}$ to predict an outcome $y \in \mathcal{Y}$. Given a dataset of $n$ i.i.d. pairs $D = \left\{ {\left( {{x}_{1},{y}_{1}}\right) ,\ldots ,\left( {{x}_{n},{y}_{n}}\right) }\right\}$, the learning task is to select a model $f$ from a model class $\mathcal{F} = \{ f : \mathcal{X} \rightarrow \mathcal{P}\left( \mathcal{Y}\right) \}$ to minimize a loss function $\ell \left( {f\left( x\right) , y}\right)$ in expectation. Here, $\mathcal{P}\left( \mathcal{Y}\right)$ is the set of probability measures over $\mathcal{Y}$. Furthermore, we are interested in an explanation $\phi : \mathcal{F} \rightarrow \Phi$ that assigns to every model an interpretation in some space $\Phi$. For example, we can let $\phi$ map any $f \in \mathcal{F}$ to ${\phi }_{i, x}^{\text{shap }}\left( f\right)$, the Shapley value of the $i$-th feature at the feature vector $x$, with $D$ as the reference dataset; or to the expected absolute Shapley value of the $i$-th feature, ${\phi }_{i}^{\text{shap }}\left( f\right) \mathrel{\text{:=}} {\mathbb{E}}_{x}\left\lbrack \left| {{\phi }_{i, x}^{\text{shap }}\left( f\right) }\right| \right\rbrack$. In binary classification, where $y \in \{ 0,1\}$, one can consider a counterfactual explanation ${\phi }_{x, + }^{\mathrm{{CF}}}\left( \widehat{f}\right)$, which returns the closest point ${x}^{\prime }$ to $x$ such that the positive class is predicted to be more likely: $\widehat{P}\left( {Y = 1 \mid X = {x}^{\prime }}\right) > {0.5}$.
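
The Shapley definitions above admit a closed form in the simplest case: for a linear model $f\left( x\right) = w \cdot x$ with independently distributed features, the (interventional) Shapley value of feature $i$ at input $x$ is ${w}_{i}\left( {{x}_{i} - {\mathbb{E}}_{x}\left\lbrack {x}_{i}\right\rbrack }\right)$, so ${\phi }_{i}^{\text{shap }}$ can be estimated directly. A minimal numpy sketch (the data here are illustrative, not from the paper's experiments):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: three independent features; true model y = [1, 0, 0] . x.
X = rng.normal(size=(1000, 3))
w = np.array([1.0, 0.0, 0.0])

# For a linear model with independent features, the Shapley value of
# feature i at input x is w_i * (x_i - E[x_i]).
phi_local = w * (X - X.mean(axis=0))        # shape (n, 3)

# Expected absolute Shapley value: phi_i^shap = E_x |phi_{i,x}^shap|.
phi_global = np.abs(phi_local).mean(axis=0)
print(phi_global)  # only the first feature receives nonzero attribution
```

With correlated features (as in Figure 1), a fitted model may spread weight across correlated features, so estimated attributions can disagree with the true model's, which is exactly the inference problem studied here.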
In a frequentist treatment, suppose the model is well-specified, so there exists some model ${f}^{ * } \in \mathcal{F}$ in the model class that gives the true conditional distribution $p\left( {y \mid x}\right)$ . We refer to ${f}^{ * }$ as the true model since it exactly reflects the data generating distribution. We are interested in the explanation of the true model $\phi \left( {f}^{ * }\right)$ . Since we do not know the true model, we use some model-fitting algorithm $\mathcal{A} : \mathcal{D} \rightarrow \mathcal{F}$ that takes as input a dataset $D$ in the space $\mathcal{D} = \mathop{\bigcup }\limits_{{n = 1}}^{\infty }{\left( \mathcal{X} \times \mathcal{Y}\right) }^{n}$ and outputs a model $\widehat{f} = \mathcal{A}\left( D\right)$ . For example, in empirical risk minimization we choose the model that minimizes the loss on the training data $\widehat{f} = \arg \mathop{\min }\limits_{{f \in \mathcal{F}}}\frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}\ell \left( {f\left( {x}_{i}\right) ,{y}_{i}}\right)$ . Alternatively, in a Bayesian model, we assume a prior distribution $p\left( {f}^{ * }\right)$ over the model class. In this case, we want to compute the distribution of the true explanation given the dataset $p\left( {\phi \left( {f}^{ * }\right) \mid D}\right)$ .
## 3 Quantifying Uncertainty in Explanations

In this section, we describe three approaches for quantifying uncertainty in explanations. Our goal is to use the data to construct an uncertainty set $C = C\left( D\right)$ that includes the true explanation with probability at least $1 - \alpha$ for some chosen confidence level $\alpha \in \left( {0,1}\right)$:

$$
\mathbb{P}\left( {\phi \left( {f}^{ * }\right) \in C}\right) \geq 1 - \alpha \tag{1}
$$

In a frequentist setting, this type of guarantee requires us to call on uniform convergence results from learning theory. This approach has important drawbacks (first among them, the resulting uncertainty sets tend to be large), so we also introduce two algorithms for Bayesian models, one for Bayesian models with tractable posteriors and one for Bayesian models regardless of the tractability of the posterior, which tend to produce tighter uncertainty sets when they apply. In the remainder of this section, we introduce the three approaches and their associated guarantees.
### 3.1 Frequentist Confidence Intervals

In this section, we introduce a method for constructing valid confidence intervals in a frequentist setting, when the model class is sufficiently simple. We measure simplicity in a learning-theoretic sense; our results hold for model classes that satisfy uniform convergence, a property often derived using the Vapnik-Chervonenkis (VC) dimension or Rademacher complexity of a model class.

Uniform convergence states that the empirical loss ${\mathcal{L}}_{n}\left( f\right) = \frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}\ell \left( {f\left( {x}_{i}\right) ,{y}_{i}}\right)$ converges to the population loss $\mathcal{L}\left( f\right) = \mathbb{E}\left\lbrack {\ell \left( {f\left( x\right) , y}\right) }\right\rbrack$ "uniformly" across the model class as the number of training samples $n$ goes to infinity. Formally, uniform convergence says that with probability at least $1 - \alpha$,

$$
\mathop{\sup }\limits_{{f \in \mathcal{F}}}\left| {\mathcal{L}\left( f\right) - {\mathcal{L}}_{n}\left( f\right) }\right| \leq \epsilon \left( \alpha \right) \tag{2}
$$

where $\epsilon \left( \alpha \right) \geq 0$ is a value depending on the chosen confidence level $\alpha$. First, we note that uniform convergence gives us a confidence set for the true model. Then, we can bound the explanation of the true model by computing the most extreme explanations within this confidence set. It is easy to show that uniform convergence bounds the excess empirical risk of the true model.

Lemma 1. If uniform convergence holds, then with probability at least $1 - \alpha$,

$$
{\mathcal{L}}_{n}\left( {f}^{ * }\right) \leq \mathop{\inf }\limits_{{f \in \mathcal{F}}}{\mathcal{L}}_{n}\left( f\right) + {2\epsilon }\left( \alpha \right) . \tag{3}
$$
Lemma 1 is a basic result from learning theory. (See Appendix B for a proof). We can construct a confidence set for the true model,

$$
{\mathcal{F}}_{\alpha } = \left\{ {f \in \mathcal{F} : {\mathcal{L}}_{n}\left( f\right) \leq \mathop{\inf }\limits_{{{f}^{\prime } \in \mathcal{F}}}{\mathcal{L}}_{n}\left( {f}^{\prime }\right) + {2\epsilon }\left( \alpha \right) }\right\} , \tag{4}
$$

which contains the true model with high probability, $\mathbb{P}\left( {{f}^{ * } \in {\mathcal{F}}_{\alpha }}\right) \geq 1 - \alpha$, by Lemma 1. Thus, the set of explanations corresponding to ${\mathcal{F}}_{\alpha }$, namely ${C}_{UC} = \left\{ {\phi \left( f\right) : f \in {\mathcal{F}}_{\alpha }}\right\}$, includes the true explanation $\phi \left( {f}^{ * }\right)$ with high probability:

$$
\mathbb{P}\left( {\phi \left( {f}^{ * }\right) \in {C}_{UC}}\right) \geq 1 - \alpha \tag{5}
$$

While this confidence set has the desired coverage guarantee, it is not immediately obvious how to compute it, since we usually cannot realize ${\mathcal{F}}_{\alpha }$ in practice due to computational limitations. See Appendix C for an algorithm to approximate the confidence interval ${C}_{UC}$.
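
As a concrete illustration of ${\mathcal{F}}_{\alpha }$ and ${C}_{UC}$, the following sketch uses a one-parameter model class ${f}_{\theta }\left( x\right) = {\theta x}$ with squared loss, where the confidence set can be approximated by a grid search. The value of $\epsilon \left( \alpha \right)$ is a placeholder, since in practice it would come from a VC or Rademacher bound:

```python
import numpy as np

rng = np.random.default_rng(1)

# One-parameter model class f_theta(x) = theta * x, squared loss.
x = rng.normal(size=200)
y = 1.0 * x + 0.5 * rng.normal(size=200)   # true theta* = 1

thetas = np.linspace(-2.0, 4.0, 601)       # grid over the model class
emp_loss = ((y[None, :] - thetas[:, None] * x[None, :]) ** 2).mean(axis=1)

eps = 0.1  # placeholder for eps(alpha); normally from a uniform-convergence bound

# F_alpha: all models whose empirical loss is within 2*eps of the minimum.
near_opt = emp_loss <= emp_loss.min() + 2 * eps

# Take the explanation to be the coefficient theta itself; C_UC is then the
# range of theta over the near-optimal set.
C_UC = (thetas[near_opt].min(), thetas[near_opt].max())
print(C_UC)
```

The interval contains the true coefficient $\theta^{*} = 1$ whenever the uniform convergence event holds, at the cost of being wider than a point estimate.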
### 3.2 Prior with Tractable Posterior

The algorithm in the previous section guarantees coverage for any true function. A natural question to ask is: can we get tighter guarantees if we only require coverage on average, when the true model is distributed according to some known distribution? This is often the setting in Bayesian statistics, where we have a prior distribution $p\left( {f}^{ * }\right)$ over the true model. When the model is tractable, we can exactly compute the posterior distribution $p\left( {{f}^{ * } \mid D}\right)$ given the data.

A useful quantification of uncertainty for tractable Bayesian models is a credible interval from the induced posterior distribution for the explanation $p\left( {\phi \left( f\right) \mid D}\right)$. That is, if we sample a model according to the posterior distribution, we want an interval $C \subseteq \Phi$ that includes the explanation for this model with probability at least $1 - \alpha$:

$$
\mathbb{P}\left( {\phi \left( f\right) \in C \mid D}\right) \geq 1 - \alpha \tag{6}
$$

We can get this guarantee by estimating the $\left( {\alpha /2}\right)$- and $\left( {1 - \alpha /2}\right)$-quantiles of the induced posterior for the explanation $p\left( {\phi \left( f\right) \mid D}\right)$, e.g., by drawing random samples from the induced posterior. First, we independently sample $T$ models ${f}_{1},\ldots ,{f}_{T}$ from the posterior $p\left( {f \mid D}\right)$. We explain each model to get $T$ explanations $\phi \left( {f}_{1}\right) ,\ldots ,\phi \left( {f}_{T}\right)$, which are independently distributed according to the induced posterior $p\left( {\phi \left( f\right) \mid D}\right)$. If the true model ${f}^{ * }$ is distributed according to the posterior, then $\phi \left( {f}^{ * }\right)$ and $\phi \left( {f}_{1}\right) ,\ldots ,\phi \left( {f}_{T}\right)$ are i.i.d. random variables. This means that $\phi \left( {f}^{ * }\right)$ is equally likely to be the smallest, second smallest, ..., largest element of this collection. If we define the ranking function $R\left( u\right) = \mathop{\sum }\limits_{{t = 1}}^{T}\mathbb{1}\left\{ {u \leq \phi \left( {f}_{t}\right) }\right\}$, then $R\left( {\phi \left( {f}^{ * }\right) }\right)$ is distributed uniformly on the set $\{ 0,1,2,\ldots , T\}$. Thus, if we define the interval ${C}_{\text{Bayes }}$ with lower bound and upper bound as the $\left\lfloor {\frac{\alpha }{2}\left( {T + 1}\right) }\right\rfloor /T$-quantile and $\left\lceil {\left( {1 - \frac{\alpha }{2}}\right) \left( {T + 1}\right) }\right\rceil /T$-quantile (respectively) of the set $\left\{ {\phi \left( {f}_{1}\right) ,\ldots ,\phi \left( {f}_{T}\right) }\right\}$, then

$$
\mathbb{P}\left( {\phi \left( {f}^{ * }\right) \in {C}_{\text{Bayes }} \mid D}\right) \geq 1 - \alpha \tag{7}
$$

Both ${f}^{ * }$ and ${C}_{\text{Bayes }}$ are random variables in Equation (7), due to the prior over ${f}^{ * }$ and the randomly sampled models ${f}_{1},\ldots ,{f}_{T}$, respectively.
Algorithm 1: BEI: Bayesian Explanation Intervals

---

Input : Posterior distribution $p\left( {f \mid D}\right)$, explanation algorithm $\phi$

for $t = 1,\ldots , T$ do

Sample a model ${f}_{t} \sim p\left( {f \mid D}\right)$

Compute an explanation $\phi \left( {f}_{t}\right)$ for the sampled model

end

Return: the confidence interval ${C}_{\text{Bayes }}$ with

lower bound $\operatorname{Quantile}\left( {\left\{ {\phi \left( {f}_{1}\right) ,\ldots ,\phi \left( {f}_{T}\right) }\right\} ;\left\lfloor {\frac{\alpha }{2}\left( {T + 1}\right) }\right\rfloor /T}\right)$, and

upper bound $\operatorname{Quantile}\left( {\left\{ {\phi \left( {f}_{1}\right) ,\ldots ,\phi \left( {f}_{T}\right) }\right\} ;\left\lceil {\left( {1 - \frac{\alpha }{2}}\right) \left( {T + 1}\right) }\right\rceil /T}\right)$

---
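
A minimal sketch of BEI for a model with a tractable posterior, using conjugate one-dimensional Bayesian linear regression with known noise variance; the data, prior, and choice of explanation (the coefficient magnitude) are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Data from y = 2x + noise; conjugate model, so the posterior is Gaussian.
x = rng.normal(size=100)
y = 2.0 * x + rng.normal(size=100)

# Posterior for the slope theta under prior N(0, 1) and unit noise variance.
post_var = 1.0 / (1.0 + x @ x)
post_mean = post_var * (x @ y)

T, alpha = 1000, 0.1
thetas = rng.normal(post_mean, np.sqrt(post_var), size=T)  # f_t ~ p(f | D)
phis = np.abs(thetas)            # explanation phi(f_t): coefficient magnitude

# Interval endpoints from order statistics, as in Algorithm 1.
lo_q = np.floor(alpha / 2 * (T + 1)) / T
hi_q = min(np.ceil((1 - alpha / 2) * (T + 1)) / T, 1.0)  # clip if index exceeds T
C_Bayes = (np.quantile(phis, lo_q), np.quantile(phis, hi_q))
print(C_Bayes)
```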
### 3.3 Prior with Intractable Posterior

There are many models for which the posterior distribution is intractable and would be too expensive to compute exactly. When this happens, standard approaches are to either perform approximate inference or change the model so that posterior inference is tractable. However, approximate inference does not perfectly recover the posterior, which means guarantees that apply under the approximate posterior may not apply under the exact posterior. The option to change our model is also not satisfying: if we fit the data with an altered model, our explanations no longer pertain to the true model. In this section, we introduce a third option that alleviates these challenges: we construct uncertainty sets that only require us to know the prior, instead of the posterior.

In exchange for not requiring the posterior distribution, we only get a marginal coverage guarantee:

$$
\mathbb{P}\left( {\phi \left( {f}^{ * }\right) \in {C}_{\text{Conformal }}}\right) \geq 1 - \alpha \tag{8}
$$

The algorithm works by first sampling $T$ "true models", ${f}_{1},\ldots ,{f}_{T}$, independently from the prior. If the predictions from ${f}_{t}$ are probability distributions, then we can resample new labels using the model, ${y}_{i}^{t} \sim {f}_{t}\left( {x}_{i}\right)$, $i = 1,\ldots , n$. By pairing each original input ${x}_{i}$ with the corresponding resampled label ${y}_{i}^{t}$, we have a dataset ${D}_{t} = \left\{ {\left( {{x}_{1},{y}_{1}^{t}}\right) ,\ldots ,\left( {{x}_{n},{y}_{n}^{t}}\right) }\right\}$ drawn from the model ${f}_{t}$. We can then train a model ${\widehat{f}}_{t} = \mathcal{A}\left( {D}_{t}\right)$ on this new dataset. By computing explanations $\phi \left( {f}_{t}\right)$ and $\phi \left( {\widehat{f}}_{t}\right)$ for these models, we now have $T$ i.i.d. comparisons of true and estimated explanations. If the true model ${f}^{ * }$ for our original dataset was drawn according to the prior distribution, then the pair $\left( {\phi \left( {f}^{ * }\right) ,\phi \left( \widehat{f}\right) }\right)$ corresponding to the original dataset is i.i.d. with the simulated pairs $\left( {\phi \left( {f}_{1}\right) ,\phi \left( {\widehat{f}}_{1}\right) }\right) ,\ldots ,\left( {\phi \left( {f}_{T}\right) ,\phi \left( {\widehat{f}}_{T}\right) }\right)$. This means that we can apply conformal prediction. See Algorithm 2 for a detailed description of Conformal Explanation Intervals (CEI).
Algorithm 2: CEI: Conformal Explanation Intervals

---

Input : Model-fitting algorithm $\mathcal{A}$, dataset $D = \left\{ {\left( {{x}_{1},{y}_{1}}\right) ,\ldots ,\left( {{x}_{n},{y}_{n}}\right) }\right\}$

Input : Nonconformity score $s : \Phi \times \Phi \rightarrow \mathbb{R}$

Train a model $\widehat{f} = \mathcal{A}\left( D\right)$ using the dataset

Explain the trained model $\widehat{\phi } = \phi \left( \widehat{f}\right)$

for $t = 1,\ldots , T$ do

Sample a model ${f}_{t} \sim p\left( f\right)$

Sample a dataset of labels ${y}_{i}^{t} \sim {f}_{t}\left( {x}_{i}\right)$, $i = 1,\ldots , n$

Define the synthetic dataset ${D}_{t} = \left\{ {\left( {{x}_{1},{y}_{1}^{t}}\right) ,\ldots ,\left( {{x}_{n},{y}_{n}^{t}}\right) }\right\}$

Train a model ${\widehat{f}}_{t} = \mathcal{A}\left( {D}_{t}\right)$

Explain the sampled model ${\phi }_{t} = \phi \left( {f}_{t}\right)$ and the trained model ${\widehat{\phi }}_{t} = \phi \left( {\widehat{f}}_{t}\right)$

Compute the nonconformity score ${s}_{t} = s\left( {{\phi }_{t},{\widehat{\phi }}_{t}}\right)$

end

Set the threshold $\tau$ as the $\left\lceil {\left( {1 - \alpha }\right) \left( {T + 1}\right) }\right\rceil /T$-quantile of the set $\left\{ {{s}_{1},\ldots ,{s}_{T}}\right\}$

Return: the confidence interval ${C}_{\text{Conformal }} = \{ \varphi \in \Phi : s\left( {\varphi ,\phi \left( \widehat{f}\right) }\right) \leq \tau \}$

---
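
A minimal sketch of CEI with a one-parameter least-squares model, a standard Gaussian prior over the slope, and the nonconformity score $s\left( {\varphi ,{\varphi }^{\prime }}\right) = \left| {\varphi - {\varphi }^{\prime }}\right|$; all of these choices are illustrative rather than the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(3)

def fit(x, y):
    """Least-squares slope: the model-fitting algorithm A."""
    return (x @ y) / (x @ x)

n, T, alpha = 100, 500, 0.1
x = rng.normal(size=n)
y = 1.5 * x + rng.normal(size=n)          # the observed dataset D
phi_hat = abs(fit(x, y))                  # explanation of the trained model

# Simulate (true model, refit model) pairs from the prior N(0, 1) over the slope.
scores = np.empty(T)
for t in range(T):
    theta_t = rng.normal()                          # f_t ~ p(f)
    y_t = theta_t * x + rng.normal(size=n)          # labels y_i^t ~ f_t(x_i)
    theta_fit = fit(x, y_t)                         # f_hat_t = A(D_t)
    scores[t] = abs(abs(theta_t) - abs(theta_fit))  # s_t = s(phi_t, phi_hat_t)

# Threshold tau: the ceil((1 - alpha)(T + 1))/T-quantile of the scores.
tau = np.quantile(scores, min(np.ceil((1 - alpha) * (T + 1)) / T, 1.0))

# C_Conformal = {phi : |phi - phi_hat| <= tau}, truncated at zero.
C_Conformal = (max(phi_hat - tau, 0.0), phi_hat + tau)
print(C_Conformal)
```

The guarantee of Equation (8) is marginal: over repeated draws of the true model from the prior and of the data, the interval covers $\phi \left( {f}^{ * }\right)$ at rate at least $1 - \alpha$.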
## 4 Experiments

Experimental Setup We train neural networks to predict a real-valued label. The model outputs a mean and variance for a Gaussian distribution, and is trained with the negative log-likelihood loss. The architecture has 2 hidden layers, each with 100 neurons, and uses ReLU activations.

We compute uncertainty sets for the explanation of the true model using the conformal explanation intervals method. As the explanation, we use the average of the absolute value of the Shapley value of the feature across the dataset, a type of feature importance score. Computing conformal explanation intervals requires us to place a prior over the model class. We set the prior distribution for each weight to be Gaussian with zero mean and variance equal to the reciprocal of the dimension of the layer. The priors for the biases are standard Gaussians.
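
A sketch of sampling a single network from this prior, assuming "the dimension of the layer" means the fan-in of each weight matrix; the input dimension below is illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_prior(layer_dims):
    """Sample one network: weights ~ N(0, 1/fan_in), biases ~ N(0, 1)."""
    params = []
    for d_in, d_out in zip(layer_dims[:-1], layer_dims[1:]):
        W = rng.normal(0.0, np.sqrt(1.0 / d_in), size=(d_in, d_out))
        b = rng.normal(0.0, 1.0, size=d_out)
        params.append((W, b))
    return params

# Two hidden layers of 100 units, two outputs (Gaussian mean and variance).
params = sample_prior([8, 100, 100, 2])
print([W.shape for W, _ in params])
```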
Experimental Results We find that conclusions about the relative importance of features can only be made for some datasets. For example, for the MPG dataset, the features displacement,

Figure 2: Feature importance scores (as measured by the mean Shapley value of each feature across the dataset). Confidence intervals are computed using the conformal explanation intervals method.

horsepower, and weight have high importance with low uncertainty. However, in the PROTEIN dataset, it is difficult to make any meaningful conclusions about the relative importance of features, possibly due to the existence of competing models that use different features. Additional experimental results are included in Appendix D.

Limitations The validity of these confidence intervals depends on our trusting the prior distribution. However, it is difficult to design meaningful prior distributions for neural networks. In some cases, it may be advantageous to use simpler model classes where one can more readily construct meaningful priors.
References

[1] John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Žídek, Anna Potapenko, et al. Highly accurate protein structure prediction with AlphaFold. Nature, 596(7873):583-589, 2021.

[2] Real Carbonneau, Kevin Laframboise, and Rustam Vahidov. Application of machine learning techniques for supply chain demand forecasting. European Journal of Operational Research, 184(3):1140-1154, 2008.

[3] Rohit Sharma, Sachin S Kamble, Angappa Gunasekaran, Vikas Kumar, and Anil Kumar. A systematic literature review on machine learning applications for sustainable agriculture supply chain performance. Computers & Operations Research, 119:104926, 2020.

[4] Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. "Why should I trust you?" Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1135-1144, 2016.

[5] Scott M Lundberg and Su-In Lee. A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems 30, pages 4765-4774. Curran Associates, Inc., 2017.

[6] Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. Anchors: High-precision model-agnostic explanations. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.

[7] Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In International Conference on Machine Learning, pages 3319-3328. PMLR, 2017.

[8] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034, 2013.

[9] Pang Wei Koh and Percy Liang. Understanding black-box predictions via influence functions. In International Conference on Machine Learning, pages 1885-1894. PMLR, 2017.

[10] Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viegas, et al. Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (TCAV). In International Conference on Machine Learning, pages 2668-2677. PMLR, 2018.

[11] Paul Raccuglia, Katherine C Elbert, Philip DF Adler, Casey Falk, Malia B Wenny, Aurelio Mollo, Matthias Zeller, Sorelle A Friedler, Joshua Schrier, and Alexander J Norquist. Machine-learning-assisted materials discovery using failed experiments. Nature, 533(7601):73-76, 2016.

[12] Yue Bi, Dongxu Xiang, Zongyuan Ge, Fuyi Li, Cangzhi Jia, and Jiangning Song. An interpretable prediction model for identifying N7-methylguanosine sites based on XGBoost and SHAP. Molecular Therapy-Nucleic Acids, 22:362-372, 2020.

[13] Pal V Johnsen, Signe Riemer-Sørensen, Andrew Thomas DeWan, Megan E Cahill, and Mette Langaas. A new method for exploring gene-gene and gene-environment interactions in GWAS with tree ensemble methods and SHAP values. BMC Bioinformatics, 22(1):1-29, 2021.

[14] Xiao Wen, Yuanchang Xie, Lingtao Wu, and Liming Jiang. Quantifying and comparing the effects of key risk factors on various types of roadway segment crashes with LightGBM and SHAP. Accident Analysis & Prevention, 159:106261, 2021.

[15] Xinzhi Zhou, Haijia Wen, Ziwei Li, Hui Zhang, and Wengang Zhang. An interpretable model for the susceptibility of rainfall-induced shallow landslides based on SHAP and XGBoost. Geocarto International, pages 1-27, 2022.

[16] Sami Ben Jabeur, Salma Mefteh-Wali, and Jean-Laurent Viviani. Forecasting gold price with the XGBoost algorithm and SHAP interaction values. Annals of Operations Research, pages 1-21, 2021.

[17] Karim El Mokhtari, Ben Peachey Higdon, and Ayşe Başar. Interpreting financial time series with SHAP values. In Proceedings of the 29th Annual International Conference on Computer Science and Software Engineering, pages 166-172, 2019.

[18] Amirata Ghorbani, Abubakar Abid, and James Zou. Interpretation of neural networks is fragile. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 3681-3688, 2019.

[19] Dylan Slack, Sophie Hilgard, Emily Jia, Sameer Singh, and Himabindu Lakkaraju. Fooling LIME and SHAP: Adversarial attacks on post hoc explanation methods. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, pages 180-186, 2020.

[20] Ann-Kathrin Dombrowski, Maximillian Alber, Christopher Anders, Marcel Ackermann, Klaus-Robert Müller, and Pan Kessel. Explanations can be manipulated and geometry is to blame. Advances in Neural Information Processing Systems, 32, 2019.

[21] Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, and Been Kim. Sanity checks for saliency maps. Advances in Neural Information Processing Systems, 31, 2018.

[22] David Alvarez-Melis and Tommi S Jaakkola. On the robustness of interpretability methods. arXiv preprint arXiv:1806.08049, 2018.

[23] Himabindu Lakkaraju, Nino Arsov, and Osbert Bastani. Robust and stable black box explanations. In International Conference on Machine Learning, pages 5628-5638. PMLR, 2020.

[24] Eunjin Lee, David Braines, Mitchell Stiffler, Adam Hudler, and Daniel Harborne. Developing the sensitivity of LIME for better machine learning explanation. In Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications, volume 11006, pages 349-356. SPIE, 2019.

[25] Charles Marx, Flavio Calmon, and Berk Ustun. Predictive multiplicity in classification. In International Conference on Machine Learning, pages 6765-6774. PMLR, 2020.

[26] Alexander D'Amour, Katherine Heller, Dan Moldovan, Ben Adlam, Babak Alipanahi, Alex Beutel, Christina Chen, Jonathan Deaton, Jacob Eisenstein, Matthew D Hoffman, et al. Underspecification presents challenges for credibility in modern machine learning. arXiv preprint arXiv:2011.03395, 2020.

[27] Emily Black, Manish Raghavan, and Solon Barocas. Model multiplicity: Opportunities, concerns, and solutions. 2022.

[28] Leo Breiman. Statistical modeling: The two cultures (with comments and a rejoinder by the author). Statistical Science, 16(3):199-231, 2001.

[29] Jiayun Dong and Cynthia Rudin. Variable importance clouds: A way to explore variable importance for the set of good models. arXiv preprint arXiv:1901.03209, 2019.

[30] LS Shapley. A value for n-person games. In Contributions to the Theory of Games (HW Kuhn and AW Tucker, eds.), pages 307-317. Annals of Mathematics Studies, 28, 1953.

[31] Scott M Lundberg and Su-In Lee. A unified approach to interpreting model predictions. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pages 4768-4777, 2017.

[32] Christopher Frye, Colin Rowat, and Ilya Feige. Asymmetric Shapley values: incorporating causal knowledge into model-agnostic explainability. Advances in Neural Information Processing Systems, 33, 2020.

[33] Tom Heskes, Evi Sijben, Ioan Gabriel Bucur, and Tom Claassen. Causal Shapley values: Exploiting causal knowledge to explain individual predictions of complex models. arXiv preprint arXiv:2011.01625, 2020.

[34] Ruth Fong and Andrea Vedaldi. Explanations for attributing deep neural network predictions. In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, pages 149-167. Springer, 2019.

[35] Tianqi Chen and Carlos Guestrin. XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 785-794, 2016.

[36] Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. Learning important features through propagating activation differences. In International Conference on Machine Learning, pages 3145-3153. PMLR, 2017.

[37] Yilin Ning, Marcus Eng Hock Ong, Bibhas Chakraborty, Benjamin Alan Goldstein, Daniel Shu Wei Ting, Roger Vaughan, and Nan Liu. Shapley variable importance cloud for interpretable machine learning. Patterns, 3(4):100452, 2022.

[38] Javier Antorán, Umang Bhatt, Tameem Adel, Adrian Weller, and José Miguel Hernández-Lobato. Getting a CLUE: A method for explaining uncertainty estimates. arXiv preprint arXiv:2006.06848, 2020.

[39] Dan Ley, Umang Bhatt, and Adrian Weller. δ-CLUE: Diverse sets of explanations for uncertainty estimates. arXiv preprint arXiv:2104.06323, 2021.

[40] Torgyn Shaikhina, Umang Bhatt, Roxanne Zhang, Konstantinos Georgatzis, Alice Xiang, and Adrian Weller. Effects of uncertainty on the quality of feature importance explanations. In AAAI Workshop on Explainable Agency in Artificial Intelligence, 2021.

[41] Roger Koenker and Gilbert Bassett Jr. Regression quantiles. Econometrica: Journal of the Econometric Society, pages 33-50, 1978.

[42] Roger Koenker. Quantile Regression. Econometric Society Monographs. Cambridge University Press, 2005. doi: 10.1017/CBO9780511754098.

[43] Youngsuk Park, Danielle Maddix, François-Xavier Aubet, Kelvin Kan, Jan Gasthaus, and Yuyang Wang. Learning quantile functions without quantile crossing for distribution-free time series forecasting. In International Conference on Artificial Intelligence and Statistics, pages 8127-8150. PMLR, 2022.

[44] Glenn Shafer and Vladimir Vovk. A tutorial on conformal prediction. Journal of Machine Learning Research, 9(3), 2008.

[45] Anastasios N Angelopoulos and Stephen Bates. A gentle introduction to conformal prediction and distribution-free uncertainty quantification. arXiv preprint arXiv:2107.07511, 2021.

[46] Dylan Slack, Anna Hilgard, Sameer Singh, and Himabindu Lakkaraju. Reliable post hoc explanations: Modeling uncertainty in explainability. Advances in Neural Information Processing Systems, 34:9391-9404, 2021.
## A Related Work

Explainable AI seeks to explain model behavior in terms that humans can understand and from which they can extract knowledge. Some models are self-explanatory, such as generalized linear models (GLMs) and tree-based models. Complex neural network models can be made more interpretable, for instance via Generalized Additive Models (GAMs). For complex models that are already trained and given, model-agnostic explanation is a popular class of post-hoc methods, ranging from Shapley-value-based approaches [30-33], perturbation-based approaches [34], local approximations [4], and tree-based methods [35], to DeepLIFT [36]. Recently, works have pointed out issues with the robustness [23] and the distribution [37] of these post-hoc explanations under distribution shift. Separately, for probabilistic models, several studies explain uncertainty estimates of the prediction [38, 39] and their effects [40].

There are various ways to quantify uncertainty in prediction tasks. Direct probabilistic methods, from Bayesian modeling and Gaussian processes to variational inference, can give prediction intervals through distributional assumptions in either parametric or nonparametric ways. Quantile regression [41-43] allows one to construct prediction intervals by optimizing the pinball loss. Conformal prediction is another way to form uncertainty intervals, usually by transforming point estimates into probabilistic ones as a post-hoc process [44, 45].

Several works have considered uncertainty in explainable AI. However, [46] focuses on explaining a single model without taking the near-optimality or multiplicity of models into account, and [29] only applies to a restricted set of explanations and models; both are limited to Shapley values and are not fully capable of explaining the behavior of the true model, which is our interest.
## B Proofs

Lemma 1. If uniform convergence holds, then with probability at least $1 - \alpha$,

$$
{\mathcal{L}}_{n}\left( {f}^{ * }\right) \leq \mathop{\inf }\limits_{{f \in \mathcal{F}}}{\mathcal{L}}_{n}\left( f\right) + {2\epsilon }\left( \alpha \right) .
$$

Proof of Lemma 1. Denote by $E$ the event in Equation (2), which occurs with probability at least $1 - \alpha$. If the event $E$ occurs, then for all models $f \in \mathcal{F}$ we have

$$
\begin{aligned}
{\mathcal{L}}_{n}\left( {f}^{ * }\right) &\leq \mathcal{L}\left( {f}^{ * }\right) + \epsilon \left( \alpha \right) && \text{($E$ occurred)} \\
&\leq \mathcal{L}\left( f\right) + \epsilon \left( \alpha \right) && \text{(optimality of ${f}^{ * }$)} \\
&\leq {\mathcal{L}}_{n}\left( f\right) + {2\epsilon }\left( \alpha \right) . && \text{($E$ occurred)}
\end{aligned}
$$

Since this inequality holds for all $f \in \mathcal{F}$, it also holds for the infimum over $\mathcal{F}$. This gives the result in Equation (3).
## C Computing the Frequentist Confidence Interval

We focus now on the case where the explanation is a single real number. For explanations like SHAP and LIME that output a vector containing a score for each feature, we can either construct a confidence interval for each entry independently, or construct a confidence interval for a real-valued function of the vector such as its norm. When the explanation $\phi \left( f\right)$ is a real number, we can equivalently write ${C}_{UC}$ as

$$
\left\lbrack {\mathop{\inf }\limits_{{f \in {\mathcal{F}}_{\alpha }}}\phi \left( f\right) ,\mathop{\sup }\limits_{{f \in {\mathcal{F}}_{\alpha }}}\phi \left( f\right) }\right\rbrack . \tag{9}
$$

Thus, instead of explaining every near-optimal model, we can compute ${C}_{UC}$ by solving two constrained optimization problems:

$$
\text{minimize}\phi \left( f\right) \tag{10}
$$

$$
\text{s.t.}{\mathcal{L}}_{n}\left( f\right) \leq \mathop{\inf }\limits_{{{f}^{\prime } \in \mathcal{F}}}{\mathcal{L}}_{n}\left( {f}^{\prime }\right) + {2\epsilon }\left( \alpha \right)
$$

$$
\text{maximize}\phi \left( f\right) \tag{11}
$$

$$
\text{s.t.}{\mathcal{L}}_{n}\left( f\right) \leq \mathop{\inf }\limits_{{{f}^{\prime } \in \mathcal{F}}}{\mathcal{L}}_{n}\left( {f}^{\prime }\right) + {2\epsilon }\left( \alpha \right)
$$

Note that the constraint is usually nonconvex, so we cannot solve this problem exactly. However, we can solve a set of related unconstrained problems to approximate the solution. For Equation (10), we can define the mixed training objective:

$$
{J}_{\lambda }\left( f\right) = {\lambda \phi }\left( f\right) + \left( {1 - \lambda }\right) {\mathcal{L}}_{n}\left( f\right) \tag{12}
$$

The training objective corresponding to Equation (11) is given by negating the first term of ${J}_{\lambda }\left( f\right)$ .

By optimizing this objective for a sequence of $\lambda \in \left\lbrack {0,1}\right\rbrack$ , we can estimate the Pareto frontier of $\phi \left( f\right)$ and ${\mathcal{L}}_{n}\left( f\right)$ . By choosing the first point on this Pareto frontier that satisfies the constraint, we can estimate the solution to the optimization problems posed in Equations (10) and (11).
Algorithm 3: UCEI: Uniform Convergence Explanation Intervals

---

Input : dataset $D$ , mixture weights $0 \leq {\lambda }_{1} < \cdots < {\lambda }_{K} \leq 1$

Estimate the ERM $\widehat{f} = \arg \mathop{\min }\limits_{{f \in \mathcal{F}}}{\mathcal{L}}_{n}\left( f\right)$ and its empirical risk ${\mathcal{L}}_{n}\left( \widehat{f}\right)$

for $\lambda \in \left\{ {{\lambda }_{1},\ldots ,{\lambda }_{K}}\right\}$ do

Optimize the mixed objective ${\widehat{f}}_{\lambda }^{ - } = \arg \mathop{\min }\limits_{{f \in \mathcal{F}}}{\lambda \phi }\left( f\right) + \left( {1 - \lambda }\right) {\mathcal{L}}_{n}\left( f\right)$

Optimize the mixed objective ${\widehat{f}}_{\lambda }^{ + } = \arg \mathop{\min }\limits_{{f \in \mathcal{F}}} - {\lambda \phi }\left( f\right) + \left( {1 - \lambda }\right) {\mathcal{L}}_{n}\left( f\right)$

end

Compute the lower bound $l = \min \left\{ {\phi \left( {\widehat{f}}_{\lambda }^{ - }\right) : {\mathcal{L}}_{n}\left( {\widehat{f}}_{\lambda }^{ - }\right) \leq {\mathcal{L}}_{n}\left( \widehat{f}\right) + {2\epsilon }\left( \alpha \right) }\right\}$

Compute the upper bound $u = \max \left\{ {\phi \left( {\widehat{f}}_{\lambda }^{ + }\right) : {\mathcal{L}}_{n}\left( {\widehat{f}}_{\lambda }^{ + }\right) \leq {\mathcal{L}}_{n}\left( \widehat{f}\right) + {2\epsilon }\left( \alpha \right) }\right\}$

Return:

The confidence interval ${\widehat{C}}_{UC} = \left\lbrack {l, u}\right\rbrack$

---
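To make the sweep concrete, here is a minimal sketch of the UCEI procedure for a one-parameter model class, where grid search stands in for training. The quadratic empirical risk `L_n`, the linear explanation `phi`, and all constants are toy stand-ins of our own choosing, not anything from the paper.

```python
import numpy as np

def L_n(f):
    # Toy empirical risk for a 1-D "model" f, minimized at f = 1
    return (f - 1.0) ** 2

def phi(f):
    # Toy real-valued explanation functional
    return 2.0 * f

def ucei(eps, lambdas, grid):
    # ERM and its empirical risk (grid search stands in for training)
    f_hat = grid[np.argmin([L_n(f) for f in grid])]
    risk_hat = L_n(f_hat)
    lo, hi = np.inf, -np.inf
    for lam in lambdas:
        # Minimize the two mixed objectives lam*phi + (1-lam)*L_n and its negation
        f_minus = grid[np.argmin([lam * phi(f) + (1 - lam) * L_n(f) for f in grid])]
        f_plus = grid[np.argmin([-lam * phi(f) + (1 - lam) * L_n(f) for f in grid])]
        # Keep only near-optimal models: L_n(f) <= L_n(f_hat) + 2*eps
        if L_n(f_minus) <= risk_hat + 2 * eps:
            lo = min(lo, phi(f_minus))
        if L_n(f_plus) <= risk_hat + 2 * eps:
            hi = max(hi, phi(f_plus))
    return lo, hi

lo, hi = ucei(eps=0.05, lambdas=np.linspace(0, 1, 21), grid=np.linspace(-2, 4, 2001))
```

At $\lambda = 0$ both mixed objectives reduce to the empirical risk, so the ERM's explanation always lies inside the returned interval; larger $\lambda$ pushes the candidates toward extreme explanations until the near-optimality constraint fails.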
## D Additional Experimental Results

Figure 3: Feature importance scores (as measured by the mean Shapley value of each feature across the dataset). Confidence intervals are computed using the conformal explanation intervals method.

Figure 4: Feature importance scores (as measured by the mean Shapley value of each feature across the dataset). Confidence intervals are computed using the conformal explanation intervals method.

NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/_QcreQjxHi/Initial_manuscript_tex/Initial_manuscript.tex
§ BUT ARE YOU SURE? QUANTIFYING UNCERTAINTY IN MODEL EXPLANATIONS

Anonymous Author(s) Affiliation Address email

§ ABSTRACT

Even when a black-box model makes accurate predictions (e.g., whether it will rain tomorrow), it is difficult to extract principles from the model that improve human understanding (e.g., what set of atmospheric conditions best predict rainfall). Model explanations via explainability methods (e.g., LIME, Shapley values) can help by highlighting interpretable aspects of the model, such as the data features to which the model is most sensitive. However, these methods can be unstable and inconsistent, which often ends up providing unreliable insights. Moreover, when many near-optimal models exist, there is no guarantee that explanations for a single model will agree with explanations from the true model that generated the data. In this work, instead of explaining a single best-fitting model, we develop principled methods to construct an uncertainty set for the "true explanation": the explanation from the (unknown) true model that generated the data. We show finite-sample guarantees that the uncertainty set we return includes the explanation for the true model with high probability. We show through synthetic experiments that our uncertainty sets have high fidelity to the explanations of the true model. We then report our findings on real-world data.

§ 1 INTRODUCTION

Data is now collected at a much faster rate than can be processed directly by humans. Thus, machine learning is used to synthesize complex datasets into predictive models. For example, machine learning models can predict the 3D structure of proteins from their amino acid sequences [1] and forecast supply chain demand $\left\lbrack {2,3}\right\rbrack$ . However, modern models are often black boxes, meaning that even when they make accurate predictions, it is difficult to extract interpretable principles and intuitions. Whereas human experts can share their reasoning with other humans, predictive models typically lack this ability to communicate principles.

In response to this challenge, there has been growing interest in model explanations: human-interpretable descriptions of model predictions [4-10]. The explanations highlight aspects of the model that are particularly relevant for some downstream goal, such as calibrating trust in a model or identifying patterns in complex data. Popular explanations include Shapley values, LIME, integrated gradients, TCAV, and counterfactual explanations.

Use cases for model explanations can be organized around two goals: model auditing and scientific inquiry. In model auditing, the goal is to validate or debug the predictions of a trained model. For example, we might ask "Does this model rely on income to predict this individual's credit worthiness?". In contrast, in scientific inquiry the object of interest is the data generating distribution itself. An analogous question for scientific inquiry would be "Is income truly correlated with credit worthiness for people similar to this individual?" Explanations used for model auditing give insights about the model, while explanations for scientific inquiry give insights about the world. Note that a single explanation (e.g., the local relationship between income and credit worthiness) is usually well-defined for both a trained model and for a data generating distribution. Therefore, most explanations can be used for both model auditing and scientific inquiry. In this work, we focus on using explanations for scientific inquiry.

Figure 1: Expected absolute Shapley value ${\phi }_{i}^{\text{ shap }}\left( f\right)$ explanations for the three features of a linear model. The true labels were sampled from the linear model ${y}^{\left( i\right) } = \left\lbrack {1,0,0}\right\rbrack \cdot {x}^{\left( i\right) } + {\epsilon }^{\left( i\right) }$ where ${\epsilon }^{\left( i\right) }$ is i.i.d. Gaussian noise. The red stars indicate the explanations for this true model. Features 1 and 2 are highly correlated $\left( {{\rho }_{12} = {0.99}}\right)$ , so Feature 2 predicts the label even though the true model assigns Feature 2 no weight. We fit linear models using ridge regression. Left: Uncertainty sets constructed by resampling the training data to estimate the distribution of the explanation of the ridge regression model. Note that the naive uncertainty sets do not include the true explanation. Center: Uncertainty sets constructed using uniform convergence results. The uncertainty sets are valid but wider due to their stronger guarantees. Right: Uncertainty sets constructed using conformal inference. The intervals are tighter but only guarantee coverage in expectation over the true model.

Explanations are already being used for scientific inquiry in many domains, such as materials discovery [11], genomics [12, 13], motor vehicle collisions [14], environmental science [15], and finance [16, 17]. Usually, a practitioner chooses a single "best-fitting" model and treats explanations of that model as representative of the data generating distribution. However, model explanations are known to be unstable (i.e., sensitive to small perturbations in the data) [18-23] and inconsistent (i.e., random variations in training algorithms can lead models trained on the same data to give different explanations) [24]. The problem is worsened by the phenomenon of model multiplicity: the existence of distinct models with comparable performance [25-27]. If there exist competing models, each of which provides a different explanation of the data-generating distribution, how can we tell which explanation is correct? [28] These issues threaten the applicability of existing explainability procedures for scientific inquiry. Given that explanations are known to vary widely among even near-optimal models [29], we cannot assume explanations from a model with good performance are representative of the data generating distribution. For example, in Figure 1 (left panel), trained models consistently disagree with the true explanation of a well-specified linear model.

In this work, we aim to develop simple and broadly-applicable procedures to use explanations for valid scientific inquiry. Instead of computing the explanation for a single best-fitting model, we give an uncertainty set containing plausible explanations for the (unknown) data generating distribution. This uncertainty set is guaranteed to include the correct explanation with high probability. Our main contributions include:

* We give simple examples where existing explainability procedures fail to recover the explanation of the data generating distribution.

* We propose three simple algorithms for rigorously inferring the explanation of the data generating distribution. One algorithm applies to tractable Bayesian models, one to intractable Bayesian models, and one to frequentist models.

* We show that all three algorithms give uncertainty sets that include the correct explanation with guaranteed high probability.

§ 2 FRAMEWORK
We consider the task of using features $x \in \mathcal{X}$ to predict an outcome $y \in \mathcal{Y}$ . Given a dataset of $n$ i.i.d. pairs $D = \left\{ {\left( {{x}_{1},{y}_{1}}\right) ,\ldots ,\left( {{x}_{n},{y}_{n}}\right) }\right\}$ , the learning task is to select a model $f$ from a model class $\mathcal{F} = \{ f : \mathcal{X} \rightarrow \mathcal{P}\left( \mathcal{Y}\right) \}$ to minimize a loss function $\ell \left( {f\left( x\right) ,y}\right)$ in expectation. Here, $\mathcal{P}\left( \mathcal{Y}\right)$ is the set of probability measures over $\mathcal{Y}$ . Furthermore, we are interested in an explanation $\phi : \mathcal{F} \rightarrow \Phi$ that assigns to every model an interpretation in some space $\Phi$ . For example, we can let $\phi$ map any $f \in \mathcal{F}$ to ${\phi }_{i,x}^{\text{ shap }}\left( f\right)$ , the Shapley value of the $i$ -th feature applied to the feature vector $x$ , with $D$ as the reference dataset; or to the expected absolute Shapley value of the $i$ -th feature ${\phi }_{i}^{\text{ shap }}\left( f\right) \mathrel{\text{ := }} {\mathbb{E}}_{x}\left\lbrack \left| {{\phi }_{i,x}^{\text{ shap }}\left( f\right) }\right| \right\rbrack$ . In binary classification where $y \in \{ 0,1\}$ , one can consider a counterfactual explanation, where ${\phi }_{x, + }^{\mathrm{{CF}}}\left( \widehat{f}\right)$ returns the closest point ${x}^{\prime }$ to $x$ such that the label is predicted to be in the positive class, i.e., $\widehat{P}\left( {Y = 1 \mid X = {x}^{\prime }}\right) > {0.5}$ .
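For intuition about the expected absolute Shapley value, note that it has a simple closed form for a linear model when features are treated as independent (the interventional Linear SHAP formula, $\phi_{i,x} = w_i (x_i - \bar{x}_i)$). The sketch below illustrates only that special case; it is not the paper's general estimator.

```python
import numpy as np

def expected_abs_shap(w, X):
    # Per-instance interventional Shapley values of a linear model f(x) = w @ x
    # with reference dataset X: phi_{i,x} = w_i * (x_i - mean of x_i over X).
    shap_values = w * (X - X.mean(axis=0))      # shape (n, d)
    # phi_i^shap = E_x |phi_{i,x}|, one importance score per feature
    return np.abs(shap_values).mean(axis=0)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
w = np.array([1.0, 0.0, 0.5])   # true model puts no weight on Feature 2
scores = expected_abs_shap(w, X)
```

A feature with zero weight gets an exactly zero score here, which is precisely the situation in Figure 1 where correlated features can nonetheless mislead fitted models.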
In a frequentist treatment, suppose the model is well-specified, so there exists some model ${f}^{ * } \in \mathcal{F}$ in the model class that gives the true conditional distribution $p\left( {y \mid x}\right)$ . We refer to ${f}^{ * }$ as the true model since it exactly reflects the data generating distribution. We are interested in the explanation of the true model $\phi \left( {f}^{ * }\right)$ . Since we do not know the true model, we use some model-fitting algorithm $\mathcal{A} : \mathcal{D} \rightarrow \mathcal{F}$ that takes as input a dataset $D$ in the space $\mathcal{D} = \mathop{\bigcup }\limits_{{n = 1}}^{\infty }{\left( \mathcal{X} \times \mathcal{Y}\right) }^{n}$ and outputs a model $\widehat{f} = \mathcal{A}\left( D\right)$ . For example, in empirical risk minimization we choose the model that minimizes the loss on the training data $\widehat{f} = \arg \mathop{\min }\limits_{{f \in \mathcal{F}}}\frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}\ell \left( {f\left( {x}_{i}\right) ,{y}_{i}}\right)$ . Alternatively, in a Bayesian model, we assume a prior distribution $p\left( {f}^{ * }\right)$ over the model class. In this case, we want to compute the distribution of the true explanation given the dataset $p\left( {\phi \left( {f}^{ * }\right) \mid D}\right)$ .

§ 3 QUANTIFYING UNCERTAINTY IN EXPLANATIONS

In this section, we describe three approaches for quantifying uncertainty in explanations. Our goal is to use the data to construct an uncertainty set $C = C\left( D\right)$ that includes the true explanation with probability at least $1 - \alpha$ for some chosen confidence level $\alpha \in \left( {0,1}\right)$ :

$$
\mathbb{P}\left( {\phi \left( {f}^{ * }\right) \in C}\right) \geq 1 - \alpha \tag{1}
$$

In a frequentist setting, this type of guarantee requires us to call on uniform convergence results from learning theory. This approach has important drawbacks (first among them, the resulting uncertainty sets tend to be large), so we also introduce two algorithms for Bayesian models, one for Bayesian models with tractable posteriors and one for Bayesian models regardless of the tractability of the posterior, that tend to produce tighter uncertainty sets when they apply. In the remainder of this section, we introduce the three approaches and their associated guarantees.

§ 3.1 FREQUENTIST CONFIDENCE INTERVALS

In this section, we introduce a method for constructing valid confidence intervals in a frequentist setting, when the model class is sufficiently simple. We measure simplicity in a learning theoretic sense; our results hold for model classes that satisfy uniform convergence, a property often derived using the Vapnik-Chervonenkis (VC) dimension or Rademacher complexity of a model class.

Uniform convergence states that the empirical loss ${\mathcal{L}}_{n}\left( f\right) = \frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}\ell \left( {f\left( {x}_{i}\right) ,{y}_{i}}\right)$ converges to the population loss $\mathcal{L}\left( f\right) = \mathbb{E}\left\lbrack {\ell \left( {f\left( x\right) ,y}\right) }\right\rbrack$ "uniformly" across the model class as the number of training samples $n$ goes to infinity. Formally, uniform convergence says that with probability at least $1 - \alpha$ ,

$$
\mathop{\sup }\limits_{{f \in \mathcal{F}}}\left| {\mathcal{L}\left( f\right) - {\mathcal{L}}_{n}\left( f\right) }\right| \leq \epsilon \left( \alpha \right) \tag{2}
$$

where $\epsilon \left( \alpha \right) \geq 0$ is a value depending on the chosen confidence level $\alpha$ . First, we note that uniform convergence gives us a confidence set for the true model. Then, we can bound the explanation of the true model by computing the most extreme explanations within this confidence set. It is easy to show that uniform convergence bounds the excess empirical risk of the true model.

Lemma 1. If uniform convergence holds, then with probability at least $1 - \alpha$ ,

$$
{\mathcal{L}}_{n}\left( {f}^{ * }\right) \leq \mathop{\inf }\limits_{{f \in \mathcal{F}}}{\mathcal{L}}_{n}\left( f\right) + {2\epsilon }\left( \alpha \right) . \tag{3}
$$

Lemma 1 is a basic result from learning theory. (See Appendix B for a proof.) We can construct a confidence set for the true model,

$$
{\mathcal{F}}_{\alpha } = \left\{ {f \in \mathcal{F} : {\mathcal{L}}_{n}\left( f\right) \leq \mathop{\inf }\limits_{{{f}^{\prime } \in \mathcal{F}}}{\mathcal{L}}_{n}\left( {f}^{\prime }\right) + {2\epsilon }\left( \alpha \right) }\right\} , \tag{4}
$$

which contains the true model with high probability $\mathbb{P}\left( {{f}^{ * } \in {\mathcal{F}}_{\alpha }}\right) \geq 1 - \alpha$ due to Lemma 1. Thus, the set of explanations corresponding to ${\mathcal{F}}_{\alpha }$ , namely ${C}_{UC} = \left\{ {\phi \left( f\right) : f \in {\mathcal{F}}_{\alpha }}\right\}$ , includes the true explanation $\phi \left( {f}^{ * }\right)$ with high probability:

$$
\mathbb{P}\left( {\phi \left( {f}^{ * }\right) \in {C}_{UC}}\right) \geq 1 - \alpha \tag{5}
$$

While this confidence set has the desired coverage guarantee, it is not immediately obvious how to compute it since we usually cannot realize ${\mathcal{F}}_{\alpha }$ in practice due to computational limitations. See Appendix C for an algorithm to approximate the confidence interval ${C}_{UC}$ .
§ 3.2 PRIOR WITH TRACTABLE POSTERIOR

The algorithm in the previous section guarantees coverage for any true function. A natural question to ask is, "can we get tighter guarantees if we only require coverage on average, when the true model is distributed according to some known distribution?" This is often the setting in Bayesian statistics, where we have a prior distribution $p\left( {f}^{ * }\right)$ over the true model. When the model is tractable, we can exactly compute the posterior distribution $p\left( {{f}^{ * } \mid D}\right)$ given the data.

A useful quantification of uncertainty for tractable Bayesian models is a credible interval from the induced posterior distribution for the explanation $p\left( {\phi \left( f\right) \mid D}\right)$ . That is, if we sample a model according to the posterior distribution, we want an interval $C \subseteq \Phi$ that includes the explanation for this model with probability at least $1 - \alpha$ :

$$
\mathbb{P}\left( {\phi \left( f\right) \in C \mid D}\right) \geq 1 - \alpha \tag{6}
$$

We can get this guarantee by estimating the $\left( {\alpha /2}\right)$ - and $\left( {1 - \alpha /2}\right)$ -quantiles of the induced posterior for the explanation $p\left( {\phi \left( f\right) \mid D}\right)$ , e.g., by drawing random samples from the induced posterior. First, we independently sample $T$ models ${f}_{1},\ldots ,{f}_{T}$ from the posterior $p\left( {f \mid D}\right)$ . We explain each model to get $T$ explanations $\phi \left( {f}_{1}\right) ,\ldots ,\phi \left( {f}_{T}\right)$ , which are independently distributed according to the induced posterior $p\left( {\phi \left( f\right) \mid D}\right)$ . If the true model ${f}^{ * }$ is distributed according to the posterior, then $\phi \left( {f}^{ * }\right)$ and $\phi \left( {f}_{1}\right) ,\ldots ,\phi \left( {f}_{T}\right)$ are i.i.d. random variables. This means that $\phi \left( {f}^{ * }\right)$ is equally likely to be the smallest, second smallest, ..., largest element of this collection. If we define the ranking function $R\left( u\right) = \mathop{\sum }\limits_{{t = 1}}^{T}\mathbb{1}\left\{ {u \leq \phi \left( {f}_{t}\right) }\right\}$ then $R\left( {\phi \left( {f}^{ * }\right) }\right)$ is distributed uniformly on the set $\{ 0,1,2,\ldots ,T\}$ . Thus, if we define the interval ${C}_{\text{ Bayes }}$ with lower bound and upper bound as the $\left\lfloor {\frac{\alpha }{2}\left( {T + 1}\right) }\right\rfloor /T$ -quantile and $\left\lceil {\left( {1 - \frac{\alpha }{2}}\right) \left( {T + 1}\right) }\right\rceil /T$ -quantile (respectively) of the set $\left\{ {\phi \left( {f}_{1}\right) ,\ldots ,\phi \left( {f}_{T}\right) }\right\}$ , then

$$
\mathbb{P}\left( {\phi \left( {f}^{ * }\right) \in {C}_{\text{ Bayes }} \mid D}\right) \geq 1 - \alpha \tag{7}
$$

Both ${f}^{ * }$ and ${C}_{\text{ Bayes }}$ are random variables in Equation (7), due to the prior over ${f}^{ * }$ and the randomly sampled models ${f}_{1},\ldots ,{f}_{T}$ , respectively.

Algorithm 1: BEI: Bayesian Explanation Intervals

Input : Posterior distribution $p\left( {f \mid D}\right)$ , explanation algorithm $\phi$ , number of samples $T$

for $t = 1,\ldots ,T$ do

Sample a model ${f}_{t} \sim p\left( {f \mid D}\right)$

Compute an explanation $\phi \left( {f}_{t}\right)$ for the sampled model

end

Return:

the confidence interval ${C}_{\text{ Bayes }}$ with

lower bound Quantile $\left( {\left\{ {\phi \left( {f}_{1}\right) ,\ldots ,\phi \left( {f}_{T}\right) }\right\} ;\left\lfloor {\frac{\alpha }{2}\left( {T + 1}\right) }\right\rfloor /T}\right)$ , and

upper bound Quantile $\left( {\left\{ {\phi \left( {f}_{1}\right) ,\ldots ,\phi \left( {f}_{T}\right) }\right\} ;\left\lceil {\left( {1 - \frac{\alpha }{2}}\right) \left( {T + 1}\right) }\right\rceil /T}\right)$
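A minimal sketch of this procedure, assuming a one-dimensional Bayesian linear model with a known conjugate Gaussian posterior over the coefficient; taking the explanation $\phi(f)$ to be the coefficient itself is our simplification, and the quantile indexing assumes $\lfloor \frac{\alpha}{2}(T+1) \rfloor \geq 1$.

```python
import numpy as np

rng = np.random.default_rng(0)
T, alpha = 2000, 0.1

# y = w*x + N(0, 0.5^2) with a N(0, 1) prior on w, so the posterior
# over w is Gaussian in closed form; the explanation is phi(f) = w.
x = rng.normal(size=200)
y = 1.5 * x + rng.normal(scale=0.5, size=200)

noise_var = 0.25
prec = 1.0 + x @ x / noise_var                # posterior precision of w
mean = (x @ y / noise_var) / prec             # posterior mean of w
w_samples = rng.normal(mean, np.sqrt(1.0 / prec), size=T)   # f_t ~ p(f | D)

phis = np.sort(w_samples)                     # phi(f_1), ..., phi(f_T), sorted
lo = phis[int(np.floor(alpha / 2 * (T + 1))) - 1]           # lower quantile
hi = phis[int(np.ceil((1 - alpha / 2) * (T + 1))) - 1]      # upper quantile
```

With this much data the posterior concentrates near the data-generating coefficient, so the credible interval is short and centered close to 1.5.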
§ 3.3 PRIOR WITH INTRACTABLE POSTERIOR

There are many models for which the posterior distribution is intractable and would be too expensive to compute exactly. When this happens, standard approaches are to either perform approximate inference or change the model so that posterior inference is tractable. However, approximate inference does not perfectly recover the posterior, which means guarantees that hold under the approximate posterior may not hold under the exact posterior. The option to change our model is also not satisfying; if we fit the data with an altered model, our explanations no longer pertain to the true model. In this section, we introduce a third option that alleviates these challenges: we construct uncertainty sets that only require us to know the prior, instead of the posterior.

In exchange for not requiring the posterior distribution, we only get a marginal coverage guarantee:

$$
\mathbb{P}\left( {\phi \left( {f}^{ * }\right) \in {C}_{\text{ Conformal }}}\right) \geq 1 - \alpha \tag{8}
$$

The algorithm works by first sampling $T$ "true models", ${f}_{1},\ldots ,{f}_{T}$ , independently from the prior. If the predictions from ${f}_{t}$ are probability distributions, then we can resample new labels using the model, ${y}_{i}^{t} \sim {f}_{t}\left( {x}_{i}\right) ,i = 1,\ldots ,n$ . By pairing each original input ${x}_{i}$ with the corresponding resampled label ${y}_{i}^{t}$ , we have a dataset ${D}_{t} = \left\{ {\left( {{x}_{1},{y}_{1}^{t}}\right) ,\ldots ,\left( {{x}_{n},{y}_{n}^{t}}\right) }\right\}$ drawn from the model ${f}_{t}$ . We can then train a model ${\widehat{f}}_{t} = \mathcal{A}\left( {D}_{t}\right)$ on this new dataset. By computing explanations $\phi \left( {f}_{t}\right)$ and $\phi \left( {\widehat{f}}_{t}\right)$ for these models, we now have $T$ i.i.d. pairs of true and estimated explanations. If the true model ${f}^{ * }$ for our original dataset was drawn according to the prior distribution, then the pair $\left( {\phi \left( {f}^{ * }\right) ,\phi \left( \widehat{f}\right) }\right)$ corresponding to the original dataset is i.i.d. with the simulated pairs $\left( {\phi \left( {f}_{1}\right) ,\phi \left( {\widehat{f}}_{1}\right) }\right) ,\ldots ,\left( {\phi \left( {f}_{T}\right) ,\phi \left( {\widehat{f}}_{T}\right) }\right)$ . This means that we can apply conformal prediction. See Algorithm 2 for a detailed description of Conformal Explanation Intervals (CEI).

Algorithm 2: CEI: Conformal Explanation Intervals

Input : Model-fitting algorithm $\mathcal{A}$ , dataset $D = \left( {{x}_{1},{y}_{1}}\right) ,\ldots ,\left( {{x}_{n},{y}_{n}}\right)$

Input : Nonconformity score $s : \Phi \times \Phi \rightarrow \mathbb{R}$

Train a model $\widehat{f} = \mathcal{A}\left( D\right)$ using the dataset

Explain the trained model $\widehat{\phi } = \phi \left( \widehat{f}\right)$

for $t = 1,\ldots ,T$ do

Sample a model ${f}_{t} \sim p\left( f\right)$

Sample a dataset of labels ${y}_{i}^{t} \sim {f}_{t}\left( {x}_{i}\right) , i = 1,\ldots ,n$

Define the synthetic dataset ${D}_{t} = \left\{ {\left( {{x}_{1},{y}_{1}^{t}}\right) ,\ldots ,\left( {{x}_{n},{y}_{n}^{t}}\right) }\right\}$

Train a model ${\widehat{f}}_{t} = \mathcal{A}\left( {D}_{t}\right)$

Explain the sampled model ${\phi }_{t} = \phi \left( {f}_{t}\right)$ and the trained model ${\widehat{\phi }}_{t} = \phi \left( {\widehat{f}}_{t}\right)$

Compute the nonconformity score ${s}_{t} = s\left( {{\phi }_{t},{\widehat{\phi }}_{t}}\right)$

end

Set the threshold $\tau$ as the $\left\lceil {\left( {1 - \alpha }\right) \left( {T + 1}\right) }\right\rceil /T$ -quantile of the set $\left\{ {{s}_{1},\ldots ,{s}_{T}}\right\}$

Return:

The confidence interval ${C}_{\text{ conformal }} = \{ \varphi \in \Phi : s\left( {\varphi ,\phi \left( \widehat{f}\right) }\right) \leq \tau \}$
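A minimal sketch of this procedure, assuming a one-dimensional linear model with a standard Gaussian prior on the coefficient, least squares as the fitting algorithm $\mathcal{A}$, the coefficient itself as the explanation, and absolute difference as the nonconformity score. All of these concrete choices are our own toy stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
n, T, alpha = 100, 500, 0.1
x = rng.normal(size=n)

def fit(x, y):
    # Model-fitting algorithm A: 1-D least squares through the origin
    return (x @ y) / (x @ x)

def sample_labels(w):
    # Resample labels from the model: y_i ~ f(x_i) = N(w * x_i, 0.5^2)
    return w * x + rng.normal(scale=0.5, size=n)

w_star = rng.normal()                      # "true model" drawn from the prior
phi_hat = fit(x, sample_labels(w_star))    # explanation of the trained model

scores = []
for _ in range(T):
    w_t = rng.normal()                     # f_t ~ p(f)
    w_t_hat = fit(x, sample_labels(w_t))   # retrain on the synthetic dataset D_t
    scores.append(abs(w_t - w_t_hat))      # nonconformity s_t = |phi_t - phi_t_hat|

tau = np.sort(scores)[int(np.ceil((1 - alpha) * (T + 1))) - 1]
interval = (phi_hat - tau, phi_hat + tau)  # C_conformal for this score choice
```

With an absolute-difference score, the returned set is simply a symmetric interval of radius $\tau$ around the trained model's explanation.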
§ 4 EXPERIMENTS

Experimental Setup We train neural networks to predict a real-valued label. The model outputs a mean and variance for a Gaussian distribution, and is trained with the negative log-likelihood loss. The architecture has 2 hidden layers, each with 100 neurons, and uses ReLU activations.

We compute uncertainty sets for the explanation of the true model using the conformal explanation intervals method. As the explanation, we use the average of the absolute value of the Shapley value of the feature across the dataset, a type of feature importance. Computing conformal explanation intervals requires us to place a prior over the model class. We set the prior distribution for each weight to be Gaussian with zero mean and variance equal to the reciprocal of the dimension of the layer. The priors for the biases are standard Gaussian distributions.
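The model and prior above might be sketched as follows. Training is omitted; reading "dimension of the layer" as the fan-in of each weight matrix and using a softplus link to keep the variance head positive are our assumptions, not details stated in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_layer(d_in, d_out):
    # Prior draw: zero-mean Gaussian weights with variance 1/d_in (fan-in read
    # of "dimension of the layer"), standard Gaussian biases
    return rng.normal(0, 1 / np.sqrt(d_in), size=(d_in, d_out)), rng.normal(size=d_out)

def forward(params, X):
    (W1, b1), (W2, b2), (W3, b3) = params
    h = np.maximum(X @ W1 + b1, 0.0)        # hidden layer 1, ReLU
    h = np.maximum(h @ W2 + b2, 0.0)        # hidden layer 2, ReLU
    out = h @ W3 + b3                       # two heads: mean and raw scale
    mu = out[:, 0]
    var = np.log1p(np.exp(out[:, 1]))       # softplus keeps the variance positive
    return mu, var

def gaussian_nll(mu, var, y):
    # Negative log-likelihood of y under N(mu, var), averaged over the batch
    return np.mean(0.5 * (np.log(2 * np.pi * var) + (y - mu) ** 2 / var))

d = 5
params = [init_layer(d, 100), init_layer(100, 100), init_layer(100, 2)]
X = rng.normal(size=(32, d))
y = rng.normal(size=32)
mu, var = forward(params, X)
loss = gaussian_nll(mu, var, y)
```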
Experimental Results We find that conclusions about the relative importance of features can only be made for some datasets. For example, for the MPG dataset, the features displacement, horsepower, and weight have high importance with low uncertainty. However, in the PROTEIN dataset, it is difficult to make any meaningful conclusions about the relative importance of features, possibly due to the existence of competing models that use different features. Additional experimental results are included in Appendix D.

Figure 2: Feature importance scores (as measured by the mean Shapley value of each feature across the dataset). Confidence intervals are computed using the conformal explanation intervals method.

Limitations The validity of these confidence intervals depends on trusting the prior distribution. However, it is difficult to design meaningful prior distributions for neural networks. In some cases, it may be advantageous to use simpler model classes where one can more readily construct meaningful priors.
NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/_h_ikjOEGL_/Initial_manuscript_md/Initial_manuscript.md
ADDED
@@ -0,0 +1,255 @@
# Striving for data-model efficiency: Identifying data externalities on group performance
Anonymous Author(s)
Affiliation
Address
email
## Abstract
Building trustworthy, effective, and responsible machine learning systems hinges on understanding how differences in training data and modeling decisions interact to impact predictive performance. In this work, we seek to better understand how we might characterize, detect, and design for data-model synergies. We focus on a particular type of data-model inefficiency, in which adding training data from some sources can actually lower performance evaluated on key sub-groups of the population, a phenomenon we refer to as negative data externalities on group performance. Such externalities can arise in standard learning settings and can manifest differently depending on conditions between training set size and model size. Data externalities directly imply a lower bound on feasible model improvements, yet improving models efficiently requires understanding the underlying data-model tensions. From a broader perspective, our results indicate that data-efficiency is a key component of both accurate and trustworthy machine learning.
## 1 Introduction
Although key aspects of trustworthiness and responsibility in machine learning are often framed from an algorithmic perspective, we explore an alternative framing that focuses on how our chosen modeling and training procedures perform under different data (collection) regimes. We refer to the guiding goal of using the available training data to achieve the best possible performance for all target populations as "data efficiency." While this has clear alignment with accuracy maximization, data minimization, and fairness, it is not always clear how to test or design for data-efficiency generally.
We focus on a specific type of data inefficiency, in which adding training data from certain data sources can actually decrease performance on key groups of the target population. We call this phenomenon negative data externalities on group performance (defined formally in definition 1). While recent works have examined how the amount of training data from one group affects performance evaluated on that same group [7, 11, 19], studying between-group trends can evidence tensions between different aspects of model performance that might otherwise go unnoticed.
In this work, we formalize data externalities and their immediate cost to model performance (§2 and §3). We describe mechanisms by which negative data externalities can arise and detail why understanding these mechanisms matters for designing effective model improvements (§4). Our experiments show how data externalities shift with conditions on sample size and model capacity (§5). Altogether, our results suggest that data externalities may be surprisingly pervasive in real-world settings, yet avoidable by designing training procedures to promote data-efficiency. Our findings have implications for fairness, privacy, participation, and transparency in machine learning (§6).
Related Work. Alongside a recognition of the role of training data in achieving high-performing models [1, 19, 20] has come an appreciation that dataset size does not imply requisite quality [4], representativeness [6], or diversity [3]. For example, concerns have been raised about the use of large, multi-source text datasets, including the transparency of data provenance [10], compounded by a "documentation debt" [3]. In this work, we study how training data from different sources can affect model performance, focusing on possible externalities to key subgroups of the target population.
Prior work has proposed scaling laws to describe how the total number of training samples and model size affect aggregate performance [2, 16]. Our study connects to a growing line of research on how varying the amount of training data from different sources, groups, or classes impacts model performance, including subgroup and average accuracies [13, 19], and statistical fairness [7, 11]. Scaling laws that have been proposed to model the impact of data from one dataset (e.g. [2, 16]) or multiple source groups (e.g. [7, 19]) on model performance typically implicitly encode that more data never worsens performance, and thus do not allow for data externalities of the type we study here. Data valuation techniques have been proposed to quantify the (possibly negative) value of individual data points [12, 18]. In a framework more similar to our own, [15] studies scenarios where leaving out entire classes from a training dataset can improve transfer learning performance.
Finally, we note that the general terms "data externalities" and "information externalities" have also been used to describe externalities on other factors, such as privacy [8] and data valuation [14].
## 2 Setting and notation
We assume that each training instance is collected from exactly one of the distinct source groups $g \in \mathcal{G}_{\text{source}}$. Each evaluation instance can be associated with any subset of predefined evaluation groups $g \in \mathcal{G}_{\text{eval}}$, which may be intersecting. In this work, we assume that groups are known at training and evaluation time. We will often make the simplifying assumption that $\mathcal{G}_{\text{eval}} = \mathcal{G}_{\text{source}}$, referring to the set of groups simply as $\mathcal{G}$.
We assume that one can sample instances $(x, y) \sim \mathcal{D}_g$ for each group $g \in \mathcal{G}_{\text{source}} \cup \mathcal{G}_{\text{eval}}$, where $\mathcal{D}_g$ denotes the data distribution corresponding to group $g$. We say that a model $f(\cdot)$ results in group risks, or expected losses, of $\mathcal{R}(g) = \mathbb{E}_{(x, y) \sim \mathcal{D}_g}\left[\ell(f(x), y)\right]$, defined with respect to loss function $\ell$.
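The per-group risk $\mathcal{R}(g)$ can be estimated empirically by averaging the loss over held-out samples from each group. A minimal sketch (the function name and data layout are illustrative, not from the paper):

```python
import numpy as np

def group_risks(model, loss, data_by_group):
    """Empirical estimate of R(g) = E_{(x,y)~D_g}[loss(f(x), y)] for each group g.

    data_by_group maps each group label to a list of (x, y) samples."""
    return {g: float(np.mean([loss(model(x), y) for x, y in samples]))
            for g, samples in data_by_group.items()}
```

Reporting the full dictionary, rather than a single aggregate risk, is what makes between-group tensions visible.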
Per-group risks as random variables parameterized by training dataset composition. Following [19], we consider the learned model ${f}_{\theta }$ as a random function parameterized by both the model parameters $\theta$ and allocations governing sample sizes from each source. Denote the (random) training dataset $\mathcal{S}$ as the union of the $\left| {\mathcal{G}}_{\text{source }}\right|$ sets each comprising samples from each training source:
$$
\mathcal{S}\left(n_1, \ldots, n_{|\mathcal{G}_{\text{source}}|}\right) = \bigcup_{g \in \mathcal{G}_{\text{source}}} \left\{ (x_i, y_i) \sim_{\text{i.i.d.}} \mathcal{D}_g \right\}_{i=1}^{n_g} \tag{1}
$$
where sample sizes are determined by the total dataset size $n = \mathop{\sum }\limits_{{g \in {\mathcal{G}}_{\text{source }}}}{n}_{g}$ and allocations ${n}_{g}/n$ . With these definitions in place, we can define the expected risk of a training procedure as a function of the composition of the training set. For example, for a standard training procedure that selects a model from class ${\mathcal{F}}_{\Theta } = {\left\{ {f}_{\theta }\right\} }_{\theta \in \Theta }$ to minimize loss ${\ell }_{\text{train }}$ :
$$
\mathbb{E}\left[\mathcal{R}(g_{\text{eval}}); \{n_g\}_{g \in \mathcal{G}_{\text{source}}}; \mathcal{F}_{\Theta}, \ell_{\text{train}}\right] = \mathbb{E}_{\mathcal{S}(\{n_g\})}\left[\mathbb{E}_{(x, y) \sim \mathcal{D}_{g_{\text{eval}}}}\left[\ell(f_{\theta}(x), y) \mid \theta = \theta^{*}_{\ell_{\text{train}}, \mathcal{S}}\right]\right] \tag{2}
$$
where $\theta^{*}_{\ell_{\text{train}}, \mathcal{S}} = \arg\min_{\theta \in \Theta} \ell_{\text{train}}(f_{\theta}, \mathcal{S})$. Equation (2) gives us a framework by which to describe the performance of our learning procedure across evaluation groups as sensitive to many different scenarios and choices, including the training source sample sizes $n_g$ and the model class complexity.
## 3 Data externalities and their costs
We now use the framework of §2 to characterize our main phenomenon of study: when adding training data from a particular source group decreases performance for an evaluation group.
Definition 1 (negative data externality on group risk). For fixed training procedure described by model class $\mathcal{F}$ and loss objective ${\ell }_{\text{train }}$ , a negative data externality on group risk (w.r.t. groups ${\mathcal{G}}_{\text{eval }}$ ) occurs when the expected risk for some evaluation group can increase as a result of adding randomly sampled training data:
$$
\exists\, g_{\text{eval}} \in \mathcal{G}_{\text{eval}},\ \{n_g^{\prime} \leq n_g\}_{g \in \mathcal{G}_{\text{source}}} : \tag{3}
$$

$$
\mathbb{E}\left[\mathcal{R}(g_{\text{eval}}); \{n_g^{\prime}\}_{g \in \mathcal{G}_{\text{source}}}; \mathcal{F}, \ell_{\text{train}}\right] < \mathbb{E}\left[\mathcal{R}(g_{\text{eval}}); \{n_g\}_{g \in \mathcal{G}_{\text{source}}}; \mathcal{F}, \ell_{\text{train}}\right].
$$
We can interpret eq. (3) across all possible allocations that could be collected, or with respect to an existing dataset with total size $n = \sum_{g \in \mathcal{G}_{\text{source}}} n_g$, where the $\mathcal{D}_g$ denote empirical distributions and sampling is done uniformly at random. In the second case, a data externality exposes an implicit "cost" to some evaluation group, formalized as room for improvement, $\Delta$, in the following claim.
Claim 1 (Data externalities lower bound room for model improvement). For a training dataset with ${\left\{ {n}_{g}\right\} }_{g \in {\mathcal{G}}_{\text{source }}}$ , for fixed training procedure with model class $\mathcal{F}$ and training loss ${\ell }_{\text{train }}$ , the maximum magnitude of data externality ${\Delta }_{{g}_{\text{eval }}}$ on group ${g}_{\text{eval }}$ ,
$$
\Delta_{g_{\text{eval}}} := \max_{\{n_g^{\prime} \leq n_g\ \forall g\}} \left[ \mathbb{E}\left[\mathcal{R}(g_{\text{eval}}); \{n_g\}_{g \in \mathcal{G}_{\text{source}}}; \mathcal{F}, \ell_{\text{train}}\right] - \mathbb{E}\left[\mathcal{R}(g_{\text{eval}}); \{n_g^{\prime}\}_{g \in \mathcal{G}_{\text{source}}}; \mathcal{F}, \ell_{\text{train}}\right] \right], \tag{4}
$$
is a lower bound on the best possible improvement in expected risk for group ${g}_{\text{eval }}$ that can be achieved using this dataset without raising the expected risk for groups disjoint from ${g}_{\text{eval }}$ .
Proof. We can construct an alternative training procedure that first subsets the training data uniformly at random from each source group according to the $n_g^{\prime}$ that maximize the expression in eq. (4). Since groups are assumed to be known, we can selectively apply this new model to instances from $g_{\text{eval}}$ and use the original model for all other instances. This split model lowers expected risk by $\Delta_{g_{\text{eval}}}$ for $g_{\text{eval}}$ and does not alter expected risk for instances not in $g_{\text{eval}}$. As this procedure only optimizes over the training data sub-sampling, it is a lower bound for the possible performance improvements.
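The split-model construction in the proof can be sketched directly, assuming group membership is observed at prediction time (all names here are illustrative):

```python
def make_split_model(model_full, model_subset, g_eval):
    """Route instances from g_eval to the model trained on the externality-free
    data subset; every other group keeps the original model, so its expected
    risk is unchanged."""
    def predict(x, g):
        return model_subset(x) if g == g_eval else model_full(x)
    return predict
```

Because only the routing changes, any improvement on $g_{\text{eval}}$ comes for free with respect to the disjoint groups.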
Claim 1 highlights that identifying data externalities can improve the model for groups on which the model under-performs, without any negative consequence to disjoint evaluation groups. However, data externalities also tell us something more subtle about the compatibility of our model with the underlying structures in our data. In the next section, we investigate possible causes of data externalities, and what they mean for improving model performance.
## 4 When do negative data externalities arise?
An intuitive setting where data externalities can arise is when the complexity of model class $\mathcal{F}_{\Theta}$ is constrained or mis-specified so that the optimal parameters differ per group, i.e. $\theta_g^{*} \neq \theta_{g^{\prime}}^{*}$, where $\theta_g^{*} := \arg\min_{\theta \in \Theta} \mathbb{E}_{(x, y) \sim \mathcal{D}_g}\left[\ell(f_{\theta}(x), y)\right]$. Here we detail such a setting, as well as another example in which data externalities arise even when the optimal model is the same for all groups ($\theta_g^{*} = \theta_{g^{\prime}}^{*}$).
Our examples use an illustrative model where we assume there exists a true affine relationship for each group, with shared linear weights but different intercepts (see §A.3 for exact parameters):
$$
y_g = w \cdot x_g + b_A \cdot \mathbb{I}[g = A] + b_B \cdot \mathbb{I}[g = B] + \epsilon_g, \qquad g \in \{A, B\} = \mathcal{G},
$$
but the model class is the set of affine models shared between groups $\left( {{f}_{\theta }\left( {x, g}\right) = {\theta }_{1}x + {\theta }_{2}}\right)$ . In the first example, we vary the intercept of the true model between the groups, as well as the mean of the feature distribution (fig. 1a, left). The discrepancy between the true model and the model class results in data externalities with magnitude ( $\Delta$ in eq. (4)) increasing monotonically as the number of samples from the other group increases (positive slopes in fig. 1a, right).
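The first example can be reproduced in a few lines: two groups share a slope but differ in intercept and feature mean, yet a single shared affine model is fit to their pooled data. The parameter values below are illustrative stand-ins (the paper's exact values are in its §A.3):

```python
import numpy as np

rng = np.random.default_rng(0)
w, b_A, b_B = 1.0, 0.0, 2.0          # shared slope, group-specific intercepts

def sample(n, b, x_mean):
    x = rng.normal(x_mean, 1.0, n)
    return x, w * x + b + rng.normal(0.0, 0.1, n)

def risk_on_A(n_A, n_B):
    """MSE on group A of one shared affine model fit to both groups' data."""
    xA, yA = sample(n_A, b_A, x_mean=0.0)
    xB, yB = sample(n_B, b_B, x_mean=3.0)
    x = np.concatenate([xA, xB]); y = np.concatenate([yA, yB])
    theta, *_ = np.linalg.lstsq(np.c_[x, np.ones_like(x)], y, rcond=None)
    x_te, y_te = sample(5000, b_A, x_mean=0.0)   # evaluate on group A only
    return float(np.mean((np.c_[x_te, np.ones_like(x_te)] @ theta - y_te) ** 2))

# Adding group-B training data raises group-A risk: a negative data externality.
risks = [np.mean([risk_on_A(200, n_B) for _ in range(10)]) for n_B in (0, 200, 2000)]
```

Because the model class cannot represent two intercepts, every group-B point biases the shared fit away from group A's optimum.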

Figure 1: Two illustrative examples based on the data-generating distribution described in §4
In the second example, the same model parameters apply to both groups, but we vary the spread of the feature distribution and the scale of the observation noise $\epsilon_g$ between groups, effectively decreasing the signal-to-noise ratio for group $B$ relative to group $A$ (fig. 1b, left). This results in data externalities evaluated on group $A$ for small to mid-range numbers of samples from group $B$, which dissipate with larger $n_B$ (fig. 1b, right). What constitutes "small to mid-range" values of $n_B$ is relative to $n_A$: the magnitude of the negative data externalities decreases given sufficient samples from group $A$.
In these two examples, negative data externalities arise for different reasons, and the modeling intervention that would best address them depends on the cause of the tension. In the first example, allowing a more complex model class that fits a different intercept term for each group (expanding model capacity in a targeted way) will alleviate data externalities for any $\{n_A, n_B\}$. In the second example, removing negative data externalities by splitting the model by group would eliminate the positive externalities of adding samples from group B when $n_B$ is large ($\geq 10^5$ in fig. 1b). A more appropriate strategy for the second example would be to reweight instances according to their source group when computing and optimizing the training loss.
While claim 1 formalizes that data externalities signal a clear opportunity to improve performance, the examples above highlight that the best way to make model improvements will depend on the setting. Data externalities could thus be considered a symptom indicating sub-optimal data-efficiency of a given modeling procedure. Remedying the exposed tensions in an effective manner will require understanding the underlying mechanisms giving rise to the observed data externalities.
## 5 Data externalities with real data
Experiments in this section expose negative data externalities with respect to the empirical distributions defined by two different real-world datasets (see §A.1 for more details on datasets):
The Goodreads datasets [22] contain book reviews and ratings of books from different genres. We collect data for two genres, history/biography (history) and fantasy/paranormal (fantasy), which comprise the two groups $g$ in our setup. As in [19], the binary prediction task is to discern whether a book review corresponds to a 5-star rating (1) or less (0), given the text of the review.
The CivilComments dataset with identity information [5] contains online comments with human-annotated labels corresponding to whether the comment is considered toxic and whether the comment is targeted at a specific group $g$ . We focus on the four largest identity groups present in the dataset: female, male, Christian, and Muslim (groups are determined as binary labels if the annotator average is at least 0.5 for that identity group, similar to toxicity labels).
Beyond evidencing that negative data externalities can manifest in real-data settings, we design experiments to understand when they manifest, in light of results from §4. We examine how data externalities arise under different conditions on total sample size (§5.1) and model capacity (§5.2).
To identify data externalities with existing datasets, we sub-sample the available training data from each source group $g$ uniformly at random with different allocations defined by $\{n_g\}_{g \in \mathcal{G}}$. To estimate eq. (2) for each group, we measure the per-group performance of the resulting models on fixed evaluation sets. We report per-group area under the receiver operating characteristic curve (AUROC) as a metric that is insensitive to class imbalances (see §A.2). We report variability across multiple random draws of the training set for each allocation $\{n_g\}_{g \in \mathcal{G}}$.
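Per-group AUROC is obtained by restricting the score/label pairs to each evaluation group. A small numpy sketch using the pairwise-comparison definition of AUROC (function names are ours):

```python
import numpy as np

def auroc(y_true, y_score):
    """P(score of a random positive > score of a random negative); ties count 1/2."""
    pos, neg = y_score[y_true == 1], y_score[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return float(wins + 0.5 * ties)

def per_group_auroc(y_true, y_score, groups):
    """AUROC evaluated separately per group -- insensitive to per-group class balance."""
    return {g: auroc(y_true[groups == g], y_score[groups == g])
            for g in np.unique(groups)}
```

Repeating this over random training-set draws for each allocation $\{n_g\}$ yields the curves plotted in the figures.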
### 5.1 Data externalities and dataset size (book rating prediction)
We first examine how data externalities manifest across different possible sample sizes and ratios between groups, using the rating prediction task described above. To compare performance across many subsets of training data, we chose a model that is fast to train. Following [7, 19], we use a linear regression model with ${\ell }_{1}$ penalty (lasso), trained on 1000-dimensional tf-idf vector embeddings of the review texts. The ${\ell }_{1}$ penalty is chosen via a cross-validation search for each subset.
Figure 2 shows that negative data externalities can arise in real-data settings, evidenced by the decrease in AUROC evaluated on one group as data from another group is added to the training set (left to right on the horizontal axis). As discussed in claim 1, these externalities suggest obvious modeling interventions to increase per-group performance. For each panel (evaluation group) in fig. 2, we could subset the full training data to the allocation that maximizes AUROC along the vertical axis, resulting in two models trained on different training subsets and thereby greater performance on each group (compared to performance with the full training set, $n = 400{,}000$).

Figure 2: Data externalities manifest when there is sufficient training data from the group being evaluated. Each curve fixes the number of training points corresponding to the evaluation group. From left to right, training data is added randomly from the other genre. Solid lines show average per-group performance; shaded regions show 2 standard errors above and below the mean over 10 trials. Negative slopes diagnose data externalities measured with respect to per-group AUROC.
The magnitude of the externality (measured from the peak of each curve to its rightmost point) is small, as expected from previous work that assumes between-group trends have a negligible effect on per-group performance as a function of training set allocations [7, 19]. Nonetheless, the existence of data externalities suggests that in some contexts, more nuanced scaling laws would be appropriate for describing model performance across data allocations.
The curves in fig. 2 are not all monotonic, suggesting that there is not an "all or nothing" answer to whether merging or splitting training data from multiple groups optimizes model performance: sometimes, the best performance for group A is achieved by adding a moderate amount of training samples from group B. In fact, negative data externalities tend to manifest only once a certain number of points from the group in question are present in the training set (lighter-colored curves).
Drawing on our understanding from §4, we hypothesize that as the total number of training points increases, the training data "saturates" the model class, in the sense that the variance reduction due to additional points from group $\mathrm{B}$ is not worth the bias away from the optimal model parameters with respect to group A's data distribution. The exact saturation point would depend on the distribution of features and labels in each group and the distance between the group-optimal model parameters, the latter depending on the capacity of the model class. To examine this further, our next set of experiments examines how data externalities manifest with models of different capacity.
### 5.2 Data externalities and model size (toxicity classification)
We now examine how data externalities can differ for models of different sizes, using the CivilComments with identities dataset described above. We fine-tune pre-trained miniature BERT models of different capacities from [21]. Model architectures are determined by the number of transformer layers $L$ and the hidden embedding size $H$, with a corresponding number of attention heads. For each fine-tuning run we use the Adam optimizer with learning rate 0.0001 and weight decay 0.01, and train for 100 epochs with batch size 64 and 500 gradient steps per epoch (see §A.2).
The leftmost points in each top panel of fig. 3 start with the maximum number of training points from the given group and increase the number of training points by adding data at random from the rest of the training set. The different hues in fig. 3 rank the models in terms of number of overall parameters (see table 1 in [21]). Negative slopes in the top row of fig. 3 (and corresponding negative values in the bottom row of fig. 3) evidence that data externalities arise in this data context and prediction task.

Figure 3: Increasing model capacity has a complicated effect on magnitude of data externalities. Performance and data externalities are measured across groups in the CivilComments dataset [5] for different miniature BERT models [21] with size determined by $L, H$ . Top: group AUROCs for different training configurations and model architectures; shaded areas denote 2 standard errors above and below the mean across 5 trials. Bottom: the magnitude of data externalities from the top row.
While increasing the width or depth of the miniature BERT models (generally moving left to right in the bottom panels) can decrease the exhibited data externalities on per-group AUROC, increasing model complexity does not necessarily mitigate negative data externalities on group performance, and in some cases can exacerbate them. Adding additional layers to the model can increase the magnitude of the data externalities evidenced (first three bars of groups female, Muslim in fig. 3, bottom), even though models with more layers tend to have higher overall performance. While this phenomenon tends to be more stark for models of increasing depth, we caution against interpreting the relative merits of adding model capacity via depth or width without further analysis.
Taken together, the results in §5.1 and §5.2 evidence that data externalities manifest in different real-world data contexts and shed light on when and why they might manifest. Results in fig. 2 suggest that data externalities arise for one group primarily when there is enough representation in the training data from that group to "saturate" the model. Results in fig. 3 highlight that this point of saturation may depend on the complexity of the model parameterization among other factors. Future work could leverage these findings to build a stronger understanding of how model capacity might be tailored to jointly increase performance while promoting data efficiency under different conditions.
## 6 Discussion and open questions
We have shown that data externalities, a phenomenon in which adding more data from some input sources reduces performance on key evaluation groups, can occur in many machine learning settings. While this specific type of "data inefficiency" indicates room for model improvement, eq. (4) is likely to be a coarse lower bound for the possible improvements that could be made. Furthermore, the simple model modification described in the proof of claim 1 is only computationally reasonable when the number of evaluation groups is relatively small. Characterizing when and how data externalities can be (i) reliably identified for unknown evaluation groups or large number of groups and (ii) effectively mitigated within reasonable computational limits will be important future work.
We have focused on understanding how and when data externalities manifest across learning settings and training procedures. It would be interesting to study data externalities and data-efficiency more generally as a principle by which to design algorithms from the outset. For example, a learning procedure guaranteeing no data externalities could enhance transparency regarding how input data affects model outputs, toward aligning the goals of data minimization, participatory approaches, and fairness with traditional performance optimization in machine learning.
## References
[1] Lora Aroyo, Matthew Lease, Praveen Paritosh, and Mike Schaekermann. Data excellence for AI: Why should you care? Interactions, 29(2):66-69, feb 2022.
[2] Yamini Bansal, Behrooz Ghorbani, Ankush Garg, Biao Zhang, Colin Cherry, Behnam Neyshabur, and Orhan Firat. Data scaling laws in NMT: The effect of noise and architecture. In International Conference on Machine Learning, pages 1466-1482. PMLR, 2022.
[3] Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT '21, page 610-623, New York, NY, USA, 2021. Association for Computing Machinery.
[4] Christine L Borgman. Big data, little data, no data: Scholarship in the networked world. MIT press, 2017.
[5] Daniel Borkan, Lucas Dixon, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. Nuanced metrics for measuring unintended bias with real data for text classification. In Companion proceedings of the 2019 world wide web conference, pages 491-500, 2019.
[6] Valerie C Bradley, Shiro Kuriwaki, Michael Isakov, Dino Sejdinovic, Xiao-Li Meng, and Seth Flaxman. Unrepresentative big surveys significantly overestimated us vaccine uptake. Nature, 600(7890):695-700, 2021.
[7] Irene Chen, Fredrik D Johansson, and David Sontag. Why is my classifier discriminatory? Advances in Neural Information Processing Systems, 31, 2018.
[8] Jay Pil Choi, Doh-Shin Jeon, and Byung-Cheol Kim. Privacy and personal data collection with information externalities. Journal of Public Economics, 173:113-124, 2019.
[9] Lucas Dixon, John Li, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. Measuring and mitigating unintended bias in text classification. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pages 67-73, 2018.
[10] Jesse Dodge, Maarten Sap, Ana Marasović, William Agnew, Gabriel Ilharco, Dirk Groeneveld, Margaret Mitchell, and Matt Gardner. Documenting large webtext corpora: A case study on the colossal clean crawled corpus. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1286-1305, Online and Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics.
[11] Hadi Elzayn and Benjamin Fish. The effects of competition and regulation on error inequality in data-driven markets. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, FAT* '20, page 669-679, New York, NY, USA, 2020. Association for Computing Machinery.
[12] Amirata Ghorbani and James Zou. Data shapley: Equitable valuation of data for machine learning. In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 2242-2251. PMLR, 09-15 Jun 2019.
[13] Tatsunori Hashimoto. Model performance scaling with multiple data sources. In International Conference on Machine Learning, pages 4107-4116. PMLR, 2021.
[14] Shota Ichihashi. The economics of data externalities. Journal of Economic Theory, 196:105316, 2021.
[15] Saachi Jain, Hadi Salman, Alaa Khaddaj, Eric Wong, Sung Min Park, and Aleksander Madry. A data-based perspective on transfer learning. arXiv preprint arXiv:2207.05739, 2022.
[16] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361, 2020.
[17] Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Irena Gao, et al. Wilds: A benchmark of in-the-wild distribution shifts. In International Conference on Machine Learning, pages 5637-5664. PMLR, 2021.
[18] Yongchan Kwon and James Zou. Beta shapley: A unified and noise-reduced data valuation framework for machine learning. arXiv preprint arXiv:2110.14049, 2021.
[19] Esther Rolf, Theodora T Worledge, Benjamin Recht, and Michael Jordan. Representation matters: Assessing the importance of subgroup allocations in training data. In International Conference on Machine Learning, pages 9040-9051. PMLR, 2021.
[20] Chen Sun, Abhinav Shrivastava, Saurabh Singh, and Abhinav Gupta. Revisiting unreasonable effectiveness of data in deep learning era. In Proceedings of the IEEE international conference on computer vision, pages 843-852, 2017.
[21] Iulia Turc, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Well-read students learn better: On the importance of pre-training compact models. arXiv preprint arXiv:1908.08962, 2019.
[22] Mengting Wan and Julian J. McAuley. Item recommendation on monotonic behavior chains. In Sole Pera, Michael D. Ekstrand, Xavier Amatriain, and John O'Donovan, editors, Proceedings of the 12th ACM Conference on Recommender Systems, RecSys 2018, Vancouver, BC, Canada, October 2-7, 2018, pages 86-94. ACM, 2018.
## A Appendix
### A.1 Background and context for datasets
Goodreads. The Goodreads datasets contain a corpus of book reviews across several genres, with numerical ratings and information about the book and reviewer [22]. This dataset and task have been used to study group effects across author genders and book genres in previous work [7, 19].
We follow a preprocessing approach similar to that in [19], where groups are genres. Specifically, we instantiate our experimental dataset with reviews from two genres: history/biography (history) and fantasy/paranormal (fantasy), excluding the few books with overlap between these genres. We take 250,000 review instances at random from the reviews of the most popular 1000 books per genre (500,000 instances in total), and split this data into a group-balanced training set of 400,000 instances and a validation/test set of 100,000 instances.
We consider binary labels indicating whether the rating accompanying each review text is a 5-star rating (1) or less (0), on the original 1-5 scale. The class distribution is similar between groups, with an average label of 0.39 across history group training instances and 0.41 across fantasy training instances. Review texts are embedded using tf-idf vectorization with 1000 features, ignoring 'english' stopwords. The tf-idf vectors are computed with respect to the entire 500,000 instances.
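As a minimal, self-contained sketch of this embedding step, the toy vectorizer below stands in for a library implementation such as scikit-learn's `TfidfVectorizer(max_features=1000, stop_words='english')`; its whitespace tokenization, tiny stopword list, and smoothed idf weighting are simplifying assumptions, not the exact pipeline used in the experiments.

```python
import math
from collections import Counter

def tfidf_vectorize(texts, max_features=1000, stopwords=frozenset({"the", "and", "a"})):
    """Toy tf-idf embedding: keep the max_features most common non-stopword
    terms, then weight each document's term counts by inverse document
    frequency (a simplified variant of the usual smoothed formula)."""
    docs = [[w for w in t.lower().split() if w not in stopwords] for t in texts]
    df = Counter(w for d in docs for w in set(d))        # document frequency
    vocab = [w for w, _ in df.most_common(max_features)]
    idf = {w: math.log(len(docs) / df[w]) + 1.0 for w in vocab}
    vectors = []
    for d in docs:
        tf = Counter(d)
        vectors.append([tf[w] * idf[w] for w in vocab])  # one row per review
    return vectors, vocab
```

In the paper's setting the vectors would be fit on all 500,000 reviews at once, so that both genres share one feature space.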
CivilComments (with identities) dataset for toxicity classification in text. Classifying toxic comments in a corpus of text can be challenging due to different meanings or sentiments of words or phrases when they are used in reference to certain groups or topics [9]. The CivilComments dataset with identity information [5] contains online comments with human-annotated labels corresponding to whether the comment is considered toxic, as well as whether the comment is targeted at a specific group.
As noted in [17], the test set for some identity groups can be very small. In our analysis, we focus on the four largest identity groups present in the dataset: female, male, Christian, and Muslim. Groups are determined as binary labels if the annotator average is at least 0.5 for that identity group; binary toxicity labels are assigned similarly. The full training sets contain 405,130 total instances, with 53,429 instances corresponding to the female group, 44,848 instances corresponding to the male group, 40,423 instances corresponding to the Christian group, and 21,006 instances corresponding to the Muslim group. The average toxicity rate (average binary label) differs across groups: in the training set the toxicity rates by group are 14% (female), 15% (male), 11% (Christian), and 24% (Muslim).
To increase the number of evaluation samples across groups, we combine the validation and test sets in our analysis. Combining the validation and test sets means we don't have a separate validation set to tune model hyper-parameters. Should a separate validation set be necessary, one could use the pre-processed dataset from [17], which combines some groups together to increase per-group sample sizes and allocates a larger fraction of the overall data to separate validation and test sets.
### A.2 Additional experiment details
#### A.2.1 Per-group AUROC
Evaluating AUROC within each group measures the strength of the relative ordering of the predictions within that group. This is especially desirable for our experimental purposes should label distributions differ between groups, and is reasonable since we assume groups are known during training and evaluation. While this suits our setting, we note the limitations of using per-group AUROC when assigning per-group classification thresholds is infeasible [5], e.g. when groups are assumed to be unknown.
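Concretely, per-group AUROC simply restricts the usual rank statistic to the instances of each evaluation group. The small pure-Python sketch below is for exposition only (in practice one would call e.g. `sklearn.metrics.roc_auc_score` per group); it uses the Mann-Whitney interpretation of AUROC, with ties counted as one half.

```python
from collections import defaultdict

def auroc(labels, scores):
    """AUROC as the probability that a randomly chosen positive instance
    is scored above a randomly chosen negative one (ties count 1/2)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("AUROC needs both classes present in the group")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def per_group_auroc(groups, labels, scores):
    """Evaluate AUROC separately within each evaluation group."""
    by_group = defaultdict(lambda: ([], []))
    for g, y, s in zip(groups, labels, scores):
        by_group[g][0].append(y)
        by_group[g][1].append(s)
    return {g: auroc(ys, ss) for g, (ys, ss) in by_group.items()}
```

Because each group's statistic depends only on the ordering of scores within that group, the metric is insensitive to between-group differences in base rates, which is exactly why it is used here.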
#### A.2.2 Details on §5.2
The pretrained miniature BERT models we use in §5.2 were accessed via https://tfhub.dev/google/collections/bert/1. Details on the number of parameters, training time, and pretraining process for each model can be found in [21].
For each fine-tuning run we use the Adam optimizer with learning rate 0.0001 and weight decay of 0.01, and train for 100 epochs with batch size 64 and 500 gradient steps per epoch. While we fix these parameters as part of the fixed training procedure definition for this experiment (e.g. as relates to definition 1), it is likely that the optimal hyper-parameter values would differ across model architectures and amounts of training data. Future work could incorporate hyper-parameter optimization as part of the training procedures for each (dataset, model) pair.
### A.3 Illustrative example details
Our examples in §4 use the following linear model:
$$
y_g = w \cdot x_g + b_A \cdot \mathbb{I}\left[ g = A \right] + b_B \cdot \mathbb{I}\left[ g = B \right] + \epsilon_g, \qquad g \in \{A, B\} = \mathcal{G}.
$$

Here we give the exact parameters that produce the experimental results in fig. 1. In the first example (fig. 1a), we vary the intercept of the true model between the groups, as well as the mean of the distribution of the features:
$$
w = -1; \quad b_A = -10,\ b_B = 10 \tag{5}
$$

$$
x_A \sim \mathcal{N}(10, 5) + \epsilon_A, \qquad \epsilon_A \sim \mathcal{N}(0, 1)
$$

$$
x_B \sim \mathcal{N}(-10, 5) + \epsilon_B, \qquad \epsilon_B \sim \mathcal{N}(0, 1).
$$
In the second example (fig. 1b), the same model applies to both groups, but we vary the parameters governing observation noise ${\epsilon }_{g}$ between groups, as well as the spread of the feature distribution:
$$
w = -1; \quad b_A = b_B = -10 \tag{6}
$$

$$
x_A \sim \mathcal{N}(-10, 5) + \epsilon_A, \qquad \epsilon_A \sim \mathcal{N}(0, 1)
$$

$$
x_B \sim \mathcal{N}(-10, 2) + \epsilon_B, \qquad \epsilon_B \sim \mathcal{N}(0, 10).
$$
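The second example can be reproduced with a short simulation. The sketch below is our own illustration (names and the ordinary-least-squares fit standing in for the training procedure are assumptions): it draws data from the distributions in eq. (6), fits one affine model shared between the groups, and reports the resulting per-group squared-error risks.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_group(n, x_mean, x_spread, intercept, noise_sd):
    """Draw (x_g, y_g) per the second example: features x_g ~ N(x_mean, x_spread)
    plus noise eps_g ~ N(0, noise_sd); labels y_g = -1 * x_g + b_g + eps_g.
    (How eps_g enters both lines is our reading of eq. (6).)"""
    x = rng.normal(x_mean, x_spread, n) + rng.normal(0.0, noise_sd, n)
    y = -1.0 * x + intercept + rng.normal(0.0, noise_sd, n)
    return x, y

def shared_fit_group_risks(n_A, n_B):
    # Example-2 parameters: same true model, different feature spread and noise.
    xA, yA = sample_group(n_A, x_mean=-10, x_spread=5, intercept=-10, noise_sd=1)
    xB, yB = sample_group(n_B, x_mean=-10, x_spread=2, intercept=-10, noise_sd=10)
    x, y = np.concatenate([xA, xB]), np.concatenate([yA, yB])
    slope, b = np.polyfit(x, y, 1)       # one affine model shared by both groups
    risk = lambda xs, ys: float(np.mean((slope * xs + b - ys) ** 2))
    return risk(xA, yA), risk(xB, yB)
```

Sweeping `n_B` for fixed `n_A` and plotting the group-A risk recovers the qualitative shape of fig. 1b: externalities at small-to-mid `n_B` that dissipate as `n_B` grows.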
|
NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/_h_ikjOEGL_/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
§ STRIVING FOR DATA-MODEL EFFICIENCY: IDENTIFYING DATA EXTERNALITIES ON GROUP PERFORMANCE
Anonymous Author(s)
Affiliation
Address
email
§ ABSTRACT
Building trustworthy, effective, and responsible machine learning systems hinges on understanding how differences in training data and modeling decisions interact to impact predictive performance. In this work, we seek to better understand how we might characterize, detect, and design for data-model synergies. We focus on a particular type of data-model inefficiency, in which adding training data from some sources can actually lower performance evaluated on key sub-groups of the population, a phenomenon we refer to as negative data externalities on group performance. Such externalities can arise in standard learning settings and can manifest differently depending on conditions between training set size and model size. Data externalities directly imply a lower bound on feasible model improvements, yet improving models efficiently requires understanding the underlying data-model tensions. From a broader perspective, our results indicate that data-efficiency is a key component of both accurate and trustworthy machine learning.
§ 1 INTRODUCTION
Although key aspects of trustworthiness and responsibility in machine learning are often framed from an algorithmic perspective, we explore an alternative framing that focuses on how our chosen modeling and training procedures perform under different data (collection) regimes. We refer to the guiding goal of using the available training data to achieve the best possible performance for all target populations as "data efficiency." While this has clear alignment with accuracy maximization, data minimization, and fairness, it is not always clear how to test or design for data-efficiency generally.
We focus on a specific type of data inefficiency, in which adding training data from certain data sources can actually decrease performance on key groups of the target population. We call this phenomenon negative data externalities on group performance (defined formally in definition 1). While recent works have examined how the amount of training data from one group affects performance evaluated on that same group [7, 11, 19], studying between-group trends can evidence tensions between different aspects of model performance that might otherwise go unnoticed.
In this work, we formalize data externalities and their immediate cost to model performance (§2, §3). We describe mechanisms by which negative data externalities can arise and detail why understanding these mechanisms matters for designing effective model improvements (§4). Our experiments show how data externalities shift with conditions on sample size and model capacity (§5). Altogether, our results suggest that data externalities may be surprisingly pervasive in real-world settings, yet avoidable by designing training procedures to promote data-efficiency. Our findings have implications for fairness, privacy, participation, and transparency in machine learning (§6).
Related Work. Alongside a recognition of the role of training data in achieving high-performing models [1, 19, 20] has come an appreciation that dataset size does not imply requisite quality [4], representativeness [6], or diversity [3]. For example, concerns have been raised about the use of large, multi-source text datasets, including the transparency of data provenance [10], a problem compounded by a "documentation debt" [3]. In this work, we study how training data from different sources can affect model performance, focusing on possible externalities to key subgroups of the target population.
Prior work has proposed scaling laws to describe how the total number of training samples and model size affect aggregate performance [2, 16]. Our study connects to a growing line of research on how varying the amount of training data from different sources, groups, or classes impacts model performance, including subgroup and average accuracies [13, 19], and statistical fairness [7, 11]. Scaling laws that have been proposed to model the impact of data from one dataset (e.g. [2, 16]) or multiple source groups (e.g. [7, 19]) on model performance typically implicitly encode that more data never worsens performance, and thus do not allow for data externalities of the type we study here. Data valuation techniques have been proposed to quantify the (possibly negative) value of individual data points [12, 18]. In a framework more similar to our own, [15] studies scenarios where leaving out entire classes from a training dataset can improve transfer learning performance.
Finally, we note that the general terms "data externalities" and "information externalities" have also been used to describe externalities on other factors, such as privacy [8] and data valuation [14].
§ 2 SETTING AND NOTATION
We assume that each training instance is collected from exactly one of the distinct source groups $g \in {\mathcal{G}}_{\text{source}}$. Each evaluation instance can be associated with any subset of predefined evaluation groups $g \in {\mathcal{G}}_{\text{eval}}$, which may intersect. In this work, we assume that groups are known at training and evaluation time. We will often make the simplifying assumption that ${\mathcal{G}}_{\text{eval}} = {\mathcal{G}}_{\text{source}}$, referring to the set of groups simply as $\mathcal{G}$.
We assume that one can sample instances $\left( {x,y}\right) \sim {\mathcal{D}}_{g}$ for each group $g \in {\mathcal{G}}_{\text{source}} \cup {\mathcal{G}}_{\text{eval}}$, where ${\mathcal{D}}_{g}$ denotes the data distribution corresponding to group $g$. We say that a model $f\left( \cdot \right)$ results in group risks, or expected losses, of $\mathcal{R}\left( g\right) = {\mathbb{E}}_{\left( {x,y}\right) \sim {\mathcal{D}}_{g}}\left\lbrack {\ell \left( {f\left( x\right) ,y}\right) }\right\rbrack$, defined with respect to loss function $\ell$.
Per-group risks as random variables parameterized by training dataset composition. Following [19], we consider the learned model ${f}_{\theta }$ as a random function parameterized by both the model parameters $\theta$ and allocations governing sample sizes from each source. Denote the (random) training dataset $\mathcal{S}$ as the union of the $\left| {\mathcal{G}}_{\text{ source }}\right|$ sets each comprising samples from each training source:
$$
\mathcal{S}\left(n_1, \ldots, n_{|\mathcal{G}_{\text{source}}|}\right) = \bigcup_{g \in \mathcal{G}_{\text{source}}} \left\{ (x_i, y_i) \sim_{\text{i.i.d.}} \mathcal{D}_g \right\}_{i=1}^{n_g} \tag{1}
$$
where sample sizes are determined by the total dataset size $n = \mathop{\sum }\limits_{{g \in {\mathcal{G}}_{\text{ source }}}}{n}_{g}$ and allocations ${n}_{g}/n$ . With these definitions in place, we can define the expected risk of a training procedure as a function of the composition of the training set. For example, for a standard training procedure that selects a model from class ${\mathcal{F}}_{\Theta } = {\left\{ {f}_{\theta }\right\} }_{\theta \in \Theta }$ to minimize loss ${\ell }_{\text{ train }}$ :
$$
\mathbb{E}\left[ \mathcal{R}(g_{\text{eval}}); \{n_g\}_{g \in \mathcal{G}_{\text{source}}}; \mathcal{F}_{\Theta}, \ell_{\text{train}} \right] = \mathbb{E}_{\mathcal{S}(\{n_g\})}\left[ \mathbb{E}_{(x,y) \sim \mathcal{D}_{g_{\text{eval}}}}\left[ \ell\left(f_{\theta}(x), y\right) \mid \theta = \theta^{*}_{\ell_{\text{train}}, \mathcal{S}} \right] \right] \tag{2}
$$
where ${\theta }_{{\ell }_{\text{ train },\mathcal{S}}}^{ * } = \arg \mathop{\min }\limits_{{\theta \in \Theta }}{\ell }_{\text{ train }}\left( {{f}_{\theta },\mathcal{S}}\right)$ . Equation (2) gives us a framework by which to describe performance of our learning procedure across evaluation groups as sensitive to many different scenarios and choices, including the training source sample sizes ${n}_{g}$ and model class complexity.
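The expectation in eq. (2) can be estimated by Monte Carlo: repeatedly draw a training set with the given allocation, train, and average the evaluation-group loss. The sketch below is schematic (the `fit` and `loss` callables are placeholders for the fixed training procedure and $\ell$, and finite pools stand in for the distributions $\mathcal{D}_g$):

```python
import random
from statistics import mean

def expected_group_risk(train_pools, allocations, eval_pool, fit, loss, trials=20):
    """Average the evaluation-group risk of models trained on repeated
    random draws of S({n_g}) (the outer expectation over S in eq. (2))."""
    risks = []
    for t in range(trials):
        rng = random.Random(t)
        # Assemble S by sampling n_g instances i.i.d. per source group (eq. (1)).
        S = [ex for g, n_g in allocations.items()
             for ex in rng.choices(train_pools[g], k=n_g)]
        model = fit(S)  # stands in for theta* minimizing l_train on S
        risks.append(mean(loss(model, x, y) for x, y in eval_pool))
    return mean(risks)
```

Varying `allocations` while holding everything else fixed traces out the dependence of per-group risk on training set composition that the rest of the paper studies.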
§ 3 DATA EXTERNALITIES AND THEIR COSTS
We now use the framework of §2 to characterize our main phenomenon of study: when adding training data from a particular source group decreases performance for an evaluation group.
Definition 1 (negative data externality on group risk). For fixed training procedure described by model class $\mathcal{F}$ and loss objective ${\ell }_{\text{ train }}$ , a negative data externality on group risk (w.r.t. groups ${\mathcal{G}}_{\text{ eval }}$ ) occurs when the expected risk for some evaluation group can increase as a result of adding randomly sampled training data:
$$
\exists\, g_{\text{eval}} \in \mathcal{G}_{\text{eval}},\ \{n'_g \leq n_g\}_{g \in \mathcal{G}_{\text{source}}} : \tag{3}
$$

$$
\mathbb{E}\left[ \mathcal{R}(g_{\text{eval}}); \{n'_g\}_{g \in \mathcal{G}_{\text{source}}}; \mathcal{F}, \ell_{\text{train}} \right] < \mathbb{E}\left[ \mathcal{R}(g_{\text{eval}}); \{n_g\}_{g \in \mathcal{G}_{\text{source}}}; \mathcal{F}, \ell_{\text{train}} \right].
$$
We can interpret eq. (3) across all possible allocations that could be collected, or with respect to an existing dataset with total size $n = \mathop{\sum }\limits_{{g \in {\mathcal{G}}_{\text{ source }}}}{n}_{g}$ , where ${\mathcal{D}}_{g}$ denote empirical distributions and sampling is done uniformly at random. In the second case, a data externality exposes an implicit "cost" to some evaluation group, formalized as a room for improvement, $\Delta$ , in the following claim.
Claim 1 (Data externalities lower bound room for model improvement). For a training dataset with ${\left\{ {n}_{g}\right\} }_{g \in {\mathcal{G}}_{\text{ source }}}$ , for fixed training procedure with model class $\mathcal{F}$ and training loss ${\ell }_{\text{ train }}$ , the maximum magnitude of data externality ${\Delta }_{{g}_{\text{ eval }}}$ on group ${g}_{\text{ eval }}$ ,
$$
\Delta_{g_{\text{eval}}} := \max_{n'_g \leq n_g\ \forall g}\left[ \mathbb{E}\left[ \mathcal{R}(g_{\text{eval}}); \{n_g\}_{g \in \mathcal{G}_{\text{source}}}; \mathcal{F}, \ell_{\text{train}} \right] - \mathbb{E}\left[ \mathcal{R}(g_{\text{eval}}); \{n'_g\}_{g \in \mathcal{G}_{\text{source}}}; \mathcal{F}, \ell_{\text{train}} \right] \right], \tag{4}
$$
is a lower bound on the best possible improvement in expected risk for group ${g}_{\text{ eval }}$ that can be achieved using this dataset without raising the expected risk for groups disjoint from ${g}_{\text{ eval }}$ .
Proof. We can construct an alternative training procedure that first subsets the training data uniformly at random from each source group according to the ${n}_{g}^{\prime }$ that maximize the expression in eq. (4). Since groups are assumed to be known, we can selectively apply this new model to instances from ${g}_{\text{ eval }}$ and use the original model for all other instances. This split model lowers expected risk by ${\Delta }_{{g}_{\text{ eval }}}$ for ${g}_{\text{ eval }}$ and does not alter expected risk for instances not in ${g}_{\text{ eval }}$. As this procedure only optimizes over the training data sub-sampling, it gives a lower bound on the possible performance improvements.
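The dispatch rule used in the proof is simple enough to state in code; the sketch below is our own schematic (names are hypothetical), assuming group membership is available at prediction time:

```python
def make_split_model(base_model, subset_model, g_eval):
    """Claim 1's construction: apply the model retrained on the
    risk-minimizing sub-allocation only to instances from g_eval,
    routing every other group to the original model."""
    def model(x, g):
        return subset_model(x) if g == g_eval else base_model(x)
    return model
```

Because only `g_eval`'s predictions change, the expected risk of groups disjoint from `g_eval` is untouched, which is what makes $\Delta_{g_{\text{eval}}}$ a lower bound on the achievable improvement.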
Claim 1 highlights that identifying data externalities can improve the model for groups on which the model under-performs, without any negative consequence to disjoint evaluation groups. However, data externalities also tell us something more subtle about the compatibility of our model with the underlying structures in our data. In the next section, we investigate possible causes of data externalities, and what they mean for improving model performance.
§ 4 WHEN DO NEGATIVE DATA EXTERNALITIES ARISE?
An intuitive setting where data externalities can arise is when the complexity of model class ${\mathcal{F}}_{\Theta }$ is constrained or mis-specified so that the optimal parameters differ per group, i.e. ${\theta }_{g}^{ * } \neq {\theta }_{{g}^{\prime }}^{ * }$ , where ${\theta }_{g}^{ * } \mathrel{\text{ := }} \arg \mathop{\min }\limits_{{\theta \in \Theta }}{\mathbb{E}}_{\left( {x,y}\right) \sim {\mathcal{D}}_{g}}\left\lbrack {\ell \left( {{f}_{\theta }\left( x\right) ,y}\right) }\right\rbrack$ . Here we detail such a setting, as well as another example in which data externalities arise even when the optimal model is the same for all groups $\left( {{\theta }_{g}^{ * } = {\theta }_{{g}^{\prime }}^{ * }}\right)$ .
Our examples use an illustrative model where we assume there exists a true affine relationship for each group, with shared linear weights but different intercepts (see §A.3 for exact parameters):
$$
y_g = w \cdot x_g + b_A \cdot \mathbb{I}\left[ g = A \right] + b_B \cdot \mathbb{I}\left[ g = B \right] + \epsilon_g, \qquad g \in \{A, B\} = \mathcal{G},
$$
but the model class is the set of affine models shared between groups $\left( {{f}_{\theta }\left( {x,g}\right) = {\theta }_{1}x + {\theta }_{2}}\right)$ . In the first example, we vary the intercept of the true model between the groups, as well as the mean of the feature distribution (fig. 1a, left). The discrepancy between the true model and the model class results in data externalities with magnitude ( $\Delta$ in eq. (4)) increasing monotonically as the number of samples from the other group increases (positive slopes in fig. 1a, right).
Figure 1: Two illustrative examples based on the data-generating distribution described in §4.
In the second example, the same model parameters apply to both groups, but we vary the spread of the feature distribution and the scale of observation noise ${\epsilon }_{g}$ between groups, effectively decreasing the signal-to-noise ratio for group $B$ relative to group $A$ (fig. 1b, left). This results in data externalities evaluated on group $A$ for small to mid-range numbers of samples from group $B$ , which dissipate with larger ${n}_{B}$ (fig. 1b, right). What constitutes "small to mid-range" values of ${n}_{B}$ is relative to ${n}_{A}$ : the magnitude of the negative data externalities decreases with sufficient samples from group $A$ .
In these two examples, negative data externalities arise for different reasons, and the modeling intervention that would best address them depends on the cause of the tension. In the first example, allowing a more complex model class that fits a different intercept term for each group (expanding model capacity in a targeted way) will alleviate data externalities for any $\left\{ {{n}_{A},{n}_{B}}\right\}$ . In the second example, removing negative data externalities by splitting the model by group would also eliminate the positive externalities of adding samples from group B when ${n}_{B}$ is large ( $\geq 10^{5}$ in fig. 1b). A more appropriate strategy for the second example would be to reweight instances according to their source group when computing and optimizing the training loss.
While claim 1 formalizes that data externalities signal a clear opportunity to improve performance, the examples above highlight that the best way to make model improvements will depend on the setting. Data externalities could thus be considered a symptom indicating sub-optimal data-efficiency of a given modeling procedure. Remedying the exposed tensions in an effective manner will require understanding the underlying mechanisms giving rise to the observed data externalities.
§ 5 DATA EXTERNALITIES WITH REAL DATA
Experiments in this section expose negative data externalities with respect to the empirical distributions defined by two different real-world datasets (see §A.1 for more details on datasets):
The Goodreads datasets [22] contain book reviews and ratings of books from different genres. We collect data for two genres - history/biography (history) and fantasy/paranormal (fantasy) - which comprise the two groups $g$ in our setup. As in [19], the binary prediction task is to discern whether a book review corresponds to a 5 star rating (1) or less (0), given the text of the review.
The CivilComments dataset with identity information [5] contains online comments with human-annotated labels corresponding to whether the comment is considered toxic and whether the comment is targeted at a specific group $g$ . We focus on the four largest identity groups present in the dataset: female, male, Christian, and Muslim (groups are determined as binary labels if the annotator average is at least 0.5 for that identity group, similar to toxicity labels).
Beyond evidencing that negative data externalities can manifest in real-data settings, we design experiments to understand when they manifest, in light of results from §4. We examine how data externalities arise under different conditions on total sample size (§5.1) and model capacity (§5.2).
To identify data externalities with existing datasets, we sub-sample the available training data from each source group $g$ uniformly at random with different allocations defined by ${\left\{ {n}_{g}\right\} }_{g \in \mathcal{G}}$ . To estimate eq. (2) for each group, we measure the per-group performance of the resulting models on fixed evaluation sets. We report per-group area under the receiver operating characteristic (AUROC) as a metric that is insensitive to class imbalances (see §A.2). We report variability across multiple random draws of the training set for each allocation ${\left\{ {n}_{g}\right\} }_{g \in \mathcal{G}}$ .
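Schematically, the sub-sampling sweep looks as follows; `train_and_score` is a placeholder for the fixed training procedure plus per-group AUROC evaluation, and the function names and seeding scheme are our own assumptions rather than the paper's exact code.

```python
import random
from statistics import mean

def allocation_sweep(pools, g_eval, g_other, n_eval, grid, train_and_score, trials=5):
    """Fix n_eval samples drawn from the evaluation group's pool; for each
    n_other in the grid, add that many samples from the other source group
    (uniformly at random, re-drawn each trial) and record the score."""
    curve = []
    for n_other in grid:
        scores = []
        for t in range(trials):
            rng = random.Random(1000 * n_other + t)
            subset = (rng.sample(pools[g_eval], n_eval)
                      + rng.sample(pools[g_other], n_other))
            scores.append(train_and_score(subset))
        curve.append((n_other, mean(scores)))
    return curve
```

A curve that decreases in `n_other` for fixed `n_eval` is exactly the negative-slope diagnostic of a data externality reported in the figures.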
§ 5.1 DATA EXTERNALITIES AND DATASET SIZE (BOOK RATING PREDICTION)
We first examine how data externalities manifest across different possible sample sizes and ratios between groups, using the rating prediction task described above. To compare performance across many subsets of training data, we chose a model that is fast to train. Following [7, 19], we use a linear regression model with ${\ell }_{1}$ penalty (lasso), trained on 1000-dimensional tf-idf vector embeddings of the review texts. The ${\ell }_{1}$ penalty strength is chosen via a cross-validation search for each subset.
Figure 2 shows that negative data externalities can arise in real-data settings, evidenced by the decrease in AUROC evaluated on one group as data from another group is added to the training set (left to right on the horizontal axis). As discussed in claim 1, these externalities provide obvious modeling interventions to increase per-group performance. For each panel (evaluation group) in fig. 2, we could subset the full training data to the allocation that maximizes AUROC along the vertical axis, resulting in two models trained with different training subsets, and as a result obtaining greater performance on each group (compared to performance with the full training set, $n = 400{,}000$ ).
Figure 2: Data externalities manifest when there is sufficient training data from the group being evaluated. Each curve fixes the number of training points corresponding to the evaluation group. From left to right, training data is added randomly from the other genre. Solid lines show average per-group performance; shaded regions show 2 standard errors above and below the mean over 10 trials. Negative slopes diagnose data externalities measured with respect to per-group AUROC.
The magnitude of the externality (measured from peak of each curve to rightmost point) is small, as expected from previous work that assumes between-group trends have a negligible effect on per-group performance as a function of training set allocations [7, 19]. Nonetheless, the existence of data externalities suggests that in some contexts, more nuanced scaling laws would be appropriate for describing model performance across data allocations.
The curves in fig. 2 are not all monotonic, suggesting that there is not an "all or nothing" answer to whether merging or splitting training data from multiple groups optimizes model performance: sometimes, the best performance for group A is achieved by adding a moderate amount of training samples from group B. In fact, negative data externalities tend to manifest only once a certain number of points from the group in question are present in the training set (lighter-colored curves).
Drawing on our understanding from §4, we hypothesize that as the total number of training points increases, the training data "saturates" the model class, in the sense that the variance reduction due to additional points from group $\mathrm{B}$ is not worth the bias away from the optimal model parameters with respect to group A's data distribution. The exact saturation point would depend on the distribution of features and labels in each group and the distance between the group-optimal model parameters, the latter depending on the capacity of the model class. To examine this further, our next set of experiments examines how data externalities manifest with models of different capacity.
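This saturation hypothesis can be illustrated with a toy least-squares simulation (not from the paper; all parameters are illustrative): two groups share a linear model class, but group B's optimal parameters sit a fixed distance from group A's, so B's data first reduces variance and then, once A's data saturates the model, mostly adds bias.

```python
# Toy illustration of the saturation hypothesis. Dimensions, noise level, and
# the offset between the two groups' optima are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d = 5
w_a = rng.normal(size=d)   # group A's optimal parameters
w_b = w_a + 0.1            # group B's optimum, offset by 0.1 per coordinate

def sample(w, n, noise=0.3):
    X = rng.normal(size=(n, d))
    return X, X @ w + noise * rng.normal(size=n)

def group_a_mse(n_a, n_b, trials=100):
    """Mean test MSE on group A after pooled least-squares training."""
    errs = []
    for _ in range(trials):
        Xa, ya = sample(w_a, n_a)
        Xb, yb = sample(w_b, n_b)
        X, y = np.vstack([Xa, Xb]), np.concatenate([ya, yb])
        w_hat = np.linalg.lstsq(X, y, rcond=None)[0]
        Xt, yt = sample(w_a, 1000)          # evaluate on group A only
        errs.append(np.mean((Xt @ w_hat - yt) ** 2))
    return float(np.mean(errs))

# With only 10 group-A points, group-B data reduces variance and tends to
# help; with 500 group-A points the model is "saturated", and B's data
# mostly pulls the fit away from group A's optimum.
print(group_a_mse(10, 0), group_a_mse(10, 50))
print(group_a_mse(500, 0), group_a_mse(500, 500))
```

In this sketch the "saturation point" is controlled by the noise level and the distance between the group optima, matching the intuition that it depends on the data distributions and the model class.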
§ 5.2 DATA EXTERNALITIES AND MODEL SIZE (TOXICITY CLASSIFICATION)
We now examine how data externalities can differ for models of different sizes, using the CivilComments with identities dataset described above. We fine-tune pre-trained miniature BERT models of different capacity from [21]. Model architectures are determined by the number of transformer layers $L$ and the hidden embedding size $H$ with a corresponding number of attention heads. For each fine-tuning run we use the Adam optimizer with learning rate 0.0001 and weight decay 0.01, and train for 100 epochs with batch size 64 and 500 gradient steps per epoch (see §A.2).
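A sketch of this setup, assuming PyTorch and Hugging Face `transformers`. To keep the sketch self-contained and offline, the model below is randomly initialized with the same shape as an $L = 2$, $H = 128$ miniature BERT rather than loaded from the pre-trained checkpoints of [21], and only one illustrative gradient step is shown; this is not the authors' code.

```python
# Hedged sketch of the fine-tuning configuration: miniature-BERT shape,
# Adam-style optimizer with lr=1e-4 and weight decay 0.01, batch size 64.
import torch
from torch.optim import AdamW
from transformers import BertConfig, BertForSequenceClassification

config = BertConfig(num_hidden_layers=2, hidden_size=128,
                    num_attention_heads=2, intermediate_size=512)
model = BertForSequenceClassification(config)  # num_labels defaults to 2

optimizer = AdamW(model.parameters(), lr=1e-4, weight_decay=0.01)

# One illustrative gradient step on random tokens; the full schedule is
# 100 epochs of 500 steps each with batch size 64.
input_ids = torch.randint(0, config.vocab_size, (64, 16))
labels = torch.randint(0, 2, (64,))
loss = model(input_ids=input_ids, labels=labels).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
```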
The leftmost points in each top panel of fig. 3 start with the maximum number of training points from the given group and increase the number of training points by adding data at random from the rest of the training set. The different hues in fig. 3 rank the models in terms of number of overall parameters (see table 1 in [21]). Negative slopes in the top row of fig. 3 (and corresponding negative values in the bottom row of fig. 3) evidence that data externalities arise in this data context and prediction task.
Figure 3: Increasing model capacity has a complicated effect on magnitude of data externalities. Performance and data externalities are measured across groups in the CivilComments dataset [5] for different miniature BERT models [21] with size determined by $L,H$ . Top: group AUROCs for different training configurations and model architectures; shaded areas denote 2 standard errors above and below the mean across 5 trials. Bottom: the magnitude of data externalities from the top row.
While increasing the width or depth of the miniature BERT models (generally moving left to right on the bottom panels) can decrease the exhibited data externalities on per-group AUROC, increasing model complexity does not necessarily mitigate negative data externalities on group performance, and in some cases can exacerbate them. Adding layers to the model can increase the magnitude of the data externalities evidenced (first three bars of groups female and Muslim in fig. 3, bottom), even though models with more layers tend to have higher overall performance. While this phenomenon tends to be starker for models of increasing depth, we caution against interpreting the relative merits of adding model capacity via depth or width without further analysis.

Taken together, the results in §5.1 and §5.2 evidence that data externalities manifest in different real-world data contexts and shed light on when and why they might manifest. Results in fig. 2 suggest that data externalities arise for one group primarily when there is enough representation in the training data from that group to "saturate" the model. Results in fig. 3 highlight that this point of saturation may depend on the complexity of the model parameterization among other factors. Future work could leverage these findings to build a stronger understanding of how model capacity might be tailored to jointly increase performance while promoting data efficiency under different conditions.
§ 6 DISCUSSION AND OPEN QUESTIONS
We have shown that data externalities, a phenomenon in which adding more data from some input sources reduces performance on key evaluation groups, can occur in many machine learning settings. While this specific type of "data inefficiency" indicates room for model improvement, eq. (4) is likely to be a coarse lower bound for the possible improvements that could be made. Furthermore, the simple model modification described in the proof of claim 1 is only computationally reasonable when the number of evaluation groups is relatively small. Characterizing when and how data externalities can be (i) reliably identified for unknown evaluation groups or large number of groups and (ii) effectively mitigated within reasonable computational limits will be important future work.
We have focused on understanding how and when data externalities manifest across learning settings and training procedures. It would be interesting to study data externalities and data efficiency more generally as a principle by which to design algorithms from the outset. For example, a learning procedure guaranteeing no data externalities could enhance transparency regarding how input data affects model outputs, toward aligning the goals of data minimization, participatory approaches, and fairness with traditional performance optimization in machine learning.

NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/uBlxaWPm8l/Initial_manuscript_md/Initial_manuscript.md
# Addressing Bias in Face Detectors using Decentralised Data Collection with Incentives

Anonymous Author(s)
Affiliation
Address
email
## Abstract
Recent developments in machine learning have shown that successful models rely not only on huge amounts of data but on the right kind of data. In this paper, we show how this data-centric approach can be facilitated in a decentralised manner to enable efficient data collection for algorithms. Face detectors are a class of models that suffer heavily from bias issues, as they have to work on a large variety of different data.

We also propose a face detection and anonymization approach using a hybrid Multi-Task Cascaded CNN with FaceNet embeddings to benchmark multiple datasets, describing and evaluating the bias of the models towards different ethnicity, gender, and age groups, along with ways to enrich fairness in a decentralized system of data labelling, correction, and verification by users, creating a robust pipeline for model retraining.

## 1 Introduction

The amount of data available and used for training in public datasets is vast, yet these datasets carry an inherent bias towards certain ethnic groups, such as Caucasian faces, compared to other ethnicities such as Asian, African, and Indian. There is a clear need to mitigate this bias and emphasize improving fairness in face detection algorithms. This will improve the efficiency and accuracy of Face Verification (FV), recognition, anonymization, and other use-cases of face detection.

With the advent of publicly available images on social media and the internet, there is a need to protect personal privacy by performing face anonymization on these images. In this work, we propose an ML pipeline to detect faces using a robust multi-task cascaded CNN architecture along with other pre-trained models such as VGGFace2 [3] and FaceNet [15] to anonymize the detected faces and blur them using a Gaussian function. We also benchmark the performance of certain custom and pre-trained models on various open-source datasets such as MIAP [16], FairFace [9], and RFW [21] (Racial Faces in the Wild) to understand the bias of models trained on these datasets. Along with face anonymization, we also determine the age and gender demographics of the detected faces to find any bias present in open-source models. We also evaluate the performance of these open-source models before and after training on a diverse, fairness-induced dataset by proposing a decentralized system of data evaluation and verification by users of the generated model output (faces detected in the input); see section 3.3.

Lastly, we discuss ways to de-bias the data during pre-processing and post-processing, and how to reduce false positives using clustering and statistical analysis of the generated output. We propose a decentralized platform for data collection and annotation, with user incentives for detecting any machine-undetected faces in images, as part of an initiative to increase model fairness and reduce ethnicity, age, and gender bias.

## 2 Related Work
Current computer vision systems achieve astonishing results in several areas, but there are several societal issues related to demographics, ethnicity, gender, age, etc. that have been discussed more recently due to their usage in face recognition, object detection, and other applications [18] [19] [8]. Most image recognition algorithms show high disparity in performance on images across the world, as discussed in [5] [17] [22], due to bias in the datasets used for training and to differences in the pipelines used. This bias generally stems from dataset disparity, since most of the open-source datasets created and benchmarked are localized to only a few locations, restricting the diversity in data quality. A second set of related papers discusses harmful and mislabelled data associations, which can often lead to many wrongful associations across gender and ethnicity groups in general, as discussed by Crawford et al. [4]. Other indicators that cause disparity in the performance of a face detection algorithm towards certain groups of people include bias in the learned representations or embeddings of users from underrepresented groups and other demographic traits. Raji et al. [14] discuss reducing errors in evaluating commercial face detectors by changing the evaluation metrics used. Ensuring privacy as part of a face recognition campaign is an equally important problem, and limited research has been done on extracting and removing private and sensitive information from public datasets and image databases. A few previous works in the literature [2] [12] [7] blur the background or use Gaussian/pixelation functions to blur faces in an image.

To improve robustness and add fairness to the datasets and models used in the above approach, we propose a decentralized tool for collecting, annotating, and verifying the face detections made by face recognition algorithms across different parts of the world. This ensures the data samples collected are rich in diversity, helps identify the bias in current commercial and open-source models, and generates edge-cases and training samples that can be used to retrain these detectors to improve the coverage of the data distribution learnt by our models.

## 3 Methodology
We aim to build a robust face anonymization pipeline, along with functionalities to determine the characteristics of the detected faces, as shown in Fig 1, on a decentralized platform for verification and annotation. We also estimate the bias towards certain ethnicities and characteristic features in some popular pre-trained model architectures, such as MTCNN (Multi-task Cascaded CNN) [23], FaceNet [15], and RetinaNet [10], against the open-source datasets used for understanding and evaluating bias in face detectors.

### 3.1 Datasets
In order to understand ethnicity, age, and gender bias, it is important to evaluate detection performance across ethnicities as a binary task (face detected vs. undetected), to understand whether there is a bias towards some ethnicity classes having stronger attribute indicators than the rest. The following datasets are a good benchmark for determining bias, since each has been labelled and open-sourced with the diversity and inclusion of most ethnicities in mind.

MIAP Dataset: The MIAP (More Inclusive Annotations for People) Dataset [16] is a subset of the Open Images Dataset with a new set of annotations for all people found in these images, enabling fairness in face detection algorithms. The dataset contains new annotations for 100,000 images (a training set of 70k and a validation/test set of 30k images). Annotations include 454k bounding boxes along with age and gender group representations.

FairFace Dataset: FairFace [9], a facial image database, contains nearly 100k images and was created to reduce bias during training by having equal representation of classes from the YFCC-100M Flickr dataset [20]. The dataset consists of 7 classes, namely White, Latino, Indian, East Asian, Southeast Asian, Black, and Middle Eastern. Models trained on FairFace have reported higher performance metrics [9] than those trained on other general datasets, and hence we include this dataset in our study.

Racial Faces in the Wild (RFW): The RFW [21] database primarily consists of four test subsets in terms of ethnic background, namely Indian, Asian, African, and Caucasian. Each subset consists of images for face verification, around 10k images of 3k individuals.


Figure 1: End to End Architecture of Face Anonymization and attribute extraction
### 3.2 Architecture
The end-to-end pipeline uses multiple models to detect faces in the input images. MTCNN [23] and VGGFace [3] are used for generating bounding boxes of the detected faces, after which we refine the output bounding boxes and extract the face image to generate a Gaussian-blurred image as part of our goal to anonymize the faces. These architectures are chosen as standard models for face attribute extraction algorithms. The non-anonymized copies of the detected faces are used as input to the FaceNet [15] model for generating the face embedding vectors.
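A minimal sketch of the blur step (not the paper's implementation): given face bounding boxes in the format an MTCNN-style detector returns, each region is blurred with a Gaussian function. The boxes and image below are synthetic, and `scipy.ndimage.gaussian_filter` stands in for whatever blur routine the pipeline uses.

```python
# Hypothetical anonymization sketch: Gaussian-blur each detected face region.
import numpy as np
from scipy.ndimage import gaussian_filter

def anonymize(image, boxes, sigma=8.0):
    """Blur each (x1, y1, x2, y2) box in an H x W x 3 uint8 image."""
    out = image.copy()
    for x1, y1, x2, y2 in boxes:
        roi = out[y1:y2, x1:x2].astype(float)
        # Blur the two spatial axes only, leaving the channel axis untouched.
        out[y1:y2, x1:x2] = gaussian_filter(roi, sigma=(sigma, sigma, 0)).astype(np.uint8)
    return out

# Synthetic image and a single hypothetical face box.
img = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)
anon = anonymize(img, [(10, 10, 60, 60)])
# Inside the box the pixel variance drops sharply; outside it is unchanged.
```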
The MTCNN architecture proposed by Zhang et al. [23] consists of three stages, each a neural network: the Proposal Network, the Refine Network, and the Output Network. The first stage uses a shallow CNN to generate candidate proposal windows, which the Refine Network enhances with a deeper CNN. The Output Network refines the results of the previous layers and generates the face landmark positions. Since the architecture uses different face landmark locations to estimate a face, we use it in our experiments to evaluate face recognition datasets for inherent bias.

FaceNet, proposed by Schroff et al. [15], outputs a 128-dimensional vector, also known as a face embedding, which is optimized to differentiate between similar and dissimilar faces using Euclidean metrics. The architecture uses a triplet loss function, which uses positive and negative samples to estimate the distances between them. For each face detected in the inferred image, an embedding is calculated. We use FaceNet embeddings to cluster similar faces using DBSCAN [6] on the faces extracted by the MTCNN model. DBSCAN uses two parameters: the maximum distance between two instances for them to be grouped together, and the number of points required to form a cluster. If the distance between two faces is high, they tend to fall into different clusters. PCA [11], a popular dimensionality reduction technique, is used to reduce the 128-dimensional vectors to 2 dimensions to visualize the face clusters as part of estimating bias in the algorithms.
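This clustering step can be sketched as follows, with synthetic 128-dimensional vectors standing in for real FaceNet embeddings; the `eps` and `min_samples` values are illustrative assumptions, not the paper's settings.

```python
# Sketch of the embedding-clustering step on synthetic "embeddings".
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# Two well-separated identity clusters standing in for FaceNet embeddings.
emb = np.vstack([
    rng.normal(0.0, 0.05, size=(40, 128)),
    rng.normal(1.0, 0.05, size=(40, 128)),
])

# DBSCAN groups embeddings within `eps` of a dense region; distant points
# (e.g. misdetections) would be labelled -1 as outliers.
labels = DBSCAN(eps=3.0, min_samples=5).fit_predict(emb)

# PCA to 2-D for visual inspection of the clusters.
coords = PCA(n_components=2).fit_transform(emb)
```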
Generally, the undetected and misclassified faces in the dataset form outliers or fall into wrong clusters for the different popular pre-trained model architectures, which makes them easy to identify. We then employ clustering metrics to evaluate the embedding-based partitioning: the Mean Silhouette Coefficient, which measures similarity and dissimilarity between elements of a cluster, and the Davies-Bouldin index [13].
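Both metrics are available in scikit-learn; a toy sketch (the data below is synthetic 2-D points, not face embeddings):

```python
# Sketch of the two cluster-quality metrics on toy, well-separated data.
import numpy as np
from sklearn.metrics import silhouette_score, davies_bouldin_score

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(5, 0.1, (50, 2))])
labels = np.array([0] * 50 + [1] * 50)

# Mean silhouette lies in [-1, 1]: higher means points sit closer to their
# own cluster than to the next one. Davies-Bouldin is >= 0: lower means
# better-separated clusters (note the opposite orientation of the metrics).
msc = silhouette_score(X, labels)
dbi = davies_bouldin_score(X, labels)
```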
### 3.3 Decentralized Data Collection Platform
We propose a decentralized data platform for crowdsourcing images and image datasets. The purpose of the tool is to run inference on images that users upload and perform face anonymization using our algorithm. This creates two opportunities for incentivizing users to use our tool:
(1) Users can annotate images directly on the interface in case of any undetected faces (false negatives) or wrong detections (false positives) and, after successful verification by verifiers, are incentivized in the form of bounties and revenue shares. (2) Users can upload, annotate, and verify annotations of images while keeping ownership of their data: they grant the platform a license to use it and in return get a share of the revenue their contributions create, i.e., any revenue generated by models built using the dataset ensures that users receive a royalty for contributing to it. In addition, the missed edge-cases (face detections and false positives) across various images are collected in the system and used to retrain the face detectors periodically, improving model performance, enriching fairness, and reducing inherent bias in the data and trained model.

Distributing ownership of the dataset across creators, annotators, and verifiers will democratize ownership, so that no single central party controls it, and the revenue and value generated by the resulting algorithms and datasets can flow back to the community directly. The trained model and inference algorithm will be published on decentralised algorithm marketplaces, making it possible to run inference in decentralised compute environments so that downloading and copying the model is impossible. For the end-to-end workflow, refer to Figure 3 in the appendix.

## 4 Results
As seen in Table 1, we present a few statistical metrics to determine fairness for different ethnicities in the RFW dataset using the MTCNN [23] model and FaceNet embeddings. It is not immediately evident from the results that one group consistently performs better, but a clear pattern of bias towards certain ethnicities emerged on deeper study. The prediction accuracy for the Asian (A) and Black (B) groups was lower than for the Indian (I) and White (W) groups, but this alone is not enough to indicate bias, as the differences between groups are not significant. However, the Positive Predictive Value (PPV) and False Positive Rate (FPR) indicate higher confidence on White faces than on other groups, with a significantly lower False Positive Rate; this pattern is also seen in PPV, which is as high as 0.98 for White faces compared to only around 0.78 for the Asian group, indicating higher precision in detecting White faces than those of other groups.
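The per-group statistics in Table 1 follow standard confusion-matrix definitions. A minimal sketch with toy detection labels (not the RFW data):

```python
# Per-group fairness metrics from binary detection outcomes (toy values).
import numpy as np

def group_metrics(y_true, y_pred):
    """Accuracy, FPR, FNR, and PPV for one demographic group."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    return {
        "accuracy": (tp + tn) / len(y_true),
        "fpr": fp / (fp + tn),   # false alarms among true negatives
        "fnr": fn / (fn + tp),   # missed faces among true positives
        "ppv": tp / (tp + fp),   # precision of the detections
    }

m = group_metrics([1, 1, 1, 0, 0, 0, 1, 0], [1, 1, 0, 0, 1, 0, 1, 0])
```

Computing these per demographic group and comparing FPR and PPV across groups is exactly the comparison reported in Tables 1 and 3.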
Table 1: Statistical metrics for RFW Dataset using pre-trained MTCNN + FaceNet embeddings
<table><tr><td>Metrics (M)</td><td>Asian(A)</td><td>Indian(I)</td><td>Black(B)</td><td>White(W)</td></tr><tr><td>Prediction Accuracy</td><td>0.91</td><td>0.95</td><td>0.92</td><td>0.97</td></tr><tr><td>False Positive Rate</td><td>0.07</td><td>0.04</td><td>0.08</td><td>0.005</td></tr><tr><td>False Negative rate</td><td>0.05</td><td>0.08</td><td>0.04</td><td>0.14</td></tr><tr><td>Positive Predictive Value</td><td>0.78</td><td>0.93</td><td>0.82</td><td>0.98</td></tr></table>
We also quantified the similarity between users in a given cluster, extracted and processed with FaceNet embeddings followed by dimensionality reduction, for both the MIAP and RFW datasets (Table 2). The trend in the mean silhouette score (MSC) is that the attribute with the higher number of distinct clusters has a higher score, indicating higher similarity of elements to their own cluster. For MIAP [16], when we compute the metrics on combined clusters of race and gender, the MSC is higher than when computed individually, indicating that the gender clusters within any race are closer and more correlated than those between two ethnicities (racial groups). The Davies-Bouldin index shows a very similar pattern for the MIAP dataset, indicating that clusters are best separated when combined rather than when clustered individually, in the order: both clustered together, racial groups clustered together, and finally clustering based on gender.

These results clearly state the need for a trained model that is unbiased towards all ethnicity and gender groups. To enrich fairness in training, the MTCNN + FaceNet model was retrained on FairFace [9], a balanced dataset with an equal distribution of all ethnicity and gender groups with

Table 2: Clustering metrics for RFW and MIAP Dataset using MTCNN + FaceNet embeddings
<table><tr><td>Metrics</td><td>RFW-Race</td><td>MIAP-Race</td><td>MIAP-Gender</td><td>MIAP-Both</td></tr><tr><td>MSC</td><td>0.12</td><td>0.16</td><td>0.09</td><td>0.19</td></tr><tr><td>DBI</td><td>4.21</td><td>3.89</td><td>6.47</td><td>3.64</td></tr></table>
adjusted labels of race similar to those of the RFW and MIAP datasets. The increase in prediction accuracy across classes ranged between 1% and 5.5%, and PPV increased by up to 19% after retraining. This shows a clear improvement in model performance, as seen in Table 3, indicating that an unbiased training dataset, together with a few data augmentation techniques, can improve the model such that the results are not biased towards any single gender or racial group.
Table 3: Statistical metrics for RFW Dataset using FairFace trained MTCNN + FaceNet embeddings
<table><tr><td>Metrics (M')</td><td>Asian(A)</td><td>Indian(I)</td><td>Black(B)</td><td>White(W)</td></tr><tr><td>Prediction Accuracy</td><td>0.96</td><td>0.95</td><td>0.97</td><td>0.98</td></tr><tr><td>False Positive Rate</td><td>0.01</td><td>0.01</td><td>0.008</td><td>0.005</td></tr><tr><td>False Negative rate</td><td>0.05</td><td>0.03</td><td>0.04</td><td>0.04</td></tr><tr><td>Positive Predictive Value</td><td>0.93</td><td>0.92</td><td>0.94</td><td>0.98</td></tr></table>
Hence, as proposed in Section 3.3, a Data Portal will be used for curating and publishing various datasets with the support of annotators and verifiers. The incentive of ownership in dataset usage, and of labelling incorrectly detected faces and missed face detections on the tool, also increases user engagement on the portal: users challenge the model and receive bounties in return. This will let us periodically retrain the face anonymization models on various edge-cases and improve fairness in these models in a decentralized manner.

## 5 Conclusion
In conclusion, we believe that measuring fairness in face anonymization algorithms is necessary to deploy technology that is unbiased and more inclusive of all ethnicity, gender, and age groups. We proposed a decentralized tool to improve the quality of training datasets used in modelling face recognition algorithms by shifting focus onto identifying and quantifying "bias" in the core algorithm towards different groups and de-biasing it. The de-biasing steps included both creating a diverse dataset with better representation of most demographics and retraining all layers of the core algorithm, allowing the same model to be fine-tuned (dense layers only) periodically based on the missed detections identified by annotators and verifiers in the tool. The bias measurement framework was outlined in this paper.

In our analysis, we found that most face detection algorithms are predominantly biased towards White faces across both the MIAP and RFW datasets, irrespective of gender: the clustered FaceNet embeddings showed that clustering metrics were much higher when male and female faces were clustered together across all ethnicities than when clustered separately. This indicates a need for diversity in the dataset across all ethnicities; a dataset is more likely to be fair when its creation happens in a more decentralised manner and users across the world contribute by adding images, identifying missed detections of a certain demographic group, or validating the corrected output of a fellow user.

In future work, we will focus on answering the questions raised in the above discussion, breaking down the clusters in more detail to help interpret the correlations between data points that led the model to cluster certain points closer to each other. We also plan to make the Data Portal public, with access for all users, ensuring that users can upload their own data into the pipeline and be incentivized based on the usage of their data by any algorithm built on top of it. We also plan to improve the anonymization algorithm using a GAN-based approach to ensure the data distribution of the anonymized face does not change completely. In addition, we plan to integrate Spotify's Annoy [1] for indexing similar faces across the Data Portal, finding similar images among user uploads to de-duplicate the data.

## 6 Appendix
### 6.1 Clustering similar faces using FaceNet Embeddings
The visual representation of the RFW (Racial Faces in the Wild) dataset faces clustered using the dimensionality reduction technique t-SNE in 2-D space followed by the DBSCAN algorithm (converted from the 128-dimensional vectors generated by FaceNet representations, or face embeddings). As seen visually, similar


Figure 2: Face Embeddings visualized using t-SNE and DBSCAN
demographic groups are clustered closer to each other in the 2-D space. Using different cluster sizes changed the density of the clusters accordingly. The number of clusters that gave the optimal clustering metrics was chosen for benchmarking the dataset's clustering metrics.

### 6.2 Data Portal pipeline

Figure 3: Proposed end-to-end working of the decentralized data portal
## References

[1] A. Andoni, P. Indyk, and I. Razenshteyn. Approximate nearest neighbor search in high dimensions, 2018.

[2] M. Boyle, C. Edwards, and S. Greenberg. The effects of filtered video on awareness and privacy. In Proceedings of the 2000 ACM Conference on Computer Supported Cooperative Work, CSCW '00, pages 1-10, New York, NY, USA, 2000. Association for Computing Machinery.

[3] Q. Cao, L. Shen, W. Xie, O. M. Parkhi, and A. Zisserman. VGGFace2: A dataset for recognising faces across pose and age, 2017.

[4] K. Crawford and T. Paglen. Excavating AI: The politics of training sets for machine learning. Project website, 2019.

[5] T. DeVries, I. Misra, C. Wang, and L. van der Maaten. Does object recognition work for everyone?, 2019.

[6] M. Ester, H.-P. Kriegel, J. Sander, and X. Xu. A density-based algorithm for discovering clusters in large spatial databases with noise. In Proceedings of the Second International Conference on Knowledge Discovery and Data Mining, KDD'96, pages 226-231. AAAI Press, 1996.

[7] R. Gross, L. Sweeney, F. de la Torre, and S. Baker. Model-based face de-identification. In 2006 Conference on Computer Vision and Pattern Recognition Workshop (CVPRW'06), pages 161-161, 2006.

[8] B. F. Klare, M. J. Burge, J. C. Klontz, R. W. V. Bruegge, and A. K. Jain. Face recognition performance: Role of demographic information. IEEE Transactions on Information Forensics and Security, 7(6):1789-1801, 2012.

[9] K. Kärkkäinen and J. Joo. FairFace: Face attribute dataset for balanced race, gender, and age, 2019.

[10] T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár. Focal loss for dense object detection, 2017.

[11] A. Mačkiewicz and W. Ratajczak. Principal components analysis (PCA). Computers & Geosciences, 19(3):303-342, 1993.

[12] C. Neustaedter, S. Greenberg, and M. Boyle. Blur filtration fails to preserve privacy for home-based video conferencing. ACM Transactions on Computer-Human Interaction, 13(1):1-36, March 2006.

[13] S. Petrovic. A comparison between the silhouette index and the Davies-Bouldin index in labelling IDS clusters. In Proceedings of the 11th Nordic Workshop of Secure IT Systems, volume 2006, pages 53-64. Citeseer, 2006.

[14] I. D. Raji and J. Buolamwini. Actionable auditing: Investigating the impact of publicly naming biased performance results of commercial AI products. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, AIES '19, pages 429-435, New York, NY, USA, 2019. Association for Computing Machinery.

[15] F. Schroff, D. Kalenichenko, and J. Philbin. FaceNet: A unified embedding for face recognition and clustering. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, June 2015.

[16] C. Schumann, S. Ricco, U. Prabhu, V. Ferrari, and C. Pantofaru. A step toward more inclusive people annotations for fairness. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society. ACM, July 2021.

[17] S. Shankar, Y. Halpern, E. Breck, J. Atwood, J. Wilson, and D. Sculley. No classification without representation: Assessing geodiversity issues in open data sets for the developing world, 2017.

[18] P. Terhörst, J. N. Kolf, N. Damer, F. Kirchbuchner, and A. Kuijper. Face quality estimation and its correlation to demographic and non-demographic bias in face recognition. In 2020 IEEE International Joint Conference on Biometrics (IJCB), pages 1-11. IEEE, 2020.

[19] P. Terhörst, J. N. Kolf, M. Huber, F. Kirchbuchner, N. Damer, A. M. Moreno, J. Fierrez, and A. Kuijper. A comprehensive study on face recognition biases beyond demographics. IEEE Transactions on Technology and Society, 3(1):16-30, 2021.

[20] B. Thomee, D. A. Shamma, G. Friedland, B. Elizalde, K. Ni, D. Poland, D. Borth, and L.-J. Li. YFCC100M: The new data in multimedia research. Communications of the ACM, 59(2):64-73, January 2016.

[21] M. Wang, W. Deng, J. Hu, X. Tao, and Y. Huang. Racial faces in-the-wild: Reducing racial bias by information maximization adaptation network, 2018.

[22] K. Yang, K. Qinami, L. Fei-Fei, J. Deng, and O. Russakovsky. Towards fairer datasets: Filtering and balancing the distribution of the people subtree in the ImageNet hierarchy. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pages 547-558, 2020.

[23] K. Zhang, Z. Zhang, Z. Li, and Y. Qiao. Joint face detection and alignment using multitask cascaded convolutional networks. IEEE Signal Processing Letters, 23(10):1499-1503, October 2016.
|
NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/uBlxaWPm8l/Initial_manuscript_tex/Initial_manuscript.tex
§ ADDRESSING BIAS IN FACE DETECTORS USING DECENTRALISED DATA COLLECTION WITH INCENTIVES

Anonymous Author(s)

Affiliation

Address

email

§ ABSTRACT

Recent developments in machine learning have shown that successful models rely not only on huge amounts of data but on the right kind of data. In this paper we show how this data-centric approach can be facilitated in a decentralised manner to enable efficient data collection for algorithms. Face detectors are a class of models that suffer heavily from bias issues, as they have to work on a wide variety of data.

We also propose a face detection and anonymization approach using a hybrid Multi-Task Cascaded CNN with FaceNet embeddings. We benchmark multiple datasets to describe and evaluate the bias in the models towards different ethnicities, genders and age groups, and propose ways to enrich fairness through a decentralized system of data labelling, correction and verification by users, creating a robust pipeline for model retraining.

§ 1 INTRODUCTION

The amount of data available in public training datasets is vast, yet these datasets carry an inherent bias towards certain ethnicity groups, such as Caucasian faces, compared to other ethnicities such as Asian, African and Indian. There is a clear need to mitigate this bias and improve fairness in face detection algorithms, which would in turn improve the efficiency and accuracy of Face Verification (FV), recognition, anonymization and other use cases of face detection.

With the abundance of publicly available images on social media and the internet, there is a need to enforce personal privacy by anonymizing faces in these images. In this work, we propose an ML pipeline that detects faces using a robust multi-task cascaded CNN architecture along with other pre-trained models such as VGGFace2 [3] and FaceNet [15], and anonymizes the detected faces by blurring them with a Gaussian function. We benchmark the performance of custom and pre-trained models on open-source datasets such as MIAP [16], FairFace [9] and RFW (Racial Faces in the Wild) [21] to understand the bias of models trained on these datasets. Along with face anonymization, we also determine the age and gender demographics of the detected faces to surface any bias present in open-source models. We further evaluate the performance of these open-source models before and after training them on a diverse, fairness-oriented dataset by proposing a decentralized system of data evaluation and verification by users of the generated model output (faces detected in the input); see Section 3.3.

Lastly, we discuss ways to de-bias the data during pre-processing and post-processing, and how to reduce false positives using clustering and statistical analysis of the generated output. We propose a decentralized platform for data collection and annotation, with user incentives for detecting machine-undetected faces in images, as part of an initiative to increase model fairness and reduce ethnicity, age and gender bias.

§ 2 RELATED WORK

Current computer vision systems achieve astonishing results in several areas, but several societal issues related to demographics, ethnicity, gender, age, etc. have been discussed more recently due to their use in face recognition, object detection and other applications [18, 19, 8]. Most image recognition algorithms show a high disparity in performance on images from different parts of the world, as discussed in [5, 17, 22], due to bias in the training data and differences in the pipelines used. This bias generally stems from dataset disparity, since most open-source datasets are created and benchmarked in only a few locations, restricting the diversity in data quality. A second body of related work discusses harmful and mislabelled data associations, which can lead to many wrongful associations across gender and ethnicity groups, as discussed by Crawford et al. [4]. Other causes of disparate face detection performance towards certain groups include bias in the learned representations, or embeddings, of users from underrepresented groups and other demographic traits. Raji et al. [14] discuss reducing errors in evaluating commercial face detectors by changing the evaluation metrics used. Ensuring privacy as part of a face recognition campaign is an equally important problem, and limited research has been done on extracting and removing private and sensitive information from public datasets and image databases. Some previous work [2, 12, 7] blurs the background or uses Gaussian/pixelation functions to blur faces in an image.

To improve robustness and add fairness to the datasets and models used in the above problem setting, we propose a decentralized tool for collecting, annotating and verifying the detections made by face recognition algorithms across different parts of the world. This ensures that the collected data samples are rich in diversity, helps identify the bias in current commercial and open-source models, and generates edge cases and training samples that can be used to retrain these detectors to improve the coverage of the data distribution learnt by our models.

§ 3 METHODOLOGY

We aim to build a robust face anonymization pipeline, along with functionality to determine the characteristics of the detected faces, as shown in Figure 1, on a decentralized platform for verification and annotation. We also estimate the bias towards certain ethnicities and characteristic features in popular pre-trained architectures such as MTCNN (Multi-Task Cascaded CNN) [23], FaceNet [15] and RetinaNet [10], against the open-source datasets used for understanding and evaluating bias in face detectors.

§ 3.1 DATASETS

To understand bias across ethnicity, age and gender, it is important to evaluate the classification of different ethnicities as a binary task of faces detected versus undetected, and thus determine whether some ethnicity classes have stronger attribute indicators than the rest. The following datasets are a good benchmark for this, since each has been labelled and open-sourced with the diversity and inclusion of most ethnicities in mind.

MIAP Dataset: The MIAP (More Inclusive Annotations for People) Dataset [16] is a subset of the Open Images Dataset with a new set of annotations for all people found in these images, enabling fairness in face detection algorithms. The dataset contains new annotations for 100,000 images (a training set of 70k and a validation/test set of 30k images), including 454k bounding boxes along with age and gender group representations.

FairFace Dataset: FairFace [9] is a facial image database of nearly 100k images, created to reduce bias during training by having equal representation of classes, drawn from the YFCC-100M Flickr dataset [20]. The dataset consists of 7 classes: White, Latino, Indian, East Asian, Southeast Asian, Black and Middle Eastern. Models trained on FairFace have reported higher performance metrics [9] compared to other general datasets, so we include this dataset in our study.

Racial Faces in the Wild (RFW): The RFW database [21] primarily consists of four test subsets by ethnic background: Indian, Asian, African and Caucasian. Each subset consists of images for face verification, around 10k images of 3k individuals.

[graphics]

Figure 1: End-to-end architecture of face anonymization and attribute extraction

§ 3.2 ARCHITECTURE

The end-to-end pipeline uses multiple models to detect faces in the input images. MTCNN [23] and VGGFace [3] generate bounding boxes for the detected faces, after which we refine the boxes and extract the face image to produce a Gaussian-blurred image as part of our goal to anonymize the faces. These architectures are chosen as standard models for face attribute extraction. The non-anonymized copies of the detected faces are fed to the FaceNet [15] model to generate the face embedding vectors.
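
The Gaussian blurring step can be sketched in plain NumPy. This is a minimal illustration of the idea, not the authors' implementation; the function names, kernel size and sigma below are our own choices.

```python
import numpy as np

def gaussian_kernel(size: int = 15, sigma: float = 8.0) -> np.ndarray:
    """1-D Gaussian kernel, normalised to sum to 1."""
    x = np.arange(size) - size // 2
    k = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def blur_region(image: np.ndarray, box: tuple) -> np.ndarray:
    """Gaussian-blur the (x, y, w, h) region of an H x W x C image,
    leaving the rest of the image untouched."""
    x, y, w, h = box
    out = image.astype(float).copy()
    face = out[y:y + h, x:x + w]          # view into `out`
    k = gaussian_kernel()
    for c in range(face.shape[2]):
        # Separable filter: convolve every row, then every column.
        face[:, :, c] = np.apply_along_axis(
            lambda v: np.convolve(v, k, mode="same"), 1, face[:, :, c])
        face[:, :, c] = np.apply_along_axis(
            lambda v: np.convolve(v, k, mode="same"), 0, face[:, :, c])
    return out.astype(image.dtype)

# Usage: anonymize one detected face box in a random stand-in image.
img = (np.random.default_rng(0).random((40, 40, 3)) * 255).astype(np.uint8)
anon = blur_region(img, box=(5, 5, 20, 20))
```

In practice the box would come from the MTCNN detector rather than being hard-coded, and a production pipeline would typically use an optimized blur (e.g. from an image library) instead of explicit convolutions.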

The MTCNN architecture proposed by Zhang et al. [23] consists of three stages, each a neural network: the Proposal Network, the Refine Network and the Output Network. The first stage uses a shallow CNN to generate candidate windows, which the Refine Network filters with a deeper CNN. The Output Network refines the results of the previous stages and produces the face landmark positions. Since the architecture uses face landmark locations to estimate a face, we use it in our experiments to evaluate face recognition datasets for inherent bias.

FaceNet, proposed by Schroff et al. [15], outputs a 128-dimensional vector, known as a face embedding, optimized to differentiate between similar and dissimilar faces under Euclidean metrics. The architecture uses a triplet loss, which draws positive samples towards the anchor and pushes negative samples away. An embedding is calculated for each face detected in the inferred image. We use FaceNet embeddings to cluster similar faces with DBSCAN [6] on the faces extracted by the MTCNN model. DBSCAN takes two parameters: the maximum distance between two instances for them to be grouped together, and the minimum number of points needed to form a cluster; if the distance between two faces is high, they tend to fall into different clusters. PCA [11], a popular dimensionality reduction technique, is used to reduce the 128-dimensional vectors to 2 dimensions to visualize the face clusters as part of estimating bias in the algorithms.

Generally, the faces that popular pre-trained architectures fail to detect or misclassify form outliers or fall into the wrong clusters, which are easy to identify. We then employ clustering metrics to assess the embedding-based partitioning: the Mean Silhouette Coefficient, which measures the similarity of elements within a cluster against their dissimilarity to other clusters, and the Davies-Bouldin index [13].
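
Both metrics can be computed directly from their textbook definitions. The sketch below is a self-contained illustration (not the authors' code), assuming `X` holds embedding vectors and `labels` the cluster assignments:

```python
import numpy as np

def mean_silhouette(X: np.ndarray, labels: np.ndarray) -> float:
    """Mean silhouette: per point, a = mean distance to own cluster,
    b = lowest mean distance to another cluster, s = (b - a) / max(a, b)."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)  # pairwise distances
    idx = np.arange(len(X))
    scores = []
    for i, li in enumerate(labels):
        a = D[i, (labels == li) & (idx != i)].mean()
        b = min(D[i, labels == lj].mean() for lj in set(labels) if lj != li)
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

def davies_bouldin(X: np.ndarray, labels: np.ndarray) -> float:
    """Davies-Bouldin index: mean over clusters of the worst-case
    (s_i + s_j) / d(c_i, c_j) ratio; lower means better separation."""
    ids = sorted(set(labels))
    cents = np.array([X[labels == k].mean(axis=0) for k in ids])
    scatter = np.array([np.linalg.norm(X[labels == k] - cents[n], axis=1).mean()
                        for n, k in enumerate(ids)])
    ratios = [max((scatter[i] + scatter[j]) / np.linalg.norm(cents[i] - cents[j])
                  for j in range(len(ids)) if j != i)
              for i in range(len(ids))]
    return float(np.mean(ratios))

# Two toy "embedding" clusters; well separated => MSC near 1, DBI small.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(5.0, 0.1, (20, 2))])
labels = np.array([0] * 20 + [1] * 20)
msc, dbi = mean_silhouette(X, labels), davies_bouldin(X, labels)
```

The small, near-zero MSC values reported in Table 2 (as opposed to the near-1 value on this toy example) reflect how heavily face clusters overlap in embedding space.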

§ 3.3 DECENTRALIZED DATA COLLECTION PLATFORM

We propose a decentralized platform for crowdsourcing images and image datasets. The tool runs inference on images that users upload and performs face anonymization using our algorithm. This creates two opportunities for incentivizing users:

(1) Users can annotate images directly in the interface in case of undetected faces (false negatives) and wrong detections (false positives), and after successful verification by verifiers they are incentivized in the form of bounties and revenue shares. (2) Users can upload, annotate and verify annotations of images while keeping ownership of their data: they give the platform a license to use it and in return receive a share of the revenue their contributions create, i.e. any revenue generated by models built using the dataset yields a royalty for contributors. The missed edge cases (undetected faces and false positives) across various images are collected in the system and used to retrain the face detectors periodically, improving model performance, enriching fairness and reducing the inherent bias in the data and trained model.

Distributing ownership of the dataset across creators, annotators and verifiers democratizes the system, so that no single central party controls it and the revenue and value generated by the resulting algorithms and datasets flow back to the community directly. The trained model and inference algorithm will be published on decentralised algorithm marketplaces, making it possible to run inference in decentralised compute environments and preventing the model from being downloaded or copied. For the end-to-end workflow, see Figure 3 in the Appendix.

§ 4 RESULTS

In Table 1 we present statistical metrics for fairness across ethnicities in the RFW dataset using the MTCNN [23] model and FaceNet embeddings. It is not immediately evident that one group consistently gets better results, but on deeper study a clear pattern of bias towards certain ethnicities emerges. Prediction accuracy for the Asian (A) and Black (B) groups is lower than for the Indian (I) and White (W) groups, but these differences alone are not large enough to indicate bias. However, the Positive Predictive Value (PPV) and False Positive Rate (FPR) indicate higher confidence in White faces than in other groups: the FPR for White faces is significantly lower, and the PPV is as high as 0.98, compared to only around 0.78 for the Asian group, indicating higher precision in detecting White faces than faces of other groups.

Table 1: Statistical metrics for RFW Dataset using pre-trained MTCNN + FaceNet embeddings

Metrics (M)                 Asian (A)  Indian (I)  Black (B)  White (W)
Prediction Accuracy         0.91       0.95        0.92       0.97
False Positive Rate         0.07       0.04        0.08       0.005
False Negative Rate         0.05       0.08        0.04       0.14
Positive Predictive Value   0.78       0.93        0.82       0.98
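
The per-group rates in Table 1 are standard functions of binary (detected/undetected) confusion counts. A small sketch with purely illustrative counts, not the paper's data:

```python
def detection_rates(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Accuracy, FPR, FNR and PPV from binary confusion counts."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "fpr": fp / (fp + tn),   # false positive rate
        "fnr": fn / (fn + tp),   # false negative rate
        "ppv": tp / (tp + fp),   # positive predictive value
    }

# Illustrative counts: an over-eager detector on one group yields a
# high FPR and a depressed PPV, the pattern discussed above.
group = detection_rates(tp=780, fp=220, tn=930, fn=70)
```

Comparing these rates across demographic groups, rather than accuracy alone, is what exposes the disparity in the tables.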

We also quantify the similarity between users in a given cluster, extracted and processed from FaceNet embeddings followed by dimensionality reduction, for both the MIAP and RFW datasets in Table 2. The trend in the mean silhouette coefficient (MSC) is that the attribute with the larger number of distinct clusters has the higher score, indicating higher similarity of elements to their own cluster. For MIAP [16], the MSC computed on the combined race-and-gender clusters is higher than on either attribute individually, indicating that the gender clusters within any race are closer and more correlated than clusters across two ethnicities (racial groups). The Davies-Bouldin index shows a very similar pattern for the MIAP dataset: clusters are best separated when race and gender are combined, then when clustered by racial group, and finally when clustered by gender.

These results clearly demonstrate the need for a trained model that is unbiased towards all ethnicity and gender groups. To enrich fairness in training, the MTCNN + FaceNet model was retrained on FairFace [9], a balanced dataset with an equal distribution of all ethnicity and gender groups, with

Table 2: Clustering metrics for RFW and MIAP Dataset using MTCNN + FaceNet embeddings

Metrics  RFW-Race  MIAP-Race  MIAP-Gender  MIAP-Both
MSC      0.12      0.16       0.09         0.19
DBI      4.21      3.89       6.47         3.64

adjusted race labels similar to the RFW and MIAP datasets. The increase in prediction accuracy across classes ranged between 1% and 5.5%, and PPV increased by up to 19% after retraining. As shown in Table 3, this is a clear improvement in model performance, indicating that an unbiased training dataset, together with a few data augmentation techniques, can improve the model such that its results are not biased towards any single gender or racial group.

Table 3: Statistical metrics for RFW Dataset using FairFace-trained MTCNN + FaceNet embeddings

Metrics (M')                Asian (A)  Indian (I)  Black (B)  White (W)
Prediction Accuracy         0.96       0.95        0.97       0.98
False Positive Rate         0.01       0.01        0.008      0.005
False Negative Rate         0.05       0.03        0.04       0.04
Positive Predictive Value   0.93       0.92        0.94       0.98

Hence, as proposed in Section 3.3, a data portal will be used to curate and publish various datasets with the support of annotators and verifiers. The incentives of ownership in dataset usage, and of labelling incorrectly detected faces and missed detections in the tool, increase user engagement on the portal to challenge the model and receive bounties in return. This will allow us to periodically retrain the face anonymization models on various edge cases and improve fairness in these models in a decentralized manner.

§ 5 CONCLUSION

In conclusion, we believe that measuring fairness in face anonymization algorithms is necessary to deploy technology that is unbiased and inclusive of all ethnicity, gender and age groups. We proposed a decentralized tool to improve the quality of training datasets used in face recognition modelling by shifting the focus onto identifying and quantifying "bias" in the core algorithm towards different groups and de-biasing it. The de-biasing steps included both creating a diverse dataset with better representation of most demographics and retraining all layers of the core algorithm, allowing the same model to be fine-tuned (dense layers only) periodically on the missed detections identified by the annotators and verifiers in the tool. The bias measurement framework was outlined in this paper.

In our analysis, we found that most face detection algorithms are predominantly biased towards White faces across both the MIAP and RFW datasets, irrespective of gender: the clustered FaceNet embeddings showed that clustering metrics were much higher when male and female faces were clustered together across all ethnicities than when clustered separately. This indicates a need for diversity in the dataset across all ethnicities; a dataset is more likely to be fair when its creation happens in a decentralised manner, with users across the world contributing images, identifying missed detections for a demographic group, or validating the corrected output of a fellow user.

In future work, we will focus on breaking down the clusters in more detail to interpret the correlations that lead the model to place certain points closer to each other. We also plan to make the Data Portal public, so that users can upload their own data into the pipeline and be incentivized based on the usage of their data by any algorithm built on top of it. We further plan to improve the anonymization algorithm with a GAN-based approach, to ensure the data distribution of an anonymized face does not change completely. In addition, we plan to integrate Spotify's Annoy for indexing similar faces across the Data Portal, finding similar uploaded images of users and de-duplicating the uploaded data.

§ 6 APPENDIX

§ 6.1 CLUSTERING SIMILAR FACES USING FACENET EMBEDDINGS

The visual representation of the RFW (Racial Faces in the Wild) dataset faces, clustered using the dimensionality reduction technique t-SNE in 2-dimensional space followed by the DBSCAN algorithm (converted from the 128-dimensional vectors generated by FaceNet representations, or face embeddings). As seen visually, similar

[graphics]

Figure 2: Face Embeddings visualized using t-SNE and DBSCAN

demographic groups are clustered closer to each other in the 2-D space. Using different cluster sizes changed the density of the clusters accordingly; the number of clusters that gave the optimal clustering metrics was chosen for benchmarking the dataset's clustering metrics.
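
Projecting the 128-dimensional embeddings down to two dimensions for a plot like Figure 2 can be sketched with plain SVD-based PCA (the main text's PCA variant rather than t-SNE; the names and sizes below are illustrative):

```python
import numpy as np

def pca_2d(X: np.ndarray) -> np.ndarray:
    """Project rows of X onto their top-2 principal components."""
    Xc = X - X.mean(axis=0)                    # centre the embeddings
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T                       # shape (N, 2)

# Stand-in for FaceNet embeddings: 50 faces x 128 dimensions.
emb = np.random.default_rng(1).normal(size=(50, 128))
pts = pca_2d(emb)                              # ready to scatter-plot or cluster
```

The 2-D points can then be fed to DBSCAN and plotted, with each point coloured by its demographic label, to reproduce a visualization of this kind.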

§ 6.2 DATA PORTAL PIPELINE

[graphics]

Figure 3: Proposed end-to-end working of the decentralized data portal
NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/uEQTusqzEg-/Initial_manuscript_md/Initial_manuscript.md

# A Brief Overview of AI Governance for Responsible Machine Learning Systems

Anonymous Author(s)

Affiliation

Address

email

## Abstract

Organizations of all sizes, across all industries and domains, are leveraging artificial intelligence (AI) technologies to solve some of their biggest challenges around operations, customer experience, and much more. However, due to the probabilistic nature of AI, the risks associated with it are far greater than with traditional technologies. Research has shown that these risks range from regulatory, compliance, reputational, user-trust and societal risks to financial and even existential ones. Depending on the nature and size of the organization, AI technologies can pose a significant risk if not used responsibly. This position paper presents a brief introduction to AI governance, a framework designed to oversee the responsible use of AI with the goal of preventing and mitigating risks. Such a framework will not only manage risks but also extract maximum value from AI projects and develop consistency for organization-wide adoption of AI.

## 1 Introduction

In this position paper, we share our insights about AI governance in companies, which draws new connections between various aspects and properties of trustworthy and socially responsible ML: security, robustness, privacy, fairness, ethics, interpretability, transparency, etc.

For a long time, artificial intelligence (AI) was something only enterprise organizations adopted, thanks to the huge amounts of resources at their fingertips. Today, smaller companies can take advantage of AI due to newer technologies, e.g. cloud software, that are significantly more affordable than what was available in the past [2, 8, 11, 12]. AI has been on an upward trajectory in recent years, and adoption will increase significantly over the next several years [21, 15, 28]. Clearly, AI is seen as a fruitful investment by many organizations. However, every investment has its pros and cons, and unfortunately the cons of AI adoption are often caused by builders of AI systems who do not take the necessary steps to avoid problems down the road [20, 12].

### 1.1 Problems within Industry

The application of AI in industry is still in its infancy. That said, many problems have arisen since its adoption [10, 12, 20], which can be attributed to several factors:

Lack of risk management: Too much attention is given to applications of AI and their potential success, and not enough to their potential pitfalls and risks.

AI adoption is moving too fast: According to a 2021 survey by KPMG [16], many respondents noted that AI technology is moving too fast for their comfort in the industrial manufacturing (55%), technology (49%), financial services (37%), government (37%), and health care (35%) sectors.

AI adoption needs government intervention: According to the same KPMG survey [16], an overwhelming percentage of respondents agreed that governments should be involved in regulating AI technology in the industrial manufacturing (94%), retail (87%), financial services (86%), life sciences (86%), technology (86%), health care (84%), and government (82%) sectors.

Companies are still immature when it comes to adopting AI: Some companies are not prepared for business conditions to change once an ML model is deployed into the real world.

Many of these problems can be avoided with proper governance mechanisms. AI without such mechanisms is a dangerous game with detrimental outcomes due to its inherent uncertainty [36, 7]. With that said, adding governance to applications of AI is imperative to ensure safety in production.

## 2 AI Governance

For organizations to realize maximum value from AI projects and develop consistency for organization-wide adoption of AI, while managing significant risks to their business, they must implement AI governance [30, 9, 13, 17]. This enables organizations not only to develop AI projects in a responsible way, but also to ensure consistency across the entire organization while keeping the business objective front and center. With AI governance implemented (as illustrated in Figure 1), the following benefits can be realized:

Alignment and Clarity: teams would be aware of, and aligned on, the industry, international, regional, local, and organizational policies that need to be adhered to.

Thoughtfulness and Accountability: teams would put deliberate effort into justifying the business case for AI projects, and conscious effort into thinking about end-user experience, adversarial impacts, and public safety and privacy. This also places greater accountability on the teams developing their respective AI projects.

Consistency and Organizational Adoption: teams would have a more consistent way of developing and collaborating on their AI projects, leading to increased tracking and transparency. This also provides an overarching view of all AI projects within the organization, leading to increased visibility and overall adoption.

Process, Communication, and Tools: teams would have a complete understanding of the steps needed to move an AI project to production and start realizing business value. They would also be able to leverage tools that take them through the defined process, while communicating with the right stakeholders through the tool.

Trust and Public Perception: as teams build their AI projects more thoughtfully, this inherently builds trust amongst customers and end users, and therefore a positive public perception.

The components of AI governance should focus on organizational and use-case planning, AI development, and AI "operationalization", which together form a four-stage AI life-cycle approach.

|
| 52 |
+
|
| 53 |
+
Figure 1: Illustration of the AI Governance application towards responsible AI in companies.
|
| 54 |
+
|
| 55 |
+
### 2.1 Organizational Planning

An AI Governance Program [17, 9, 35] should be organized in such a way that (a) there is a comprehensive understanding of regulations, laws, and policies amongst all team members; (b) there are resources and help available for team members who encounter challenges; and (c) there is a lightweight, yet clear, process to assist team members.

## 1. Regulations, Laws, Policies

Laws and regulations that apply to a specific entity should be identified, documented, and made available for others to review and audit. These regulations, laws, and policies vary across industries and sometimes by geographical location. Organizations should, where applicable, develop policies for themselves that reflect their values and ethical views [35, 24, 12]; this enables teams to be more autonomous and make decisions with confidence.

## 2. Organization (Center of Competency)

Establishing groups within an organization that support teams with AI projects can prove quite beneficial. This includes a group that is knowledgeable about regulations, laws, and policies and can answer any questions that AI teams may have; a group that shares best practices across different AI teams within the organization; and a group that audits the data, models, processes, etc. to ensure there are no breaches or instances of non-compliance. For more information, we refer the reader to the survey by Floridi et al. [12].

## 3. Process

Developing a lightweight process that provides guidelines to AI teams can improve their efficiency rather than hinder their progress and velocity. This involves identifying what the approval process and incident response would be for data, models, deployments, etc.

### 2.2 Use Case Planning

Building use cases involves establishing business value, the technology stack, and model usage. The people involved in this process can include subject matter experts, data scientists/analysts/annotators, ML engineers, IT professionals, and finance departments.

Business Value Framework. The AI team should ensure that the motivation for the AI use case is documented and communicated amongst all stakeholders. This should also include the original hypothesis and the metrics that will be used to evaluate the experiments.

Tools, Technology, Products. The AI team should either select from a set of pre-approved tools and products from the organization or get a set of tools and products approved before using them in an AI use case. If tools for AI development are not governed, this not only leads to high costs and an inability to manage the tools (as existing IT teams are well aware), it also prevents repeatability and traceability in AI models.

Model Usage. Once a sense of value is attached to the use case, the next step is to break the use case down into its sub-components, which include, but are not limited to: identifying the consumer of the model, the model's limitations, and potential bias that may exist within the model, along with its implications. One would also want to ensure inclusiveness of the target population, public safety and user privacy, and identification of the model interface needed for the intended use case.

### 2.3 AI Development

This stage covers the development of a machine learning model, including data handling and analysis, modeling, generating explanations, bias detection, accuracy and efficacy analysis, security and robustness checks, model lineage, validation, and documentation.

## 1. Data Handling, Analysis and Modeling

The first technical step in any AI project is the procurement and analysis of data, which is critical as it lays the foundation for all work going forward. Once the data is analyzed, one must determine whether modeling is needed for the use case at hand. If it is, the application of AI can take place. Such an application is an iterative process spanning many different roles.

## 2. Explanations and Bias

The goal of model explanations is to relate feature values to model predictions in a human-friendly manner [25]. What one does with these explanations breaks down into three personas: the modeler, the intermediary user, and the end user. The modeler uses explanations for model debugging and for understanding the model they just built. The intermediary user turns what the modeler made into actionable insights. Finally, the end user is the person the model affects directly. For these reasons, Explainable Artificial Intelligence (XAI) is a very active research topic [4, 14].

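As a minimal sketch of one such explanation technique, permutation importance measures how much a quality metric degrades when a single feature column is shuffled; the toy model and data below are illustrative assumptions, not part of this paper:

```python
import random

def accuracy(model, rows, labels):
    return sum(model(x) == y for x, y in zip(rows, labels)) / len(labels)

def permutation_importance(model, rows, labels, seed=0):
    """Accuracy drop when each feature column is shuffled in turn."""
    rng = random.Random(seed)
    baseline = accuracy(model, rows, labels)
    importances = []
    for j in range(len(rows[0])):
        column = [x[j] for x in rows]
        rng.shuffle(column)               # break the feature-label link
        shuffled = [x[:j] + [v] + x[j + 1:] for x, v in zip(rows, column)]
        importances.append(baseline - accuracy(model, shuffled, labels))
    return importances

model = lambda x: 1 if x[0] > 0.5 else 0  # toy model: ignores feature 1
rows = [[0.9, 5], [0.8, 1], [0.1, 9], [0.2, 2]]
labels = [1, 1, 0, 0]
imps = permutation_importance(model, rows, labels)
print(imps)                                # feature 1 gets importance 0.0
```

Features whose shuffling barely moves the metric contribute little to the model's predictions, which serves the modeler's debugging persona described above.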
Bias, whether intentional (disparate treatment) or unintentional (disparate impact), is a cause for concern in many applications of AI [31, 22]. Common things to investigate when it comes to preventing bias include the data source used for the modeling process, performance issues amongst different demographics, disparate impact, known limitations & potential adverse implications, and the model's impact on public safety [5]. We refer the reader to the survey by Mehrabi et al. [22] for more details about bias and fairness in AI.

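As a concrete illustration, one common first-pass screen for disparate impact compares selection rates across groups; the data and the four-fifths threshold below are illustrative conventions, not requirements of this paper:

```python
# Hypothetical sketch of a disparate-impact screen (the "four-fifths rule").
from collections import defaultdict

def disparate_impact(preds, groups):
    """Ratio of the lowest group selection rate to the highest."""
    totals, positives = defaultdict(int), defaultdict(int)
    for p, g in zip(preds, groups):
        totals[g] += 1
        positives[g] += p
    rates = {g: positives[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values()), rates

preds  = [1, 1, 1, 0, 1, 0, 0, 0]            # 1 = favorable outcome
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
ratio, rates = disparate_impact(preds, groups)
if ratio < 0.8:                               # four-fifths rule of thumb
    print(f"potential disparate impact: ratio={ratio:.2f}, rates={rates}")
```

A low ratio does not establish bias on its own, but it is a cheap signal that a deeper audit is warranted.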
## 3. Accuracy, Efficacy, & Robustness

The accuracy of a machine learning model is critical for any business application in which predictions drive potential actions. However, it is not the most important metric to optimize. One must also consider the efficacy of the model, i.e., is the model making the intended business impact?

When a model is serving predictions in production, the data can differ slightly or significantly from the data the project team had access to. Although model drift and feature drift can capture this discrepancy, they are lagging indicators; by the time drift is detected, the model has already made predictions. This is where robustness comes in: project teams can proactively test for model robustness, using "out-of-scope" data, to understand how the model behaves under perturbation. The out-of-scope data can be a combination of manual generation (an analyst toggles feature values) and automatic generation (the system toggles feature values).

## 4. Security

ML systems today are subject to general attacks that can affect any public-facing IT system [6, 27]; specialized attacks that exploit insider access to data and ML code, or external access to ML prediction APIs and endpoints [33, 32]; and trojans that can hide in third-party ML artifacts. Such attacks must be accounted for and tested against before sending a machine learning model out into the real world.

## 5. Documentation & Validation

An overall lineage of the entire AI project life-cycle should be documented to ensure transparency and understanding [23, 29]; this will be useful for the AI team working on the project and also for future teams who reference this project for their own applications.

Model validation [26, 18] is the set of processes and activities carried out by a third party with the intent to verify that models are robust and performing as expected, in line with the business use case. Validation also identifies the impact of potential limitations and assumptions. From a technical standpoint, the following should be considered: (i) sensitivity analysis; (ii) in-sample vs. out-of-sample performance; (iii) replication of the model development team's results; (iv) stability analysis. Model "validators" should document all of their findings and share them with relevant stakeholders.

### 2.4 AI Operationalization

Deploying a machine learning model into production (i.e., MLOps [1, 34]) is the first step toward realizing value from it. The deployment process should include the following steps:

Review-Approval Flow: model building in an AI project goes through various stages: experimentation, model registration, deployment, and decommissioning. Moving from one stage to the next requires "external" reviewer(s) who vet the work and provide feedback.

Monitoring & Alerts: once a model is deployed, it must be monitored across various metrics to ensure there is no degradation. Causes of degradation in a deployed model include feature and/or target drift, lack of data integrity, and outliers, amongst other things. In terms of monitoring, the accuracy, fairness, and explanations of predictions are of interest [3, 19].

Decision Making: the output of a machine learning model is a prediction, but that output must be turned into a decision. How should the decision be made? Will it be autonomous? Will it involve a human in the loop? The answers to these questions vary across applications, but the idea remains the same: ensure decisions are made in the proper way to decrease risk for everyone involved.

Incident Response and Escalation Process: with AI models in production, there is always a chance that issues will arise. Organizations should have an incident response plan and escalation process documented and known to all project teams.

## 3 Conclusion

AI systems are used today to make life-altering decisions about employment, bail, parole, and lending, and the scope of decisions delegated to AI systems seems likely to expand in the future. The pervasiveness of AI across many fields will not slow down anytime soon, and organizations will want to keep up with such applications. However, they must be cognizant of the risks that come with AI and have guidelines around how they approach applications of AI in order to avoid those risks. By establishing a framework for AI Governance, organizations will be able to harness AI for their use cases while avoiding risks and having plans in place for risk mitigation, which is paramount.

Social Impact. As we discuss in this paper, governance and a degree of control over AI applications in organizations should be mandatory. AI Governance aims to enable and facilitate connections between various aspects of trustworthy and socially responsible machine learning systems, and therefore accounts for security, robustness, privacy, fairness, ethics, and transparency. We summarize this concept here and believe that the implementation of these ideas will have a positive impact on society.

## References
[1] Sridhar Alla and Suman Kalyan Adari. What is MLOps? In Beginning MLOps with MLFlow, pages 79-124. Springer, 2021.

[2] Sulaiman AlSheibani, Yen Cheung, and Chris Messom. Artificial intelligence adoption: AI-readiness at firm-level. PACIS 2018 Proceedings, 2018.

[3] Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. Concrete problems in AI safety. arXiv preprint arXiv:1606.06565, 2016.

[4] Alejandro Barredo Arrieta, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador García, Sergio Gil-López, Daniel Molina, Richard Benjamins, et al. Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58:82-115, 2020.

[5] S. Barocas, M. Hardt, and A. Narayanan. Fairness and Machine Learning: Limitations and Opportunities, 2022. URL: https://fairmlbook.org/pdf/fairmlbook.pdf.

[6] Marco Barreno, Blaine Nelson, Anthony D Joseph, and J Doug Tygar. The security of machine learning. Machine Learning, 81(2):121-148, 2010. URL: https://people.eecs.berkeley.edu/~adj/publications/paper-files/SecML-MLJ2010.pdf.

[7] John Bresina, Richard Dearden, Nicolas Meuleau, Sailesh Ramkrishnan, David Smith, and Richard Washington. Planning under continuous time and resource uncertainty: A challenge for AI. arXiv preprint arXiv:1301.0559, 2012.

[8] Marija Cubric. Drivers, barriers and social considerations for AI adoption in business and management: A tertiary study. Technology in Society, 62:101257, 2020.

[9] Allan Dafoe. AI governance: A research agenda. Governance of AI Program, Future of Humanity Institute, University of Oxford: Oxford, UK, 1442:1443, 2018.

[10] Artificial Intelligence Incident Database. AI Incident Database, 2022. URL: https://incidentdatabase.ai/.

[11] Yanqing Duan, John S Edwards, and Yogesh K Dwivedi. Artificial intelligence for decision making in the era of big data - evolution, challenges and research agenda. International Journal of Information Management, 48:63-71, 2019.

[12] Luciano Floridi, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia Dignum, Christoph Luetge, Robert Madelin, Ugo Pagallo, Francesca Rossi, et al. AI4People - an ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4):689-707, 2018.

[13] Urs Gasser and Virgilio AF Almeida. A layered model for AI governance. IEEE Internet Computing, 21(6):58-62, 2017.

[14] David Gunning, Mark Stefik, Jaesik Choi, Timothy Miller, Simone Stumpf, and Guang-Zhong Yang. XAI - explainable artificial intelligence. Science Robotics, 4(37):eaay7120, 2019.

[15] IBM. IBM Global AI Adoption Index, 2022. URL: https://www.ibm.com/downloads/cas/GVAGA3JP.

[16] KPMG. AI adoption accelerated during the pandemic but many say it's moving too fast: KPMG survey, 2021. URL: https://info.kpmg.us/news-perspectives/technology-innovation/thriving-in-an-ai-world/ai-adoption-accelerated-during-pandemic.html.

[17] Maciej Kuziemski and Gianluca Misuraca. AI governance in the public sector: Three tales from the frontiers of automated decision-making in democratic settings. Telecommunications Policy, 44(6):101976, 2020.

[18] Maurice Landry, Jean-Louis Malouin, and Muhittin Oral. Model validation in operations research. European Journal of Operational Research, 14(3):207-220, 1983.

[19] Jan Leike, Miljan Martic, Victoria Krakovna, Pedro A Ortega, Tom Everitt, Andrew Lefrancq, Laurent Orseau, and Shane Legg. AI safety gridworlds. arXiv preprint arXiv:1711.09883, 2017.

[20] Bernard Marr. Is artificial intelligence dangerous? 6 AI risks everyone should know about. Forbes, 2018.

[21] Joe McKendrick. AI adoption skyrocketed over the last 18 months. Harvard Business Review, 2021. URL: https://hbr.org/2021/09/ai-adoption-skyrocketed-over-the-last-18-months.

[22] Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. A survey on bias and fairness in machine learning. ACM Computing Surveys (CSUR), 54(6):1-35, 2021.

[23] Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. Model cards for model reporting. In Proceedings of the Conference on Fairness, Accountability, and Transparency. ACM, 2019. doi: 10.1145/3287560.3287596. URL: https://doi.org/10.1145%2F3287560.3287596.

[24] Brent Mittelstadt. Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1(11):501-507, 2019.

[25] Christoph Molnar. Interpretable Machine Learning. 2022. URL: https://christophm.github.io/interpretable-ml-book/.

[26] Board of Governors of the Federal Reserve System and Office of the Comptroller of the Currency. Supervisory Guidance on Model Risk Management, SR Letter 11-7, 2011. URL: https://www.federalreserve.gov/supervisionreg/srletters/sr1107a1.pdf.

[27] Nicolas Papernot. A Marauder's Map of security and privacy in machine learning: An overview of current and future research directions for making machine learning secure and private. In Proceedings of the 11th ACM Workshop on Artificial Intelligence and Security. ACM, 2018. URL: https://arxiv.org/pdf/1811.01134.pdf.

[28] PwC (PricewaterhouseCoopers). PwC 2022 AI Business Survey, 2022. URL: https://www.pwc.com/us/en/tech-effect/ai-analytics/ai-business-survey.html.

[29] Mahima Pushkarna, Andrew Zaldivar, and Oddur Kjartansson. Data cards: Purposeful and transparent dataset documentation for responsible AI, 2022. URL: https://arxiv.org/abs/2204.01075.

[30] Sandeep Reddy, Sonia Allan, Simon Coghlan, and Paul Cooper. A governance model for the application of AI in health care. Journal of the American Medical Informatics Association, 27(3):491-497, 2020.

[31] Reva Schwartz, Apostol Vassilev, Kristen Greene, Lori Perine, Andrew Burt, and Patrick Hall. Towards a standard for identifying and managing bias in artificial intelligence, 2022. URL: https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=934464.

[32] Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. Membership inference attacks against machine learning models. In 2017 IEEE Symposium on Security and Privacy (SP), pages 3-18. IEEE, 2017. URL: https://arxiv.org/pdf/1610.05820.pdf.

[33] Florian Tramèr, Fan Zhang, Ari Juels, Michael K Reiter, and Thomas Ristenpart. Stealing machine learning models via prediction APIs. In 25th USENIX Security Symposium (USENIX Security 16), pages 601-618, 2016. URL: https://www.usenix.org/system/files/conference/usenixsecurity16/sec16_paper_tramer.pdf.

[34] Mark Treveil, Nicolas Omont, Clément Stenac, Kenji Lefevre, Du Phan, Joachim Zentici, Adrien Lavoillotte, Makoto Miyazaki, and Lynn Heidmann. Introducing MLOps. O'Reilly Media, 2020.

[35] Weiyu Wang and Keng Siau. Artificial intelligence: A study on governance, policies, and regulations. MWAIS 2018 Proceedings, 40, 2018.

[36] Lotfi A Zadeh. Is probability theory sufficient for dealing with uncertainty in AI: A negative view. In Machine Intelligence and Pattern Recognition, volume 4, pages 103-116. Elsevier, 1986.

NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/uEQTusqzEg-/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
§ A BRIEF OVERVIEW OF AI GOVERNANCE FOR RESPONSIBLE MACHINE LEARNING SYSTEMS

Anonymous Author(s)
Affiliation
Address
email

§ ABSTRACT

Organizations of all sizes, across all industries and domains, are leveraging artificial intelligence (AI) technologies to solve some of their biggest challenges around operations, customer experience, and much more. However, due to the probabilistic nature of AI, the risks associated with it are far greater than those of traditional technologies. Research has shown that these risks can range from regulatory, compliance, reputational, user-trust, and societal risks to financial and even existential risks. Depending on the nature and size of the organization, AI technologies can pose a significant risk if not used in a responsible way. This position paper presents a brief introduction to AI governance, a framework designed to oversee the responsible use of AI with the goal of preventing and mitigating risks. Having such a framework in place not only manages risk but also helps organizations realize maximum value from AI projects and develop consistency for organization-wide adoption of AI.

§ 1 INTRODUCTION

In this position paper, we share our insights about AI Governance in companies, which enables new connections between various aspects and properties of trustworthy and socially responsible ML: security, robustness, privacy, fairness, ethics, interpretability, transparency, etc.

For a long time, artificial intelligence (AI) was something only enterprise organizations adopted, owing to the huge amounts of resources at their fingertips. Today, smaller companies can take advantage of AI thanks to newer technologies, e.g., cloud software, that are significantly more affordable than what was available in the past [2, 8, 11, 12]. AI adoption has been on an upward trajectory in recent years and will increase significantly over the next several years [21, 15, 28]. Clearly, AI is seen as a fruitful investment for many organizations. However, every investment has its pros and cons. Unfortunately, the cons associated with AI adoption are often caused by builders of AI systems who do not take the necessary steps to avoid problems down the road [20, 12].

§ 1.1 PROBLEMS WITHIN INDUSTRY

The application of AI in industry is still in its infancy. Even so, many problems have arisen since its adoption [10, 12, 20], which can be attributed to several factors:

Lack of risk management: Too much attention is given to applications of AI and their potential success, and not enough to their potential pitfalls and risks.

AI adoption is moving too fast: According to a 2021 survey by KPMG [16], many respondents noted that AI technology is moving too fast for their comfort in the industrial manufacturing (55%), technology (49%), financial services (37%), government (37%), and health care (35%) sectors.

AI adoption needs government intervention: According to the same KPMG survey [16], an overwhelming percentage of respondents agreed that governments should be involved in regulating AI technology in the industrial manufacturing (94%), retail (87%), financial services (86%), life sciences (86%), technology (86%), health care (84%), and government (82%) sectors.

Companies are still immature when it comes to adopting AI: Some companies are not prepared for business conditions to change once an ML model is deployed into the real world.

Many of these problems can be avoided with proper governance mechanisms. AI without such mechanisms is a dangerous game with potentially detrimental outcomes due to its inherent uncertainty [36, 7]. With that said, adding governance to applications of AI is imperative to ensure safety in production.

§ 42 2 AI GOVERNANCE
|
| 36 |
+
|
| 37 |
+
In order for organizations to realize the maximum value out of AI projects and develop consistency for organization-wide adoption of AI, while managing significant risks to their business, they must implement AI governance [30, 9, 13, 17]; this enables organizations to not only develop AI projects in a responsible way, but also ensure that there is consistency across the entire organization and the business objective is front and center. With the AI governance implemented (as illustrated in Figure 1), the following benefits can be realized:
|
| 38 |
+
|
| 39 |
+
Alignment and Clarity: teams would be aware and aligned on what the industry, international, regional, local, and organizational policies are that need to be adhered to.
|
| 40 |
+
|
| 41 |
+
Thoughtfulness and Accountability: teams would put deliberate effort into justifying the business case for AI projects, and put conscious effort into thinking about end-user experience, adversarial impacts, public safety & privacy. This also places greater accountability on the teams developing their respective AI projects.
|
| 42 |
+
|
| 43 |
+
Consistency and Organizational Adoption: teams would have a more consistent way of developing and collaborating on their AI projects, leading to increased tracking and transparency for their projects. This also provides an overarching view of all AI projects going on within the organization, leading to increased visibility and overall adoption.
|
| 44 |
+
|
| 45 |
+
Process, Communication, and Tools: teams would have complete understanding of what the steps are in order to move the AI project to production to start realizing business value. They would also be able to leverage tools that take them through the defined process, while being able to communicate with the right stakeholders through the tool.
|
| 46 |
+
|
| 47 |
+
Trust and Public Perception: as teams build out their AI projects more thoughtfully, this will inherently build trust amongst customers and end users, and therefore a positive public perception.
|
| 48 |
+
|
| 49 |
+
The components of AI governance should focus on organizational and use case planning, AI development, and AI "operationalization", which come together to make a 4 stage AI life-cycle approach.
|
| 50 |
+
|
| 51 |
+
< g r a p h i c s >
|
| 52 |
+
|
| 53 |
+
Figure 1: Illustration of the AI Governance application towards responsible AI in companies.
|
| 54 |
+
|
| 55 |
+
§ 2.1 ORGANIZATIONAL PLANNING
|
| 56 |
+
|
| 57 |
+
An AI Governance Program [17, 9, 35] should be organized in such a way that (a) there is comprehensive understanding of regulations, laws, and policies amongst all team members (b) resources and help available for team members who encounter challenges (c) there is a light weight, yet clear process to assist team members.
|
| 58 |
+
|
| 59 |
+
§ 1. REGULATIONS, LAWS, POLICIES
|
| 60 |
+
|
| 61 |
+
Laws and regulations that apply to a specific entity should be identified, documented, and available for others to review and audit. These regulations, laws, and policies vary across industry and sometimes by geographical location. Organizations should, if applicable, develop policies for themselves, which reflect their values and ethical views [35, 24, 12]; this enables teams to be more autonomous and make decisions with confidence.
|
| 62 |
+
|
| 63 |
+
§ 2. ORGANIZATION (CENTER OF COMPETENCY)
|
| 64 |
+
|
| 65 |
+
Establishing groups within an organization that provide support to teams with AI projects can prove to be quite beneficial. This includes a group that is knowledgeable with regulations, laws, and policies and can answer any questions that AI teams may have; a group that is able to share best practices across different AI teams within the organization; a group that is able to audit the data, model, process, etc. to ensure there isn't a breach or non-compliance. For more information, we refer the reader to to the survey by Floridi et al. [12].
|
| 66 |
+
|
| 67 |
+
§ 3. PROCESS
|
| 68 |
+
|
| 69 |
+
Developing a light-weight process that provides guidelines to AI teams can help with their efficiency, rather than hinder their progress and velocity. This involves identifying what the approval process and incident response would be for data, model, deployments, etc.
|
| 70 |
+
|
| 71 |
+
§ 2.2 USE CASE PLANNING
|
| 72 |
+
|
| 73 |
+
Building use cases involves establishing business value, technology stack, and model usage. The group of people involved in this process can include: subject matter experts, data scientists/analysts/annotators and ML engineers, IT professionals, and finance departments.
|
| 74 |
+
|
| 75 |
+
Business Value Framework. The AI team should ensure that the motivation for the AI use case is documented and communicated amongst all stakeholders. This should also include the original hypothesis, and the metrics that would be used for evaluating the experiments.
|
| 76 |
+
|
| 77 |
+
Tools, Technology, Products. The AI team should either select from a set of pre-approved tools and products from the organization or get a set of tools and products approved before using in an AI user case. If tools for AI development are not governed, it not only leads to high costs and inability to manage the tools (as existing IT teams are aware), it also leads to not being able to create repeatability and traceability into AI models.
|
| 78 |
+
|
| 79 |
+
Model Usage. Once a sense of value is attached to the use case, then the next step would be to break down the use case to its sub-components which include, but are not limited to, identifying the consumer of the model, the model's limitations, and potential bias that may exist within the model, along with its implications. Also, one would want to ensure inclusiveness of the target, public safety/user privacy, and identification of the model interface needed for their intended use case.
|
| 80 |
+
|
| 81 |
+
§ 2.3 AI DEVELOPMENT
|
| 82 |
+
|
| 83 |
+
Development of a machine learning model, including data handling and analysis, modeling, generating explanations, bias detection, accuracy and efficacy analysis, security and robustness checks, model lineage, validation, and documentation.
|
| 84 |
+
|
| 85 |
+
§ 1. DATA HANDLING, ANALYSIS AND MODELING
|
| 86 |
+
|
| 87 |
+
The first technical step to any AI project is the procurement and analysis of data, which is critical as it lays the foundation for all work going forward. Once data is analyzed, then one must decipher if modeling is needed for the use case at hand. If modeling is needed, then the application of AI can take place. Such an application is an iterative process spanned across many different types of people.
|
| 88 |
+
|
| 89 |
+
§ 2. EXPLANATIONS AND BIAS
|
| 90 |
+
|
| 91 |
+
The goal of model explanations is to relate feature values to model predictions in a human-friendly manner [25]. What one does with these explanations breaks down to 3 personas: modeler, intermediary user, and the end user. The modeler would use explanations for model debugging and gaining understanding of the model they just built. The intermediary user would use what the modeler made for actionable insights. And finally, the end user is the person the model affects directly. For these reasons, Explainable Artificial Intelligence (XAI) is a very active research topic [4, 14].
|
| 92 |
+
|
| 93 |
+
Bias, whether intentional (disparate treatment) or unintentional (disparate impact), is a cause for concern in many applications of AI [31, 22]. Common things to investigate when it comes to preventing bias include the data source used for the modeling process, performance issues amongst different demographics, disparate impact, identifying known limitations & potential adverse implications, and the models impact on public safety [5]. We refer the reader to the survey by Mehrabi et al. [22] for more details about bias and fairness in AI.
|
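As one concrete example of the checks above, disparate impact is often screened with a simple ratio of positive-outcome rates between groups; the group labels and the "four-fifths" rule of thumb mentioned in the comment are illustrative assumptions, not part of the cited frameworks:

```python
def disparate_impact_ratio(outcomes, groups, protected, reference):
    """Ratio of positive-outcome rates: protected group vs. reference group.

    outcomes: list of 0/1 model decisions; groups: parallel list of group labels.
    A ratio below ~0.8 is a commonly cited (illustrative) red flag.
    """
    def rate(g):
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(selected) / len(selected)
    return rate(protected) / rate(reference)

# Toy data: 2/4 positive outcomes for group "a", 3/4 for group "b".
ratio = disparate_impact_ratio(
    outcomes=[1, 0, 1, 0, 1, 1, 1, 0],
    groups=["a", "a", "a", "a", "b", "b", "b", "b"],
    protected="a", reference="b",
)
```

In practice the same ratio would be computed per protected attribute and per decision threshold, alongside the other checks listed above.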
| 94 |
+
|
| 95 |
+
§ 3. ACCURACY, EFFICACY, & ROBUSTNESS
|
| 96 |
+
|
| 97 |
+
Accuracy of a machine learning model is critical for any business application in which predictions drive potential actions. However, it is not the most important metric to optimize. One must also consider the efficacy of a model, i.e., is the model making the intended business impact?
|
| 98 |
+
|
| 99 |
+
When a model is serving predictions in a production setting, the data can differ slightly or significantly from the data the project team had access to. Although model drift and feature drift can capture this discrepancy, they are lagging indicators, and by then the model has already made predictions. This is where robustness comes in: project teams can proactively test model robustness using "out of scope" data to understand how the model behaves under perturbation. The out of scope data can be a combination of manual generation (toggling feature values by hand) and automatic generation (the system toggles feature values).
|
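A minimal sketch of such a perturbation-based robustness check; the model, feature name, deltas, and tolerance below are all illustrative assumptions:

```python
def robustness_check(model, rows, feature, deltas, tol):
    """Toggle one feature's value and flag rows where the prediction moves
    by more than `tol` -- a minimal sketch of 'out of scope' testing."""
    flagged = []
    for row in rows:
        base = model(row)
        for d in deltas:
            perturbed = dict(row, **{feature: row[feature] + d})
            if abs(model(perturbed) - base) > tol:
                flagged.append((row, d))  # prediction is unstable here
                break
    return flagged

score = lambda r: 0.1 * r["income"]  # hypothetical scoring model
rows = [{"income": 10.0}, {"income": 50.0}]
flagged = robustness_check(score, rows, "income", deltas=[-5.0, 5.0], tol=1.0)
```

An automatic generator would simply enumerate the `deltas` per feature instead of taking them as input.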
| 100 |
+
|
| 101 |
+
§ 4. SECURITY
|
| 102 |
+
|
| 103 |
+
ML systems today are subject to general attacks that can affect any public-facing IT system [6, 27]; specialized attacks that exploit insider access to data and ML code or external access to ML prediction APIs and endpoints [33, 32]; and trojans that can hide in third-party ML artifacts. Such attacks must be accounted for and tested against before sending a machine learning model out into the real world.
|
| 104 |
+
|
| 105 |
+
§ 5. DOCUMENTATION & VALIDATION
|
| 106 |
+
|
| 107 |
+
An overall lineage of the entire AI project life-cycle should be documented to ensure transparency and understanding [23, 29]; this will be useful both for the AI team working on the project and for future teams who must reference it for their own applications.
|
| 108 |
+
|
| 109 |
+
Model validation [26, 18] is the set of processes and activities that are carried out by a third party, with the intent to verify that models are robust and performing as expected, in line with the business use case. It also identifies the impact of potential limitations and assumptions. From a technical standpoint, the following should be considered: (i) Sensitivity Analysis. (ii) In-sample vs. Out-of-sample performance. (iii) Replication of results from model development team. (iv) Stability analysis. Model "validators" should document all of their findings and share with relevant stakeholders.
|
| 110 |
+
|
| 111 |
+
§ 2.4 AI OPERATIONALIZATION
|
| 112 |
+
|
| 113 |
+
Deploying a machine learning model into production (i.e., MLOps [1, 34]) is the first step toward receiving value from it. The deployment process should include the following steps:
|
| 114 |
+
|
| 115 |
+
Review-Approval Flow: Model building in an AI project will go through various stages: experimentation, model registration, deployment, and decommissioning. Moving from one stage to the next would require "external" reviewer(s) who will vet and provide feedback.
|
| 116 |
+
|
| 117 |
+
Monitoring & Alerts: Once a model is deployed, it must be monitored on various metrics to ensure there is no degradation. Causes of degradation in deployment include feature and/or target drift, lack of data integrity, and outliers, among other things. In terms of monitoring, accuracy, fairness, and explanations of predictions are of interest [3, 19].
|
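As an illustration, feature drift of the kind described above is often tracked with a population stability index (PSI); the bins, sample data, and the common ~0.2 alert threshold mentioned in the comment are illustrative assumptions:

```python
import math

def psi(expected, actual, bins):
    """Population stability index between a baseline and a live sample.

    Values above ~0.2 are commonly treated as a drift alert (illustrative rule).
    """
    def proportions(values):
        counts = [0] * (len(bins) - 1)
        for v in values:
            for i in range(len(bins) - 1):
                if bins[i] <= v < bins[i + 1]:
                    counts[i] += 1
                    break
        return [max(c / len(values), 1e-6) for c in counts]  # avoid log(0)

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]   # training-time feature values
live = [0.1, 0.2, 0.6, 0.7, 0.8, 0.9, 0.6, 0.7]       # shifted production sample
score = psi(baseline, live, bins=[0.0, 0.5, 1.0])
```

A monitoring job would compute this per feature on a schedule and raise an alert when the threshold is crossed.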
| 118 |
+
|
| 119 |
+
Decision Making: The output of a machine learning model is a prediction, but that output must be turned into a decision. How is the decision made? Will it be autonomous? Will it involve a human in the loop? The answers vary across applications, but the idea remains the same: ensuring decisions are made in the proper way to decrease risk for everyone involved.
|
| 120 |
+
|
| 121 |
+
Incident Response and Escalation Process: With AI models being used in production, there is always going to be a chance for issues to arise. Organizations should have an incident response plan and escalation process documented and known to all project teams.
|
| 122 |
+
|
| 123 |
+
§ 3 CONCLUSION
|
| 124 |
+
|
| 125 |
+
AI systems are used today to make life-altering decisions about employment, bail, parole, and lending, and the scope of decisions delegated to AI systems seems likely to expand in the future. The pervasiveness of AI across many fields will not slow down anytime soon, and organizations will want to keep up with such applications. However, they must be cognizant of the risks that come with AI and have guidelines around how they approach AI applications to avoid those risks. By establishing a framework for AI Governance, organizations will be able to harness AI for their use cases while avoiding risks and having risk-mitigation plans in place, which is paramount.
|
| 126 |
+
|
| 127 |
+
Social Impact As we discuss in this paper, governance and certain controls over AI applications in organizations should be mandatory. This "new term", AI Governance, aims to enable and facilitate connections between various aspects of trustworthy and socially responsible machine learning systems, and therefore accounts for security, robustness, privacy, fairness, ethics, and transparency. We summarize this concept here, and we believe the implementation of these ideas should have a positive impact on society.
|
NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/uPF2bs14E3p/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,318 @@
|
| 1 |
+
# Differentially Private Gradient Boosting on Linear Learners for Tabular Data Analysis
|
| 2 |
+
|
| 3 |
+
Anonymous Author(s) Affiliation Address email
|
| 4 |
+
|
| 5 |
+
## Abstract
|
| 6 |
+
|
| 7 |
+
Gradient boosting takes linear combinations of weak base learners. Therefore, absent privacy constraints (when we can exactly optimize over the base models), it is not effective when run over base learner classes that are closed under linear combinations (e.g., linear models). As a result, gradient boosting is typically implemented with tree base learners (e.g., XGBoost), and this has become the state of the art approach in tabular data analysis. Prior work on private gradient boosting focused on taking the state of the art algorithm in the non-private regime, boosting on trees, and making it differentially private. Surprisingly, we find that when we use differentially private learners, gradient boosting over trees is not as effective as gradient boosting over linear learners. In this paper, we propose differentially private gradient-boosted linear models as a private classification method for tabular data. We empirically demonstrate that, under strict privacy constraints, it yields higher F1 scores than private versions of gradient-boosted trees on five real-world binary classification problems. This work adds to the growing picture that the most effective learning methods under differential privacy may be quite different from the most effective learning methods without privacy.
|
| 8 |
+
|
| 9 |
+
## 1 Introduction
|
| 10 |
+
|
| 11 |
+
Gradient boosting is an approach to learn an additive model such that the sum of many weak base learners approximates the final output [1]. This is achieved by iteratively fitting the next base learner to the gradient of the loss evaluated at the current prediction. Algorithm 1 outlines gradient boosting in a general form, which can be parameterized by any choice of loss function $\mathcal{L}$ and base learner $b\left( x\right)$ . Classification and regression trees (CARTs) are one of the most popular choices for the base learner because of their effectiveness on tabular data and the availability of deployment-ready tree-based data structures. Existing packages, including XGBoost [2], LightGBM [3], and CatBoost [4], drastically improved the usability of gradient boosting among tools for tabular data analysis.
|
| 12 |
+
|
| 13 |
+
There has been an increasing demand for privacy-preserving machine learning tools, which naturally triggered a wave of efforts to develop a private version of the gradient boosting algorithm. Differential privacy (DP, Definition 1.1) is one of the most prevalent definitions of privacy, and was adopted to make gradient boosting algorithms private in recent works [5, 6, 7]. DP ensures that, for a randomized algorithm, when two neighboring datasets that differ in one data point are fed into an algorithm, the two outputs are indistinguishable, within some probability margin defined using $\epsilon$ and $\delta \in \lbrack 0,1)$ .
|
| 14 |
+
|
| 15 |
+
Definition 1.1 (Differential Privacy [8]). A randomized algorithm $\mathcal{M}$ with domain $\mathcal{D}$ is $\left( {\epsilon ,\delta }\right)$ -differentially private if, for all $\mathcal{S} \subseteq \operatorname{Range}\left( \mathcal{M}\right)$ and for all pairs of neighboring databases $D,{D}^{\prime } \in \mathcal{D}$ ,
|
| 16 |
+
|
| 17 |
+
$$
|
| 18 |
+
\Pr \left\lbrack {\mathcal{M}\left( D\right) \in \mathcal{S}}\right\rbrack \leq {e}^{\epsilon }\Pr \left\lbrack {\mathcal{M}\left( {D}^{\prime }\right) \in \mathcal{S}}\right\rbrack + \delta , \tag{1}
|
| 19 |
+
$$
|
| 20 |
+
|
| 21 |
+
where the probability space is over the randomness of the mechanism $\mathcal{M}$ . As an extension of this idea, a single-parameter family of privacy notion (Gaussian differential privacy, GDP) was later proposed [9]. We first define the trade-off function $T\left( {P, Q}\right)$ and use it to define GDP.
|
| 22 |
+
|
| 23 |
+
Algorithm 1 Gradient Boosting (iterations $T$ , loss $\mathcal{L}$ , base learner $b\left( {x;\theta }\right)$ )
|
| 24 |
+
|
| 25 |
+
---
|
| 26 |
+
|
| 27 |
+
Data input: covariates ${x}_{1},\cdots ,{x}_{n}$ and labels ${y}_{1},\cdots ,{y}_{n}$ .
|
| 28 |
+
|
| 29 |
+
Initialize $f\left( x\right) = 0$
|
| 30 |
+
|
| 31 |
+
for $t \in \left\lbrack T\right\rbrack$ do
|
| 32 |
+
|
| 33 |
+
Compute ${\theta }_{t} = \arg \mathop{\min }\limits_{\theta }\mathop{\sum }\limits_{{i = 1}}^{n}\mathcal{L}\left( {{y}_{i}, f\left( {x}_{i}\right) + b\left( {{x}_{i};\theta }\right) }\right)$
|
| 34 |
+
|
| 35 |
+
Update $f\left( x\right) \leftarrow f\left( x\right) + {f}_{t}\left( x\right)$ , where ${f}_{t}\left( x\right) = b\left( {x;{\theta }_{t}}\right)$
|
| 36 |
+
|
| 37 |
+
end for
|
| 38 |
+
|
| 39 |
+
return $f\left( x\right) = \mathop{\sum }\limits_{{t = 1}}^{T}{f}_{t}\left( x\right)$
|
| 40 |
+
|
| 41 |
+
---
|
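As a concrete illustration of Algorithm 1, the sketch below specializes it to the squared loss, where the negative gradient is simply the residual; the constant-shift weak learner is a hypothetical stand-in (not from the paper) used only to keep the sketch short:

```python
def gradient_boost(xs, ys, rounds, base_fit):
    """Algorithm 1 with squared loss: each round fits a weak learner to the
    negative gradient (the residuals) and adds it to the running ensemble."""
    models = []
    predict = lambda x: sum(m(x) for m in models)
    for _ in range(rounds):
        residuals = [y - predict(x) for x, y in zip(xs, ys)]
        models.append(base_fit(xs, residuals))
    return predict

def mean_fit(xs, residuals):
    """Hypothetical weak learner: predict the mean residual (a constant)."""
    r = sum(residuals) / len(residuals)
    return lambda x: r

f = gradient_boost([0, 1, 2], [1.0, 2.0, 3.0], rounds=3, base_fit=mean_fit)
```

In the paper's setting, `base_fit` is replaced by a privately trained ridge regressor, and the loss may be logistic or hinge rather than squared.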
| 42 |
+
|
| 43 |
+
Definition 1.2 (Trade-off function, Definition 2.1 of [9]). For any two probability distributions $P$ and $Q$ on the same space, the trade-off function $T\left( {P, Q}\right) : \left\lbrack {0,1}\right\rbrack \rightarrow \left\lbrack {0,1}\right\rbrack$ is defined as
|
| 44 |
+
|
| 45 |
+
$$
|
| 46 |
+
T\left( {P, Q}\right) \left( \alpha \right) = \mathop{\inf }\limits_{\phi }\left\{ {1 - {\mathbb{E}}_{Q}\left\lbrack \phi \right\rbrack : {\mathbb{E}}_{P}\left\lbrack \phi \right\rbrack \leq \alpha }\right\}
|
| 47 |
+
$$
|
| 48 |
+
|
| 49 |
+
Definition 1.3 (Gaussian Differential Privacy, Definition 2.6 of [9]). A mechanism $\mathcal{M}$ is said to satisfy $\mu$ -Gaussian Differential Privacy $\left( {\mu \text{-GDP}}\right)$ if it is ${G}_{\mu }$ -DP. That is,
|
| 50 |
+
|
| 51 |
+
$$
|
| 52 |
+
T\left( {\mathcal{M}\left( D\right) ,\mathcal{M}\left( {D}^{\prime }\right) }\right) \geq {G}_{\mu }
|
| 53 |
+
$$
|
| 54 |
+
|
| 55 |
+
for all neighboring datasets $D$ and ${D}^{\prime }$ , where ${G}_{\mu } = T\left( {\mathcal{N}\left( {0,1}\right) ,\mathcal{N}\left( {\mu ,1}\right) }\right)$ .
|
| 56 |
+
|
| 57 |
+
$\mu$ -GDP means that determining whether an individual’s data is present in the dataset from one draw is at least as difficult as telling apart the two normal distributions $\mathcal{N}\left( {0,1}\right)$ and $\mathcal{N}\left( {\mu ,1}\right)$ . $\mu$ -GDP can be converted to $\left( {\epsilon ,\delta }\right)$ -DP and vice versa.
|
| 58 |
+
|
| 59 |
+
Corollary 1.1 (Conversion between GDP and DP, Corollary 2.13 of [9]). A mechanism is $\mu$ -GDP if and only if it is $\left( {\epsilon ,\delta \left( \epsilon \right) }\right)$ -DP for all $\epsilon \geq 0$ , where
|
| 60 |
+
|
| 61 |
+
$$
|
| 62 |
+
\delta \left( \epsilon \right) = \Phi \left( {-\frac{\epsilon }{\mu } + \frac{\mu }{2}}\right) - {e}^{\epsilon }\Phi \left( {-\frac{\epsilon }{\mu } - \frac{\mu }{2}}\right) .
|
| 63 |
+
$$
|
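Corollary 1.1 is straightforward to evaluate numerically; a minimal sketch using only the standard normal CDF (via the error function):

```python
import math

def delta_from_mu(epsilon, mu):
    """Corollary 1.1: the delta(epsilon) curve of a mu-GDP mechanism,
    delta = Phi(-eps/mu + mu/2) - e^eps * Phi(-eps/mu - mu/2)."""
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return (Phi(-epsilon / mu + mu / 2)
            - math.exp(epsilon) * Phi(-epsilon / mu - mu / 2))

d = delta_from_mu(epsilon=0.0, mu=1.0)  # reduces to Phi(0.5) - Phi(-0.5)
```

Inverting this relation numerically (e.g., by bisection over $\mu$) is how an $(\epsilon, \delta)$ target is converted to a GDP budget in practice.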
| 64 |
+
|
| 65 |
+
Theorem 1.2 (Gaussian Mechanism, Theorem 2.7 of [9]). Define a randomized algorithm ${GM}$ that operates on a statistic $\theta$ as ${GM}\left( {x,\mu }\right) = \theta \left( x\right) + \eta$ , where $\eta \sim \mathcal{N}\left( {0,\operatorname{sens}{\left( \theta \right) }^{2}/{\mu }^{2}}\right)$ and sens is the ${l}_{2}$ -sensitivity of the statistic $\theta$ . Then, ${GM}$ is $\mu$ - ${GDP}$ .
|
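A minimal sketch of the Gaussian mechanism GM for a scalar statistic, assuming the caller supplies the $l_2$-sensitivity:

```python
import random

def gaussian_mechanism(statistic, sensitivity, mu, rng=random):
    """Theorem 1.2: release statistic + N(0, (sensitivity/mu)^2) noise,
    which guarantees mu-GDP for this single release."""
    return statistic + rng.gauss(0.0, sensitivity / mu)

rng = random.Random(0)
release = gaussian_mechanism(5.0, sensitivity=1.0, mu=1e9, rng=rng)  # huge mu => tiny noise
```

For vector- or matrix-valued statistics (as used later for $X^\top X$ and $X^\top g_t$), the same noise scale is applied independently to each entry.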
| 66 |
+
|
| 67 |
+
Most attempts focused on making gradient boosting private on tree base learners. For example, [5] proposed DPBoost, privatizing gradient-boosted regression trees by finding splits using the exponential mechanism and computing numeric values at leaves using the Laplace mechanism. In a similar fashion, DP-XGBoost was suggested by additionally privatizing the quantile sketching step of XGBoost [6]. DPBoost and DP-XGBoost suffered from low accuracy under strict privacy constraints, as they had to consume privacy budget not only in leaf value computation but also in split finding step (quantile sketching in DP-XGBoost). DP-EBM overcame this by restricting each tree to use only one feature and randomly selecting split points (hence no privacy budget is consumed in split finding) [7]. It showed improved performance compared to DPBoost. However, DP-EBM takes a much longer time to learn a model since it requires learning many more trees.
|
| 68 |
+
|
| 69 |
+
Absent privacy, gradient boosting on linear models does not improve performance, since linear models are closed under linear combinations, and the base learner can already exactly optimize over this class. But with differential privacy, it is no longer possible to exactly optimize over the base class, so gradient boosting has the potential to give improvements. Moreover, there are private linear learners that make very efficient use of the privacy budget. We adopt a state of the art approach to privately learn linear models, AdaSSP [10], which adds noise to the sufficient statistics for a linear model.
|
| 70 |
+
|
| 71 |
+
## 2 Differentially Private Gradient Boosting with Linear Models
|
| 72 |
+
|
| 73 |
+
The flexibility of gradient boosting arises from two choices: (i) a loss function and (ii) a class of base learners. We experiment with three loss functions (squared, logistic, and hinge) and fix the base learner class to a private ridge regressor (via AdaSSP). In this section, we describe the loss functions and their gradients (subsection 2.1) and the private base learner class (subsection 2.2).
|
| 74 |
+
|
| 75 |
+
### 2.1 Loss Functions and Gradients
|
| 76 |
+
|
| 77 |
+
For binary classification problems, we have a dataset $\mathcal{D}$ of $n$ data points, composed of covariates ${x}_{i} \in {\mathbb{R}}^{p}$ and labels ${y}_{i} \in \{ - 1,1\}$ (or ${y}_{i} \in \{ 0,1\}$ for logistic loss), $\forall i \in \left\lbrack n\right\rbrack$ . The final model $f\left( x\right)$ takes the covariates ${x}_{i}$ as an input and outputs the score ${s}_{i}$ , which can be translated later to output binary prediction $\widehat{y} = \mathbb{1}\left( {{s}_{i} > 0}\right) * 2 - 1$ (or $\widehat{y} = \mathbb{1}\left( {{s}_{i} > 0}\right)$ for ${y}_{i} \in \{ 0,1\}$ case).
|
| 78 |
+
|
| 79 |
+
Let $\mathcal{L}$ be the loss function, $T$ be the number of boosting rounds, ${f}_{t}$ be the model learned at iteration $t \in \left\lbrack T\right\rbrack$ . At $t$ -th round of boosting iteration, the goal of Algorithm 1 is to obtain
|
| 80 |
+
|
| 81 |
+
$$
|
| 82 |
+
{\theta }_{t} = \arg \mathop{\min }\limits_{\theta }\mathop{\sum }\limits_{{i = 1}}^{N}\mathcal{L}\left( {{y}_{i},\mathop{\sum }\limits_{{k = 1}}^{{t - 1}}{f}_{k}\left( {x}_{i}\right) + b\left( {{x}_{i};\theta }\right) }\right) . \tag{2}
|
| 83 |
+
$$
|
| 84 |
+
|
| 85 |
+
The ${f}_{t}\left( x\right) = b\left( {x;{\theta }_{t}}\right)$ can be approximated by steepest gradient descent, where the gradient is taken with respect to the score prediction ${s}_{i}$ and evaluated at current score ${s}_{i} \mathrel{\text{:=}} \mathop{\sum }\limits_{{k = 1}}^{{t - 1}}{f}_{k}\left( {x}_{i}\right)$ . The components of the negative gradient at $t$ can be written as
|
| 86 |
+
|
| 87 |
+
$$
|
| 88 |
+
{g}_{i, t} \mathrel{\text{:=}} - {\left. \frac{\partial \mathcal{L}\left( {{y}_{i},{s}_{i}}\right) }{\partial {s}_{i}}\right| }_{{s}_{i} = \mathop{\sum }\limits_{{k = 1}}^{{t - 1}}{f}_{k}\left( {x}_{i}\right) }. \tag{3}
|
| 89 |
+
$$
|
| 90 |
+
|
| 91 |
+
Table 1 lists the three loss functions we experiment with in this paper and their negative gradients. Note that the squared loss is unbounded, hence we clip the gradient to fall within $(-z, z)$ for some $z \in \mathbb{R}$ (further explained in the next section).
|
| 92 |
+
|
| 93 |
+
Table 1: Loss functions and gradients, where $\sigma \left( x\right) = \frac{1}{\left( 1 + {e}^{-x}\right) }$ and $\mathbb{1}\left( \cdot \right)$ is the indicator function.
|
| 94 |
+
|
| 95 |
+
<table><tr><td/><td>$\mathcal{L}\left( {{y}_{i},{s}_{i}}\right)$</td><td>$g\left( {s}_{i}\right) = - \partial \mathcal{L}\left( {{y}_{i},{s}_{i}}\right) /\partial {s}_{i}$</td></tr><tr><td>Squared</td><td>$\frac{1}{2}{\left( {y}_{i} - {s}_{i}\right) }^{2}$</td><td>${y}_{i} - {s}_{i}$</td></tr><tr><td>Logistic</td><td>$- {y}_{i}\ln \left( {\sigma \left( {s}_{i}\right) }\right) - \left( {1 - {y}_{i}}\right) \ln \left( {1 - \sigma \left( {s}_{i}\right) }\right)$</td><td>${y}_{i} - \sigma \left( {s}_{i}\right)$</td></tr><tr><td>Hinge</td><td>$\max \left( {0,1 - {y}_{i}{s}_{i}}\right)$</td><td>$\mathbb{1}\left( {1 - {y}_{i}{s}_{i} > 0}\right) {y}_{i}$</td></tr></table>
|
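The negative gradients in Table 1 can be written directly as code (label conventions follow Section 2.1: $y \in \{-1, 1\}$, or $\{0, 1\}$ for the logistic loss):

```python
import math

def neg_gradient(loss, y, s):
    """Negative gradients g(s) = -dL(y, s)/ds from Table 1."""
    if loss == "squared":
        return y - s
    if loss == "logistic":
        return y - 1.0 / (1.0 + math.exp(-s))   # y - sigma(s)
    if loss == "hinge":
        return y if 1 - y * s > 0 else 0.0      # 1(1 - y*s > 0) * y
    raise ValueError(loss)
```

These per-sample values form the target vector $g_t$ that the next base learner is fit to.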
| 96 |
+
|
| 97 |
+
### 2.2 Private Ridge Regressor as a Base Learner
|
| 98 |
+
|
| 99 |
+
With the negative gradients ${g}_{i, t}$ computed as in eq. (3), we fit the next base learner ${f}_{t}\left( x\right)$ to those gradients by minimizing empirical risk with a ridge regularizer. As we fixed the base learner class to linear models, we may express ${f}_{t}\left( x\right) = {\theta }_{t}^{\top }x$ , and the new model is obtained by
|
| 100 |
+
|
| 101 |
+
$$
|
| 102 |
+
{\theta }_{t} = \arg \mathop{\min }\limits_{\theta }\mathop{\sum }\limits_{{i = 1}}^{n}{\left( {\theta }^{\top }{x}_{i} - {g}_{i, t}\right) }^{2} + \lambda \parallel \theta {\parallel }_{2}^{2}, \tag{4}
|
| 103 |
+
$$
|
| 104 |
+
|
| 105 |
+
where $\lambda \in \mathbb{R}$ is a hyperparameter for the ridge regularizer. Note that the objective minimized in this step differs from the loss function we chose when computing gradients. Let $X \in {\mathbb{R}}^{n \times p}$ be the matrix with ${x}_{i}$ ’s in each row and ${g}_{t} \in {\mathbb{R}}^{n}$ be the vector of all samples’ gradients at $t$ (i.e., ${g}_{i, t}$ ). Absent privacy, the above minimization yields
|
| 106 |
+
|
| 107 |
+
$$
|
| 108 |
+
{\theta }_{t} = {\left( {X}^{\top }X + \lambda I\right) }^{-1}{X}^{\top }{g}_{t}. \tag{5}
|
| 109 |
+
$$
|
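A minimal sketch of the closed form in eq. (5), assuming NumPy; the data below is synthetic:

```python
import numpy as np

def ridge_closed_form(X, g, lam):
    """Equation (5): theta = (X^T X + lam * I)^{-1} X^T g."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ g)

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # synthetic covariates
g = X @ np.array([2.0, -1.0])                        # gradients from a known theta
theta = ridge_closed_form(X, g, lam=0.0)
```

The private version of this step (AdaSSP) replaces $X^\top X$, $X^\top g$, and $\lambda$ with their noisy releases before solving.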
| 110 |
+
|
| 111 |
+
Using the squared loss function, we can analytically show that ${\theta }_{t} = 0\;\forall t > 1$ when $\lambda = 0$ , and that it is close to zero for small $\lambda$ values. This confirms why gradient boosting is not applied to linear base learners absent privacy constraints.
|
| 112 |
+
|
| 113 |
+
To meet the privacy constraints, we adopt AdaSSP to privately learn this ridge regressor ${\theta }_{t}$ (Algorithm 2 of [10]). Let $\mathcal{X}$ and $\mathcal{Y}$ be the domain of our data covariates and labels, respectively. We define the bound on data domain $\parallel \mathcal{X}\parallel = \mathop{\sup }\limits_{{x \in \mathcal{X}}}\parallel x\parallel$ and $\parallel \mathcal{Y}\parallel = \mathop{\sup }\limits_{{y \in \mathcal{Y}}}\left| y\right|$ . Given the privacy budget $\epsilon ,\delta$ to guarantee $\left( {\epsilon ,\delta }\right)$ -DP, and bounds on the data $\parallel \mathcal{X}\parallel$ and $\parallel \mathcal{Y}\parallel$ for ${x}_{i}$ and ${g}_{i, t}$ , respectively, AdaSSP calibrates $\left( {\epsilon ,\delta }\right)$ -DP to $\mu$ -GDP with an appropriate $\mu$ , and adds calibrated Gaussian noise to three sufficient statistics: 1) ${X}^{\top }X$ , 2) ${X}^{\top }{g}_{t}$ , and 3) $\lambda$ . The detailed description of AdaSSP algorithm for learning one ridge regressor is deferred to Appendix A.2.
|
| 114 |
+
|
| 115 |
+
Let $\widehat{{X}^{\top }X} = {GM}\left( {{X}^{\top }X,{\mu }_{1}}\right) ,\widehat{{X}^{\top }{g}_{t}} = {GM}\left( {{X}^{\top }{g}_{t},{\mu }_{2}}\right) ,\widehat{\lambda } = {GM}\left( {\lambda ,{\mu }_{3}}\right)$ be the private release of sufficient statistics from a single instantiation of AdaSSP to learn ${\theta }_{t}$ and ${GM}$ is defined in Theorem 1.2. The final model $\widehat{{\theta }^{ * }}$ can be expressed as
|
| 116 |
+
|
| 117 |
+
$$
|
| 118 |
+
\widehat{{\theta }^{ \star }} = \mathop{\sum }\limits_{{t = 1}}^{T}\widehat{{\theta }_{t}} = {\left( \widehat{{X}^{\top }X} + \widehat{\lambda }I\right) }^{-1}\mathop{\sum }\limits_{{t = 1}}^{T}\widehat{{X}^{\top }{g}_{t}} \tag{6}
|
| 119 |
+
$$
|
| 120 |
+
|
| 121 |
+
where the initial ${g}_{i,1} = {y}_{i}$ . Notice that, for a boosted model, we may call ${GM}\left( {{X}^{\top }X,{\mu }_{1}}\right)$ and ${GM}\left( {\lambda ,{\mu }_{3}}\right)$ just once and only repeat the second part ${GM}\left( {{X}^{\top }{g}_{t},{\mu }_{2}}\right)$ for $T$ rounds, instead of straightforwardly repeating Algorithm 3 for $T$ rounds.
|
| 122 |
+
|
| 123 |
+
Finally, we suggest BoostedAdaSSP (Algorithm 2), a differentially private gradient boosting algorithm with linear base learners. Algorithm 2 assumes binary classification tasks and can be generalized to any choice of loss function $\mathcal{L}$ . In the second line inside the for loop, we clip the computed gradient to enforce ${g}_{i, t} \in \left\lbrack {-\parallel \mathcal{Y}\parallel ,\parallel \mathcal{Y}\parallel }\right\rbrack$ , if it is not naturally satisfied. The final output $\theta$ defines the score predictor $f\left( x\right) = {\theta }^{\top }x$ , where a score above 0 means a positive label $\left( {+1}\right)$ and a score at or below 0 means a negative label (-1 or 0).
|
| 124 |
+
|
| 125 |
+
Algorithm 2 BoostedAdaSSP (Data $X, y$ , Privacy parameters $\epsilon ,\delta$ , Split ratio $a, b, c$ , Bounds $\parallel \mathcal{X}\parallel ,\parallel \mathcal{Y}\parallel$ )
|
| 126 |
+
|
| 127 |
+
---
|
| 128 |
+
|
| 129 |
+
Initialize $\theta = 0$
|
| 130 |
+
|
| 131 |
+
Find $\mu$ such that $\mu$ -GDP satisfies $\left( {\epsilon ,\delta }\right)$ -DP. # Corollary 1.1
|
| 132 |
+
|
| 133 |
+
Calibrate ${\mu }_{1},{\mu }_{2},{\mu }_{3}$ such that ${\mu }_{1} : {\mu }_{2} : {\mu }_{3} = a : b : c$ and $\mu = \sqrt{{\mu }_{1}^{2} + {\mu }_{2}^{2} + {\mu }_{3}^{2}}$ .
|
| 134 |
+
|
| 135 |
+
$\widehat{{X}^{\top }X} = {GM}\left( {{X}^{\top }X,{\mu }_{1}}\right)$ and $\widehat{\lambda } = {GM}\left( {\lambda ,{\mu }_{3}}\right)$ # instantiate AdaSSP (part 1 & 3)
|
| 136 |
+
|
| 137 |
+
$\Gamma = {\left( \widehat{{X}^{\top }X} + \widehat{\lambda }I\right) }^{-1}$
|
| 138 |
+
|
| 139 |
+
for $t \in \left\lbrack T\right\rbrack$ do
|
| 140 |
+
|
| 141 |
+
$s = {X\theta }$ # current score prediction
|
| 142 |
+
|
| 143 |
+
${g}_{t} = - {\nabla }_{s}\mathcal{L}\left( {y, s}\right)$ # compute gradient, clip as needed
|
| 144 |
+
|
| 145 |
+
${\theta }_{t} = \Gamma \widehat{{X}^{\top }{g}_{t}}$ , where $\widehat{{X}^{\top }{g}_{t}} = {GM}\left( {{X}^{\top }{g}_{t},\frac{{\mu }_{2}}{\sqrt{T}}}\right)$ # instantiate AdaSSP (part 2)
|
| 146 |
+
|
| 147 |
+
$\theta = \theta + {\theta }_{t}$ # update model
|
| 148 |
+
|
| 149 |
+
end for
|
| 150 |
+
|
| 151 |
+
return $\theta$
|
| 152 |
+
|
| 153 |
+
$* {GM}\left( {X,\mu }\right)$ denotes a Gaussian mechanism to guarantee $\mu$ -GDP for private release of a statistic
|
| 154 |
+
|
| 155 |
+
$X$ , and uses the bounds $\parallel \mathcal{X}\parallel$ to compute the sensitivity internally.
|
| 156 |
+
|
| 157 |
+
---
|
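The loop above can be sketched as follows; for brevity this sketch fixes $\lambda$ non-privately and noises only ${X}^{\top }{g}_{t}$ (with budget $\mu_2$ split over $T$ rounds), whereas the full Algorithm 2 also privatizes ${X}^{\top }X$ and $\lambda$, and the sensitivity bound here is an assumed constant:

```python
import numpy as np

def boosted_adassp(X, y, T, mu2, lam=1.0, seed=0):
    """Simplified sketch of Algorithm 2 with squared loss."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    sens = 1.0  # assumed bound ||X|| * ||Y|| on the l2-sensitivity of X^T g_t
    Gamma = np.linalg.inv(X.T @ X + lam * np.eye(p))   # Gamma in Algorithm 2
    theta = np.zeros(p)
    for _ in range(T):
        g = np.clip(y - X @ theta, -1.0, 1.0)          # gradient, clipped to [-1, 1]
        noisy = X.T @ g + rng.normal(0.0, sens * np.sqrt(T) / mu2, size=p)
        theta = theta + Gamma @ noisy                  # add weak learner theta_t
    return theta

# Tiny synthetic run; huge mu2 means essentially no noise is added.
theta = boosted_adassp(np.eye(2), np.array([0.5, -0.5]), T=1, mu2=1e9)
```

Note how $\Gamma$ is computed once outside the loop, mirroring the observation that the noisy $X^\top X$ and $\lambda$ need only be released a single time.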
| 158 |
+
|
| 159 |
+
Corollary 2.1 (Composition of GDP, Corollary 3.3 of [9]). The n-fold composition of ${\mu }_{i}$ -GDP mechanisms is $\sqrt{{\mu }_{1}^{2} + \cdots + {\mu }_{n}^{2}}$ -GDP.
|
| 160 |
+
|
| 161 |
+
Theorem 2.2. When $\parallel \mathcal{X}\parallel = \mathop{\sup }\limits_{{x \in \mathcal{X}}}\parallel x\parallel$ and $\parallel \mathcal{Y}\parallel = \mathop{\sup }\limits_{{y \in \mathcal{Y}}}\left| y\right|$ , Algorithm 2 satisfies $\mu$ -GDP and $\left( {\epsilon ,\delta }\right)$ -DP.
|
| 162 |
+
|
| 163 |
+
Proof. From Corollary 2.1, $\sqrt{{\mu }_{1}^{2} + T{\left( \frac{{\mu }_{2}}{\sqrt{T}}\right) }^{2} + {\mu }_{3}^{2}} = \sqrt{{\mu }_{1}^{2} + {\mu }_{2}^{2} + {\mu }_{3}^{2}} = \mu$ . The conversion of GDP to DP follows from Corollary 1.1.
|
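The privacy accounting in this proof can be checked numerically; the budget values below are arbitrary:

```python
import math

def composed_mu(mu1, mu2, mu3, T):
    """Theorem 2.2 accounting: composing mu1-GDP, T copies of
    (mu2/sqrt(T))-GDP, and mu3-GDP (Corollary 2.1) yields
    sqrt(mu1^2 + mu2^2 + mu3^2)-GDP, independent of T."""
    per_round = mu2 / math.sqrt(T)
    return math.sqrt(mu1**2 + T * per_round**2 + mu3**2)

total = composed_mu(mu1=0.3, mu2=0.4, mu3=0.5, T=1000)  # arbitrary split
```

This is why the number of boosting rounds $T$ can be increased without changing the total privacy guarantee, at the cost of more noise per round.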
| 164 |
+
|
| 165 |
+
## 3 Experiments
|
| 166 |
+
|
| 167 |
+
We evaluate Algorithm 2 on five real-world datasets from Kaggle and LIBSVM (details can be found in Appendix A.1) with three choices of loss function (squared, logistic, and hinge), varying numbers of iterations, and values of epsilon from 0.01 to 10. The privacy parameter is $\delta = {10}^{-6}$ for all experiments. Lastly, a non-private version was also computed for comparison. The performance of a model is measured by F1 score and AUROC. Note that AUROC may report a good value (close to 1) even when most of the minority class is misclassified. See Appendix A.1 for how imbalanced each dataset is.
|
| 168 |
+
|
| 169 |
+
To compare the performance of BoostedAdaSSP against a tree-based private gradient boosting algorithm, we choose DP-EBM [7], which reported the best performance among prior methods [5, 6]. At each boosting round, DP-EBM learns separate trees on individual features, and split points are chosen completely at random (this allows more efficient privacy budgeting). We ran DP-EBM with four choices of the maximum number of leaves $\{ 2,3,{10},{100}\}$ . Setting this value to 100 yielded the best F1 score, hence we report this result in the main paper and defer the other results to Appendix A.3.
|
| 170 |
+
|
| 171 |
+
### 3.1 Boosted Linear Models Under Non-Private Regime
|
| 172 |
+
|
| 173 |
+
Figure 1 shows the training loss of non-private gradient boosting, with each line corresponding to a choice of loss function (squared, logistic, and hinge). For the squared loss (green line), we see virtually no improvement over boosting rounds, as mentioned in Section 2.2. On the other hand, we observe a decrease in training loss for the logistic and hinge losses when additional boosting rounds are introduced. This is because the gradients of these loss functions are non-linear with respect to the score predictions (whereas the gradient is linear for squared loss).
|
| 174 |
+
|
| 175 |
+

|
| 176 |
+
|
| 177 |
+
Figure 1: Training loss versus number of rounds without privacy constraints.
|
| 178 |
+
|
| 179 |
+
### 3.2 BoostedAdaSSP (linear base learner) vs. DP-EBM (tree base learner)
|
| 180 |
+
|
| 181 |
+
Figure 2a and Figure 2b show the performance of private gradient boosting measured by F1 score and AUROC, respectively. Each line corresponds to BoostedAdaSSP with one of the three loss functions or to DP-EBM with 100 leaves per tree (red). In most cases we observe improved performance as $\epsilon$ increases ($\delta = {10}^{-6}$ is fixed for all experiments). This aligns with our expectation that a larger privacy budget $\epsilon$ allows less noise to be introduced, leading to better performance. However, the F1 score of DP-EBM with at most 100 rounds of boosting behaves counter-intuitively. Since the AUROC score follows our expectation, we may construe this as DP-EBM with few rounds of boosting producing a model that misclassifies the majority of the minority class, to the point that the privacy noise sometimes helps correctly classify the minority labels.
|
| 182 |
+
|
| 183 |
+
No loss function outperformed the others in all cases; rather, the best-performing loss function (for BoostedAdaSSP) depends on the dataset (detailed results on individual datasets are deferred to Appendix A.5). Overall, BoostedAdaSSP provides higher F1 scores at most values of total boosting rounds; however, DP-EBM provides slightly better AUROC scores. (Note that the datasets we experiment on are mostly imbalanced.)
|
| 184 |
+
|
| 185 |
+

|
| 186 |
+
|
| 187 |
+
Figure 2: Averaged scores vs. Epsilon
|
| 188 |
+
|
| 189 |
+
### 3.3 Effect of Boosting Under Privacy Constraints
|
| 190 |
+
|
| 191 |
+
Figure 3a and Figure 3b show the performance (y-axis) over boosting rounds (x-axis) at four different privacy levels. The F1 scores of BoostedAdaSSP and DP-EBM both improve as the number of boosting rounds increases. DP-EBM requires significantly more boosting rounds to yield an F1 score comparable to BoostedAdaSSP. To run 1000 rounds of boosting, DP-EBM takes about 60.9 seconds, while BoostedAdaSSP takes only about 4.5 seconds. Overall, we conclude that, when the number of boosting rounds is restricted to be small (i.e., when we want to limit the time budget), there exists a BoostedAdaSSP with some loss function that is preferable to DP-EBM for all epsilon values, when we evaluate performance based on F1 scores.
|
|
| 194 |
+
|
| 195 |
+

|
| 196 |
+
|
| 197 |
+
Figure 3: Averaged scores vs. number of boosting rounds
|
| 198 |
+
|
| 199 |
+
Additionally, for BoostedAdaSSP with squared loss, we compare the ratio between the test-set F1 score of a non-boosted model (i.e., with 1 iteration) and that of models with more rounds of iterations (10, 100, and 1000 are plotted). Figure 4 shows that the effect of boosting (measured by this ratio) diminishes as $\epsilon$ goes to infinity as well as when $\epsilon$ goes to zero. This can be explained by considering the $\epsilon = \infty$ and $\epsilon = 0$ limiting cases. As the privacy budget $\epsilon$ goes to infinity, BoostedAdaSSP gradually reduces the amount of noise added to the output and eventually behaves like the non-private regime; therefore, we may expect no effect of boosting, the same as in the non-private case (see Figure 1). As the privacy budget $\epsilon$ approaches 0, we eventually enter the high-privacy regime where the privacy noise dominates the signal. In this case, it is difficult for any learning algorithm, let alone additional boosting rounds, to learn anything.
|
| 200 |
+
|
| 201 |
+
As a result, we observe a bell curve shape in the Figure 4, which implies that there exists a sweet spot in terms of the privacy budget $\epsilon$ where the boosting has the maximum impact. However, the sweet spot observed here doesn't necessarily indicates the best F1 score. Same observations are shown in Figure 9 for logistic loss and Figure 10 for hinge loss.
|
| 202 |
+
|
| 203 |
+


Figure 4: Effect of DP in gradient boosting with mean squared loss

## 4 Conclusion and Future Work

We proposed a differentially private gradient boosting algorithm with linear base learners, adopting AdaSSP to privately train the linear models. In each boosting round, a linear model is privately trained to approximate the gradient of the loss function at the current score prediction. Without privacy, gradient boosting of a linear model is expected to be the same as (OLS) or similar to (ERM with small regularization) a single linear model learned in one shot; hence, in practice, gradient boosting has focused primarily on tree base models. In the high-privacy regime, however, BoostedAdaSSP provides a higher F1 score than the state-of-the-art tree-based differentially private gradient boosting algorithm (DP-EBM). BoostedAdaSSP also converges to a good performance level in fewer boosting rounds than DP-EBM at a fixed privacy level.

Although the results presented in this paper already seem promising, there are a few ways to further improve the algorithm. One direction is introducing more hyperparameters. For example, we may add a step size $\eta$ to the last step inside the for loop of Algorithm 2: when a weak base learner ${\theta }_{t}$ is added to the final model $\theta$ , we may multiply ${\theta }_{t}$ by $\eta$ , as in gradient descent algorithms (in this view, Algorithm 2 uses $\eta = 1$ ). We may also clip the gradients more aggressively for squared and logistic losses as we iterate over boosting rounds, so that less noise is added when instantiating the second part of AdaSSP.

## References

[1] Trevor Hastie, Robert Tibshirani, and Jerome H. Friedman. The elements of statistical learning: data mining, inference, and prediction, volume 2. Springer, 2009.

[2] Tianqi Chen and Carlos Guestrin. XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 785-794, 2016.

[3] Guolin Ke, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong Ma, Qiwei Ye, and Tie-Yan Liu. LightGBM: A highly efficient gradient boosting decision tree. Advances in Neural Information Processing Systems, 30, 2017.

[4] Liudmila Prokhorenkova, Gleb Gusev, Aleksandr Vorobev, Anna Veronika Dorogush, and Andrey Gulin. CatBoost: unbiased boosting with categorical features. Advances in Neural Information Processing Systems, 31, 2018.

[5] Qinbin Li, Zhaomin Wu, Zeyi Wen, and Bingsheng He. Privacy-preserving gradient boosting decision trees. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 784-791, 2020.

[6] Nicolas Grislain and Joan Gonzalvez. DP-XGBoost: Private machine learning at scale. arXiv preprint arXiv:2110.12770, 2021.

[7] Harsha Nori, Rich Caruana, Zhiqi Bu, Judy Hanwen Shen, and Janardhan Kulkarni. Accuracy, interpretability, and differential privacy via explainable boosting. In International Conference on Machine Learning, pages 8227-8237. PMLR, 2021.

[8] Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. In Proceedings of the 3rd Conference on Theory of Cryptography, TCC '06, pages 265-284, 2006.

[9] Jinshuo Dong, Aaron Roth, and Weijie J. Su. Gaussian differential privacy. arXiv preprint arXiv:1905.02383, 2019.

[10] Yu-Xiang Wang. Revisiting differentially private linear regression: optimal and adaptive prediction & estimation in unbounded domain. arXiv preprint arXiv:1803.02596, 2018.

## A Appendix

### A.1 Datasets

Table 2: Datasets used in experiments

<table><tr><td>Name</td><td>#features(p)</td><td>#train samples (n)</td><td>#test samples</td><td>$\%$ of positive samples in train set</td></tr><tr><td>cod-rna</td><td>8</td><td>271617</td><td>271617</td><td>33.33%</td></tr><tr><td>adult</td><td>123</td><td>32561</td><td>16281</td><td>24.08%</td></tr><tr><td>creditcard</td><td>30</td><td>227845</td><td>56962</td><td>0.17%</td></tr><tr><td>telco</td><td>46</td><td>5634</td><td>1409</td><td>26.53%</td></tr><tr><td>cardio</td><td>19</td><td>56000</td><td>14000</td><td>49.96%</td></tr></table>


Table 3: Data sources

<table><tr><td>Name</td><td>Link</td></tr><tr><td>cod-rna</td><td>https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html#cod-rna</td></tr><tr><td>adult</td><td>https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html#a9a</td></tr><tr><td>creditcard</td><td>https://www.kaggle.com/datasets/mlg-ulb/creditcardfraud</td></tr><tr><td>telco</td><td>https://www.kaggle.com/datasets/blastchar/telco-customer-churn</td></tr><tr><td>cardio</td><td>https://www.kaggle.com/datasets/sulianova/cardiovascular-disease-dataset</td></tr></table>

### A.2 AdaSSP algorithm to learn a single ridge regressor

Let $\widehat{\cdot}$ denote the private version of the corresponding statistic. Then, AdaSSP privately releases the sufficient statistics of the ridge regressor as follows.

Algorithm 3 Private ridge regression via AdaSSP (data $X, y$ , calibration ratio $a, b, c$ , privacy parameters $\epsilon ,\delta$ , bounds on $\parallel \mathcal{X}\parallel ,\parallel \mathcal{Y}\parallel$ )

---

Find $\mu$ such that $\mu$ -GDP satisfies $\left( {\epsilon ,\delta }\right)$ -DP. # Corollary 1.1

Calibrate ${\mu }_{1},{\mu }_{2},{\mu }_{3}$ such that ${\mu }_{1} : {\mu }_{2} : {\mu }_{3} = a : b : c$ and $\mu = \sqrt{{\mu }_{1}^{2} + {\mu }_{2}^{2} + {\mu }_{3}^{2}}$ .

$\widehat{{X}^{\top }X} = {GM}\left( {{X}^{\top }X,{\mu }_{1}}\right)$

$\widehat{{X}^{\top }{g}_{t}} = {GM}\left( {{X}^{\top }{g}_{t},{\mu }_{2}}\right)$ # ${g}_{t}$ resides within $\parallel \mathcal{Y}\parallel$

$\widehat{\lambda } = {GM}\left( {\lambda ,{\mu }_{3}}\right)$

$* {GM}\left( {X,\mu }\right)$ denotes a Gaussian mechanism to guarantee $\mu$ -GDP for the private release of a statistic $X$ , and uses the bounds $\parallel \mathcal{X}\parallel$ to compute the sensitivity internally.

---

Algorithm 3 instantiates three Gaussian mechanisms with ${\mu }_{1},{\mu }_{2}$ , and ${\mu }_{3}$ to privately release each sufficient statistic. Hence the composition

$$
\widehat{{\theta }_{t}} = {\left( \widehat{{X}^{\top }X} + \widehat{\lambda }I\right) }^{-1}\widehat{{X}^{\top }{g}_{t}} \tag{7}
$$

is $\left( {\epsilon ,\delta }\right)$ -DP. A detailed proof is available in Theorem 3 of [10].

### A.3 DP-EBM

Each line corresponds to DP-EBM with 2, 3, 10, and 100 maximum number of leaves per tree.



Figure 5: DP-EBM

### A.4 Non-private F1 and AUROC scores

In terms of the performance of these non-private boosted linear models on the test set, Fig. 6a and Fig. 6b show F1 scores and AUROC scores, respectively. Apart from the Cardiovascular dataset, increasing the number of rounds improves the test-set F1 score for the logistic and hinge losses only.

### A.5 F1 scores and AUROC scores on the test set of individual datasets



Figure 6: non-private



Figure 7: BoostedAdaSSP vs. DP-EBM



Figure 8: The Effect of Boosting

### A.7 Effect of Differential Privacy



Figure 9: Effect of DP gradient boosting with logistic loss



Figure 10: Effect of DP gradient boosting with hinge loss

NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/uPF2bs14E3p/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,218 @@
§ DIFFERENTIALLY PRIVATE GRADIENT BOOSTING ON LINEAR LEARNERS FOR TABULAR DATA ANALYSIS

Anonymous Author(s) Affiliation Address email

§ ABSTRACT

Gradient boosting takes linear combinations of weak base learners. Therefore, absent privacy constraints (when we can exactly optimize over the base models), it is not effective when run over base learner classes that are closed under linear combinations (e.g., linear models). As a result, gradient boosting is typically implemented with tree base learners (e.g., XGBoost), and this has become the state-of-the-art approach in tabular data analysis. Prior work on private gradient boosting focused on taking the state-of-the-art algorithm in the non-private regime (boosting on trees) and making it differentially private. Surprisingly, we find that when we use differentially private learners, gradient boosting over trees is not as effective as gradient boosting over linear learners. In this paper, we propose differentially private gradient-boosted linear models as a private classification method for tabular data. We empirically demonstrate that, under strict privacy constraints, it yields higher F1 scores than the private versions of gradient-boosted trees on five real-world binary classification problems. This work adds to the growing picture that the most effective learning methods under differential privacy may be quite different from the most effective learning methods without privacy.

§ 1 INTRODUCTION


Gradient boosting is an approach to learning an additive model in which the sum of many weak base learners approximates the final output [1]. This is achieved by iteratively fitting the next base learner to the gradient of the loss evaluated at the current prediction. Algorithm 1 outlines gradient boosting in a general form, which can be parameterized by any choice of loss function $\mathcal{L}$ and base learner $b\left( x\right)$ . Classification and regression trees (CARTs) are among the most popular choices for the base learner because of their effectiveness on tabular data and deployment-ready tree-based data structures in systems. Existing packages, including XGBoost [2], LightGBM [3], and CatBoost [4], have drastically improved the usability of gradient boosting among tools for tabular data analysis.

There has been increasing demand for privacy-preserving machine learning tools, which naturally triggered a wave of efforts to develop a private version of the gradient boosting algorithm. Differential privacy (DP, Definition 1.1) is one of the most prevalent definitions of privacy, and was adopted to make gradient boosting algorithms private in recent works [5, 6, 7]. DP ensures that, for a randomized algorithm, when two neighboring datasets that differ in one data point are fed into the algorithm, the two outputs are indistinguishable, within some probability margin defined using $\epsilon$ and $\delta \in \lbrack 0,1)$ .


Definition 1.1 (Differential Privacy [8]). A randomized algorithm $\mathcal{M}$ with domain $\mathcal{D}$ is $\left( {\epsilon ,\delta }\right)$ -differentially private if, for all $\mathcal{S} \subseteq \operatorname{Range}\left( \mathcal{M}\right)$ and for all pairs of neighboring databases $D,{D}^{\prime } \in \mathcal{D}$ ,

$$
\Pr \left\lbrack {\mathcal{M}\left( D\right) \in \mathcal{S}}\right\rbrack \leq {e}^{\epsilon }\Pr \left\lbrack {\mathcal{M}\left( {D}^{\prime }\right) \in \mathcal{S}}\right\rbrack + \delta , \tag{1}
$$

where the probability space is over the randomness of the mechanism $\mathcal{M}$ . As an extension of this idea, a single-parameter family of privacy notions (Gaussian differential privacy, GDP) was later proposed [9]. We first define the trade-off function $T\left( {P,Q}\right)$ and use it to define GDP.


Algorithm 1 Gradient Boosting (iterations $T$ , loss $\mathcal{L}$ , base learner $b\left( {x;\theta }\right)$ )

Data input: covariates ${x}_{1},\cdots ,{x}_{n}$ and labels ${y}_{1},\cdots ,{y}_{n}$ .

Initialize $f\left( x\right) = 0$

for $t \in \left\lbrack T\right\rbrack$ do

Compute ${\theta }_{t} = \arg \mathop{\min }\limits_{\theta }\mathop{\sum }\limits_{{i = 1}}^{n}\mathcal{L}\left( {{y}_{i},f\left( {x}_{i}\right) + b\left( {{x}_{i};\theta }\right) }\right)$

Update $f\left( x\right) \leftarrow f\left( x\right) + {f}_{t}\left( x\right)$ , where ${f}_{t}\left( x\right) = b\left( {x;{\theta }_{t}}\right)$

end for

return $f\left( x\right) = \mathop{\sum }\limits_{{t = 1}}^{T}{f}_{t}\left( x\right)$

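As a sketch (ours, not the paper's code), the loop above can be written in Python with a pluggable base learner. With a linear least-squares base learner and squared loss it collapses to ordinary least squares, which is exactly the paper's motivating observation about linear learners:

```python
import numpy as np

def gradient_boost(X, y, fit_base, T, neg_grad):
    """Algorithm 1 sketch: each round fits a base learner to the negative
    gradient of the loss at the current score, then adds it to the model."""
    score = np.zeros(len(y))
    learners = []
    for _ in range(T):
        g = neg_grad(y, score)        # e.g. y - score for squared loss
        f_t = fit_base(X, g)          # weak learner approximating g
        score += f_t(X)
        learners.append(f_t)
    return lambda Z: sum(f(Z) for f in learners)

def linear_base(X, g):
    # least-squares linear fit to the gradients (no privacy, no ridge)
    theta, *_ = np.linalg.lstsq(X, g, rcond=None)
    return lambda Z: Z @ theta
```

With `fit_base=linear_base` and squared loss, $T$ rounds of boosting reproduce the one-shot OLS fit, since every round after the first fits residuals that are orthogonal to the columns of $X$.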

Definition 1.2 (Trade-off function, Definition 2.1 of [9]). For any two probability distributions $P$ and $Q$ on the same space, the trade-off function $T\left( {P,Q}\right) : \left\lbrack {0,1}\right\rbrack \rightarrow \left\lbrack {0,1}\right\rbrack$ is defined as

$$
T\left( {P,Q}\right) \left( \alpha \right) = \mathop{\inf }\limits_{\phi }\left\{ {1 - {\mathbb{E}}_{Q}\left\lbrack \phi \right\rbrack : {\mathbb{E}}_{P}\left\lbrack \phi \right\rbrack \leq \alpha }\right\}
$$

Definition 1.3 (Gaussian Differential Privacy, Definition 2.6 of [9]). A mechanism $\mathcal{M}$ is said to satisfy $\mu$ -Gaussian Differential Privacy ( $\mu$ -GDP) if it is ${G}_{\mu }$ -DP. That is,

$$
T\left( {\mathcal{M}\left( D\right) ,\mathcal{M}\left( {D}^{\prime }\right) }\right) \geq {G}_{\mu }
$$

for all neighboring datasets $D$ and ${D}^{\prime }$ , where ${G}_{\mu } = T\left( {\mathcal{N}\left( {0,1}\right) ,\mathcal{N}\left( {\mu ,1}\right) }\right)$ .

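For intuition, this Gaussian trade-off curve has the closed form $G_{\mu}(\alpha) = \Phi(\Phi^{-1}(1-\alpha) - \mu)$, which follows from the Neyman-Pearson lemma for testing $\mathcal{N}(0,1)$ against $\mathcal{N}(\mu,1)$. A small sketch of ours, using only the standard library:

```python
from statistics import NormalDist

N01 = NormalDist()  # standard normal

def G(mu, alpha):
    """Trade-off function T(N(0,1), N(mu,1)) evaluated at alpha:
    G_mu(alpha) = Phi(Phi^{-1}(1 - alpha) - mu)."""
    return N01.cdf(N01.inv_cdf(1.0 - alpha) - mu)
```

At $\mu = 0$ the curve is $G_0(\alpha) = 1 - \alpha$ (the two hypotheses are indistinguishable); larger $\mu$ pushes the curve down, i.e., easier distinguishing and weaker privacy.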

$\mu$ -GDP means that determining from one draw whether an individual's data is present in the dataset is at least as difficult as telling apart the two normal distributions $\mathcal{N}\left( {0,1}\right)$ and $\mathcal{N}\left( {\mu ,1}\right)$ . $\mu$ -GDP can be converted to $\left( {\epsilon ,\delta }\right)$ -DP and vice versa.

Corollary 1.1 (Conversion between GDP and DP, Corollary 2.13 of [9]). A mechanism is $\mu$ -GDP if and only if it is $\left( {\epsilon ,\delta \left( \epsilon \right) }\right)$ -DP for all $\epsilon \geq 0$ , where

$$
\delta \left( \epsilon \right) = \Phi \left( {-\frac{\epsilon }{\mu } + \frac{\mu }{2}}\right) - {e}^{\epsilon }\Phi \left( {-\frac{\epsilon }{\mu } - \frac{\mu }{2}}\right) .
$$

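This conversion can be evaluated numerically. The sketch below (our illustration, not from the paper) computes $\delta(\epsilon)$ and inverts it by bisection to recover the $\mu$ used by Algorithms 2 and 3, relying on the fact that $\delta(\epsilon)$ is increasing in $\mu$ for fixed $\epsilon$:

```python
from math import erf, exp, sqrt

def Phi(x):
    # standard normal CDF
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def delta_of_eps(eps, mu):
    """delta(eps) for a mu-GDP mechanism (Corollary 1.1)."""
    return Phi(-eps / mu + mu / 2.0) - exp(eps) * Phi(-eps / mu - mu / 2.0)

def mu_for(eps, delta, lo=1e-6, hi=20.0, iters=200):
    """Bisection for the mu whose delta(eps) matches the target delta."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if delta_of_eps(eps, mid) < delta:
            lo = mid
        else:
            hi = mid
    return hi
```

For example, `mu_for(1.0, 1e-6)` returns the GDP budget corresponding to $(\epsilon, \delta) = (1, 10^{-6})$.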

Theorem 1.2 (Gaussian Mechanism, Theorem 2.7 of [9]). Define a randomized algorithm ${GM}$ that operates on a statistic $\theta$ as ${GM}\left( {x,\mu }\right) = \theta \left( x\right) + \eta$ , where $\eta \sim \mathcal{N}\left( {0,\operatorname{sens}{\left( \theta \right) }^{2}/{\mu }^{2}}\right)$ and sens is the ${l}_{2}$ -sensitivity of the statistic $\theta$ . Then, ${GM}$ is $\mu$ -GDP.
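A minimal Gaussian mechanism along these lines (our sketch; the $l_2$-sensitivity must be supplied by the caller):

```python
import numpy as np

def GM(stat, sens, mu, rng=None):
    """mu-GDP Gaussian mechanism: add N(0, (sens/mu)^2) noise elementwise
    to the statistic, where sens is its l2-sensitivity."""
    rng = rng or np.random.default_rng()
    return stat + rng.normal(0.0, sens / mu, size=np.shape(stat))
```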

Most attempts have focused on making gradient boosting private with tree base learners. For example, [5] proposed DPBoost, privatizing gradient-boosted regression trees by finding splits using the exponential mechanism and computing numeric leaf values using the Laplace mechanism. In a similar fashion, DP-XGBoost was suggested, additionally privatizing the quantile sketching step of XGBoost [6]. DPBoost and DP-XGBoost suffered from low accuracy under strict privacy constraints, as they had to spend privacy budget not only on leaf value computation but also on the split finding step (quantile sketching in DP-XGBoost). DP-EBM overcame this by restricting each tree to use only one feature and selecting split points at random (hence no privacy budget is consumed in split finding) [7]. It showed improved performance compared to DPBoost. However, DP-EBM takes much longer to learn a model, since it requires learning many more trees.

Absent privacy, gradient boosting on linear models does not improve performance, since linear models are closed under linear combinations, and the base learner can already exactly optimize over this class. But with differential privacy, it is no longer possible to exactly optimize over the base class, so gradient boosting has the potential to give improvements. Moreover, there are private linear learners that make very efficient use of the privacy budget. We adopt a state-of-the-art approach to privately learn linear models, AdaSSP [10], which adds noise to the sufficient statistics of a linear model.

§ 2 DIFFERENTIALLY PRIVATE GRADIENT BOOSTING WITH LINEAR MODELS


The flexibility of gradient boosting arises from two choices: (i) a loss function and (ii) a class of base learners. We experiment with three loss functions (squared, logistic, and hinge) and fix the base learner class to a private ridge regressor (via AdaSSP). In this section, we describe the loss functions and their gradients (subsection 2.1) and the private base learner class (subsection 2.2).

§ 2.1 LOSS FUNCTIONS AND GRADIENTS

For binary classification problems, we have a dataset $\mathcal{D}$ of $n$ data points, composed of covariates ${x}_{i} \in {\mathbb{R}}^{p}$ and labels ${y}_{i} \in \{ - 1,1\}$ (or ${y}_{i} \in \{ 0,1\}$ for logistic loss), $\forall i \in \left\lbrack n\right\rbrack$ . The final model $f\left( x\right)$ takes the covariates ${x}_{i}$ as input and outputs the score ${s}_{i}$ , which can later be translated to a binary prediction $\widehat{y} = \mathbb{1}\left( {{s}_{i} > 0}\right) * 2 - 1$ (or $\widehat{y} = \mathbb{1}\left( {{s}_{i} > 0}\right)$ for the ${y}_{i} \in \{ 0,1\}$ case).


Let $\mathcal{L}$ be the loss function, $T$ the number of boosting rounds, and ${f}_{t}$ the model learned at iteration $t \in \left\lbrack T\right\rbrack$ . At the $t$ -th boosting round, the goal of Algorithm 1 is to obtain

$$
{\theta }_{t} = \arg \mathop{\min }\limits_{\theta }\mathop{\sum }\limits_{{i = 1}}^{n}\mathcal{L}\left( {{y}_{i},\mathop{\sum }\limits_{{k = 1}}^{{t - 1}}{f}_{k}\left( {x}_{i}\right) + b\left( {{x}_{i};\theta }\right) }\right) . \tag{2}
$$

The ${f}_{t}\left( x\right) = b\left( {x;{\theta }_{t}}\right)$ can be approximated by steepest gradient descent, where the gradient is taken with respect to the score prediction ${s}_{i}$ and evaluated at the current score ${s}_{i} \mathrel{\text{ := }} \mathop{\sum }\limits_{{k = 1}}^{{t - 1}}{f}_{k}\left( {x}_{i}\right)$ . The components of the negative gradient at $t$ can be written as

$$
{g}_{i,t} \mathrel{\text{ := }} - {\left. \frac{\partial \mathcal{L}\left( {{y}_{i},{s}_{i}}\right) }{\partial {s}_{i}}\right| }_{{s}_{i} = \mathop{\sum }\limits_{{k = 1}}^{{t - 1}}{f}_{k}\left( {x}_{i}\right) }. \tag{3}
$$

Table 1 lists the three loss functions we experiment with in this paper and their negative gradients. Note that the squared loss is unbounded, hence we clip the gradient to fall within $\left( -z, z\right)$ for some $z \in \mathbb{R}$ (further explained in the next section).

Table 1: Loss functions and gradients, where $\sigma \left( x\right) = \frac{1}{\left( 1 + {e}^{-x}\right) }$ and $\mathbb{1}\left( \cdot \right)$ is the indicator function.

Loss | $\mathcal{L}\left( {{y}_{i},{s}_{i}}\right)$ | $g\left( {s}_{i}\right) = - \partial \mathcal{L}\left( {{y}_{i},{s}_{i}}\right) /\partial {s}_{i}$
Squared | $\frac{1}{2}{\left( {y}_{i} - {s}_{i}\right) }^{2}$ | ${y}_{i} - {s}_{i}$
Logistic | $- {y}_{i}\ln \left( {\sigma \left( {s}_{i}\right) }\right) - \left( {1 - {y}_{i}}\right) \ln \left( {1 - \sigma \left( {s}_{i}\right) }\right)$ | ${y}_{i} - \sigma \left( {s}_{i}\right)$
Hinge | $\max \left( {0,1 - {y}_{i}{s}_{i}}\right)$ | $\mathbb{1}\left( {1 - {y}_{i}{s}_{i} > 0}\right) {y}_{i}$

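The gradient column of Table 1 translates directly to code; a hedged sketch of ours (labels in $\{-1, 1\}$, except $\{0, 1\}$ for logistic loss):

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def neg_gradient(loss, y, s):
    """Negative gradients g(s_i) from Table 1."""
    if loss == "squared":
        return y - s
    if loss == "logistic":
        return y - sigmoid(s)          # y in {0, 1}
    if loss == "hinge":
        return np.where(1.0 - y * s > 0.0, y, 0.0)
    raise ValueError(loss)
```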

§ 2.2 PRIVATE RIDGE REGRESSOR AS A BASE LEARNER

With the negative gradients ${g}_{i,t}$ computed as in eq. (3), we fit the next base learner ${f}_{t}\left( x\right)$ to those gradients by minimizing the empirical risk with a ridge regularizer. As we fixed the base learner class to linear models, we may express ${f}_{t}\left( x\right) = {\theta }_{t}^{\top }x$ , and the new model is obtained by


$$
{\theta }_{t} = \arg \mathop{\min }\limits_{\theta }\mathop{\sum }\limits_{{i = 1}}^{n}{\left( {\theta }^{\top }{x}_{i} - {g}_{i,t}\right) }^{2} + \lambda \parallel \theta {\parallel }_{2}^{2}, \tag{4}
$$

where $\lambda \in \mathbb{R}$ is the ridge regularization hyperparameter. Note that the objective minimized in this step differs from the loss function we chose when computing gradients. Let $X \in {\mathbb{R}}^{n \times p}$ be the matrix with ${x}_{i}$ 's in its rows and ${g}_{t} \in {\mathbb{R}}^{n}$ the vector of all samples' gradients at round $t$ (i.e., ${g}_{i,t}$ ). Absent privacy, the above minimization yields

$$
{\theta }_{t} = {\left( {X}^{\top }X + \lambda I\right) }^{-1}{X}^{\top }{g}_{t}. \tag{5}
$$

Using the squared loss function, we can analytically show that ${\theta }_{t} = 0\;\forall t > 1$ when $\lambda = 0$ , and that the updates are close to zero for small $\lambda$ values. This explains why gradient boosting is not applied to linear base learners absent privacy constraints.
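This zero-update fact is easy to verify numerically under the stated assumption $\lambda = 0$ (a quick check of ours, not from the paper): the round-2 target $X^{\top}g_2 = X^{\top}y - X^{\top}X\theta_1$ vanishes by construction.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))
y = rng.normal(size=100)

XtX_inv = np.linalg.inv(X.T @ X)   # invertible almost surely for random X
theta1 = XtX_inv @ X.T @ y         # round 1: fit to g_1 = y (eq. 5, lambda = 0)
g2 = y - X @ theta1                # squared-loss negative gradient
theta2 = XtX_inv @ X.T @ g2        # round 2: fit to g_2
```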

To meet the privacy constraints, we adopt AdaSSP to privately learn this ridge regressor ${\theta }_{t}$ (Algorithm 2 of [10]). Let $\mathcal{X}$ and $\mathcal{Y}$ be the domains of our data covariates and labels, respectively. We define the bounds on the data domain $\parallel \mathcal{X}\parallel = \mathop{\sup }\limits_{{x \in \mathcal{X}}}\parallel x\parallel$ and $\parallel \mathcal{Y}\parallel = \mathop{\sup }\limits_{{y \in \mathcal{Y}}}\left| y\right|$ . Given the privacy budget $\epsilon ,\delta$ to guarantee $\left( {\epsilon ,\delta }\right)$ -DP, and the bounds $\parallel \mathcal{X}\parallel$ and $\parallel \mathcal{Y}\parallel$ on ${x}_{i}$ and ${g}_{i,t}$ , respectively, AdaSSP converts $\left( {\epsilon ,\delta }\right)$ -DP to $\mu$ -GDP with an appropriate $\mu$ , and adds calibrated Gaussian noise to three sufficient statistics: 1) ${X}^{\top }X$ , 2) ${X}^{\top }{g}_{t}$ , and 3) $\lambda$ . A detailed description of the AdaSSP algorithm for learning one ridge regressor is deferred to Appendix A.2.
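The release-and-assemble pattern can be sketched as follows. The noise scales here are illustrative placeholders, not AdaSSP's exact calibration (which also adapts $\lambda$ to the noisy minimum eigenvalue of $X^{\top}X$):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 1000, 5
X = rng.normal(size=(n, p))
g = rng.normal(size=n)
sigma = 0.1                             # placeholder noise scale

# Noisy releases of the three sufficient statistics.
XtX_hat = X.T @ X + rng.normal(0.0, sigma, size=(p, p))
XtX_hat = (XtX_hat + XtX_hat.T) / 2.0   # keep the released matrix symmetric
Xg_hat = X.T @ g + rng.normal(0.0, sigma, size=p)
lam_hat = 1.0 + abs(rng.normal(0.0, sigma))

# Assemble the private ridge estimate from the noisy statistics only.
theta_hat = np.linalg.solve(XtX_hat + lam_hat * np.eye(p), Xg_hat)
```

Everything downstream touches only the released statistics, which is why the composition argument below works.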

Let $\widehat{{X}^{\top }X} = {GM}\left( {{X}^{\top }X,{\mu }_{1}}\right)$ , $\widehat{{X}^{\top }{g}_{t}} = {GM}\left( {{X}^{\top }{g}_{t},{\mu }_{2}}\right)$ , and $\widehat{\lambda } = {GM}\left( {\lambda ,{\mu }_{3}}\right)$ be the private releases of the sufficient statistics from a single instantiation of AdaSSP to learn ${\theta }_{t}$ , where ${GM}$ is defined in Theorem 1.2. The final model $\widehat{{\theta }^{ \star }}$ can be expressed as

$$
\widehat{{\theta }^{ \star }} = \mathop{\sum }\limits_{{t = 1}}^{T}\widehat{{\theta }_{t}} = {\left( \widehat{{X}^{\top }X} + \widehat{\lambda }I\right) }^{-1}\mathop{\sum }\limits_{{t = 1}}^{T}\widehat{{X}^{\top }{g}_{t}} \tag{6}
$$

where the initial ${g}_{i,1} = {y}_{i}$ . Notice that, for a boosted model, we may call ${GM}\left( {{X}^{\top }X,{\mu }_{1}}\right)$ and ${GM}\left( {\lambda ,{\mu }_{3}}\right)$ just once and repeat only the second part ${GM}\left( {{X}^{\top }{g}_{t},{\mu }_{2}}\right)$ for $T$ rounds, instead of straightforwardly repeating Algorithm 3 for $T$ rounds.


Finally, we propose BoostedAdaSSP (Algorithm 2), a differentially private gradient boosting algorithm with linear base learners. Algorithm 2 assumes a binary classification task and generalizes to any choice of loss function $\mathcal{L}$ . In the second line inside the for loop, we clip the computed gradients to enforce ${g}_{i,t} \in \left\lbrack {-\parallel \mathcal{Y}\parallel ,\parallel \mathcal{Y}\parallel }\right\rbrack$ , if this is not naturally satisfied. The final output $\theta$ defines the score predictor $f\left( x\right) = {\theta }^{\top }x$ , where a score above 0 means a positive label $\left( {+1}\right)$ and below 0 a negative label ( $-1$ or 0 ).

Algorithm 2 BoostedAdaSSP (data $X, y$ , privacy parameters $\epsilon ,\delta$ , split ratio $a, b, c$ , bounds on $\parallel \mathcal{X}\parallel ,\parallel \mathcal{Y}\parallel$ )

Initialize $\theta = 0$

Find $\mu$ such that $\mu$ -GDP satisfies $\left( {\epsilon ,\delta }\right)$ -DP. # Corollary 1.1

Calibrate ${\mu }_{1},{\mu }_{2},{\mu }_{3}$ such that ${\mu }_{1} : {\mu }_{2} : {\mu }_{3} = a : b : c$ and $\mu = \sqrt{{\mu }_{1}^{2} + {\mu }_{2}^{2} + {\mu }_{3}^{2}}$ .

$\widehat{{X}^{\top }X} = {GM}\left( {{X}^{\top }X,{\mu }_{1}}\right)$ and $\widehat{\lambda } = {GM}\left( {\lambda ,{\mu }_{3}}\right)$ # instantiate AdaSSP (parts 1 & 3)

$\Gamma = {\left( \widehat{{X}^{\top }X} + \widehat{\lambda }I\right) }^{-1}$

for $t \in \left\lbrack T\right\rbrack$ do

$s = {X\theta }$ # current score prediction

${g}_{t} = - {\nabla }_{s}\mathcal{L}\left( {y,s}\right)$ # compute gradient, clip as needed

${\theta }_{t} = \Gamma \widehat{{X}^{\top }{g}_{t}}$ , where $\widehat{{X}^{\top }{g}_{t}} = {GM}\left( {{X}^{\top }{g}_{t},\frac{{\mu }_{2}}{\sqrt{T}}}\right)$ # instantiate AdaSSP (part 2)

$\theta = \theta + {\theta }_{t}$ # update model

end for

return $\theta$


$* {GM}\left( {X,\mu }\right)$ denotes a Gaussian mechanism to guarantee $\mu$ -GDP for the private release of a statistic $X$ , and uses the bounds $\parallel \mathcal{X}\parallel$ to compute the sensitivity internally.

Corollary 2.1 (Composition of GDP, Corollary 3.3 of [9]). The n-fold composition of ${\mu }_{i}$ -GDP mechanisms is $\sqrt{{\mu }_{1}^{2} + \cdots + {\mu }_{n}^{2}}$ -GDP.

Theorem 2.2. When $\parallel \mathcal{X}\parallel = \mathop{\sup }\limits_{{x \in \mathcal{X}}}\parallel x\parallel$ and $\parallel \mathcal{Y}\parallel = \mathop{\sup }\limits_{{y \in \mathcal{Y}}}\left| y\right|$ , Algorithm 2 satisfies $\mu$ -GDP and $\left( {\epsilon ,\delta }\right)$ -DP.

Proof. From Corollary 2.1, the total privacy cost is $\sqrt{{\mu }_{1}^{2} + T{\left( \frac{{\mu }_{2}}{\sqrt{T}}\right) }^{2} + {\mu }_{3}^{2}} = \sqrt{{\mu }_{1}^{2} + {\mu }_{2}^{2} + {\mu }_{3}^{2}} = \mu$ . The conversion from GDP to DP follows from Corollary 1.1.
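Putting the pieces together, here is a self-contained sketch of Algorithm 2 with squared loss. This is our illustration, not the authors' implementation: it takes the overall GDP budget $\mu$ directly (skipping the $(\epsilon,\delta) \to \mu$ conversion), uses a simplified sensitivity for the $\lambda$ release, and assumes the rows of $X$ are pre-clipped.

```python
import numpy as np

def boosted_adassp(X, y, mu, T, X_bound, Y_bound, lam=1.0,
                   ratio=(1.0, 1.0, 1.0), rng=None):
    """Sketch of BoostedAdaSSP (Algorithm 2) with squared loss.
    Assumes ||x_i|| <= X_bound for every row; gradients are clipped to
    [-Y_bound, Y_bound]. mu is the total GDP budget, split by `ratio`."""
    rng = rng or np.random.default_rng(0)
    n, p = X.shape
    a, b, c = ratio
    s = mu / np.sqrt(a * a + b * b + c * c)
    mu1, mu2, mu3 = a * s, b * s, c * s
    sens_xx = X_bound ** 2                    # l2-sensitivity of X^T X
    # AdaSSP parts 1 & 3: release X^T X and lambda once.
    XtX_hat = X.T @ X + rng.normal(0.0, sens_xx / mu1, size=(p, p))
    XtX_hat = (XtX_hat + XtX_hat.T) / 2.0
    lam_hat = max(lam + rng.normal(0.0, sens_xx / mu3), 0.0)
    Gamma = np.linalg.inv(XtX_hat + lam_hat * np.eye(p))
    sens_xg = X_bound * Y_bound               # l2-sensitivity of X^T g
    theta = np.zeros(p)
    for _ in range(T):                        # part 2: budget mu2/sqrt(T) per round
        g = np.clip(y - X @ theta, -Y_bound, Y_bound)
        Xg_hat = X.T @ g + rng.normal(0.0, sens_xg * np.sqrt(T) / mu2, size=p)
        theta += Gamma @ Xg_hat
    return theta
```

The per-round budget $\mu_2/\sqrt{T}$ makes the loop's total cost $\sqrt{T \cdot (\mu_2/\sqrt{T})^2} = \mu_2$, matching the composition in the proof above.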

§ 3 EXPERIMENTS

Algorithm 2 was evaluated on five real-world datasets from Kaggle and LIBSVM (details in Appendix A.1) with three choices of loss function (squared, logistic, and hinge), a varying number of iterations, and values of epsilon from 0.01 to 10. The privacy parameter $\delta = {10}^{-6}$ for all experiments. Lastly, a non-private version was also computed for comparison. Model performance is measured by F1 score and AUROC. Note that AUROC may report a good value (close to 1) even when most of the minority class is misclassified; see Appendix A.1 for how imbalanced each dataset is.

To compare the performance of BoostedAdaSSP against a tree-based private gradient boosting algorithm, we choose DP-EBM [7], which reported the best performance among alternatives [5, 6]. At each boosting round, DP-EBM learns separate trees on individual features, and split points are chosen completely at random (this allows more efficient privacy budgeting). We ran DP-EBM with four choices of the maximum number of leaves $\{ 2,3,{10},{100}\}$ . Setting this value to 100 yielded the best F1 score, hence we report this result in the main paper and defer the other results to Appendix A.3.


§ 3.1 BOOSTED LINEAR MODELS UNDER NON-PRIVATE REGIME

Figure 1 shows the training loss of non-private gradient boosting, each line corresponding to a choice of loss function (squared, logistic, and hinge). For the squared loss (green line), we see virtually no improvement over boosting rounds, as discussed in Section 2.2. On the other hand, we observe a decrease in training loss for the logistic and hinge losses when additional boosting rounds are introduced. This is because the gradients of these loss functions are non-linear with respect to the score predictions (whereas the gradient is linear for squared loss).

< g r a p h i c s >

Figure 1: Training loss versus number of rounds without privacy constraints.

|
| 183 |
+
|
| 184 |
+
§ 3.2 BOOSTEDADASSP (LINEAR BASE LEARNER) VS. DP-EBM (TREE BASE LEARNER)
|
| 185 |
+
|
| 186 |
+
Figure 2a and Figure 2b show the performance of private gradient boosting measured by F1 score and AUROC, respectively. Each line corresponds to BoostedAdaSSP with one of three loss functions or to DP-EBM with 100 leaves per tree (red). In most cases, we observe improved performance as $\epsilon$ increases ($\delta = {10}^{-6}$ is fixed for all experiments). This aligns with our expectation that a larger privacy budget $\epsilon$ allows less noise to be introduced, leading to better performance. However, the F1 score of DP-EBM with at most 100 rounds of boosting behaves counter-intuitively. Since the AUROC score follows our expectation, we may construe this as DP-EBM with few rounds of boosting producing a model that misclassifies the majority of the minority class, to the point that the privacy noise sometimes helps correctly classify the minority labels.
|
| 187 |
+
|
| 188 |
+
No loss function outperformed the others in all cases; rather, the best-performing loss function for BoostedAdaSSP depends on the dataset (detailed results on individual datasets are deferred to Appendix A.5). Overall, BoostedAdaSSP provides higher F1 scores at most values of the total number of boosting rounds, whereas DP-EBM provides slightly better AUROC scores. (Note that the datasets we experiment on are mostly imbalanced.)
|
| 189 |
+
|
| 190 |
+
|
| 191 |
+
|
| 192 |
+
Figure 2: Averaged scores vs. Epsilon
|
| 193 |
+
|
| 194 |
+
§ 3.3 EFFECT OF BOOSTING UNDER PRIVACY CONSTRAINTS
|
| 195 |
+
|
| 196 |
+
Figure 3a and Figure 3b show the performance (y-axis) over boosting rounds (x-axis) at four different privacy levels. The F1 scores of BoostedAdaSSP and DP-EBM both improve as the number of boosting rounds increases, but DP-EBM requires significantly more boosting rounds to yield an F1 score comparable to BoostedAdaSSP. To run 1000 rounds of boosting, DP-EBM takes about 60.9 seconds, whereas BoostedAdaSSP takes only about 4.5 seconds. Overall, we conclude that, when the number of boosting rounds is restricted to be small (i.e., when we want to limit the time budget), there exists a BoostedAdaSSP with some loss function that is preferable to DP-EBM for all epsilon values, when we evaluate performance based on F1 scores.
|
| 199 |
+
|
| 200 |
+
|
| 201 |
+
|
| 202 |
+
Figure 3: Averaged scores vs. number of boosting rounds
|
| 203 |
+
|
| 204 |
+
Additionally, for BoostedAdaSSP with squared loss, we compare the ratio between the test-set F1 score of a non-boosted model (i.e., with 1 iteration) and that of a model with some rounds of iterations (10, 100, and 1000 are plotted). Figure 4 shows that the effect of boosting (measured by this ratio) diminishes both as $\epsilon$ goes to infinity and as $\epsilon$ goes to zero. This can be explained by considering the limiting cases $\epsilon = \infty$ and $\epsilon = 0$. As the privacy budget $\epsilon$ goes to infinity, BoostedAdaSSP gradually reduces the amount of noise added to the output and eventually behaves like the non-private regime; we may therefore expect no effect of boosting, the same as in the non-private case (see Figure 1). As the privacy budget $\epsilon$ approaches 0, we enter the high-privacy regime where the privacy noise dominates the signal. In this case, it is difficult to expect any learning algorithm, let alone additional boosting rounds, to learn anything.
|
| 205 |
+
|
| 206 |
+
As a result, we observe a bell-curve shape in Figure 4, which implies that there exists a sweet spot in terms of the privacy budget $\epsilon$ where boosting has the maximum impact. However, the sweet spot observed here does not necessarily indicate the best F1 score. The same observations are shown in Figure 9 for logistic loss and Figure 10 for hinge loss.
|
| 207 |
+
|
| 208 |
+
|
| 209 |
+
|
| 210 |
+
Figure 4: Effect of DP in gradient boosting with mean squared loss
|
| 211 |
+
|
| 212 |
+
|
| 213 |
+
|
| 214 |
+
§ 4 CONCLUSION AND FUTURE WORK
|
| 215 |
+
|
| 216 |
+
We proposed a differentially private gradient boosting algorithm with linear base learners by adopting AdaSSP to privately train linear models. In each boosting round, a linear model is privately trained to approximate the gradient of the loss function at the current score prediction. Without privacy, gradient boosting of a linear model is expected to be the same as (OLS) or similar to (ERM with small regularization) a single linear model learned in one shot; hence, in practice, gradient boosting has primarily focused on learning with tree base models. However, in the high-privacy regime, BoostedAdaSSP provides a higher F1 score than the state-of-the-art tree-based differentially private gradient boosting algorithm (DP-EBM). BoostedAdaSSP also converges to a good performance level with fewer boosting rounds than DP-EBM at a fixed privacy level.
|
| 217 |
+
|
| 218 |
+
Although the results presented in this paper already seem promising, there are a few ways to further improve the algorithm. One direction is to introduce more hyperparameters. For example, we may introduce a step size $\eta$ in the last step inside the for loop of Algorithm 2: when a weak base learner ${\theta }_{t}$ is added to the final model $\theta$, we may multiply ${\theta }_{t}$ by $\eta$, as in gradient descent algorithms (in this view, Algorithm 2 uses $\eta = 1$). We may also attempt to clip the gradients more aggressively for the squared and logistic losses as we iterate over boosting rounds, so that less noise is added when instantiating the second part of AdaSSP.
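The step-size idea can be sketched as follows (a toy non-private loop on squared loss with a single no-intercept feature and made-up data, not the paper's Algorithm 2): with $\eta < 1$ each weak learner is shrunk before being added, and many small steps converge to the same fit as the one-shot update.

```python
def boost(xs, ys, rounds, eta):
    """Toy gradient boosting of no-intercept linear learners on squared loss.

    eta < 1 shrinks each weak learner, as suggested above for BoostedAdaSSP;
    eta = 1 recovers the plain additive update. No privacy noise is modeled here.
    """
    theta = 0.0
    for _ in range(rounds):
        residuals = [y - theta * x for x, y in zip(xs, ys)]
        theta_t = sum(x * r for x, r in zip(xs, residuals)) / sum(x * x for x in xs)
        theta += eta * theta_t  # the proposed step-size hyper-parameter
    return theta

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.1, 5.9]
full = boost(xs, ys, rounds=1, eta=1.0)     # one-shot OLS fit
shrunk = boost(xs, ys, rounds=50, eta=0.1)  # many small steps approach the same fit
```

Under privacy, the interesting question is whether such shrinkage also damps the accumulated noise across rounds.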
|
NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/zaJsDuwwdlJ/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,171 @@
| 1 |
+
# Interactive Rationale Extraction for Text Classification
|
| 2 |
+
|
| 3 |
+
Anonymous Author(s)
|
| 4 |
+
|
| 5 |
+
Affiliation
|
| 6 |
+
|
| 7 |
+
Address
|
| 8 |
+
|
| 9 |
+
email
|
| 10 |
+
|
| 11 |
+
## Abstract
|
| 12 |
+
|
| 13 |
+
Deep neural networks show superior performance in text classification tasks, but their poor interpretability and explainability can cause trust issues. For text classification problems, the identification of textual sub-phrases, or "rationales", is one strategy for finding the most influential portions of text, which can then be presented as critical to the classification decision. Selective models for rationale extraction faithfully explain a neural classifier's predictions by training a rationale generator and a text classifier jointly: the generator identifies rationales and the classifier predicts a category solely based on them. The selected rationales are then viewed as explanations for the classifier's predictions. Humans exchange such explanations to achieve better problem-solving performance. To imitate this interactive process, we propose a simple interactive rationale extraction architecture that selects a pair of rationale and prediction from two independently trained selective models. We show that this architecture outperforms both base models on the IMDB movie reviews and 20 Newsgroups datasets in terms of predictive performance.
|
| 14 |
+
|
| 15 |
+
## 1 Introduction
|
| 16 |
+
|
| 17 |
+
Selective (or select-predict) models for rationale extraction in text classification (Lei et al., 2016; Bastings et al., 2019), with the general structure shown in Figure 1, are designed to extract a set of words, namely a rationale (Zaidan et al., 2007), from an original text such that, for prediction purposes, the rationale is sufficient as input for the classification model to obtain the same prediction as on the whole text. For the purpose of interpretability, the rationale should be concise and contiguous. A rationale extraction model is faithful (Lipton, 2018) if the extracted rationales are truly the information used for classification (Jain et al., 2020). Extracting rationales that satisfy these criteria is complex from a machine learning perspective and becomes more difficult with only instance-level supervision (i.e., without token-level annotations) (Jain et al., 2020). A single model's identification of rationales can suffer from high variance because of the complex training process. An ensemble of more than one model helps to reduce this variance, which motivates exploring how to make use of two rationale extraction models and how to choose between them when they make different predictions.
|
| 18 |
+
|
| 19 |
+
When two humans have different answers to a problem, they tend to exchange their reasons or explanations, after which one of them may change their mind. To illustrate why this interaction is effective, consider the problem of proving a mathematical conjecture: because searching for a correct proof, which then settles the conjecture, is usually much harder than verifying a proof (e.g., $\mathcal{P} \subseteq \mathcal{N}\mathcal{P}$ in computation theory), one who is incapable of finding a good proof can often still tell whether a given proof is good. Considering the complexity for a generator of searching among all possible rationales with only remote instance-level supervision, rationale extraction can be much more difficult than classification.
|
| 20 |
+
|
| 21 |
+

|
| 22 |
+
|
| 23 |
+
Figure 1: Schematic of selective rationale extraction models where $x$ is an embedded text, $g$ is a generator and $f$ is a classifier. Generator $g$ extracts a rationale $r$ based on which classifier $f$ makes a prediction $y$ .
|
| 24 |
+
|
| 25 |
+
We may then consider selective models for rationale extraction to be naturally compatible with the interactive pattern of humans by viewing the rationales extracted by a generator as the proofs for the decisions of its classifier, which means the interaction between two base models can be performed by the exchange of their rationales. Subsequently, the problem becomes how to decide if a rationale is good or not so that we know which pairs of rationale and prediction are appropriate choices when two base models make different predictions. A good rationale here is expected to give a correct prediction when input to a decent classifier.
|
| 26 |
+
|
| 27 |
+
Intuitively, a good rationale should contain strong indicators for the correct gold label rather than insignificant words that do not contribute to classification, which leads to two simple rules for handling base models' disagreements: first, a good rationale is more likely to produce consistent predictions among classifiers (i.e., a good explanation convinces people); second, a good rationale is more likely to produce a higher confidence level (Section 2.2) for one classifier's prediction (i.e., one with a good reason is often confident). The two rules create a basis for classification, as opposed to random guessing over otherwise randomly selected words. Note that both rules assume that the probability of base models extracting strong indicators for wrong labels is very low, which should hold for decent generators and decent classifiers (i.e., better than random guessing).
|
| 28 |
+
|
| 29 |
+
To imitate the interactive pattern of humans in problem solving, we introduce Interactive Rationale Extraction for Text Classification, which interactively connects two independently trained selective rationale extraction models. By selecting pairs of rationale and prediction from the base models using the simple rules above, the architecture achieves higher predictive performance than either base model of similar performance on IMDB movie reviews and 20 Newsgroups. In addition, because the interactive architecture makes decisions solely based on the base models' rationales, the faithfulness and interpretability of those rationales are not compromised.
|
| 30 |
+
|
| 31 |
+
## 2 Background
|
| 32 |
+
|
| 33 |
+
### 2.1 Selective Rationale Extraction
|
| 34 |
+
|
| 35 |
+
The original selective rationale extraction model was proposed by Lei et al. (2016) with the architecture shown in Figure 1. Their model faithfully explains a neural network-based classifier's predictions by jointly training a generator and a classifier with only instance-level supervision. We summarize their work as follows. The generator $g$ consumes the embedded tokens of the original text, namely $x = \left\lbrack {{x}_{1},{x}_{2},\ldots ,{x}_{l}}\right\rbrack$, where $l$ is the number of tokens in the text and each token ${x}_{i} \in {\mathbb{R}}^{d}$ is a $d$-dimensional embedding vector, and outputs a probability distribution $p\left( {z \mid x}\right)$ over the hard mask $z = \left\lbrack {{z}_{1},{z}_{2},\ldots ,{z}_{l}}\right\rbrack$, where each value ${z}_{i} \in \{ 0,1\}$ denotes whether the corresponding token is selected. A rationale $r$ is defined as $(z, x)$, representing the hard mask $z$ over the original input $x$. Subsequently, the classifier $f$ takes $(z, x)$ as input to make a prediction $f\left( {z, x}\right)$. Given gold label $y$, the loss function used to optimize both generator $g$ and classifier $f$ is defined as
|
| 36 |
+
|
| 37 |
+
$$
|
| 38 |
+
\operatorname{loss}\left( {z, x, y}\right) = \parallel f\left( {z, x}\right) - y{\parallel }_{2}^{2} + {\lambda }_{1}\parallel z\parallel + {\lambda }_{2}\mathop{\sum }\limits_{{i = 1}}^{{l - 1}}\left| {{z}_{i} - {z}_{i + 1}}\right| \tag{1}
|
| 39 |
+
$$
|
| 40 |
+
|
| 41 |
+
which consists of three parts: prediction loss, selection loss, and contiguity loss. The parameters ${\lambda }_{1}$ and ${\lambda }_{2}$ in the loss function tune the constraints on rationales (i.e., conciseness and contiguity). Jain et al. (2020) modified the loss function to apply hard constraints on rationales (i.e., a maximum length) by not punishing a model when a given limit on the number of words is not reached. Because of the absence of token-level supervision and the use of hard masking, which is not differentiable, Lei et al. (2016) turned to REINFORCE (Williams, 1992) for gradient estimation, which causes high variance and sensitivity to hyper-parameters (Jain et al., 2020). Following the select-predict architecture proposed by Lei et al. (2016), Bastings et al. (2019) explored a reparameterization heuristic called HardKuma for gradient estimation. Furthermore, Guerreiro and Martins (2021) exposed the trade-off between differentiable masking and hard constraints in selective rationale extraction models.
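A direct transcription of the loss in equation (1) makes the three terms concrete (a sketch with hypothetical toy values, assuming a squared-error prediction term over the output vector):

```python
def rationale_loss(z, pred, y, lam1, lam2):
    """Loss of Lei et al. (2016), eq. (1): prediction + selection + contiguity."""
    prediction = sum((p - t) ** 2 for p, t in zip(pred, y))  # ||f(z,x) - y||_2^2
    selection = lam1 * sum(z)                                # lambda_1 * number of kept tokens
    contiguity = lam2 * sum(abs(z[i] - z[i + 1]) for i in range(len(z) - 1))
    return prediction + selection + contiguity

# Toy example: 5-token mask with one gap, a slightly-off 2-class prediction.
z = [1, 1, 0, 1, 0]
loss = rationale_loss(z, pred=[0.9, 0.1], y=[1.0, 0.0], lam1=0.01, lam2=0.01)
```

Note how the gap in `z` is charged twice by the contiguity term (one transition into and one out of the gap), which is what pushes selections toward contiguous spans.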
|
| 42 |
+
|
| 43 |
+

|
| 44 |
+
|
| 45 |
+
Figure 2: Schematic of our interactive rationale extraction where rationales are exchanged. The notations follow Figure 1.
|
| 46 |
+
|
| 47 |
+
### 2.2 Confidence Level
|
| 48 |
+
|
| 49 |
+
Confidence level (CL) indicates how far a neural network's prediction is from being neutral. Given a neural network's non-probabilistic output $k = \left\lbrack {{k}_{1},{k}_{2},\ldots ,{k}_{n}}\right\rbrack$ for an $n$-class classification, Kumar et al. (2022) defined the CL of the classification with a softmax function
|
| 50 |
+
|
| 51 |
+
$$
|
| 52 |
+
{CL}\left( k\right) = \frac{\exp \left( {\max \left( k\right) }\right) }{\mathop{\sum }\limits_{{i = 1}}^{n}\exp \left( {k}_{i}\right) } \tag{2}
|
| 53 |
+
$$
|
| 54 |
+
|
| 55 |
+
where $\max \left( k\right)$ is the value of the output node ${k}_{i}$ with the highest value (i.e., $i$ is the final prediction).
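Expression (2) is a one-liner in practice; the sketch below (our illustration, not the paper's code) subtracts the maximum logit before exponentiating for numerical stability, which leaves the ratio unchanged:

```python
import math

def confidence_level(k):
    """CL of eq. (2): the softmax probability of the argmax output node."""
    m = max(k)  # shift by the max; exp(ki - m)/sum exp(kj - m) equals the original ratio
    exps = [math.exp(ki - m) for ki in k]
    return exps[k.index(m)] / sum(exps)

# A neutral output gives the minimum CL of 1/n; a peaked output approaches 1.
print(confidence_level([1.0, 1.0, 1.0]))  # 1/3
print(confidence_level([8.0, 0.0, 0.0]))  # close to 1
```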
|
| 56 |
+
|
| 57 |
+
Guo et al. (2017) stated that a classification network should not only have high accuracy but also indicate how likely each prediction is to be correct, for trust purposes. In addition, their study of neural networks' calibration suggested that accuracy, even if not nearly identical to CL for some networks, is generally positively correlated with CL. This means that, when two base models with similar expected performance make different predictions, the prediction with the higher CL is generally more likely to be correct.
|
| 58 |
+
|
| 59 |
+
## 3 Algorithm
|
| 60 |
+
|
| 61 |
+
As demonstrated in Figure 2, after the interaction between two base select-predict models, a total of 4 predictions are generated: ${y}_{1} = {f}_{1}\left( {r}_{1}\right) ,{y}_{1}^{\prime } = {f}_{1}\left( {r}_{2}\right) ,{y}_{2}^{\prime } = {f}_{2}\left( {r}_{1}\right)$ and ${y}_{2} = {f}_{2}\left( {r}_{2}\right)$ where ${y}_{1}$ and ${y}_{2}$ are the predictions based on their own rationales and ${y}_{1}^{\prime }$ and ${y}_{2}^{\prime }$ are predictions based on the exchanged rationales, as shown in the table below.
|
| 62 |
+
|
| 63 |
+
|           | ${r}_{1}$           | ${r}_{2}$           |
|-----------|---------------------|---------------------|
| ${f}_{1}$ | ${y}_{1}$           | ${y}_{1}^{\prime }$ |
| ${f}_{2}$ | ${y}_{2}^{\prime }$ | ${y}_{2}$           |
|
| 66 |
+
|
| 67 |
+
Given an input text, when the predictions of the two base models are the same, namely ${y}_{1} = {y}_{2}$, both rationales ${r}_{1},{r}_{2}$ are good and the final prediction is the shared prediction. When the two base models initially disagree, we check whether one rationale causes more consistent predictions. If ${r}_{1}$ causes more consistent predictions, in other words, if ${r}_{1}$ changes the prediction of ${f}_{2}$ to ${y}_{1}$ when given as an input rationale (namely, ${y}_{1} = {y}_{2}^{\prime }$), but ${r}_{2}$ does not change the prediction of ${f}_{1}$ to ${y}_{2}$ when given as an input rationale $\left( {{y}_{2} \neq {y}_{1}^{\prime }}\right)$, then the pair $\left( {{r}_{1},{y}_{1}}\right)$ is chosen as the final rationale and prediction; symmetrically, if ${r}_{2}$ causes more consistent predictions, the pair $\left( {{r}_{2},{y}_{2}}\right)$ is chosen. For the cases where neither rationale causes more consistent predictions, we rely on confidence levels, which are real numbers between 0 and 1 as defined by expression (2). If the confidence level of ${f}_{1}$ on ${r}_{1}$ is higher than that of ${f}_{2}$ on ${r}_{2}$ (i.e., ${CL}\left( {{f}_{1},{r}_{1}}\right) > {CL}\left( {{f}_{2},{r}_{2}}\right)$ with $\left( {{f}_{1},{r}_{1}}\right)$ and $\left( {{f}_{2},{r}_{2}}\right)$ separately denoting their corresponding non-probabilistic outputs), the pair $\left( {{r}_{1},{y}_{1}}\right)$ is chosen; otherwise, the pair $\left( {{r}_{2},{y}_{2}}\right)$ is chosen. The process of selecting a pair of rationale and prediction is formalized in Algorithm 1. It is worth mentioning that, in implementation, the exchange of rationales only needs to be performed when the base models disagree (i.e., ${y}_{1} \neq {y}_{2}$).
|
| 68 |
+
|
| 69 |
+
Algorithm 1 Rationale-prediction Selection after Interaction
|
| 70 |
+
|
| 71 |
+
---
|
| 72 |
+
|
| 73 |
+
Require: ${f}_{1},{f}_{2},{r}_{1},{r}_{2},{y}_{1},{y}_{1}^{\prime },{y}_{2}^{\prime },{y}_{2}$ from Figure 2, ${CL}\left( {f, r}\right)$ for the confidence level of $f$ on $r$ .
|
| 74 |
+
|
| 75 |
+
if ${y}_{1} = {y}_{2}$ then $\vartriangleright$ agreement
|
| 76 |
+
|
| 77 |
+
return $\left( {{r}_{1},{y}_{1}}\right)$ $\vartriangleright \operatorname{or}\left( {{r}_{2},{y}_{2}}\right)$
|
| 78 |
+
|
| 79 |
+
else $\vartriangleright$ disagreement
|
| 80 |
+
|
| 81 |
+
if ${y}_{1} = {y}_{2}^{\prime }$ and ${y}_{2} \neq {y}_{1}^{\prime }$ then $\vartriangleright$ model 2 convinced by model 1
|
| 82 |
+
|
| 83 |
+
return $\left( {{r}_{1},{y}_{1}}\right)$
|
| 84 |
+
|
| 85 |
+
else if ${y}_{1} \neq {y}_{2}^{\prime }$ and ${y}_{2} = {y}_{1}^{\prime }$ then $\vartriangleright$ model 1 convinced by model 2
|
| 86 |
+
|
| 87 |
+
return $\left( {{r}_{2},{y}_{2}}\right)$
|
| 88 |
+
|
| 89 |
+
else
|
| 90 |
+
|
| 91 |
+
if ${CL}\left( {{f}_{1},{r}_{1}}\right) > {CL}\left( {{f}_{2},{r}_{2}}\right)$ then $\vartriangleright$ model 1 is more confident
|
| 92 |
+
|
| 93 |
+
return $\left( {{r}_{1},{y}_{1}}\right)$
|
| 94 |
+
|
| 95 |
+
else $\vartriangleright$ model 2 is more confident
|
| 96 |
+
|
| 97 |
+
return $\left( {{r}_{2},{y}_{2}}\right)$
|
| 98 |
+
|
| 99 |
+
end if
|
| 100 |
+
|
| 101 |
+
end if
|
| 102 |
+
|
| 103 |
+
end if
|
| 104 |
+
|
| 105 |
+
---
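The selection rule of Algorithm 1 reduces to a few branches. The sketch below is a hypothetical transcription (the function and argument names are ours): `y1p` denotes $f_1(r_2)$, `y2p` denotes $f_2(r_1)$, and `cl1`, `cl2` are the confidence levels of each model on its own rationale.

```python
def select(y1, y2, y1p, y2p, cl1, cl2, r1, r2):
    """Pick a (rationale, prediction) pair after the rationale exchange."""
    if y1 == y2:                      # agreement: return either pair
        return (r1, y1)
    if y1 == y2p and y2 != y1p:       # model 2 convinced by model 1
        return (r1, y1)
    if y1 != y2p and y2 == y1p:       # model 1 convinced by model 2
        return (r2, y2)
    return (r1, y1) if cl1 > cl2 else (r2, y2)  # fall back on confidence levels
```

For example, if model 2 switches to model 1's label on seeing $r_1$ but not vice versa, the pair $(r_1, y_1)$ wins regardless of confidence.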
|
| 106 |
+
|
| 107 |
+
## 4 Experiments
|
| 108 |
+
|
| 109 |
+
### 4.1 Datasets
|
| 110 |
+
|
| 111 |
+
IMDB movie reviews (Maas et al., 2011) This is a dataset of 50,000 movie reviews collected from the Internet Movie Database (IMDB) with binary labels (i.e., positive and negative). The dataset is originally split into two subsets: 25,000 for training and 25,000 for testing. We randomly split the training data into 20,000 (80%) for training and 5,000 (20%) for development. The numbers of the two labels are perfectly balanced in each subset.
|
| 112 |
+
|
| 113 |
+
20 Newsgroups This is a publicly available dataset containing a total of 18,846 texts, with 11,314 for training and 7,532 for testing, across 20 distinct news-topic categories. We split the training data randomly into 9,051 (80%) for training and 2,263 (20%) for development. The counts of the 20 labels are not perfectly balanced, varying from 304 to 490 in the training data, 73 to 131 in the development data, and 251 to 399 in the testing data.
|
| 116 |
+
|
| 117 |
+
<table><tr><td colspan="5">20 Newsgroups</td></tr><tr><td>$\left( {{\lambda }_{1},{\lambda }_{2}}\right)$</td><td colspan="2">(5e-3, 0)</td><td colspan="2">(1e-3, 1e-3)</td></tr><tr><td>Base Model</td><td>Model 1</td><td>Model 2</td><td>Model 1</td><td>Model 2</td></tr><tr><td>Length</td><td>11.33</td><td>11.18</td><td>21.76</td><td>22.68</td></tr><tr><td>Contiguity Loss</td><td>17.12</td><td>16.84</td><td>21.92</td><td>21.45</td></tr><tr><td>Interaction Cases</td><td colspan="2">(331, 363, 1129, 1211.5)</td><td colspan="2">(228.6,264,974.2,1075.8)</td></tr><tr><td>Case Accuracy</td><td colspan="2">(0.41,0.43,0.30,0.26)</td><td colspan="2">(0.38,0.44,0.31,0.27)</td></tr><tr><td colspan="5">IMDB movie reviews</td></tr><tr><td>$\left( {{\lambda }_{1},{\lambda }_{2}}\right)$</td><td colspan="2">(1e-3, 0)</td><td colspan="2">(2e-4, 2e-4)</td></tr><tr><td>Base Model</td><td>Model 1</td><td>Model 2</td><td>Model 1</td><td>Model 2</td></tr><tr><td>Length</td><td>13.99</td><td>17.59</td><td>29.22</td><td>27.37</td></tr><tr><td>Contiguity Loss</td><td>21.84</td><td>26.45</td><td>37.14</td><td>35.48</td></tr><tr><td>Interaction Cases</td><td colspan="2">(855.6,946.0,1187.4,1250.0)</td><td colspan="2">(681.7,665.2,1101.8,1295.7)</td></tr><tr><td>Case Accuracy</td><td colspan="2">(0.66,0.65,0.59,0.59)</td><td colspan="2">(0.66,0.64,0.58,0.60)</td></tr></table>
|
| 118 |
+
|
| 119 |
+
Table 1: Experiment details (average values). We report the rationale length (i.e., number of words) and contiguity loss of each base model under each hyper-parameter setting, along with the numbers of interaction cases and each case's accuracy. The four values in an interaction case are the average numbers of cases where, respectively, base model 1 convinced model 2, base model 2 convinced model 1, base model 1 was more confident, and base model 2 was more confident. These are the four cases for handling disagreements in Algorithm 1.
|
| 120 |
+
|
| 121 |
+
### 4.2 Setup
|
| 122 |
+
|
| 123 |
+
Training Instead of REINFORCE (Williams, 1992), the reparameterization heuristic Gumbel-Softmax (Jang et al., 2017) is used to simplify gradient estimation. A convolutional neural network (Kim, 2014) is used for both generators and classifiers, with filter sizes of [3, 4, 5], 100 filters, and a dropout rate of 0.5. Hidden dimensions of 100 and 120 are used for the first and second base models, respectively, which is the only difference in the training parameters of the two base models. Adam is used as the optimizer with a weight decay of 5e-6 and an initial learning rate of 0.001. If the loss on the development dataset does not improve over the previous best model within a patience of 5 epochs, the learning rate is halved (i.e., 0.001, 0.0005, ...) and training restarts from the previous best model. In total, 20 epochs are used for training. Cross-entropy is used as the loss objective, and the batch size is set to 128. For Gumbel-Softmax (Jang et al., 2017), the initial temperature is 1 with a decay rate of 1e-5. GloVe (Pennington et al., 2014) embeddings of dimension 300 are used for word embedding. The maximum text lengths are set to 80 and 200 words for 20 Newsgroups and IMDB movie reviews, respectively.
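For reference, a single Gumbel-Softmax sample can be sketched as below (a minimal illustration of the reparameterization, omitting the temperature decay schedule and the binary-mask specifics of the actual training setup):

```python
import math
import random

def gumbel_softmax(logits, tau, rng):
    """One Gumbel-Softmax sample: a relaxed (differentiable) one-hot selection."""
    # Gumbel(0, 1) noise via the inverse-CDF trick: g = -log(-log(U)), U ~ Uniform(0, 1).
    gumbels = [-math.log(-math.log(rng.random())) for _ in logits]
    scores = [(l + g) / tau for l, g in zip(logits, gumbels)]
    m = max(scores)  # shift by the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

rng = random.Random(2022)
sample = gumbel_softmax([0.2, 1.5], tau=1.0, rng=rng)  # e.g. a relaxed keep/drop mask entry
```

Lowering `tau` sharpens the sample toward a hard one-hot vector, which is how the hard token mask is approximated while keeping gradients well-defined.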
|
| 124 |
+
|
| 125 |
+
Testing For each dataset, two base models are trained and tested with two settings of the hyper-parameters $\left( {{\lambda }_{1},{\lambda }_{2}}\right)$ from the loss function: $\{ \left( {{0.005},0}\right) ,\left( {{0.001},{0.001}}\right) \}$ for 20 Newsgroups and $\{ \left( {{0.001},0}\right) ,\left( {{0.0002},{0.0002}}\right) \}$ for IMDB movie reviews. The four settings are chosen to show the performance of the algorithm under different rationale lengths and contiguity (Table 1). For each hyper-parameter setting, both base models are trained and tested with 6 random seeds (i.e., $\{ {2022},{2023},{2024},{2025},{2026},{2027}\}$), and the invalid cases where the two base models show a significant difference in development-set performance (i.e., > 3% in accuracy) are removed. The numbers of invalid cases are 2, 1, 1, and 0 out of 6 for the four hyper-parameter settings, respectively.
|
| 126 |
+
|
| 127 |
+
<table><tr><td rowspan="2">$\left( {{\lambda }_{1},{\lambda }_{2}}\right)$</td><td colspan="2">20 Newsgroups</td><td colspan="2">IMDB movie reviews</td></tr><tr><td>(5e-3, 0)</td><td>(1e-3, 1e-3)</td><td>(1e-3, 0)</td><td>(2e-4, 2e-4)</td></tr><tr><td>Model 1</td><td>.55 (.53-.57)</td><td>.58 (.56-.59)</td><td>.81 (.80-.82)</td><td>.82 (.81-.83)</td></tr><tr><td>Model 2</td><td>.54 (.52-.57)</td><td>.57 (.55-.59)</td><td>.81 (.80-.82)</td><td>.82 (.81-.83)</td></tr><tr><td>Interaction</td><td>.58 (.56-.60)</td><td>.60 (.59-.61)</td><td>.83 (.82-.84)</td><td>.84 (.83-.84)</td></tr></table>
|
| 128 |
+
|
| 129 |
+
Table 2: Average performance (accuracy) over at most six experiments for the base models (Models 1 and 2) and the interactive model under each hyper-parameter setting for each dataset. The (min, max) performances of each model are also reported to demonstrate variance.
|
| 130 |
+
|
| 131 |
+
### 4.3 Quantitative Evaluation
|
| 132 |
+
|
| 133 |
+
For quantitative evaluation, we report the predictive performance of the classifiers from the base models and the interactive model. In Table 2, the interactive model outperforms the better base model by 2% on IMDB movie reviews and by 2-3% on 20 Newsgroups, and shows relatively smaller variance on both datasets. The improvement in predictive performance and the reduced variance are general across most experiments beyond the four reported settings. We found that, under extreme hyper-parameter settings where rationales contain almost the whole text or no words at all, there is no improvement. This seems reasonable: when both base models generate rationales of whole texts or no words, the rationales are identical, which makes the exchange of rationales meaningless. Also, in some cases where one base model is trained well and the other is not (e.g., 80% and 60% accuracy on IMDB movie reviews), the interactive model shows slightly lower performance than the better base model. The reason may be that a relatively better rationale generated by the better model cannot convince the classifier of the poorly performing model, so the first rule, that a good rationale is more likely to produce consistent predictions, does not hold. If no rationale causes consistent predictions, the second rule about confidence level is applied, but a poor classifier can sometimes be overconfident, which causes errors.
|
| 134 |
+
|
| 135 |
+
For a binary classification task, when two base models with similar performance disagree, the expected accuracy of each base model is around 50%, and the probability that a blindly chosen prediction turns out to be correct should also be near 50% (i.e., random guessing). However, as shown in Table 1, on IMDB movie reviews the accuracy after interaction is 8-16% higher than random guessing.
|
| 136 |
+
|
| 137 |
+
Also, we observed that, when the constraints on rationales are less strict (i.e., allowing more words and more contiguity loss), the performance of the base models generally increases but the improvement after interaction decreases. The reason may be that, with weaker rationale constraints, strong indicators are identified more easily, causing the rationales of both base models to contain more similarly strong indicators.
|
| 140 |
+
|
| 141 |
+
## 5 Conclusion
|
| 142 |
+
|
| 143 |
+
To handle the high variance of selective rationale extraction models, we proposed a method we call Interactive Rationale Extraction for Text Classification, which selects rationales and predictions from base models using simple rules that imitate the interaction process between humans handling disagreements. The experimental results show that the interactive process is effective at improving performance, choosing a better rationale, and reducing variance.
|
| 144 |
+
|
| 145 |
+
## References
|
| 146 |
+
|
| 147 |
+
Jasmijn Bastings, Wilker Aziz, and Ivan Titov. Interpretable neural predictions with differentiable binary variables. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2963-2977, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1284. URL https://aclanthology.org/P19-1284.
|
| 148 |
+
|
| 149 |
+
Nuno M. Guerreiro and André F. T. Martins. SPECTRA: Sparse structured text rationalization. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6534-6550, Online and Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.emnlp-main.525. URL https://aclanthology.org/2021.emnlp-main.525.
|
| 150 |
+
|
| 151 |
+
Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. On calibration of modern neural networks. In Doina Precup and Yee Whye Teh, editors, Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1321-1330. PMLR, 06-11 Aug 2017. URL https://proceedings.mlr.press/v70/guo17a.html.
|
| 152 |
+
|
| 153 |
+
Sarthak Jain, Sarah Wiegreffe, Yuval Pinter, and Byron C. Wallace. Learning to faithfully rationalize by construction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4459-4473, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.409. URL https://aclanthology.org/2020.acl-main.409.
|
| 154 |
+
|
| 155 |
+
Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax. In International Conference on Learning Representations, 2017. URL https://openreview.net/ forum?id=rkE3y85ee.
|
| 156 |
+
|
| 157 |
+
Yoon Kim. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746-1751, Doha, Qatar, October 2014. Association for Computational Linguistics. doi: 10.3115/v1/D14-1181. URL https://aclanthology.org/D14-1181.
|
| 158 |
+
|
| 159 |
+
Ananya Kumar, Tengyu Ma, Percy Liang, and Aditi Raghunathan. Calibrated ensembles can mitigate accuracy tradeoffs under distribution shift. In James Cussens and Kun Zhang, editors, Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence, volume 180 of Proceedings of Machine Learning Research, pages 1041-1051. PMLR, 01-05 Aug 2022. URL https://proceedings.mlr.press/v180/kumar22a.html
|
| 160 |
+
|
| 161 |
+
Tao Lei, Regina Barzilay, and Tommi Jaakkola. Rationalizing neural predictions. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 107-117, Austin, Texas, November 2016. Association for Computational Linguistics. doi: 10.18653/v1/D16-1011. URL https://aclanthology.org/D16-1011
|
| 162 |
+
|
| 163 |
+
Zachary C. Lipton. The mythos of model interpretability. Commun. ACM, 61(10):36-43, sep 2018. ISSN 0001-0782. doi: 10.1145/3233231. URL https://doi.org/10.1145/3233231.
|
| 164 |
+
|
| 165 |
+
Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142-150, Portland, Oregon, USA, June 2011. Association for Computational Linguistics. URL http: //www.aclweb.org/anthology/P11-1015
|
| 166 |
+
|
| 167 |
+
Jeffrey Pennington, Richard Socher, and Christopher Manning. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar, October 2014. Association for Computational Linguistics. doi: 10.3115/v1/D14-1162. URL https://aclanthology.org/D14-1162.
|
| 168 |
+
|
| 169 |
+
Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Mach. Learn., 8(3-4):229-256, may 1992. ISSN 0885-6125. doi: 10.1007/BF00992696. URL https://doi.org/10.1007/BF00992696.
|
| 170 |
+
|
| 171 |
+
Omar Zaidan, Jason Eisner, and Christine D. Piatko. Using "annotator rationales" to improve machine learning for text categorization. In NAACL, 2007.
|
NeurIPS/NeurIPS 2022/NeurIPS 2022 Workshop/NeurIPS 2022 Workshop TSRML/zaJsDuwwdlJ/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,198 @@
§ INTERACTIVE RATIONALE EXTRACTION FOR TEXT CLASSIFICATION
Anonymous Author(s)

Affiliation

Address

email

§ ABSTRACT

Deep neural networks show superior performance in text classification tasks, but their poor interpretability and explainability can cause trust issues. For text classification problems, the identification of textual sub-phrases or "rationales" is one strategy for finding the most influential portions of text, which can be conveyed as critical in making classification decisions. Selective models for rationale extraction faithfully explain a neural classifier's predictions by training a rationale generator and a text classifier jointly: the generator identifies rationales and the classifier predicts a category solely based on the rationales. The selected rationales are then viewed as the explanations for the classifier's predictions. Through the exchange of such explanations, humans interact to achieve higher performance in problem solving. To imitate this interactive process, we propose a simple interactive rationale extraction architecture that selects a pair of rationales and then makes predictions from two independently trained selective models. We show that this architecture outperforms both base models in predictive performance on text classification tasks over the IMDB movie reviews and 20 Newsgroups datasets.
§ 1 INTRODUCTION
Selective (or select-predict) models for rationale extraction in text classification (Lei et al., 2016; Bastings et al., 2019), with the general structure shown in Figure 1, are designed to extract a set of words, namely a rationale (Zaidan et al., 2007), from an original text, where the rationale is expected to be sufficient for the classification model to produce the same prediction it would make from the whole text. For the purpose of interpretability, the rationale should be concise and contiguous. A rationale extraction model is faithful (Lipton, 2018) if the extracted rationales are truly the information used for classification (Jain et al., 2020). Extracting rationales that satisfy these criteria is a complex machine learning problem and becomes more difficult with only instance-level supervision (i.e., without token-level annotations) (Jain et al., 2020). A single model's identification of rationales can suffer from high variance because of the complex training process. An ensemble of more than one model helps to reduce variance, which leads us to explore how to make use of two rationale extraction models and how to make a choice when the two models make different predictions.
When two humans have different answers to a problem, they tend to exchange their reasons or explanations, after which there might be a change of mind. To show why this interaction is effective, consider the problem of proving a mathematical conjecture: because searching for a correct proof, which then leads to a correct claim about the conjecture, is usually much more difficult than verifying a proof (e.g., $\mathcal{P} \subseteq \mathcal{NP}$ in computation theory), one who is not capable of finding a good proof can often still tell whether a given proof is good. Considering the complexity for a generator of searching among all possible rationales with only remote instance-level supervision, the work of rationale extraction can be much more difficult than classification.
Figure 1: Schematic of selective rationale extraction models, where $x$ is an embedded text, $g$ is a generator and $f$ is a classifier. Generator $g$ extracts a rationale $r$, based on which classifier $f$ makes a prediction $y$.
We may then consider selective models for rationale extraction to be naturally compatible with the interactive pattern of humans by viewing the rationales extracted by a generator as the proofs for the decisions of its classifier, which means the interaction between two base models can be performed by the exchange of their rationales. Subsequently, the problem becomes how to decide whether a rationale is good, so that we know which pairs of rationale and prediction are appropriate choices when two base models make different predictions. A good rationale here is expected to give a correct prediction when input to a decent classifier.
Intuitively, a good rationale is supposed to contain strong indicators for the correct "gold label" instead of insignificant words that do not contribute to classification, which leads to two simple rules for handling base models' disagreements: first, a good rationale is more likely to produce consistent predictions among classifiers (i.e., a good explanation convinces people); second, a good rationale is more likely to produce a higher confidence level (Section 2.2) for the prediction of one classifier (i.e., one with a good reason is often confident). The two rules are created as a basis for classification, as opposed to random guessing based on otherwise randomly selected words. Note that the two rules rest on the assumption that the probability that base models extract strong indicators for wrong labels is very low, which should hold for decent generators and decent classifiers (i.e., better than random guessing).
To imitate the interactive pattern of humans in problem solving, we introduce Interactive Rationale Extraction for Text Classification to interactively connect two independently trained selective rationale extraction models. We show that the architecture achieves higher predictive performance than either of two base models with similar performance on IMDB movie reviews and 20 Newsgroups. This is done by selecting pairs of rationale and prediction from the base models using the simple rules above. In addition, because this interactive architecture makes decisions solely based on the base models' rationales, the faithfulness and interpretability of the base models' rationales are not compromised.
§ 2 BACKGROUND

§ 2.1 SELECTIVE RATIONALE EXTRACTION

The original selective rationale extraction model was proposed by Lei et al. (2016), with the architecture shown in Figure 1. Their model faithfully explains a neural network-based classifier's predictions by jointly training a generator and a classifier with only instance-level supervision. We summarize their work as follows. The generator $g$ consumes the embedded tokens of the original text, namely $x = [x_1, x_2, \ldots, x_l]$ where $l$ is the number of tokens in the text and each token $x_i \in \mathbb{R}^d$ is a $d$-dimensional embedding vector, and outputs a probability distribution $p(z \mid x)$ over the hard mask $z = [z_1, z_2, \ldots, z_l]$ where each value $z_i \in \{0, 1\}$ denotes whether the corresponding token is selected. A rationale $r$ is defined as $(z, x)$, representing the hard mask $z$ over the original input $x$. Subsequently, the classifier $f$ takes $(z, x)$ as input to make a prediction $f(z, x)$. Given gold label $y$, the loss function used to optimize both generator $g$ and classifier $f$ is defined as
$$
\operatorname{loss}(z, x, y) = \| f(z, x) - y \|_2^2 + \lambda_1 \| z \| + \lambda_2 \sum_{i=1}^{l-1} | z_i - z_{i+1} | \tag{1}
$$
which consists of three parts: prediction loss, selection loss and contiguity loss. The parameters ${\lambda }_{1}$ and ${\lambda }_{2}$ in the loss function are used to tune the constraints on rationales (i.e., conciseness and contiguity). Jain et al. (2020) modified the loss function to apply hard constraints on rationales (i.e., maximum length) by not punishing a model when a given limit on the number of words is not reached. Because of the absence of token-level supervision and the use of hard masking which is not differentiable, Lei et al. (2016) turned to REINFORCE (Williams, 1992) for gradient estimation, which causes high variance and sensitivity to hyper-parameters (Jain et al., 2020). Following the select-predict architecture proposed by Lei et al. (2016), Bastings et al. (2019) explored a reparameterization heuristic called HardKuma for gradient estimation. Furthermore, Guerreiro and Martins (2021) exposed the trade-off between differentiable masking and hard constraints in selective rationale extraction models.
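As a concrete illustration of the three terms in expression (1), the loss can be sketched in a few lines of NumPy; the function name, the toy mask, and the $\lambda$ values below are our own illustrative choices, not taken from the paper.

```python
import numpy as np

def rationale_loss(z, f_zx, y, lambda1, lambda2):
    """Loss of Eq. (1): squared prediction error, plus a sparsity penalty
    on the mask, plus a contiguity penalty on adjacent mask transitions."""
    prediction_loss = np.sum((f_zx - y) ** 2)               # ||f(z,x) - y||_2^2
    selection_loss = lambda1 * np.sum(np.abs(z))            # lambda_1 ||z||
    contiguity_loss = lambda2 * np.sum(np.abs(np.diff(z)))  # lambda_2 sum |z_i - z_{i+1}|
    return prediction_loss + selection_loss + contiguity_loss

# Toy example: a 6-token mask selecting one contiguous 3-token span.
z = np.array([0, 1, 1, 1, 0, 0])
loss = rationale_loss(z, f_zx=np.array([0.9, 0.1]), y=np.array([1.0, 0.0]),
                      lambda1=0.01, lambda2=0.01)
# prediction 0.02 + selection 0.03 + contiguity 0.02 (two 0/1 transitions)
```

A contiguous mask incurs contiguity loss only at its two boundaries, so scattered selections of the same length cost more, which is how the third term encourages contiguous rationales.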
Figure 2: Schematic of our interactive rationale extraction, where rationales are exchanged. The notation follows Figure 1.
§ 2.2 CONFIDENCE LEVEL

Confidence level (CL) indicates how far a neural network's prediction is from being neutral. Given a neural network's non-probabilistic output $k = [k_1, k_2, \ldots, k_n]$ for an $n$-class classification, Kumar et al. (2022) defined the CL of the classification with a softmax function
$$
CL(k) = \frac{\exp(\max(k))}{\sum_{i=1}^{n} \exp(k_i)} \tag{2}
$$
where $\max \left( k\right)$ is the value of the output node ${k}_{i}$ with the highest value (i.e., $i$ is the final prediction).
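A minimal NumPy sketch of expression (2); the stability shift by $\max(k)$ is a standard implementation detail we add, not part of the definition above.

```python
import numpy as np

def confidence_level(k):
    """Confidence level (Eq. 2): the softmax probability assigned to the
    argmax class, computed from the non-probabilistic (logit) output k."""
    e = np.exp(k - np.max(k))  # subtract max(k) for numerical stability
    return np.max(e) / np.sum(e)

# A near-neutral output gives CL close to 1/n; a peaked output approaches 1.
print(confidence_level(np.array([0.1, 0.0])))  # ~0.525
print(confidence_level(np.array([5.0, 0.0])))  # ~0.993
```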
Guo et al. (2017) stated that, for trust purposes, a classification network should not only have high accuracy but also indicate how likely each prediction is to be correct. In addition, their study on the calibration of neural networks suggested that accuracy, even if not nearly identical to CL for some networks, is generally positively correlated with CL. This means that, when two base models with similar expected performances make different predictions, the prediction with the higher CL is generally more likely to be correct.
§ 3 ALGORITHM

As demonstrated in Figure 2, after the interaction between the two base select-predict models, a total of four predictions are generated: $y_1 = f_1(r_1)$, $y_1' = f_1(r_2)$, $y_2' = f_2(r_1)$ and $y_2 = f_2(r_2)$, where $y_1$ and $y_2$ are the predictions based on each model's own rationale and $y_1'$ and $y_2'$ are the predictions based on the exchanged rationales, as shown in the table below.
$$
\begin{array}{c|cc}
 & r_1 & r_2 \\ \hline
f_1 & y_1 & y_1' \\
f_2 & y_2' & y_2
\end{array}
$$
Given an input text, when the predictions of the two base models are the same, namely $y_1 = y_2$, both rationales $r_1, r_2$ are considered good and the final prediction is the shared prediction. When the two base models initially disagree, we check whether one rationale causes more consistent predictions. If $r_1$ causes more consistent predictions, in other words, if $r_1$ changes the prediction of $f_2$ to $y_1$ when given as an input rationale (namely, $y_1 = y_2'$), but $r_2$ does not change the prediction of $f_1$ to $y_2$ when given as an input rationale ($y_2 \neq y_1'$), then the pair $(r_1, y_1)$ is chosen as the final rationale and prediction; symmetrically, if $r_2$ causes more consistent predictions, the pair $(r_2, y_2)$ is chosen. For the cases where neither rationale causes more consistent predictions, we rely on confidence levels, which are real numbers between 0 and 1 as defined by expression (2). If the confidence level of $f_1$ on $r_1$ is higher than that of $f_2$ on $r_2$ (i.e., $CL(f_1, r_1) > CL(f_2, r_2)$, with $(f_1, r_1)$ and $(f_2, r_2)$ denoting their corresponding non-probabilistic outputs), the pair $(r_1, y_1)$ is chosen; otherwise, the pair $(r_2, y_2)$ is chosen. The process of selecting a pair of rationale and prediction is formalized in Algorithm 1. It is worth mentioning that, in implementation, the exchange of rationales only needs to be performed when the base models disagree in prediction (i.e., $y_1 \neq y_2$).
Algorithm 1 Rationale-prediction Selection after Interaction

Require: $f_1, f_2, r_1, r_2, y_1, y_1', y_2', y_2$ from Figure 2, and $CL(f, r)$ for the confidence level of $f$ on $r$.

if $y_1 = y_2$ then ▷ agreement
    return $(r_1, y_1)$ ▷ or $(r_2, y_2)$
else ▷ disagreement
    if $y_1 = y_2'$ and $y_2 \neq y_1'$ then ▷ model 2 convinced by model 1
        return $(r_1, y_1)$
    else if $y_1 \neq y_2'$ and $y_2 = y_1'$ then ▷ model 1 convinced by model 2
        return $(r_2, y_2)$
    else
        if $CL(f_1, r_1) > CL(f_2, r_2)$ then ▷ model 1 is more confident
            return $(r_1, y_1)$
        else ▷ model 2 is more confident
            return $(r_2, y_2)$
        end if
    end if
end if
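Algorithm 1 reduces to a handful of comparisons. The sketch below makes the selection rule concrete; the function and argument names are our own, not the paper's.

```python
def select_rationale(y1, y2, y1_cross, y2_cross, cl1, cl2, r1, r2):
    """Algorithm 1: pick a (rationale, prediction) pair from two base models.

    y1, y2     -- each model's prediction on its own rationale
    y1_cross   -- f1's prediction on r2; y2_cross -- f2's prediction on r1
    cl1, cl2   -- confidence levels CL(f1, r1) and CL(f2, r2)
    """
    if y1 == y2:                              # agreement
        return r1, y1
    if y1 == y2_cross and y2 != y1_cross:     # model 2 convinced by model 1
        return r1, y1
    if y1 != y2_cross and y2 == y1_cross:     # model 1 convinced by model 2
        return r2, y2
    return (r1, y1) if cl1 > cl2 else (r2, y2)  # fall back on confidence

# Disagreement where r1 flips model 2's prediction but r2 does not flip model 1's:
chosen = select_rationale("pos", "neg", "pos", "pos", 0.6, 0.9, "r1", "r2")
# -> ("r1", "pos"): consistency outranks the (here higher) confidence of model 2
```

Note that the consistency rules take priority over confidence: in the example, model 2 is more confident, yet $(r_1, y_1)$ is chosen because $r_1$ convinced model 2.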
§ 4 EXPERIMENTS
§ 4.1 DATASETS

IMDB movie reviews (Maas et al., 2011) This is a dataset of 50,000 movie reviews collected from the Internet Movie Database (IMDB) with binary labels (i.e., positive and negative). The dataset is originally split into two subsets: 25,000 for training and 25,000 for testing. We randomly split the training data into 20,000 (80%) for training and 5,000 (20%) for development. The numbers of the two labels are perfectly balanced in each subset.
20 Newsgroups This is a publicly available dataset containing a total of 18,846 texts, with 11,314 for training and 7,532 for testing, in 20 distinct categories of news topics. We split the training data randomly into 9,051 (80%) for training and 2,263 (20%) for development. The numbers of the 20 labels are not perfectly balanced, varying from 304 to 490 in the training data, 73 to 131 in the development data, and 251 to 399 in the testing data.
20 Newsgroups

| $(\lambda_1, \lambda_2)$ | (5e-3, 0) | (1e-3, 1e-3) |
| Base Model | Model 1 / Model 2 | Model 1 / Model 2 |
| Length | 11.33 / 11.18 | 21.76 / 22.68 |
| Contiguity Loss | 17.12 / 16.84 | 21.92 / 21.45 |
| Interaction Cases | (331, 363, 1129, 1211.5) | (228.6, 264, 974.2, 1075.8) |
| Case Accuracy | (0.41, 0.43, 0.30, 0.26) | (0.38, 0.44, 0.31, 0.27) |

IMDB movie reviews

| $(\lambda_1, \lambda_2)$ | (1e-3, 0) | (2e-4, 2e-4) |
| Base Model | Model 1 / Model 2 | Model 1 / Model 2 |
| Length | 13.99 / 17.59 | 29.22 / 27.37 |
| Contiguity Loss | 21.84 / 26.45 | 37.14 / 35.48 |
| Interaction Cases | (855.6, 946.0, 1187.4, 1250.0) | (681.7, 665.2, 1101.8, 1295.7) |
| Case Accuracy | (0.66, 0.65, 0.59, 0.59) | (0.66, 0.64, 0.58, 0.60) |

Table 1: Experiment details (average values). We report the rationale length (i.e., number of words) and contiguity loss of each base model under each hyper-parameter setting, along with the numbers of interaction cases and each case's accuracy. The four values in an interaction-case entry are the average numbers of cases, separately, for base model 1 convinced, base model 2 convinced, base model 1 more confident, and base model 2 more confident. These are the four cases for handling disagreements in Algorithm 1.
§ 4.2 SETUP
Training Instead of REINFORCE (Williams, 1992), a reparameterization heuristic, Gumbel-Softmax (Jang et al., 2017), is used to simplify gradient estimation. A convolutional neural network (Kim, 2014) is used for both generators and classifiers, with filter sizes of [3, 4, 5], 100 filters, and a dropout rate of 0.5. Hidden dimensions of 100 and 120 are used for the first and the second base model respectively, which is the only difference among all training parameters of the two base models. Adam is used as the optimizer with a weight decay of 5e-6 and an initial learning rate of 0.001. If the loss on the development dataset shows no improvement over the previous best model after a patience of 5 epochs, the learning rate is halved (i.e., 0.001, 0.0005, ...) and training restarts from the previous best model. In total, 20 epochs are used for training. Cross-entropy is used as the loss objective. The batch size is set to 128. For Gumbel-Softmax (Jang et al., 2017), the initial temperature is 1 with a decay rate of 1e-5. GloVe embeddings (Pennington et al., 2014) of dimension 300 are used for word embedding. The maximum text lengths are set to 80 and 200 words for 20 Newsgroups and IMDB movie reviews, respectively.
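The Gumbel-Softmax sampling step can be sketched as follows in NumPy. The logits are illustrative, and treating the 1e-5 decay as a multiplicative per-step update on the temperature is our assumption about the schedule, not a detail stated above.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(logits, temperature):
    """Draw a relaxed one-hot sample (Jang et al., 2017): perturb the logits
    with Gumbel noise, then apply a temperature-controlled softmax."""
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + gumbel) / temperature
    e = np.exp(y - np.max(y))  # stability shift
    return e / np.sum(e)

# Assumed annealing: start at 1.0, multiply by (1 - 1e-5) each step.
temperature = 1.0
for step in range(3):
    sample = gumbel_softmax(np.array([2.0, 0.5]), temperature)
    temperature *= (1 - 1e-5)
```

Lower temperatures push samples toward one-hot vectors, which approximates the hard mask $z$ while keeping the sampling step differentiable.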
Testing For each dataset, two base models are trained and tested with two settings of the hyper-parameters $(\lambda_1, \lambda_2)$ from the loss function: {(0.005, 0), (0.001, 0.001)} for 20 Newsgroups and {(0.001, 0), (0.0002, 0.0002)} for IMDB movie reviews. The four settings are chosen to show the performance of the algorithm under different rationale lengths and contiguity (Table 1). For each hyper-parameter setting, both base models are trained and tested with 6 random seeds (i.e., {2022, 2023, 2024, 2025, 2026, 2027}), and the invalid cases where the two base models show a significant difference in performance on the development dataset (i.e., > 3% in accuracy) are removed. The numbers of invalid cases are 2, 1, 1, and 0 out of 6 for the four hyper-parameter settings, respectively.
| $(\lambda_1, \lambda_2)$ | 20 Newsgroups (5e-3, 0) | 20 Newsgroups (1e-3, 1e-3) | IMDB (1e-3, 0) | IMDB (2e-4, 2e-4) |
| Model 1 | .55 (.53-.57) | .58 (.56-.59) | .81 (.80-.82) | .82 (.81-.83) |
| Model 2 | .54 (.52-.57) | .57 (.55-.59) | .81 (.80-.82) | .82 (.81-.83) |
| Interaction | .58 (.56-.60) | 60.65-61) | .83 (.82-.84) | .84 (.83-.84) |

Table 2: Average performances (accuracy) over at most six experiments for the base models (Models 1 and 2) and the interactive model under each hyper-parameter setting for each dataset. The (min, max) performances of each model are also reported to demonstrate variance.
§ 4.3 QUANTITATIVE EVALUATION
For quantitative evaluation, we report the predictive performances of the classifiers from the base models and the interactive model. In Table 2, the interactive model outperforms the better base model by 2% on IMDB movie reviews and 2-3% on 20 Newsgroups, and shows a relatively smaller variance on both datasets. The improvement in predictive performance and the reduced variance are general across most experiments beyond the four settings. We found that, in the cases of extreme hyper-parameter settings where rationales contain almost whole texts or no words, there is no improvement. This seems reasonable: when base models generate rationales of whole texts or no words, the rationales are identical, which makes the exchange of rationales meaningless. Also, in some cases where one base model is trained well and the other is not (e.g., 80% and 60% accuracy on IMDB movie reviews), the interactive model shows a slightly lower performance than the better base model. The reason may be that a relatively better rationale generated by the better model cannot convince the classifier of the poorly performing model, so the first rule, that a good rationale is more likely to produce consistent predictions, is not followed. If neither rationale causes consistent predictions, the second rule about confidence level is applied, but a poor classifier can sometimes be overconfident, which causes errors.
For a binary classification task, when two base models with similar performances disagree, the expected accuracy of each base model is around 50%, and the probability that a blindly chosen prediction turns out to be correct should also be near 50% (i.e., random guessing). However, as shown in Table 1, on IMDB movie reviews, the accuracy after interaction is 8-16% higher than random guessing.
Also, we observed that, when the constraints on rationales are less strict (i.e., allowing more words and more contiguity loss), the performance of the base models generally increases but the improvement after interaction decreases. The reason may be that, with weaker rationale constraints, strong indicators are more easily identified, causing the rationales of both base models to contain more similarly strong indicators.
§ 5 CONCLUSION
To handle the high variance of selective rationale extraction models, we proposed a method we call Interactive Rationale Extraction for Text Classification, which selects rationales and predictions from base models based on simple rules that imitate the interaction process between humans when handling disagreements. The experimental results show that the interactive process is effective in terms of improving performance, choosing a better rationale, and reducing variance.
NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/-O-A_6M_oi/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,444 @@
# On the Concept of Resource-Efficiency in NLP
Anonymous ACL submission

## Abstract
Resource-efficiency is a growing concern in the NLP community. But what are the resources we care about and why? How do we measure efficiency in a way that is reliable and relevant? And how do we balance efficiency and other important concerns? Based on a review of the emerging literature on the subject, we discuss different ways of conceptualizing efficiency in terms of product and cost, using a simple case study on fine-tuning and knowledge distillation for illustration. We propose a novel metric of amortized efficiency that is better suited for life-cycle analysis than existing metrics.
## 1 Introduction
Resource-efficiency has recently become a more prominent concern in the NLP community. The Association for Computational Linguistics (ACL) has issued an Efficient NLP Policy Document ${}^{1}$ and most conferences now have a special track devoted to efficient methods in NLP. The major reason for this increased attention to efficiency can be found in the perceived negative effects of scaling NLP models (and AI models more generally) to unprecedented sizes, which increases energy consumption and carbon footprint as well as raises barriers to participation in NLP research for economic reasons (Strubell et al., 2019; Schwartz et al., 2020). These considerations are important and deserve serious attention, but they are not the only reasons to care about resource-efficiency. Traditional concerns like guaranteeing that models can be executed with sufficient speed to enable real-time processing, or with sufficiently low memory footprint to fit on small devices, will continue to be important as well.

Resource-efficiency is however a complex and multifaceted problem. First, there are many relevant types of resources, which interact in complex (and sometimes antagonistic) ways. For example, adding more computational resources may improve time efficiency but increase energy consumption. For some of these resources, obtaining relevant and reliable measurements can also be a challenge, especially if the consumption depends on both software and hardware properties. Furthermore, the life-cycle of a typical NLP model can be divided into different phases, like pre-training, fine-tuning and (long-term) inference, which often have very different resource requirements but nevertheless need to be related to each other in order to obtain a holistic view of total resource consumption. Since one and the same (pre-trained) model can be fine-tuned and deployed in multiple instances, it may also be necessary to amortize the training cost in order to arrive at a fair overall assessment.

To do justice to this complexity, we must resist the temptation to reduce the notion of resource-efficiency to a single metric or equation. Instead, we need to develop a conceptual framework that supports reasoning about the interaction of different resources while taking the different phases of the life-cycle into account. The emerging literature on the subject shows a growing awareness of this need, and there are a number of promising proposals that address parts of the problem. In this paper, we review some of these proposals and discuss issues that arise when trying to define and measure efficiency in relation to NLP models. We specifically address the need for a holistic assessment of efficiency over the entire life-cycle of a model and propose a novel notion of amortized efficiency. All notions and metrics are illustrated in a small case study on fine-tuning and knowledge distillation.

## 2 Related Work

Strubell et al. (2019) were among the first to discuss the increasing resource requirements in NLP. They provide estimates of the energy needed to train a number of popular NLP models (T2T, ELMo, BERT, GPT-2). Based on those estimates, they also estimate the cost in dollars and the ${\mathrm{{CO}}}_{2}$ emission associated with model training. In addition to the cost of training a single model, they provide a case study of the additional (much larger) costs involved in hyperparameter tuning and model fine-tuning.

---

${}^{1}$ https://www.aclweb.org/portal/content/efficient-nlp-policy-document

---
Schwartz et al. (2020) note that training costs in AI increased 300,000-fold from 2012 to 2017, with costs doubling every few months, and argue that focusing only on the attainment of state-of-the-art accuracy ignores the economic, environmental, and social costs of reaching the reported accuracy. They advocate research on Green AI - AI research that is more environmentally friendly and inclusive than traditional research, which they call Red AI. Specifically, they propose making efficiency a more common evaluation criterion for AI papers, alongside accuracy and related measures.

Hershcovich et al. (2022) focus specifically on environmental impact and propose a climate performance model card that can be used with only limited information about experiments and underlying computer hardware. At a minimum, authors are asked to report (a) whether the model is publicly available, (b) how much time it takes to train the final model, (c) how much time was spent on all experiments (including hyperparameter search), (d) what the total energy consumption was, and (e) at which location the computations were performed.

Liu et al. (2022) propose a new benchmark for efficient NLP models called ELUE (Efficient Language Understanding Evaluation) based on the concept of Pareto state of the art (Pareto SOTA), where a model is said to achieve Pareto SOTA if it achieves the best performance at a given cost level. The cost measures used in ELUE are the number of model parameters and the number of floating point operations (FLOPs), while performance measures vary depending on the task (sentiment analysis, natural language inference, paraphrase and textual similarity).

Treviso et al. (2022) provide a survey of current research on efficient methods for NLP, using a taxonomy based on different aspects or phases of the model life-cycle: data collection and preprocessing, model design, training (including pre-training and fine-tuning), inference, and model selection. Following Schwartz et al. (2020), they define efficiency as the cost of a model in relation to the results it produces. They observe that cost can be measured along multiple dimensions, such as computational, time-wise or environmental cost, and that using a single cost indicator can be misleading. They also emphasize the importance of separately characterizing different stages of the model life-cycle and acknowledge that properly measuring efficiency remains a challenge.

Dehghani et al. (2022) elaborate on the theme of potentially misleading efficiency characterizations by showing that some of the most commonly used cost indicators - number of model parameters, FLOPs, and throughput (msec/example) - can easily contradict each other when used to compare models and are therefore insufficient as standalone metrics. They again stress the importance of distinguishing training cost from inference cost, and point out that their relative importance may vary depending on context and use case. For example, training efficiency is crucial if a model needs to be retrained often, while inference efficiency may be critical in embedded applications.
## 3 The Concept of Efficiency in NLP

Efficiency is commonly defined as the ratio of useful output to total input:${}^{2}$

$$
r = \frac{P}{C} \tag{1}
$$

where $P$ is the amount of useful output or results, the product, and $C$ is the total cost of producing the results, often defined as the amount of resources consumed. A process or system can then be said to reach maximum efficiency if a specific desired result is obtained with the minimal possible amount of resources, or if the maximum amount of results is obtained from a given resource. More generally, maximum efficiency holds when it is not possible to increase the product without increasing the cost, nor to reduce the cost without reducing the product.
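Equation 1 and the maximum-efficiency (Pareto) condition can be made concrete in a short sketch; the model names and numbers below are invented for illustration:

```python
from dataclasses import dataclass


@dataclass
class Run:
    """One model/configuration with its product P and cost C."""
    name: str
    product: float  # useful output, e.g. a benchmark score
    cost: float     # resources consumed, e.g. FLOPs

    @property
    def efficiency(self) -> float:
        # Equation 1: r = P / C
        return self.product / self.cost


def dominates(a: Run, b: Run) -> bool:
    """a Pareto-dominates b: at least as good on both dimensions,
    strictly better on at least one."""
    return (a.product >= b.product and a.cost <= b.cost
            and (a.product > b.product or a.cost < b.cost))


# Invented example runs; "bad" is dominated by "small".
runs = [Run("big", 90.0, 100.0), Run("small", 85.0, 20.0), Run("bad", 80.0, 25.0)]
pareto = [r for r in runs if not any(dominates(o, r) for o in runs)]
```

Here `pareto` keeps `big` and `small`: neither can improve its product without increasing its cost, which is exactly the condition stated above.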

In order to apply this concept of efficiency to NLP, we first have to decide what counts as useful output or results - the product $P$ in Equation 1. We then need to figure out how to measure the cost $C$ in terms of resources consumed. Finally, we need to come up with relevant ways of relating $P$ to $C$ in different contexts of research, development and deployment, as well as aggregating the results into a life-cycle analysis. We will begin by discussing the last question, because it has a bearing on how we approach the other two.

---

${}^{2}$ Historically, the technical concept of efficiency arose in engineering in the nineteenth century, in the analysis of engine performance (thermodynamic efficiency); it was subsequently adopted in economics and social science by Vilfredo Pareto and others (Mitcham, 1994).

---
### 3.1 The Life-Cycle of an NLP Model
It is natural to divide the life-span of an NLP model into two phases: development and deployment. In the development phase, the model is created, optimized and validated for use. In the deployment phase, it is being used to process new language data in one or more applications. The development phase of an NLP model today typically includes several stages of training, some or all of which may be repeated multiple times in order to optimize various hyperparameters, as well as validation on held-out data to estimate model performance. The deployment phase is more homogeneous in that it mainly consists in using the model for inference on new data, although this may be interrupted by brief development phases to keep the model up to date.

As researchers, we naturally tend to focus on the development of new models, and many models developed in a research context may never enter the deployment phase at all. Since the development phase is typically also more computationally intensive than the deployment phase, it is not surprising that early papers concerned with the increasing energy consumption of NLP research, such as Strubell et al. (2019) and Schwartz et al. (2020), mainly focused on the development phase. Nevertheless, for models that are actually put to use in large-scale applications, resources consumed during the deployment phase may in the long run be much more important, and efficiency in the deployment phase is therefore an equally valid concern. This is also the focus of the recently proposed evaluation framework ELUE (Liu et al., 2022).

As will be discussed in the following sections, some proposed efficiency metrics are better suited for one of the two phases, although they can often be adapted to the other phase as well. However, the question is whether there is also a need for metrics that capture the combined resource usage at development and deployment, and how such metrics can be constructed. One reason for being interested in combined metrics is that there may be trade-offs between resources spent during development and deployment, respectively, so that spending more resources in development may lead to more efficient deployment (or vice versa). To arrive at a more holistic assessment of efficiency, we need to define efficiency metrics for deployment that also incorporate development costs. Before we propose such a metric, we need to discuss how to conceptualize the products and costs of NLP models.

Figure 1: Pareto front with model performance as the product and cost measured in FLOPs (Liu et al., 2022).
### 3.2 The Products of an NLP Model

What is the output that we want to produce at the lowest possible cost in NLP? Is it simply a model capable of processing natural language (as input or output or both)? Is it the performance of such a model on one or more NLP tasks? Or is it the actual output of such a model when processing natural language at a certain performance level? All of these answers are potentially relevant, and all have been considered in the literature, but they give rise to different notions of efficiency and require different metrics and measurement procedures.

Regarding the model itself as the product is of limited interest in most circumstances, as it does not take performance into account and only makes sense for the development phase. It is therefore more common to take model performance, as measured on some standard benchmark, as a relevant product quantity, which can be plotted as a function of some relevant cost to obtain a so-called Pareto front (with corresponding concepts of Pareto improvement and Pareto state of the art), as illustrated in Figure 1, reproduced from Liu et al. (2022).

One advantage of the product-as-performance model is that it can be applied to the deployment phase as well as the development phase, although the cost measurements are different in the two cases. For the development phase, we want to measure the total cost incurred to produce a model with a given performance, which depends on a multitude of factors, such as the size of the model, the number of hyperparameters that need to be tuned, and the data efficiency of the learning algorithm. For the deployment phase, we instead focus on the average cost of processing a typical input instance, such as a natural language sentence or a text document, independently of the development cost of the model. Separating the two phases in this way is perfectly adequate in many circumstances, but the fact that we measure total cost in one case and average cost in the other makes it impossible to combine the measurements into a global life-cycle analysis. To overcome this limitation, we need a notion of product that is not defined (only) in terms of model performance but also considers the actual output produced by a model.

If we take the product to be the amount of data processed by a model in the deployment phase, then we can integrate the development cost into the efficiency metric as a debt that is amortized during deployment. Under this model, the average cost of processing an input instance is not constant but decreases over the lifetime of a model, which allows us to capture possible trade-offs between development and deployment costs. For example, it may sometimes be worth investing more resources in the development phase if this leads to a lower deployment cost in the long run. Moreover, this model allows us to reason about how long a model needs to be in use to "break even" in this respect.

An important argument against the product-as-output model is that it is trivial (but uninteresting) to produce a maximally efficient model that produces random output. It thus seems that a relevant life-cycle analysis requires us to incorporate both model performance and model output into the notion of product. There are two obvious ways to do this, each with its own advantages and drawbacks. The first is to stipulate a minimum performance level that a model must reach to be considered valid and to treat all models reaching this threshold as ceteris paribus equivalent. The second is to use the performance level as a weighting function when calculating the product of a model. We will stick to the first and simpler approach in our case study later, but first we need to discuss the other quantity in the efficiency equation - the cost.

### 3.3 The Costs of an NLP Model

Schwartz et al. (2020) propose the following formula for estimating the computational cost of producing a result $R$:

$$
\operatorname{Cost}\left( R\right) \propto E \cdot D \cdot H \tag{2}
$$

where $E$ is the cost of executing the model on a single example, $D$ is the size of the training set (which controls how many times the model is executed during a training run), and $H$ is the number of hyperparameter experiments (which controls how many times the model is trained during model development). How can we understand this in the light of the previous discussion?

First, it should be noted that this is not an exact equality. The claim is only that the cost is proportional to the product of the factors on the right-hand side, but the exact cost may depend on other factors that may be hard to control. Depending on what type of cost is considered - a question that we will return to below - the estimate may be more or less exact. Second, the notion of a result is not really specified, but seems to correspond to our notion of product and is therefore open to the same variable interpretations as discussed in the previous section. Third, as stated above, the formula applies only to the development phase, where the result/product is naturally understood as the performance of the final model. To clarify this, we replace $R$ (for result) with ${P}_{P}$ (for product-as-performance) and add the subscript $T$ (for training) to the factors $E$ and $D$:

$$
\operatorname{DevCost}\left( {P}_{P}\right) \propto {E}_{T} \cdot {D}_{T} \cdot H \tag{3}
$$

Schwartz et al. (2020) go on to observe that a formula appropriate for inference during the deployment phase can be obtained by simply removing the factors $D$ and $H$ (and, in our new notation, changing ${E}_{T}$ to ${E}_{I}$, since the cost of processing a single input instance is typically not the same at training and inference time):

$$
\operatorname{DepCost}\left( {P}_{P}\right) \propto {E}_{I} \tag{4}
$$

This corresponds to the product-as-performance model for the deployment phase discussed in the previous section, based on the average cost of processing a typical input instance, and has the same limitations. It ignores the quantity of data processed by a model, and it is insensitive to the initial investment in terms of development cost. To overcome the first limitation, we can add back the factor $D$, now representing the amount of data processed during deployment (instead of the amount of training data), and replace product-as-performance $\left( {P}_{P}\right)$ by product-as-output $\left( {P}_{O}\right)$:

$$
\operatorname{DepCost}\left( {P}_{O}\right) \propto {E}_{I} \cdot {D}_{I} \tag{5}
$$

To overcome the second limitation, we have to add the development cost to the equation:

$$
\operatorname{DepCost}\left( {P}_{O}\right) \propto {E}_{T} \cdot {D}_{T} \cdot H + {E}_{I} \cdot {D}_{I} \tag{6}
$$

This allows us to quantify the product and cost as they develop over the lifetime of a model, and this is what we propose to call amortized efficiency based on total deployment cost, treating development cost as a debt that is amortized during the deployment phase.${}^{3}$ As already noted, this is only meaningful if we take model performance into account, for example by stipulating a threshold of minimal acceptable performance.
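Equation 6 also supports break-even reasoning directly. The following sketch uses invented function names and cost figures in arbitrary units; "distilled" pays a higher development debt up front but has a cheaper per-instance inference cost:

```python
def total_cost(e_t: float, d_t: int, h: int, e_i: float, d_i: int) -> float:
    """Equation 6: development debt (E_T * D_T * H) plus the
    inference cost accumulated after processing d_i instances."""
    return e_t * d_t * h + e_i * d_i


def amortized_cost(e_t: float, d_t: int, h: int, e_i: float, d_i: int) -> float:
    """Average cost per processed instance; decreases as d_i grows."""
    return total_cost(e_t, d_t, h, e_i, d_i) / d_i


def break_even(a: dict, b: dict) -> int:
    """Smallest deployment volume at which b's total cost drops below
    a's (assumes b has the lower per-instance inference cost)."""
    d_i = 1
    while total_cost(**b, d_i=d_i) >= total_cost(**a, d_i=d_i):
        d_i += 1
    return d_i


# Invented cost figures: "distilled" doubles the development debt
# (an extra distillation pass) but halves the inference cost.
base = dict(e_t=6.0, d_t=1_000, h=4, e_i=2.0)
distilled = dict(e_t=6.0, d_t=2_000, h=4, e_i=1.0)
```

With these numbers, the distilled model amortizes its extra development debt only after 24,001 processed instances, which is exactly the kind of trade-off the amortized metric is meant to expose.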

The discussion so far has focused on how to understand the notion of efficiency in NLP by relating different notions of product to an abstract notion of cost incurred over the different phases of the lifetime of a model. However, as noted in the introduction, this abstract notion of cost can be instantiated in many different ways, often in terms of a specific resource being consumed, and it may be more or less straightforward to obtain precise measures of the resource consumption. Before illustrating the different efficiency metrics with some real data, we will therefore discuss costs and resources that have been prominent in the recent literature and motivate the selection of costs included in our case study.

**Time and Space** The classical notion of efficient computation from complexity theory is based on the resources of time and space. Measuring cost in terms of time and space (or memory) is important for time-critical applications and/or memory-constrained settings, but in this context we are more interested in execution time and memory consumption than in asymptotic time and space complexity. For this reason, execution time remains one of the most often reported cost measures in the literature, even though it can be hard to compare across experimental settings because it is influenced by factors such as the underlying hardware, other jobs running on the same machine, and the number of cores used (Schwartz et al., 2020).

**Power and ${\mathrm{{CO}}}_{2}$** Electrical power consumption and the ensuing ${\mathrm{{CO}}}_{2}$ emission are costs that have been highlighted in the recent literature on resource-efficient NLP and AI. For example, Strubell et al. (2019) estimate the total power consumption for training NLP models as well as the corresponding ${\mathrm{{CO}}}_{2}$ emission. Hershcovich et al. (2022) propose that climate performance model cards for NLP models should minimally include information about total energy consumption and the location of the computation, and ideally also information about the energy mix at the location and the ${\mathrm{{CO}}}_{2}$ emission associated with different phases of model development and use. Against this, Schwartz et al. (2020) observe that, while both power consumption and carbon emission are highly relevant costs, they are difficult to compare across settings because they depend on hardware and local electricity infrastructure in a way that may vary over time even at the same location.

**Abstract Cost Measures** Given the practical difficulties of obtaining exact and comparable measurements of relevant costs like time, power consumption, and carbon emission, several researchers have advocated more abstract cost measures, which are easier to obtain and compare across settings while being sufficiently correlated with the other costs that we care about. One such measure is model size, often expressed as the number of parameters, which is independent of the underlying hardware but correlates with memory consumption. However, as observed by Schwartz et al. (2020), since different models and algorithms make different use of their parameters, model size is not always strongly correlated with costs like execution time, power consumption, and carbon emission. They therefore advocate the number of floating point operations (FLOPs) as the best abstract cost measure, arguing that it has the following advantages compared to other measures: (a) it directly computes the amount of work done by the running machine when executing a specific instance of a model and is thus tied to the amount of energy consumed; (b) it is agnostic to the hardware on which the model is run, which facilitates fair comparison between different approaches; (c) unlike asymptotic time complexity, it also considers the amount of work done at each time step. They acknowledge that it also has limitations, such as ignoring memory consumption and model implementation. Using FLOPs to measure computation cost has emerged as perhaps the most popular approach in the community, and it has been shown empirically to correlate well with energy consumption (Axberg, 2022).

**Data** The amount of data (labeled or unlabeled) needed to train a given model and/or reach a certain performance is a relevant cost measure for several reasons. In AI in general, if we can make models and algorithms more data-efficient, then they will ceteris paribus be more time- and energy-efficient. In NLP specifically, it will in addition benefit low-resource languages, for which both data and computation are scarce resources.

---

${}^{3}$ Note that we can also use the notion of total deployment cost to compare the Pareto efficiency of different models at different points in time (under a product-as-performance model) by computing average deployment cost in a way that is sensitive to development cost and lifetime usage of a model.

---

In conclusion, no single cost metric captures all we care about, and any single metric can therefore be misleading on its own. In our illustrative case study, we include three of the most important metrics: execution time, power consumption, and FLOPs.

## 4 Case Study

To illustrate the different conceptualizations of resource-efficiency discussed in previous sections, we present a case study on developing and deploying a language model for a specific NLP task using different combinations of fine-tuning and knowledge distillation. The point of the study is not to advance the state of the art in resource-efficient NLP, but to show how different conceptualizations support the comparison of models of different sizes, at different performance levels, and with different development and deployment costs.

### 4.1 Overall Experimental Design

Our goal is to apply the Swedish pre-trained language model KB-BERT (Malmsten et al., 2020) to Named Entity Recognition (NER), using data from SUCX 3.0 (Språkbanken, 2022) for fine-tuning and evaluation. We consider three scenarios:

- Fine-tuning (FT): The standard fine-tuning approach is followed, with a linear layer added on top of KB-BERT. The model is trained on the SUCX 3.0 training set until the validation loss no longer decreases, for up to 10 epochs.

- Task-specific distillation (TS): We distill the fine-tuned KB-BERT model to a 6-layer BERT student model. The student model is trained on the SUCX 3.0 training set using the teacher predictions on this set as ground truth.

- Task-agnostic distillation (TA): We distill KB-BERT to a 6-layer BERT student model using the task-agnostic distillation objective proposed by Sanh et al. (2020). We train on deduplicated Swedish Wikipedia data by averaging three kinds of losses, for masked language modelling, knowledge distillation, and cosine distance between student and teacher hidden states. The student model is subsequently fine-tuned on the SUCX 3.0 training set with the method described above.

All three fine-tuned models are evaluated on the SUCX 3.0 test set. Statistics about the datasets can be found in Appendix A.1, while details about our experiments are given in Appendix A.2. We measure model performance using the F1 score, which is the standard evaluation metric for NER, and model output in number of words; and we measure three different types of cost during development and deployment: execution time, power consumption and FLOPs. Based on these basic measures, we derive different efficiency metrics for model comparison, as discussed in Section 4.4.

### 4.2 Setup Details

The TextBrewer framework (Yang et al., 2020) is used for the distillation experiments, while the Huggingface Transformers${}^{4}$ library is used for fine-tuning and inference.${}^{5}$ All experiments are executed on an Nvidia DGX-1 server with 8 Tesla V100 SXM2 32GB GPUs. In order to get measurements under realistic conditions, we run different stages in parallel on different GPUs, while blocking other processes from the system to avoid external interference. Each experimental stage is repeated 3 times, and measurements of execution time and power consumption are averaged.${}^{6}$ The different cost types are measured as follows:

- Execution time: We average the duration of the individual Python jobs for each experimental stage.

- Power consumption: We measure power consumption for all 4 PSUs of the server as well as individual GPU power consumption, following Gustafsson et al. (2018). Based on snapshots of measured power at individual points in time, we calculate the area under the curve to get the power consumption in Wh. Since we run the task-agnostic distillation using distributed data parallelism on two GPUs, we sum the consumption of both GPUs for each TA run.

- FLOPs: We estimate the number of FLOPs required for each stage using the estimation formulas proposed by Kaplan et al. (2020), for training (7) and inference (8):

$$
{\mathrm{FLOP}}_{T} = 6 \cdot n \cdot N \cdot S \cdot B \tag{7}
$$

$$
{\mathrm{FLOP}}_{I} = 2 \cdot n \cdot N \cdot S \cdot B \tag{8}
$$

where $n$ is the sequence length, $N$ is the number of model parameters, $S$ is the number of training/inference steps, and $B$ is the batch size. The cost for fine-tuning a model is given by ${\mathrm{FLOP}}_{T}$, while the evaluation cost is ${\mathrm{FLOP}}_{I}$. For distillation, we need to sum ${\mathrm{FLOP}}_{T}$ for the student model and ${\mathrm{FLOP}}_{I}$ for the teacher model (whose predictions are used to train the student model).

---

${}^{4}$ https://huggingface.co/docs/transformers/index

${}^{5}$ More information on hyperparameters and data set sizes can be found in Appendix A.

${}^{6}$ Since we repeat stages 3 times for every model instance, task-specific distillation, fine-tuning of the distilled model, and evaluation of FT are repeated 9 times, while evaluation of TS and TA is repeated 27 times.

---

<table><tr><td rowspan="2"/><td colspan="3">Distillation Stage</td><td colspan="3">Fine-Tuning Stage</td><td colspan="3">Evaluation Stage</td><td rowspan="2">$\mathbf{{F1}}$</td></tr><tr><td>Time</td><td>Power</td><td>FLOPs</td><td>Time</td><td>Power</td><td>FLOPs</td><td>Time</td><td>Power</td><td>FLOPs</td></tr><tr><td>FT</td><td>-</td><td>-</td><td>-</td><td>0:35:17</td><td>141.1</td><td>${2.48} \times {10}^{16}$</td><td>0:01:32</td><td>5.2</td><td>${2.59} \times {10}^{15}$</td><td>87.3</td></tr><tr><td>TS</td><td>0:18:30</td><td>77.1</td><td>${1.64} \times {10}^{16}$</td><td>0:35:17</td><td>141.1</td><td>${2.48} \times {10}^{16}$</td><td>0:01:09</td><td>3.1</td><td>${1.71} \times {10}^{15}$</td><td>84.9</td></tr><tr><td>TA</td><td>13:06:59</td><td>6848.9</td><td>${3.65} \times {10}^{17}$</td><td>0:18:53</td><td>74.4</td><td>${1.69} \times {10}^{16}$</td><td>0:01:15</td><td>3.3</td><td>${1.71} \times {10}^{15}$</td><td>77.6</td></tr></table>

Table 1: Performance (F1) and cost measurements (Time: hh:mm:ss, Power: Wh, FLOPs) for different stages (Distillation, Fine-tuning, Evaluation) and different development scenarios (Fine-tuning: FT, Task-specific distillation: TS, Task-agnostic distillation: TA).

Figure 2: Pareto efficiency for the development phase (top) and the deployment phase (bottom) based on three different cost measures: execution time (left), power consumption (center), and FLOPs (right).
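The estimation formulas (7) and (8) are easy to write out directly. The helper names and example figures below are ours, not the measurement code used in the experiments:

```python
def flops_train(n: int, N: int, S: int, B: int) -> int:
    """Equation 7: training FLOPs = 6 * n * N * S * B, where
    n = sequence length, N = parameter count, S = steps, B = batch size."""
    return 6 * n * N * S * B


def flops_infer(n: int, N: int, S: int, B: int) -> int:
    """Equation 8: inference FLOPs = 2 * n * N * S * B."""
    return 2 * n * N * S * B


def flops_distill(n: int, N_student: int, N_teacher: int, S: int, B: int) -> int:
    """Distillation cost: training FLOPs for the student plus inference
    FLOPs for the teacher, whose predictions supervise the student."""
    return flops_train(n, N_student, S, B) + flops_infer(n, N_teacher, S, B)
```

Note the fixed 3:1 ratio between training and inference FLOPs per processed token, which follows directly from the two formulas.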

### 4.3 Basic Results

Table 1 shows basic measurements of performance and costs for different scenarios and stages. We see that the fine-tuned KB-BERT model (FT) reaches an F1 score of 87.3; task-specific distillation to a smaller model (TS) gives a score of 84.9, while fine-tuning after task-agnostic distillation (TA) only reaches 77.6 in this experiment. When comparing costs, we see that task-agnostic distillation is by far the most expensive stage. Compared to task-specific distillation, the execution time is more than 40 times longer, the power consumption almost 100 times greater, and the number of FLOPs more than 20 times greater. Although the fine-tuning costs are smaller for the distilled TA model, the reduction is only about 50% for execution time and power consumption and about 30% for FLOPs.
We also investigate whether power consumption can be predicted from the number of FLOPs, as this is a common argument in the literature for preferring the simpler FLOPs calculations over the more involved measurements of actual power consumption. We find an extremely strong and significant linear correlation between the two costs (Pearson $r = 0.997$, $p \approx 0$). Our experiments thus corroborate earlier claims that FLOPs is a convenient cost measure that correlates well with power consumption (Schwartz et al., 2020; Axberg, 2022). However, it is worth noting that the GPU power consumption, which is what is reported in Table 1 and which can thus be estimated from the FLOPs count, is only 71.7% of the total power consumption of the server including all 4 PSUs.



Figure 3: Amortized efficiency of the deployment phase over lifetime, based on three different cost measures: execution time (left), power consumption (center), and FLOPs (right).
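As a sketch of how the correlation is computed (here we simply pair the FLOPs and power columns of Table 1; the analysis in the text is based on finer-grained measurements), the Pearson coefficient can be derived directly:

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# (FLOPs, Wh) pairs read off Table 1, all stages of all three scenarios.
flops = [2.48e16, 2.59e15, 1.64e16, 2.48e16, 1.71e15, 3.65e17, 1.69e16, 1.71e15]
power = [141.1, 5.2, 77.1, 141.1, 3.1, 6848.9, 74.4, 3.3]
print(round(pearson_r(flops, power), 3))
```

Even on these coarse table-level pairs the correlation comes out close to 1.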
### 4.4 Measuring and Comparing Efficiency
So how do our three models compare with respect to resource-efficiency? The answer is that this depends on what concept of efficiency we apply and which part of the life-cycle we consider. Figure 2 plots product-as-performance as a function of cost separately for the development phase and the deployment phase, corresponding to Equations (3) and (4), which allows us to compare Pareto efficiency. Considering only the development phase, the FT model is clearly optimal, since it has both the highest performance and the lowest cost of all models. Considering instead the deployment phase, the FT model still has the best performance, but the other two models have lower (average) inference cost. The TA model is still suboptimal, since it gives lower performance at the same cost as the TS model. However, FT and TS are both optimal with respect to Pareto efficiency, since neither is outperformed by a model at the same cost level (nor has higher processing cost than any model at the same performance level).
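Pareto optimality in this sense is mechanical to check: a model is suboptimal if some other model offers higher performance at equal or lower cost, or equal performance at lower cost. A sketch using the deployment-phase numbers from Table 1 (evaluation FLOPs as cost, F1 as performance):

```python
def pareto_optimal(models):
    """Return the names of models not dominated by any other model.
    A model is dominated if another has lower-or-equal cost and strictly
    higher performance, or strictly lower cost at equal-or-higher performance."""
    front = []
    for name, cost, perf in models:
        dominated = any(
            (c <= cost and p > perf) or (c < cost and p >= perf)
            for n, c, p in models if n != name
        )
        if not dominated:
            front.append(name)
    return front

# Deployment-phase numbers from Table 1: (evaluation FLOPs, F1).
models = [("FT", 2.59e15, 87.3), ("TS", 1.71e15, 84.9), ("TA", 1.71e15, 77.6)]
print(pareto_optimal(models))  # ['FT', 'TS']: TA is dominated by TS
```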
For a more holistic perspective on life-time efficiency, we can switch to a product-as-output model and plot deployment efficiency as a function of both the initial development cost and the average inference cost for processing new data, corresponding to Equation (6) and our newly proposed notion of amortized efficiency. This is depicted in Figure 3, which compares the FT and TS model (disregarding the clearly suboptimal TA model). We see that, although the FT model has an initial advantage because it has not incurred the cost for distillation, the TS model eventually catches up and becomes more time-efficient after processing about 4B tokens and more energy-efficient after processing about 127M tokens. It is however important to keep in mind that this comparison does not take performance into account, so we again need to decide what increase in cost we are willing to pay for a given improvement in performance, although the increase in this case is sensitive to the expected life-time of the models.
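The energy break-even point can be reconstructed from Table 1 alone, under two assumptions of ours: the TS development cost is teacher fine-tuning plus distillation, and the per-token inference cost is evaluation energy divided by the roughly 3.46M test-set tokens (13,504 examples at sequence length 256):

```python
def break_even_tokens(dev_a, per_tok_a, dev_b, per_tok_b):
    """Tokens processed before model B (costlier development, cheaper
    inference) overtakes model A in total amortized cost, i.e. the n
    solving dev_a + n * per_tok_a = dev_b + n * per_tok_b."""
    assert dev_b > dev_a and per_tok_b < per_tok_a
    return (dev_b - dev_a) / (per_tok_a - per_tok_b)

test_tokens = 13_504 * 256                            # tokens in the test set
dev_ft, per_tok_ft = 141.1, 5.2 / test_tokens         # Wh, Wh/token (Table 1)
dev_ts, per_tok_ts = 141.1 + 77.1, 3.1 / test_tokens  # fine-tuning + distillation
print(f"{break_even_tokens(dev_ft, per_tok_ft, dev_ts, per_tok_ts):.3g}")
# ~1.27e8 tokens, consistent with the ~127M figure in the text
```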
## 5 Conclusion
In this paper, we have discussed the concept of resource-efficiency in NLP, arguing that it cannot be reduced to a single definition and that we need a richer conceptual framework to reason about different aspects of efficiency. As a complement to the established notion of Pareto efficiency, which separates development and deployment under a product-as-performance model, we have proposed the notion of amortized efficiency, which enables a life-cycle analysis including both development and deployment under a product-as-output model. We have illustrated both notions in a simple case study, which we hope can serve as inspiration for further discussions of resource-efficiency in NLP. Future work should investigate more sophisticated ways of incorporating performance level into the notion of amortized efficiency.
## References
Tom Axberg. 2022. Deriving a natural language processing inference cost model with greenhouse gas accounting: Towards a sustainable usage of machine learning. Master's thesis, KTH Royal Institute of Technology.
Mostafa Dehghani, Anurag Arnab, Lucas Beyer, Ashish Vaswani, and Yi Tay. 2022. The efficiency misnomer. In Proceedings of the Tenth International Conference on Learning Representations (ICLR).
Jonas Gustafsson, Sebastian Fredriksson, Magnus Nilsson-Mäki, Daniel Olsson, Jeffrey Sarkinen, Henrik Niska, Nicolas Seyvet, Tor Björn Minde, and Jonathan Summers. 2018. A demonstration of monitoring and measuring data centers for energy efficiency using opensource tools. In Proceedings of the Ninth International Conference on Future Energy Systems, pages 506-512.
Daniel Hershcovich, Nicolas Webersinke, Mathias Kraus, Julia Anna Bingler, and Markus Leippold. 2022. Towards climate awareness in NLP research. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 2480-2494.
Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. arXiv:2001.08361.
Xiangyang Liu, Tianxiang Sun, Junliang He, Jiawen Wu, Lingling Wu, Xinyu Zhang, Hao Jiang, Zhao Cao, Xuanjing Huang, and Xipeng Qiu. 2022. Towards efficient NLP: A standard evaluation and a strong baseline. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3288-3303. Association for Computational Linguistics.
Martin Malmsten, Love Börjeson, and Chris Haffenden. 2020. Playing with words at the National Library of Sweden - Making a Swedish BERT. arXiv:2007.01658.
Carl Mitcham. 1994. Thinking through Technology: The Path between Engineering and Philosophy. The University of Chicago Press.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2020. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv:1910.01108.

Roy Schwartz, Jesse Dodge, Noah A. Smith, and Oren Etzioni. 2020. Green AI. Communications of the ACM, 63(12):54-63.
Språkbanken. 2022. SUCX 3.0: Stockholm-Umeå corpus 3.0 scrambled.
Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and policy considerations for deep learning in NLP. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3645-3650.
Marcos Treviso, Tianchu Ji, Ji-Ung Lee, Betty van Aken, Qingqing Cao, Manuel R. Ciosici, Michael Hassid, Kenneth Heafield, Sara Hooker, Pedro H. Martins, André F. T. Martins, Peter Milder, Colin Raffel, Edwin Simpson, Noam Slonim, Niranjan Balasubramanian, Leon Derczynski, and Roy Schwartz. 2022. Efficient methods for natural language processing: A survey. arXiv:2209.00099.
Ziqing Yang, Yiming Cui, Zhipeng Chen, Wanxiang Che, Ting Liu, Shijin Wang, and Guoping Hu. 2020. TextBrewer: An Open-Source Knowledge Distillation Toolkit for Natural Language Processing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, Online. Association for Computational Linguistics.
## A Experimental Details
### A.1 Data Sets
The SUCX 3.0 dataset (simple_lower_mix version)${}^{7}$ is used for fine-tuning, task-specific distillation and evaluation. The dataset splits are the following: 43,126 examples in the training set, 10,772 in the validation set and 13,504 examples in the test set.
For task-agnostic distillation, we are using a deduplicated version of Swedish Wikipedia, with the following dataset split: 2,552,479 sentences in the training set and 25,783 sentences in the validation set.
### A.2 Experiment Details
The base model in our experiments is KB-BERT-cased.${}^{8}$ The hyperparameters used for fine-tuning and distillation are presented in Table 2. In the fine-tuning experiments, early stopping is used and the best performing model on the validation set is saved. The task-agnostic distillation experiments are performed on two GPUs, using the distributed data parallel functionality of PyTorch, while gradient accumulation steps are set to 2.
---
${}^{7}$ https://huggingface.co/datasets/KBLab/sucx3_ner
${}^{8}$ https://huggingface.co/KB/bert-base-swedish-cased
---
<table><tr><td/><td>Batch size</td><td>Training epochs</td><td>Sequence length</td><td>Learning rate</td><td>Warm-up steps</td></tr><tr><td>Fine-tuning</td><td>32</td><td>10</td><td>256</td><td>3e-5</td><td>404</td></tr><tr><td>Task-specific distillation</td><td>32</td><td>2</td><td>256</td><td>5e-5</td><td>260</td></tr><tr><td>Task-agnostic distillation</td><td>8</td><td>0.75</td><td>256</td><td>1e-4</td><td>3750</td></tr><tr><td>Evaluation</td><td>32</td><td>-</td><td>256</td><td>-</td><td>-</td></tr></table>
Table 2: Hyperparameters for fine-tuning and distillation.
NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/-O-A_6M_oi/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
§ ON THE CONCEPT OF RESOURCE-EFFICIENCY IN NLP
Anonymous ACL submission
§ ABSTRACT
Resource-efficiency is a growing concern in the NLP community. But what are the resources we care about and why? How do we measure efficiency in a way that is reliable and relevant? And how do we balance efficiency and other important concerns? Based on a review of the emerging literature on the subject, we discuss different ways of conceptualizing efficiency in terms of product and cost, using a simple case study on fine-tuning and knowledge distillation for illustration. We propose a novel metric of amortized efficiency that is better suited for life-cycle analysis than existing metrics.
§ 1 INTRODUCTION
Resource-efficiency has recently become a more prominent concern in the NLP community. The Association for Computational Linguistics (ACL) has issued an Efficient NLP Policy Document ${}^{1}$ and most conferences now have a special track devoted to efficient methods in NLP. The major reason for this increased attention to efficiency can be found in the perceived negative effects of scaling NLP models (and AI models more generally) to unprecedented sizes, which increases energy consumption and carbon footprint as well as raises barriers to participation in NLP research for economic reasons (Strubell et al., 2019; Schwartz et al., 2020). These considerations are important and deserve serious attention, but they are not the only reasons to care about resource-efficiency. Traditional concerns like guaranteeing that models can be executed with sufficient speed to enable real-time processing, or with sufficiently low memory footprint to fit on small devices, will continue to be important as well.
Resource-efficiency is however a complex and multifaceted problem. First, there are many relevant types of resources, which interact in complex (and sometimes antagonistic) ways. For example, adding more computational resources may improve time efficiency but increase energy consumption. For some of these resources, obtaining relevant and reliable measurements can also be a challenge, especially if the consumption depends on both software and hardware properties. Furthermore, the life-cycle of a typical NLP model can be divided into different phases, like pre-training, fine-tuning and (long-term) inference, which often have very different resource requirements but nevertheless need to be related to each other in order to obtain a holistic view of total resource consumption. Since one and the same (pre-trained) model can be fine-tuned and deployed in multiple instances, it may also be necessary to amortize the training cost in order to arrive at a fair overall assessment.
To do justice to this complexity, we must resist the temptation to reduce the notion of resource-efficiency to a single metric or equation. Instead, we need to develop a conceptual framework that supports reasoning about the interaction of different resources while taking the different phases of the life-cycle into account. The emerging literature on the subject shows a growing awareness of this need, and there are a number of promising proposals that address parts of the problem. In this paper, we review some of these proposals and discuss issues that arise when trying to define and measure efficiency in relation to NLP models. We specifically address the need for a holistic assessment of efficiency over the entire life-cycle of a model and propose a novel notion of amortized efficiency. All notions and metrics are illustrated in a small case study on fine-tuning and knowledge distillation.
§ 2 RELATED WORK
Strubell et al. (2019) were among the first to discuss the increasing resource requirements in NLP. They provide estimates of the energy needed to train a number of popular NLP models (T2T, ELMo, BERT, GPT-2). Based on those estimates, they also estimate the cost in dollars and the $\mathrm{CO}_2$ emission associated with model training. In addition to the cost of training a single model, they provide a case study of the additional (much larger) costs involved in hyperparameter tuning and model fine-tuning.
${}^{1}$ https://www.aclweb.org/portal/content/efficient-nlp-policy-document
Schwartz et al. (2020) note that training costs in AI increased 300,000 times from 2012 to 2017, with costs doubling every few months, and argue that focusing only on the attainment of state-of-the-art accuracy ignores the economic, environmental, or social cost of reaching the reported accuracy. They advocate research on Green AI - AI research that is more environmentally friendly and inclusive than traditional research, which they call Red AI. Specifically, they propose making efficiency a more common evaluation criterion for AI papers alongside accuracy and related measures.
Hershcovich et al. (2022) focus specifically on environmental impact and propose a climate performance model card that can be used with only limited information about experiments and underlying computer hardware. At a minimum, authors are asked to report (a) whether the model is publicly available, (b) how much time it takes to train the final model, (c) how much time was spent on all experiments (including hyperparameter search), (d) what the total energy consumption was, and (e) at which location the computations were performed.
Liu et al. (2022) propose a new benchmark for efficient NLP models called ELUE (Efficient Language Understanding Evaluation) based on the concept of Pareto state of the art (Pareto SOTA), where a model is said to achieve Pareto SOTA if it achieves the best performance at a given cost level. The cost measures used in ELUE are number of model parameters and number of floating point operations (FLOPs), while performance measures vary depending on the task (sentiment analysis, natural language inference, paraphrase and textual similarity).
Treviso et al. (2022) provide a survey of current research on efficient methods for NLP, using a taxonomy based on different aspects or phases of the model life-cycle: data collection and preprocessing, model design, training (including pre-training and fine-tuning), inference, and model selection. Following Schwartz et al. (2020), they define efficiency as the cost of a model in relation to the results it produces. They observe that cost can be measured along multiple dimensions, such as computational, time-wise or environmental cost, and that using a single cost indicator can be misleading. They also emphasize the importance of separately characterizing different stages of the model life-cycle and acknowledge that properly measuring efficiency remains a challenge.
Dehghani et al. (2022) elaborate on the theme of potentially misleading efficiency characterizations by showing that some of the most commonly used cost indicators - number of model parameters, FLOPs, and throughput (msec/example) - can easily contradict each other when used to compare models and are therefore insufficient as standalone metrics. They again stress the importance of distinguishing training cost from inference cost, and point out that their relative importance may vary depending on context and use case. For example, training efficiency is crucial if a model needs to be retrained often, while inference efficiency may be critical in embedded applications.
§ 3 THE CONCEPT OF EFFICIENCY IN NLP
Efficiency is commonly defined as the ratio of useful output to total input:${}^{2}$

$$
r = \frac{P}{C} \tag{1}
$$
where $P$ is the amount of useful output or results, the product, and $C$ is the total cost of producing the results, often defined as the amount of resources consumed. A process or system can then be said to reach maximum efficiency if a specific desired result is obtained with the minimal possible amount of resources, or if the maximum amount of results is obtained from a given resource. More generally, maximum efficiency holds when it is not possible to increase the product without increasing the cost, nor reduce the cost without reducing the product.
In order to apply this concept of efficiency to NLP, we first have to decide what counts as useful output or results - the product $P$ in Equation 1. We then need to figure out how to measure the cost $C$ in terms of resources consumed. Finally, we need to come up with relevant ways of relating $P$ to $C$ in different contexts of research, development and deployment, as well as aggregating the results into a life-cycle analysis. We will begin by discussing the last question, because it has a bearing on how we approach the other two.
${}^{2}$ Historically, the technical concept of efficiency arose in engineering in the nineteenth century, in the analysis of engine performance (thermodynamic efficiency); it was subsequently adopted in economy and social science by Vilfredo Pareto and others (Mitcham, 1994).
§ 3.1 THE LIFE-CYCLE OF AN NLP MODEL
It is natural to divide the life-span of an NLP model into two phases: development and deployment. In the development phase, the model is created, optimized and validated for use. In the deployment phase, it is being used to process new language data in one or more applications. The development phase of an NLP model today typically includes several stages of training, some or all of which may be repeated multiple times in order to optimize various hyperparameters, as well as validation on held-out data to estimate model performance. The deployment phase is more homogeneous in that it mainly consists in using the model for inference on new data, although this may be interrupted by brief development phases to keep the model up to date.
As researchers, we naturally tend to focus on the development of new models and many models developed in a research context may never enter the deployment phase at all. Since the development phase is typically also more computationally intensive than the deployment phase, it is therefore not surprising that early papers concerned with the increasing energy consumption of NLP research, such as Strubell et al. (2019) and Schwartz et al. (2020), mainly focused on the development phase. Nevertheless, for models that are actually put to use in large-scale applications, resources consumed during the deployment phase may in the long run be much more important, and efficiency in the deployment phase is therefore an equally valid concern. This is also the focus of the recently proposed evaluation framework ELUE (Liu et al., 2022).
As will be discussed in the following sections, some proposed efficiency metrics are better suited for one of the two phases, although they can often be adapted to the other phase as well. However, the question is whether there is also a need for metrics that capture the combined resource usage at development and deployment, and how such metrics can be constructed. One reason for being interested in combined metrics is that there may be trade-offs between resources spent during development and deployment, respectively, so that spending more resources in development may lead to more efficient deployment (or vice versa). To arrive at a more holistic assessment of efficiency, we need to define efficiency metrics for deployment that also incorporate development costs. Before we propose such a metric, we need to discuss how to conceptualize products and costs of NLP models.
Figure 1: Pareto front with model performance as the product and cost measured in FLOPs (Liu et al., 2022).
§ 3.2 THE PRODUCTS OF AN NLP MODEL
What is the output that we want to produce at the lowest possible cost in NLP? Is it simply a model capable of processing natural language (as input or output or both)? Is it the performance of such a model on one or more NLP tasks? Or is it the actual output of such a model when processing natural language at a certain performance level? All of these answers are potentially relevant, and have been considered in the literature, but they give rise to different notions of efficiency and require different metrics and measurement procedures.
Regarding the model itself as the product is of limited interest in most circumstances, as it does not take performance into account and only makes sense for the development phase. It is therefore more common to take model performance, as measured on some standard benchmark, as a relevant product quantity, which can be plotted as a function of some relevant cost to obtain a so-called Pareto front (with corresponding concepts of Pareto improvement and Pareto state of the art), as illustrated in Figure 1, reproduced from Liu et al. (2022).
One advantage of the product-as-performance model is that it can be applied to the deployment phase as well as the development phase, although the cost measurements are different in the two cases. For the development phase, we want to measure the total cost incurred to produce a model with a given performance, which depends on a multitude of factors, such as the size of the model, the number of hyperparameters that need to be tuned, and the data efficiency of the learning algorithm. For the deployment phase, we instead focus on the average cost of processing a typical input instance, such as a natural language sentence or a text document, independently of the development cost of the model. Separating the two phases in this way is perfectly adequate in many circumstances, but the fact that we measure total cost in one case and average cost in the other makes it impossible to combine the measurements into a global life-cycle analysis. To overcome this limitation, we need a notion of product that is not defined (only) in terms of model performance but also considers the actual output produced by a model.
If we take the product to be the amount of data processed by a model in the deployment phase, then we can integrate the development cost in the efficiency metric as a debt that is amortized during deployment. Under this model, the average cost of processing an input instance is not constant but decreases over the life-time of a model, which allows us to capture possible trade-offs between development and deployment costs. For example, it may sometimes be worth investing more resources into the development phase if this leads to a lower deployment cost in the long run. Moreover, this model allows us to reason about how long a model needs to be in use to "break even" in this respect.
An important argument against the product-as-output model is that it is trivial (but uninteresting) to produce a maximally efficient model that produces random output. It thus seems that a relevant life-cycle analysis requires us to incorporate both model performance and model output into the notion of product. There are two obvious ways to do this, each with its own advantages and drawbacks. The first is to stipulate a minimum performance level that a model must reach to be considered valid and to treat all models reaching this threshold as ceteris paribus equivalent. The second way is to use the performance level as a weighting function when calculating the product of a model. We will stick to the first and simpler approach in our case study later, but first we need to discuss the other quantity in the efficiency equation - the cost.
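The two options can be sketched as follows (hypothetical helper functions of ours; the threshold value is whatever minimum performance one stipulates):

```python
def product_thresholded(output, performance, threshold):
    """Option 1: only models reaching the threshold count at all; above it,
    models are treated as equivalent and the product is the raw output."""
    return output if performance >= threshold else 0.0

def product_weighted(output, performance):
    """Option 2: weight the output by the performance level in [0, 1]."""
    return output * performance

print(product_thresholded(1_000_000, 0.849, 0.80))  # counts in full
print(product_weighted(1_000_000, 0.849))           # scaled by performance
```

The thresholded variant is simpler but discontinuous at the threshold; the weighted variant rewards every performance gain but requires choosing a meaningful 0-1 scale.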
§ 3.3 THE COSTS OF AN NLP MODEL
Schwartz et al. (2020) propose the following formula for estimating the computational cost of producing a result $R$ :
$$
\operatorname{Cost}(R) \propto E \cdot D \cdot H \tag{2}
$$
where $E$ is the cost of executing the model on a single example, $D$ is the size of the training set (which controls how many times the model is executed during a training run), and $H$ is the number of hyperparameter experiments (which controls how many times the model is trained during model development). How can we understand this in the light of the previous discussion?
First, it should be noted that this is not an exact equality. The claim is only that the cost is proportional to the product of factors on the right hand side, but the exact cost may depend on other factors that may be hard to control. Depending on what type of cost is considered - a question that we will return to below - the estimate may be more or less exact. Second, the notion of a result is not really specified, but seems to correspond to our notion of product and is therefore open to the same variable interpretations as discussed in the previous section. Third, as stated above, the formula applies only to the development phase, where the result/product is naturally understood as the performance of the final model. To clarify this, we replace $R$ (for result) with ${P}_{P}$ (for product-as-performance) and add the subscript $T$ (for training) to the factors $E$ and $D$ :
$$
\operatorname{DevCost}\left( {P}_{P}\right) \propto {E}_{T} \cdot {D}_{T} \cdot H \tag{3}
$$
Schwartz et al. (2020) go on to observe that a formula appropriate for inference during the deployment phase can be obtained by simply removing the factors $D$ and $H$ (and, in our new notation, changing ${E}_{T}$ to ${E}_{I}$ since the cost of processing a single input instance is typically not the same at training and inference time):
$$
\operatorname{DepCost}\left( {P}_{P}\right) \propto {E}_{I} \tag{4}
$$
This corresponds to the product-as-performance model for the deployment phase discussed in the previous section, based on the average cost of processing a typical input instance, and has the same limitations. It ignores the quantity of data processed by a model, and it is insensitive to the initial investment in terms of development cost. To overcome the first limitation, we can add back the factor $D$ , now representing the amount of data processed during deployment (instead of the amount of training data), and replace product-as-performance $\left( {P}_{P}\right)$ by product-as-output $\left( {P}_{O}\right)$ :
$$
\operatorname{DepCost}\left( {P}_{O}\right) \propto {E}_{I} \cdot {D}_{I} \tag{5}
$$
To overcome the second limitation, we have to add the development cost to the equation:
$$
\operatorname{DepCost}\left( {P}_{O}\right) \propto {E}_{T} \cdot {D}_{T} \cdot H + {E}_{I} \cdot {D}_{I} \tag{6}
$$
This allows us to quantify the product and cost as they develop over the lifetime of a model, and this is what we propose to call amortized efficiency based on total deployment cost, treating development cost as a debt that is amortized during the deployment phase.${}^{3}$ As already noted, this is only meaningful if we take model performance into account, for example by stipulating a threshold of minimal acceptable performance.
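The amortized view can be sketched numerically. In the minimal sketch below, total deployment cost is computed per Equation (6), with all cost figures being hypothetical placeholders rather than measurements from this paper:

```python
# Sketch of amortized efficiency (Equation 6). All cost figures are
# hypothetical placeholders, not measurements from this paper.

def total_deployment_cost(dev_cost, inference_cost_per_item, items):
    # Development cost is a debt amortized over the deployment phase.
    return dev_cost + inference_cost_per_item * items

# A model that is cheap to develop but expensive per item at inference,
# versus a distilled model that pays an extra development cost up front.
base      = {"dev": 100.0, "inf": 2.0}
distilled = {"dev": 180.0, "inf": 1.2}

# Break-even point: dev_b + inf_b * x = dev_d + inf_d * x
break_even = (distilled["dev"] - base["dev"]) / (base["inf"] - distilled["inf"])
print(break_even)  # 100.0: beyond this output, the distilled model is cheaper
```

The break-even computation mirrors the lifetime comparison made later in the case study: the distilled model's development debt is repaid once enough data has been processed at its lower inference cost.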
The discussion so far has focused on how to understand the notion of efficiency in NLP by relating different notions of product to an abstract notion of cost incurred over the different phases of the lifetime of a model. However, as noted in the introduction, this abstract notion of cost can be instantiated in many different ways, often in terms of a specific resource being consumed, and it may be more or less straightforward to obtain precise measures of the resource consumption. Before illustrating the different efficiency metrics with some real data, we will therefore discuss costs and resources that have been prominent in the recent literature and motivate the selection of costs included in our case study.
**Time and Space** The classical notion of efficient computation from complexity theory is based on the resources of time and space. Measuring cost in terms of time and space (or memory) is important for time-critical applications and/or memory-constrained settings, but in this context we are more interested in execution time and memory consumption than in asymptotic time and space complexity. For this reason, execution time remains one of the most often reported cost measures in the literature, even though it can be hard to compare across experimental settings because it is influenced by factors such as the underlying hardware, other jobs running on the same machine, and the number of cores used (Schwartz et al., 2020).
**Power and ${\mathrm{{CO}}}_{2}$** Electrical power consumption and the ensuing ${\mathrm{{CO}}}_{2}$ emission are costs that have been highlighted in the recent literature on resource-efficient NLP and AI. For example, Strubell et al. (2019) estimate the total power consumption for training NLP models as well as the corresponding ${\mathrm{{CO}}}_{2}$ emission. Hershcovich et al. (2022) propose that climate performance model cards for NLP models should minimally include information about total energy consumption and location for the computation, and ideally also information about the energy mix at the location and the ${\mathrm{{CO}}}_{2}$ emission associated with different phases of model development and use. Against this, Schwartz et al. (2020) observe that, while both power consumption and carbon emission are highly relevant costs, they are difficult to compare across settings because they depend on hardware and local electricity infrastructure in a way that may vary over time even at the same location.
**Abstract Cost Measures** Given the practical difficulties of obtaining exact and comparable measurements of relevant costs like time, power consumption, and carbon emission, several researchers have advocated more abstract cost measures, which are easier to obtain and compare across settings while being sufficiently correlated with the other costs that we care about. One such measure is model size, often expressed as the number of parameters, which is independent of the underlying hardware but correlates with memory consumption. However, as observed by Schwartz et al. (2020), since different models and algorithms make different use of their parameters, model size is not always strongly correlated with costs like execution time, power consumption, and carbon emission. They therefore advocate the number of floating point operations (FLOPs) as the best abstract cost measure, arguing that it has the following advantages compared to other measures: (a) it directly computes the amount of work done by the running machine when executing a specific instance of a model and is thus tied to the amount of energy consumed; (b) it is agnostic to the hardware on which the model is run, which facilitates fair comparison between different approaches; (c) unlike asymptotic time complexity, it also considers the amount of work done at each time step. They acknowledge that it also has limitations, such as ignoring memory consumption and model implementation. Using FLOPs to measure computation cost has emerged as perhaps the most popular approach in the community, and it has been shown empirically to correlate well with energy consumption (Axberg, 2022).
**Data** The amount of data (labeled or unlabeled) needed to train a given model and/or reach a certain performance is a relevant cost measure for several reasons. In AI in general, if we can make models and algorithms more data-efficient, then they will ceteris paribus be more time- and energy-efficient. In NLP specifically, it will in addition benefit low-resource languages, for which both data and computation are scarce resources.
${}^{3}$ Note that we can also use the notion of total deployment cost to compare the Pareto efficiency of different models at different points of time (under a product-as-performance model) by computing average deployment cost in a way that is sensitive to development cost and lifetime usage of a model.
In conclusion, no single cost metric captures all we care about, and any single metric can therefore be misleading on its own. In our illustrative case study, we include three of the most important metrics: execution time, power consumption, and FLOPs.
§ 4 CASE STUDY
To illustrate the different conceptualizations of resource-efficiency discussed in previous sections, we present a case study on developing and deploying a language model for a specific NLP task using different combinations of fine-tuning and knowledge distillation. The point of the study is not to advance the state of the art in resource-efficient NLP, but to show how different conceptualizations support the comparison of models of different sizes, at different performance levels, and with different development and deployment costs.
§ 4.1 OVERALL EXPERIMENTAL DESIGN
Our goal is to apply the Swedish pre-trained language model KB-BERT (Malmsten et al., 2020) to Named Entity Recognition (NER), using data from SUCX 3.0 (Språkbanken, 2022) for fine-tuning and evaluation. We consider three scenarios:
* Fine-tuning (FT): The standard fine-tuning approach is followed, with a linear layer added on top of KB-BERT. The model is trained on the SUCX 3.0 training set for up to 10 epochs, stopping early when the validation loss no longer decreases.
* Task-specific distillation (TS): We distill the fine-tuned KB-BERT model to a 6-layer BERT student model. The student model is trained on the SUCX 3.0 training set using the teacher predictions on this set as ground truth.
* Task-agnostic distillation (TA): We distill KB-BERT to a 6-layer BERT student model using the task-agnostic distillation objective proposed by Sanh et al. (2020). We train on deduplicated Swedish Wikipedia data by averaging three kinds of losses: masked language modelling, knowledge distillation, and cosine distance between student and teacher hidden states. The student model is subsequently fine-tuned on the SUCX 3.0 training set with the method described above.

All three fine-tuned models are evaluated on the SUCX 3.0 test set. Statistics about the datasets can be found in Appendix A.1, while details about our experiments are given in Appendix A.2. We measure model performance using the F1 score, which is the standard evaluation metric for NER, and model output in number of words; and we measure three different types of cost during development and deployment: execution time, power consumption, and FLOPs. Based on these basic measures, we derive different efficiency metrics for model comparison, as discussed in Section 4.4.
§ 4.2 SETUP DETAILS
The TextBrewer framework (Yang et al., 2020) is used for the distillation experiments, while the Huggingface Transformers${}^{4}$ library is used for fine-tuning and inference.${}^{5}$ All experiments are executed on an Nvidia DGX-1 server with 8 Tesla V100 SXM2 32GB GPUs. In order to get measurements under realistic conditions, we run different stages in parallel on different GPUs, while blocking other processes from the system to avoid external interference. Each experimental stage is repeated 3 times, and measurements of execution time and power consumption are averaged.${}^{6}$ The different cost types are measured as follows:
* Execution time: We average the duration of the individual Python jobs for each experimental stage.
* Power consumption: We measure power consumption for all 4 PSUs of the server as well as individual GPU power consumption, following Gustafsson et al. (2018). Based on snapshots of measured power at individual points in time, we calculate the area under the curve to get the power consumption in Wh. Since we run the task-agnostic distillation using distributed data parallelism on two GPUs, we sum the consumption of both GPUs for each TA run.
* FLOPs: We estimate the number of FLOPs required for each stage using the estimation
${}^{4}$ https://huggingface.co/docs/transformers/index
${}^{5}$ More information on hyperparameters and data set sizes can be found in Appendix A.
${}^{6}$ Since we repeat stages 3 times for every model instance, task-specific distillation, fine-tuning of the distilled model, and evaluation of FT are repeated 9 times, while evaluation of TS and TA is repeated 27 times.
| Model | Distillation Time | Distillation Power | Distillation FLOPs | Fine-Tuning Time | Fine-Tuning Power | Fine-Tuning FLOPs | Evaluation Time | Evaluation Power | Evaluation FLOPs | F1 |
|---|---|---|---|---|---|---|---|---|---|---|
| FT | - | - | - | 0:35:17 | 141.1 | $2.48 \times 10^{16}$ | 0:01:32 | 5.2 | $2.59 \times 10^{15}$ | 87.3 |
| TS | 0:18:30 | 77.1 | $1.64 \times 10^{16}$ | 0:35:17 | 141.1 | $2.48 \times 10^{16}$ | 0:01:09 | 3.1 | $1.71 \times 10^{15}$ | 84.9 |
| TA | 13:06:59 | 6848.9 | $3.65 \times 10^{17}$ | 0:18:53 | 74.4 | $1.69 \times 10^{16}$ | 0:01:15 | 3.3 | $1.71 \times 10^{15}$ | 77.6 |

Table 1: Performance (F1) and cost measurements (Time: hh:mm:ss, Power: Wh, FLOPs) for different stages (Distillation, Fine-tuning, Evaluation) and different development scenarios (Fine-tuning: FT, Task-specific distillation: TS, Task-agnostic distillation: TA).
Figure 2: Pareto efficiency for the development phase (top) and the deployment phase (bottom) based on three different cost measures: execution time (left), power consumption (center), and FLOPs (right).
formulas proposed by Kaplan et al. (2020), for training (7) and inference (8):
$$
{\mathrm{{FLOP}}}_{T} = 6 \cdot n \cdot N \cdot S \cdot B \tag{7}
$$

$$
{\mathrm{{FLOP}}}_{I} = 2 \cdot n \cdot N \cdot S \cdot B \tag{8}
$$
where $n$ is the sequence length, $N$ is the number of model parameters, $S$ is the number of training/inference steps, and $B$ is the batch size. The cost for fine-tuning a model is given by ${\mathrm{{FLOP}}}_{T}$ , while the evaluation cost is ${\mathrm{{FLOP}}}_{I}$ . For distillation, we need to sum ${\mathrm{{FLOP}}}_{T}$ for the student model and ${\mathrm{{FLOP}}}_{I}$ for the teacher model (whose predictions are used to train the student model).
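The two estimates are straightforward to implement. The sketch below follows Equations (7) and (8) and the summation rule for distillation described above; the example values for $n$, $N$, $S$, and $B$ are illustrative, not the settings used in our experiments:

```python
# Sketch of the FLOPs estimates in Equations (7) and (8); the example
# values for n, N, S, B are illustrative, not the paper's settings.

def flops_train(n, N, S, B):
    # Equation (7): ~6 FLOPs per parameter per token (forward + backward)
    return 6 * n * N * S * B

def flops_infer(n, N, S, B):
    # Equation (8): ~2 FLOPs per parameter per token (forward only)
    return 2 * n * N * S * B

def flops_distill(n, N_student, N_teacher, S, B):
    # Student training cost plus teacher inference cost over the same
    # steps and batches, since teacher predictions supervise the student.
    return flops_train(n, N_student, S, B) + flops_infer(n, N_teacher, S, B)

print(f"{flops_train(128, 125_000_000, 10_000, 32):.2e}")  # 3.07e+16
```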
§ 4.3 BASIC RESULTS
Table 1 shows basic measurements of performance and costs for different scenarios and stages. We see that the fine-tuned KB-BERT model (FT) reaches an F1 score of 87.3; task-specific distillation to a smaller model (TS) gives a score of 84.9, while fine-tuning after task-agnostic distillation (TA) only reaches 77.6 in this experiment. When comparing costs, we see that task-agnostic distillation is by far the most expensive stage. Compared to task-specific distillation, the execution time is more than 40 times longer, the power consumption almost 100 times greater, and the number of FLOPs more than 20 times greater. Although the fine-tuning costs are smaller for the distilled TA model, the reduction is only about 50% for execution time and power consumption and about 30% for FLOPs.
We also investigate whether power consumption can be predicted from the number of FLOPs, as this is a common argument in the literature for preferring the simpler FLOPs calculations over the more
Figure 3: Amortized efficiency of the deployment phase over lifetime, based on three different cost measures: execution time (left), power consumption (center), and FLOPs (right).
involved measurements of actual power consumption. We find an extremely strong and significant linear correlation between the two costs (Pearson $r = 0.997$, $p \approx 0$). Our experiments thus corroborate earlier claims that FLOPs is a convenient cost measure that correlates well with power consumption (Schwartz et al., 2020; Axberg, 2022). However, it is worth noting that the GPU power consumption, which is what is reported in Table 1 and which can thus be estimated from the FLOPs count, is only 71.7% of the total power consumption of the server including all 4 PSUs.
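A correlation analysis of this kind is simple to reproduce on any pair of cost columns. The sketch below uses synthetic numbers (not our measurements) in which energy use is roughly proportional to FLOPs plus measurement noise:

```python
import numpy as np

# Synthetic illustration (not the paper's measurements): when energy use
# is approximately proportional to FLOPs plus measurement noise, the
# Pearson correlation between the two cost measures is close to 1.
rng = np.random.default_rng(0)
flops = rng.uniform(1e15, 4e17, size=30)           # per-stage FLOPs counts
energy_wh = 2e-14 * flops + rng.normal(0, 50, 30)  # hypothetical Wh readings

r = np.corrcoef(flops, energy_wh)[0, 1]
print(f"Pearson r = {r:.3f}")
```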
§ 4.4 MEASURING AND COMPARING EFFICIENCY
So how do our three models compare with respect to resource-efficiency? The answer is that this depends on what concept of efficiency we apply and which part of the life-cycle we consider. Figure 2 plots product-as-performance as a function of cost separately for the development phase and the deployment phase, corresponding to Equations (3) and (4), which allows us to compare Pareto efficiency. Considering only the development phase, the FT model is clearly optimal, since it has both the highest performance and the lowest cost of all models. Considering instead the deployment phase, the FT model still has the best performance, but the other two models have lower (average) inference cost. The TA model is still suboptimal, since it gives lower performance at the same cost as the TS model. However, FT and TS are both optimal with respect to Pareto efficiency, since neither is outperformed by a model at the same cost level (nor has higher processing cost than any model at the same performance level).
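The Pareto comparison can be made explicit with a small dominance check. The sketch below uses the deployment-phase numbers from Table 1 (F1 score and evaluation power in Wh); the dominance criterion is the standard one, not a method specific to this paper:

```python
def pareto_optimal(models):
    """Return models not dominated by any other model, where 'dominated'
    means another model has performance >= and cost <=, with at least
    one of the two strict."""
    optimal = []
    for name, (perf, cost) in models.items():
        dominated = any(
            p >= perf and c <= cost and (p > perf or c < cost)
            for other, (p, c) in models.items()
            if other != name
        )
        if not dominated:
            optimal.append(name)
    return sorted(optimal)

# Deployment phase, Table 1: (F1, evaluation power in Wh)
models = {"FT": (87.3, 5.2), "TS": (84.9, 3.1), "TA": (77.6, 3.3)}
print(pareto_optimal(models))  # ['FT', 'TS'] -- TA is dominated by TS
```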
For a more holistic perspective on life-time efficiency, we can switch to a product-as-output model and plot deployment efficiency as a function of both the initial development cost and the average inference cost for processing new data, corresponding to Equation (6) and our newly proposed notion of amortized efficiency. This is depicted in Figure 3, which compares the FT and TS models (disregarding the clearly suboptimal TA model). We see that, although the FT model has an initial advantage because it has not incurred the cost of distillation, the TS model eventually catches up, becoming more time-efficient after processing about 4B tokens and more energy-efficient after processing about 127M tokens. It is important to keep in mind, however, that this comparison does not take performance into account, so we again need to decide what increase in cost we are willing to pay for a given improvement in performance, although the trade-off in this case depends on the expected lifetime of the models.
§ 5 CONCLUSION
In this paper, we have discussed the concept of resource-efficiency in NLP, arguing that it cannot be reduced to a single definition and that we need a richer conceptual framework to reason about different aspects of efficiency. As a complement to the established notion of Pareto efficiency, which separates development and deployment under a product-as-performance model, we have proposed the notion of amortized efficiency, which enables a life-cycle analysis including both development and deployment under a product-as-output model. We have illustrated both notions in a simple case study, which we hope can serve as inspiration for further discussions of resource-efficiency in NLP. Future work should investigate more sophisticated ways of incorporating performance level into the notion of amortized efficiency.
NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/0nNhIdvKQkU/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,655 @@
# Dyslexia Prediction from Natural Reading of Danish Texts
Anonymous Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymouser Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymousest Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain
## Abstract
Dyslexia screening in adults is an open challenge, since difficulties may not align with standardised tests designed for children. We collect eye-tracking data from natural reading of Danish texts from readers with dyslexia, closely following the experimental design of a corpus of readers without dyslexia. Research suggests that the opaque orthography of the Danish language affects the diagnostic characteristics of dyslexia. To the best of our knowledge, this is the first attempt to classify dyslexia from eye movements during reading in Danish. We experiment with various machine-learning methods, and our best model yields an F1 score of 0.85.
## 1 Introduction
Dyslexia is a learning disorder of neurological origin that reportedly affects about 10-20% of the world population (Rello and Ballesteros, 2015; Kaisar, 2020). It involves difficulties with reading, spelling, and decoding words, and is not related to intelligence (Perera et al., 2018; Rauschenberger et al., 2017). Detecting dyslexia as early as possible is vital, as the disorder can lead to many negative consequences that can be mitigated with proper assistance. These include low self-esteem and high rates of depression and anxiety (Perera et al., 2018; Schulte-Körne, 2010). There are qualitative studies suggesting that living with an undiagnosed learning disorder leads to frustrations (Kong, 2012), feelings of being misunderstood (Denhart, 2008), and feelings of failure (Tanner, 2009). Being diagnosed with a learning disorder as an adult has been reported to lead to a sense of relief (Arceneaux, 2006), validation (Denhart, 2008; Kelm, 2016), and liberation (Tanner, 2009; Kong, 2012). Dyslexia can be difficult to diagnose
due to its indications and impairments occurring in varying degrees (Eckert, 2004), and is therefore often recognised as a hidden disability (Rello and Ballesteros, 2015). Popular methods of detecting dyslexia usually include standardised lexical assessment tests that involve behavioural aspects, such as reading and spelling tasks (Perera et al., 2018). Singleton et al. (2009) explain that computerised screening methods have been well-established for children in the UK, but developing such tests for adult readers with dyslexia is exceptionally challenging, as adults with dyslexia may not show obvious literacy difficulties that align with what standardised tests distinguish as dyslexic tendencies. For one thing, dyslexia is experienced differently from person to person; but also, most adults with dyslexia have developed strategies that help them disguise weaknesses, and they may thus remain unnoticed and receive false-negative test results (Singleton et al., 2009).
Less frequently used methods are eye tracking during reading or neuroimaging techniques such as (functional) magnetic resonance imaging, electroencephalogram, brain positron emission tomography, and magnetoencephalography methods (Kaisar, 2020; Perera et al., 2018). These methods are still under experimental development and are currently not used for screening dyslexia (Perera et al., 2018). A small body of studies investigates dyslexia detection using eye tracking with the help of machine-learning techniques, outlined in §2.4. Compared to neuroimaging techniques, eye tracking is more affordable and faster to record, and its link to online text processing is well established (Rayner, 1998). Using eye-tracking records for dyslexia detection does not necessarily require readers to respond or perform a test but merely objectively observes the reader during natural reading (Benfatto et al., 2016). Although eye-tracking experiments are often limited to a relatively small number of participants compared to computerized tools, the method typically produces many data points from each participant.
The purpose of the current paper is twofold: 1) We provide a dataset from participants with dyslexia reading natural Danish texts. This dataset uses the same experimental design as the CopCo corpus by Hollenstein et al. (2022), which allows us to compare the eye movement patterns of readers with dyslexia to those of the readers without dyslexia in CopCo. 2) We train the first machine-learning classifiers for dyslexia prediction from eye movements in Danish. The data is available as raw gaze recordings, fixation-level information, and word-level eye-tracking features${}^{1}$. The code for all our experiments is also available online${}^{2}$.
## 2 Related Work
### 2.1 Dyslexia Screening in Denmark
In 2015, the Ministry of Children and Education in Denmark launched a national electronic dyslexia test, Ordblindetesten ("the Dyslexia Test"). The test is a screening method for children, youths, and adults suspected of having dyslexia. It is accessible through educational institutions and is performed under the observation of a supervisor (Centre for Reading Research et al., 2020). It consists of three multiple-choice subtests, performed electronically, that focus on phonological decoding abilities. The result is rendered as 'not dyslexic', 'uncertain phonological decoding', or 'dyslexic'. The official instructions explicitly state that the uncertain group is not to be classified as dyslexic${}^{3}$ and is therefore not entitled to dyslexia support, but they may benefit from other kinds of support and are subject to further assessment. To this end, Helleruptesten ("the Hellerup Test") is used by educational institutions for adults.
### 2.2 Danish as a Target Language
Similar studies on dyslexia detection with machine learning (ML) classification include experiments with Chinese (Haller et al., 2022), Swedish (Benfatto et al., 2016), Spanish (Rello and Ballesteros, 2015), Greek (Asvestopoulou et al., 2019), Arabic (Al-Edaily et al., 2013), and Finnish (Raatikainen et al., 2021) as their target languages. However, the diagnostic characteristics of dyslexia may differ depending on the transparency of the language. In early research, De Luca et al. (1999) reported that the regular spelling-sound correspondences in languages with transparent orthographies, e.g., German and Italian, dim phonological deficits. Phonological deficits of individuals with dyslexia are clearer in languages with irregular, non-transparent orthographies (Smyrnakis et al., 2017).
Danish is a language with a highly non-transparent orthography. It has been shown that adult reading comprehension skills are poorer in Danish than in other Nordic languages (Juul and Sigurdsson, 2005). This suggests that identifying readers with dyslexia in Danish may be easier than in languages with transparent orthographies. The lack of spelling-sound correspondence in Danish indicates that the Danish language holds great value for investigating dyslexia detection, for two main reasons. Firstly, the combination of the non-transparent orthography of the Danish language and eye movement patterns could potentially reveal more apparent indications of dyslexia through the selected features that have proven to be relevant for dyslexia detection in other languages, which can be favourable in further research on, e.g., the development of assistive tools and technologies. Secondly, the fact that reading comprehension skills are proven to be poorer in Danish than in other Nordic languages highlights the necessity of proper assistance and recognition for individuals with dyslexia in Denmark.
### 2.3 Dyslexia and Eye Movements
Tracking eye movements during natural reading reveals information on fixations (relatively still gaze on a single location) and saccades (rapid movements between fixations). Studies (Rayner, 1998; Henderson, 2013) have substantiated that eye movements during reading reflect the visual and cognitive processes that directly drive them. These processes are also strongly related to, e.g., attention during reading, which is highly correlated with saccades (Rayner, 1998). As Henderson (2013) phrases it, "eye movements serve as a window into the operation of the attentional system".

Previous studies have repeatedly shown that readers with dyslexia exhibit a higher number of fixations and regressions, longer fixation durations, and shorter and more numerous saccades than readers without dyslexia (Pirozzolo and Rayner,
---

${}^{1}$ https://osf.io/anonymous.com/

${}^{2}$ https://github.com/anonymous

${}^{3}$ https://www.spsu.dk/for-stoettegivere/elever-og-studerende-med-usikker-fonologisk-kodning

---
Figure 1: Fixations recorded from a reader without dyslexia (above) and a reader with dyslexia (below) when reading the same sentence. Numbers indicate duration in ms.
1979; Rayner, 1986; Biscaldi et al., 1998). This was already discovered by Rubino and Minden (1973), and later work discussed whether this was the cause or the effect of dyslexia, with evidence on both sides, e.g., Pirozzolo and Rayner (1979); Pavlidis (1981); Eden et al. (1994); Biscaldi et al. (1998). Most recent studies acknowledge that the movements reflect a dyslexic reader's difficulties with processing language (Fischer and Weber, 1990; Hyönä and Olson, 1995; Henderson, 2013; Rello and Ballesteros, 2015; Benfatto et al., 2016; Raatikainen et al., 2021), and Rayner (1998), echoing an earlier study (Rayner, 1986), states that eye movements are not the cause of slow reading but rather reflect the more time-consuming cognitive processes. These insights from psycholinguistics motivate the feature selection for this work.
### 2.4 ML-based Dyslexia Detection from Gaze
There is recent evidence that ML-based methods can be used for dyslexia detection in children, e.g., (Christoforou et al., 2021; Nerušil et al., 2021). This section is, however, limited to ML-based methods for dyslexia detection in adults. Prior studies investigating dyslexia detection with machine learning classification on eye-tracking data have concluded that support vector machines (SVMs) are of great advantage (Rello and Ballesteros, 2015; Benfatto et al., 2016; Prabha and Bhargavi, 2020; Asvestopoulou et al., 2019; Raatikainen et al., 2021). Rello and Ballesteros (2015) used an SVM for dyslexia detection based on eye-tracking recordings from readers with and without dyslexia, which resulted in an accuracy of ${80.18}\%$. Benfatto et al. (2016) and Prabha and Bhargavi (2020) achieved accuracy scores of ${95.6}\%$ and ${95}\%$, respectively, on the same dataset using SVM variations.
With Greek as their target language, Smyrnakis et al. (2017) propose a method with two types of features for dyslexia detection: word-specific and non-word-specific. Non-word-specific features consisted of fixation durations, saccade lengths, short refixations, and the total number of fixations. The word-specific features, on the other hand, contained the gaze duration on each word and the number of revisits to each word. Based on the same dataset as Smyrnakis et al., Asvestopoulou et al. (2019) developed a tool called DysLexML. Their classifier with the highest accuracy on noise-free data is a linear SVM, used on features selected by LASSO regression at ${\lambda}_{1\mathrm{{SE}}}$, which gave an accuracy of ${87.87}\%$, and up to ${97}\%+$ when using leave-one-out cross-validation. More recently, Raatikainen et al. (2021) used a hybrid method consisting of an SVM classifier with random forest feature selection for dyslexia detection on eye movement data. The best-performing SVM model of their study scored an accuracy of 89.7%.
## 3 Data Collection
Data acquisition follows Hollenstein et al. (2022); the most important points are repeated here. The only procedural difference is the two additional reading tests administered to participants with dyslexia, as described in Section 3.3.
### 3.1 Participant Selection
The participant selection for this study of natural reading is purposefully broad and follows the requirements of Hollenstein et al. (2022), from which we sample the typical readers. Prior to this, we excluded four participants from the non-dyslexic group from the analysis due to poor calibration and a reported attention deficit disorder. The only difference to our participant sampling is that all dyslexic readers are officially diagnosed with dyslexia. There is no age limit and no required educational background, but all participants are adults and native speakers of Danish. All have normal or corrected-to-normal vision (glasses or contact lenses), and no readers included in the analysis had a known attention deficit disorder. All participants signed an informed consent form, and all digital data is pseudonymised. Due to the absence of an official dyslexia diagnosis, we discard
<table><tr><td>SUBJ</td><td>SCORE</td><td>$n$ TEXTS</td><td>WPM</td><td>AGE</td><td>GENDER</td><td>DIAGNOSED</td></tr><tr><td colspan="7">READERS WITH DYSLEXIA</td></tr><tr><td>P23</td><td>1.00</td><td>2</td><td>200.0</td><td>33</td><td>F</td><td>16</td></tr><tr><td>P24</td><td>0.80</td><td>2</td><td>203.7</td><td>64</td><td>F</td><td>9</td></tr><tr><td>P25</td><td>0.82</td><td>4</td><td>142.0</td><td>20</td><td>F</td><td>16</td></tr><tr><td>P26</td><td>0.57</td><td>2</td><td>86.7</td><td>32</td><td>M</td><td>12</td></tr><tr><td>P27</td><td>0.71</td><td>4</td><td>137.4</td><td>53</td><td>M</td><td>48</td></tr><tr><td>P28</td><td>0.93</td><td>4</td><td>173.3</td><td>25</td><td>F</td><td>15</td></tr><tr><td>P29</td><td>0.73</td><td>3</td><td>143.3</td><td>25</td><td>F</td><td>21</td></tr><tr><td>P30</td><td>0.93</td><td>4</td><td>179.0</td><td>61</td><td>M</td><td>50</td></tr><tr><td>P31</td><td>0.75</td><td>2</td><td>61.9</td><td>20</td><td>M</td><td>15</td></tr><tr><td>P33</td><td>0.86</td><td>2</td><td>59.3</td><td>30</td><td>F</td><td>8</td></tr><tr><td>P34</td><td>0.62</td><td>2</td><td>107.4</td><td>56</td><td>F</td><td>9</td></tr><tr><td>P35</td><td>0.71</td><td>4</td><td>285.1</td><td>24</td><td>F</td><td>19</td></tr><tr><td>P36</td><td>0.40</td><td>2</td><td>58.5</td><td>23</td><td>F</td><td>11</td></tr><tr><td>P37</td><td>0.58</td><td>4</td><td>270.7</td><td>25</td><td>F</td><td>23</td></tr><tr><td>P38</td><td>0.75</td><td>2</td><td>115.5</td><td>30</td><td>M</td><td>29</td></tr><tr><td>P39</td><td>1.00</td><td>1</td><td>160.2</td><td>32</td><td>F</td><td>17</td></tr><tr><td>P40</td><td>0.92</td><td>4</td><td>173.3</td><td>29</td><td>M</td><td>7</td></tr><tr><td>P41</td><td>0.88</td><td>4</td><td>154.9</td><td>51</td><td>F</td><td>50</td></tr><tr><td>AVG</td><td>0.78 (0.16)</td><td>2.9 (1.1)</td><td>150.7 (65.0)</td><td>35.1 (14.7)</td><td>67.7% F</td><td>20.8 (14.3)</td></tr><tr><td colspan="7">READERS WITHOUT DYSLEXIA</td></tr><tr><td>AVG</td><td>0.81 (0.11)</td><td>4.4 (1.5)</td><td>276.8 (54.6)</td><td>30.7 (10.8)</td><td>78% F</td><td>-</td></tr></table>
Table 1: Overview of readers with dyslexia included in the study. Standard deviations are given in brackets. SCORE is the accuracy of the answers to the comprehension questions; DIAGNOSED refers to the age at which the participants were diagnosed with dyslexia. Aggregated data from the 18 readers without dyslexia from Hollenstein et al. (2022) are included for comparison.
the data from one subject for further analysis but include 18 readers in the dyslexic group. Participant statistics for all included dyslexic participants are presented in Table 1, with a summary of the 18 non-dyslexic participants for comparison.
### 3.2 Reading Materials
We used the same set of reading materials as Hollenstein et al. (2022), presented in the same way. They are 46 transcribed and proofread Danish speeches, accessed from the Danske Taler archive (https://dansketaler.dk). Table 2 shows an overview. The readability of each speech was calculated as a LIX score, which is based on the length of the words and sentences in a text (Björnsson, 1968). Each reader read a subset of the full dataset, reported as $n$ TEXTS in Table 1.
Reading comprehension questions To prevent mindless reading, comprehension questions were added after approximately ${20}\%$ of the paragraphs containing more than 100 characters, following Hollenstein et al. (2022). The average accuracy of the comprehension questions per participant is reported in the SCORE column of Table 1.
<table><tr><td/><td>MIN</td><td>MAX</td><td>MEAN</td><td>STD</td><td>TOTAL</td></tr><tr><td>SENTS PER DOC</td><td>37</td><td>134</td><td>92.4</td><td>29.4</td><td>1,849</td></tr><tr><td>TOKENS PER DOC</td><td>978</td><td>2,846</td><td>1,744.8</td><td>533.1</td><td>34,897</td></tr><tr><td>WORD TYPES PER DOC</td><td>391</td><td>1,056</td><td>603.6</td><td>159.4</td><td>7,361</td></tr><tr><td>LIX PER DOC</td><td>26.4</td><td>50.1</td><td>37.2</td><td>7.2</td><td>-</td></tr><tr><td>FREQUENCY PER DOC</td><td>0.68</td><td>0.79</td><td>0.74</td><td>0.03</td><td>-</td></tr><tr><td>SENT LEN IN TOKENS</td><td>1</td><td>119</td><td>10.8</td><td>15.9</td><td>-</td></tr><tr><td>TOKEN LEN IN CHARS</td><td>1</td><td>33</td><td>4.5</td><td>3.0</td><td>-</td></tr></table>
Table 2: Statistics on the 46 documents that comprise the reading material. TOTAL is the dataset total. LIX is the readability score: for typical readers, a text with a LIX score between 25 and 34 is considered easy, whereas a text scoring more than 55 is considered difficult and corresponds to an academic text. FREQUENCY is measured as the proportion of words included in the 10,000 most common Danish words from https://korpus.dsl.dk/resources/details/freq-lemmas.html.
### 3.3 Lexical Assessment
All participants with dyslexia performed two lexical assessment tests, which serve as control tests for the current study. Both tests were developed by the Centre for Reading Research, University of Copenhagen. The purpose of the tests is to provide a comparable benchmark for a lexical assessment unrelated to the eye movements of the participants with dyslexia.
Nergård-Nilssen and Eklund (2018) found in their psychometric evaluation that a pseudohomophone test is highly reliable and provides accurate discrimination of readers with dyslexia. Due to this finding, as well as the fact that the pseudohomophone task is used in the Danish dyslexia test, a pseudohomophone test was selected as one of the lexical assessment tests for the current study. For the sake of reliability and to provide insightful findings on reading skills, a reading comprehension test was also used as a complementary lexical assessment test.
Reading comprehension test The original purpose of the reading comprehension test${}^{4}$ is to give adults easy access to an informal evaluation of their reading skills and to encourage more adults to seek help with developing their reading skills (Jensen et al., 2014). It takes ten minutes to complete, making it relatively short, yet insightful. The tasks in the test consist of three variants of cloze tests, in which the participants must select a missing word in a sentence, e.g., It had been raining for some _____ [days, moments, countries] (our translation).
As the reading task is an online self-assessment test that requires no log-in, external assistance, or special access, the participants without dyslexia were contacted after their participation in the eye-tracking experiment and asked to voluntarily take the test at home to serve as a control group. Ten participants without dyslexia submitted their scores as a contribution to this experiment.
The aggregated results for both reader groups are presented in Table 3. We observe that readers with dyslexia generally have a lower score and a larger variance. A two-tailed t-test showed that this difference is significant $(p < 0.001)$.
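The group comparison can be reproduced with a standard two-sample t statistic; a minimal sketch follows (Welch's variant, which does not assume equal group variances — the paper does not state which variant was used):

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples a and b."""
    mean_a, mean_b = sum(a) / len(a), sum(b) / len(b)
    # Unbiased sample variances.
    var_a = sum((x - mean_a) ** 2 for x in a) / (len(a) - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (len(b) - 1)
    return (mean_a - mean_b) / math.sqrt(var_a / len(a) + var_b / len(b))
```

The statistic is then compared against the t distribution (with Welch-Satterthwaite degrees of freedom) to obtain the two-tailed p-value; in practice `scipy.stats.ttest_ind` does both steps.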
Pseudohomophone test The second linguistic assessment we conducted with the participants with dyslexia was a pseudohomophone test${}^{5}$, developed as part of a diagnostic reading test for adults. The test encompasses 38 tasks, where each task consists of four non-words, of which one sounds like a real Danish word when pronounced. The difficulty of the 38 tasks
<table><tr><td>GROUP</td><td>$n$</td><td>MEAN</td><td>MIN</td><td>MAX</td></tr><tr><td>DYSLEXIC</td><td>18</td><td>3.5</td><td>0.7</td><td>5.2</td></tr><tr><td>NOT DYSLEXIC</td><td>10</td><td>5.7</td><td>4.4</td><td>7.1</td></tr></table>

Table 3: Reading task scores for participants of both reading groups. A score between 0-3.4 indicates that the reader may find many texts difficult and time-consuming to read, and a score between 3.5-3.9 indicates that the reader may find some texts difficult and/or time-consuming to read. A score over 4 indicates good reading skills.
<table><tr><td>Group</td><td>$n$</td><td>Acc</td></tr><tr><td>NO READING DIFFICULTIES</td><td>72</td><td>66%</td></tr><tr><td>IN PROGRAMS FOR DYSLEXIC STUDENTS</td><td>46</td><td>23%</td></tr><tr><td>IN LITERACY READING PROGRAMS</td><td>167</td><td>31%</td></tr><tr><td>COPCO READERS WITH DYSLEXIA</td><td>18</td><td>33%</td></tr></table>
Table 4: Pseudohomophone test accuracies. The three top rows are standards from the official documentation of the test material for comparison.
increases gradually. The participants get five minutes to complete as many tasks as possible. Knowledge of the words in the test is required to perform it, but as the words are frequent, everyday Danish words, it is assumed that adult native readers of Danish are familiar with them. Translated examples of the words are: cheese, eat, steps, factory, and help.
The results are presented in Table 4 and compared to standard scores from the documentation of the test${}^{6}$. We observe that the scores from the readers with dyslexia in the current study are on par with the standard scores of adults in literacy reading programs and higher than the standards for adults in programs for dyslexic readers. However, all quartile scores for our group of readers with dyslexia are about half of the standards for adults without reading difficulties.
### 3.4 Experiment Procedure
Eye movement data were collected with an infrared video-based EyeLink 1000 Plus eye tracker (SR Research), following Hollenstein et al. (2022). The experiment was designed with the SR Experiment Builder software. Data were recorded with a sampling rate of ${1000}\mathrm{\;{Hz}}$. Participants were seated at a distance of approximately ${85}\mathrm{\;{cm}}$ from a 27-inch monitor (display dimensions ${590} \times {335}$ mm, resolution ${1920} \times {1080}$ pixels). We recorded
---

${}^{4}$ Accessed from https://selvtest.nu/

${}^{5}$ Accessed from https://laes.hum.ku.dk/test/

${}^{6}$ https://laes.hum.ku.dk/test/find_det_der_lyder_som_et_ord/standarder/

---
monocular eye-tracking data of the right eye. In a few cases of calibration difficulties, the left eye was tracked instead.
A 9-point calibration was performed at the beginning of the experiment. The calibration was validated after each block, and re-calibration was conducted if the quality criteria were not met (worst point error $< {1.5}^{\circ}$, average error $< {1.0}^{\circ}$). Drift correction was performed after each trial, i.e., each screen of text. The minimum calibration quality accepted for recording was a "good" calibration score, or "fair" in exceptionally difficult cases.
Experiment Protocol Participants read speeches in blocks of two speeches. The experiment was self-paced, meaning there were no time restrictions. Between blocks, the participants could take a break. Each participant completed as many blocks as they were comfortable with in one session. The order of the blocks and the order of the speeches within a block were randomized. Instructions were presented orally and on the computer screen before the experiment started. All participants first completed a practice round of reading a short speech with one comprehension question. The experiment duration was between 60 and 90 minutes.
Stimulus Presentation The text passages presented on each screen followed the author's original division of the text into paragraphs as closely as possible. Comprehension questions were presented on separate screens. The text was set in a black, monospaced font (type: Consolas; size: ${16}\mathrm{{pt}}$) on a light-gray background (RGB: ${248},{248},{248}$). The texts spanned at most 10 lines with triple line spacing. We used a 140-pixel margin at the top and bottom and a 200-pixel side margin for a screen resolution of ${1920} \times {1080}$.
## 4 Data Processing
### 4.1 Event Detection
This procedure also follows Hollenstein et al. (2022) closely. During data acquisition, the eye movement events are generated in real time by the EyeLink eye tracker software using a velocity- and acceleration-based saccade detection method. A fixation event is defined by the algorithm as any period that is not a saccade or a blink. Hence, the raw data consist of (x, y) gaze location coordinates for individual fixations.
We use the DataViewer software by SR Research to extract fixation events for all areas of interest. Areas of interest are automatically defined as rectangular boxes that surround each character of a text on the screen, as shown in Figure 1. For later analysis, only fixations within the boundaries of each displayed character are extracted; data points distinctly not associated with reading are therefore excluded. We also set a minimum fixation duration threshold of ${100}\mathrm{{ms}}$.
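This filtering step can be sketched as follows, with hypothetical data structures (the actual pipeline works from DataViewer's interest-area reports):

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    x: float            # gaze coordinates in pixels
    y: float
    duration_ms: float

@dataclass
class CharBox:
    left: float         # rectangular area of interest around one character
    top: float
    right: float
    bottom: float

def reading_fixations(fixations, boxes, min_duration_ms=100.0):
    """Keep only fixations that land inside some character box and
    last at least min_duration_ms (100 ms in this study)."""
    def inside(f, b):
        return b.left <= f.x <= b.right and b.top <= f.y <= b.bottom
    return [f for f in fixations
            if f.duration_ms >= min_duration_ms
            and any(inside(f, b) for b in boxes)]
```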
### 4.2 Feature Extraction
In the second step, we use custom Python code to map and aggregate character-level features to word-level features. These features cover the reading process from early lexical access to later syntactic integration. The selection of features is inspired by similar corpora in other languages (Siegelman et al., 2022; Hollenstein et al., 2018; Cop et al., 2017) as well as by features known to show strong effects in the eye movements of readers with dyslexia (Biscaldi et al., 1998; Pirozzolo and Rayner, 1979; Rayner, 1986). We extract the following eye-tracking features:
1. nFIX: the total number of fixations on the current word.

2. FFD: first fixation duration, the duration of the first fixation on the current word.

3. MFD: mean fixation duration, the mean duration of all fixations on the current word.

4. TFD: total fixation duration, the summed duration of all fixations on the current word.

5. FPD: first pass duration, the summed duration of all fixations on the current word prior to progressing out of the current word (to the left or right).

6. GPT: go-past time, the summed duration of all fixations prior to progressing to the right of the current word, including regressions to previous words that originated from the current word.

7. MSD: mean saccade duration, the mean duration of all saccades originating from the current word.

8. PSV: peak saccade velocity, the maximum gaze velocity (in visual degrees per second) of all saccades originating from the current word.
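Several of these features reduce to simple aggregations over the ordered fixations on a word; a minimal sketch covering nFIX, FFD, MFD, and TFD (FPD, GPT, MSD, and PSV additionally require saccade direction and velocity information, omitted here):

```python
def word_features(fixation_durations):
    """Aggregate the ordered fixation durations (ms) on one word
    into nFIX, FFD, MFD, and TFD as defined above."""
    if not fixation_durations:
        return {"nFIX": 0, "FFD": 0.0, "MFD": 0.0, "TFD": 0.0}
    return {
        "nFIX": len(fixation_durations),                       # number of fixations
        "FFD": fixation_durations[0],                          # first fixation duration
        "MFD": sum(fixation_durations) / len(fixation_durations),  # mean duration
        "TFD": sum(fixation_durations),                        # total duration
    }
```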
## 5 Dyslexia Classification
We experiment with three types of classifiers using features at two levels of aggregation: sentence level and trial level. A trial corresponds to the text presented on a single screen, roughly corresponding to a paragraph of the original text materials. For both levels of aggregation, the eye-tracking features of each word in a sentence or trial, respectively, are averaged to obtain a single vector of eight features per sample. We therefore train classifiers where each sample corresponds either to the eye-tracking information from a sentence or from a full trial. Dataset sizes are presented in Table 5. The data is split into 90% training data and ${10}\%$ test data. We use an additional ${10}\%$ of the training data as a validation split for the Long Short-Term Memory (LSTM) network. For all experiments, we randomly undersample the non-dyslexic datasets for training, but not for testing. We perform 5 runs, taking different random samples from the data of readers without dyslexia, and report the average performance.
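The undersampling scheme can be sketched as follows (illustrative stand-in data; in the experiments the samples are the aggregated feature vectors):

```python
import random

def undersample(majority, minority, seed):
    """Randomly subsample the majority class down to the minority
    class size; applied to training data only, never to test data."""
    rng = random.Random(seed)
    return rng.sample(majority, len(minority))

# Five runs with different random subsamples, as in the experiments;
# results would then be averaged across runs.
majority = list(range(100))  # stand-in for non-dyslexic training samples
minority = list(range(40))   # stand-in for dyslexic training samples
balanced = [undersample(majority, minority, seed) for seed in range(5)]
```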
<table><tr><td rowspan="2">EXPERIMENT TYPE</td><td colspan="2">$n$ SAMPLES</td></tr><tr><td>NON-DYSLEXIC</td><td>Dyslexic</td></tr><tr><td>TRIAL-LEVEL</td><td>5,147</td><td>4,144</td></tr><tr><td>SENTENCE-LEVEL</td><td>21,859</td><td>17,477</td></tr></table>
Table 5: Dataset size
SVM and Random Forest Classifiers The eye-tracking features are normalised with a min-max scaler that maps each feature to the range between 0 and 1. We use a grid search to tune the hyperparameters of both the SVM (best regularization parameter $C = {100}$) and the random forest (best maximum depth of 9 and an optimal number of 200 estimators) in a 5-fold cross-validation setup on the full training set. The classifiers are implemented with the scikit-learn library for Python. The SVM uses a linear kernel. In addition to taking the mean feature values per sentence or trial (i.e., aggregating the eye-tracking features of all individual words), we also experiment with adding the standard deviations and maximum values of each feature.
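A sketch of this setup with scikit-learn follows; the data is random stand-in data, and the search grids are illustrative rather than the exact spaces used in the study:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((100, 8))      # stand-in for 8 aggregated eye-tracking features
y = rng.integers(0, 2, 100)   # stand-in binary labels

# Linear SVM with min-max scaling inside the pipeline (fitted per fold,
# which avoids leaking test statistics into the scaler).
svm = Pipeline([("scale", MinMaxScaler()), ("clf", SVC(kernel="linear"))])
svm_search = GridSearchCV(svm, {"clf__C": [1, 10, 100]}, cv=5).fit(X, y)

rf_search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    {"max_depth": [3, 9], "n_estimators": [100, 200]},
    cv=5,
).fit(X, y)
```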
LSTM Classifiers with Sequential Word Features We train a recurrent neural network optimized for sequential data, namely an LSTM. As LSTMs perform well on sequences over large vocabularies and are effective at memorizing important information, it can be beneficial for dyslexia detection to predict the class probability for a sentence given the observed words. The inputs to the LSTM network are therefore the same eye-tracking features, but rather than aggregating over the full trial or sentence, each word is assigned a feature vector. The sequences were then padded to the maximum sentence or trial length, respectively. We use two LSTM layers, with 32 and 16 dimensions, respectively, and a dropout rate of 0.3 after the first layer. Finally, we use a sigmoid activation function to output the class probability. The models are trained with a batch size of 128, using a cross-entropy loss and an RMSprop optimizer with a learning rate of 0.001. We implement early stopping with a patience of 70 epochs on the maximum validation accuracy and save the best model. The model was implemented using Keras.
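The padding step before the LSTM can be sketched as below — a plain-Python stand-in for Keras's `pad_sequences`, here right-padding per-word feature vectors rather than token ids:

```python
def pad_feature_sequences(sequences, n_features, pad_value=0.0):
    """Right-pad variable-length sequences of per-word feature
    vectors to the length of the longest sequence, so that all
    inputs to the LSTM share one fixed shape."""
    max_len = max(len(seq) for seq in sequences)
    return [
        list(seq) + [[pad_value] * n_features] * (max_len - len(seq))
        for seq in sequences
    ]
```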
<table><tr><td>MODEL</td><td>TRIAL</td><td>SENTENCE</td></tr><tr><td>SVM</td><td>0.80 (0.018)</td><td>0.71 (0.004)</td></tr><tr><td>SVM + STD</td><td>0.81 (0.010)</td><td>0.71 (0.006)</td></tr><tr><td>SVM + STD + MAX</td><td>0.81 (0.014)</td><td>0.72 (0.007)</td></tr><tr><td>RF</td><td>0.83 (0.012)</td><td>0.72 (0.001)</td></tr><tr><td>RF + STD</td><td>0.85 (0.015)</td><td>0.72 (0.007)</td></tr><tr><td>$\mathrm{{RF}} + \mathrm{{STD}} + \mathrm{{MAX}}$</td><td>0.85 (0.010)</td><td>0.73 (0.006)</td></tr><tr><td>LSTM</td><td>${0.82}\left( {0.030}\right)$</td><td>0.71 (0.037)</td></tr></table>
Table 6: Average F1 score (standard deviation across five runs in brackets) for the SVM, RF (random forest), and LSTM classifiers.
### 5.1 Results
The trial-level and sentence-level results for the dyslexia classification task are presented in Table 6. We observe that trial-level classifiers achieve much higher scores than sentence-level classifiers, which is to be expected since the latter include reading data from fewer words. However, for the SVM and random forest the features are aggregated, so there will be an upper limit on the text length suitable for these methods. The random forest achieves the best results at both levels, and a wider range of features (namely, including standard deviation and maximum value features) yields higher scores. The LSTM model does not outperform the simpler and faster-to-train random forest models and shows a higher variance between runs.
#### 5.1.1 Misclassifications
To further analyze these results, we look at the confusion matrix and the misclassified participants for the best model, namely the random forest classifier including mean, standard deviation, and maximum value features. The confusion matrices in Figure 2 show that more mistakes are made classifying samples from readers with dyslexia than from readers without dyslexia. This is more apparent at sentence level, where the number of samples is substantially larger.
Furthermore, we hypothesize that the classifier struggles to correctly classify samples from readers with dyslexia whose reading patterns are comparable to those of readers without dyslexia. The samples that are misclassified most frequently belong mostly to the same group of participants, both at sentence level and at trial level. The most frequently misclassified samples from readers with dyslexia were those of P28, P35, P23, P40, and P37 (in descending order of the number of misclassifications). We correlate the number of misclassified samples for all participants with dyslexia with their demographic and lexical test information and find a significant correlation between misclassifications and words per minute $(\rho = 0.79, p < 0.001)$ and between misclassifications and reading comprehension scores $(\rho = 0.71, p < 0.001)$. However, the correlation between misclassifications and pseudohomophone test scores is minimal and not significant. This shows that samples from readers with dyslexia with higher reading speed and better reading comprehension are more likely to be misclassified, since their features are more similar to those of readers without dyslexia.
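The rank correlations reported here can be computed with the standard Spearman formula; a minimal sketch for tie-free data (libraries such as scipy additionally handle ties and report p-values):

```python
def spearman_rho(x, y):
    """Spearman rank correlation for tie-free samples:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)),
    where d_i is the difference between the ranks of x_i and y_i."""
    def ranks(values):
        order = sorted(range(len(values)), key=values.__getitem__)
        r = [0] * len(values)
        for rank, idx in enumerate(order, start=1):
            r[idx] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d_squared = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))
```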
## 6 Discussion & Conclusion
We presented a dataset of eye-tracking recordings of adults with dyslexia reading natural texts, which complements the CopCo dataset of readers without dyslexia (Hollenstein et al., 2022). Additionally, to the best of our knowledge, we presented the first attempt to predict dyslexia from eye-tracking features with Danish as the target language. The best-performing classifier of the current study achieves an F1 score of 0.85, using a random forest trained with a feature combination that includes the means, standard deviations, and maximum values of eight eye-tracking features.
While the recorded eye-tracking features proved to reflect vital information about the reading mechanisms of the participants, there was a considerably high number of misclassifications
|
| 430 |
+
|
| 431 |
+
True label 350 300 250 200 318 150 100 dyslexic Predicted label (a) Trial-level 1400 1200 1000 800 600 1010 400 dysiexic Predicted label (b) Sentence-level dyslexic non-dyslexic non-dysiexic True label dyslexic 743 non-dyslexic
|
| 432 |
+
|
| 433 |
+
Figure 2: Confusion matrices for the best classifier, RF+SDT+MAX, for each experiment level.
|
| 434 |
+
|
| 435 |
+
810
|
| 436 |
+
|
| 437 |
+
811
|
| 438 |
+
|
| 439 |
+
812
|
| 440 |
+
|
| 441 |
+
813
|
| 442 |
+
|
| 443 |
+
814
|
| 444 |
+
|
| 445 |
+
815
|
| 446 |
+
|
| 447 |
+
816
|
| 448 |
+
|
| 449 |
+
817
|
| 450 |
+
|
| 451 |
+
818
|
| 452 |
+
|
| 453 |
+
819
|
| 454 |
+
|
| 455 |
+
820
|
| 456 |
+
|
| 457 |
+
821
|
| 458 |
+
|
| 459 |
+
822
|
| 460 |
+
|
| 461 |
+
823
|
| 462 |
+
|
| 463 |
+
824
|
| 464 |
+
|
| 465 |
+
825
|
| 466 |
+
|
| 467 |
+
826
|
| 468 |
+
|
| 469 |
+
828
|
| 470 |
+
|
| 471 |
+
830
|
| 472 |
+
|
| 473 |
+
831
|
| 474 |
+
|
| 475 |
+
833
|
| 476 |
+
|
| 477 |
+
836 of fast and skilled readers with dyslexia. This indicates that a fast reading speed is atypical for a reader with dyslexia. These results contribute to findings that the symptoms of dyslexia occur in varying degrees and thus underline the importance of developing a reliable assessment tool for dyslexia that can reduce the number of misclassifications.
Moreover, due to known co-morbidities across reading disorders (Mayes et al., 2000) that can be reflected in eye movements (e.g., attention and autism spectrum disorders), we will, as the dataset continues to grow, include these populations of readers in the data collection to learn to classify different subgroups of readers correctly.
Precise criteria for dyslexia diagnosis remain difficult to standardise given the varying degrees of the symptoms and indicators of the disorder, which is why the condition deserves more attention. As eye-tracking recordings provide insightful information about cognitive processes in naturalistic tasks such as reading, they can be a beneficial tool for dyslexia prediction and a stepping stone to achieving more reliable screening methods for dyslexia.
## References
Arwa Al-Edaily, Areej Al-Wabil, and Yousef Al-Ohali. 2013. Dyslexia Explorer: A screening system for learning difficulties in the Arabic language using eye tracking. In International Conference on Human Factors in Computing and Informatics, pages 831-834. Springer.

André Duncan Arceneaux. 2006. It doesn't make any sense: Self and strategies among college students with learning disabilities. Ph.D. thesis, University of Missouri-Columbia.

Thomais Asvestopoulou, Victoria Manousaki, Antonis Psistakis, Ioannis Smyrnakis, Vassilios Andreadakis, Ioannis M Aslanides, and Maria Papadopouli. 2019. DysLexML: Screening tool for dyslexia using machine learning.

Mattias Benfatto, Gustaf Öqvist Seimyr, Jan Ygge, Tony Pansell, Agneta Rydberg, and Christer Jacobson. 2016. Screening for dyslexia using eye tracking during reading. PLoS ONE, 11(12):e0165508.

Monica Biscaldi, Stefan Gezeck, and Volker Stuhr. 1998. Poor saccadic control correlates with dyslexia. Neuropsychologia, 36(11):1189-1202.

CH Björnsson. 1968. Läsbarhet. Liber, Stockholm, Sweden.

Christoforos Christoforou, Argyro Fella, Paavo HT Leppänen, George K Georgiou, and Timothy C Papadopoulos. 2021. Fixation-related potentials in naming speed: A combined EEG and eye-tracking study on children with dyslexia. Clinical Neurophysiology, 132(11):2798-2807.

Uschi Cop, Nicolas Dirix, Denis Drieghe, and Wouter Duyck. 2017. Presenting GECO: An eyetracking corpus of monolingual and bilingual sentence reading. Behavior Research Methods, 49(2):602-615.

Maria De Luca, Enrico Di Pace, Anna Judica, Donatella Spinelli, and Pierluigi Zoccolotti. 1999. Eye movement patterns in linguistic and non-linguistic tasks in developmental surface dyslexia. Neuropsychologia, 37(12):1407-1420.

Hazel Denhart. 2008. Deconstructing barriers: Perceptions of students labeled with learning disabilities in higher education. Journal of Learning Disabilities, 41(6):483-497.

Mark Eckert. 2004. Neuroanatomical markers for dyslexia: A review of dyslexia structural imaging studies. The Neuroscientist, 10(4):362-371.

GF Eden, JF Stein, HM Wood, and FB Wood. 1994. Differences in eye movements and reading problems in dyslexic and normal children. Vision Research, 34(10):1345-1358.

Burkhart Fischer and Heike Weber. 1990. Saccadic reaction times of dyslexic and age-matched normal subjects. Perception, 19(6):805-818.

Patrick Haller, Andreas Säuberli, Sarah Elisabeth Kiener, Jinger Pan, Ming Yan, and Lena Jäger. 2022. Eye-tracking based classification of Mandarin Chinese readers with and without dyslexia using neural sequence models. arXiv preprint arXiv:2210.09819.

John M Henderson. 2013. Eye movements. The Oxford Handbook of Cognitive Psychology, pages 69-82.

Nora Hollenstein, Maria Barrett, and Marina Björnsdóttir. 2022. The Copenhagen corpus of eye tracking recordings from natural reading of Danish texts. arXiv preprint arXiv:2204.13311.

Nora Hollenstein, Jonathan Rotsztejn, Marius Troendle, Andreas Pedroni, Ce Zhang, and Nicolas Langer. 2018. ZuCo, a simultaneous EEG and eye-tracking resource for natural sentence reading. Scientific Data, 5(1):1-13.

Jukka Hyönä and Richard K Olson. 1995. Eye fixation patterns among dyslexic and normal readers: Effects of word length and word frequency. Journal of Experimental Psychology: Learning, Memory, and Cognition, 21(6):1430.

Katrine Lyskov Jensen, Anna Steenberg Gellert, and Carsten Elbro. 2014. Rapport om udvikling og afprøvning af selvtest af læsning - en selvtest af voksnes læsefærdigheder på nettet.

Holger Juul and Baldur Sigurdsson. 2005. Orthography as a handicap? A direct comparison of spelling acquisition in Danish and Icelandic. Scandinavian Journal of Psychology, 46(3):263-272.

Shahriar Kaisar. 2020. Developmental dyslexia detection using machine learning techniques: A survey. ICT Express, 6(3):181-184.

Joanna Lynne Kelm. 2016. Adults' experiences of receiving a diagnosis of a learning disability. Ph.D. thesis, University of British Columbia.

Shelley Young Kong. 2012. The emotional impact of being recently diagnosed with dyslexia from the perspective of chiropractic students. Journal of Further and Higher Education, 36(1):127-146.

Susan D Mayes, Susan L Calhoun, and Errin W Crowell. 2000. Learning disabilities and ADHD: Overlapping spectrum disorders. Journal of Learning Disabilities, 33(5):417-424.

Trude Nergård-Nilssen and Kenneth Eklund. 2018. Evaluation of the psychometric properties of "The Norwegian screening test for dyslexia". Dyslexia, 24(3):250-262.

Boris Nerušil, Jaroslav Polec, Juraj Škunda, and Juraj Kačur. 2021. Eye tracking based dyslexia detection using a holistic approach. Scientific Reports, 11(1):1-10.

George Th Pavlidis. 1981. Do eye movements hold the key to dyslexia? Neuropsychologia, 19(1):57-64.

Harshani Perera, Mohd Fairuz Shiratuddin, and Kok Wai Wong. 2018. Review of EEG-based pattern classification frameworks for dyslexia. Brain Informatics, 5(2):1-14.

Francis J Pirozzolo and Keith Rayner. 1979. The neural control of eye movements in acquired and developmental reading disorders. Studies in Neurolinguistics, pages 97-123.

A Jothi Prabha and R Bhargavi. 2020. Predictive model for dyslexia from fixations and saccadic eye movement events. Computer Methods and Programs in Biomedicine, 195:105538.

Peter Raatikainen, Jarkko Hautala, Otto Loberg, Tommi Kärkkäinen, Paavo Leppänen, and Paavo Nieminen. 2021. Detection of developmental dyslexia with machine learning using eye movement data. Array, 12:100087.

Maria Rauschenberger, Luz Rello, Ricardo Baeza-Yates, Emilia Gomez, and Jeffrey P Bigham. 2017. Towards the prediction of dyslexia by a web-based game with musical elements. In Proceedings of the 14th International Web for All Conference, pages 1-4.

Keith Rayner. 1986. Eye movements and the perceptual span in beginning and skilled readers. Journal of Experimental Child Psychology, 41(2):211-236.

Keith Rayner. 1998. Eye movements in reading and information processing: 20 years of research. Psychological Bulletin, 124(3):372.

University of Copenhagen Centre for Reading Research, Danish School of Education, Aarhus University, The National Board of Social Services, and Ministry of Children and Education. 2020. Vejledning til Ordblindetesten (version 8). Ministry of Children and Education.

Luz Rello and Miguel Ballesteros. 2015. Detecting readers with dyslexia using machine learning with eye tracking measures. In Proceedings of the 12th International Web for All Conference, pages 1-8.

CA Rubino and HA Minden. 1973. An analysis of eye-movements in children with a reading disability. Cortex: A Journal Devoted to the Study of the Nervous System and Behavior.

Gerd Schulte-Körne. 2010. The prevention, diagnosis, and treatment of dyslexia. Deutsches Arzteblatt International, 107(41):718.

Noam Siegelman, Sascha Schroeder, Cengiz Acartürk, Hee-Don Ahn, Svetlana Alexeeva, Simona Amenta, Raymond Bertram, Rolando Bonandrini, Marc Brysbaert, Daria Chernova, et al. 2022. Expanding horizons of cross-linguistic research on reading: The multilingual eye-movement corpus (MECO). Behavior Research Methods, pages 1-21.

Chris Singleton, Joanna Horne, and Fiona Simmons. 2009. Computerised screening for dyslexia in adults. Journal of Research in Reading, 32(1):137-152.

Ioannis Smyrnakis, Vassilios Andreadakis, Vassilios Selimis, Michail Kalaitzakis, Theodora Bachourou, Georgios Kaloutsakis, George D Kymionis, Stelios Smirnakis, and Ioannis M Aslanides. 2017. RADAR: A novel fast-screening method for reading difficulties with special focus on dyslexia. PLoS ONE, 12(8):e0182597.

Kathleen Tanner. 2009. Adult dyslexia and the 'conundrum of failure'. Disability & Society, 24(6):785-797.
NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/0nNhIdvKQkU/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,638 @@
§ DYSLEXIA PREDICTION FROM NATURAL READING OF DANISH TEXTS
Anonymous Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymouser Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymousest Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain
§ ABSTRACT
Dyslexia screening in adults is an open challenge, since difficulties may not align with standardised tests designed for children. We collect eye-tracking data from natural reading of Danish texts from readers with dyslexia, closely following the experimental design of a corpus of readers without dyslexia. Research suggests that the opaque orthography of the Danish language affects the diagnostic characteristics of dyslexia. To the best of our knowledge, this is the first attempt to classify dyslexia from eye movements during reading in Danish. We experiment with various machine-learning methods, and our best model yields an F1 score of 0.85.
§ 1 INTRODUCTION
Dyslexia is a learning disorder of neurological origin that reportedly affects about 10-20% of the world population (Rello and Ballesteros, 2015; Kaisar, 2020). It involves difficulties with reading, spelling, and decoding words, and is not related to intelligence (Perera et al., 2018; Rauschenberger et al., 2017). Detecting dyslexia as early as possible is vital, as the disorder can lead to many negative consequences that can be mitigated with proper assistance. These include low self-esteem and high rates of depression and anxiety (Perera et al., 2018; Schulte-Körne, 2010). Qualitative studies suggest that living with an undiagnosed learning disorder leads to frustration (Kong, 2012) and feelings of being misunderstood (Denhart, 2008) and of failure (Tanner, 2009). Being diagnosed with a learning disorder as an adult has been reported to lead to a sense of relief (Arceneaux, 2006), validation (Denhart, 2008; Kelm, 2016), and liberation (Tanner, 2009; Kong, 2012).

Dyslexia can be difficult to diagnose because its indications and impairments occur in varying degrees (Eckert, 2004), and it is therefore often recognised as a hidden disability (Rello and Ballesteros, 2015). Popular methods of detecting dyslexia usually include standardised lexical assessment tests that involve behavioural aspects, such as reading and spelling tasks (Perera et al., 2018). Singleton et al. (2009) explain that computerised screening methods are well-established for children in the UK, but developing such tests for adult readers with dyslexia is exceptionally challenging, as adults with dyslexia may not show obvious literacy difficulties that align with what standardised tests distinguish as dyslexic tendencies. For one thing, dyslexia is experienced differently from person to person; moreover, most adults with dyslexia have developed strategies that help them disguise weaknesses, and they may thus remain unnoticed and produce false-negative test results (Singleton et al., 2009).

Less frequently used methods are eye tracking during reading and neuroimaging techniques such as (functional) magnetic resonance imaging, electroencephalography, brain positron emission tomography, and magnetoencephalography (Kaisar, 2020; Perera et al., 2018). These methods are still under experimental development and are currently not used for screening dyslexia (Perera et al., 2018). A small body of studies investigates dyslexia detection using eye tracking with the help of machine-learning techniques, outlined in §2.4. Compared to neuroimaging techniques, eye tracking is more affordable and faster to record, and its link to online text processing is well established (Rayner, 1998). Using eye-tracking recordings for dyslexia detection does not necessarily require readers to respond or perform a test but merely objectively observes the reader during natural reading (Benfatto et al., 2016). Although eye-tracking experiments are often limited to a relatively small number of participants compared to computerised tools, the method typically produces many data points from each participant.
The purpose of the current paper is twofold: 1) We provide a dataset from participants with dyslexia reading natural Danish texts. This dataset uses the same experimental design as the CopCo corpus by Hollenstein et al. (2022), which allows us to compare the eye movement patterns of readers with dyslexia to those of the readers without dyslexia from CopCo. 2) We train the first machine learning classifiers for dyslexia prediction from eye movements in Danish. The data is available as raw gaze recordings, fixation-level information, and word-level eye-tracking features${}^{1}$. The code for all our experiments is also available online${}^{2}$.
§ 2 RELATED WORK
§ 2.1 DYSLEXIA SCREENING IN DENMARK
In 2015, the Ministry of Children and Education in Denmark launched a national electronic dyslexia test, Ordblindetesten ("the Dyslexia Test"). The test is a screening method for children, youths, and adults suspected to have dyslexia. It is accessible through educational institutions and is performed under the observation of a supervisor (Centre for Reading Research et al., 2020). It consists of three multiple-choice subtests, performed electronically, that focus on phonological decoding abilities. The result is rendered as 'not dyslexic', 'uncertain phonological decoding', or 'dyslexic'. The official instruction strictly states that the uncertain group is not to be considered dyslexic${}^{3}$ and is therefore not entitled to dyslexia support, but these individuals may benefit from other support and are subject to further assessment. To this end, Helleruptesten ("the Hellerup Test") is used by educational institutions for adults.
§ 2.2 DANISH AS A TARGET LANGUAGE
Similar studies on dyslexia detection with machine learning (ML) classification include experiments with Chinese (Haller et al., 2022), Swedish (Benfatto et al., 2016), Spanish (Rello and Ballesteros, 2015), Greek (Asvestopoulou et al., 2019), Arabic (Al-Edaily et al., 2013), and Finnish (Raatikainen et al., 2021) as their target languages. However, the diagnostic characteristics of dyslexia may differ depending on the transparency of the language. In early research, De Luca et al. (1999) reported that the regular spelling-sound correspondences in languages with transparent orthographies, e.g., German and Italian, dim phonological deficits. Phonological deficits of individuals with dyslexia are clearer in languages with irregular, non-transparent orthographies (Smyrnakis et al., 2017).
Danish is a language with a highly non-transparent orthography. It has been shown that adult reading comprehension skills are poorer in Danish than in other Nordic languages (Juul and Sigurdsson, 2005). This suggests that identifying readers with dyslexia may be easier in Danish than in languages with transparent orthographies. The lack of spelling-sound correspondence indicates that the Danish language holds great value for investigating dyslexia detection, for two main reasons. Firstly, the combination of the non-transparent orthography of Danish and eye movement patterns could potentially reveal more apparent indications of dyslexia through the selected features that have proven relevant for dyslexia detection in other languages, which can be favourable in further research on, e.g., the development of assistive tools and technologies. Secondly, the fact that reading comprehension skills are poorer in Danish than in other Nordic languages highlights the necessity of proper assistance and recognition for individuals with dyslexia in Denmark.
§ 2.3 DYSLEXIA AND EYE MOVEMENTS
Tracking eye movements during natural reading reveals information on fixations (relatively still gaze on a single location) and saccades (rapid movements between fixations). Studies (Rayner, 1998; Henderson, 2013) have substantiated that eye movements during reading carry characterisations of the visual and cognitive processes that directly drive them. These are also strongly related to identifying information about, e.g., attention during reading, which is highly correlated with saccades (Rayner, 1998). As Henderson (2013) phrases it, "eye movements serve as a window into the operation of the attentional system".
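To make the fixation/saccade distinction concrete, here is a minimal dispersion-threshold sketch in the spirit of classic I-DT event detection; it is not the eye tracker's own algorithm, and the threshold values are illustrative assumptions.

```python
# Minimal dispersion-threshold (I-DT style) fixation detector:
# consecutive gaze samples whose x/y spread stays below a threshold
# for a minimum duration form a fixation; everything else is treated
# as saccadic movement. Units and thresholds are illustrative.

def _dispersion(window):
    xs, ys = zip(*window)
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, max_dispersion=25.0, min_samples=5):
    """samples: list of (x, y) gaze coordinates at a fixed sampling rate.
    Returns (start, end) index pairs of detected fixations."""
    fixations, start = [], 0
    while start + min_samples <= len(samples):
        end = start + min_samples
        if _dispersion(samples[start:end]) <= max_dispersion:
            # grow the window while dispersion stays below threshold
            while end < len(samples) and _dispersion(samples[start:end + 1]) <= max_dispersion:
                end += 1
            fixations.append((start, end))
            start = end
        else:
            start += 1  # saccade sample; slide the window forward
    return fixations
```

Fixation durations and counts, as used throughout this line of work, fall out directly from the detected (start, end) spans and the sampling rate.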
Previous studies have repeatedly shown that readers with dyslexia show a higher number of fixations and regressions, longer fixation durations, and shorter and more numerous saccades than readers without dyslexia (Pirozzolo and Rayner, 1979; Rayner, 1986; Biscaldi et al., 1998). This was already discovered by Rubino and Minden (1973), and later work discussed whether this was the cause or effect of dyslexia, with evidence on both sides, e.g., Pirozzolo and Rayner (1979); Pavlidis (1981); Eden et al. (1994); Biscaldi et al. (1998). Most recent studies acknowledge that the movements reflect a dyslexic reader's difficulties with processing language (Fischer and Weber, 1990; Hyönä and Olson, 1995; Henderson, 2013; Rello and Ballesteros, 2015; Benfatto et al., 2016; Raatikainen et al., 2021), and Rayner (1998), echoing an earlier study (Rayner, 1986), states that eye movements are not the cause of slow reading but rather reflect the more time-consuming cognitive processes. These insights from psycholinguistics motivate the feature selection for this work.

Figure 1: Fixations recorded from a reader without dyslexia (above) and a reader with dyslexia (below) when reading the same sentence. Numbers indicate duration in ms.

${}^{1}$ https://osf.io/anonymous.com/
${}^{2}$ https://github.com/anonymous
${}^{3}$ https://www.spsu.dk/for-stoettegivere/elever-og-studerende-med-usikker-fonologisk-kodning
§ 2.4 ML-BASED DYSLEXIA DETECTION FROM GAZE
There is recent evidence that ML-based methods can be used for dyslexia detection in children, e.g., Christoforou et al. (2021); Nerušil et al. (2021). This section is, however, limited to ML-based methods for dyslexia detection in adults. Prior studies investigating dyslexia detection with machine learning classification on eye-tracking data have concluded that support vector machines (SVMs) are of great advantage (Rello and Ballesteros, 2015; Benfatto et al., 2016; Prabha and Bhargavi, 2020; Asvestopoulou et al., 2019; Raatikainen et al., 2021). Rello and Ballesteros (2015) used an SVM for dyslexia detection based on eye-tracking recordings from readers with and without dyslexia, which resulted in an accuracy of 80.18%. Benfatto et al. (2016) and Prabha and Bhargavi (2020) achieved accuracy scores of 95.6% and 95%, respectively, on the same dataset using SVM variations.

With Greek as their target language, Smyrnakis et al. (2017) propose a method with two groups of features for dyslexia detection: word-specific and non-word-specific. Non-word-specific features consisted of fixation duration, saccade lengths, short refixations, and the total number of fixations; the word-specific features contained gaze duration on each word and the number of revisits on each word. Based on the same dataset as Smyrnakis et al., Asvestopoulou et al. (2019) developed a tool called DysLexML. Their classifier with the highest accuracy on noise-free data is a linear SVM used on features selected by LASSO regression at $\lambda_{1\mathrm{SE}}$, which gave an accuracy of 87.87%, and up to 97%+ when using leave-one-out cross-validation. More recently, Raatikainen et al. (2021) used a hybrid method consisting of an SVM classifier with random forest feature selection for dyslexia detection from eye movement data. The best-performing SVM model of their study scored an accuracy of 89.7%.
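The hybrid scheme described above (random forest feature selection feeding an SVM) can be sketched with standard scikit-learn components. This is a rough illustration on synthetic stand-in data, not the cited authors' implementation; the data shapes and hyperparameters are assumptions.

```python
# Sketch: random-forest importance scores select a feature subset,
# then a linear SVM classifies on the reduced features. Data here is
# synthetic; only features 0 and 1 actually determine the label.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(42)
X = rng.normal(size=(60, 20))             # 60 readers x 20 gaze features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # synthetic label rule

pipe = make_pipeline(
    SelectFromModel(RandomForestClassifier(n_estimators=50, random_state=0)),
    SVC(kernel="linear"),
)
pipe.fit(X, y)
print("train accuracy:", pipe.score(X, y))
```

`SelectFromModel` keeps only features whose forest importance exceeds the mean importance, so the SVM sees a compact, informative subset rather than all raw gaze measures.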
§ 3 DATA COLLECTION
Data acquisition follows Hollenstein et al. (2022), but the most important points are repeated here. The only procedural difference is the two additional reading tests administered to participants with dyslexia, as described in Section 3.3.
§ 3.1 PARTICIPANT SELECTION
The participant selection for this study of natural reading is purposefully broad and follows the requirements of Hollenstein et al. (2022), from which we sample the typical readers. Prior to this, we excluded four participants from the non-dyslexic group from the analysis due to poor calibration and reported attention deficit disorder. The only difference to our participant sampling is that all dyslexic readers are officially diagnosed with dyslexia. There is no age limit and no required educational background, but all participants are adults and native speakers of Danish. All have normal or corrected-to-normal vision (glasses or contact lenses), but no readers included in the analysis had a known attention deficit disorder. All participants signed an informed consent, and all digital data is pseudonymised. Due to the absence of an official dyslexia diagnosis, we discard
| SUBJ | SCORE | $n$ TEXTS | WPM | AGE | GENDER | DIAGNOSED |
|---|---|---|---|---|---|---|
| **READERS WITH DYSLEXIA** |  |  |  |  |  |  |
| P23 | 1.00 | 2 | 200.0 | 33 | F | 16 |
| P24 | 0.80 | 2 | 203.7 | 64 | F | 9 |
| P25 | 0.82 | 4 | 142.0 | 20 | F | 16 |
| P26 | 0.57 | 2 | 86.7 | 32 | M | 12 |
| P27 | 0.71 | 4 | 137.4 | 53 | M | 48 |
| P28 | 0.93 | 4 | 173.3 | 25 | F | 15 |
| P29 | 0.73 | 3 | 143.3 | 25 | F | 21 |
| P30 | 0.93 | 4 | 179.0 | 61 | M | 50 |
| P31 | 0.75 | 2 | 61.9 | 20 | M | 15 |
| P33 | 0.86 | 2 | 59.3 | 30 | F | 8 |
| P34 | 0.62 | 2 | 107.4 | 56 | F | 9 |
| P35 | 0.71 | 4 | 285.1 | 24 | F | 19 |
| P36 | 0.40 | 2 | 58.5 | 23 | F | 11 |
| P37 | 0.58 | 4 | 270.7 | 25 | F | 23 |
| P38 | 0.75 | 2 | 115.5 | 30 | M | 29 |
| P39 | 1.00 | 1 | 160.2 | 32 | F | 17 |
| P40 | 0.92 | 4 | 173.3 | 29 | M | 7 |
| P41 | 0.88 | 4 | 154.9 | 51 | F | 50 |
| **AVG** | 0.78 (0.16) | 2.9 (1.1) | 150.7 (65.0) | 35.1 (14.7) | 67.7% F | 20.8 (14.3) |
| **READERS WITHOUT DYSLEXIA** |  |  |  |  |  |  |
| **AVG** | 0.81 (0.11) | 4.4 (1.5) | 276.8 (54.6) | 30.7 (10.8) | 78% F | - |
Table 1: Overview of readers with dyslexia included in the study. Average and standard deviations are in brackets. SCORE is the accuracy of the answers to the comprehension questions; DIAGNOSED refers to the age at which the participants were diagnosed with dyslexia. Aggregated data from the 18 readers without dyslexia from Hollenstein et al. (2022) for comparison.
|
| 250 |
+
|
| 251 |
+
the data from one subject for further analysis, but include 18 readers in the dyslexic group. Participant statistics for all included dyslexic participants are presented in Table 1, with a summary of the 18 non-dyslexic participants for comparison.
§ 3.2 READING MATERIALS

We used the same set of reading materials as Hollenstein et al. (2022), presented in the same way. They are 46 transcribed and proofread Danish speeches, accessed from the Danske Taler archive (https://dansketaler.dk). Table 2 shows an overview. The readability of each speech was quantified with a LIX score, which is based on the length of the words and sentences in a text (Björnsson, 1968). Each reader read a subset of the full dataset, reported as $n$ TEXTS in Table 1.

Reading comprehension questions To prevent mindless reading, comprehension questions were added after approximately 20% of the paragraphs containing more than 100 characters, following Hollenstein et al. (2022). The average accuracy of the comprehension questions per participant is shown in the SCORE column of Table 1.
| X | MIN | MAX | MEAN | STD | TOTAL |
|---|---|---|---|---|---|
| SENTS PER DOC | 37 | 134 | 92.4 | 29.4 | 1,849 |
| TOKENS PER DOC | 978 | 2,846 | 1,744.8 | 533.1 | 34,897 |
| WORD TYPES PER DOC | 391 | 1,056 | 603.6 | 159.4 | 7,361 |
| LIX PER DOC | 26.4 | 50.1 | 37.2 | 7.2 | - |
| FREQUENCY PER DOC | 0.68 | 0.79 | 0.74 | 0.03 | - |
| SENT LEN IN TOKENS | 1 | 119 | 10.8 | 15.9 | - |
| TOKEN LEN IN CHARS | 1 | 33 | 4.5 | 3.0 | - |

Table 2: Statistics on the 46 documents that comprise the reading material. TOTAL is the dataset total. LIX is the readability score: for typical readers, a text with a LIX score between 25 and 34 is considered easy, whereas a text scoring more than 55 is considered difficult and corresponds to an academic text. The frequency is measured as the proportion of words included in the 10,000 most common Danish words from https://korpus.dsl.dk/resources/details/freq-lemmas.html.
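The LIX scores above can be reproduced with Björnsson's formula: average sentence length plus the percentage of long words (more than six letters). A minimal sketch, where the sentence-splitting heuristic and example text are our own simplifications:

```python
import re

def lix(text: str) -> float:
    """LIX readability: (words / sentences) + (long words * 100 / words),
    where a long word has more than six letters (Björnsson, 1968)."""
    words = re.findall(r"[^\W\d_]+", text)          # letter runs, incl. æøå
    sentences = max(1, len(re.findall(r"[.:!?]+", text)))
    long_words = sum(1 for w in words if len(w) > 6)
    return len(words) / sentences + 100.0 * long_words / len(words)

text = ("Der er mange lange komplicerede formuleringer i akademiske tekster. "
        "Korte ord er lette.")
print(round(lix(text), 1))  # 13 words, 2 sentences, 4 long words -> 37.3
```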
§ 3.3 LEXICAL ASSESSMENT

All participants with dyslexia performed two lexical assessment tests, which are used as a control test for the current study. Both tests were developed by the Centre of Reading Research, University of Copenhagen. The purpose of the tests is to provide a comparable benchmark for a lexical assessment unrelated to the eye movements of the participants with dyslexia.

Nergård-Nilssen and Eklund (2018) found in their psychometric evaluation that a pseudohomophone test is highly reliable and that such a test incorporates evaluations that accurately discriminate readers with dyslexia. Due to this finding, as well as the fact that the pseudohomophone task is used in the Danish dyslexia test, a pseudohomophone test was selected as one of the lexical assessment tests for the current study. For the sake of reliability and of providing insightful findings on reading skills, a reading comprehension test was also used as a complementary lexical assessment test.
Reading comprehension test The original purpose of the reading comprehension test${}^{4}$ is to give adults easy access to an informal evaluation of their reading skills, motivated by the observation that more adults are seeking help with developing their reading skills (Jensen et al., 2014). It takes ten minutes to complete, making it relatively short yet insightful. The tasks in the test consist of three variants of cloze tests, i.e., tests where the participants must select a missing word in a sentence, e.g., *It had been raining for some_____ [days, moments, countries]* (our translation).

As the reading task is an online self-assessment test that requires no log-in, external assistance, or special access, the participants without dyslexia were contacted after their participation in the eye-tracking experiment and asked to voluntarily take the test at home to serve as a control group. Ten participants without dyslexia submitted their scores as a contribution to this experiment.

The aggregated results for both reader groups are presented in Table 3. We observe that readers with dyslexia generally have a lower score and a larger variance. A two-tailed t-test showed that this difference is significant ($p < 0.001$).

Pseudohomophone test The second linguistic assessment we conducted with the participants with dyslexia was a pseudohomophone test${}^{5}$, developed as part of a diagnostic reading test for adults. The test encompasses 38 tasks, where each task consists of four non-words, one of which sounds like a real Danish word when pronounced. The difficulty of the 38 tasks
| GROUP | $n$ | MEAN | MIN | MAX |
|---|---|---|---|---|
| DYSLEXIC | 18 | 3.5 | 0.7 | 5.2 |
| NOT DYSLEXIC | 10 | 5.7 | 4.4 | 7.1 |

Table 3: Reading task scores for participants of both reading groups. A score between 0-3.4 indicates that the reader may find many texts difficult and time-consuming to read, a score between 3.5-3.9 indicates that the reader may find some texts difficult and/or time-consuming to read, and a score over 4 indicates good reading skills.
| GROUP | $n$ | ACC |
|---|---|---|
| NO READING DIFFICULTIES | 72 | 66% |
| IN PROGRAMS FOR DYSLEXIC STUDENTS | 46 | 23% |
| IN LITERACY READING PROGRAMS | 167 | 31% |
| COPCO READERS WITH DYSLEXIA | 18 | 33% |

Table 4: Pseudohomophone test accuracies. The three top rows are standards from the official documentation of the test material, included for comparison.
increases gradually. The participants get five minutes to complete as many tasks as possible. Knowledge of the words in the test is required to perform it, but as the words are frequent, everyday Danish words, it is assumed that adult native readers of Danish are familiar with them. Translated examples of the words are: cheese, eat, steps, factory, and help.

The results are presented in Table 4, compared to standard scores from the documentation of the test${}^{6}$. We observe that the scores of the readers with dyslexia in the current study are on par with the standard scores of adults in literacy reading programs and higher than the standards for adults in programs for dyslexic readers. However, all quartile scores for our group of readers with dyslexia are about half of the standards for adults without reading difficulties.

§ 3.4 EXPERIMENT PROCEDURE
Eye movement data were collected with an infrared video-based EyeLink 1000 Plus eye tracker (SR Research), following Hollenstein et al. (2022). The experiment was designed with the SR Experiment Builder software. Data were recorded at a sampling rate of 1000 Hz. Participants were seated at a distance of approximately 85 cm from a 27-inch monitor (display dimensions 590 × 335 mm, resolution 1920 × 1080 pixels). We recorded monocular eye-tracking data from the right eye. In a few cases of calibration difficulties, the left eye was tracked.

${}^{4}$ Accessed from https://selvtest.nu/

${}^{5}$ Accessed from https://laes.hum.ku.dk/test/

${}^{6}$ https://laes.hum.ku.dk/test/find_det_der_lyder_som_et_ord/standarder/
A 9-point calibration was performed at the beginning of the experiment. The calibration was validated after each block. Re-calibration was conducted if the quality was not good (worst point error $< 1.5^{\circ}$, average error $< 1.0^{\circ}$). Drift correction was performed after each trial, i.e., each screen of text. We required a minimum calibration quality for each recording (a "good" calibration score, or "fair" in exceptionally difficult cases).

Experiment Protocol Participants read the speeches in blocks of two speeches. The experiment was self-paced, meaning there were no time restrictions. Between blocks, the participants could take a break. Each participant completed as many blocks as they were comfortable with in one session. The order of the blocks and the order of the speeches within a block were randomized. Instructions were presented orally and on the computer screen before the experiment started. All participants first completed a practice round of reading a short speech with one comprehension question. The experiment duration was between 60 and 90 minutes.

Stimulus Presentation The text passages presented on each screen resembled the author's original division of the story into paragraphs as much as possible. Comprehension questions were presented on separate screens. The text was in a black, monospaced font (type: Consolas; size: 16 pt) on a light-gray background (RGB: 248, 248, 248). The texts spanned at most 10 lines with triple line spacing. We used a 140-pixel margin at the top and bottom, and a 200-pixel side margin, for a screen resolution of 1920 × 1080.
§ 4 DATA PROCESSING

§ 4.1 EVENT DETECTION

This procedure also follows Hollenstein et al. (2022) closely. During data acquisition, the eye movement events are generated in real time by the EyeLink eye tracker software using a velocity- and acceleration-based saccade detection method. A fixation event is defined by the algorithm as any period that is not a saccade or a blink. Hence, the raw data consist of (x, y) gaze location coordinates for individual fixations.

We use the DataViewer software by SR Research to extract fixation events for all areas of interest. Areas of interest are automatically defined as rectangular boxes surrounding each character of the text on the screen, as shown in Figure 1. For later analysis, only fixations within the boundaries of each displayed character are extracted. Therefore, data points distinctly not associated with reading are excluded. We also set a minimum fixation duration threshold of 100 ms.
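As a rough illustration of this filtering step, the sketch below drops fixations that fall outside every character box or last less than 100 ms. The data layout (tuples for fixations and boxes) is hypothetical, not DataViewer's actual format:

```python
# Hypothetical fixation record: (x, y, duration_ms); AOI box: (x0, y0, x1, y1).
def filter_fixations(fixations, aois, min_duration_ms=100.0):
    """Keep only fixations landing inside some character box and
    lasting at least the minimum duration."""
    def in_some_box(x, y):
        return any(x0 <= x <= x1 and y0 <= y <= y1 for x0, y0, x1, y1 in aois)
    return [(x, y, d) for (x, y, d) in fixations
            if d >= min_duration_ms and in_some_box(x, y)]

boxes = [(0, 0, 10, 20), (10, 0, 20, 20)]          # two character boxes
fix = [(5, 10, 150), (5, 10, 80), (50, 50, 200)]   # 2nd too short, 3rd off-text
print(filter_fixations(fix, boxes))  # → [(5, 10, 150)]
```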
§ 4.2 FEATURE EXTRACTION
In the second step, we use custom Python code to map and aggregate character-level features to word-level features. These features cover the reading process from early lexical access to later syntactic integration. The selection of features is inspired by similar corpora in other languages (Siegelman et al., 2022; Hollenstein et al., 2018; Cop et al., 2017) as well as by features known to show strong effects in the eye movements of readers with dyslexia (Biscaldi et al., 1998; Pirozzolo and Rayner, 1979; Rayner, 1986). We extract the following eye-tracking features:

1. nFIX: the total number of fixations on the current word.
2. FFD: first fixation duration, the duration of the first fixation on the current word.
3. MFD: mean fixation duration, the mean duration of all fixations on the current word.
4. TFD: total fixation duration, the summed duration of all fixations on the current word.
5. FPD: first-pass duration, the summed duration of all fixations on the current word prior to progressing out of the current word (to the left or right).
6. GPT: go-past time, the summed duration of all fixations prior to progressing to the right of the current word, including regressions to previous words that originated from the current word.
7. MSD: mean saccade duration, the mean duration of all saccades originating from the current word.
8. PSV: peak saccade velocity, the maximum gaze velocity (in visual degrees per second) of all saccades originating from the current word.
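The duration-based aggregations in the list above can be sketched as follows; the helper name and dictionary layout are illustrative, and only the features computable from a word's fixation durations alone are shown (the pass-based features additionally need the saccade order):

```python
def word_features(durations):
    """Aggregate the fixation durations (ms) on one word into the
    duration-based word-level features: nFIX, FFD, MFD and TFD."""
    if not durations:  # skipped word: no fixations recorded
        return {"nFIX": 0, "FFD": 0.0, "MFD": 0.0, "TFD": 0.0}
    return {
        "nFIX": len(durations),                   # number of fixations
        "FFD": durations[0],                      # first fixation duration
        "MFD": sum(durations) / len(durations),   # mean fixation duration
        "TFD": sum(durations),                    # total fixation duration
    }

print(word_features([120.0, 200.0, 100.0]))
# → {'nFIX': 3, 'FFD': 120.0, 'MFD': 140.0, 'TFD': 420.0}
```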
§ 5 DYSLEXIA CLASSIFICATION

We experiment with three types of classifiers using features at two different levels of aggregation: sentence level and trial level. A trial corresponds to the text presented on a single screen, roughly corresponding to a paragraph from the original text materials. For both levels of aggregation, the eye-tracking features of each word in a sentence or trial, respectively, are averaged to obtain a single vector of eight features per sample. We therefore train classifiers where each sample corresponds to the eye-tracking information from either a sentence or a full trial. Dataset sizes are presented in Table 5. The data is split into 90% training data and 10% test data. We use an additional 10% of the training data as a validation split for the Long Short-Term Memory (LSTM) network. For all experiments, we randomly undersampled the non-dyslexic data for training, but not for testing. We perform five runs, taking different random samples from the data of readers without dyslexia, and report the average performance.
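A minimal sketch of the sample construction and undersampling described above, assuming each word is already represented by an eight-dimensional feature vector (names are ours, not from the paper's code):

```python
import random

def make_sample(word_vectors):
    """Average the per-word feature vectors of one sentence or trial
    into a single eight-dimensional sample."""
    n, dim = len(word_vectors), len(word_vectors[0])
    return [sum(v[i] for v in word_vectors) / n for i in range(dim)]

def undersample(majority, minority, seed=0):
    """Randomly undersample the majority class (non-dyslexic samples)
    to the size of the minority class, as done for training only."""
    rng = random.Random(seed)
    return rng.sample(majority, len(minority)), minority

sent = [[1.0] * 8, [3.0] * 8]        # two words, eight features each
print(make_sample(sent))             # one averaged 8-d sample: all 2.0
maj, mino = undersample(list(range(10)), list(range(4)))
print(len(maj))                      # → 4
```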
| EXPERIMENT TYPE | $n$ SAMPLES (NON-DYSLEXIC) | $n$ SAMPLES (DYSLEXIC) |
|---|---|---|
| TRIAL-LEVEL | 5,147 | 4,144 |
| SENTENCE-LEVEL | 21,859 | 17,477 |

Table 5: Dataset sizes.
SVM and Random Forest Classifiers The eye-tracking features are normalised with a min-max scaler that maps each value to the range between 0 and 1. We use a grid search to tune the hyperparameters of both the SVM (best regularization parameter $C = 100$) and the random forest (best maximum depth of 9, optimal number of estimators of 200) in a 5-fold cross-validation setup on the full training set. The classifiers are implemented with the scikit-learn library for Python. The SVM uses a linear kernel. In addition to taking the mean feature values per word or trial (i.e., aggregating the eye-tracking features of all individual words), we also experiment with adding the standard deviations and maximum values of each feature.
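The normalisation and the +STD/+MAX feature expansion can be sketched in pure Python (the paper uses scikit-learn's implementations; this is an illustrative equivalent):

```python
from statistics import mean, pstdev

def min_max_scale(column):
    """Scale one feature column to [0, 1], as done before training
    the SVM and random forest."""
    lo, hi = min(column), max(column)
    if hi == lo:
        return [0.0 for _ in column]
    return [(v - lo) / (hi - lo) for v in column]

def expand_features(word_vectors):
    """One sample with the mean, standard deviation and maximum of each
    feature over the words (the +STD +MAX variants in the experiments)."""
    cols = [[v[i] for v in word_vectors] for i in range(len(word_vectors[0]))]
    return ([mean(c) for c in cols] + [pstdev(c) for c in cols]
            + [max(c) for c in cols])

print(min_max_scale([100.0, 150.0, 200.0]))      # → [0.0, 0.5, 1.0]
print(expand_features([[1.0, 2.0], [3.0, 4.0]])) # means, stds, maxima
```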
LSTM Classifiers with Sequential Word Features We also train a recurrent neural network optimized for sequential data, namely an LSTM. As LSTMs perform well on sequences and data with large vocabularies, and are effective at memorizing important information, predicting the probability of a class for a sentence given the observed words can be beneficial for dyslexia detection. The inputs to the LSTM network are therefore the same eye-tracking features, but rather than aggregating over the full trial or sentence, each word is assigned its own feature vector. The sequences were then padded to the maximum sentence or trial length, respectively. We use two LSTM layers, with 32 and 16 dimensions, respectively, and a dropout rate of 0.3 after the first layer. Finally, we use a sigmoid activation function to output the probability of each class. The models are trained with a batch size of 128, using a cross-entropy loss and an RMSprop optimizer with a learning rate of 0.001. We implement early stopping with a patience of 70 epochs on the maximum validation accuracy and save the best model. The model was implemented using Keras.
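The padding step can be sketched as follows, assuming each sample is a list of per-word eight-dimensional feature vectors (a pure-Python stand-in for a utility such as Keras' sequence padding):

```python
def pad_sequences(batch, feature_dim=8, pad_value=0.0):
    """Pad variable-length word-feature sequences to the batch maximum
    length, as required before feeding them to the LSTM."""
    max_len = max(len(seq) for seq in batch)
    pad = [pad_value] * feature_dim
    return [seq + [pad] * (max_len - len(seq)) for seq in batch]

batch = [[[1.0] * 8] * 3, [[2.0] * 8]]  # sentences of 3 words and 1 word
padded = pad_sequences(batch)
print([len(s) for s in padded])         # → [3, 3]
```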
| MODEL | TRIAL | SENTENCE |
|---|---|---|
| SVM | 0.80 (0.018) | 0.71 (0.004) |
| SVM + STD | 0.81 (0.010) | 0.71 (0.006) |
| SVM + STD + MAX | 0.81 (0.014) | 0.72 (0.007) |
| RF | 0.83 (0.012) | 0.72 (0.001) |
| RF + STD | 0.85 (0.015) | 0.72 (0.007) |
| RF + STD + MAX | 0.85 (0.010) | 0.73 (0.006) |
| LSTM | 0.82 (0.030) | 0.71 (0.037) |

Table 6: Average F1 scores (standard deviations across five runs in brackets) for the SVM, R(andom) F(orest), and LSTM classifiers.
§ 5.1 RESULTS

The trial-level and sentence-level results for the dyslexia classification task are presented in Table 6. We observe that trial-level classifiers achieve much higher results than sentence-level classifiers, which is to be expected since the latter include reading data from fewer words. However, for the SVM and random forest, the features are aggregated; hence there will be an upper limit on the text length suitable for these methods. The random forest achieves the best results at both levels, and a wider range of features (namely, including standard deviation and maximum value features) yields higher scores. The LSTM model does not outperform the simpler and faster-to-train random forest models and shows a higher variance between runs.
§ 5.1.1 MISCLASSIFICATIONS

To further analyze these results, we look at the confusion matrix and misclassified participants of the best model, namely the random forest classifier including mean, standard deviation, and maximum value features. The confusion matrices in Figure 2 show that more mistakes are made classifying samples from readers with dyslexia than from readers without dyslexia. This is more apparent at the sentence level, where the number of samples is substantially larger.

Furthermore, we hypothesize that the classifier struggles to correctly classify samples from readers with dyslexia who have reading patterns comparable to readers without dyslexia. The samples that are misclassified most frequently belong mostly to the same group of participants, both at sentence level and at trial level. The most frequently misclassified samples from readers with dyslexia were those of P28, P35, P23, P40, and P37 (in descending order of the number of misclassifications). We correlate the number of misclassified samples for all participants with dyslexia with their demographic and lexical test information and find a significant correlation between misclassifications and words per minute ($\rho = 0.79$, $p < 0.001$) and between misclassifications and reading comprehension scores ($\rho = 0.71$, $p < 0.001$). However, the correlation between misclassifications and pseudohomophone test scores is minimal and not significant. This shows that samples from readers with dyslexia with higher reading speed and better reading comprehension are more likely to be misclassified, since their features are more similar to those of readers without dyslexia.
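The reported correlations are rank correlations; a minimal sketch of Spearman's $\rho$ for tie-free data (illustrative, not the paper's analysis code):

```python
def spearman(x, y):
    """Spearman rank correlation for tie-free data: the Pearson
    correlation of the rank-transformed values."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Perfectly monotone data gives rho = 1.0 (WPM values are illustrative).
print(spearman([61.9, 115.5, 285.1], [2, 5, 9]))  # → 1.0
```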
§ 6 DISCUSSION & CONCLUSION

We presented a dataset of eye-tracking recordings of adults with dyslexia reading natural texts, which complements the CopCo dataset of readers without dyslexia (Hollenstein et al., 2022). Additionally, to the best of our knowledge, we presented the first attempt to predict dyslexia from eye-tracking features with Danish as the target language. The best-performing classifier of the current study achieves an F1 score of 0.85, using a random forest trained with a feature combination that includes the aggregated means, standard deviations, and maximum values of eight eye-tracking features.
While the recorded eye-tracking features proved to reflect vital information about the reading mechanisms of the participants, there was a considerably high number of misclassifications

Figure 2: Confusion matrices for the best classifier, RF+STD+MAX, for each experiment level: (a) trial-level, (b) sentence-level.
of fast and skilled readers with dyslexia. This indicates that a fast reading speed is atypical for a reader with dyslexia. These results contribute to findings that the symptoms of dyslexia occur in varying degrees, and thus underline the importance of developing a reliable assessment tool for dyslexia that can reduce the number of misclassifications.

Moreover, due to known co-morbidities across reading disorders (Mayes et al., 2000) that can be reflected in eye movements (e.g., attention and autism spectrum disorders), we will, as the dataset continues to grow, include these populations of readers in the data collection in order to learn to classify different subgroups of readers correctly.

Precise criteria for dyslexia diagnosis remain difficult to standardise given the varying degrees of the symptoms and indicators of the disorder, which is why the condition deserves more attention. As eye-tracking recordings provide insightful information about cognitive processes in naturalistic tasks such as reading, they can be a beneficial tool for dyslexia prediction. Eye tracking can be a stepping stone to achieving more reliable screening methods for dyslexia.
|
NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/0yzM0ibZgg/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,521 @@
# Evaluating the Impact of Anonymisation on Downstream NLP Tasks

Anonymous Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymouser Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymousest Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain
## Abstract

Data anonymisation is often required to comply with regulations when transferring information across departments or entities. However, the risk is that this procedure can distort the data and jeopardise the models built on it. Intuitively, training an NLP model on anonymised data may lower the performance of the resulting model compared to a model trained on non-anonymised data. In this paper, we investigate the impact of anonymisation on the performance of nine downstream NLP tasks. We focus on the anonymisation and pseudonymisation of personal names and compare six different anonymisation strategies for two state-of-the-art pre-trained models. Based on these experiments, we formulate recommendations on how anonymisation should be performed to guarantee accurate NLP models. Our results reveal that anonymisation does have a negative impact on the performance of NLP models, but this impact is relatively low. We also find that using pseudonymisation techniques involving random names leads to better performance across most tasks.
## 1 Introduction
Protection of personal data has been a hot topic for decades (Bélanger and Crossler, 2011). Careless sharing of data between companies, cyber-attacks, and other data breaches can lead to catastrophic leaks of confidential data, potentially resulting in the invasion of people's privacy and identity theft.

To mitigate damages and hold bad actors accountable, many countries introduced laws that aim to protect confidential data, such as the Health Insurance Portability and Accountability Act (HIPAA) for healthcare confidentiality (Act, 1996), and the Gramm-Leach-Bliley Act (GLBA) in the financial domain (Cuaresma, 2002). Most notably, with the introduction of the General Data Protection Regulation (GDPR), the protection of personally identifiable information was codified into EU law in 2018 (Regulation, 2016).

In order to mitigate data leaks, organisations such as financial institutes and hospitals are required to anonymise or pseudonymise sensitive data before processing them further. Similarly, automated NLP models should ideally be trained on anonymised data, as the resulting models could potentially violate a number of GDPR guidelines, such as the individuals' right to be forgotten and the right to explanation. Furthermore, models can be manipulated to partially recreate the training data (Song et al., 2017), which can result in disastrous data breaches. On the other hand, anonymisation of texts can lead to loss of information and meaning, making NLP models trained on anonymised data less reliable as a result (Meystre et al., 2014). Intuitively, this in turn could lead to a decrease in the performance of such models when compared to models trained on non-anonymised text. As such, it is crucial to choose an appropriate anonymisation strategy to lower this loss of information and avoid performance drops.

In this study, we investigate the impact of anonymisation on the performance of downstream NLP tasks, focusing on the anonymisation and pseudonymisation of personal names only. This allows us to select from a wide array of NLP tasks, as most datasets contain a large number of personal names, whereas other types of names are less commonly found. Specifically, we compare six different anonymisation strategies and two Transformer-based pre-trained model architectures in our experiments: the popular BERT (Devlin et al., 2018) architecture and the state-of-the-art ERNIE (Sun et al., 2020) architecture. Further, we look into nine different NLP tasks of varying degrees of difficulty. We address the following research questions:

- RQ1: Which anonymisation strategy is the most appropriate for downstream NLP tasks?

- RQ2: Should a model be trained on original or anonymised data?
## 2 Experimental Setup
In this section, we present the datasets used in this study, introduce the different anonymisation strategies that we compare against each other, and describe the pre-trained models we use.
### 2.1 Datasets
For this study, we selected several downstream tasks that vary greatly in complexity, ranging from simple text classification to complicated Natural Language Understanding (NLU) tasks featured in the GLUE benchmark collection (Wang et al., 2018). We ensured that each set contains a considerable number of personal names. Most of these datasets are publicly available, except for a proprietary email classification dataset provided by our partners. We release the original as well as the anonymised datasets for most tasks. ${}^{1}$

We choose three public classification tasks: Fake News Detection (FND) ${}^{2}$, News Bias Detection (NBD) (Bharadwaj et al., 2020), and Fraudulent Email Detection (FED) (Radev, 2008).

Five of our investigated tasks are featured in the GLUE collection, namely MRPC (Dolan and Brockett, 2005), RTE (Haim et al., 2006), WNLI (Levesque et al., 2012), CoLA (Warstadt et al., 2018), and MNLI (Williams et al., 2018).
Our final task is the Email Domain Classification (EDC) dataset, which we describe in greater detail. It is provided by our partners in the banking domain. As such, it is a proprietary dataset consisting of sensitive emails from clients and thus cannot be publicly released. However, it serves as an authentic use-case for our study. The task consists of classifying emails along 19 broad domains related to banking activities, such as credit cards, wire transfers, and account management, after which each email is forwarded to the appropriate department. We selected a subset of the provided dataset such that each domain is represented equally. More specifically, for each domain in the set, we randomly selected $\simeq 500$ emails, for a total of nearly 9000 emails. Furthermore, the dataset is multilingual, but we perform our experiments on the emails written in French due to the high sample number.

<table><tr><td>Name</td><td>Description of AS</td></tr><tr><td>AS1</td><td>Singular generic token</td></tr><tr><td>AS2</td><td>Unique generic token for each name in document</td></tr><tr><td>AS3</td><td>Unique generic token for each distinct name in document</td></tr><tr><td>AS4</td><td>Removal of names</td></tr><tr><td>AS5</td><td>Random name for each name in document</td></tr><tr><td>AS6</td><td>Random name for each distinct name in document</td></tr></table>

Table 1: Description of anonymisation strategies
### 2.2 Anonymisation Strategies
We consider six anonymisation strategies (AS1-6) for this study. These strategies are commonly found in the literature (Berg et al., 2020; Deleger et al., 2013). They largely fall into three categories: replacement by a generic token, removal of names, and replacement by a random name. We describe each AS in Table 1. Table 2 shows the differences between the strategies on a simple example.
### 2.3 Model Training
We compare the impact of the anonymisation strategies using two Transformer-based models: BERT (Devlin et al., 2018) and ERNIE (Sun et al., 2020). For the tasks written in English, we use the uncased BERT Base model and the ERNIE Base model. For the EDC task, we use the multilingual mBERT model and the ERNIE-M model published by Ouyang et al. (2021). For our study, we use the Transformers library by Huggingface (Wolf et al., 2019) as our framework. Furthermore, we take a grid-search based approach to determine the most appropriate fine-tuning hyperparameters for each downstream task (cf. Appendix B).
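The grid search over fine-tuning hyperparameters can be sketched as a plain exhaustive loop. The `train_and_eval` callback below is hypothetical: it stands in for whatever routine fine-tunes a model with the given settings and returns a dev-set score.

```python
from itertools import product

def grid_search(train_and_eval, grid):
    """Exhaustive search over fine-tuning hyperparameters.

    `grid` maps parameter names to lists of candidate values (e.g. batch
    size, learning rate, number of epochs); `train_and_eval` is assumed to
    fine-tune a model with those settings and return a dev-set score.
    """
    best_score, best_cfg = float("-inf"), None
    keys = sorted(grid)
    for values in product(*(grid[k] for k in keys)):
        cfg = dict(zip(keys, values))
        score = train_and_eval(**cfg)  # one fine-tuning run per configuration
        if score > best_score:
            best_score, best_cfg = score, cfg
    return best_cfg, best_score
```

Note that with five runs per configuration (as in Section 3), `train_and_eval` would itself average over seeds before returning a score.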
## 3 Experimental Results
In this section, we show the results of our experiments and address the research questions from Section 1. For each task and for each pre-trained model, we fine-tune a model on the original dataset and on each of our six anonymised datasets. We perform five runs for each configuration and average the results. We then compare the average performance for each AS to the performance of the models trained on original data. Table 3 shows the average performance of every model. For each of the GLUE tasks, we use the metric recommended by Wang et al. (2018), and the F1 score for the classification tasks.
---

${}^{1}$ https://anonymous.4open.science/r/anonymisation_paper-E147/

${}^{2}$ https://www.kaggle.com/shubh0799/fake-news

---
<table><tr><td>Original</td><td>"Hi, this is Paul, am I speaking to John?"</td><td>"Sorry, no, this is George. John is not here today."</td></tr><tr><td>AS1</td><td>"Hi, this is ENTNAME, am I speaking to ENTNAME?"</td><td>"Sorry, no, this is ENTNAME. ENTNAME is not here today."</td></tr><tr><td>AS2</td><td>"Hi, this is ENTNAME1, am I speaking to ENTNAME2?"</td><td>"Sorry, no, this is ENTNAME1. ENTNAME2 is not here today."</td></tr><tr><td>AS3</td><td>"Hi, this is ENTNAME1, am I speaking to ENTNAME2?"</td><td>"Sorry, no, this is ENTNAME3. ENTNAME2 is not here today."</td></tr><tr><td>AS4</td><td>"Hi, this is, am I speaking to "</td><td>"Sorry, no, this is , is not here today."</td></tr><tr><td>AS5</td><td>"Hi, this is Bert, am I speaking to Ernie?"</td><td>"Sorry, no, this is Elmo. Kermit is not here today."</td></tr><tr><td>AS6</td><td>"Hi, this is Jessie, am I speaking to James?"</td><td>"Sorry, no, this is Meowth. James is not here today."</td></tr></table>
Table 2: Example for each anonymisation strategy
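As a concrete illustration, the six strategies of Table 1 reduce to simple string substitutions once the person names in each document are known. The sketch below is our reading of the table, not the exact pipeline used in the experiments: the names are assumed to be pre-detected (in practice by an NER model), and `NAME_POOL` is a purely hypothetical pool of replacement names.

```python
import random
import re

# Hypothetical pool of replacement names for the pseudonymisation strategies.
NAME_POOL = ["Alex", "Sam", "Jamie", "Robin", "Charlie", "Taylor"]

def anonymise_corpus(docs, strategy, seed=0):
    """Apply one anonymisation strategy (AS1-AS6) to a corpus.

    `docs` is a list of (text, names) pairs, where `names` lists the person
    names detected in that text. Returns the list of anonymised texts.
    """
    rng = random.Random(seed)
    corpus_map = {}  # AS3: one token per distinct name, shared across documents
    out = []
    for text, names in docs:
        distinct = list(dict.fromkeys(names))  # unique names, first-seen order
        if strategy == "AS1":    # single generic token for every name
            mapping = {n: "ENTNAME" for n in distinct}
        elif strategy == "AS2":  # generic tokens, numbering restarts per document
            mapping = {n: f"ENTNAME{i + 1}" for i, n in enumerate(distinct)}
        elif strategy == "AS3":  # generic tokens, same name -> same token everywhere
            for n in distinct:
                corpus_map.setdefault(n, f"ENTNAME{len(corpus_map) + 1}")
            mapping = corpus_map
        elif strategy == "AS4":  # plain removal of names
            mapping = {n: "" for n in distinct}
        elif strategy == "AS6":  # one random name per distinct name
            mapping = {n: rng.choice(NAME_POOL) for n in distinct}
        elif strategy == "AS5":  # fresh random name for every occurrence
            mapping = None
        else:
            raise ValueError(f"unknown strategy: {strategy}")
        if not distinct:
            out.append(text)
            continue
        pattern = re.compile("|".join(re.escape(n) for n in distinct))
        repl = (lambda m: rng.choice(NAME_POOL)) if mapping is None \
            else (lambda m: mapping[m.group(0)])
        out.append(pattern.sub(repl, text))
    return out
```

Under this reading, AS2 renumbers its tokens in every document, while AS3 keeps one mapping for the whole corpus, so a recurring name such as "John" in Table 2 maps to the same token in both utterances.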
<table><tr><td colspan="2"/><td colspan="7">BERT</td><td colspan="7">ERNIE</td></tr><tr><td>Task</td><td>Metric</td><td>Original</td><td>AS1</td><td>AS2</td><td>AS3</td><td>AS4</td><td>AS5</td><td>AS6</td><td>Original</td><td>AS1</td><td>AS2</td><td>AS3</td><td>AS4</td><td>AS5</td><td>AS6</td></tr><tr><td>FND</td><td>F1</td><td>0.973</td><td>0.976↑</td><td>0.974↑</td><td>0.969↓</td><td>0.965↓</td><td>0.968↓</td><td>0.971↓</td><td>0.968</td><td>0.962↓</td><td>0.960↓</td><td>0.960↓</td><td>0.956↓</td><td>0.956↓</td><td>0.963↓</td></tr><tr><td>NBD</td><td>F1</td><td>0.653</td><td>0.658↑</td><td>0.647↓</td><td>0.654↑</td><td>0.681↑</td><td>0.674↑</td><td>0.683↑</td><td>0.678</td><td>0.681↑</td><td>0.684↑</td><td>0.695↑</td><td>0.709↑</td><td>0.653↓</td><td>0.669↓</td></tr><tr><td>FED</td><td>F1</td><td>0.994</td><td>0.995↑</td><td>0.996↑</td><td>0.996↑</td><td>0.996↑</td><td>0.994</td><td>0.995↑</td><td>0.996</td><td>0.994↓</td><td>0.993↓</td><td>0.994↓</td><td>0.993↓</td><td>0.995↓</td><td>0.993↓</td></tr><tr><td>MRPC</td><td>F1</td><td>0.791</td><td>0.786↓</td><td>0.769↓</td><td>0.768↓</td><td>0.797↑</td><td>0.792↑</td><td>0.783↓</td><td>0.811</td><td>0.824↑</td><td>0.817↑</td><td>0.799↓</td><td>0.832↑</td><td>0.826↑</td><td>0.820↑</td></tr><tr><td>RTE</td><td>Acc</td><td>0.691</td><td>0.670↓</td><td>0.654↓</td><td>0.639↓</td><td>0.624↓</td><td>0.644↓</td><td>0.666↓</td><td>0.703</td><td>0.696↓</td><td>0.665↓</td><td>0.671↓</td><td>0.683↓</td><td>0.716↑</td><td>0.676↓</td></tr><tr><td>WNLI</td><td>F1</td><td>0.520</td><td>0.530↑</td><td>0.526↑</td><td>0.551↑</td><td>0.586↑</td><td>0.541↑</td><td>0.535↑</td><td>0.561</td><td>0.472↓</td><td>0.557↓</td><td>0.564↑</td><td>0.595↑</td><td>0.614↑</td><td>0.550↓</td></tr><tr><td>CoLA</td><td>MCC</td><td>0.555</td><td>0.520↓</td><td>0.522↓</td><td>0.524↓</td><td>0.443↓</td><td>0.495↓</td><td>0.532↓</td><td>0.519</td><td>0.517↓</td><td>0.543↑</td><td>0.556↑</td><td>0.385↓</td><td>0.540↑</td><td>0.542↑</td></tr><tr><td>MNLI</td><td>Acc</td><td>0.754</td><td>0.742↓</td><td>0.730↓</td><td>0.734↓</td><td>0.745↓</td><td>0.742↓</td><td>0.747↓</td><td>0.789</td><td>0.774↓</td><td>0.750↓</td><td>0.759↓</td><td>0.770↓</td><td>0.776↓</td><td>0.773↓</td></tr><tr><td>EDC</td><td>F1</td><td>0.626</td><td>0.624↓</td><td>0.683↑</td><td>0.617↓</td><td>0.619↓</td><td>0.616↓</td><td>0.595↓</td><td>0.642</td><td>0.635↓</td><td>0.696↑</td><td>0.642</td><td>0.635↓</td><td>0.627↓</td><td>0.621↓</td></tr></table>
Table 3: Results of our fine-tuned models. We highlight in green (↑) the models that outperform the models trained on original data, and in red (↓) the models that do not.
<table><tr><td/><td colspan="6">BERT</td><td colspan="6">ERNIE</td></tr><tr><td>Task</td><td>AS1</td><td>AS2</td><td>AS3</td><td>AS4</td><td>AS5</td><td>AS6</td><td>AS1</td><td>AS2</td><td>AS3</td><td>AS4</td><td>AS5</td><td>AS6</td></tr><tr><td>FND</td><td>5</td><td>4</td><td>2</td><td>0</td><td>1</td><td>3</td><td>4</td><td>3</td><td>3</td><td>1</td><td>1</td><td>5</td></tr><tr><td>NBD</td><td>2</td><td>0</td><td>1</td><td>4</td><td>3</td><td>5</td><td>2</td><td>3</td><td>4</td><td>5</td><td>$\underline{\mathbf{0}}$</td><td>1</td></tr><tr><td>FED</td><td>2</td><td>5</td><td>5</td><td>5</td><td>0</td><td>2</td><td>4</td><td>2</td><td>4</td><td>2</td><td>5</td><td>2</td></tr><tr><td>MRPC</td><td>3</td><td>1</td><td>0</td><td>5</td><td>4</td><td>2</td><td>3</td><td>1</td><td>0</td><td>5</td><td>4</td><td>2</td></tr><tr><td>RTE</td><td>5</td><td>3</td><td>1</td><td>0</td><td>2</td><td>4</td><td>4</td><td>0</td><td>1</td><td>3</td><td>5</td><td>2</td></tr><tr><td>WNLI</td><td>1</td><td>0</td><td>4</td><td>5</td><td>3</td><td>2</td><td>0</td><td>2</td><td>3</td><td>4</td><td>5</td><td>1</td></tr><tr><td>CoLA</td><td>2</td><td>3</td><td>4</td><td>0</td><td>1</td><td>5</td><td>1</td><td>4</td><td>5</td><td>0</td><td>2</td><td>3</td></tr><tr><td>MNLI</td><td>3</td><td>0</td><td>1</td><td>4</td><td>3</td><td>5</td><td>4</td><td>0</td><td>1</td><td>2</td><td>5</td><td>3</td></tr><tr><td>EDC</td><td>4</td><td>5</td><td>2</td><td>3</td><td>1</td><td>0</td><td>3</td><td>5</td><td>4</td><td>3</td><td>1</td><td>0</td></tr><tr><td>Total</td><td>27</td><td>21</td><td>20</td><td>26</td><td>18</td><td>28</td><td>25</td><td>20</td><td>25</td><td>25</td><td>28</td><td>21</td></tr><tr><td>Avg.</td><td>3</td><td>2.33</td><td>2.22</td><td>2.89</td><td>2</td><td>3.11</td><td>2.78</td><td>2.22</td><td>2.78</td><td>2.78</td><td>3.11</td><td>2.33</td></tr></table>
Table 4: Ranking scores for fine-tuned models. Bold text shows the winner according to Borda Count, underlined text according to Instant Runoff.
### 3.1 Which anonymisation strategy is the most appropriate for downstream NLP tasks?
In order to determine the most appropriate strategy, we consider two ranking-based approaches: Borda Count and Instant Runoff (Taylor and Pacelli, 2008). For both approaches, we determine the score ${s}_{a,t}$ for each anonymisation strategy (AS, indexed by $a$) and for each task (indexed by $t$) as follows: the best approach gets a score of five, the second best a score of four, and so on. The final Borda Count score for a given anonymisation strategy $A$ is defined as $\sum_{t=1}^{T} s_{A,t}$ (where $T$ is the total number of tasks, here nine). The strategy with the highest score is considered the best.

Instant Runoff is an iterative procedure. In each iteration, we count the number of wins for each AS, where an AS is considered the winner of a given task if its corresponding fine-tuned model outperforms every other model. We then eliminate the AS with the lowest number of wins and update the scores accordingly. We repeat this process until one AS remains, or until we cannot eliminate further ASs.
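Both ranking procedures are easy to state in code. The sketch below takes, for each task, the list of strategies ordered best-first; it is a simplified reading of the procedure above, and our actual tie-handling may differ.

```python
def borda_scores(rankings):
    """Borda Count: with k candidates, the best gets k-1 points, the worst 0.

    `rankings` maps each task to a list of strategy names ordered best-first.
    """
    totals = {}
    for order in rankings.values():
        k = len(order)
        for rank, strat in enumerate(order):
            totals[strat] = totals.get(strat, 0) + (k - 1 - rank)
    return totals

def instant_runoff(rankings):
    """Instant Runoff: repeatedly eliminate the strategy with the fewest
    first-place wins until one remains or no further elimination is possible."""
    alive = {s for order in rankings.values() for s in order}
    while len(alive) > 1:
        wins = {s: 0 for s in alive}
        for order in rankings.values():
            # the winner of a task is its best-ranked surviving strategy
            wins[next(s for s in order if s in alive)] += 1
        fewest = min(wins.values())
        losers = {s for s, w in wins.items() if w == fewest}
        if losers == alive:  # all remaining strategies tie: stop
            break
        alive -= losers
    return alive
```

With six strategies and nine tasks this reproduces the scoring of Table 4: per task the best strategy earns five points and the worst zero.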
Table 4 shows the scores for each model and the winning anonymisation strategies according to the aforementioned approaches. For BERT models, we see that AS1, AS4, and AS6 are the best-performing strategies according to Borda Count, with AS6 a close winner. Instant Runoff leads to similar results, with AS4 and AS6 reaching the final iteration and AS6 being the overall winner. Furthermore, we note a lower variance in the scores for AS6 when compared to AS4. In contrast, when evaluating ERNIE models, we note that AS5 models perform significantly better than every other strategy according to Borda Count. Similarly, AS5 also wins the Instant Runoff, with AS4 and AS5 making it to the final round. Overall, it appears that using random names rather than generic tokens to anonymise textual data is the preferable solution, as the AS1, AS2, and AS3 models, which were all trained on data with generic tokens, usually rank low.
### 3.2 Should a model be trained on original or anonymised data?
In order to answer this question, we investigate the performance of models trained on original data on the anonymised test sets and compare them to the models trained directly on anonymised data. Table 5 shows the results of testing models trained on non-anonymised training sets on anonymised test sets.
<table><tr><td colspan="2"/><td colspan="7">BERT</td><td colspan="7">ERNIE</td></tr><tr><td>Task</td><td>Metric</td><td>Original</td><td>AS1</td><td>AS2</td><td>AS3</td><td>AS4</td><td>AS5</td><td>AS6</td><td>Original</td><td>AS1</td><td>AS2</td><td>AS3</td><td>AS4</td><td>AS5</td><td>AS6</td></tr><tr><td>FND</td><td>F1</td><td>0.973</td><td>0.933↓</td><td>0.910↓</td><td>0.907↓</td><td>0.950↓</td><td>0.963↓</td><td>0.963↓</td><td>0.968</td><td>0.951↓</td><td>0.938↓</td><td>0.935↓</td><td>0.957↑</td><td>0.967↑</td><td>0.967↑</td></tr><tr><td>NBD</td><td>F1</td><td>0.653</td><td>0.566↓</td><td>0.551↓</td><td>0.546↓</td><td>0.601↓</td><td>0.602↓</td><td>0.609↓</td><td>0.678</td><td>0.683</td><td>0.684</td><td>0.659↓</td><td>0.687↓</td><td>0.683↑</td><td>0.683↑</td></tr><tr><td>FED</td><td>F1</td><td>0.994</td><td>0.995</td><td>0.995</td><td>0.995</td><td>0.996</td><td>0.996</td><td>0.996</td><td>0.996</td><td>0.995</td><td>0.995</td><td>0.995</td><td>0.996</td><td>0.996</td><td>0.996</td></tr><tr><td>MRPC</td><td>F1</td><td>0.791</td><td>0.809↑</td><td>0.811↑</td><td>0.811↑</td><td>0.819↑</td><td>0.816↑</td><td>0.814↑</td><td>0.811</td><td>0.848↑</td><td>0.848↑</td><td>0.849↑</td><td>0.852↑</td><td>0.804↓</td><td>0.834↑</td></tr><tr><td>RTE</td><td>Acc</td><td>0.691</td><td>0.665↓</td><td>0.663↑</td><td>0.669↑</td><td>0.670↑</td><td>0.645↑</td><td>0.660↓</td><td>0.700</td><td>0.703↑</td><td>0.701↑</td><td>0.693↑</td><td>0.699↑</td><td>0.688↓</td><td>0.704↑</td></tr><tr><td>WNLI</td><td>F1</td><td>0.520</td><td>0.504↓</td><td>0.504↓</td><td>0.504↓</td><td>0.504↓</td><td>0.504↓</td><td>0.504↓</td><td>0.561</td><td>0.435↓</td><td>0.442↓</td><td>0.467↓</td><td>0.506↓</td><td>0.458↓</td><td>0.428↓</td></tr><tr><td>CoLA</td><td>MCC</td><td>0.555</td><td>0.376↓</td><td>0.515↓</td><td>0.528↑</td><td>0.335↓</td><td>0.549↑</td><td>0.550↑</td><td>0.519</td><td>0.427↓</td><td>0.537↓</td><td>0.511↓</td><td>0.313↓</td><td>0.518↓</td><td>0.523↓</td></tr><tr><td>MNLI</td><td>Acc</td><td>0.754</td><td>0.753↑</td><td>0.724↓</td><td>0.753↑</td><td>0.753↑</td><td>0.744↑</td><td>0.744↓</td><td>0.789</td><td>0.783↑</td><td>0.545↓</td><td>0.760↑</td><td>0.772↑</td><td>0.669↓</td><td>0.765↓</td></tr></table>
Table 5: Results of testing the original models on anonymised data. We highlight in green (↑) the models that significantly outperform the matching model in Table 3 using a Wilcoxon test, in red ( $\downarrow$ ) the models that perform significantly worse, in black the models that do not perform significantly differently.
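To give a flavour of such a paired comparison, the sketch below implements an exact sign-flip permutation test over the per-run scores of two models. It is a simple standard-library stand-in for the Wilcoxon test used in the table, not the exact procedure we ran.

```python
from itertools import product

def paired_signflip_pvalue(scores_a, scores_b):
    """Exact two-sided sign-flip permutation test for paired scores.

    With n paired runs it enumerates all 2**n sign assignments of the
    per-run differences and counts how often the permuted sum of
    differences is at least as extreme as the observed one.
    """
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    observed = abs(sum(diffs))
    hits = 0
    for signs in product((1, -1), repeat=len(diffs)):
        if abs(sum(s * d for s, d in zip(signs, diffs))) >= observed:
            hits += 1
    return hits / 2 ** len(diffs)
```

Note that with only five runs per configuration the smallest attainable two-sided p-value is 2/32, so such tests are necessarily coarse at this sample size.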
We find that nearly half of the models trained on anonymised data outperform the counterpart models trained on original data. While there is not always a clear trend, we observe that the original models almost consistently perform better on the MRPC and RTE tasks, and worse on the WNLI and CoLA tasks, regardless of the architecture used. Furthermore, for BERT, the models trained on anonymised data consistently perform worse on the FND and NBD tasks. For ERNIE, the models trained on original data consistently perform slightly better on the FED task. Despite these observations, we notice that the performance losses are oftentimes very high, specifically for the NBD, WNLI, and CoLA tasks, while performance gains tend to be lower.
## 4 Related Work
Relevant studies done on textual data largely focus on medical texts and consider a very limited number of tasks and anonymisation strategies when compared to our work. On the other hand, they typically anonymise a wide variety of protected health information (PHI) classes, while our work focuses on the anonymisation of persons' names only. Berg et al. (2020) studied the impact of four anonymisation strategies (pseudonymisation, replacement by PHI class, masking, and removal) on downstream NER tasks in the clinical domain. Similarly to our findings, they find that pseudonymisation yields the best results among the investigated strategies. On the other hand, removal of names resulted in the highest negative impact on the downstream tasks. Deleger et al. (2013) investigated the impact of anonymisation on an information extraction task using a dataset of 3503 clinical notes. They anonymised 12 types of PHI, such as patients' names and ages, and used two anonymisation strategies (replacement by fake PHI, and masking). They found no significant loss in performance for this task. Similarly, Meystre et al. (2014) found that the informativeness of medical notes only marginally decreased after anonymisation, using 18 types of PHI and three anonymisation strategies (replacement by fake PHI, replacement by PHI class, and replacement by a PHI token). Using the same anonymisation strategies and ten types of PHI, Obeid et al. (2019) investigated the impact of anonymisation on a mental status classification task. Comparing nine different machine learning models, they did not find any significant difference in performance between original and anonymised data.
## 5 Conclusion
In this paper, we conducted an empirical study analysing the impact of anonymisation on downstream NLP tasks. We investigated the difference in performance of six anonymisation strategies on nine NLP tasks, ranging from simple classification tasks to hard NLU tasks. Further, we compared two architectures, BERT and ERNIE. Overall, we found that anonymising data before training an NLP model does have a negative impact on its performance; however, this impact is relatively low. We determined that pseudonymisation techniques involving random names lead to higher performance across most tasks. Specifically, replacing names with random names (AS5) had the least negative impact when using an ERNIE model. Similarly, replacing with random names while preserving the link between identical names (AS6) worked best for BERT models. We also showed that it is advisable to anonymise data prior to training, as we observed a large difference in performance between models trained on original versus anonymised data. There is also a noticeable difference between the performances of BERT and ERNIE, warranting further investigation into the performance differences between a larger number of language models.
## References
Accountability Act. 1996. Health Insurance Portability and Accountability Act of 1996. Public Law, 104:191.

France Bélanger and Robert E Crossler. 2011. Privacy in the digital age: a review of information privacy research in information systems. MIS Quarterly, pages 1017-1041.

Hanna Berg, Aron Henriksson, and Hercules Dalianis. 2020. The impact of de-identification on downstream named entity recognition in clinical text. In 11th International Workshop on Health Text Mining and Information Analysis, pages 1-11. Association for Computational Linguistics.

Avinash Bharadwaj, Brinda Ashar, Parshva Barbhaya, Ruchir Bhatia, and Zaheed Shaikh. 2020. Source based fake news classification using machine learning.

Jolina C Cuaresma. 2002. The Gramm-Leach-Bliley Act. Berkeley Tech. LJ, 17:497.

Louise Deleger, Katalin Molnar, Guergana Savova, Fei Xia, Todd Lingren, Qi Li, Keith Marsolo, Anil Jegga, Megan Kaiser, Laura Stoutenborough, et al. 2013. Large-scale evaluation of automated clinical note de-identification and its impact on information extraction. Journal of the American Medical Informatics Association, 20(1):84-94.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional Transformers for language understanding. arXiv preprint arXiv:1810.04805.

William B Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005).

Roy Bar Haim, Ido Dagan, Bill Dolan, Lisa Ferro, Danilo Giampiccolo, Bernardo Magnini, and Idan Szpektor. 2006. The second PASCAL Recognising Textual Entailment challenge. In Proceedings of the Second PASCAL Challenges Workshop on Recognising Textual Entailment, volume 7.

Hector Levesque, Ernest Davis, and Leora Morgenstern. 2012. The Winograd schema challenge. In Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning.

Stéphane M Meystre, Oscar Ferrández, F Jeffrey Friedlin, Brett R South, Shuying Shen, and Matthew H Samore. 2014. Text de-identification for privacy protection: a study of its impact on clinical text information content. Journal of Biomedical Informatics, 50:142-150.

Jihad S Obeid, Paul M Heider, Erin R Weeda, Andrew J Matuskowitz, Christine M Carr, Kevin Gagnon, Tami Crawford, and Stephane M Meystre. 2019. Impact of de-identification on clinical text classification using traditional and deep learning classifiers. Studies in Health Technology and Informatics, 264:283.

Xuan Ouyang, Shuohuan Wang, Chao Pang, Yu Sun, Hao Tian, Hua Wu, and Haifeng Wang. 2021. ERNIE-M: Enhanced multilingual representation by aligning cross-lingual semantics with monolingual corpora. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 27-38, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Dragomir Radev. 2008. CLAIR collection of fraud email (repository) - ACL Wiki.

Protection Regulation. 2016. Regulation (EU) 2016/679 of the European Parliament and of the Council. Regulation (EU), 679:2016.

Congzheng Song, Thomas Ristenpart, and Vitaly Shmatikov. 2017. Machine learning models that remember too much. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pages 587-601.

Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Hao Tian, Hua Wu, and Haifeng Wang. 2020. ERNIE 2.0: A continual pre-training framework for language understanding. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8968-8975.

Alan D Taylor and Allison M Pacelli. 2008. Mathematics and Politics: Strategy, Voting, Power, and Proof. Springer Science & Business Media.

Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353-355.

Alex Warstadt, Amanpreet Singh, and Samuel R Bowman. 2018. Neural network acceptability judgments. arXiv preprint arXiv:1805.12471.

Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122. Association for Computational Linguistics.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. 2019. HuggingFace's Transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771.
## 6 Appendices
### 6.1 Appendix A: Statistics of Downstream Tasks
<table><tr><td>dataset</td><td>FND</td><td>NBD</td><td>FED</td><td>MRPC</td><td>RTE</td><td>WNLI</td><td>CoLA</td><td>MNLI</td><td>EDC</td></tr><tr><td>train set</td><td>4382</td><td>1374</td><td>8980</td><td>3668</td><td>2489</td><td>635</td><td>6039</td><td>39999</td><td>6354</td></tr><tr><td>dev set</td><td>690</td><td>196</td><td>997</td><td>407</td><td>276</td><td>71</td><td>851</td><td>5000</td><td>926</td></tr><tr><td>test set</td><td>1237</td><td>395</td><td>1926</td><td>1725</td><td>800</td><td>146</td><td>1661</td><td>5396</td><td>1798</td></tr><tr><td>#names</td><td>68 890</td><td>15 610</td><td>30404</td><td>3324</td><td>3685</td><td>898</td><td>2600</td><td>85 999</td><td>6550</td></tr><tr><td>#unique</td><td>7500</td><td>3247</td><td>6104</td><td>1729</td><td>2042</td><td>102</td><td>335</td><td>10460</td><td>2807</td></tr><tr><td>%anonymised</td><td>90.9</td><td>83.9</td><td>55.7</td><td>43.1</td><td>51</td><td>61.9</td><td>41</td><td>93.8</td><td>42.6</td></tr><tr><td>type</td><td>binary</td><td>multi</td><td>binary</td><td>binary</td><td>binary</td><td>binary</td><td>binary</td><td>multi</td><td>multi</td></tr></table>
Table 6: Statistics for the datasets. Size of datasets, number of names found in the training set (#names), number of unique names found in the training set (#unique), percentage of samples that contain at least one name, i.e. the percentage of samples to be anonymised (%anonymised), and the type of the classification task (binary/multiclass).
### 6.2 Appendix B: Fine-Tuning Hyperparameters
<table><tr><td/><td colspan="3">BERT</td><td colspan="3">ERNIE</td></tr><tr><td>Task</td><td>batch size</td><td>learning rate</td><td>#epochs</td><td>batch size</td><td>learning rate</td><td>#epochs</td></tr><tr><td>FND</td><td>16</td><td>5e-5</td><td>1</td><td>8</td><td>2e-5</td><td>1</td></tr><tr><td>NBD</td><td>16</td><td>5e-5</td><td>3</td><td>8</td><td>2e-5</td><td>5</td></tr><tr><td>FED</td><td>32</td><td>3e-5</td><td>3</td><td>32</td><td>5e-5</td><td>1</td></tr><tr><td>MRPC</td><td>16</td><td>5e-5</td><td>3</td><td>32</td><td>3e-5</td><td>4</td></tr><tr><td>RTE</td><td>16</td><td>5e-5</td><td>4</td><td>4</td><td>2e-5</td><td>4</td></tr><tr><td>WNLI</td><td>16</td><td>3e-5</td><td>4</td><td>8</td><td>2e-5</td><td>4</td></tr><tr><td>CoLA</td><td>16</td><td>5e-5</td><td>3</td><td>64</td><td>3e-5</td><td>3</td></tr><tr><td>MNLI</td><td>16</td><td>5e-5</td><td>2</td><td>512</td><td>3e-5</td><td>3</td></tr><tr><td>EDC</td><td>16</td><td>5e-5</td><td>5</td><td>8</td><td>3e-5</td><td>3</td></tr></table>
Table 7: Hyperparameters for fine-tuning pre-trained models for downstream tasks
NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/0yzM0ibZgg/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,482 @@
§ EVALUATING THE IMPACT OF ANONYMISATION ON DOWNSTREAM NLP TASKS
Anonymous Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymouser Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymousest Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain
§ ABSTRACT
Data anonymisation is often required to comply with regulations when transferring information across departments or entities. However, the risk is that this procedure can distort the data and jeopardise the models built on it. Intuitively, the process of training an NLP model on anonymised data may lower the performance of the resulting model when compared to a model trained on non-anonymised data. In this paper, we investigate the impact of anonymisation on the performance of nine downstream NLP tasks. We focus on the anonymisation and pseudonymisation of personal names and compare six different anonymisation strategies for two state-of-the-art pre-trained models. Based on these experiments, we formulate recommendations on how the anonymisation should be performed to guarantee accurate NLP models. Our results reveal that anonymisation does have a negative impact on the performance of NLP models, but this impact is relatively low. We also find that using pseudonymisation techniques involving random names leads to better performance across most tasks.
§ 1 INTRODUCTION
Protection of personal data has been a hot topic for decades (Bélanger and Crossler, 2011). Careless sharing of data between companies, cyber-attacks, and other data breaches can lead to catastrophic leaks of confidential data, potentially resulting in the invasion of people's privacy and identity theft. To mitigate damages and hold bad actors accountable, many countries introduced various laws that aim to protect confidential data, such as the Health Insurance Portability and Accountability Act (HIPAA) for healthcare confidentiality (Act, 1996), and the Gramm-Leach-Bliley Act (GLBA) in the financial domain (Cuaresma, 2002). Most notably, with the introduction of the General Data Protection Regulation (GDPR), the protection of personally identifiable information was codified into EU law in 2018 (Regulation, 2016).

In order to mitigate data leaks, organisations such as financial institutes and hospitals are required to anonymise or pseudonymise sensitive data before processing them further. Similarly, automated NLP models should ideally be trained using anonymised data, as resulting models could potentially violate a number of GDPR guidelines such as the individuals' right to be forgotten, and the right to explanation. Furthermore, models can be manipulated to partially recreate the training data (Song et al., 2017), which can result in disastrous data breaches. On the other hand, however, anonymisation of texts can lead to loss of information and meaning, making NLP models trained on anonymised data less reliable as a result (Meystre et al., 2014). Intuitively, this in turn could lead to a decrease in performance of such models when compared to models trained on non-anonymised text. As such, it is crucial to choose an appropriate anonymisation strategy to lower this loss of information and avoid performance drops of models.

In this study, we investigate the impact of anonymisation on the performance of downstream NLP tasks, focusing on the anonymisation and pseudonymisation of personal names only. This allows us to select from a wide array of NLP tasks, as most datasets contain a large number of personal names, whereas other types of names are less commonly found. Specifically, we compare six different anonymisation strategies and two Transformer-based pre-trained model architectures in our experiments: the popular BERT (Devlin et al., 2018) architecture and the state-of-the-art ERNIE (Sun et al., 2020) architecture. Further, we look into nine different NLP tasks of varying degrees of difficulty. We address the following research questions:
* RQ1: Which anonymisation strategy is the most appropriate for downstream NLP tasks?
* RQ2: Should a model be trained on original or anonymised data?
§ 2 EXPERIMENTAL SETUP
In this section, we present the datasets used in this study and introduce the different anonymisation strategies that we compare against each other. We also describe the pre-trained models we use.
§ 2.1 DATASETS

For this study, we selected several downstream tasks that greatly vary in complexity, ranging from simple text classification to complicated Natural Language Understanding (NLU) tasks featured in the GLUE benchmark collection (Wang et al., 2018). We ensured that each set contains a considerable number of personal names. Most of these datasets are publicly available, except for a proprietary email classification dataset provided by our partners. We release the original as well as the anonymised datasets for most tasks.${}^{1}$

We choose three public classification tasks: Fake News Detection (FND)${}^{2}$, News Bias Detection (NBD) (Bharadwaj et al., 2020), and Fraudulent Email Detection (FED) (Radev, 2008).

Five of our investigated tasks are featured in the GLUE collection, namely MRPC (Dolan and Brockett, 2005), RTE (Haim et al., 2006), WNLI (Levesque et al., 2012), CoLA (Warstadt et al., 2018), and MNLI (Williams et al., 2018).

Our final task is the Email Domain Classification Dataset (EDC), which we describe in greater detail. It is provided by our partners in the banking domain. As such, it is a proprietary dataset consisting of sensitive emails from clients, and thus cannot be publicly released. However, it serves as an authentic use-case for our study. The task consists of classifying emails along 19 broad domains related to banking activities such as credit cards, wire transfers, account management, etc., which will then be forwarded to the appropriate department. We selected a subset of the provided dataset, such that each domain is represented equally. More
| Name | Description of AS |
|------|-------------------|
| AS1 | Singular generic token |
| AS2 | Unique generic token for each name in document |
| AS3 | Unique generic token for each distinct name in document |
| AS4 | Removal of names |
| AS5 | Random name for each name in document |
| AS6 | Random name for each distinct name in document |

Table 1: Description of anonymisation strategies.
specifically, for each domain in the set, we randomly selected $\simeq 500$ emails, for a total of nearly 9000 emails. Furthermore, the dataset is multilingual, but we perform our experiments on the emails written in French due to the high sample number.
§ 2.2 ANONYMISATION STRATEGIES
We consider six anonymisation strategies (AS1-6) for this study. These strategies are commonly found in the literature (Berg et al., 2020; Deleger et al., 2013). They largely fall into three categories: replacement by a generic token, removal of names, and replacement by a random name. We describe each AS in Table 1. Table 2 shows the differences between each strategy on a simple example.
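As an illustration, the six strategies can be sketched at the token level in a few lines of Python. The `anonymise` helper below is our own illustrative reading, not the paper's implementation: an upstream NER step is assumed to have already flagged the name tokens, and the replacement-name pool is a deterministic stand-in for truly random draws.

```python
from itertools import count

def anonymise(tokens, names, strategy):
    """Apply one anonymisation strategy (AS1-AS6) to a token list.
    `names` is the set of tokens flagged as personal names by an
    assumed upstream NER step; `strategy` is one of "AS1".."AS6"."""
    pool = iter(["Bert", "Ernie", "Elmo", "Kermit", "Jessie", "James"])
    counter = count(1)  # numbering for AS2/AS3 generic tokens
    seen = {}           # distinct name -> fixed replacement (AS3/AS6)
    out = []
    for tok in tokens:
        if tok not in names:
            out.append(tok)
            continue
        if strategy == "AS1":            # single generic token
            out.append("ENTNAME")
        elif strategy == "AS2":          # fresh generic token per occurrence
            out.append(f"ENTNAME{next(counter)}")
        elif strategy == "AS3":          # one generic token per distinct name
            if tok not in seen:
                seen[tok] = f"ENTNAME{next(counter)}"
            out.append(seen[tok])
        elif strategy == "AS4":          # removal of names
            pass
        elif strategy == "AS5":          # fresh random name per occurrence
            out.append(next(pool))
        elif strategy == "AS6":          # one random name per distinct name
            if tok not in seen:
                seen[tok] = next(pool)
            out.append(seen[tok])
    return out
```

Note that the scoping of "document" (e.g. per sentence versus per sample pair) is simplified here: the unit is whatever token list is passed in.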
§ 2.3 MODEL TRAINING
We compare the impact of anonymisation strategies using two Transformer-based models: BERT (Devlin et al., 2018) and ERNIE (Sun et al., 2020). For the tasks written in English, we use the uncased BERT Base model and the ERNIE Base model. For the EDC task, we use the multilingual mBERT model and the ERNIE-M model published by Ouyang et al. (2021). For our study, we use the Transformers library by Huggingface (Wolf et al., 2019) as our framework. Furthermore, we take a grid-search based approach to determine the most appropriate fine-tuning parameters for each downstream task (cf. Appendix B).
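The grid search itself can be sketched as follows; this is a minimal illustration, not the authors' code. `evaluate` is a hypothetical placeholder for one fine-tuning run scored on the dev set (the real runs go through the Transformers library), and the candidate values mirror the ranges reported in Appendix B.

```python
from itertools import product

# Candidate hyperparameter values (illustrative, based on Appendix B).
GRID = {
    "batch_size": [8, 16, 32],
    "learning_rate": [2e-5, 3e-5, 5e-5],
    "epochs": [1, 2, 3, 4, 5],
}

def grid_search(evaluate):
    """Exhaustively try every configuration in GRID and keep the one
    with the highest score returned by `evaluate` (e.g. dev-set F1)."""
    best_score, best_cfg = float("-inf"), None
    keys = list(GRID)
    for values in product(*(GRID[k] for k in keys)):
        cfg = dict(zip(keys, values))
        score = evaluate(cfg)  # one fine-tuning run, scored on dev data
        if score > best_score:
            best_score, best_cfg = score, cfg
    return best_cfg, best_score
```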
§ 3 EXPERIMENTAL RESULTS
In this section, we show the results of our experiments and address the research questions from Section 1. For each task and for each pre-trained model, we fine-tune a model on the original dataset and on each of our six anonymised datasets. We do five runs for each case, and average the results. We then compare the average performance for each AS to the performance of the models trained on original data. Table 3 shows the average performance of every model. For each of the GLUE tasks, we use the metric recommended by Wang et al. (2018)
${}^{1}$ https://anonymous.4open.science/r/anonymisation_paper-E147/
${}^{2}$ https://www.kaggle.com/shubh0799/fake-news
| Strategy | Sentence 1 | Sentence 2 |
|---|---|---|
| Original | "Hi, this is Paul, am I speaking to John?" | "Sorry, no, this is George. John is not here today." |
| AS1 | "Hi, this is ENTNAME, am I speaking to ENTNAME?" | "Sorry, no, this is ENTNAME. ENTNAME is not here today." |
| AS2 | "Hi, this is ENTNAME1, am I speaking to ENTNAME2?" | "Sorry, no, this is ENTNAME1. ENTNAME2 is not here today." |
| AS3 | "Hi, this is ENTNAME1, am I speaking to ENTNAME2?" | "Sorry, no, this is ENTNAME3. ENTNAME2 is not here today." |
| AS4 | "Hi, this is, am I speaking to" | "Sorry, no, this is, is not here today." |
| AS5 | "Hi, this is Bert, am I speaking to Ernie?" | "Sorry, no, this is Elmo. Kermit is not here today." |
| AS6 | "Hi, this is Jessie, am I speaking to James?" | "Sorry, no, this is Meowth. James is not here today." |

Table 2: Example for each anonymisation strategy.
| Task | Metric | BERT Original | AS1 | AS2 | AS3 | AS4 | AS5 | AS6 | ERNIE Original | AS1 | AS2 | AS3 | AS4 | AS5 | AS6 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| FND | F1 | 0.973 | 0.976↑ | 0.974↑ | 0.969↓ | 0.965↓ | 0.968↓ | 0.971↓ | 0.968 | 0.962↓ | 0.960↓ | 0.960↓ | 0.956↓ | 0.956↓ | 0.963↓ |
| NBD | F1 | 0.653 | 0.658↑ | 0.647↓ | 0.654↑ | 0.681↑ | 0.674↑ | 0.683↑ | 0.678 | 0.681↑ | 0.684↑ | 0.695↑ | 0.709↑ | 0.653↓ | 0.669↓ |
| FED | F1 | 0.994 | 0.995↑ | 0.996↑ | 0.996↑ | 0.996↑ | 0.994 | 0.995↑ | 0.996 | 0.994↓ | 0.993↓ | 0.994↓ | 0.993↓ | 0.995↓ | 0.993↓ |
| MRPC | F1 | 0.791 | 0.786↓ | 0.769↓ | 0.768↓ | 0.797↑ | 0.792↑ | 0.783↓ | 0.811 | 0.824↑ | 0.817↑ | 0.799↓ | 0.832↑ | 0.826↑ | 0.82↑ |
| RTE | Acc | 0.691 | 0.67↓ | 0.654↓ | 0.639↓ | 0.624↓ | 0.644↓ | 0.666↓ | 0.703 | 0.696↓ | 0.665↓ | 0.671↓ | 0.683↓ | 0.716↑ | 0.676↓ |
| WNLI | F1 | 0.520 | 0.530↑ | 0.526↑ | 0.551↑ | 0.586↑ | 0.541↑ | 0.535↑ | 0.561 | 0.472↓ | 0.557↓ | 0.564↑ | 0.595↑ | 0.614↑ | 0.550↓ |
| CoLA | MCC | 0.555 | 0.520↓ | 0.522↓ | 0.524↓ | 0.443↓ | 0.495↓ | 0.532↓ | 0.519 | 0.517↓ | 0.543↑ | 0.556↑ | 0.385↓ | 0.540↑ | 0.542↑ |
| MNLI | Acc | 0.754 | 0.742↓ | 0.730↓ | 0.734↓ | 0.745↓ | 0.742↓ | 0.747↓ | 0.789 | 0.774↓ | 0.750↓ | 0.759↓ | 0.770↓ | 0.776↓ | 0.773↓ |
| EDC | F1 | 0.626 | 0.624↓ | 0.683↑ | 0.617↓ | 0.619↓ | 0.616↓ | 0.595↓ | 0.642 | 0.635↓ | 0.696↑ | 0.642 | 0.635↓ | 0.627↓ | 0.621↓ |

Table 3: Results of our fine-tuned models. We highlight in green (↑) the models that outperform the models trained on original data, in red (↓) the models that do not.
| Task | BERT AS1 | AS2 | AS3 | AS4 | AS5 | AS6 | ERNIE AS1 | AS2 | AS3 | AS4 | AS5 | AS6 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| FND | 5 | 4 | 2 | 0 | 1 | 3 | 4 | 3 | 3 | 1 | 1 | 5 |
| NBD | 2 | 0 | 1 | 4 | 3 | 5 | 2 | 3 | 4 | 5 | $\underline{\mathbf{0}}$ | 1 |
| FED | 2 | 5 | 5 | 5 | 0 | 2 | 4 | 2 | 4 | 2 | 5 | 2 |
| MRPC | 3 | 1 | 0 | 5 | 4 | 2 | 3 | 1 | 0 | 5 | 4 | 2 |
| RTE | 5 | 3 | 1 | 0 | 2 | 4 | 4 | 0 | 1 | 3 | 5 | 2 |
| WNLI | 1 | 0 | 4 | 5 | 3 | 2 | 0 | 2 | 3 | 4 | 5 | 1 |
| CoLA | 2 | 3 | 4 | 0 | 1 | 5 | 1 | 4 | 5 | 0 | 2 | 3 |
| MNLI | 3 | 0 | 1 | 4 | 3 | 5 | 4 | 0 | 1 | 2 | 5 | 3 |
| EDC | 4 | 5 | 2 | 3 | 1 | 0 | 3 | 5 | 4 | 3 | 1 | 0 |
| Total | 27 | 21 | 20 | 26 | 18 | 28 | 25 | 20 | 25 | 25 | 28 | 21 |
| Avg. | 3 | 2.33 | 2.22 | 2.89 | 2 | 3.11 | 2.78 | 2.22 | 2.78 | 2.78 | 3.11 | 2.33 |

Table 4: Ranking scores for fine-tuned models. Bold text shows the winner according to Borda Count, underlined text according to Instant Runoff.
and F1 score for the classification tasks.
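The comparison protocol (several runs per setting, averaged, then flagged against the original-data baseline with ↑/↓ as in Table 3) reduces to a few lines; the sketch below is our own illustration, and the run scores in the test are made up.

```python
def compare_to_baseline(runs_by_setting):
    """runs_by_setting maps each setting ("Original", "AS1", ...) to a
    list of per-run scores. Returns {AS: (mean, flag)} where the flag
    is "↑"/"↓"/"=" relative to the mean of the original-data runs."""
    mean = lambda xs: sum(xs) / len(xs)
    base = mean(runs_by_setting["Original"])
    out = {}
    for setting, runs in runs_by_setting.items():
        if setting == "Original":
            continue
        m = mean(runs)
        out[setting] = (m, "↑" if m > base else "↓" if m < base else "=")
    return out
```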
§ 3.1 WHICH ANONYMISATION STRATEGY IS THE MOST APPROPRIATE FOR DOWNSTREAM NLP TASKS?
In order to determine the most appropriate strategy, we consider two ranking-based approaches: Borda Count and Instant Runoff (Taylor and Pacelli, 2008). For both approaches, we determine the score $s_{a,t}$ for each anonymisation strategy (AS, indexed by $a$) and for each task (indexed by $t$) in the following way: the best approach gets a score of five, the second best gets a score of four, etc. The final Borda Count score for a given anonymisation strategy $A$ is defined as $\sum_{t=1}^{T} s_{A,t}$ (where $T$ is the total number of tasks, here nine). The model with the highest score is considered the best.
Instant Runoff is an iterative procedure. For each iteration, we count the number of wins for each AS, where an AS is considered a winner in a given task if its corresponding fine-tuned model outperforms every other model. We then eliminate the AS with the lowest number of wins and update the scores accordingly. We repeat this process until one AS remains, or until we cannot eliminate further ASs.
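Both ranking procedures are easy to express in code. The sketch below is our own illustrative implementation: `borda` sums the per-task scores (0-5, higher is better), and `instant_runoff` repeatedly eliminates the strategy with the fewest per-task wins; the data in the test is made up, not the paper's.

```python
def borda(scores):
    """scores: {AS -> list of per-task scores}. Returns the winner
    (highest total) and the totals."""
    totals = {a: sum(s) for a, s in scores.items()}
    return max(totals, key=totals.get), totals

def instant_runoff(task_rankings):
    """task_rankings: for each task, the ASs ordered best-first.
    Each round, count first-place wins among surviving ASs and
    eliminate the worst; stop at one survivor or a full tie."""
    alive = {a for ranking in task_rankings for a in ranking}
    while len(alive) > 1:
        wins = {a: 0 for a in alive}
        for ranking in task_rankings:
            top = next(a for a in ranking if a in alive)
            wins[top] += 1
        worst = min(wins.values())
        losers = {a for a, w in wins.items() if w == worst}
        if losers == alive:  # cannot eliminate further: tie
            break
        alive -= losers
    return alive
```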
Table 4 shows the scores for each model and the winning anonymisation strategies according to the aforementioned approaches. For BERT models, we see that AS1, AS4, and AS6 are the best performing strategies according to Borda Count, AS6 being a close winner. Instant Runoff leads to similar results, with AS4 and AS6 reaching the final iteration and AS6 being the overall winner. Furthermore, we note a lower variance in the scores for AS6 when compared to AS4. In contrast, when evaluating ERNIE models, we note that AS5 models perform significantly better than every other strategy according to Borda Count. Similarly, AS5 also wins the Instant Runoff, with AS4 and AS5 making it to the final round. Overall, it appears that using random names over generic tokens to anonymise textual data is the preferable solution, as AS1, AS2, and AS3 models, which were all trained on data with generic tokens, usually rank low.
§ 3.2 SHOULD A MODEL BE TRAINED ON ORIGINAL OR ANONYMISED DATA?
In order to answer this question, we investigate the performance of models trained on original data on the anonymised test sets and compare them to the models trained directly on anonymised data. Table 5 shows the results of testing models trained on non-anonymised training sets on anonymised
| Task | Metric | BERT Original | AS1 | AS2 | AS3 | AS4 | AS5 | AS6 | ERNIE Original | AS1 | AS2 | AS3 | AS4 | AS5 | AS6 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| FND | F1 | 0.973 | 0.933↓ | 0.910↓ | 0.907↓ | 0.950↓ | 0.963↓ | 0.963↓ | 0.968 | 0.951↓ | 0.938↓ | 0.935↓ | 0.957↑ | 0.967↑ | 0.967↑ |
| NBD | F1 | 0.653 | 0.566↓ | 0.551↓ | 0.546↓ | 0.601↓ | 0.602↓ | 0.609↓ | 0.678 | 0.683 | 0.684 | 0.659↓ | 0.687↓ | 0.683↑ | 0.683↑ |
| FED | F1 | 0.994 | 0.995 | 0.995 | 0.995 | 0.996 | 0.996 | 0.996 | 0.996 | 0.995 | 0.995 | 0.995 | 0.996 | 0.996 | 0.996 |
| MRPC | F1 | 0.791 | 0.809↑ | 0.811↑ | 0.811↑ | 0.819↑ | 0.816↑ | 0.814↑ | 0.811 | 0.848↑ | 0.848↑ | 0.849↑ | 0.852↑ | 0.804↓ | 0.834↑ |
| RTE | Acc | 0.691 | 0.665↓ | 0.663↑ | 0.669↑ | 0.670↑ | 0.645↑ | 0.660↓ | 0.700 | 0.703↑ | 0.701↑ | 0.693↑ | 0.699↑ | 0.688↓ | 0.704↑ |
| WNLI | F1 | 0.520 | 0.504↓ | 0.504↓ | 0.504↓ | 0.504↓ | 0.504↓ | 0.504↓ | 0.561 | 0.435↓ | 0.442↓ | 0.467↓ | 0.506↓ | 0.458↓ | 0.428↓ |
| CoLA | MCC | 0.555 | 0.376↓ | 0.515↓ | 0.528↑ | 0.335↓ | 0.549↑ | 0.550↑ | 0.519 | 0.427↓ | 0.537↓ | 0.511↓ | 0.313↓ | 0.518↓ | 0.523↓ |
| MNLI | Acc | 0.754 | 0.753↑ | 0.724↓ | 0.753↑ | 0.753↑ | 0.744↑ | 0.744↓ | 0.789 | 0.783↑ | 0.545↓ | 0.760↑ | 0.772↑ | 0.669↓ | 0.765↓ |

Table 5: Results of testing the original models on anonymised data. We highlight in green (↑) the models that significantly outperform the matching model in Table 3 using a Wilcoxon test, in red (↓) the models that perform significantly worse, in black the models that do not perform significantly differently.
test sets. We find that nearly half of the models trained on anonymised data outperform the counterpart model trained on original data. While there is not always a clear trend, we observe that the original models almost consistently perform better in the MRPC and RTE tasks, and perform worse in the WNLI and CoLA tasks, regardless of the architecture used. Furthermore, for BERT models, the models trained on anonymised data consistently perform worse on the FND and NBD tasks. For the ERNIE models, the models trained on original data consistently perform slightly better on the FED task. Despite these observations, we notice that the performance losses are oftentimes very high, specifically for the NBD, WNLI, and CoLA tasks, while performance gains tend to be lower.
§ 4 RELATED WORK
Relevant studies done on textual data largely focus on medical texts and on a very limited number of tasks and anonymisation strategies when compared to our work. On the other hand, they typically anonymise a wide variety of protected health information (PHI) classes, while our work focuses on anonymisation of persons' names only. Berg et al. (2020) studied the impact of four anonymisation strategies (pseudonymisation, replacement by PHI class, masking, and removal) on downstream NER tasks for the clinical domain. Similarly to our findings, they find that pseudonymisation yields the best results among the investigated strategies. On the other hand, removal of names resulted in the highest negative impact on the downstream tasks. Deleger et al. (2013) investigated the impact of anonymisation on an information extraction task using a dataset of 3503 clinical notes. They anonymised 12 types of PHI, such as patients' name, age, etc., and used two anonymisation strategies (replacement by fake PHI, and masking). They found no significant loss in performance for this task. Similarly, Meystre et al. (2014) found that the informativeness of medical notes only marginally decreased after anonymisation, using 18 types of PHI and 3 anonymisation strategies (replacement by fake PHI, replacement by PHI class, and replacement by PHI token). Using the same anonymisation strategies and ten types of PHI, Obeid et al. (2019) investigated the impact of anonymisation on a mental status classification task. Comparing nine different machine learning models, they did not find any significant difference in performance between original and anonymised data.
§ 5 CONCLUSION
In this paper, we conducted an empirical study analysing the impact of anonymisation on downstream NLP tasks. We investigated the difference in performance of six anonymisation strategies on nine NLP tasks ranging from simple classification tasks to hard NLU tasks. Further, we compared two architectures, BERT and ERNIE. Overall, we found that anonymising data before training an NLP model does have a negative impact on its performance. However, this impact is relatively low. We determined that pseudonymisation techniques involving random names lead to higher performances across most tasks. Specifically, replacing names by random names (AS5) had the least negative impact when using an ERNIE model. Similarly, replacing by random names while preserving the link between identical names (AS6) worked best for BERT models. We also showed that it is advisable to anonymise data prior to training, as we observed a large difference in performance between models trained on original data versus anonymised data. There is also a noticeable difference between the performances of BERT and ERNIE, warranting further investigation into the performance differences between a larger number of language models.
|
NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/1Hwy5yfNadS/Initial_manuscript_md/Initial_manuscript.md
ADDED
@@ -0,0 +1,321 @@
| 1 |
+
# You say tomato, I say the same: A large-scale study of linguistic accommodation in online communities
|
| 2 |
+
|
| 3 |
+
Anonymous NODALIDA submission
|
| 4 |
+
|
| 5 |
+
## Abstract
|
| 6 |
+
|
| 7 |
+
An important assumption in sociolinguistics and cognitive psychology is that human beings adjust their language use to their interlocutors.
|
| 8 |
+
|
| 9 |
+
Put simply, the more often people talk (or write) to each other, the more similar their speech becomes. Such accommodation has often been observed in small-scale observational studies and experiments, but large-scale longitudinal studies that systematically test whether accommodation occurs are scarce. We use data from a very large Swedish online discussion forum to show that the linguistic production of users who write in the same subforum usually does become more similar over time. Moreover, the results suggest that this trend is stronger for pairs of users who actively interact than for pairs who do not. Our data thus support the accommodation hypothesis.
|
| 10 |
+
|
| 11 |
+
## 1 Introduction
|
| 12 |
+
|
| 13 |
+
Language is a tool not only for conveying information, but also for expressing attitudes, constructing identities and building relationships (Eckert, 2012). One manifestation of this fundamental property of language is that how we speak (or write) depends on whom we are speaking (or writing) to. How exactly the audience affects the linguistic production is a complex and multi-faceted process which can be approached from various perspectives. Consider, for instance, the audience design theory (Bell, 1984), social identity theory (Reid and Giles, 2008) and accommodation theory (Giles, 1973; Gallois et al., 1995).
|
| 14 |
+
|
| 15 |
+
In this paper, we perform a large-scale test of the hypothesis that people adjust their production style to their interlocutors. This phenomenon is known as accommodation (sometimes attunement or linguistic alignment) or convergence if the styles of the interlocutors are becoming more similar (divergence if they are becoming more different). While it has received considerable attention within sociolinguistics (Rickford et al., 1994; Cukor-Avila
|
| 16 |
+
|
| 17 |
+
and Bailey, 2001) and cognitive psychology (Gar- 042
|
| 18 |
+
|
| 19 |
+
rod et al., 2018), large-scale longitudinal studies 043
|
| 20 |
+
|
| 21 |
+
are wanting. An exception is a study by Nardy 044
|
| 22 |
+
|
| 23 |
+
et al. (2014), who have observed a group of French- 045
|
| 24 |
+
|
| 25 |
+
speaking children at a kindergarten for one year and 046 shown that children who interacted more frequently adopted similar usages of a number of sociolinguistic variables (such as, for instance, the dropping of the consonant $/\mathrm{R}/$ in post-consonantal word-final
|
| 26 |
+
|
| 27 |
+
positions).
|
| 28 |
+
|
| 29 |
+
Internet and social media in particular provide
|
| 30 |
+
|
| 31 |
+
us with a vast amount of data about how people communicate and how they use language for other purposes than information transmission (Nguyen and P. Rosé, 2011). While in some respects these data are not as informative as those collected by
|
| 32 |
+
|
| 33 |
+
direct observation or experimentation, in some other respects they may be equally or even more useful,
|
| 34 |
+
|
| 35 |
+
providing very detailed information about who interacted when with whom and how. Besides, it is often possible to collect large datasets that enable more systematic hypothesis testing.
|
| 36 |
+
|
| 37 |
+
We use data from a very large Swedish discussion
|
| 38 |
+
|
| 39 |
+
forum (Flashback) to test a widely held sociolinguistic assumption that "the more often people
|
| 40 |
+
|
| 41 |
+
talk to each other, the more similar their speech will be" (Labov, 2001, p. 288). In brief, we find pairs of Flashback users which during some period of time have actively interacted (see Section 2.2 for the definition of "active interaction"). We define a measure of linguistic distance between users and show that it is valid for our purposes (see Section 2.3). For every pair of users, we then calculate the linguistic distance between the two users' production before they have started interacting $\left( {\Delta }_{\text{before }}\right)$ and after it $\left( {\Delta }_{\text{after }}\right)$, and the difference between these distances $\left( {{\Delta }_{i} = {\Delta }_{\text{before }} - {\Delta }_{\text{after }}}\right)$. If the
|
| 42 |
+
|
| 43 |
+
convergence assumption is correct, we expect that the distance will tend to become smaller and the average ${\Delta }_{i}$ will be positive.
|
| 44 |
+
|
| 45 |
+
A positive ${\Delta }_{i}$ , however, can arise for different reasons, of which arguably the most prominent one is that distances between users become smaller not because users accommodate to specific interlocutors, but rather because they converge on a certain style adopted in the community (Danescu-Niculescu-Mizil et al., 2013). To test whether this is a better explanation, we perform a similar calculation for those pairs who have never had a single interaction, comparing texts written earlier $\left( {\Delta }_{\text{early }}\right)$ and later $\left( {\Delta }_{\text{later }}\right)$ during their activity on the forum $\left( {{\Delta }_{n} = {\Delta }_{\text{early }} - {\Delta }_{\text{later }}}\right)$ . If there is convergence to the community norm, the average ${\Delta }_{n}$ should be positive.
|
| 46 |
+
|
| 47 |
+
It is also possible that both pairwise accommodation and convergence to the community norm occur simultaneously. Moreover, they might even be parts of the same process: if speakers do converge on a certain norm, this convergence can emerge (at least partly) due to pairwise interactions. It is, however, also possible that only one of these processes occurs. Speakers can, for instance, converge on the community norm by adjusting to some perceived "average" style and not specific individual interlocutors. On the other hand, it can be imagined that speakers do adjust to the individual interlocutors, but that does not lead to the emergence of the community norm (for instance, because different interlocutors are "pulling" in different directions). The purpose of this study is to provide some insight into these not entirely understood processes.
|
| 48 |
+
|
| 49 |
+
We envisage four likely outcomes of our experiments, summarized in Table 1. Other outcomes are possible, but would be more difficult to explain. We would, for instance, be surprised if ${\Delta }_{n}$ turns out to be larger than ${\Delta }_{i}$ (since if there is convergence to community norm, it should be affecting actively interacting and non-interacting users in approximately the same way). Another unexpected result would be a negative value of either ${\Delta }_{n}$ or ${\Delta }_{i}$ , since that would imply systematic divergence (see discussion in Section 4).
|
| 50 |
+
|
| 51 |
+
## 2 Materials and methods
|
| 52 |
+
|
| 53 |
+
### 2.1 Corpora
|
| 54 |
+
|
| 55 |
+
We use Flashback, ${}^{1}$ a very large Swedish discussion forum covering a broad variety of topics which has existed for more than two decades. In 2021, the proportion of internet users in Sweden (excluding those younger than eight years) who visited the forum at least once during the last 12 months was estimated to be 24% (Internetstiftelsen, 2021).
|
| 56 |
+
|
| 57 |
+
The forum is divided into 16 subforums, of which we use five: Dator och IT 'Computer and IT', Droger 'Drugs', Hem, bostad och familj 'Home, house and family', Kultur & Media 'Culture and media', Sport och träning 'Sport and training'. These five were selected as being relatively large, of comparable size and representing diverse and not directly related topics.
|
| 58 |
+
|
| 59 |
+
To access the Flashback texts, we use the corpora created and maintained by Språkbanken Text, a Swedish national NLP infrastructure. The corpora are available for download ${}^{2}$ and for searching via the Korp interface (Borin et al., 2012) and its API. ${}^{3}$
|
| 60 |
+
|
| 61 |
+
The basic corpus statistics are summarized in Table 2. The earliest available posts date back to 2000, and the corpora were last updated in February 2022. The number of users is estimated as the number of unique non-empty usernames. We list separately the number of "prolific" users; we consider a user prolific if they have written 6000 tokens or more. All other users are discarded (many of the prolific users will not pass additional thresholds either, see Section 2.4).
|
| 62 |
+
|
| 63 |
+
Subforums may be further divided into subsubforums and subsubsubforums, which we do not take into account. What is important for our purposes is that messages (posts) are always organized in threads: there is an initial message which starts a thread (often a question) and then an unlimited number of messages which either respond to the original message or to later messages, or are in some other way related to the thread's topic. The structure of the thread is linear: that is, messages are posted in strictly chronological order.
|
| 64 |
+
|
| 65 |
+
### 2.2 Defining interaction
|
| 66 |
+
|
| 67 |
+
Two users are assumed to have had an interaction if they have written messages within the same thread, the two messages are separated by no more than two other messages, and no more than five days have passed between the posting of the two messages. This definition has been used by Hamilton et al. (2017) and Del Tredici and Fernández (2018), but without the temporal threshold. We consider the temporal threshold useful, since Flashback can have very long threads, sometimes spanning several years.
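This interaction criterion can be sketched in code (a minimal illustration under our own naming, not the authors' implementation; posts are assumed to be given as chronologically ordered (user, timestamp) tuples):

```python
from datetime import datetime, timedelta

def find_interactions(thread, max_gap=2, max_days=5):
    """Return the set of user pairs that interact within one thread.

    `thread` is a chronologically ordered list of (user, timestamp)
    tuples. Two users interact if their messages are separated by no
    more than `max_gap` other messages and no more than `max_days` days.
    """
    pairs = set()
    for i, (u1, t1) in enumerate(thread):
        # Look at most max_gap + 1 messages ahead (<= max_gap messages in between).
        for u2, t2 in thread[i + 1 : i + 2 + max_gap]:
            if u1 != u2 and t2 - t1 <= timedelta(days=max_days):
                pairs.add(frozenset((u1, u2)))
    return pairs
```

Applied over all threads of a subforum, such pair counts would feed the activity thresholds of Section 2.4.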
|
| 68 |
+
|
| 69 |
+
---
|
| 70 |
+
|
| 71 |
+
${}^{2}$ https://spraakbanken.gu.se/resurser?s=flashback&language=All
|
| 72 |
+
|
| 73 |
+
${}^{3}$ https://ws.spraakbanken.gu.se/docs/korp
|
| 74 |
+
|
| 75 |
+
${}^{1}$ https://www.flashback.org/
|
| 76 |
+
|
| 77 |
+
---
|
| 78 |
+
|
| 79 |
+
<table><tr><td>Outcome</td><td>Interpretation</td></tr><tr><td>1. ${\Delta }_{i} > {\Delta }_{n} > 0$</td><td>Both pairwise accommodation and overall convergence to community norm are detected</td></tr><tr><td>2. ${\Delta }_{i} = {\Delta }_{n} > 0$</td><td>No pairwise accommodation; overall convergence to community norm is detected</td></tr><tr><td>3. ${\Delta }_{i} > {\Delta }_{n} = 0$</td><td>Pairwise accommodation is detected; no convergence to community norm</td></tr><tr><td>4. ${\Delta }_{i} = {\Delta }_{n} = 0$</td><td>No pairwise accommodation; no convergence to community norm</td></tr></table>
|
| 80 |
+
|
| 81 |
+
Table 1: Four likely outcomes of the experiment. ${\Delta }_{i}$ is the change of linguistic distance between actively interacting users, ${\Delta }_{n}$ is the change of distance between non-interacting users.
|
| 82 |
+
|
| 83 |
+
<table><tr><td>Subforum</td><td>tokens</td><td>users</td><td>prolific users</td></tr><tr><td>Computer</td><td>316M</td><td>187K</td><td>9.3K</td></tr><tr><td>Drugs</td><td>257M</td><td>123K</td><td>8.0K</td></tr><tr><td>Culture</td><td>434M</td><td>211K</td><td>12.2K</td></tr><tr><td>Home</td><td>348M</td><td>168K</td><td>10.0K</td></tr><tr><td>Sport</td><td>251M</td><td>105K</td><td>5.4K</td></tr></table>
|
| 84 |
+
|
| 85 |
+
Table 2: Basic statistics about the Flashback subforums. Prolific users have written 6000 tokens or more
|
| 86 |
+
|
| 87 |
+
See the definition of "actively interacting users" in Section 2.4.
|
| 88 |
+
|
| 89 |
+
### 2.3 Measuring linguistic distance
|
| 90 |
+
|
| 91 |
+
Potential solutions. A traditional sociolinguistic approach would be to identify a number of linguistic variables (features for which variation is known to exist) and use them for comparison (Nardy et al., 2014). The main problem with this approach is that most variables are not very frequent, and it is thus difficult to collect enough observations. A traditional NLP approach would be to use a language model (Danescu-Niculescu-Mizil et al., 2013). Here, the main problem would be to ensure that the model has enough training data. We use a metric which is often applied in authorship attribution studies, Cosine Delta (Smith and Aldridge, 2011), a modification of Burrows' delta (Burrows, 2002). Its main advantage is that it can often be successfully applied to relatively small datasets, and it is also computationally efficient. It can also be considered a language model, though a very simple one.
|
| 92 |
+
|
| 93 |
+
Cosine Delta. To calculate Cosine Delta between two texts, the texts are represented as $t$-dimensional vectors where every element is a $z$-score (standard score) of the relative frequency of one of the $t$ most frequent words. The cosine of the angle between the two vectors gauges their proximity; subtracting it from 1 gives the distance (see Equation 1).
|
| 94 |
+
|
| 95 |
+
$$
|
| 96 |
+
{\Delta }_{\angle }\left( {T,{T}^{\prime }}\right) = 1 - \frac{\mathbf{z}\left( T\right) \cdot \mathbf{z}\left( {T}^{\prime }\right) }{{\left\| \mathbf{z}\left( T\right) \right\| }_{2}\,{\left\| \mathbf{z}\left( {T}^{\prime }\right) \right\| }_{2}} \tag{1}
|
| 97 |
+
$$
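As a sketch, Equation 1 might be implemented as follows (not the authors' code; we assume the corpus-wide means and standard deviations of the relative frequencies of the $t$ most frequent words have been precomputed, and all names are ours):

```python
import numpy as np

def cosine_delta(counts_a, counts_b, vocab, mean, std):
    """Cosine Delta (Equation 1): 1 minus the cosine similarity of
    z-scored relative word frequencies.

    `counts_a` / `counts_b` map words to raw counts in each text;
    `vocab` lists the t most frequent corpus words, and `mean` / `std`
    are corpus-level statistics of their relative frequencies
    (NumPy arrays aligned with `vocab`).
    """
    def z_vector(counts):
        total = sum(counts.values())
        rel = np.array([counts.get(w, 0) / total for w in vocab])
        return (rel - mean) / std  # standard scores

    za, zb = z_vector(counts_a), z_vector(counts_b)
    cos = za @ zb / (np.linalg.norm(za) * np.linalg.norm(zb))
    return 1.0 - cos
```

Identical texts give a distance of 0; fully opposed frequency profiles approach 2.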
|
| 98 |
+
|
| 99 |
+
|
| 100 |
+
|
| 101 |
+
Cosine Delta has been shown to outperform Burrows' Delta and other similar measures (Jannidis
|
| 102 |
+
|
| 103 |
+
et al., 2015; Evert et al., 2015).
|
| 104 |
+
|
| 105 |
+
Evaluating the metric. A typical usage of Cosine Delta is to compare text $\mathrm{X}$ of unknown or disputed authorship with texts by authors A and B in order to see whose style is more similar to the one used in $\mathrm{X}$ and whether the similarity is strong enough to attribute the text. This is not the same task that we have in mind. We want to compare texts written by authors $\mathrm{A}$ and $\mathrm{B}$ at time $\mathrm{P}$ and then at a later time $\mathrm{Q}$ in order to see whether the styles of the two authors have become more similar. In other words, we are not trying to infer who authored which text (we know that). Instead, we want to be able to measure the distance between two different authors.
|
| 106 |
+
|
| 107 |
+
To test whether Cosine Delta is suitable for that, we run the following experiment. The main requirement for an evaluation is a meaningful benchmark which can represent the ground truth. In order to evaluate a distance measure, we need a set of texts between which the true distances are known. We create such a set by mixing texts produced by two authors in different proportions. For two Flashback users (A0 and A1), an equal amount of tokens is extracted and used to create six texts: Base (contains solely the A0 production), Text 1 (80% of the production belongs to A0, 20% to A1; every token is randomly selected), Text 2 (60% A0, 40% A1), Text 3 (40% A0, 60% A1), Text 4 (20% A0, 80% A1) and Text 5 (100% A1); see Figure 1.
|
| 108 |
+
|
| 109 |
+
We accept as ground truth that the distance between the Base text and, say, Text 1 should be smaller than between Base and Text 5. We use Cosine Delta to compare Texts 1-5 with the Base text, rank them by their distance from Base and then measure the Spearman correlation coefficient between this ranking and the true one (1, 2, 3, 4, 5).
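The construction of one mixed benchmark text could look like this (an illustrative sketch with our own function name; we approximate the paper's random token selection with sampling without replacement):

```python
import random

def mix_tokens(tokens_a0, tokens_a1, share_a1, n, rng=random):
    """Build one benchmark text of `n` tokens in which a fraction
    `share_a1` of the tokens comes from author A1 and the rest from
    author A0, each token selected at random."""
    k = round(n * share_a1)
    text = rng.sample(tokens_a1, k) + rng.sample(tokens_a0, n - k)
    rng.shuffle(text)
    return text

# One evaluation set: Base (0% A1), then Texts 1-5 with 20%...100% A1.
shares = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
```

Ranking Texts 1-5 by their Cosine Delta distance from Base and correlating that ranking with (1, 2, 3, 4, 5) yields the Spearman $\rho$ reported in Table 3.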
|
| 110 |
+
|
| 111 |
+

|
| 112 |
+
|
| 113 |
+
Figure 1: The artificial benchmark for evaluating the linguistic distance measure: six texts with different proportions of the authors’ (A0 and A1) production.
|
| 114 |
+
|
| 115 |
+
We run the ranking test on 50 artificial sets, each consisting of six texts generated from two different authors' production, as described above. All data were extracted from the subforum Fordon och trafik 'Vehicles and traffic' (not used in the main experiment). The data were extracted consecutively without any randomization, i.e. the extraction script started from the beginning of the corpus, tried to extract a predefined number of tokens for every new user it encountered and stopped when it collected enough data for 100 unique users.
|
| 116 |
+
|
| 117 |
+
We try several combinations of two parameters: $t$, the dimension of the vectors (the number of most frequent words whose frequencies will be used), and $n$, the minimum size of the texts to be compared (larger texts are expected to yield more reliable estimates). The frequency list is compiled using the whole Flashback corpus (uncased). The results are reported in Table 3.
|
| 118 |
+
|
| 119 |
+
The performance of the ranking system is very high and increases as $n$ increases. Unfortunately, increasing $n$ decreases the sample size, since fewer user pairs will be able to pass the thresholds (see Section 2.4). We judge that the best balance between the reliability of Cosine Delta and sample size is reached with $n = 3000$ ($\rho \geq 0.95$). For $n = 6000$, the performance of Cosine Delta is better, but sample sizes (numbers of analyzable user pairs) are too small. We use $t = 300$, since larger values do not yield any gain for the chosen $n$ values.
|
| 120 |
+
|
| 121 |
+
We also calculate average distance between
|
| 122 |
+
|
| 123 |
+
<table><tr><td>$n$</td><td>$t$</td><td>$\rho$</td><td>$\Delta$</td></tr><tr><td>1500</td><td>150</td><td>0.936 (0.1)</td><td>0.16 (0.06)</td></tr><tr><td>1500</td><td>300</td><td>0.936 (0.1)</td><td>0.15 (0.06)</td></tr><tr><td>1500</td><td>450</td><td>0.940 (0.1)</td><td>0.15 (0.06)</td></tr><tr><td>1500</td><td>600</td><td>0.944 (0.1)</td><td>0.15 (0.06)</td></tr><tr><td>3000</td><td>150</td><td>0.950 (0.1)</td><td>0.15 (0.07)</td></tr><tr><td>3000</td><td>300</td><td>0.952 (0.1)</td><td>0.14 (0.06)</td></tr><tr><td>3000</td><td>450</td><td>0.952 (0.1)</td><td>0.13 (0.06)</td></tr><tr><td>3000</td><td>600</td><td>0.952 (0.1)</td><td>0.13 (0.06)</td></tr><tr><td>4500</td><td>150</td><td>0.976 (0)</td><td>0.14 (0.08)</td></tr><tr><td>4500</td><td>300</td><td>0.978 (0)</td><td>0.13 (0.07)</td></tr><tr><td>4500</td><td>450</td><td>0.978 (0)</td><td>0.13 (0.07)</td></tr><tr><td>4500</td><td>600</td><td>0.978 (0)</td><td>0.13 (0.07)</td></tr><tr><td>6000</td><td>150</td><td>0.994 (0)</td><td>0.14 (0.06)</td></tr><tr><td>6000</td><td>300</td><td>0.994 (0)</td><td>0.13 (0.07)</td></tr><tr><td>6000</td><td>450</td><td>0.994 (0)</td><td>0.13 (0.07)</td></tr><tr><td>6000</td><td>600</td><td>0.994 (0)</td><td>0.13 (0.06)</td></tr></table>
|
| 124 |
+
|
| 125 |
+
Table 3: Evaluating Cosine Delta on 50 ground-truth sets. $n$ is the number of tokens in the compared texts, $t$ is the number of frequent words used to construct the vector, $\rho$ is the average Spearman correlation coefficient, $\Delta$ is the average difference between authors $\mathrm{A}0$ and A1 (between base and text 5). Interquartile ranges are provided in parentheses.
|
| 126 |
+
|
| 127 |
+
authors A0 and A1 (that is, between Base and
|
| 128 |
+
|
| 129 |
+
Text 5) to obtain a very rough estimate of the average distance between two different users. Later, when
|
| 130 |
+
|
| 131 |
+
we measure how linguistic distance changes over time, we will use this estimate as a reference point, something to compare the change against, so that we can judge how large the effect size is. For $n = 3000$ and $t = 300$, the average distance is about 0.13 (though there is, unsurprisingly, considerable variation).
|
| 132 |
+
|
| 133 |
+
|
| 134 |
+
|
| 135 |
+
Topic sensitivity. An important potential problem with measures like Cosine Delta is that they are topic-sensitive, that is, the distance values can be affected not only by differences in the authors' styles, but also by the topic, i.e., what the specific texts are about (Mikros and Argiri, 2007; Björklund and Zechner, 2017). This is extremely undesirable for our purposes, since there is a risk that we observe a convergence which is not in fact linguistic: the two authors do not start writing in a more similar way, they just start writing about more related topics. To eliminate, or at least mitigate, this risk, we always compare authors $\mathrm{A}$ and $\mathrm{B}$ using texts that A wrote in one subforum and B in another subforum. While it is not completely impossible that the authors discuss similar topics in different subforums, it seems unlikely that "topical convergence" will systematically occur across subforums.
|
| 136 |
+
|
| 137 |
+
Note also that in the evaluation experiment described above all users come from the same subforum. Moreover, their production was extracted from the corpus consecutively, and thus at least parts of it come from the same threads. That means that the users are likely to discuss related topics, and the ranking system must be able to capture differences in style despite potential similarities in topic, which it does very well.
|
| 138 |
+
|
| 139 |
+
### 2.4 Calculating distance change
|
| 140 |
+
|
| 141 |
+
As mentioned in Section 2.3, all our calculations are always based on two subforums at once (for instance, Home and Sport or Drugs and Computer). We will call such pairs of subforums duplets (to distinguish them from user pairs).
|
| 142 |
+
|
| 143 |
+
Two users are considered to have gone through a period of active interaction if they have had at least 10 interactions within a year in each of the subforums (that is, no less than 20 interactions in total). We compare the production of users before and after the active interaction period, but ignore the period itself.
|
| 144 |
+
|
| 145 |
+
Within a subforum, the active period can have any length from one day to 365 days. We do not measure how often the users interact after the active period, but we discard all texts produced more than one year after the last interaction (it may be that users continue to interact and there are no messages to discard).
|
| 146 |
+
|
| 147 |
+
In other words, the general idea is that production before the active period includes everything written before the first interaction, and production after the active period includes everything written after the tenth interaction (given that it is no more than one year apart from the first interaction), but no later than one year after the last interaction. We are, however, dealing with two subforums at once, and thus have two dates for each of the three seminal interactions. For convenience, we want the active period to be defined in the same way for both subforums. We achieve that by using the earlier of the dates for the first interaction and the later of the dates for the tenth interaction (this can lead to the joint active period being longer than a year). When discarding the messages that were written after the users have stopped interacting (if any), we use the later of the last-interaction dates. See the visual summary in Figure 2.
|
| 148 |
+
|
| 149 |
+
Users who have never had a single interaction are labelled as non-interacting. We compare them to actively interacting users and ignore all pairs that end up in between: that is, those that have had some interactions but failed to pass the criteria outlined above (e.g. have had fewer than 10 interactions in total, or have had more, but never 10 within a year). The reason is that we want the difference between the groups (non-interacting and actively interacting users) to be as large as possible, so that potentially small effects can become visible.
|
| 150 |
+
|
| 151 |
+
Remember that we always want the linguistic distance to be calculated using text from different
|
| 152 |
+
|
| 153 |
+
subforums. The procedure is as follows. For every pair, if before the active period User 1 has produced at least $n$ ($n = 3000$) tokens in Subforum 1, and User 2 has produced at least $n$ tokens in Subforum 2, we calculate the distance between them,
|
| 154 |
+
|
| 155 |
+
taking $n$ tokens for User 1 from Subforum 1 and $n$ tokens for User 2 from Subforum 2.
|
| 156 |
+
|
| 157 |
+
Obviously, if User 1 has $n$ or more tokens in Subforum 2, and User 2 has $n$ or more tokens in Subforum 1, the distance is calculated using tokens from Subforum 2 for User 1 and from Subforum 1 for User 2. If both conditions are met (Condition
|
| 158 |
+
|
| 159 |
+
1: User 1 has $n$ or more tokens in Subforum 1 and User 2 has $n$ or more tokens in Subforum 2; Condition 2: User 1 has $n$ or more tokens in Subforum 2 and User 2 has $n$ or more tokens in Subforum 1), we calculate both cross-subforum distances and use their arithmetic mean as the final result. If neither of the conditions is met, the pair is discarded. This procedure is visualized in Figure 3. The same user can occur an unlimited number of times in different pairs.
|
| 160 |
+
|
| 161 |
+
Note that when we calculate the distance between users A and B, we always use the same amount of tokens ($n$) for A and B (since using texts of different sizes might skew Cosine Delta). For the "before" period, we extract the earliest $n$ tokens; for the "after" period, the latest $n$ ones (see Figure 2). The idea is to maximize the temporal distance between the periods in order to see a stronger effect.
|
| 162 |
+
|
| 163 |
+
For non-interacting users, it is not obvious how to define "before" and "after", since the active period is not defined. We do the following: find the earliest first interaction date and the latest last interaction date across all actively interacting pairs. Then we take the date which is exactly in the middle between those two as the active period (the length of the active period is thus one day, which is common for interacting pairs, too). Then exactly the same procedure as for actively interacting pairs is applied.
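The placeholder "active day" for non-interacting pairs could be computed like this (a sketch; the function name and input layout are our assumptions):

```python
from datetime import date

def pseudo_active_day(first_dates, last_dates):
    """Day exactly midway between the earliest first-interaction date
    and the latest last-interaction date observed across all actively
    interacting pairs; used as a one-day 'active period' for
    non-interacting pairs."""
    start, end = min(first_dates), max(last_dates)
    return start + (end - start) / 2
```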
|
| 164 |
+
|
| 165 |
+

|
| 166 |
+
|
| 167 |
+
Figure 2: A visualization of how the periods before and after the active interaction has started are defined. Vertical lines represent interactions, horizontal lines represent time. The $n$ earliest tokens are sampled from the "before" period, and the $n$ latest tokens from the "after" period.
|
| 168 |
+
|
| 169 |
+

|
| 170 |
+
|
| 171 |
+
Figure 3: Visualization of the threshold requirements. Let the table cells represent how many tokens the User has written in the Subforum in the given period. The following condition must be met for the user pair to be accepted: $\left( \left( {A1} \geq n \text{ AND } {A2} \geq n \right) \text{ OR } \left( {B1} \geq n \text{ AND } {B2} \geq n \right) \right) \text{ AND } \left( \left( {C1} \geq n \text{ AND } {C2} \geq n \right) \text{ OR } \left( {D1} \geq n \text{ AND } {D2} \geq n \right) \right)$
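The acceptance condition from Figure 3 could be expressed as a boolean check (a sketch; the dictionary layout mapping (user, subforum, period) to token counts is our assumption):

```python
def pair_accepted(tok, n=3000):
    """Figure 3 threshold check. `tok[(user, subforum, period)]` is the
    token count of a user in a subforum during the 'before' or 'after'
    period. A cross-subforum distance is computable in a period if one
    user has >= n tokens in one subforum and the other user has >= n
    tokens in the other; the pair is accepted only if this holds both
    before and after the active period.
    """
    def ok(period):
        return ((tok[("u1", "s1", period)] >= n and tok[("u2", "s2", period)] >= n)
                or (tok[("u1", "s2", period)] >= n and tok[("u2", "s1", period)] >= n))
    return ok("before") and ok("after")
```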
|
| 172 |
+
|
| 173 |
+
There are many more non-interacting pairs than actively interacting ones, and calculating the distance change for all of them is computationally expensive. We go through the list of all non-interacting pairs in a randomized order and stop when $m$ pairs have met the conditions, where $m$ is five times the number of actively interacting pairs that have met the conditions. The reason for this decision is that the number of actively interacting pairs is rather small for some combinations of subforums, and it makes sense to have somewhat larger samples at least for the non-interacting group.
|
| 174 |
+
|
| 175 |
+
## 3 Results
|
| 176 |
+
|
| 177 |
+
We perform the comparisons for all possible combinations of subforums (ten duplets in total). The results are summarized in Table 4. For every duplet and every type of user pair (actively interacting
|
| 178 |
+
|
| 179 |
+
vs. non-interacting) we report sample size, average distance change $\left( {{\Delta }_{\text{before }} - {\Delta }_{\text{after }}}\right)$ and the proportion of pairs for which the change was positive (the distance became smaller). Results for samples
|
| 180 |
+
|
| 181 |
+
where the number of pairs is less than 20 are not reported.
|
| 182 |
+
|
| 183 |
+
We intentionally do not perform any statistical significance testing, see Wasserstein et al. (2019) about the limitations and pitfalls of this approach in general and Koplenig (2019) in corpus linguistics in particular. It can be argued that since the same user can occur in several different pairs, the observations in the sample (pairs) are not independent and thus the assumptions for most traditional tests are not met. In addition, $p$ -values will be affected by varying sample sizes. Instead, we choose to concentrate on the effect size and the robustness of effect: how often the same pattern can be observed across duplets and thresholds.
|
| 184 |
+
|
| 185 |
+
Remember that in the evaluation experiment (Section 2.3) we roughly estimated the average distance between two different users to be around 0.13 for the chosen parameter values. While there clearly is large variation, and while the average distance can be larger in the main experiment (since the users' texts come from different subforums, not the same one), the estimate still provides us with a reference point and helps to put the observed distance changes in perspective. For Home-Sport-i, for instance, the average change is 0.033, which is approximately 25% of 0.13. This means that on average, actively interacting users in this duplet change their styles so much that they cover one quarter of the average distance between the styles of two different persons.
|
| 186 |
+
|
| 187 |
+
Overall, the distance tends to become shorter both for interacting and non-interacting pairs. The proportion of pairs which (presumably) accommodate is larger than 0.5 in 19 cases out of 19 (though only marginally so for Sport-Culture-i). The average change is positive in 17 cases out of 19 (but note that IQR is very large in most cases, which means considerable variation across pairs).
|
| 188 |
+
|
| 189 |
+
<table><tr><td>Subforum1</td><td>Subforum2</td><td>type</td><td>pairs</td><td>positive</td><td>change</td><td>$\mathbf{{IQR}}$</td></tr><tr><td>home</td><td>sport</td><td>i</td><td>29</td><td>0.828</td><td>0.033</td><td>0.042</td></tr><tr><td>home</td><td>sport</td><td>n</td><td>145</td><td>0.524</td><td>-0.012</td><td>0.081</td></tr><tr><td>computer</td><td>drugs</td><td>i</td><td>15</td><td>-</td><td>-</td><td>-</td></tr><tr><td>computer</td><td>drugs</td><td>n</td><td>75</td><td>0.680</td><td>0.048</td><td>0.096</td></tr><tr><td>sport</td><td>drugs</td><td>i</td><td>67</td><td>0.612</td><td>0.015</td><td>0.110</td></tr><tr><td>sport</td><td>drugs</td><td>n</td><td>335</td><td>0.546</td><td>0.002</td><td>0.094</td></tr><tr><td>home</td><td>computer</td><td>i</td><td>46</td><td>0.630</td><td>0.060</td><td>0.121</td></tr><tr><td>home</td><td>computer</td><td>n</td><td>230</td><td>0.617</td><td>0.029</td><td>0.089</td></tr><tr><td>home</td><td>drugs</td><td>i</td><td>22</td><td>0.682</td><td>0.101</td><td>0.201</td></tr><tr><td>home</td><td>drugs</td><td>n</td><td>110</td><td>0.664</td><td>0.028</td><td>0.081</td></tr><tr><td>sport</td><td>computer</td><td>i</td><td>89</td><td>0.607</td><td>0.031</td><td>0.153</td></tr><tr><td>sport</td><td>computer</td><td>n</td><td>445</td><td>0.600</td><td>0.027</td><td>0.105</td></tr><tr><td>home</td><td>culture</td><td>i</td><td>105</td><td>0.686</td><td>0.042</td><td>0.090</td></tr><tr><td>home</td><td>culture</td><td>n</td><td>525</td><td>0.608</td><td>0.020</td><td>0.078</td></tr><tr><td>sport</td><td>culture</td><td>i</td><td>332</td><td>0.506</td><td>-0.014</td><td>0.119</td></tr><tr><td>sport</td><td>culture</td><td>n</td><td>1660</td><td>0.619</td><td>0.009</td><td>0.101</td></tr><tr><td>drugs</td><td>culture</td><td>i</td><td>25</td><td>0.680</td><td>0.077</td><td>0.190</td></tr><tr><td>drugs</td><td>culture</td><td>n</td><td>125</td><td>0.584</td><td>0.023</td><td>0.115</td></tr><tr><td>computer</td><td>culture</td><td>i</td><td>144</td><td>0.694</td><td>0.058</td><td>0.114</td></tr><tr><td>computer</td><td>culture</td><td>n</td><td>720</td><td>0.640</td><td>0.032</td><td>0.107</td></tr></table>
Table 4: Results across the subforum duplets. Listed: whether the pair of users actively interacts or not (type); total number of pairs in the sample; proportion of pairs for which $\Delta_{\text{before}} - \Delta_{\text{after}}$ is positive; average change $\Delta_{\text{before}} - \Delta_{\text{after}}$ and the corresponding IQR. Shaded are rows where the sample size is smaller than 20 pairs (considered unreliable).
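For concreteness, the three per-row statistics in Table 4 can be computed as in the following sketch (our illustration, not the authors' script; `deltas` stands for the list of $\Delta_{\text{before}} - \Delta_{\text{after}}$ values for one group of pairs):

```python
from statistics import mean, quantiles

def summarize(deltas):
    """Summary statistics for a list of per-pair distance changes."""
    q1, _, q3 = quantiles(deltas, n=4)  # the three quartile cut points
    return {
        "pairs": len(deltas),
        # share of pairs whose distance shrank (presumed accommodation)
        "positive": sum(d > 0 for d in deltas) / len(deltas),
        "change": mean(deltas),  # average change; positive = convergence
        "IQR": q3 - q1,          # spread of the changes
    }
```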
We compare the observed results with the possible outcomes in Table 5. Out of nine duplets with sufficient sample size, six demonstrate the effect which we judge to be most compatible with Outcome 1 in Table 1: there is overall convergence to a community norm and pairwise accommodation on top of that. In the Home-Sport duplet, the average distance change for non-interacting users is negative, suggesting divergence, but the proportion of converging pairs is marginally larger than 0.5. We label this case as Outcome 3: no clear effect for non-interacting users, thus no evidence for convergence to a community norm. In the Sport-Computer duplet, the differences are too small to prefer Outcome 1 over Outcome 2. Finally, the Sport-Culture duplet exhibits an unexpected effect: the non-interacting users seem to accommodate, while the interacting users do not (according to the proportion measures) or even diverge (according to the average change).
## 4 Discussion
From Section 3 it is clear that not all the results unambiguously point in the same direction. It is, however, obvious that in most cases the distance does become shorter, that is, users do converge. Negative results (the distance becomes longer) are not only less frequent, but also weaker than most of the positive ones.
By comparing distance changes with the average distance between two different users we show that the effect sizes can be viewed as considerable.
The shortening trend is stronger and more robust for actively interacting pairs (in six cases out of nine), and we thus find our results most compatible with Outcome 1.
More direct insight into the process of convergence would of course be desirable before it can be stated with certainty that it is caused by interactions. Nonetheless, our results provide evidence that it actually can be so. In other words, we show that convergence can exist (in the sense that a necessary condition, the observed change in distances, is met), but not that it definitely exists.
<table><tr><td>Subforum1</td><td>Subforum2</td><td>${\Delta }_{pos}$</td><td>${\Delta }_{change}$</td><td>Outcome</td><td>Comment</td></tr><tr><td>home</td><td>sport</td><td>0.304</td><td>0.045</td><td>3</td><td>divergence for non-int. users?</td></tr><tr><td>computer</td><td>drugs</td><td>-</td><td>-</td><td>-</td><td>sample too small</td></tr><tr><td>sport</td><td>drugs</td><td>0.066</td><td>0.013</td><td>1</td><td/></tr><tr><td>home</td><td>computer</td><td>0.013</td><td>0.031</td><td>1</td><td/></tr><tr><td>home</td><td>drugs</td><td>0.018</td><td>0.073</td><td>1</td><td/></tr><tr><td>sport</td><td>computer</td><td>0.007</td><td>0.004</td><td>2</td><td rowspan="2">small differences</td></tr><tr><td>home</td><td>culture</td><td>0.078</td><td>0.022</td><td>1</td></tr><tr><td>sport</td><td>culture</td><td>-0.113</td><td>-0.023</td><td>?</td><td rowspan="3">divergence for int. users?</td></tr><tr><td>drugs</td><td>culture</td><td>0.096</td><td>0.054</td><td>1</td></tr><tr><td>computer</td><td>culture</td><td>0.054</td><td>0.026</td><td>1</td></tr></table>

Table 5: Classification of outcomes (see Table 1) per duplet (see Table 4). ${\Delta }_{pos}$ = difference between the proportions of presumably accommodating pairs for interacting and non-interacting users (column positive in Table 4). ${\Delta }_{change}$ = difference between the average distance changes for interacting and non-interacting users (column change in Table 4). Positive values indicate Outcome 1.

Note that while a reversed causal link can be suggested (users who have similar writing styles will interact more often, or "birds of a feather flock together"; McPherson et al., 2001), it can hardly explain our results on its own: why would users who write on the same subforum, and especially those who interact, become linguistically closer over time?
There are several reasons why our results are not as clean as one might want them to be (apart from the obvious "random noise"). First, users in the pairs that we label as "non-interacting" can still interact in other Flashback subforums. Second, while we showed that Cosine Delta is a very good measure of linguistic distance, the definition of an interaction is more arbitrary. There is already a tradition of using the "post-nearby-in-the-same-thread" measure (Hamilton et al., 2017; Del Tredici and Fernández, 2018), but it has not really been evaluated. Overall, further exploration of the same (or similar) data is of course desirable. Different experimental designs, thresholds and measures would show how robust the observed effects are.
We find the following questions particularly appealing for future studies.
- If we compare accommodation across interacting pairs, will it be correlated with the number/intensity of interactions?
- What happens if we consider not only direct connections between users, but also indirect ones? If A interacts with B, B interacts with C, but A does not directly interact with C: do A and C become closer?
- What happens if A and C from the previous example are pulling the style of B into different directions?
- Why do we sometimes observe negative values that suggest divergence (the distance increases)? Danescu-Niculescu-Mizil et al. (2013) observe an increasing divergence between the community norm and the production of a user who is becoming less active in the community (and will eventually leave), but it is unclear whether this can explain our results.
- Is it possible to explain convergence and divergence better if we take into account the content of the users' posts and the relationship between users?
## 5 Conclusions
We show that writing styles of users who participate in the same subforums do become more similar over time and that this increase in similarity is stronger for pairs of users who actively interact (compared to those who do not interact), though this is not an exceptionless trend. These results support the accommodation hypothesis (let us repeat Labov's wording: "the more often people talk to each other, the more similar their speech will be").
It is desirable to see if the observed effects can be replicated in similar studies with different experimental settings.
All data and scripts necessary to reproduce the study will be made openly available.
## Acknowledgements
## References
Allan Bell. 1984. Language style as audience design. Language in Society, 13(2):145-204.
Johanna Björklund and Niklas Zechner. 2017. Syntactic methods for topic-independent authorship attribution. Cambridge University Press.
Lars Borin, Markus Forsberg, and Johan Roxendal. 2012. Korp - the corpus infrastructure of Språkbanken. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 474-478, Istanbul, Turkey. European Language Resources Association (ELRA).
John Burrows. 2002. 'Delta': a Measure of Stylistic Difference and a Guide to Likely Authorship. Literary and Linguistic Computing, 17(3):267-287.
Patricia Cukor-Avila and Guy Bailey. 2001. The effects of the race of the interviewer on sociolinguistic fieldwork. Journal of Sociolinguistics, 5(2):252-270.
Cristian Danescu-Niculescu-Mizil, Robert West, Dan Jurafsky, Jure Leskovec, and Christopher Potts. 2013. No country for old members: User lifecycle and linguistic change in online communities. In Proceedings of the 22nd International Conference on World Wide Web, WWW '13, pages 307-318, New York, NY, USA. Association for Computing Machinery.
Marco Del Tredici and Raquel Fernández. 2018. The road to success: Assessing the fate of linguistic innovations in online communities. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1591-1603, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Penelope Eckert. 2012. Three waves of variation study: The emergence of meaning in the study of sociolinguistic variation. Annual Review of Anthropology, 41(1):87-100.
Stefan Evert, Thomas Proisl, Thorsten Vitt, Christof Schöch, Fotis Jannidis, and Steffen Pielström. 2015. Towards a better understanding of Burrows's delta in literary authorship attribution. In Proceedings of the Fourth Workshop on Computational Linguistics for Literature, pages 79-88, Denver, Colorado, USA. Association for Computational Linguistics.
Cynthia Gallois, Howard Giles, Elizabeth Jones, Aaron C. Cargile, and Hiroshi Ota. 1995. Accommodating intercultural encounters: Elaborations and extensions. In Richard L. Wiseman, editor, Intercultural Communication Theory, pages 115-147. Sage Publications.
Simon Garrod, Alessia Tosi, and Martin J. Pickering. 2018. Alignment during interaction. In Shirley-Ann Rueschemeyer and M. Gareth Gaskell, editors, The Oxford Handbook of Psycholinguistics, 2nd edition. Oxford University Press.
Howard Giles. 1973. Accent mobility: A model and some data. Anthropological Linguistics, 15(2):87-105.
William Hamilton, Justine Zhang, Cristian Danescu-Niculescu-Mizil, Dan Jurafsky, and Jure Leskovec. 2017. Loyalty in online communities. Proceedings of the International AAAI Conference on Web and Social Media, 11(1):540-543.
Internetstiftelsen. 2021. Svenskarna och internet 2021. https://svenskarnaochinternet.se/rapporter/svenskarna-och-internet-2021/.
Fotis Jannidis, Steffen Pielström, Christof Schöch, and Thorsten Vitt. 2015. Improving Burrows' Delta: An empirical evaluation of text distance measures. In Digital Humanities Conference, volume 11, Sydney.
Alexander Koplenig. 2019. Against statistical significance testing in corpus linguistics. Corpus Linguistics and Linguistic Theory, 15(2):321-346.
William Labov. 2001. Principles of linguistic change. Volume 2: Social factors. Blackwell, Oxford.
Miller McPherson, Lynn Smith-Lovin, and James M. Cook. 2001. Birds of a feather: Homophily in social networks. Annual Review of Sociology, 27(1):415-444.
George K. Mikros and Eleni K. Argir. 2007. Investigating topic influence in authorship attribution. In SIGIR 2007 Workshop: Plagiarism Analysis, Authorship Identification, and Near-Duplicate Detection, pages 29-36.
Aurélie Nardy, Jean-Pierre Chevrot, and Stéphanie Barbu. 2014. Sociolinguistic convergence and social interactions within a group of preschoolers: A longitudinal study. Language Variation and Change, 26(3):273-301.
Dong Nguyen and Carolyn P. Rosé. 2011. Language use as a reflection of socialization in online communities. In Proceedings of the Workshop on Language in Social Media (LSM 2011), pages 76-85, Portland, Oregon. Association for Computational Linguistics.
Scott A. Reid and Howard Giles. 2008. Social identity theory. In The International Encyclopedia of Communication. John Wiley & Sons, Ltd.
John R Rickford, Faye McNair-Knox, et al. 1994. Addressee-and topic-influenced style shift: A quantitative sociolinguistic study. In Sociolinguistic perspectives on register, pages 235-276. Oxford University Press.
Peter W. H. Smith and W. Aldridge. 2011. Improving authorship attribution: Optimizing Burrows' Delta method. Journal of Quantitative Linguistics, 18(1):63-88.
Ronald L. Wasserstein, Allen L. Schirm, and Nicole A. Lazar. 2019. Moving to a world beyond "p < 0.05". The American Statistician, 73(sup1):1-19.
NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/1Hwy5yfNadS/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,438 @@
§ YOU SAY TOMATO, I SAY THE SAME: A LARGE-SCALE STUDY OF LINGUISTIC ACCOMMODATION IN ONLINE COMMUNITIES
Anonymous NODALIDA submission
§ ABSTRACT
An important assumption in sociolinguistics and cognitive psychology is that human beings adjust their language use to their interlocutors.
Put simply, the more often people talk (or write) to each other, the more similar their speech becomes. Such accommodation has often been observed in small-scale observational studies and experiments, but large-scale longitudinal studies that systematically test whether the accommodation occurs are scarce. We use data from a very large Swedish online discussion forum to show that linguistic production of the users who write in the same subforum does usually become more similar over time. Moreover, the results suggest that this trend is stronger for those pairs of users who actively interact than for those pairs who do not interact. Our data thus support the accommodation hypothesis.
§ 1 INTRODUCTION
Language is a tool not only for conveying information, but also for expressing attitudes, constructing identities and building relationships (Eckert, 2012). One manifestation of this fundamental property of language is that how we speak (or write) depends on whom we are speaking (or writing) to. How exactly the audience affects the linguistic production is a complex and multi-faceted process which can be approached from various perspectives. Consider, for instance, the audience design theory (Bell, 1984), social identity theory (Reid and Giles, 2008) and accommodation theory (Giles, 1973; Gallois et al., 1995).
In this paper, we perform a large-scale test of the hypothesis that people adjust their production style to their interlocutors. This phenomenon is known as accommodation (sometimes attunement or linguistic alignment), or convergence if the styles of the interlocutors are becoming more similar (divergence if they are becoming more different). While it has received considerable attention within sociolinguistics (Rickford et al., 1994; Cukor-Avila and Bailey, 2001) and cognitive psychology (Garrod et al., 2018), large-scale longitudinal studies are scarce. An exception is a study by Nardy et al. (2014), who observed a group of French-speaking children at a kindergarten for one year and showed that children who interacted more frequently adopted similar usages of a number of sociolinguistic variables (such as, for instance, the dropping of the consonant /R/ in post-consonantal word-final positions).
The internet, and social media in particular, provide us with a vast amount of data about how people communicate and how they use language for purposes other than information transmission (Nguyen and Rosé, 2011). While in some respects these data are not as informative as those collected by direct observation or experiment, in other respects they may be equally or even more useful, providing very detailed information about who interacted when with whom and how. Besides, it is often possible to collect large datasets that enable more systematic hypothesis testing.
We use data from a very large Swedish discussion forum (Flashback) to test a widely held sociolinguistic assumption that "the more often people talk to each other, the more similar their speech will be" (Labov, 2001, p. 288). In brief, we find pairs of Flashback users who during some period of time have actively interacted (see Section 2.2 for the definition of "active interaction"). We define a measure of linguistic distance between users and show that it is valid for our purposes (see Section 2.3). For every pair of users, we then calculate the linguistic distance between the two users' production before they started interacting ($\Delta_{\text{before}}$) and after it ($\Delta_{\text{after}}$), and the difference between these distances ($\Delta_i = \Delta_{\text{before}} - \Delta_{\text{after}}$). If the convergence assumption is correct, we expect that the distance will tend to become smaller and the average $\Delta_i$ will be positive.
A positive $\Delta_i$, however, can arise for different reasons, of which arguably the most prominent one is that distances between users become smaller not because users accommodate to specific interlocutors, but rather because they converge on a certain style adopted in the community (Danescu-Niculescu-Mizil et al., 2013). To test whether this is a better explanation, we perform a similar calculation for those pairs who have never had a single interaction, comparing texts written earlier ($\Delta_{\text{early}}$) and later ($\Delta_{\text{later}}$) during their activity on the forum ($\Delta_n = \Delta_{\text{early}} - \Delta_{\text{later}}$). If there is convergence to a norm, the average $\Delta_n$ should be positive.
It is also possible that both pairwise accommodation and convergence to the community norm occur simultaneously. Moreover, they might even be parts of the same process: if speakers do converge on a certain norm, this convergence can emerge (at least partly) due to pairwise interactions. It is, however, also possible that only one of these processes occurs. Speakers can, for instance, converge on the community norm by adjusting to some perceived "average" style and not specific individual interlocutors. On the other hand, it can be imagined that speakers do adjust to the individual interlocutors, but that does not lead to the emergence of the community norm (for instance, because different interlocutors are "pulling" in different directions). The purpose of this study is to provide some insight into these not entirely understood processes.
We envisage four likely outcomes of our experiments, summarized in Table 1. Other outcomes are possible, but would be more difficult to explain. We would, for instance, be surprised if ${\Delta }_{n}$ turns out to be larger than ${\Delta }_{i}$ (since if there is convergence to community norm, it should be affecting actively interacting and non-interacting users in approximately the same way). Another unexpected result would be a negative value of either ${\Delta }_{n}$ or ${\Delta }_{i}$ , since that would imply systematic divergence (see discussion in Section 4).
§ 2 MATERIALS AND METHODS
§ 2.1 CORPORA
We use Flashback, ${}^{1}$ a very large Swedish discussion forum covering a broad variety of topics which has existed for more than two decades. In 2021, the proportion of internet users in Sweden (excluding those younger than eight years) who visited the forum at least once during the last 12 months was estimated to be 24% (Internetstiftelsen, 2021).
The forum is divided into 16 subforums, of which we use five: Dator och IT 'Computer and IT', Droger 'Drugs', Hem, bostad och familj 'Home, house and family', Kultur & Media 'Culture and media', Sport och träning 'Sport and training'. These five were selected as being relatively large, of comparable size and representing diverse and not directly related topics.
To access the Flashback texts, we use the corpora created and maintained by Språkbanken Text, a Swedish national NLP infrastructure. The corpora are available for download${}^{2}$ and for searching via the Korp interface (Borin et al., 2012) and its API.${}^{3}$
The basic corpus statistics are summarized in Table 2. The earliest available posts date back to 2000, and the corpora were last updated in February 2022. The number of users is estimated as the number of unique non-empty usernames. We separately list the number of "prolific" users: we consider users prolific if they have written 6000 tokens or more. All other users are discarded (many of the prolific users will not pass additional thresholds either, see Section 2.4).
Subforums may be further divided into subsub- and subsubsubforums, which we do not take into account. What is important for our purposes is that messages (posts) are always organized in threads: there is an initial message which starts a thread (often a question) and then an unlimited number of messages which either respond to the original message or to later messages, or are in some other way related to the thread's topic. The structure of a thread is linear: messages are posted in a strictly chronological order.
§ 2.2 DEFINING INTERACTION
Two users are assumed to have had an interaction if they have written messages within the same thread, the two messages are separated by no more than two other messages, and no more than five days have passed between the times the two messages were posted. This definition has been used by Hamilton et al. (2017) and Del Tredici and Fernández (2018), but without the temporal threshold. We consider the temporal threshold useful, since Flashback can have very long threads, sometimes spanning years.
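As a sketch, this criterion can be operationalized as follows (our illustration; the `Post` structure and all names are assumptions, not the forum's or the authors' API):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from itertools import combinations

@dataclass
class Post:
    user: str
    posted: datetime  # threads are strictly chronological

def interacting_pairs(thread, max_between=2, max_days=5):
    """Return the user pairs that interacted at least once within one thread."""
    pairs = set()
    for i, j in combinations(range(len(thread)), 2):
        a, b = thread[i], thread[j]
        if a.user == b.user:
            continue
        close_in_thread = (j - i - 1) <= max_between   # <= 2 messages in between
        close_in_time = (b.posted - a.posted) <= timedelta(days=max_days)
        if close_in_thread and close_in_time:
            pairs.add(frozenset((a.user, b.user)))
    return pairs
```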
${}^{2}$ https://spraakbanken.gu.se/resurser?s=flashback&language=All
${}^{3}$ https://ws.spraakbanken.gu.se/docs/korp
${}^{1}$ https://www.flashback.org/
<table><tr><td>Outcome</td><td>Condition</td><td>Interpretation</td></tr><tr><td>1</td><td>${\Delta}_{i} > {\Delta}_{n} > 0$</td><td>Both pairwise accommodation and overall convergence to community norm are detected</td></tr><tr><td>2</td><td>${\Delta}_{i} = {\Delta}_{n} > 0$</td><td>No pairwise accommodation; overall convergence to community norm is detected</td></tr><tr><td>3</td><td>${\Delta}_{i} > {\Delta}_{n} = 0$</td><td>Pairwise accommodation is detected; no convergence to community norm</td></tr><tr><td>4</td><td>${\Delta}_{i} = {\Delta}_{n} = 0$</td><td>No pairwise accommodation; no convergence to community norm</td></tr></table>
Table 1: Four likely outcomes of the experiment. ${\Delta }_{i}$ is the change of linguistic distance between actively interacting users, ${\Delta }_{n}$ is the change of distance between non-interacting users.
<table><tr><td>Subforum</td><td>tokens</td><td>users</td><td>prolific users</td></tr><tr><td>Computer</td><td>316M</td><td>187K</td><td>9.3K</td></tr><tr><td>Drugs</td><td>257M</td><td>123K</td><td>8.0K</td></tr><tr><td>Culture</td><td>434M</td><td>211K</td><td>12.2K</td></tr><tr><td>Home</td><td>348M</td><td>168K</td><td>10.0K</td></tr><tr><td>Sport</td><td>251M</td><td>105K</td><td>5.4K</td></tr></table>
Table 2: Basic statistics about the Flashback subforums. Prolific users have written 6000 tokens or more
See the definition of "actively interacting users" in Section 2.4.
§ 2.3 MEASURING LINGUISTIC DISTANCE
Potential solutions. A traditional sociolinguistic approach would be to identify a number of linguistic variables (features for which variation is known to exist) and use them for comparison (Nardy et al., 2014). The main problem with this approach is that most variables are not very frequent, and it is thus difficult to collect enough observations. A traditional NLP approach would be to use a language model (Danescu-Niculescu-Mizil et al., 2013). Here, the main problem would be to ensure that the model has enough training data. We use a metric which is often applied in authorship attribution studies, Cosine Delta (Smith and Aldridge, 2011), a modification of Burrows' Delta (Burrows, 2002). Its main advantage is that it can often be successfully applied to relatively small datasets, and it is also computationally efficient. It can also be considered a (very simple) language model.
Cosine Delta. To calculate Cosine Delta between two texts, the texts are represented as $t$-dimensional vectors where every element is a $z$-score (standard score) of the relative frequency of one of the $t$ most frequent words. The cosine of the angle between the two vectors gauges their proximity; by subtracting it from 1, we get the distance (see Equation 1).
$$
\Delta_{\angle}\left( T, T' \right) = 1 - \frac{\mathbf{z}\left( T\right) \cdot \mathbf{z}\left( T' \right)}{\lVert \mathbf{z}\left( T\right) \rVert_{2} \, \lVert \mathbf{z}\left( T' \right) \rVert_{2}} \tag{1}
$$
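A minimal sketch of this computation (our illustration, not the authors' code; the paper uses the $t = 300$ most frequent words of the whole corpus, here $t$ is tiny for readability, and we assume non-constant word frequencies in the reference corpus so no standard deviation is zero):

```python
from collections import Counter
import math

def cosine_delta(tokens_a, tokens_b, corpus_texts, t=300):
    """Cosine Delta distance between two token lists.

    z-scores are computed against the relative frequencies of the t most
    frequent words over a reference corpus (a list of token lists).
    """
    total = Counter(w for text in corpus_texts for w in text)
    vocab = [w for w, _ in total.most_common(t)]
    # Relative frequencies of the vocabulary words in every reference text.
    rel = []
    for text in corpus_texts:
        c, n = Counter(text), len(text)
        rel.append([c[w] / n for w in vocab])
    means = [sum(col) / len(col) for col in zip(*rel)]
    stds = [math.sqrt(sum((x - m) ** 2 for x in col) / len(col))
            for col, m in zip(zip(*rel), means)]

    def z(tokens):
        c, n = Counter(tokens), len(tokens)
        return [(c[w] / n - m) / s for w, m, s in zip(vocab, means, stds)]

    za, zb = z(tokens_a), z(tokens_b)
    dot = sum(x * y for x, y in zip(za, zb))
    norm = math.sqrt(sum(x * x for x in za)) * math.sqrt(sum(y * y for y in zb))
    return 1 - dot / norm  # distance in [0, 2]; 0 = identical profiles
```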
Cosine Delta has been shown to outperform Burrows' Delta and other similar measures (Jannidis et al., 2015; Evert et al., 2015).
Evaluating the metric. A typical usage of Cosine Delta is to compare a text X of unknown or disputed authorship with texts by authors A and B in order to see whose style is more similar to the one used in X and whether the similarity is strong enough to attribute the text. This is not the task that we have in mind. We want to compare texts written by authors A and B at time P and then at a later time Q in order to see whether the styles of the two authors have become more similar. In other words, we are not trying to infer who authored which text (we know that). Instead, we want to be able to measure the distance between two different authors.
To test whether Cosine Delta is suitable for that, we run the following experiment. The main requirement for an evaluation is a meaningful benchmark which can represent the ground truth: to evaluate a distance measure, we need a set of texts between which the true distances are known. We create such a set by mixing texts produced by two authors in different proportions. For two Flashback users (A0 and A1), an equal number of tokens is extracted and used to create six texts: Base (contains solely the A0 production), 1 (80% of the production belongs to A0, 20% to A1; every token is randomly selected), 2 (60% A0, 40% A1), 3 (40% A0, 60% A1), 4 (20% A0, 80% A1) and 5 (100% A1), see Figure 1.
We accept as ground truth that the distance between the Base text and, say, Text 1 should be smaller than between Base and Text 5. We use Cosine Delta to compare Texts 1-5 with the Base text, rank them by their distance from Base, and then measure the Spearman correlation coefficient between this ranking and the true one (1, 2, 3, 4, 5).
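The benchmark construction and the ranking check can be sketched like this (our illustration under simplifying assumptions, not the authors' script; `distance` stands for any text-distance function):

```python
import random

def mix(tokens_a0, tokens_a1, share_a1, rng):
    """Build a text where each token position is drawn from A1 with probability share_a1."""
    return [t1 if rng.random() < share_a1 else t0
            for t0, t1 in zip(tokens_a0, tokens_a1)]

def spearman(pred_ranks, true_ranks):
    """Spearman correlation for rankings without ties."""
    n = len(true_ranks)
    d2 = sum((p - t) ** 2 for p, t in zip(pred_ranks, true_ranks))
    return 1 - 6 * d2 / (n * (n * n - 1))

def ranking_test(tokens_a0, tokens_a1, distance, seed=0):
    """Rank the mixed Texts 1-5 by distance from the Base text (pure A0)."""
    rng = random.Random(seed)
    shares = [0.2, 0.4, 0.6, 0.8, 1.0]  # A1 share in Texts 1-5
    texts = [mix(tokens_a0, tokens_a1, s, rng) for s in shares]
    order = sorted(range(5), key=lambda i: distance(tokens_a0, texts[i]))
    pred_ranks = [order.index(i) + 1 for i in range(5)]
    return spearman(pred_ranks, [1, 2, 3, 4, 5])
```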
Figure 1: The artificial benchmark for evaluating the linguistic distance measure: six texts with different proportions of the authors' (A0 and A1) production.

We run the ranking test on 50 artificial sets, each consisting of six texts generated from two different authors' production, as described above. All data were extracted from the subforum Fordon och trafik 'Vehicles and traffic' (not used in the main experiment). The data were extracted consecutively without any randomization, i.e. the extraction script started from the beginning of the corpus, tried to extract a predefined number of tokens for every new user it encountered, and stopped when it had collected enough data for 100 unique users.

We try several combinations of two parameters: $t$, the dimension of the vectors (the number of most frequent words whose frequencies are used), and $n$, the minimum size of the texts to be compared (larger texts are expected to yield more reliable estimates). The frequency list is compiled using the whole Flashback corpus (uncased). The results are reported in Table 3.

The performance of the ranking system is very high and increases as $n$ increases. Unfortunately, increasing $n$ decreases the sample size, since fewer user pairs will be able to pass the thresholds (see Section 2.4). We judge that the best balance between the reliability of Cosine Delta and sample size is reached with $n = 3000$ ($\rho \geq 0.95$). For $n = 6000$, the performance of Cosine Delta is better, but the sample sizes (numbers of analyzable user pairs) are too small. We use $t = 300$, since larger values do not yield any gain for the chosen $n$ values.

We also calculate the average distance between authors A0 and A1 (that is, between Base and Text 5) to obtain a very rough estimate of the average distance between two different users. Later, when we measure how linguistic distance changes over time, we will use this estimate as a reference point, something to compare the change against, so that we can judge how large the effect size is. For $n = 3000$ and $t = 300$, the average distance is about 0.13 (though there is, unsurprisingly, considerable variation).

| $n$ | $t$ | $\rho$ | $\Delta$ |
|---|---|---|---|
| 1500 | 150 | 0.936 (0.1) | 0.16 (0.06) |
| 1500 | 300 | 0.936 (0.1) | 0.15 (0.06) |
| 1500 | 450 | 0.940 (0.1) | 0.15 (0.06) |
| 1500 | 600 | 0.944 (0.1) | 0.15 (0.06) |
| 3000 | 150 | 0.950 (0.1) | 0.15 (0.07) |
| 3000 | 300 | 0.952 (0.1) | 0.14 (0.06) |
| 3000 | 450 | 0.952 (0.1) | 0.13 (0.06) |
| 3000 | 600 | 0.952 (0.1) | 0.13 (0.06) |
| 4500 | 150 | 0.976 (0) | 0.14 (0.08) |
| 4500 | 300 | 0.978 (0) | 0.13 (0.07) |
| 4500 | 450 | 0.978 (0) | 0.13 (0.07) |
| 4500 | 600 | 0.978 (0) | 0.13 (0.07) |
| 6000 | 150 | 0.994 (0) | 0.14 (0.06) |
| 6000 | 300 | 0.994 (0) | 0.13 (0.07) |
| 6000 | 450 | 0.994 (0) | 0.13 (0.07) |
| 6000 | 600 | 0.994 (0) | 0.13 (0.06) |

Table 3: Evaluating Cosine Delta on 50 ground-truth sets. $n$ is the number of tokens in the compared texts, $t$ is the number of frequent words used to construct the vector, $\rho$ is the average Spearman correlation coefficient, $\Delta$ is the average distance between authors A0 and A1 (between Base and Text 5). Interquartile ranges are provided in parentheses.

Topic sensitivity. An important potential problem with measures like Cosine Delta is that they are topic-sensitive, that is, the distance values can be affected not only by differences in the authors' styles, but also by the topic, i.e., what the specific texts are about (Mikros and Argiri, 2007; Björklund and Zechner, 2017). This is extremely undesirable for our purposes, since there is a risk that we observe a convergence which is not in fact linguistic: the two authors do not start writing in a more similar way, they just start writing about more related topics. To eliminate, or at least mitigate, this risk, we always compare authors A and B by using texts that A wrote in one subforum and B in another subforum. While it is not completely impossible that the authors discuss similar topics in different subforums, it seems unlikely that "topical convergence" will systematically occur across subforums.

Note also that in the evaluation experiment described above, all users come from the same subforum. Moreover, their production was extracted from the corpus consecutively, and thus at least parts of it come from the same threads. That means that the users are likely to discuss related topics, and the ranking system must be able to capture differences in style despite potential similarities in topic, which it does very well.

§ 2.4 CALCULATING DISTANCE CHANGE

As mentioned in Section 2.3, all our calculations are always based on two subforums at once (for instance, Home and Sport or Drugs and Computer). We will call such pairs of subforums duplets (to distinguish them from user pairs).

Two users are considered to have gone through a period of active interaction if they have had at least 10 interactions within a year in each of the subforums (that is, no less than 20 interactions in total). We compare the production of users before and after the active interaction period, but ignore the period itself.

Within a subforum, the active period can have any length from one day to 365 days. We do not measure how often the users interact after the active period, but we discard all texts produced more than one year after the last interaction (if the users continue to interact, there may be no messages to discard).

In other words, the general idea is that production before the active period includes everything written before the first interaction, and production after the active period includes everything written after the tenth interaction (given that it is no more than one year apart from the first interaction), but no later than one year after the last interaction. We are, however, dealing with two subforums at once, and thus have two dates for each of the three seminal interactions. For convenience, we want the active period to be defined in the same way for both subforums. We achieve that by using the earlier of the dates for the first interaction and the later of the dates for the tenth interaction (this can lead to the joint active period being longer than a year). When discarding the messages that were written after the users have stopped interacting (if any), we use the later of the last interaction dates. See the visual summary in Figure 2.

Users who have never had a single interaction are labelled as non-interacting. We compare them to actively interacting users and ignore all users that end up in between, that is, those who have had some interactions but failed to pass the criteria outlined above (e.g. have had fewer than 10 interactions in total, or have had more, but never 10 within a year). The reason is that we want the difference between the groups (non-interacting and actively interacting users) to be as large as possible, so that potentially small effects can become visible.

Remember that we always want the linguistic distance to be calculated using text from different subforums. The procedure is as follows. For every pair, if, before the active period, User 1 has produced at least $n$ ($n = 3000$) tokens in Subforum 1, and User 2 has produced at least $n$ tokens in Subforum 2, we calculate the distance between them, taking $n$ tokens for User 1 from Subforum 1 and $n$ tokens for User 2 from Subforum 2.

Obviously, if User 1 has $n$ or more tokens in Subforum 2, and User 2 has $n$ or more tokens in Subforum 1, the distance is calculated using tokens from Subforum 2 for User 1 and from Subforum 1 for User 2. If both conditions are met (Condition 1: User 1 has $n$ or more tokens in Subforum 1 and User 2 has $n$ or more tokens in Subforum 2; Condition 2: User 1 has $n$ or more tokens in Subforum 2 and User 2 has $n$ or more tokens in Subforum 1), we calculate both cross-subforum distances and use their arithmetic mean as the final result. If neither of the conditions is met, the pair is discarded. This procedure is visualized in Figure 3. The same user can occur an unlimited number of times in different pairs.

Note that when we calculate the distance between users A and B, we always use the same number of tokens ($n$) for A and B (since using texts of different sizes might skew Cosine Delta). For the "before" period, we extract the earliest $n$ tokens; for the "after" period, the latest $n$ ones (see Figure 2). The idea is to maximize the temporal distance between the periods in order to see a stronger effect.

For non-interacting users, it is not obvious how to define "before" and "after", since the active period is not defined. We do the following: find the earliest first interaction date and the latest last interaction date across all actively interacting pairs. Then we take the date which is exactly in the middle between those two as the active period (the length of the active period is thus one day, which is common for interacting pairs, too). Then exactly the same procedure as for actively interacting pairs is applied.

Figure 2: A visualization of how the periods before and after the active interaction are defined. Vertical lines represent interactions, the horizontal lines represent time. The $n$ earliest tokens are sampled from the "before" period, the $n$ latest tokens from the "after" period.

Figure 3: Visualization of the threshold requirements. Let the table cells represent how many tokens the user has written in the subforum in the given period. The following condition must be met for the user pair to be accepted: $((A1 \geq n \text{ AND } A2 \geq n) \text{ OR } (B1 \geq n \text{ AND } B2 \geq n)) \text{ AND } ((C1 \geq n \text{ AND } C2 \geq n) \text{ OR } (D1 \geq n \text{ AND } D2 \geq n))$.

There are many more non-interacting pairs than actively interacting ones, and calculating the distance change for all of them is computationally expensive. We go through the list of all non-interacting pairs in a randomized order and stop when $m$ pairs have met the conditions, where $m$ is five times the number of actively interacting pairs that have met the conditions. The reason for this decision is that the number of actively interacting pairs is rather small for some combinations of subforums, and it makes sense to have somewhat larger samples at least for the non-interacting group.

§ 3 RESULTS

We perform the comparisons for all possible combinations of subforums (ten duplets in total). The results are summarized in Table 4. For every duplet and every type of user pair (actively interacting vs. non-interacting), we report the sample size, the average distance change ($\Delta_{\text{before}} - \Delta_{\text{after}}$) and the proportion of pairs for which the change was positive (the distance became smaller). Results for samples where the number of pairs is less than 20 are not reported.

We intentionally do not perform any statistical significance testing; see Wasserstein et al. (2019) on the limitations and pitfalls of this approach in general and Koplenig (2019) in corpus linguistics in particular. It can be argued that since the same user can occur in several different pairs, the observations in the sample (pairs) are not independent and thus the assumptions of most traditional tests are not met. In addition, $p$-values will be affected by varying sample sizes. Instead, we choose to concentrate on the effect size and the robustness of the effect: how often the same pattern can be observed across duplets and thresholds.

Remember that in the evaluation experiment (Section 2.3) we roughly estimated the average distance between two different users to be around 0.13 for the chosen parameter values. While there clearly is large variation, and while the average distance may be larger in the main experiment (since the users' texts come from different subforums, not the same one), the estimate still provides us with a reference point and helps to put the observed distance changes in perspective. For Home-Sport-i, for instance, the average change is 0.033, which is approximately 25% of 0.13. This means that on average, actively interacting users in this duplet change their styles enough to cover one quarter of the average distance between the styles of two different persons.

Overall, the distance tends to become shorter both for interacting and non-interacting pairs. The proportion of pairs which (presumably) accommodate is larger than 0.5 in 19 cases out of 19 (though only marginally so for Sport-Culture-i). The average change is positive in 17 cases out of 19 (but note that the IQR is very large in most cases, which means considerable variation across pairs).

| Subforum1 | Subforum2 | type | pairs | positive | change | IQR |
|---|---|---|---|---|---|---|
| home | sport | i | 29 | 0.828 | 0.033 | 0.042 |
| home | sport | n | 145 | 0.524 | -0.012 | 0.081 |
| computer | drugs | i | 15 | - | - | - |
| computer | drugs | n | 75 | 0.680 | 0.048 | 0.096 |
| sport | drugs | i | 67 | 0.612 | 0.015 | 0.110 |
| sport | drugs | n | 335 | 0.546 | 0.002 | 0.094 |
| home | computer | i | 46 | 0.630 | 0.060 | 0.121 |
| home | computer | n | 230 | 0.617 | 0.029 | 0.089 |
| home | drugs | i | 22 | 0.682 | 0.101 | 0.201 |
| home | drugs | n | 110 | 0.664 | 0.028 | 0.081 |
| sport | computer | i | 89 | 0.607 | 0.031 | 0.153 |
| sport | computer | n | 445 | 0.600 | 0.027 | 0.105 |
| home | culture | i | 105 | 0.686 | 0.042 | 0.090 |
| home | culture | n | 525 | 0.608 | 0.020 | 0.078 |
| sport | culture | i | 332 | 0.506 | -0.014 | 0.119 |
| sport | culture | n | 1660 | 0.619 | 0.009 | 0.101 |
| drugs | culture | i | 25 | 0.680 | 0.077 | 0.190 |
| drugs | culture | n | 125 | 0.584 | 0.023 | 0.115 |
| computer | culture | i | 144 | 0.694 | 0.058 | 0.114 |
| computer | culture | n | 720 | 0.640 | 0.032 | 0.107 |

Table 4: Results across the subforum duplets. Listed: whether the pair of users actively interacts or not (type); total number of pairs in the sample; proportion of pairs for which $\Delta_{\text{before}} - \Delta_{\text{after}}$ is positive; average change ($\Delta_{\text{before}} - \Delta_{\text{after}}$) and the corresponding IQR. Rows where the sample size is smaller than 20 pairs are considered unreliable and their results are not shown.

We compare the observed results with the possible outcomes in Table 5. Out of nine duplets with sufficient sample size, six demonstrate the effect which we judge to be most compatible with Outcome 1 in Table 1: there is overall convergence to a community norm and pairwise accommodation on top of that. In the Home-Sport duplet, the average distance change for non-interacting users is negative, suggesting divergence, but the proportion of converging pairs is marginally larger than 0.5. We label this case as Outcome 3: no clear effect for non-interacting users, and thus no evidence for convergence to a community norm. In the Sport-Computer duplet, the differences are too small to prefer Outcome 1 over Outcome 2. Finally, the Sport-Culture duplet exhibits an unexpected effect: the non-interacting users seem to accommodate, while the interacting users do not (according to the proportion measures) or even diverge (according to the average change).

§ 4 DISCUSSION

From Section 3 it is clear that not all the results unambiguously point in the same direction. It is, however, obvious that in most cases the distance does become shorter, that is, users do converge. Negative results (distance becomes longer) are not only less frequent, but also weaker than most of the positive ones.

By comparing distance changes with the average distance between two different users we show that the effect sizes can be viewed as considerable.

The shortening trend is stronger and more robust for actively interacting pairs (in six cases out of nine), and we thus find our results most compatible with Outcome 1.

More direct insight into the process of convergence would of course be desirable before it can be stated with certainty that it is caused by interactions. Nonetheless, our results provide evidence that it actually can be so. In other words, we show that convergence can exist (in the sense that the necessary condition, observable distance changes, is met), but not that it definitely exists.

Note that while a reversed causal link can be suggested: users who have similar writing styles will interact more often, or "birds of a feather flock together" (McPherson et al., 2001), it can hardly explain our results on its own: why would users who write on the same subforum, and especially those who interact, become linguistically closer over time?

| Subforum1 | Subforum2 | $\Delta_{pos}$ | $\Delta_{change}$ | Outcome | Comment |
|---|---|---|---|---|---|
| home | sport | 0.304 | 0.045 | 3 | divergence for non-int. users? |
| computer | drugs | - | - | - | sample too small |
| sport | drugs | 0.066 | 0.013 | 1 | |
| home | computer | 0.013 | 0.031 | 1 | |
| home | drugs | 0.018 | 0.073 | 1 | |
| sport | computer | 0.007 | 0.004 | 2 | small differences |
| home | culture | 0.078 | 0.022 | 1 | |
| sport | culture | -0.113 | -0.023 | ? | divergence for int. users? |
| drugs | culture | 0.096 | 0.054 | 1 | |
| computer | culture | 0.054 | 0.026 | 1 | |

Table 5: Classification of outcomes (see Table 1) per duplet (see Table 4). $\Delta_{pos}$ = difference between the proportions of presumably accommodating pairs for interacting and non-interacting users (column positive in Table 4). $\Delta_{\text{change}}$ = difference between the average distance changes for interacting and non-interacting users (column change in Table 4). Positive values indicate Outcome 1.

There are several reasons why our results are not as clean as one might want them to be (apart from the obvious "random noise"). First, users in the pairs that we label as "non-interacting" can still interact in other Flashback subforums. Second, while we showed that Cosine Delta is a very good measure of linguistic distance, the definition of an interaction is more arbitrary. There is already a tradition of using the "post-nearby-in-the-same-thread" measure (Hamilton et al., 2017; Del Tredici and Fernández, 2018), but it has not really been evaluated. Overall, further exploration of the same (or similar) data is of course desirable. Different experimental designs, different thresholds, and different measures would show how robust the observed effects are.

We find the following questions particularly appealing for future studies.

* If we compare accommodation across interacting pairs, will it be correlated with the number/intensity of interactions?

* What happens if we consider not only direct connections between users, but also indirect ones? If A interacts with B, B interacts with C, but A does not directly interact with C: do A and C become closer?

* What happens if A and C from the previous example are pulling the style of B in different directions?

* Why do we sometimes observe negative values that suggest divergence (the distance increases)? Danescu-Niculescu-Mizil et al. (2013) observe an increasing divergence between the community norm and the production of a user who is becoming less active in the community (and will eventually leave), but it is unclear whether this can explain our results.

* Is it possible to explain convergence and divergence better if we take into account the content of the users' posts and the relationship between users?

§ 5 CONCLUSIONS

We show that the writing styles of users who participate in the same subforums do become more similar over time and that this increase in similarity is stronger for pairs of users who actively interact (compared to those who do not interact), though this is not an exceptionless trend. These results support the accommodation hypothesis (let us repeat Labov's wording: "the more often people talk to each other, the more similar their speech will be").

It is desirable to see if the observed effects can be replicated in similar studies with different experimental settings.

All data and scripts necessary to reproduce the study will be made openly available.

Acknowledgements

NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/1sGdp5g0NP/Initial_manuscript_md/Initial_manuscript.md
# Evaluating morphological generalisation in machine translation by distribution-based compositionality assessment

Anonymous Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymouser Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymousest Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

## Abstract

Compositional generalisation refers to the ability to understand and generate an infinite number of novel meanings using a finite group of known primitives and a set of rules of how to combine them. The degree to which artificial neural networks can possess this ability is an open question. Recently, many evaluation methods and benchmarks have been proposed to test compositional generalisation, but not many have focused on the morphological level of language. We propose an application of the previously developed distribution-based compositionality assessment method to assess compositional generalisation on the level of morphology in NLP tasks, such as machine translation or paraphrase detection. We demonstrate the use of our method by comparing the morphological generalisation ability of translation models with different BPE vocabulary sizes. The evaluation method we propose suggests that small vocabularies help with morphological generalisation in NMT.${}^{1}$

## 1 Introduction
Natural languages usually adhere to the principle of compositionality, with the exception of idiomatic expressions. Partee et al. (1995) phrased this principle as "The meaning of a whole is a function of the meanings of the parts and of the way they are syntactically combined". Deriving from this principle, compositional generalisation (CG) refers to the capacity to understand and generate an infinite number of novel meanings using a finite group of known primitives and a set of rules of how to combine them. In the case of language, morphemes are combined into words, and words in turn into phrases and sentences, using the syntactic rules of the language.
Neural networks have long been argued to lack the ability to generalise compositionally the way humans do (Fodor and Pylyshyn, 1988; Marcus, 1998). After the rapid improvement of neural NLP systems during the previous decade, this question has gained renewed interest. Many new evaluation methods have been developed to assess whether modern sequence-to-sequence (seq2seq) architectures such as Transformers exhibit CG, since they certainly exhibit increasingly competent linguistic behaviour. For instance, in one of the seminal CG evaluation methods, called SCAN (Lake and Baroni, 2018), a seq2seq system has seen certain natural language commands in training and needs to combine them in novel ways in testing.
CG is a general capacity that can be seen as a desideratum in many NLP tasks, and in machine learning more generally. Furthermore, CG is a multifaceted concept that can be, and should be, decomposed into narrower, more manageable aspects that can be tested separately (Hupkes et al., 2020). For example, NLP systems should be able to generalise compositionally both on the level of words and on the level of morphology.
Although many aspects of CG have recently been evaluated in NLP (an extensive review is offered by Hupkes et al. (2022)), some aspects have remained without an evaluation method. We identify (see Section 2) a lack of methods to evaluate compositional morphological generalisation using only natural, non-synthetic data. To fill this gap, we propose an application of the distribution-based compositionality assessment (DBCA) method (Keysers et al., 2020) (henceforth Keysers) to generate adversarial data splits to evaluate morphological generalisation in NLP systems.
Specifically, we split natural language corpora while controlling the distributions of lemmas and morphological features (atoms in the terminology of Keysers) on the one hand, and the distributions of the combinations of atoms (compounds, not to be confused with compound words) on the other hand. By requiring a low divergence between the atom distributions of the train and test sets, and a high divergence between the compound distributions, we can evaluate how well a system is able to generalise its morphological knowledge to unseen word forms.

---

${}^{1}$ A link to the Github repository anonymised.

---
For example, if our corpus included as atoms the lemmas "cat" and "dog", and the morphological tags Number=Sing and Number=Plur, a low divergence between the atom distributions would mean that both the training and test sets included all four of the atoms, and a high compound divergence would mean that the sets include different combinations of them, for instance training set \{cat, dogs\} and test set \{cats, dog\}.
Our main contributions are the following: firstly, we describe an application of DBCA to evaluate morphological generalisation in any NLP task in which the train and test data consist of sentences for which morphological tags are available. Secondly, we demonstrate how, with this method, we can evaluate morphological generalisation in machine translation without manual test design. And thirdly, using our proposed method, we assess the effect of the source-language BPE (Sennrich et al., 2016) vocabulary size on Finnish-English NMT performance, and conclude that a smaller vocabulary helps the NMT models in morphological generalisation.
## 2 Background
In the broader field of machine learning, CG has been analysed in various domains besides that of natural language, such as visual question answering (Bahdanau et al., 2018), visual reasoning (Zerroug et al., 2022) and mathematics (Saxton et al., 2019), but in this work we focus on natural language tasks. Two reviews have recently been published about CG in NLP, of which Donatelli and Koller (2023) focus on semantic parsing and the aforementioned Hupkes et al. (2022) (henceforth Hupkes) take a broader view, reviewing generalisation in general, not only the compositional type.
Hupkes categorised NLP generalisation experiments along five dimensions, of which we discuss two here to motivate our work. The first is the type of generalisation, along which the compositional type is distinguished from the morphological type. Hupkes define compositionality as "the ability to systematically recombine previously learned elements to map new inputs made up from these elements to their correct output. In language, the inputs are 'forms' (e.g. phrases, sentences, larger pieces of discourse), and the output that they need to be mapped to is their meaning ...". In NMT, the translation works as a proxy to meaning, so that CG can be evaluated by evaluating the translation (Dankers et al., 2022) (other works that assess CG in NMT include Li et al. (2021) and Raunak et al. (2019)).
Hupkes contrast compositional with structural, including morphological, generalisation, where an output space is not required but which focuses on the generation of the correct forms. These definitions suggest a clear divide between the categories, which is understandable when analysing the literature: morphological generalisation, specifically inflection generation, has for decades been studied in psycholinguistics (Berko, 1958; Marcus et al., 1992) and computational linguistics (Rumelhart and McClelland, 1986; Corkery et al., 2019; Kodner et al., 2022). These studies do not address the question of how the different inflections are mapped to different meanings, hence they do not address compositional generalisation. However, inflections do bear meaning, of course, and so compositional morphological generalisation is an ability that humans possess, and NLP systems ought to be tested on.
Although Hupkes do not categorise any experiments as assessing compositional morphological generalisation, there has been at least one that we think could be so categorised: Burlot and Yvon (2017) designed an NMT test suite in which a single morphological feature is modified in a source language sentence, creating a contrastive pair, and the translations of the contrastive sentences are inspected for a corresponding change in the target language.
The other dimension of Hupkes relevant to the motivation of our experiments is that of shift source: the shift between train and test sets could occur naturally (as in two natural corpora in different domains), it can be created by generating synthetic data, or an artificial partition of natural data can be obtained. Most of the previous methods to assess compositional generalisation in NMT (Burlot and Yvon, 2017; Li et al., 2021; Dankers et al., 2022) have synthesised data for the test sets. Generating synthetic data has its benefits: any morphological form can occur in the data when it is generated, and a single morphological feature can easily be focused on and evaluated qualitatively as well as quantitatively.
However, synthetic data has practical disadvantages at the least, leaving aside the more theoretical question of how well the synthetic language approximates natural language, assuming the ultimate goal is systems that process natural language. In practice, synthetic test sets require manual design, which means it is difficult to come by a method to generate an unlimited number of synthetic sentences, or a method that could work in arbitrary languages. Furthermore, when manually designing test suites to evaluate morphological generalisation, as Burlot and Yvon (2017) did, the requirement for manual work restricts the number of morphological phenomena we have resources to test.
The other option is to create artificial data splits of natural data. While natural data may be noisier and it might be more difficult to focus on a specific phenomenon of the language by this method, this method is easier to automate completely. Furthermore, the method of automatically generating data splits that we present in the next section is also generalisable to other tasks (e.g. paraphrase detection) and any corpus of sentences. Generating artificial data splits of natural data has previously been used to test CG in translation (Raunak et al., 2019), but not for assessing morphological generalisation, as far as we are aware. (For a more general discussion of splitting data into non-random testing and training sets, see Søgaard et al. (2021).)
The method we describe in this paper is an application of the DBCA method developed by Keysers. Since this method is generic and task-agnostic, it can be applied to any dataset for which it is possible to define atom and compound distributions. Although it is easier to define these distributions for synthetic data, as in the CFQ dataset described by Keysers, it can also be applied to natural data, for example in semantic parsing (Shaw et al., 2021). The next section describes how DBCA can be used to assess morphological generalisation in any task where the training and testing corpora consist of natural language sentences.
## 3 Applying DBCA to assess morphological generalisation in NLP
DBCA is a method to evaluate CG by splitting a dataset into train/test sets with differing distributions, requiring some capacity to generalise from the training distribution to the test distribution. Specifically, the distributions of atoms (known primitives) and compounds (combinations of atoms) are controlled to get similar atom distributions but contrasting compound distributions in the training and test sets. In our application of DBCA to a corpus of natural language sentences, the atom distribution ${\mathcal{F}}_{A}$ of the corpus is the distribution of the lemmas and morphological features, and the compound distribution ${\mathcal{F}}_{C}$ is the distribution of their combinations. Table 1 presents examples of atoms and compounds in this work.
To determine the atom and compound distributions, we first need to obtain the lemmas and morphological tags of all words in the corpus, which we accomplish for Finnish corpora using the Turku Neural Parser Pipeline (Kanerva et al., 2018). For the experiments presented in Section 4, we use a corpus of $1\mathrm{M}$ sentences. In practice, we do not have resources to control the distribution of all lemmas even in this relatively small corpus, so we need to select some subset of the lemmas that we include in our analysis.
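To make the atom/compound distinction concrete, the parser output can be turned into per-sentence atom and compound counts. The following is a minimal sketch under our own naming, not the paper's released code; the `(lemma, feats)` input format is an assumption modelled on CoNLL-U-style feature strings such as those the Turku parser produces.

```python
from collections import Counter

def sentence_atoms_compounds(tokens):
    """tokens: list of (lemma, feats) pairs for one sentence, where
    feats is a CoNLL-U style feature string such as
    "Case=Gen|Number=Plur" (or "_" when the word has no features).
    Returns the per-sentence atom and compound occurrence counts."""
    atoms, compounds = Counter(), Counter()
    for lemma, feats in tokens:
        tags = feats.split("|") if feats and feats != "_" else []
        # Atoms are the lemma and each individual morphological tag.
        atoms[lemma] += 1
        for tag in tags:
            atoms[tag] += 1
        # A compound is the lemma combined with its full tag set,
        # i.e. one unique word form.
        if tags:
            compounds[lemma + "|" + "|".join(sorted(tags))] += 1
    return atoms, compounds
```

For the Table 1 example, `sentence_atoms_compounds([("tunturi", "Case=Gen|Number=Plur")])` yields the atoms tunturi, Case=Gen and Number=Plur, and the single compound tunturi|Case=Gen|Number=Plur.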
Selecting the lemma subset could be done in many ways, but the following is a way we deemed reasonable. To limit the number of lemmas, we first filter out lemmas that do not appear in the list of 94110 Finnish lemmas${}^{2}$ or, since this list does not include proper names, in lists${}^{3}$ of names of places, or lists of Finnish and English given names. This way, the lemmas that are filtered out include most of the typos and other nonwords. Then we rank the remaining lemmas by frequency in our corpus, and sample a fixed number of lemma occurrences from constant intervals in the ranked list of lemmas. Specifically, we take 40000 lemma occurrences at intervals of 1000 lemma types in the list of lemmas. For our corpus of $1\mathrm{M}$ sentences, this method subsamples the lemmas with frequency ranks of 1000-1033, 2000-2083, 3000-3174, and so on, so that there are fewer frequent lemma types than rare lemma types, but the total number of occurrences in each bucket is around ${40}\mathrm{k}$. Lemmas that occur fewer than 10 times in the corpus are excluded. After the filtering, we have 8720 lemma types that occur about ${390}\mathrm{k}$ times in total in our corpus of $1\mathrm{M}$ sentences. We append the list of 48 morphological tags${}^{4}$ (after filtering out some that indicate uninteresting words such as 'Typo' and 'Abbr') that these lemmas appear with to the lemma list to complete our list of atoms.

---

${}^{2}$ Available at https://kaino.kotus.fi/sanat/nykysuomi/

${}^{3}$ List of names of places: https://kaino.kotus.fi/eksonyymit/?a=aineisto

English given names: https://en.wiktionary.org/wiki/Appendix:English_given_names and Finnish: https://tinyurl.com/3mn52ms6 https://tinyurl.com/mwjvaxkk

---

<table><tr><td/><td>Atoms</td><td>Compounds</td></tr><tr><td>Description</td><td>lemmas and morphological tags</td><td>combinations of atoms</td></tr><tr><td>Examples</td><td>tunturi, Case=Gen, Case=Ade, Number=Sing, Number=Plur</td><td>tunturi|Case=Gen|Number=Plur ("tunturien"), tunturi|Case=Ade|Number=Sing ("tunturilla")</td></tr></table>

Table 1: Description and examples of what we call "atoms" and "compounds". The compounds are the unique word forms, determined by the lemma and the morphological tags. The word form is written in parentheses.
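The frequency-bucketed lemma sampling described above can be sketched as follows. This is our own illustration of the procedure, not the paper's released code; the function name and parameter defaults are hypothetical, and it assumes each bucket is exhausted well before the next window starts.

```python
from collections import Counter

def subsample_lemmas(lemma_counts, occ_per_bucket=40000,
                     interval=1000, min_count=10):
    """Walk the frequency-ranked lemma list in windows of `interval`
    types and, from the start of each window, keep lemmas until their
    summed corpus frequency reaches `occ_per_bucket`. Assumes a bucket
    is exhausted before the next window begins (true when
    occ_per_bucket is small relative to the window's frequencies)."""
    ranked = [lemma for lemma, count in lemma_counts.most_common()
              if count >= min_count]
    kept = []
    for start in range(0, len(ranked), interval):
        total = 0
        for lemma in ranked[start:]:
            if total >= occ_per_bucket:
                break
            kept.append(lemma)
            total += lemma_counts[lemma]
    return kept
```

Because the occurrence budget is fixed per window while rarer lemmas contribute fewer occurrences each, later buckets span more lemma types, matching the 1000-1033, 2000-2083, 3000-3174 ranges reported above.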
Keysers weighted the compounds to "avoid double-counting compounds that are highly correlated with some of their super-compounds". The idea is to lessen the weight of those compounds that only or often occur as a part of one certain super-compound. We weight the compounds analogously, but use only two levels in our weighting, which makes the weighting simpler than in Keysers: we consider the combinations of morphological tags as the lower level of compounds, and these combined with lemmas as the higher level. Thus the motivation for weighting in our case is to avoid relying on those morphological tag combinations that only occur with some specific lemma. Therefore, we look for the lemma with which each morphological tag combination occurs most often, and give the tag combination a weight that is the complement of the empirical probability that the tag combination occurs with this lemma. For example, we found that the rare morphological tag combination Case=Ade | Degree=Pos | Number=Plur | PartForm=Pres | VerbForm=Part | Voice=Pass occurs ${84}\%$ of the time with the lemma saada, forming the word "saatavilla", so it gets a weight of 0.16. After weighting the tag combinations, we exclude those that have a weight of 0.33 or less.
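The weighting step can be sketched in a few lines (an illustrative reimplementation under hypothetical names, not the paper's code): for each morphological tag combination, find its single most frequent lemma and take the complement of that empirical probability.

```python
from collections import Counter, defaultdict

def tag_combination_weights(observations):
    """observations: iterable of (lemma, tag_combination) pairs, e.g.
    ("saada", "Case=Ade|Degree=Pos|Number=Plur|...|Voice=Pass").
    Each tag combination is weighted by the complement of the
    empirical probability of it occurring with its most frequent
    lemma, so combinations tied to one lemma get low weight."""
    per_combo = defaultdict(Counter)
    for lemma, combo in observations:
        per_combo[combo][lemma] += 1
    return {combo: 1.0 - max(lemmas.values()) / sum(lemmas.values())
            for combo, lemmas in per_combo.items()}
```

A combination seen 84% of the time with one lemma, as in the saada example, gets weight 0.16 and would then fall under the 0.33 exclusion threshold.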
After the described filtering steps, we have 8322 atoms, which include the lemmas and morphological tags. The atoms occur about ${1.3}\mathrm{M}$ times in ${273}\mathrm{k}$ sentences in our corpus of $1\mathrm{M}$ sentences. There are 335 morphological tag combinations that these lemmas appear with, which create about ${69}\mathrm{k}$ unique word forms with the lemmas; i.e. there are ${69}\mathrm{k}$ compounds that we use in our analysis. These compounds occur ${352}\mathrm{k}$ times in the corpus, in ${273}\mathrm{k}$ sentences.
Calculating atom and compound divergences is done the same way as in Keysers. Namely, the divergence $\mathcal{D}$ between distributions $P$ and $Q$ is calculated using the Chernoff coefficient ${C}_{\alpha }\left( {P\parallel Q}\right) = \mathop{\sum }\limits_{k}{p}_{k}^{\alpha }{q}_{k}^{1 - \alpha } \in \left\lbrack {0,1}\right\rbrack$ (Chung et al., 1989), with $\alpha = {0.5}$ for the atom divergence and $\alpha = {0.1}$ for the compound divergence. As described by Keysers, $\alpha = {0.5}$ for the atom divergence "reflects the desire of making the atom distributions in train and test as similar as possible", and $\alpha = {0.1}$ for the compound divergence "reflects the intuition that it is more important whether a certain compound occurs in $\mathrm{P}$ (train) than whether the probabilities in $\mathrm{P}$ (train) and $\mathrm{Q}$ (test) match exactly". Since the Chernoff coefficient is a similarity metric, the atom and compound divergences of a train set $V$ and a test set $W$ are:

$$
{\mathcal{D}}_{A}\left( {V\parallel W}\right) = 1 - {C}_{0.5}\left( {{\mathcal{F}}_{A}\left( V\right) \parallel {\mathcal{F}}_{A}\left( W\right) }\right)
$$

$$
{\mathcal{D}}_{C}\left( {V\parallel W}\right) = 1 - {C}_{0.1}\left( {{\mathcal{F}}_{C}\left( V\right) \parallel {\mathcal{F}}_{C}\left( W\right) }\right) .
$$
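The divergence computation is straightforward to sketch in Python (a minimal illustration under our own naming, not the paper's released code). Applied to the introduction's {cat, dogs} versus {cats, dog} split, the shared atoms give an atom divergence of 0 while the disjoint word forms give a compound divergence of 1:

```python
from collections import Counter

def divergence(p_counts, q_counts, alpha):
    """1 - Chernoff coefficient between the empirical distributions
    defined by two occurrence-count mappings."""
    p_total = sum(p_counts.values())
    q_total = sum(q_counts.values())
    coeff = sum((p_counts.get(k, 0) / p_total) ** alpha
                * (q_counts.get(k, 0) / q_total) ** (1 - alpha)
                for k in set(p_counts) | set(q_counts))
    return 1.0 - coeff

# Atoms are shared between train and test; compounds are disjoint.
train_atoms = Counter({"cat": 1, "dog": 1, "Number=Sing": 1, "Number=Plur": 1})
test_atoms = Counter(train_atoms)
train_compounds = Counter({"cat|Number=Sing": 1, "dog|Number=Plur": 1})
test_compounds = Counter({"cat|Number=Plur": 1, "dog|Number=Sing": 1})

d_a = divergence(train_atoms, test_atoms, alpha=0.5)       # atom divergence
d_c = divergence(train_compounds, test_compounds, alpha=0.1)  # compound divergence
```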
Once the divergences are defined, we can split a corpus of natural language sentences into training and testing sets with arbitrary compound and atom divergence values. For this, we use a simple greedy algorithm, sketched in Algorithm 1. For a maximum compound divergence split, the score is calculated as

$$
\operatorname{score}\left( {Q, P}\right) = {\mathcal{D}}_{C}\left( {Q\parallel P}\right) - {\mathcal{D}}_{A}\left( {Q\parallel P}\right) ,
$$

and in general, for any desired compound divergence value $c$:

$$
\operatorname{score}\left( {Q, P}\right) = - \left| {c - {\mathcal{D}}_{C}\left( {Q\parallel P}\right) }\right| - {\mathcal{D}}_{A}\left( {Q\parallel P}\right) .
$$

In practice, we do not have resources to calculate the $\mathop{\max }\limits_{{x \in G}}$ score. Instead, at each iteration we take a subset ${G}^{\prime } \subset G$, say 1000 sentences, and calculate $\mathop{\max }\limits_{{x \in {G}^{\prime }}}$ score.
---
${}^{4}$ See https://universaldependencies.org/docs/fi/feat/ for the list of Finnish morphological tags.
---
Algorithm 1 Data division algorithm.

---

Input: $G$ $\vartriangleright$ Corpus of sentences
Input: $N$ $\vartriangleright$ Use $N$ sentences from $G$
Input: $a$ $\vartriangleright$ Lower bound for $\left| V\right| /\left| W\right|$
Input: $b$ $\vartriangleright$ Upper bound for $\left| V\right| /\left| W\right|$
Output: $V, W\; \vartriangleright$ Train set, test set
$V \leftarrow \left\{ {x{ \in }_{R}G}\right\} \; \vartriangleright$ A random sentence
$W \leftarrow \varnothing$
$G \leftarrow G \smallsetminus V$
for $i \leftarrow 1$ to $N$ do
&nbsp;&nbsp;&nbsp;&nbsp;$r \leftarrow \left| V\right| /\left| W\right|$
&nbsp;&nbsp;&nbsp;&nbsp;${s}_{V} \leftarrow \mathop{\max }\limits_{{x \in G}}\operatorname{score}\left( {V\cup \{ x\} , W}\right)$
&nbsp;&nbsp;&nbsp;&nbsp;${i}_{V} \leftarrow {\operatorname{argmax}}_{x \in G}\operatorname{score}\left( {V\cup \{ x\} , W}\right)$
&nbsp;&nbsp;&nbsp;&nbsp;${s}_{W} \leftarrow \mathop{\max }\limits_{{x \in G}}\operatorname{score}\left( {V, W\cup \{ x\} }\right)$
&nbsp;&nbsp;&nbsp;&nbsp;${i}_{W} \leftarrow {\operatorname{argmax}}_{x \in G}\operatorname{score}\left( {V, W\cup \{ x\} }\right)$
&nbsp;&nbsp;&nbsp;&nbsp;if $\left( {{s}_{V} > {s}_{W} \land r < b}\right) \vee r < a$ then
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$V \leftarrow V \cup \left\{ {i}_{V}\right\}$
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$G \leftarrow G \smallsetminus \left\{ {i}_{V}\right\}$
&nbsp;&nbsp;&nbsp;&nbsp;else
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$W \leftarrow W \cup \left\{ {i}_{W}\right\}$
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$G \leftarrow G \smallsetminus \left\{ {i}_{W}\right\}$
&nbsp;&nbsp;&nbsp;&nbsp;end if
end for
---
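Algorithm 1, together with the score function defined in Section 3, can be sketched in Python. This is an illustrative, self-contained reimplementation with our own names and toy defaults, not the paper's released code. As in the pseudocode, the train set $V$ is seeded with one random sentence; while $W$ is empty, the size ratio is treated as infinite, which routes the first greedy picks to $W$.

```python
import random
from collections import Counter

def divergence(p, q, alpha):
    """1 - Chernoff coefficient of two occurrence-count distributions."""
    p_total, q_total = sum(p.values()), sum(q.values())
    if p_total == 0 or q_total == 0:
        return 1.0
    return 1.0 - sum((p.get(k, 0) / p_total) ** alpha
                     * (q.get(k, 0) / q_total) ** (1 - alpha)
                     for k in set(p) | set(q))

def score(v_atoms, v_comps, w_atoms, w_comps, target_dc):
    """-|c - D_C(V||W)| - D_A(V||W), maximised by the greedy loop."""
    return (-abs(target_dc - divergence(v_comps, w_comps, 0.1))
            - divergence(v_atoms, w_atoms, 0.5))

def greedy_split(sentences, n, a=3.0, b=5.0, target_dc=1.0,
                 sample_size=1000, seed=0):
    """sentences: list of (atom Counter, compound Counter) pairs.
    Seeds V with one random sentence, then greedily assigns n more
    sentences to V (train) or W (test)."""
    rng = random.Random(seed)
    pool = set(range(len(sentences)))
    first = rng.choice(sorted(pool))
    pool.remove(first)
    V, W = [first], []
    va, vc = Counter(sentences[first][0]), Counter(sentences[first][1])
    wa, wc = Counter(), Counter()

    def cand_score(idx, to_train):
        sa, sc = sentences[idx]
        if to_train:
            return score(va + sa, vc + sc, wa, wc, target_dc)
        return score(va, vc, wa + sa, wc + sc, target_dc)

    for _ in range(min(n, len(pool))):
        # Evaluate only a random subset G' of the remaining sentences,
        # as in the paper (max over G' subset of G).
        cands = rng.sample(sorted(pool), min(sample_size, len(pool)))
        best_v = max(cands, key=lambda i: cand_score(i, True))
        best_w = max(cands, key=lambda i: cand_score(i, False))
        # An empty W gives an infinite ratio, routing early picks to W.
        r = len(V) / len(W) if W else float("inf")
        if (cand_score(best_v, True) > cand_score(best_w, False)
                and r < b) or r < a:
            V.append(best_v)
            va.update(sentences[best_v][0]); vc.update(sentences[best_v][1])
            pool.remove(best_v)
        else:
            W.append(best_w)
            wa.update(sentences[best_w][0]); wc.update(sentences[best_w][1])
            pool.remove(best_w)
    return V, W
```

The subsampling of candidates (`sample_size`) mirrors the $G' \subset G$ approximation described above; the ratio bounds $a$ and $b$ keep the train/test size ratio within the desired range.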
As mentioned above, this method can be used for any corpus that consists of natural language sentences for which the morphological tags can be obtained. In the next section we use this method to assess morphological generalisation in machine translation.
## 4 Experiments and results
### 4.1 NMT model training setup and data
We chose Finnish as the language we analyse because of its rich morphology and because there is a good morphological tagger available for Finnish. We use the English-Finnish parallel corpus from the Tatoeba challenge data release (Tiedemann, 2020). We first apply some heuristics provided by Aulamo et al. (2020) to remove noisy data, and restrict the maximum sentence length to 100 words, after which we take a random sample of 1 million sentence pairs.
We use the OpenNMT-py (Klein et al., 2017) library to train Finnish-English Transformer NMT models using the hyperparameters provided in the example config file${}^{5}$, which include the standard 6 transformer layers with 8 heads and a hidden dimension of 512, as in Vaswani et al. (2017). We train the models until convergence, or until a maximum of 33000 steps, with 2000 warm-up steps and a batch size of 4096 tokens.
For more details about the setup, see the Github repository linked on the first page.
### 4.2 The effect of compound divergence on translation performance
The basic experiment we propose is to make at least two different train/test splits of a corpus, using ${\mathcal{D}}_{C}$ values of 0 and 1, respectively (keeping ${\mathcal{D}}_{A} = 0$), and assess the change in translation performance (for which we use BLEU (Papineni et al., 2002) and chrF2++ (Popović, 2017) as metrics). Since with ${\mathcal{D}}_{C} = 1$ there are more unseen word forms in the test set, we expect a decrease in translation performance from ${\mathcal{D}}_{C} = 0$ to ${\mathcal{D}}_{C} = 1$ that is caused by the ${\mathcal{D}}_{C} = 1$ test set requiring more morphological generalisation capacity.
We show empirically the decrease in performance in Section 4.3, but the cause of this decrease is of course more difficult to verify exactly. The atom and compound distributions are the only things we explicitly control when splitting the corpus, and we only require the compound divergence to differ between different data splits. Therefore, we assume the differing compound divergence to be the cause of this effect, but to be more certain, we conduct two simple checks to look for confounding factors.
Firstly, an increase in the average sentence length could be another factor that makes one test set more difficult than another. Increasing the sequence length from training to test set is actually a method that has been proposed to test a certain type of compositional generalisation, sometimes called productivity (Hupkes et al., 2020; Raunak et al., 2019). We calculated the average sentence lengths of the train and test sets of the 8 different data splits that we obtained using 8 different random seeds for the data split algorithm. What we found is that for ${\mathcal{D}}_{C} = 1$ the average lengths in test sets are actually shorter (ranging from 11.35 to 11.66 words) than those for ${\mathcal{D}}_{C} = 0$ (ranging from 12.27 to 13.72 words). The average training set sentence lengths are similar for both ${\mathcal{D}}_{C}$ values, ranging from 8.66 to 8.79 for ${\mathcal{D}}_{C} = 0$ and from 8.65 to 8.73 for ${\mathcal{D}}_{C} = 1$. Thus we know that an increased difference between train and test set sentence lengths cannot explain the decrease in NMT performance from ${\mathcal{D}}_{C} = 0$ to ${\mathcal{D}}_{C} = 1$, since the difference is actually larger for ${\mathcal{D}}_{C} = 0$. The fact that the average sentence length in training sets is always significantly shorter than in test sets is an interesting unintended artefact of the data division algorithm that deserves further investigation in the future, but it does not confound our analysis.
---
${}^{5}$ https://github.com/OpenNMT/OpenNMT-py/blob/master/config/config-transformer-base-1GPU.yml
---
As the second sanity check, we evaluated the NMT models on a neutral test set to see if, for any reason, the training set would be in general worse with ${\mathcal{D}}_{C} = 1$ than with ${\mathcal{D}}_{C} = 0$, instead of only being worse for the specific test set that we have created. For this we used the Tatoeba challenge test set, which we did not use to train or tune the hyperparameters of any models. The results for the vocabulary size 1000 are presented in Figure 1. We used the models trained on the training sets from the data splits with compound divergences 0.0, 0.5 and 1.0. The compound divergences between these training sets and the Tatoeba challenge test set do correlate with the target ${\mathcal{D}}_{C}$ of the data split, but they range only from about 0.4 to 0.6.

Figure 1: Results on the Tatoeba challenge test set. The x-axis labels denote the compound divergences between the training sets and the test sets analysed later in Figure 2. That is, the divergence is not between the training sets and the Tatoeba challenge test set.
From Figure 1 we can see that the NMT models trained with different data sets, from data splits with different ${\mathcal{D}}_{C}$ values, do not show a similar decrease in performance on the neutral-ish Tatoeba challenge test set as on the test sets obtained from the data split algorithm. We take this to mean that the models trained on ${\mathcal{D}}_{C} = 1$ data splits are not in general worse than those trained with ${\mathcal{D}}_{C} = 0$ data splits, but only worse on the high-divergence test set.
### 4.3 The effect of BPE vocabulary size on morphological generalisation in NMT
Next, we make the assumption, based on the analysis in Section 4.2, that we can measure morphological generalisation by measuring the decrease of NMT performance between train/test splits of ${\mathcal{D}}_{C} = 0$ and ${\mathcal{D}}_{C} = 1$. Previous studies have suggested the hypothesis that NMT models with smaller BPE vocabularies are more capable of modelling morphological phenomena than those with larger vocabularies (for example Libovický and Fraser (2020)). In this section, we compare the morphological generalisation capacities of NMT models with different source-side (Finnish) vocabulary sizes, using the method we have proposed.

As a preliminary experiment, we tuned the BPE vocabulary size for our setup (see Section 4.1) on the Tatoeba challenge development set, and found the optimal size to be around 3000 BPE tokens for both the source and target languages. Since we are interested in the Finnish morphology, next we kept the target (English) vocabulary size constant and varied only the source-side vocabulary size. We chose 7 different vocabulary sizes, 3 larger and 3 smaller than the optimal 3000, and evaluated them with target compound divergence values of 0.0, 0.25, 0.5, 0.75 and 1.0. The sizes of the test sets are in the order of a few tens of thousands, or a little over a hundred thousand, sentences. The relatively large test set size leads to statistical significance even for small BLEU differences (see Table 3 for details).
From the BLEU results for ${\mathcal{D}}_{C} = 0$ and ${\mathcal{D}}_{C} = 1$ in Figure 2 we can see that as we either increase or decrease the vocabulary size from 3000, the performance drops, but it drops slightly differently w.r.t. ${\mathcal{D}}_{C}$. This effect is most conspicuous for the pair of sizes 500 and 18000. The larger vocabulary performs slightly better when there is less need for morphological generalisation, but the small vocabulary performs better when it is needed more. In general, from this figure we can see that the vocabulary size roughly correlates with the angle of the downward slope, suggesting that the larger the vocabulary, the poorer the capacity for morphological generalisation.

Figure 2: Different source vocabulary sizes evaluated with minimum and maximum (0 and 1) compound divergence data splits. Compound divergence value 1 requires more morphological generalisation. The larger the vocabulary, the steeper the slope, suggesting poorer ability to generalise. For more details, see Table 3 in Appendix A.
To investigate the effect of the initialisation of the data split algorithm on the results, we split the same corpus starting from 8 different random initialisations, and trained NMT models for each data split. For this, we chose two pairs of vocabulary sizes that showed the most clearly contrasting performance w.r.t. ${\mathcal{D}}_{C}$: 500 & 18000 and 1000 & 6000. The main results are presented in Table 2. For these results, the test sets of the 8 random seeds are concatenated together to create exceptionally large test sets of around ${400}\mathrm{k}$-${500}\mathrm{k}$ sentences. The results for the individual data splits are presented in Appendix A in Table 4.
From these results we can see the same contrasting performance of the small and large vocabularies w.r.t. the different compound divergence values. The difference is small but statistically significant and consistent. The models with small vocabularies show better performance than those with large ones when morphological generalisation is needed, and vice versa when morphological generalisation is not needed as much.
## 5 Discussion and future work
In Section 3, we proposed an application of DBCA to divide any corpus of sentences for which morphological tags are available into training and test sets with similar distributions of lemmas and morphological tags but contrasting distributions of word forms, in order to assess morphological generalisation. With this method, we can take a large proportion of the morphological phenomena of a selected language into consideration (in our experiments, 335 different morphological categories that together with about 8k lemmas create 69k unique Finnish word forms), and evaluate the effects of the contrasting train/test distributions of the word forms in machine translation. This enables a different, complementary type of assessment of morphological generalisation than previous synthetic benchmarks (mainly Burlot and Yvon (2017)) that focus on a smaller number of morphological phenomena. The benefit of our method is its comprehensiveness, focusing on the corpus-wide distributions of word forms.
|
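The divergence between the train and test distributions is measured in DBCA (Keysers et al., 2020) with a Chernoff coefficient between the two discrete distributions (cf. Chung et al., 1989). A minimal sketch, with helper names of our own and α = 0.1 as used by Keysers et al. for compound distributions:

```python
from collections import Counter

def normalise(counter):
    """Turn a frequency Counter into a probability distribution."""
    total = sum(counter.values())
    return {k: v / total for k, v in counter.items()}

def chernoff_divergence(dist_p, dist_q, alpha=0.1):
    """1 minus the Chernoff coefficient between two discrete
    distributions: 0 for identical distributions, 1 for disjoint ones."""
    keys = set(dist_p) | set(dist_q)
    coeff = sum(dist_p.get(k, 0.0) ** alpha * dist_q.get(k, 0.0) ** (1 - alpha)
                for k in keys)
    return 1.0 - coeff
```

A compound divergence of 1 then corresponds to train and test sets whose word-form distributions share no mass, while the lemma and tag (atom) distributions are kept similar.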
| 420 |
+
|
| 421 |
+
Using only corpus-wide metrics such as BLEU, as we used, does not allow for the qualitative evaluation that the synthetic benchmarks offer. In the terminology of Burlot and Yvon (2017), this holistic, document-level evaluation can be contrasted with analytic evaluation that focuses more specifically on difficulties in morphology. A trick that could enable a more analytic assessment of the translations of the unseen word forms would be to align the words in the source sentences with the words in the reference translations and in the predicted translations, and evaluate only the translations of the parts of the sentences that correspond to the unseen word forms. A similar method has been used previously, for example by Bau et al. (2019) and Stanovsky et al. (2019), and we aim to experiment with this method in future work.
|
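Under the assumption that word alignments are available (the alignment maps and function name below are hypothetical, for illustration only), this word-alignment trick could be sketched as:

```python
def unseen_form_accuracy(src_tokens, ref_tokens, hyp_tokens,
                         src2ref, src2hyp, unseen_forms):
    """Evaluate only the translations of unseen source word forms.
    src2ref and src2hyp map source token indices to the aligned target
    token indices in the reference and the hypothesis, respectively;
    unseen_forms is the set of word forms absent from the training data."""
    correct = total = 0
    for i, tok in enumerate(src_tokens):
        if tok not in unseen_forms:
            continue
        if i not in src2ref or i not in src2hyp:
            continue  # skip tokens without an alignment on either side
        total += 1
        if ref_tokens[src2ref[i]] == hyp_tokens[src2hyp[i]]:
            correct += 1
    return correct / total if total else 0.0
```

Exact string match against the aligned reference token is of course a strict criterion; a real analytic evaluation might compare lemmas and morphological tags instead.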
| 426 |
+
|
| 427 |
+
Especially combined with this word-alignment trick, we could also make our evaluation more fine-grained (a concept also from Burlot and Yvon (2017)); that is, our evaluation could differentiate between different types of mistakes. Since we have the morphological tags, we could sort the words by morphological category and compare the translation accuracies to look for any especially difficult categories for the translation models.
|
| 430 |
+
|
| 431 |
+
<table><tr><td colspan="3">chrF2++</td><td colspan="2">BLEU</td></tr><tr><td>Vocab</td><td>${\mathcal{D}}_{C} = 0$</td><td>${\mathcal{D}}_{C} = 1$</td><td>${\mathcal{D}}_{C} = 0$</td><td>${\mathcal{D}}_{C} = 1$</td></tr><tr><td>500</td><td>${51.20}\left( {{51.20} \pm {0.05}}\right)$</td><td>49.33 (49.33 ± 0.05)</td><td>27.50 (27.50 ± 0.07)</td><td>25.4 (25.40 ± 0.07)</td></tr><tr><td>18000</td><td>51.29 (51.29 ± 0.05)</td><td>49.04 (49.05 ± 0.05)</td><td>$\mathbf{{27.69}\left( {{27.69} \pm {0.07}}\right) }$</td><td>25.18 (25.18 ± 0.07)</td></tr><tr><td/><td>$p = {0.0003}$</td><td>$p = {0.0003}$</td><td>$p = {0.0003}$</td><td>$p = {0.0003}$</td></tr><tr><td>1000</td><td>51.78 (51.78 ± 0.05)</td><td>49.79 (49.79 ± 0.05)</td><td>${28.17}\left( {{28.17} \pm {0.07}}\right)$</td><td>$\mathbf{{25.89}\left( {{25.89} \pm {0.07}}\right) }$</td></tr><tr><td>6000</td><td>51.83 (51.83 ± 0.05)</td><td>49.67 (49.67 ± 0.05)</td><td>$\mathbf{{28.24}\left( {{28.24} \pm {0.07}}\right) }$</td><td>25.80 (25.80 ± 0.07)</td></tr><tr><td/><td>$p = {0.0003}$</td><td>$p = {0.0003}$</td><td>$p = {0.0003}$</td><td>$p = {0.0003}$</td></tr></table>
|
| 432 |
+
|
| 433 |
+
Table 2: Pairwise comparisons of the source vocabulary sizes 500 and 18000, and 1000 and 6000. The results are calculated for the concatenated test sets generated with 8 random seeds. Inside brackets is the true mean estimated from bootstrap resampling and the 95% confidence interval. The results for the individual seeds are presented in Appendix A in Table 4 and Figure 3.
|
| 434 |
+
|
| 438 |
+
|
| 440 |
+
|
| 441 |
+
To demonstrate the use of our proposed method, we compared NMT models with different BPE vocabulary sizes, since vocabulary size had been hypothesised to affect the capacity to model morphology in translation. Besides vocabulary size, there are many other model design choices that have been proposed to help either in generalisation or in capturing morphological phenomena. For example, factored NMT systems (García-Martínez et al., 2016) can cover more of the target-side vocabulary than subword-based NMT systems, which can help in modelling the morphology of the target language. It would be interesting to assess different model types, such as factored NMT systems or LSTM-based systems, to see how they compare with Transformers on our evaluation method.
|
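The BPE vocabularies compared here are learned with the merge-based procedure of Sennrich et al. (2016). As a toy re-implementation (for illustration only, not the paper's actual preprocessing pipeline), learning the merge operations from a word-frequency dictionary looks like this:

```python
from collections import Counter

def learn_bpe(word_freqs, num_merges):
    """Learn BPE merge operations (Sennrich et al., 2016) from a
    word -> frequency dict. Each merge joins the currently most
    frequent adjacent symbol pair; the vocabulary size grows with
    the number of merges."""
    vocab = {tuple(word) + ('</w>',): freq for word, freq in word_freqs.items()}
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        new_vocab = {}
        for symbols, freq in vocab.items():
            out, i = [], 0
            while i < len(symbols):
                if i < len(symbols) - 1 and (symbols[i], symbols[i + 1]) == best:
                    out.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            new_vocab[tuple(out)] = freq
        vocab = new_vocab
    return merges
```

A smaller merge count yields shorter subwords that compose more freely into unseen word forms, which is one intuition for why small vocabularies generalise better morphologically.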
| 444 |
+
|
| 445 |
+
The DBCA method is general, and could be applied to a wide variety of tasks and datasets. Our application of DBCA is more specific, but it still inherits some of the generality of the original method. Our method is directly applicable to any machine learning task where the data consists of sentences for which the morphological tags are available. In the future, we intend to extend our assessment of morphological generalisation to other languages, as well as to other NLP tasks, such as paraphrase detection.
|
| 446 |
+
|
| 447 |
+
## 6 Conclusion
|
| 448 |
+
|
| 449 |
+
We proposed a method to assess morphological generalisation by distribution-based compositionality assessment. Because this method is fully automated, it enables a more comprehensive assessment of morphological generalisation than previously proposed synthetic benchmarks, in terms of the number of inflection types we can evaluate. We used our method to assess NMT models with different BPE vocabulary sizes and found that models with smaller vocabularies are better at morphological generalisation than those with larger vocabularies. Lastly, we discussed the varied future directions that our generalisable method offers, such as assessing morphological generalisation in other NLP tasks besides NMT.
|
| 454 |
+
|
| 455 |
+
## References
|
| 456 |
+
|
| 460 |
+
|
| 461 |
+
Mikko Aulamo, Sami Virpioja, and Jörg Tiedemann. 2020. OpusFilter: A configurable parallel corpus filtering toolbox. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 150-156. Association for Computational Linguistics.
|
| 466 |
+
|
| 467 |
+
Dzmitry Bahdanau, Shikhar Murty, Michael Noukhovitch, Thien Huu Nguyen, Harm de Vries, and Aaron Courville. 2018. Systematic generalization: What is required and can it be learned? In International Conference on Learning Representations.
|
| 470 |
+
|
| 471 |
+
Anthony Bau, Yonatan Belinkov, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, and James R. Glass. 2019. Identifying and controlling important neurons in neural machine translation. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
|
| 474 |
+
|
| 475 |
+
Jean Berko. 1958. The child's learning of English morphology. Word, 14(2-3):150-177.
|
| 478 |
+
|
| 479 |
+
Franck Burlot and François Yvon. 2017. Evaluating the morphological competence of machine translation systems. In Proceedings of the Second Conference on Machine Translation, pages 43-55.
|
| 484 |
+
|
| 485 |
+
JK Chung, PL Kannappan, CT Ng, and PK Sahoo. 1989. Measures of distance between probability distributions. Journal of Mathematical Analysis and Applications, 138(1):280-292.
|
| 486 |
+
|
| 487 |
+
Maria Corkery, Yevgen Matusevych, and Sharon Goldwater. 2019. Are we there yet? Encoder-decoder neural networks as cognitive models of English past tense inflection. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3868-3877.
|
| 488 |
+
|
| 489 |
+
Verna Dankers, Elia Bruni, and Dieuwke Hupkes. 2022. The paradox of the compositionality of natural language: A neural machine translation case study. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4154-4175.
|
| 490 |
+
|
| 491 |
+
Lucia Donatelli and Alexander Koller. 2023. Compositionality in computational linguistics. Annual Review of Linguistics, 9.
|
| 494 |
+
|
| 495 |
+
Jerry A Fodor and Zenon W Pylyshyn. 1988. Connectionism and cognitive architecture: A critical analysis. Cognition, 28(1-2):3-71.
|
| 498 |
+
|
| 499 |
+
Mercedes García-Martínez, Loïc Barrault, and Fethi Bougares. 2016. Factored neural machine translation architectures. In Proceedings of the 13th International Conference on Spoken Language Translation.
|
| 500 |
+
|
| 501 |
+
Dieuwke Hupkes, Verna Dankers, Mathijs Mul, and Elia Bruni. 2020. Compositionality decomposed: How do neural networks generalise? Journal of Artificial Intelligence Research, 67:757-795.
|
| 502 |
+
|
| 504 |
+
|
| 505 |
+
Dieuwke Hupkes, Mario Giulianelli, Verna Dankers, Mikel Artetxe, Yanai Elazar, Tiago Pimentel, Christos Christodoulopoulos, Karim Lasri, Naomi Saphra, Arabella Sinclair, Dennis Ulmer, Florian Schottmann, Khuyagbaatar Batsuren, Kaiser Sun, Koustuv Sinha, Leila Khalatbari, Maria Ryskina, Rita Frieske, Ryan Cotterell, and Zhijing Jin. 2022. State-of-the-art generalisation research in NLP: a taxonomy and review. CoRR.
|
| 512 |
+
|
| 513 |
+
Jenna Kanerva, Filip Ginter, Niko Miekka, Akseli Leino, and Tapio Salakoski. 2018. Turku neural parser pipeline: An end-to-end system for the CoNLL 2018 shared task. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. Association for Computational Linguistics.
|
| 514 |
+
|
| 515 |
+
Daniel Keysers, Nathanael Schärli, Nathan Scales, Hylke Buisman, Daniel Furrer, Sergii Kashubin, Nikola Momchev, Danila Sinopalnikov, Lukasz Stafiniak, Tibor Tihon, Dmitry Tsarkov, Xiao Wang, Marc van Zee, and Olivier Bousquet. 2020. Measuring compositional generalization: A comprehensive method on realistic data. In International Conference on Learning Representations.
|
| 518 |
+
|
| 519 |
+
Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation. In Proceedings of ACL 2017, System Demonstrations, pages 67-72, Vancouver, Canada. Association for Computational Linguistics.
|
| 520 |
+
|
| 521 |
+
Jordan Kodner, Salam Khalifa, Khuyagbaatar Batsuren, Hossep Dolatian, Ryan Cotterell, Faruk Akkus, Antonios Anastasopoulos, Taras Andrushko, Aryaman Arora, Nona Atanalov, et al. 2022. SIGMORPHON-UniMorph 2022 shared task 0: Generalization and typologically diverse morphological inflection. In Proceedings of the 19th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 176-203.
|
| 522 |
+
|
| 523 |
+
Brenden Lake and Marco Baroni. 2018. Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. In International Conference on Machine Learning, pages 2873-2882. PMLR.
|
| 524 |
+
|
| 525 |
+
Yafu Li, Yongjing Yin, Yulong Chen, and Yue Zhang. 2021. On compositional generalization of neural machine translation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4767-4780.
|
| 526 |
+
|
| 527 |
+
Jindrich Libovický and Alexander Fraser. 2020. Towards reasonably-sized character-level transformer NMT by finetuning subword systems. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2572-2579.
|
| 528 |
+
|
| 529 |
+
Gary F Marcus. 1998. Rethinking eliminative connectionism. Cognitive psychology, 37(3):243-282.
|
| 530 |
+
|
| 531 |
+
Gary F. Marcus, Steven Pinker, Michael Ullman, Michelle Hollander, T. John Rosen, Fei Xu, and Harald Clahsen. 1992. Overregularization in language acquisition. Monographs of the Society for Research in Child Development, 57(4):i-178.
|
| 532 |
+
|
| 533 |
+
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311-318.
|
| 534 |
+
|
| 535 |
+
Barbara Partee et al. 1995. Lexical semantics and compositionality. An invitation to cognitive science, 1:311-360.
|
| 536 |
+
|
| 537 |
+
Maja Popović. 2017. chrF++: words helping character n-grams. In Proceedings of the Second Conference on Machine Translation, pages 612-618.
|
| 538 |
+
|
| 539 |
+
Vikas Raunak, Vaibhav Kumar, and Florian Metze. 2019. On compositionality in neural machine translation. arXiv preprint arXiv:1911.01497.
|
| 540 |
+
|
| 574 |
+
|
| 575 |
+
David E Rumelhart and James L McClelland. 1986. On learning the past tenses of English verbs.
|
| 576 |
+
|
| 577 |
+
David Saxton, Edward Grefenstette, Felix Hill, and Pushmeet Kohli. 2019. Analysing mathematical reasoning abilities of neural models. ArXiv, abs/1904.01557.
|
| 580 |
+
|
| 581 |
+
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725.
|
| 584 |
+
|
| 585 |
+
Peter Shaw, Ming-Wei Chang, Panupong Pasupat, and Kristina Toutanova. 2021. Compositional generalization and natural language variation: Can a semantic parsing approach handle both? In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 922-938.
|
| 594 |
+
|
| 596 |
+
|
| 597 |
+
Anders Søgaard, Sebastian Ebert, Jasmijn Bastings, and Katja Filippova. 2021. We need to talk about random splits. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1823-1832.
|
| 606 |
+
|
| 607 |
+
Gabriel Stanovsky, Noah A. Smith, and Luke Zettlemoyer. 2019. Evaluating gender bias in machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1679-1684, Florence, Italy. Association for Computational Linguistics.
|
| 618 |
+
|
| 619 |
+
Jörg Tiedemann. 2020. The Tatoeba Translation Challenge - Realistic data sets for low resource and multilingual MT. In Proceedings of the Fifth Conference on Machine Translation, pages 1174-1182, Online. Association for Computational Linguistics.
|
| 628 |
+
|
| 629 |
+
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in Neural Information Processing Systems, 30.
|
| 636 |
+
|
| 637 |
+
Aimen Zerroug, Mohit Vaishnav, Julien Colin, Sebastian Musslick, and Thomas Serre. 2022. A benchmark for compositional visual reasoning. arXiv preprint arXiv:2206.05379.
|
| 642 |
+
|
| 643 |
+
## A Detailed results
|
| 644 |
+
|
| 660 |
+
|
| 662 |
+
|
| 663 |
+
<table><tr><td>Vocab size</td><td/><td colspan="4">BLEU per Compound divergence</td></tr><tr><td/><td>0.00</td><td>0.25</td><td>0.50</td><td>0.75</td><td>1.00</td></tr><tr><td>500</td><td>27.3182 (27.3207 ± 0.1694)</td><td>26.5671 (26.5661 ± 0.1970)</td><td>25.3562 (25.3538 ± 0.1735)</td><td>24.7654 (24.7629 ± 0.1839)</td><td>${25.4613}\left( {{25.4627} \pm {0.1744}}\right)$</td></tr><tr><td>1000</td><td>27.8639 (27.8654 ± 0.1826)</td><td>${27.3292}\left( {{27.3268} \pm {0.2033}}\right)$</td><td>${25.8704}\left( {{25.8681} \pm {0.1773}}\right)$</td><td>25.5556 (25.5516 ± 0.1824)</td><td>${25.8745}\left( {{25.8747} \pm {0.1751}}\right)$</td></tr><tr><td>2000</td><td>27.9148 (27.9170 ± 0.1798)</td><td>27.5814 (27.5795 ± 0.2033)</td><td>26.0737 (26.0715 ± 0.1756)</td><td>25.5343 (25.5310 ± 0.1846)</td><td>${25.8743}\left( {{25.8753} \pm {0.1746}}\right)$</td></tr><tr><td>3000</td><td>${28.0879}\left( {{28.0894} \pm {0.1825}}\right)$</td><td>${27.5439}\left( {{27.5422} \pm {0.2038}}\right)$</td><td>25.9751 (25.9734 ± 0.1739)</td><td>${25.6909}\left( {{25.6888} \pm {0.1842}}\right)$</td><td>25.9213 (25.9236 ± 0.1793)</td></tr><tr><td>6000</td><td>${28.0282}\left( {{28.0299} \pm {0.1828}}\right)$</td><td>${27.3659}\left( {{27.3625} \pm {0.2013}}\right)$</td><td>${25.9772}\left( {{25.9752} \pm {0.1801}}\right)$</td><td>${25.4389}\left( {{25.4360} \pm {0.1858}}\right)$</td><td>${25.7023}\left( {{25.7032} \pm {0.1722}}\right)$</td></tr><tr><td>9000</td><td>27.8178 (27.8196 ± 0.1850)</td><td>${27.2639}\left( {{27.2612} \pm {0.2072}}\right)$</td><td>25.7347 (25.7331 ± 0.1708)</td><td>${25.3648}\left( {{25.3612} \pm {0.1857}}\right)$</td><td>${25.5939}\left( {{25.5947} \pm {0.1790}}\right)$</td></tr><tr><td>18000</td><td>${27.4262}\left( {{27.4281} \pm {0.1829}}\right)$</td><td>${26.8142}\left( {{26.8115} \pm {0.2053}}\right)$</td><td>${25.3575}\left( {{25.3540} \pm {0.1744}}\right)$</td><td>24.7417 (24.7397 ± 0.1873)</td><td>${25.0643}\left( {{25.0649} \pm 
{0.1742}}\right)$</td></tr><tr><td colspan="6">chrF2++ per Compound divergence</td></tr><tr><td>500</td><td>51.0092 (51.0110 ± 0.1375)</td><td>${50.5787}\left( {{50.5782} \pm {0.1611}}\right)$</td><td>49.7497 (49.7488 ± 0.1425)</td><td>49.2403 (49.2380 ± 0.1564)</td><td>49.1858 (49.1856 ± 0.1385)</td></tr><tr><td>1000</td><td>${51.5254}\left( {{51.5268} \pm {0.1395}}\right)$</td><td>51.3332 (51.3315 ± 0.1625)</td><td>${50.3049}\left( {{50.3037} \pm {0.1403}}\right)$</td><td>49.9848 (49.9819 ± 0.1507)</td><td>49.5935 (49.5925 ± 0.1429)</td></tr><tr><td>2000</td><td>${51.5363}\left( {{51.5383} \pm {0.1392}}\right)$</td><td>${51.5222}\left( {{51.5204} \pm {0.1635}}\right)$</td><td>50.3994 (50.3982 ± 0.1448)</td><td>49.9124 (49.9093 ± 0.1536)</td><td>49.6822 (49.6818 ± 0.1421)</td></tr><tr><td>3000</td><td>${51.6843}\left( {{51.6853} \pm {0.1378}}\right)$</td><td>${51.4699}\left( {{51.4687} \pm {0.1622}}\right)$</td><td>50.4017 (50.4011 ± 0.1409)</td><td>${50.0425}\left( {{50.0401} \pm {0.1538}}\right)$</td><td>49.6165 (49.6170 ± 0.1411)</td></tr><tr><td>6000</td><td>${51.6609}\left( {{51.6628} \pm {0.1375}}\right)$</td><td>${51.3303}\left( {{51.3282} \pm {0.1614}}\right)$</td><td>${50.3249}\left( {{50.3242} \pm {0.1413}}\right)$</td><td>49.792 (49.7897 ± 0.1551)</td><td>49.4806 (49.4801 ± 0.1417)</td></tr><tr><td>9000</td><td>51.3687 ( ${51.3704} \pm {0.1383}$ )</td><td>${51.09}\left( {{51.0879} \pm {0.1625}}\right)$</td><td>${50.0669}\left( {{50.0659} \pm {0.1414}}\right)$</td><td>49.7751 (49.7725 ± 0.1557)</td><td>49.3615 (49.3612 ± 0.1396)</td></tr><tr><td>18000</td><td>51.0249 (51.0266 ± 0.1391)</td><td>${50.7775}\left( {{50.7761} \pm {0.1618}}\right)$</td><td>49.7367 (49.7352 ± 0.1428)</td><td>49.2308 (49.2288 ± 0.1518)</td><td>${48.7807}\left( {{48.7799} \pm {0.1448}}\right)$</td></tr></table>
|
| 664 |
+
|
| 665 |
+
Table 3: The BLEU and chrF2++ results for the different source-side (Finnish) BPE vocabulary sizes and different compound divergence values. Inside brackets is the true mean estimated from bootstrap resampling and the 95% confidence interval.
|
| 666 |
+
|
| 668 |
+
|
| 674 |
+
|
| 675 |
+
<table><tr><td colspan="4">chrF2++</td><td colspan="2">BLEU</td></tr><tr><td>Seed</td><td>Vocab</td><td>${\mathcal{D}}_{C} = 0$</td><td>${\mathcal{D}}_{C} = 1$</td><td>${\mathcal{D}}_{C} = 0$</td><td>${\mathcal{D}}_{C} = 1$</td></tr><tr><td>11</td><td>500 18000</td><td>${51.01}\left( {{51.01} \pm {0.14}}\right)$ 51.02 ( ${51.03} \pm {0.14}$ ) p = 0.2439</td><td>49.19 (49.19 ± 0.14) 48.78 (48.78 ± 0.14) $\mathrm{p} = {0.0003}$</td><td>27.32 (27.32 ± 0.17) 27.43 (27.43 ± 0.18) p = 0.0243</td><td>25.46 (25.46 ± 0.17) ${25.06}\left( {{25.06} \pm {0.17}}\right)$ p = 0.0003</td></tr><tr><td>22</td><td>500 18000</td><td>${51.01}\left( {{51.01} \pm {0.14}}\right)$ ${50.85}\left( {{50.85} \pm {0.14}}\right)$ p = 0.0003</td><td>49.08 (49.08 ± 0.15) 49.05 (49.05 ± 0.15) p = 0.1913</td><td>${27.3}\left( {{27.3} \pm {0.18}}\right)$ 27.17 (27.17 ± 0.18) p = 0.0107</td><td>${25.2}\left( {{25.2} \pm {0.18}}\right)$ ${25.1}\left( {{25.1} \pm {0.18}}\right)$ p = 0.053</td></tr><tr><td>33</td><td>500 18000</td><td>51.07 (51.07 ± 0.14) 50.97 (50.97 ± 0.14) p = 0.0047</td><td>49.37 (49.37 ± 0.17) 49.04 (49.04 ± 0.17) p = 0.0003</td><td>27.37 (27.37 ± 0.18) 27.3 (27.3 ± 0.18) p = 0.092</td><td>${25.09}\left( {{25.09} \pm {0.2}}\right)$ 24.83 (24.83 ± 0.2) p = 0.0003</td></tr><tr><td>44</td><td>500 18000</td><td>${52.02}\left( {{52.02} \pm {0.17}}\right)$ ${52.44}\left( {{52.44} \pm {0.17}}\right)$ p = 0.0003</td><td>49.7 $\left( {{49.7} \pm {0.18}}\right)$ 49.43 (49.43 ± 0.17) $\mathrm{p} = {0.0003}$</td><td>${28.3}\left( {{28.3} \pm {0.21}}\right)$ ${28.72}\left( {{28.72} \pm {0.21}}\right)$ p = 0.0003</td><td>${25.8}\left( {{25.8} \pm {0.22}}\right)$ ${25.63}\left( {{25.63} \pm {0.22}}\right)$ p = 0.0077</td></tr><tr><td>55</td><td>500 18000</td><td>${52.33}\left( {{52.34} \pm {0.18}}\right)$ ${52.76}\left( {{52.76} \pm {0.18}}\right)$ p = 0.0003</td><td>49.34 (49.34 ± 0.16) 49.04 (49.04 ± 0.16) p = 0.0003</td><td>29.04 (29.04 ± 0.23) 29.58 (29.58 ± 0.24) p = 
0.0003</td><td>${25.29}\left( {{25.29} \pm {0.2}}\right)$ ${25.08}\left( {{25.08} \pm {0.2}}\right)$ p = 0.001</td></tr><tr><td>66</td><td>500 18000</td><td>${50.98}\left( {{50.98} \pm {0.14}}\right)$ 51.06 (51.06 ± 0.14) p = 0.0183</td><td>49.24 (49.24 ± 0.14) ${48.87}\left( {{48.87} \pm {0.14}}\right)$ p = 0.0003</td><td>27.12 (27.12 ± 0.18) 27.4 (27.4 ± 0.18) p = 0.0003</td><td>${25.31}\left( {{25.31} \pm {0.18}}\right)$ ${25.04}\left( {{25.04} \pm {0.17}}\right)$ p = 0.0003</td></tr><tr><td>77</td><td>500 18000</td><td>${50.84}\left( {{50.83} \pm {0.14}}\right)$ ${50.68}\left( {{50.68} \pm {0.14}}\right)$ p = 0.0007</td><td>49.46 (49.46 ± 0.14) 49.22 (49.22 ± 0.14) p = 0.0003</td><td>27.12 (27.12 ± 0.18) 27.06 (27.06 ± 0.18) p = 0.1186</td><td>${25.41}\left( {{25.4} \pm {0.16}}\right)$ ${25.25}\left( {{25.25} \pm {0.17}}\right)$ $p = {0.0023}$</td></tr><tr><td>88</td><td>500 18000</td><td>${50.97}\left( {{50.97} \pm {0.14}}\right)$ 51.37 (51.37 ± 0.14) $p = {0.0003}$</td><td>49.38 (49.38 ± 0.14) 49.05 (49.05 ± 0.14) p = 0.0003</td><td>${27.22}\left( {{27.22} \pm {0.18}}\right)$ 27.81 (27.81 ± 0.18) p = 0.0003</td><td>${25.61}\left( {{25.61} \pm {0.17}}\right)$ ${25.43}\left( {{25.43} \pm {0.18}}\right)$ p = 0.0003</td></tr><tr><td>11</td><td>1000 6000</td><td>51.53 (51.53 ± 0.14) 51.66 (51.66 ± 0.14) p = 0.0003</td><td>49.59 (49.59 ± 0.14) 49.48 (49.48 ± 0.14) p = 0.0017</td><td>${27.86}\left( {{27.87} \pm {0.18}}\right)$ ${28.03}\left( {{28.03} \pm {0.18}}\right)$ p = 0.0013</td><td>${25.87}\left( {{25.87} \pm {0.18}}\right)$ ${25.7}\left( {{25.7} \pm {0.17}}\right)$ p = 0.001</td></tr><tr><td>22</td><td>1000 6000</td><td>51.46 (51.46 ± 0.14) 51.47 (51.47 ± 0.14) p = 0.3059</td><td>49.64 (49.64 ± 0.15) 49.61 (49.61 ± 0.15) p = 0.1786</td><td>27.9 (27.9 ± 0.18) 27.94 (27.94 ± 0.19) p = 0.1519</td><td>${25.69}\left( {{25.69} \pm {0.18}}\right)$ 25.64 (25.64 ± 0.18) p = 0.1383</td></tr><tr><td>33</td><td>1000 6000</td><td>${51.59}\left( {{51.59} \pm 
{0.14}}\right)$ 51.63 (51.63 ± 0.14) p = 0.117</td><td>49.7 (49.7 ± 0.17) 49.67 (49.68 ± 0.17) p = 0.2073</td><td>27.89 (27.88 ± 0.18) ${28.02}\left( {{28.02} \pm {0.18}}\right)$ p = 0.0047</td><td>${25.45}\left( {{25.45} \pm {0.2}}\right)$ ${25.51}\left( {{25.51} \pm {0.21}}\right)$ p = 0.1276</td></tr><tr><td>44</td><td>1000 6000</td><td>${52.67}\left( {{52.67} \pm {0.16}}\right)$ ${52.68}\left( {{52.68} \pm {0.16}}\right)$ p = 0.2809</td><td>${50.32}\left( {{50.32} \pm {0.17}}\right)$ ${50.06}\left( {{50.06} \pm {0.18}}\right)$ p = 0.0003</td><td>29.01 (29.01 ± 0.21) 29.01 (29.01 ± 0.22) p = 0.3949</td><td>${26.53}\left( {{26.53} \pm {0.22}}\right)$ 26.33 (26.33 ± 0.22) p = 0.0037</td></tr><tr><td>55</td><td>1000 6000</td><td>${52.8}\left( {{52.8} \pm {0.18}}\right)$ 53.02 (53.03 ± 0.18) p = 0.0003</td><td>49.92 (49.92 ± 0.16) 49.72 (49.73 ± 0.16) p = 0.0003</td><td>29.66 (29.66 ± 0.24) 29.84 (29.85 ± 0.24) p = 0.0017</td><td>${25.92}\left( {{25.92} \pm {0.2}}\right)$ ${25.73}\left( {{25.73} \pm {0.2}}\right)$ $p = {0.0003}$</td></tr><tr><td>66</td><td>1000 6000</td><td>${51.39}\left( {{51.39} \pm {0.14}}\right)$ 51.5 (51.49 ± 0.14) p = 0.0013</td><td>49.57 (49.57 ± 0.14) 49.37 (49.37 ± 0.14) p = 0.0003</td><td>27.64 (27.64 ± 0.18) 27.79 (27.79 ± 0.19) p = 0.0017</td><td>${25.71}\left( {{25.71} \pm {0.18}}\right)$ ${25.54}\left( {{25.54} \pm {0.18}}\right)$ p = 0.0017</td></tr><tr><td>77</td><td>1000 6000</td><td>51.51 ( ${51.51} \pm {0.15}$ ) 51.76 (51.76 ± 0.14) p = 0.0003</td><td>49.8 (49.8 ± 0.13) 49.74 (49.74 ± 0.14) p = 0.0453</td><td>27.86 (27.86 ± 0.18) ${28.09}\left( {{28.09} \pm {0.19}}\right)$ p = 0.0003</td><td>25.84 (25.84 ± 0.17) ${25.74}\left( {{25.74} \pm {0.17}}\right)$ p = 0.022</td></tr><tr><td>88</td><td>1000 6000</td><td>${51.9}\left( {{51.9} \pm {0.14}}\right)$ 51.6 (51.6 ± 0.14) p = 0.0003</td><td>49.95 (49.95 ± 0.14) 49.84 (49.84 ± 0.14) p = 0.0007</td><td>${28.29}\left( {{28.29} \pm {0.18}}\right)$ ${28.01}\left( {{28.01} \pm 
{0.18}}\right)$ p = 0.0003</td><td>${26.2}\left( {{26.2} \pm {0.18}}\right)$ ${26.23}\left( {{26.23} \pm {0.18}}\right)$ p = 0.2209</td></tr></table>
|
| 676 |
+
|
| 677 |
+
Table 4: Pairwise comparisons of the source vocabulary sizes 500 and 18000, and 1000 and 6000, on the minimum and maximum compound divergence data splits, for 8 random seeds of the data split algorithm. Inside brackets is the true mean estimated from bootstrap resampling and the 95% confidence interval.
|
| 678 |
+
|
| 786 |
+
|
| 787 |
+

|
| 788 |
+
|
| 789 |
+
Figure 3: Comparison of vocabulary sizes 500 and 18000 with compound divergence values 0.0, 0.25, 0.5, 0.75, and 1.0. The same results are partly shown in Table 4.
|
| 790 |
+
|
NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/1sGdp5g0NP/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,468 @@
§ EVALUATING MORPHOLOGICAL GENERALISATION IN MACHINE TRANSLATION BY DISTRIBUTION-BASED COMPOSITIONALITY ASSESSMENT

Anonymous Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymouser Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymousest Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain
§ ABSTRACT

Compositional generalisation refers to the ability to understand and generate an infinite number of novel meanings using a finite group of known primitives and a set of rules of how to combine them. The degree to which artificial neural networks can possess this ability is an open question. Recently, many evaluation methods and benchmarks have been proposed to test compositional generalisation, but not many have focused on the morphological level of language. We propose an application of the previously developed distribution-based compositionality assessment method to assess compositional generalisation on the level of morphology in NLP tasks, such as machine translation or paraphrase detection. We demonstrate the use of our method by comparing the morphological generalisation ability of translation models with different BPE vocabulary sizes. The evaluation method we propose suggests that small vocabularies help with morphological generalisation in NMT. ${}^{1}$
§ 1 INTRODUCTION

Natural languages usually adhere to the principle of compositionality, with the exception of idiomatic expressions. Partee et al. (1995) phrased this principle as "The meaning of a whole is a function of the meanings of the parts and of the way they are syntactically combined". Deriving from this principle, compositional generalisation (CG) refers to the capacity to understand and generate an infinite number of novel meanings using a finite group of known primitives and a set of rules of how to combine them. In the case of language, morphemes are combined into words and words in turn into phrases and sentences, using the syntactical rules of the language.

Neural networks have long been argued to lack the ability to generalise compositionally the way humans do (Fodor and Pylyshyn, 1988; Marcus, 1998). After the rapid improvement of neural NLP systems during the previous decade, this question has gained renewed interest. Many new evaluation methods have been developed to assess whether the modern sequence-to-sequence (seq2seq) architectures such as Transformers exhibit CG, since they certainly exhibit increasingly competent linguistic behaviour. For instance, in one of the seminal CG evaluation methods, called SCAN (Lake and Baroni, 2018), a seq2seq system has seen certain natural language commands in training and needs to combine them in novel ways in testing.

CG is a general capacity that can be seen as a desideratum in many NLP tasks, and in machine learning more generally. Furthermore, CG is a multifaceted concept that can be, and should be, decomposed into narrower, more manageable aspects that can be tested separately (Hupkes et al., 2020). For example, NLP systems should be able to generalise compositionally both on the level of words and on the level of morphology.

Although many aspects of CG have recently been evaluated in NLP (an extensive review is offered by Hupkes et al. (2022)), some aspects have remained without an evaluation method. We identify (see Section 2) a lack of methods to evaluate compositional morphological generalisation using only natural, non-synthetic, data. To fill this gap, we propose an application of the distribution-based compositionality assessment (DBCA) method (Keysers et al., 2020) (henceforth Keysers) to generate adversarial data splits to evaluate morphological generalisation in NLP systems.

Specifically, we split natural language corpora while controlling the distributions of lemmas and morphological features (atoms in the terminology of Keysers) on the one hand, and the distributions of the combinations of atoms (compounds, not to be confused with compound words) on the other hand. By requiring a low divergence between the atom distributions of the train and test sets, and a high divergence between the compound distributions, we can evaluate how well a system is able to generalise its morphological knowledge to unseen word forms.

${}^{1}$ A link to the Github repository anonymised.
For example, if our corpus included as atoms the lemmas "cat" and "dog", and the morphological tags Number=Sing and Number=Plur, a low divergence between the atom distributions would mean that both the training and test sets included all four of the atoms, and a high compound divergence would mean that the sets include different combinations of them, for instance training set {cat, dogs} and test set {cats, dog}.
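The toy example above can be checked mechanically. The sketch below assumes our own illustrative encoding (not the paper's data format): a compound is a `lemma|tags` string, and its atoms are the lemma plus each morphological tag.

```python
from collections import Counter

# Illustrative encoding (an assumption of this sketch): compound = "lemma|tags".
train = ["cat|Number=Sing", "dog|Number=Plur"]
test = ["cat|Number=Plur", "dog|Number=Sing"]

def atoms(compounds):
    """Count the atoms (lemma and tags) occurring in a list of compounds."""
    counts = Counter()
    for comp in compounds:
        counts.update(comp.split("|"))
    return counts

# Low atom divergence: both sets contain all four atoms ...
assert set(atoms(train)) == set(atoms(test)) == {"cat", "dog", "Number=Sing", "Number=Plur"}
# ... high compound divergence: the sets share no word form.
assert set(train).isdisjoint(test)
```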
Our main contributions are the following: firstly, we describe an application of DBCA to evaluate morphological generalisation in any NLP task in which the train and test data consist of sentences for which morphological tags are available. Secondly, we demonstrate how by this method we can evaluate morphological generalisation in machine translation without manual test design. And thirdly, using our proposed method, we assess the effect of the source language BPE (Sennrich et al., 2016) vocabulary size on Finnish-English NMT performance, and conclude that a smaller vocabulary helps the NMT models in morphological generalisation.

§ 2 BACKGROUND

In the broader field of machine learning, CG has been analysed in various domains besides that of natural language, such as visual question answering (Bahdanau et al., 2018), visual reasoning (Zerroug et al., 2022) and mathematics (Saxton et al., 2019), but in this work we focus on natural language tasks. Two reviews have recently been published about CG in NLP, of which Donatelli and Koller (2023) focus on semantic parsing and the aforementioned Hupkes et al. (2022) (henceforth Hupkes) take a broader view, reviewing generalisation in general, not only the compositional type.

Hupkes categorised NLP generalisation experiments along five dimensions, of which we discuss two here to motivate our work. The first is the type of generalisation, along which the compositional type is distinguished from the morphological type. Hupkes define compositionality as "the ability to systematically recombine previously learned elements to map new inputs made up from these elements to their correct output. In language, the inputs are 'forms' (e.g. phrases, sentences, larger pieces of discourse), and the output that they need to be mapped to is their meaning ...". In NMT, the translation works as a proxy to meaning, so that CG can be evaluated by evaluating the translation (Dankers et al., 2022) (other works that assess CG in NMT include Li et al. (2021) and Raunak et al. (2019)).

Hupkes contrast compositional with structural, including morphological, generalisation, where an output space is not required but which focuses on generation of the correct forms. These definitions suggest a clear divide between the categories, which is understandable when analysing the literature: morphological generalisation, specifically inflection generation, has for decades been studied in psycholinguistics (Berko, 1958; Marcus et al., 1992) and computational linguistics (Rumelhart and McClelland, 1986; Corkery et al., 2019; Kodner et al., 2022). These studies do not address the question of how the different inflections are mapped to different meanings, hence they do not address compositional generalisation. However, inflections do bear meaning, of course, and so compositional morphological generalisation is an ability that humans possess, and NLP systems ought to be tested on.

Although Hupkes do not categorise any experiments as assessing compositional morphological generalisation, there has been at least one that we think could be so categorised: Burlot and Yvon (2017) designed an NMT test suite in which a single morphological feature is modified in a source language sentence, creating a contrastive pair, and the translations of the contrastive sentences are inspected for a corresponding change in the target language.

The other dimension of Hupkes relevant to the motivation of our experiments is that of shift source: the shift between train and test sets could occur naturally (as in two natural corpora in different domains), it can be created by generating synthetic data, or an artificial partition of natural data can be obtained. Most of the previous methods to assess compositional generalisation in NMT (Burlot and Yvon, 2017; Li et al., 2021; Dankers et al., 2022) have synthesised data for the test sets. Generating synthetic data has its benefits: any morphological form can occur in the data when it is generated, and a single morphological feature can be easily focused on and evaluated qualitatively as well as quantitatively.

However, synthetic data has at least practical disadvantages, leaving aside the more theoretical question of how well the synthetic language approximates natural language, assuming the ultimate goal is systems that process natural language. In practice, synthetic test sets require manual design, which means it is difficult to come by a method to generate an unlimited number of synthetic sentences, or a method that could work in arbitrary languages. Furthermore, when manually designing test suites to evaluate morphological generalisation, as Burlot and Yvon (2017) designed, the requirement for manual work restricts the number of morphological phenomena we have resources to test.

The other option is to create artificial data splits of natural data. While natural data may be noisier and it might be more difficult to focus on a specific phenomenon of the language by this method, this method is easier to automate completely. Furthermore, the method of automatically generating data splits that we present in the next section is also generalisable to other tasks (e.g. paraphrase detection) and any corpus of sentences. Generating artificial data splits of natural data has previously been used to test CG in translation (Raunak et al., 2019), but not for assessing morphological generalisation, as far as we are aware. (For a more general discussion of splitting data into non-random testing and training sets, see Søgaard et al. (2021).)

The method we describe in this paper is an application of the DBCA method developed by Keysers. Since this method is generic and task-agnostic, it can be applied to any dataset for which it is possible to define atom and compound distributions. Although it is easier to define these distributions for synthetic data, as in the CFQ dataset described by Keysers, it can also be applied to natural data, for example in semantic parsing (Shaw et al., 2021). The next section describes how DBCA can be used to assess morphological generalisation in any task where the training and testing corpora consist of natural language sentences.
§ 3 APPLYING DBCA TO ASSESS MORPHOLOGICAL GENERALISATION IN NLP

DBCA is a method to evaluate CG by splitting a dataset into train/test sets with differing distributions, requiring some capacity to generalise from the training distribution to the test distribution. Specifically, the distributions of atoms (known primitives) and compounds (combinations of atoms) are controlled to get similar atom distributions but contrasting compound distributions in the training and test sets. In our application of DBCA to a corpus of natural language sentences, the atom distribution ${\mathcal{F}}_{A}$ of the corpus is the distribution of the lemmas and morphological features, and the compound distribution ${\mathcal{F}}_{C}$ is the distribution of their combinations. Table 1 presents examples of atoms and compounds in this work.
To determine the atom and compound distributions, we first need to obtain the lemmas and morphological tags of all words in the corpus, which we accomplish for Finnish corpora using the Turku Neural Parser Pipeline (Kanerva et al., 2018). For the experiments presented in Section 4, we use a corpus of $1\mathrm{M}$ sentences. In practice, we do not have resources to control the distribution of all lemmas even in this relatively small corpus, so we need to select some subset of the lemmas that we include in our analysis.

Selecting the lemma subset could be done in many ways, but the following is a way we deemed reasonable. To limit the number of lemmas, we first filter out lemmas that do not appear in the list of 94110 Finnish lemmas ${}^{2}$ or, since this list does not include proper names, in lists ${}^{3}$ of names for places, or lists of Finnish and English given names. This way, the lemmas that are filtered out include most of the typos and other nonwords. Then we rank the remaining lemmas by frequency in our corpus, and sample a fixed number of lemma occurrences from constant intervals in the ranked list of lemmas. Specifically, we take 40000 lemma occurrences at intervals of 1000 lemma types in the list of lemmas. For our corpus of $1\mathrm{M}$ sentences, this method subsamples

${}^{2}$ Available at https://kaino.kotus.fi/sanat/nykysuomi/
${}^{3}$ List of names of places: https://kaino.kotus.fi/eksonyymit/?a=aineisto; English given names: https://en.wiktionary.org/wiki/Appendix:English_given_names; Finnish given names: https://tinyurl.com/3mn52ms6 and https://tinyurl.com/mwjvaxkk
              Atoms                            Compounds
  Description lemmas and morphological tags    combinations of atoms
  Examples    tunturi, Case=Gen, Case=Ade,     tunturi|Case=Gen|Number=Plur ("tunturien"),
              Number=Sing, Number=Plur         tunturi|Case=Ade|Number=Sing ("tunturilla")

Table 1: Description and examples of what we call "atoms" and "compounds". The compounds are the unique word forms, determined by the lemma and the morphological tags. The word form is written inside the brackets.
the lemmas with frequency ranks of 1000-1033, 2000-2083, 3000-3174, and so on, so that there are fewer frequent lemma types than rare lemma types, but the total number of occurrences of each bucket is around ${40}\mathrm{k}$ . Lemmas that occur fewer than 10 times in the corpus are excluded. After the filtering, we have 8720 lemma types that occur about ${390}\mathrm{k}$ times in total in our corpus of $1\mathrm{M}$ sentences. We append the list of 48 morphological tags ${}^{4}$ (after filtering some that indicate uninteresting words such as 'Typo' and 'Abbr') that these lemmas appear with to the lemma list to complete our list of atoms.
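This rank-bucketed subsampling can be sketched as follows. This is our own reconstruction, not the authors' code; `ranked` is an assumed frequency-sorted list of (lemma, count) pairs.

```python
def subsample_lemmas(ranked, interval=1000, target=40_000, min_freq=10):
    """From each interval start in the frequency-ranked lemma list (rank 1000,
    2000, ...), collect lemmas until about `target` occurrences are gathered,
    skipping lemmas with fewer than `min_freq` corpus occurrences."""
    chosen = []
    start = interval
    while start < len(ranked):
        total, rank = 0, start
        while rank < len(ranked) and total < target:
            lemma, freq = ranked[rank]
            if freq >= min_freq:
                chosen.append(lemma)
                total += freq
            rank += 1
        start += interval
    return chosen

# With a uniform synthetic frequency list, each bucket takes the same number
# of lemmas; with real Zipfian counts the rarer buckets grow, as in the text.
ranked = [(f"lemma{i}", 2000) for i in range(3000)]
assert len(subsample_lemmas(ranked)) == 40  # 20 lemmas from rank 1000, 20 from 2000
```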
Keysers weighted the compounds to "avoid double-counting compounds that are highly correlated with some of their super-compounds". The idea is to lessen the weight of those compounds that only or often occur as a part of one certain super-compound. We weight the compounds analogously, but use only two levels in our weighting, which makes the weighting simpler than in Keysers: we consider the combinations of morphological tags as the lower level of compounds, and these combined with lemmas as the higher level. Thus the motivation for weighting in our case is to avoid using those morphological tag combinations that only occur with some specific lemma. Therefore, we look for the lemma with which each morphological tag combination occurs most often, and give the tag combination a weight that is the complement of the empirical probability that the tag combination occurs with this lemma. For example, we found that the rare morphological tag combination Case=Ade|Degree=Pos|Number=Plur|PartForm=Pres|VerbForm=Part|Voice=Pass occurs ${84}\%$ of the time with the lemma saada, forming the word "saatavilla", so it gets a weight of 0.16. After weighting the tag combinations, we exclude those that have a weight of 0.33 or less.
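The weighting can be sketched in a few lines. This is our own minimal reconstruction under the assumption that compound occurrences are given as (lemma, tag-combination) tokens; the toy lemmas below are illustrative.

```python
from collections import Counter, defaultdict

def tag_combination_weights(occurrences):
    """occurrences: iterable of (lemma, tag_combination) tokens.
    weight(tags) = 1 - P(most frequent lemma | tags), so a tag combination
    tied to a single lemma gets a weight near 0 and can be filtered out."""
    lemma_counts = defaultdict(Counter)
    for lemma, tags in occurrences:
        lemma_counts[tags][lemma] += 1
    return {tags: 1 - max(c.values()) / sum(c.values())
            for tags, c in lemma_counts.items()}

# A tag combination seen 84% of the time with one lemma gets weight 0.16.
occ = [("saada", "Case=Ade|Number=Plur")] * 84 + [("olla", "Case=Ade|Number=Plur")] * 16
weights = tag_combination_weights(occ)
assert abs(weights["Case=Ade|Number=Plur"] - 0.16) < 1e-9
```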
After the described filtering steps, we have 8322 atoms, which include the lemmas and morphological tags. The atoms occur about ${1.3}\mathrm{M}$ times in ${273}\mathrm{k}$ sentences in our corpus of $1\mathrm{M}$ sentences. There are 335 morphological tag combinations that these lemmas appear with, which create about ${69}\mathrm{k}$ unique word forms with the lemmas; i.e. there are ${69}\mathrm{k}$ compounds that we use in our analysis. These compounds occur 352k times in the corpus, in 273k sentences.
Calculating atom and compound divergences is done the same way as in Keysers. Namely, the divergence between distributions $P$ and $Q$ is calculated using the Chernoff coefficient ${C}_{\alpha }\left( {P\parallel Q}\right) = \mathop{\sum }\limits_{k}{p}_{k}^{\alpha }{q}_{k}^{1 - \alpha } \in \left\lbrack {0,1}\right\rbrack$ (Chung et al., 1989), with $\alpha = {0.5}$ for the atom divergence and $\alpha = {0.1}$ for the compound divergence. As described by Keysers, $\alpha = {0.5}$ for the atom divergence "reflects the desire of making the atom distributions in train and test as similar as possible", and $\alpha = {0.1}$ for the compound divergence "reflects the intuition that it is more important whether a certain compound occurs in $P$ (train) than whether the probabilities in $P$ (train) and $Q$ (test) match exactly". Since the Chernoff coefficient is a similarity metric, the atom and compound divergences of a train set $V$ and a test set $W$ are:

$$
{\mathcal{D}}_{A}\left( {V\parallel W}\right) = 1 - {C}_{0.5}\left( {{\mathcal{F}}_{A}\left( V\right) \parallel {\mathcal{F}}_{A}\left( W\right) }\right)
$$

$$
{\mathcal{D}}_{C}\left( {V\parallel W}\right) = 1 - {C}_{0.1}\left( {{\mathcal{F}}_{C}\left( V\right) \parallel {\mathcal{F}}_{C}\left( W\right) }\right) .
$$
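In code, the divergence computation is short. The sketch below is our own; it takes raw occurrence counts (of atoms or compounds) and normalises them into distributions before applying the formulas above.

```python
from collections import Counter

def divergence(p: Counter, q: Counter, alpha: float) -> float:
    """1 - C_alpha(P||Q), where C_alpha(P||Q) = sum_k p_k^alpha * q_k^(1-alpha).
    p and q are occurrence counts for the train and test sets."""
    n_p, n_q = sum(p.values()), sum(q.values())
    coeff = sum((p[k] / n_p) ** alpha * (q[k] / n_q) ** (1 - alpha)
                for k in p.keys() & q.keys())
    return 1.0 - coeff

# Identical distributions diverge by 0; disjoint supports diverge maximally.
assert abs(divergence(Counter(a=2, b=2), Counter(a=1, b=1), alpha=0.5)) < 1e-9
assert divergence(Counter(x=1), Counter(y=1), alpha=0.1) == 1.0
```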
Once the divergences are defined, we can split a corpus of natural language sentences into training and testing sets with arbitrary compound and atom divergence values. For this, we use a simple greedy algorithm, sketched in Algorithm 1. For a maximum compound divergence split, the score is calculated as

$$
\operatorname{score}\left( {Q,P}\right) = {\mathcal{D}}_{C}\left( {Q\parallel P}\right) - {\mathcal{D}}_{A}\left( {Q\parallel P}\right) ,
$$

and in general, for any desired compound divergence value $c$ :

$$
\operatorname{score}\left( {Q,P}\right) = - \left| {c - {\mathcal{D}}_{C}\left( {Q\parallel P}\right) }\right| - {\mathcal{D}}_{A}\left( {Q\parallel P}\right) .
$$
In practice, we do not have resources to calculate the $\mathop{\max }\limits_{{x \in G}}$ score. Instead, at each iteration we take a subset ${G}^{\prime } \subset G$ , say 1000 sentences, and calculate $\mathop{\max }\limits_{{x \in {G}^{\prime }}}$ score.

${}^{4}$ See https://universaldependencies.org/docs/fi/feat/ for the list of Finnish morphological tags.
Procedure 1 Data division algorithm.

Input: $G$ $\vartriangleright$ Corpus of sentences
Input: $N$ $\vartriangleright$ Use $N$ sentences from $G$
Input: $a$ $\vartriangleright$ Lower bound for $\left| V\right| /\left| W\right|$
Input: $b$ $\vartriangleright$ Upper bound for $\left| V\right| /\left| W\right|$
Output: $V,W$ $\vartriangleright$ Train set, test set

$V \leftarrow \left\{ {x{ \in }_{R}G}\right\}$ $\vartriangleright$ A random sentence
$W \leftarrow \varnothing$
$G \leftarrow G \smallsetminus V$
for $i \leftarrow 1$ to $N$ do
    $r \leftarrow \left| V\right| /\left| W\right|$
    ${s}_{V} \leftarrow \mathop{\max }\limits_{{x \in G}}\operatorname{score}\left( {V \cup \{ x\} ,W}\right)$
    ${i}_{V} \leftarrow {\operatorname{argmax}}_{x \in G}\operatorname{score}\left( {V \cup \{ x\} ,W}\right)$
    ${s}_{W} \leftarrow \mathop{\max }\limits_{{x \in G}}\operatorname{score}\left( {V,W \cup \{ x\} }\right)$
    ${i}_{W} \leftarrow {\operatorname{argmax}}_{x \in G}\operatorname{score}\left( {V,W \cup \{ x\} }\right)$
    if $\left( {{s}_{V} > {s}_{W} \land r < b}\right) \vee r < a$ then
        $V \leftarrow V \cup \left\{ {i}_{V}\right\}$
        $G \leftarrow G \smallsetminus \left\{ {i}_{V}\right\}$
    else
        $W \leftarrow W \cup \left\{ {i}_{W}\right\}$
        $G \leftarrow G \smallsetminus \left\{ {i}_{W}\right\}$
    end if
end for
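A runnable miniature of the greedy data division, under toy assumptions of our own: sentences are bags of `lemma|tags` compounds, and the candidate subset is the whole remaining pool, since the toy corpus is tiny.

```python
import random
from collections import Counter

def chernoff(p, q, alpha):
    n_p, n_q = sum(p.values()) or 1, sum(q.values()) or 1
    return sum((p[k] / n_p) ** alpha * (q[k] / n_q) ** (1 - alpha)
               for k in p.keys() & q.keys())

def distribution(sentences, split_atoms):
    counts = Counter()
    for sent in sentences:
        for comp in sent:
            counts.update(comp.split("|") if split_atoms else [comp])
    return counts

def score(train, test, c):
    # score(Q, P) = -|c - D_C(Q||P)| - D_A(Q||P), as defined above
    d_a = 1 - chernoff(distribution(train, True), distribution(test, True), 0.5)
    d_c = 1 - chernoff(distribution(train, False), distribution(test, False), 0.1)
    return -abs(c - d_c) - d_a

def greedy_split(corpus, c=1.0, a=1.0, b=4.0, seed=0):
    rng = random.Random(seed)
    pool = list(corpus)
    train = [pool.pop(rng.randrange(len(pool)))]  # start with a random sentence
    test = []
    for _ in range(len(pool)):
        ratio = len(train) / max(len(test), 1)
        best_v = max(range(len(pool)), key=lambda i: score(train + [pool[i]], test, c))
        best_w = max(range(len(pool)), key=lambda i: score(train, test + [pool[i]], c))
        s_v = score(train + [pool[best_v]], test, c)
        s_w = score(train, test + [pool[best_w]], c)
        if (s_v > s_w and ratio < b) or ratio < a:
            train.append(pool.pop(best_v))
        else:
            test.append(pool.pop(best_w))
    return train, test

corpus = [["cat|Sing"], ["cat|Plur"], ["dog|Sing"], ["dog|Plur"]] * 3
train, test = greedy_split(corpus)
assert len(train) + len(test) == len(corpus) and train and test
```

The ratio bounds `a` and `b` play the same role as in Procedure 1: they keep the train/test size ratio in a usable range while the score steers each sentence to the side that moves the compound divergence toward the target `c`.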
As mentioned above, this method can be used for any corpus that consists of natural language sentences for which the morphological tags can be obtained. In the next section we use this method to assess morphological generalisation in machine translation.

§ 4 EXPERIMENTS AND RESULTS

§ 4.1 NMT MODEL TRAINING SETUP AND DATA

We chose Finnish as the language we analyse because of its rich morphology and because there is a good morphological tagger available for Finnish. We use the English-Finnish parallel corpus from the Tatoeba challenge data release (Tiedemann, 2020). We first apply some heuristics provided by Aulamo et al. (2020) to remove noisy data, and restrict the maximum sentence length to 100 words, after which we take a random sample of 1 million sentence pairs.
We use the OpenNMT-py (Klein et al., 2017) library to train Finnish-English Transformer NMT models using the hyperparameters provided in the example config file ${}^{5}$ , which includes the standard 6 transformer layers with 8 heads and a hidden dimension of 512, as in (Vaswani et al., 2017). We train the models until convergence or until a maximum of 33000 steps, with 2000 warm-up steps and a batch size of 4096 tokens.

For more details about the setup, see the Github repository linked on the first page.
§ 4.2 THE EFFECT OF COMPOUND DIVERGENCE ON TRANSLATION PERFORMANCE

The basic experiment we propose is to make at least two different train/test splits of a corpus, using ${\mathcal{D}}_{C}$ values of 0 and 1, respectively (keeping ${\mathcal{D}}_{A} = 0$ ), and assess the change in translation performance (for which we use BLEU (Papineni et al., 2002) and chrF2++ (Popović, 2017) as metrics). Since with ${\mathcal{D}}_{C} = 1$ there are more unseen word forms in the test set, we expect a decrease in translation performance from ${\mathcal{D}}_{C} = 0$ to ${\mathcal{D}}_{C} = 1$ that is caused by the ${\mathcal{D}}_{C} = 1$ test set requiring more morphological generalisation capacity.

We show empirically the decrease in performance in Section 4.3, but the cause of this decrease is of course more difficult to verify exactly. The atom and compound distributions are the only things we explicitly control when splitting the corpus, and we only require the compound divergence to differ between different data splits. Therefore, we assume the differing compound divergence to be the cause of this effect, but to be more certain, we conduct two simple checks to look for confounding factors.
Firstly, an increase in the average sentence length could be another factor that makes one test set more difficult than another. Increasing the sequence length from training to test set is actually a method that has been proposed to test a certain type of compositional generalisation, sometimes called productivity (Hupkes et al., 2020; Raunak et al., 2019). We calculated the average sentence lengths of the train and test sets of the 8 different data splits that we obtained using 8 different random seeds for the data split algorithm. What we found is that for ${\mathcal{D}}_{C} = 1$ the average lengths in test sets are actually shorter (ranging from 11.35 to 11.66 words) than those for ${\mathcal{D}}_{C} = 0$ (ranging from 12.27 to 13.72 words). The average training set sentence lengths are similar for both ${\mathcal{D}}_{C}$ values,
|
| 316 |
+
|
| 317 |
+
539 ranging from 8.66 to 8.79 for ${\mathcal{D}}_{C} = 0$ and from 8.65 to 8.73 for ${\mathcal{D}}_{C} = 1$ . Thus we know that an increased difference between train and test set sentence lengths cannot explain the decrease in NMT performance from ${\mathcal{D}}_{C} = 0$ to ${\mathcal{D}}_{C} = 1$ since the difference is actually larger for ${\mathcal{D}}_{C} = 0$ . The fact that the average sentence length in training sets is always significantly shorter than in test sets is an interesting unintended artefact of the data division algorithm that deserves further investigation in the future, but it does not confound our analysis.
${}^{5}$ https://github.com/OpenNMT/OpenNMT-py/blob/master/config/config-transformer-base-1GPU.yml
As a second sanity check, we evaluated the NMT models on a neutral test set to see whether, for any reason, the training set would be worse in general with ${\mathcal{D}}_{C} = 1$ than with ${\mathcal{D}}_{C} = 0$, instead of being worse only for the specific test set that we have created. For this we used the Tatoeba challenge test set, which we did not use to train or tune the hyperparameters of any models. The results for the vocabulary size 1000 are presented in Figure 1. We used the models trained on the training sets from the data splits with compound divergences 0.0, 0.5, and 1.0. The compound divergences between these training sets and the Tatoeba challenge test set do correlate with the target ${\mathcal{D}}_{C}$ of the data split, but they range only from about 0.4 to 0.6.
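For reference, atom and compound divergences in DBCA are defined through a weighted Chernoff coefficient, $\mathcal{D}_\alpha(P \Vert Q) = 1 - \sum_k p_k^{\alpha} q_k^{1-\alpha}$; a minimal sketch, assuming the weighting $\alpha = 0.1$ that Keysers et al. (2020) use for compound divergence (the toy lemma–tag compounds are invented):

```python
from collections import Counter

def divergence(p_counts, q_counts, alpha):
    """1 minus the weighted Chernoff coefficient between two frequency
    distributions: D = 1 - sum_k p_k^alpha * q_k^(1-alpha)."""
    keys = set(p_counts) | set(q_counts)
    p_tot = sum(p_counts.values()) or 1
    q_tot = sum(q_counts.values()) or 1
    return 1.0 - sum(
        (p_counts.get(k, 0) / p_tot) ** alpha
        * (q_counts.get(k, 0) / q_tot) ** (1 - alpha)
        for k in keys
    )

train = Counter({("koira", "Par"): 3, ("kissa", "Gen"): 2})
test_identical = Counter(train)
test_disjoint = Counter({("talo", "Ine"): 5})
print(round(divergence(train, test_identical, 0.1), 6))  # 0.0: same distribution
print(round(divergence(train, test_disjoint, 0.1), 6))   # 1.0: no shared compounds
```

The asymmetric weighting makes the divergence small as long as the training distribution covers the test compounds, even if their relative frequencies differ.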
Figure 1: Results on the Tatoeba challenge test set. The x-axis labels denote the compound divergences between the training sets and the test sets analysed later in Figure 2. That is, the divergence is not between the training sets and the Tatoeba challenge test set.
From Figure 1 we can see that the NMT models trained with different data sets, from data splits with different ${\mathcal{D}}_{C}$ values, do not show a similar decrease in performance on the neutral-ish Tatoeba challenge test set as on the test sets obtained from the data split algorithm. We take this to mean that the models trained on ${\mathcal{D}}_{C} = 1$ data splits are not in general worse than those trained with ${\mathcal{D}}_{C} = 0$ data splits, but only worse on the high-divergence test set.
§ 4.3 THE EFFECT OF BPE VOCABULARY SIZE ON MORPHOLOGICAL GENERALISATION IN NMT
Next, we make the assumption, based on the analysis in Section 4.2, that we can measure morphological generalisation by measuring the decrease of NMT performance between train/test splits of ${\mathcal{D}}_{C} = 0$ and ${\mathcal{D}}_{C} = 1$. Previous studies have suggested the hypothesis that NMT models with smaller BPE vocabularies are more capable of modelling morphological phenomena than those with larger vocabularies (for example Libovický and Fraser (2020)). In this section, we compare the morphological generalisation capacities of NMT models with different source-side (Finnish) vocabulary sizes, using the method we have proposed.
As a preliminary experiment, we tuned the BPE vocabulary size for our setup (see Section 4.1) on the Tatoeba challenge development set, and found the optimal size to be around 3000 BPE tokens for both the source and target languages. Since we are interested in the Finnish morphology, we then kept the target (English) vocabulary size constant and varied only the source-side vocabulary size.

We chose 7 different vocabulary sizes, 3 larger and 3 smaller than the optimal 3000, and evaluated them with target compound divergence values of 0.0, 0.25, 0.5, 0.75, and 1.0. The sizes of the test sets are in the order of a few tens of thousands, or a little over a hundred thousand, sentences. The relatively large test set size leads to statistical significance even for small BLEU differences (see Table 3 for details).
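The BPE vocabulary size is controlled by the number of merge operations learned from the training corpus; a minimal sketch of the merge-learning loop (the toy word list is illustrative, and the actual experiments would use a full toolkit such as subword-nmt, which adds end-of-word markers and frequency thresholds):

```python
from collections import Counter

def learn_bpe(words, num_merges):
    """Minimal BPE merge learning (Sennrich et al., 2016): repeatedly merge
    the most frequent adjacent symbol pair across the word list."""
    vocab = Counter(tuple(w) for w in words)
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        new_vocab = Counter()
        for symbols, freq in vocab.items():
            out, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    out.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            new_vocab[tuple(out)] += freq
        vocab = new_vocab
    return merges

# A smaller merge budget leaves words split into smaller, more morpheme-like units.
print(learn_bpe(["talossa", "talosta", "talon"], 3))
```

Intuitively, a small merge budget forces frequent stems and inflectional endings to remain separate subwords, which is one proposed explanation for why small vocabularies might generalise better morphologically.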
From the BLEU results for ${\mathcal{D}}_{C} = 0$ and ${\mathcal{D}}_{C} = 1$ in Figure 2 we can see that as we either increase or decrease the vocabulary size from 3000, the performance drops, but it drops slightly differently w.r.t. ${\mathcal{D}}_{C}$. This effect is most conspicuous for the pair of sizes 500 and 18000. The larger vocabulary performs slightly better when there is less need for morphological generalisation, but the small vocabulary performs better when it is needed more. In general, from this figure we can see that the vocabulary size roughly correlates with the angle of the downward slope, suggesting that the larger the vocabulary, the poorer the capacity for morphological generalisation.
Figure 2: Different source vocabulary sizes evaluated with minimum and maximum (0 and 1) compound divergence data splits. Compound divergence value 1 requires more morphological generalisation. The larger the vocabulary, the steeper the slope, suggesting poorer ability to generalise. For more details, see Table 3 in Appendix A.
To investigate the effect of the initialisation of the data split algorithm on the results, we split the same corpus starting from 8 different random initialisations, and trained NMT models for each data split. For this, we chose the two pairs of vocabulary sizes that showed the most clearly contrasting performance w.r.t. ${\mathcal{D}}_{C}$: 500 & 18000 and 1000 & 6000. The main results are presented in Table 2. For these results, the test sets of the 8 random seeds are concatenated together to create exceptionally large test sets of around 400k to 500k sentences. The results for the individual data splits are presented in Appendix A in Table 4.
From these results we can see the same contrasting performance of the small and large vocabularies w.r.t. the different compound divergence values. The difference is small but statistically significant and consistent. The models with small vocabularies show better performance than those with large ones when morphological generalisation is needed, and vice versa when morphological generalisation is not needed as much.
§ 5 DISCUSSION AND FUTURE WORK
In Section 3, we proposed an application of DBCA to divide any corpus of sentences for which morphological tags are available into training and test sets with similar distributions of lemmas and morphological tags but contrasting distributions of word forms, in order to assess morphological generalisation. With this method, we can take a large proportion of the morphological phenomena of a selected language into consideration (in our experiments, 335 different morphological categories that, together with about 8k lemmas, create 69k unique Finnish word forms), and evaluate the effects of the contrasting train/test distributions of the word forms in machine translation. This enables a different, complementary type of assessment of morphological generalisation than previous synthetic benchmarks (mainly Burlot and Yvon (2017)) that focus on a smaller number of morphological phenomena. The benefit of our method is its comprehensiveness, focusing on the corpus-wide distributions of word forms.
Using only corpus-wide metrics such as BLEU, as we did, does not allow for the qualitative evaluation that the synthetic benchmarks offer. In the terminology of Burlot and Yvon (2017), this holistic, document-level evaluation can be contrasted with analytic evaluation that focuses more specifically on difficulties in morphology. A trick that could enable a more analytic assessment of the translations of the unseen word forms would be to align the words in the source sentences with the words in the reference translations and the words in the predicted translations, and evaluate only the translations of the parts of the sentences that correspond to the unseen word forms. A similar method has been used previously, for example by Bau et al. (2019) and Stanovsky et al. (2019), and we aim to experiment with it in future work.
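The alignment-based evaluation described above can be sketched as follows; the alignment triples and the Finnish example are hand-made for illustration, whereas in practice they would come from an automatic word aligner:

```python
# Score only the target words aligned to unseen source word forms.
def aligned_accuracy(src, ref, hyp, alignment, unseen_src_forms):
    """alignment: list of (src_index, ref_index, hyp_index) triples linking a
    source word to its reference and hypothesis translations."""
    hits, total = 0, 0
    for si, ri, hi in alignment:
        if src[si] in unseen_src_forms:
            total += 1
            hits += ref[ri] == hyp[hi]
    return hits / total if total else 0.0

src = "näin talossa koiran".split()
ref = "I saw a dog in the house".split()
hyp = "I saw a cat in the house".split()
# Hand-made triples: "talossa" -> "house", "koiran" -> "dog"/"cat".
alignment = [(1, 6, 6), (2, 3, 3)]
print(aligned_accuracy(src, ref, hyp, alignment, {"talossa", "koiran"}))  # 0.5
```

Exact-match accuracy over aligned words is of course crude; the point is only to restrict scoring to the spans that required morphological generalisation.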
Especially combined with this word-alignment trick, we could also make our evaluation more fine-grained (a concept also from Burlot and Yvon (2017)); that is, our evaluation could differentiate between different types of mistakes. Since we have the morphological tags, we could sort the words by morphological category and compare the translation accuracies to look for any especially difficult categories for the translation models.

| Vocab | chrF2++ ${\mathcal{D}}_{C} = 0$ | chrF2++ ${\mathcal{D}}_{C} = 1$ | BLEU ${\mathcal{D}}_{C} = 0$ | BLEU ${\mathcal{D}}_{C} = 1$ |
|---|---|---|---|---|
| 500 | 51.20 (51.20 ± 0.05) | 49.33 (49.33 ± 0.05) | 27.50 (27.50 ± 0.07) | 25.40 (25.40 ± 0.07) |
| 18000 | 51.29 (51.29 ± 0.05) | 49.04 (49.05 ± 0.05) | **27.69 (27.69 ± 0.07)** | 25.18 (25.18 ± 0.07) |
| | $p = 0.0003$ | $p = 0.0003$ | $p = 0.0003$ | $p = 0.0003$ |
| 1000 | 51.78 (51.78 ± 0.05) | 49.79 (49.79 ± 0.05) | 28.17 (28.17 ± 0.07) | **25.89 (25.89 ± 0.07)** |
| 6000 | 51.83 (51.83 ± 0.05) | 49.67 (49.67 ± 0.05) | **28.24 (28.24 ± 0.07)** | 25.80 (25.80 ± 0.07) |
| | $p = 0.0003$ | $p = 0.0003$ | $p = 0.0003$ | $p = 0.0003$ |

Table 2: Pairwise comparisons of the source vocabulary sizes 500 and 18000, and 1000 and 6000. The results are calculated for the concatenated test sets generated with 8 random seeds. Inside brackets is the true mean estimated from bootstrap resampling and the 95% confidence interval. The results for the individual seeds are presented in Appendix A in Table 4 and Figure 3.
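The bootstrap confidence intervals reported in Table 2 can be sketched as follows; this is a generic percentile bootstrap with a plain mean standing in for the corpus-level metric, not the exact procedure used for the table:

```python
import random

def bootstrap_ci(scores, n_resamples=1000, seed=0):
    """Percentile-bootstrap 95% confidence interval for the mean of
    sentence-level scores: resample with replacement, recompute the
    statistic, and take the 2.5th and 97.5th percentiles."""
    rng = random.Random(seed)
    n = len(scores)
    means = sorted(sum(rng.choices(scores, k=n)) / n for _ in range(n_resamples))
    return means[int(0.025 * n_resamples)], means[int(0.975 * n_resamples)]

scores = [0.20, 0.25, 0.30, 0.22, 0.28] * 100  # stand-in sentence scores
low, high = bootstrap_ci(scores)
print(f"95% CI: [{low:.4f}, {high:.4f}]")
```

For BLEU, each resample would recompute corpus BLEU over the resampled sentence set (as done by, e.g., sacreBLEU's significance tests) rather than averaging sentence scores.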
To demonstrate the use of our proposed method, we compared NMT models with different BPE vocabulary sizes, since vocabulary size had been hypothesised to affect the capacity to model morphology in translation. Besides vocabulary size, there are many other model design choices that have been proposed to help either in generalisation or in capturing morphological phenomena. For example, factored NMT systems (García-Martínez et al., 2016) can cover more of the target-side vocabulary than subword-based NMT systems, which can help in modelling the morphology of the target language. It would be interesting to assess different model types, such as factored NMT systems or LSTM-based systems, to see how they compare with Transformers on our evaluation method.
The DBCA method is general, and could be applied to a wide variety of tasks and datasets. Our application of DBCA is more specific, but it still inherits some of the generality of the original method. Our method is directly applicable to any machine learning task where the data consists of sentences for which the morphological tags are available. In the future, we intend to extend our assessment of morphological generalisation to other languages, as well as to other NLP tasks, such as paraphrase detection.
§ 6 CONCLUSION
We proposed a method to assess morphological generalisation by distribution-based compositionality assessment. Because this method is fully automated, it enables a more comprehensive assessment of morphological generalisation than previously proposed synthetic benchmarks, in terms of the number of inflection types we can evaluate. We used our method to assess NMT models with different BPE vocabulary sizes and found that models with smaller vocabularies are better at morphological generalisation than those with larger vocabularies. Lastly, we discussed the varied future directions that our generalisable method offers, such as assessing morphological generalisation in other NLP tasks besides NMT.
NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/1vkyEY-HeLY/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,1139 @@
# Dozens of Translation Directions or Millions of Shared Parameters? Comparing Two Types of Multilinguality in Modular Machine Translation
Anonymous Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymouser Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymousest Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain
## Abstract
There are several ways of implementing multilingual NLP systems but little consensus as to whether different approaches exhibit similar effects. Are the trends that we observe when adding more languages the same as those we observe when sharing more parameters? We focus on encoder representations drawn from modular multilingual machine translation systems in an English-centric scenario, and study their quality from multiple aspects: how adequate they are for machine translation, how independent of the source language they are, and what semantic information they convey. Adding translation directions in English-centric scenarios does not conclusively lead to an increase in translation quality. Shared layers increase performance on zero-shot translation pairs and lead to more language-independent representations, but these improvements do not systematically align with more semantically accurate representations, from a monolingual standpoint.
## 1 Introduction
Multilinguality, within the scope of neural NLP, can mean either ensuring that computations for different languages are homogeneous, or ensuring that models are trained with data coming from different languages. These two definitions are not as equivalent as they might appear: for instance, modular architectures, where some parameters are devoted to specific inputs, can only be conceived as multilingual under the latter definition.
Both of these trends have been explored across multiple works. Machine translation studies have looked into sharing no parameters at all (Escolano et al., 2021), sharing across linguistically informed groups (Fan et al., 2021; Purason and Tättar, 2022), sharing only some components across all languages (Dong et al., 2015; Firat et al., 2016; Vázquez et al., 2020; Liao et al., 2021; Zhu et al., 2020; Kong et al., 2021; Blackwood et al., 2018; Sachan and Neubig, 2018; Zhang et al., 2021), and sharing the entire model (Johnson et al., 2017). Concerns about multilinguality have spearheaded research on how to make representations and systems more reliable for typologically and linguistically diverse data (Bojanowski et al., 2017; Adelani et al., 2022), the distinction between multilingual and monolingual representations (Wu and Dredze, 2020), the specificity of massively-multilingual representations (Kudugunta et al., 2019), and the effects of having more diverse data (Arivazhagan et al., 2019; Aharoni et al., 2019; Costa-jussà et al., 2022; Siddhant et al., 2022; Kim et al., 2021; Voita et al., 2019). In this paper, we study whether these different implementations of multilinguality yield qualitatively different types of representations; in other words: are the effects of parameter sharing orthogonal to those of adding new languages?
To broach this question, we make three simplifying assumptions. First, we only consider the task of multilingual machine translation: an exhaustive study of the impact of all multilingual NLP tasks is beyond the scope of this paper. Moreover, massively multilingual language models are known to leverage parallel data to enhance semantic abstractions (Hu et al., 2021; Ouyang et al., 2021; Kale et al., 2021). Second, we only consider parameter sharing in the last layers of the encoders: we focus on the intermediary representations acquired directly after the encoder and leave decoders for future study. As language selection tokens would compromise the language independence of the representations, this rules out fully shared decoders. Third, we focus on an English-centric scenario: i.e., all translation directions seen during training contain English as a source or target language. While such an approach is not without issues (Gu et al., 2019; Zhang et al., 2020), it makes it possible to select translation directions for zero-shot evaluations in a principled manner. Furthermore, most multilingual translation datasets are highly skewed in any case and contain orders of magnitude more English examples (e.g., Costa-jussà et al., 2022).
We conduct our study by testing encoder outputs on three aspects: task fitness, language independence, and semantic content. These features have been discussed in earlier literature: probing pretrained language models for semantic content in particular has proven very fecund (e.g., Rogers et al., 2021; Doddapaneni et al., 2021). For machine translation, such studies are less numerous, although similar aspects have been investigated (Raganato and Tiedemann, 2018). For instance, Kudugunta et al. (2019) study how the learned representations evolve in a multilingual scenario, whereas Vázquez et al. (2020), Raganato et al. (2019) and Mareček et al. (2020) focus on the use of multilingual MT as a signal for learning language. As we will show, studying representations from different angles is required in order to highlight the differences underpinning distinct implementations of multilinguality.${}^{1}$
## 2 Experimental setup
### 2.1 Datasets
We focus on datasets derived from the OPUS-100 corpus (Zhang et al., 2020), built by randomly sampling from the OPUS parallel text collection (Tiedemann, 2012). We construct datasets containing 3, 6, 9, 12, 24, 36, 48, 60, and 72 languages other than English and refer to them as opus-03, opus-06, and so on. To test the impact on model performance when adding languages, we build the datasets with an incremental approach, so that smaller datasets are systematically contained in the larger ones. Languages are selected so as to maximize the number of available datapoints (for training, zero-shot evaluation, and probing) as well as linguistic diversity. See Appendix A for details.
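The containment property of the incremental construction can be sketched as follows; the placeholder language codes are invented, since the actual selection (Appendix A) balances data availability and linguistic diversity:

```python
# Sketch: each opus-n language set is a prefix of the next, so smaller
# datasets are contained in larger ones by construction.
SIZES = [3, 6, 9, 12, 24, 36, 48, 60, 72]
ALL_LANGUAGES = [f"lang{i:02d}" for i in range(72)]  # placeholder codes

def opus_languages(n):
    return ALL_LANGUAGES[:n]

datasets = {f"opus-{n:02d}": set(opus_languages(n)) for n in SIZES}
# Containment: opus-03 ⊆ opus-06 ⊆ ... ⊆ opus-72.
print(all(
    datasets[f"opus-{a:02d}"] <= datasets[f"opus-{b:02d}"]
    for a, b in zip(SIZES, SIZES[1:])
))
```

Fixing one ordered language list and taking prefixes guarantees that differences between models trained on opus-03 and opus-72 cannot be explained by the smaller dataset containing different languages.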
### 2.2 Models
We train modular sequence-to-sequence Transformer models (Escolano et al., 2021), with 6 layers in the encoder and the decoder. Decoders are systematically language-specific, whereas encoders contain $s \in \{ 0,\ldots ,6\}$ fully-shared layers on top of $6 - s$ language-specific layers, as shown in Figure 1. We train distinct models for each value of $s$ and each dataset; due to the computational costs incurred, we consider $s \geq 2$ only in combination with datasets up to opus-12, as well as opus-36. Models vary along two axes: models trained on larger datasets are exposed to more languages, whereas models with higher values of $s$ share more parameters. When training models over a dataset, we consider the translation directions $L$-to-English, English-to-$L$, and an $L$-to-$L$ denoising task, for all languages $L$ in the dataset.${}^{2}$ An illustration of opus-03 models is shown in Figure 1. Training details are given in Appendix B.

Figure 1: Example model architectures for varying number of shared encoder layers $s$. Modules with a light grey background are language-specific, modules with a dark grey background are fully shared.
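The layer-sharing scheme can be sketched structurally as follows; `Layer` here is a stand-in for a Transformer encoder layer, and the point is only that the top $s$ layers are literally the same objects for every language:

```python
# Structural sketch: for s shared layers, each language gets its own first
# 6 - s encoder layers, while the last s layers are shared by all languages.
class Layer:
    pass  # placeholder for a Transformer encoder layer

def build_encoders(languages, s, depth=6):
    shared = [Layer() for _ in range(s)]  # one shared top-of-stack
    return {
        lang: [Layer() for _ in range(depth - s)] + shared
        for lang in languages
    }

encoders = build_encoders(["en", "fi", "de"], s=2)
print(encoders["en"][4] is encoders["fi"][4])  # True: shared layer
print(encoders["en"][0] is encoders["fi"][0])  # False: language-specific layer
```

During training, gradients from every translation direction update the shared top layers, while each language-specific bottom stack only sees its own language.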
## 3 Experiments
### 3.1 Task fitness: Machine Translation
The first aspect we consider is the models' performance on machine translation. We report BLEU scores in Figure 2. Where relevant, we also include supervised results for translation directions present in opus-06 so as to provide comparable scores. ${}^{3}$
The most obvious trend present is that models trained on opus-03 with $s \geq 5$ underfit, and perform considerably worse than their $s < 5$ counterparts. Otherwise, models with an equivalent number of shared layers $s$ tend to perform very reliably across datasets: e.g., across all supervised translation directions we tested, we found that the maximum variation in BLEU scores for $s < 2$ was $\pm {4.8}$. In Figure 2b, we also observe a consistent improvement on zero-shot translation when increasing the number of shared layers $s$ from 0 to 4, and for opus-36 this trend only breaks when the full stack is shared $\left( {s = 6}\right)$. Lastly, results in Figure 2a suggest that adding more translation directions decreases zero-shot translation performance, but this trend seems to reverse when a significant number of layers are shared $\left( {s > 3}\right)$, as displayed in Figure 2b. In all, under the setup we consider here, it appears that task fitness and zero-shot generalization are best achieved by sharing more parameters rather than by adding translation directions, although excessive sharing also impacts performance.${}^{5}$

---

${}^{1}$ Code, data, and full results of our experiments will be made available upon acceptance.

${}^{2}$ I.e., a model trained over the opus-$n$ dataset is trained over ${3n}$ tasks: ${2n}$ translation tasks, plus $n$ denoising tasks for languages other than English.

${}^{3}$ Note that all available zero-shot translation directions are systematically present in opus-06 and all larger datasets.

${}^{4}$ See also Aharoni et al. (2019) or Conneau et al. (2020).

---

Figure 2: Average BLEU scores. (a) Average BLEU scores per dataset size; (b) average BLEU scores per number of shared layers $s$.
|
| 238 |
+
|
| 239 |
+
### 3.2 Language Independence: XNLI
To test to what degree encoder representations are language-independent, we train classifier probes on XNLI (Conneau et al., 2018). We train models on English and report results for all languages: the gap between English and non-English performances quantifies how language-dependent the representations are. We report macro-f1 on the validation split; if no such split is available, we randomly select 10% instead. See Appendix C for details.

Figure 3: Average XNLI macro-f1 scores

Figure 3 underscores that our English-centric scenario prevents language-independent encoder representations: English targets fare better than their counterparts. Variation seems driven by the number of shared parameters: in Figure 3a, models with $s = 1$ outperform models with $s = 0$, whereas in Figure 3b, higher values of $s$ tend to close the gap between English and other targets. Interestingly, higher values of $s$ yield lower f1 scores on smaller datasets, both for English and other languages. In particular, we observe a drop for all languages on opus-03 with $s > 4$, matching the underfitting we saw in Section 3.1; this trend is also attested in all datasets except opus-36. But on the whole, a greater number of shared parameters leads to more language-independent representations.
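The probing evaluation described above, macro-f1 per target language and the gap between English and the rest, can be made concrete with a small sketch. This is our own illustration, not the authors' code: the toy labels and the helper names are hypothetical, and macro-f1 is computed from scratch.

```python
from statistics import fmean

def macro_f1(gold, pred):
    """Macro-averaged F1: the unweighted mean of per-class F1 scores."""
    f1s = []
    for c in set(gold) | set(pred):
        tp = sum(g == c and p == c for g, p in zip(gold, pred))
        fp = sum(g != c and p == c for g, p in zip(gold, pred))
        fn = sum(g == c and p != c for g, p in zip(gold, pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return fmean(f1s)

# Toy probe predictions: three XNLI-style labels, one prediction list per
# target language, for a probe trained on English data only.
gold = [0, 0, 1, 1, 2, 2]
preds = {
    "en": [0, 0, 1, 1, 2, 2],  # English target: probe transfers perfectly here
    "fr": [0, 1, 1, 1, 2, 0],  # non-English target: degraded predictions
}
scores = {lang: macro_f1(gold, p) for lang, p in preds.items()}
# The English vs. non-English gap quantifies language dependence.
gap = scores["en"] - fmean(s for lang, s in scores.items() if lang != "en")
```

A larger `gap` means the encoder representations are more tied to English, which is the quantity tracked across values of $s$ in Figure 3.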
### 3.3 Semantic Content: NLU benchmarks

To verify the semantic contents captured by our representations, we test them on monolingual GLUE-style benchmarks. We focus on benchmarks for languages present in opus-03: Arabic (ALUE, Seelawi et al. 2021), Chinese (CLUE, Xu et al. 2020), English (GLUE, Wang et al. 2018) and French (FLUE, Le et al. 2020). We select tasks that can be learned using a simple classifier; see Table 3 in Appendix C for a full list of the monolingual classification tasks considered. We follow the same methodology as in Section 3.2.

---

${}^{5}$ Previous fully-shared models achieved high zero-shot performances, e.g. Johnson et al. (2017).

---

Figure 4: Average macro-f1 scores ($z$-scaled) on NLU monolingual tasks. Panels 4a to 4d compare across datasets; panels 4e to 4h compare across numbers of shared layers $s$.

Results are displayed in Figure 4. Instead of plotting raw macro-f1 scores, we first $z$-normalize them so as to convert them to a comparable scale. Looking across datasets (Figures 4a to 4d), we do not see a clear variation; at best, we can argue English performance improves when using more language pairs. This is consistent with the English-centric scenario under which we trained our models. Arabic and Chinese results would suggest that $s = 1$ models fare better than $s = 0$ models, but this trend does not carry on convincingly for French.

Comparing across numbers of shared layers (Figures 4e to 4h) suggests this trend might be more complex: all languages tend to lose in accuracy for higher values of $s$, and this effect is all the more pronounced for non-English languages and models trained on smaller datasets. For instance, the optimal number of shared layers for Chinese is either $s = 3$ or $s = 4$, depending on the task under consideration and the number of language pairs in the training dataset, but the gain over $s < 3$ models is minimal. This differs crucially from what we observed in Section 3.1, where only $s = 6$ impacted BLEU scores, and in Section 3.2, where there was a clear improvement from low to mid values of $s$. In sum, probing encoder representations for their semantic contents paints a more nuanced picture, one where semantic accuracy does not clearly align with task fitness or language independence.
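The $z$-normalization step mentioned above can be sketched as follows. This is a minimal illustration under our own assumptions (the function name and the raw score values are ours): the scores of the different model configurations on one task are standardized to zero mean and unit variance, so tasks with different score ranges become comparable.

```python
import statistics

def z_normalize(scores):
    """Standardize scores to zero mean and unit (population) variance."""
    mean = statistics.fmean(scores)
    std = statistics.pstdev(scores)
    if std == 0:
        return [0.0] * len(scores)
    return [(x - mean) / std for x in scores]

# Hypothetical raw macro-f1 scores of four model configurations on one task.
raw = [0.62, 0.64, 0.70, 0.68]
z = z_normalize(raw)
```

After scaling, a score of 0 is average for that task and the relative ordering of configurations is preserved, which is all that is needed to compare trends across heterogeneous benchmarks.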
## 4 Conclusions

We have studied whether different means of achieving multilinguality (sharing parameters and multiplying languages) bring about the same effects. What transpires from our experiments is that the two means are not equivalent: we generally observe higher performances and more reliable representations by setting the optimal number of shared parameters. Crucially, this optimum depends on the criteria chosen to evaluate representations: machine translation quality (Section 3.1), language independence (Section 3.2) and semantic accuracy (Section 3.3) all differed in that respect.

These two approaches are not dichotomous: it is possible to both scale the number of languages and select optimal parameter sharing. What is possible may however not be practical. As guidance to NLP practitioners, we recommend spending effort on tuning the level of parameter sharing for the task at hand. Sharing either too little (0 to 1 layers in our experiments) or too much (sharing the entire encoder) results in sub-optimal performance overall, but the optimal number of layers to share depends on the task. Spending significant effort on acquiring data for additional language pairs may not yield improved representations past the initial stages of data collection (opus-03 in our experiments).

## References

David Adelani, Jesujoba Alabi, Angela Fan, Julia Kreutzer, Xiaoyu Shen, Machel Reid, Dana Ruiter, Dietrich Klakow, Peter Nabende, Ernie Chang, Tajuddeen Gwadabe, Freshia Sackey, Bonaventure F. P. Dossou, Chris Emezue, Colin Leong, Michael Beukman, Shamsuddeen Muhammad, Guyo Jarso, Oreen Yousuf, Andre Niyongabo Rubungo, Gilles Hacheme, Eric Peter Wairagala, Muhammad Umair Nasir, Benjamin Ajibade, Tunde Ajayi, Yvonne Gitau, Jade Abbott, Mohamed Ahmed, Millicent Ochieng, Anuoluwapo Aremu, Perez Ogayo, Jonathan Mukiibi, Fatoumata Ouoba Kabore, Godson Kalipe, Derguene Mbaye, Allahsera Auguste Tapo, Victoire Memdjokam Koagne, Edwin Munkoh-Buabeng, Valencia Wagner, Idris Abdulmumin, Ayodele Awokoya, Happy Buzaaba, Blessing Sibanda, Andiswa Bukula, and Sam Manthalu. 2022. A few thousand translations go a long way! Leveraging pre-trained models for African news translation. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3053-3070, Seattle, United States. Association for Computational Linguistics.

Roee Aharoni, Melvin Johnson, and Orhan Firat. 2019. Massively multilingual neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3874-3884, Minneapolis, Minnesota. Association for Computational Linguistics.

Naveen Arivazhagan, Ankur Bapna, Orhan Firat, Dmitry Lepikhin, Melvin Johnson, Maxim Krikun, Mia Xu Chen, Yuan Cao, George Foster, Colin Cherry, et al. 2019. Massively multilingual neural machine translation in the wild: Findings and challenges. arXiv preprint arXiv:1907.05019.

Graeme Blackwood, Miguel Ballesteros, and Todd Ward. 2018. Multilingual neural machine translation with task-specific attention. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3112-3122, Santa Fe, New Mexico, USA. Association for Computational Linguistics.

Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146.

Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440-8451, Online. Association for Computational Linguistics.

Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating cross-lingual sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2475-2485, Brussels, Belgium. Association for Computational Linguistics.

Marta R. Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, et al. 2022. No language left behind: Scaling human-centered machine translation. arXiv preprint arXiv:2207.04672.

Sumanth Doddapaneni, Gowtham Ramesh, Anoop Kunchukuttan, Pratyush Kumar, and Mitesh M. Khapra. 2021. A primer on pretrained multilingual language models. CoRR, abs/2107.00676.

Daxiang Dong, Hua Wu, Wei He, Dianhai Yu, and Haifeng Wang. 2015. Multi-task learning for multiple language translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1723-1732, Beijing, China. Association for Computational Linguistics.

Carlos Escolano, Marta R. Costa-jussà, José A. R. Fonollosa, and Mikel Artetxe. 2021. Multilingual machine translation: Closing the gap between shared and language-specific encoder-decoders. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 944-948, Online. Association for Computational Linguistics.

Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, and Armand Joulin. 2021. Beyond English-centric multilingual machine translation. J. Mach. Learn. Res., 22(1).

Orhan Firat, Kyunghyun Cho, and Yoshua Bengio. 2016. Multi-way, multilingual neural machine translation with a shared attention mechanism. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 866-875, San Diego, California. Association for Computational Linguistics.

Jiatao Gu, Yong Wang, Kyunghyun Cho, and Victor O.K. Li. 2019. Improved zero-shot neural machine translation via ignoring spurious correlations. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1258-1268, Florence, Italy. Association for Computational Linguistics.

Junjie Hu, Melvin Johnson, Orhan Firat, Aditya Siddhant, and Graham Neubig. 2021. Explicit alignment objectives for multilingual bidirectional encoders. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3633-3643, Online. Association for Computational Linguistics.

Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google's multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339-351.

Mihir Kale, Aditya Siddhant, Rami Al-Rfou, Linting Xue, Noah Constant, and Melvin Johnson. 2021. nmT5 - is parallel data still relevant for pre-training massively multilingual language models? In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 683-691, Online. Association for Computational Linguistics.

Zae Myung Kim, Laurent Besacier, Vassilina Nikoulina, and Didier Schwab. 2021. Do multilingual neural machine translation models contain language pair specific attention heads? In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 2832-2841, Online. Association for Computational Linguistics.

Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. CoRR, abs/1412.6980.

Xiang Kong, Adithya Renduchintala, James Cross, Yuqing Tang, Jiatao Gu, and Xian Li. 2021. Multilingual neural machine translation with deep encoder and multiple shallow decoders. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1613-1624, Online. Association for Computational Linguistics.

Sneha Kudugunta, Ankur Bapna, Isaac Caswell, and Orhan Firat. 2019. Investigating multilingual NMT representations at scale. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1565-1575, Hong Kong, China. Association for Computational Linguistics.

Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoit Crabbé, Laurent Besacier, and Didier Schwab. 2020. FlauBERT: Unsupervised language model pre-training for French. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 2479-2490, Marseille, France. European Language Resources Association.

Junwei Liao, Yu Shi, Ming Gong, Linjun Shou, Hong Qu, and Michael Zeng. 2021. Improving zero-shot neural machine translation on language-specific encoders-decoders. In 2021 International Joint Conference on Neural Networks (IJCNN), pages 1-8.

David Mareček, Hande Celikkanat, Miikka Silfverberg, Vinit Ravishankar, and Jörg Tiedemann. 2020. Are multilingual neural machine translation models better at capturing linguistic features? The Prague Bulletin of Mathematical Linguistics.

Xuan Ouyang, Shuohuan Wang, Chao Pang, Yu Sun, Hao Tian, Hua Wu, and Haifeng Wang. 2021. ERNIE-M: Enhanced multilingual representation by aligning cross-lingual semantics with monolingual corpora. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 27-38, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Taido Purason and Andre Tättar. 2022. Multilingual neural machine translation with the right amount of sharing. In Proceedings of the 23rd Annual Conference of the European Association for Machine Translation, pages 91-100, Ghent, Belgium. European Association for Machine Translation.

Alessandro Raganato and Jörg Tiedemann. 2018. An analysis of encoder representations in transformer-based machine translation. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 287-297, Brussels, Belgium. Association for Computational Linguistics.

Alessandro Raganato, Raúl Vázquez, Mathias Creutz, and Jörg Tiedemann. 2019. An evaluation of language-agnostic inner-attention-based representations in machine translation. In Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019), pages 27-32, Florence, Italy. Association for Computational Linguistics.

Anna Rogers, Olga Kovaleva, and Anna Rumshisky. 2021. A Primer in BERTology: What We Know About How BERT Works. Transactions of the Association for Computational Linguistics, 8:842-866.

Devendra Sachan and Graham Neubig. 2018. Parameter sharing methods for multilingual self-attentional translation models. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 261-271, Brussels, Belgium. Association for Computational Linguistics.

Haitham Seelawi, Ibraheem Tuffaha, Mahmoud Gzawi, Wael Farhan, Bashar Talafha, Riham Badawi, Zyad Sober, Oday Al-Dweik, Abed Alhakim Freihat, and Hussein Al-Natsheh. 2021. ALUE: Arabic language understanding evaluation. In Proceedings of the Sixth Arabic Natural Language Processing Workshop, pages 173-184, Kyiv, Ukraine (Virtual). Association for Computational Linguistics.

Noam Shazeer and Mitchell Stern. 2018. Adafactor: Adaptive learning rates with sublinear memory cost. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 4596-4604. PMLR.

Aditya Siddhant, Ankur Bapna, Orhan Firat, Yuan Cao, Mia Xu Chen, Isaac Caswell, and Xavier Garcia. 2022. Towards the next 1000 languages in multilingual machine translation: Exploring the synergy between supervised and self-supervised learning. arXiv preprint arXiv:2201.03110.

Jörg Tiedemann. 2012. Parallel data, tools and interfaces in OPUS. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 2214-2218, Istanbul, Turkey. European Language Resources Association (ELRA).

Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. 2019. Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5797-5808, Florence, Italy. Association for Computational Linguistics.

Raúl Vázquez, Alessandro Raganato, Mathias Creutz, and Jörg Tiedemann. 2020. A Systematic Study of Inner-Attention-Based Sentence Representations in Multilingual Neural Machine Translation. Computational Linguistics, 46(2):387-424.

Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353-355, Brussels, Belgium. Association for Computational Linguistics.

Shijie Wu and Mark Dredze. 2020. Are all languages created equal in multilingual BERT? In Proceedings of the 5th Workshop on Representation Learning for NLP, pages 120-130, Online. Association for Computational Linguistics.

Liang Xu, Hai Hu, Xuanwei Zhang, Lu Li, Chenjie Cao, Yudong Li, Yechen Xu, Kai Sun, Dian Yu, Cong Yu, Yin Tian, Qianqian Dong, Weitang Liu, Bo Shi, Yiming Cui, Junyi Li, Jun Zeng, Rongzhao Wang, Weijian Xie, Yanting Li, Yina Patterson, Zuoyu Tian, Yiwen Zhang, He Zhou, Shaoweihua Liu, Zhe Zhao, Qipeng Zhao, Cong Yue, Xinrui Zhang, Zhengliang Yang, Kyle Richardson, and Zhenzhong Lan. 2020. CLUE: A Chinese language understanding evaluation benchmark. In Proceedings of the 28th International Conference on Computational Linguistics, pages 4762-4772, Barcelona, Spain (Online). International Committee on Computational Linguistics.

Yilin Yang, Akiko Eriguchi, Alexandre Muzio, Prasad Tadepalli, Stefan Lee, and Hany Hassan. 2021. Improving multilingual translation by representation and gradient regularization. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7266-7279, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Biao Zhang, Ankur Bapna, Rico Sennrich, and Orhan Firat. 2021. Share or not? Learning to schedule language-specific capacity for multilingual translation. In International Conference on Learning Representations.

Biao Zhang, Philip Williams, Ivan Titov, and Rico Sennrich. 2020. Improving massively multilingual neural machine translation and zero-shot translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1628-1639, Online. Association for Computational Linguistics.

Changfeng Zhu, Heng Yu, Shanbo Cheng, and Weihua Luo. 2020. Language-aware interlingua for multilingual neural machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1650-1655, Online. Association for Computational Linguistics.

## A Selected Languages

When constructing larger datasets, we select the additional languages based on four criteria:

(a) maximize the number of datapoints available for training

(b) the presence of zero-shot translation test sets

(c) the existence of XNLI data for the languages

(d) maximize language diversity in the dataset

The information we considered is listed in Table 1, with the exception of criterion (b): only languages in opus-03 and opus-06 are relevant to this criterion.
|
| 670 |
+
|
| 671 |
+
<table><tr><td>ISO 2</td><td>Dataset</td><td>Train size</td><td>XNLI</td></tr><tr><td>ar</td><td>opus-03</td><td>1,000,000</td><td>✓</td></tr><tr><td>fr</td><td>opus-03</td><td>1,000,000</td><td>✓</td></tr><tr><td>zh</td><td>opus-03</td><td>1,000,000</td><td>✓</td></tr><tr><td>de</td><td>opus-06</td><td>1,000,000</td><td>✓</td></tr><tr><td>nl</td><td>opus-06</td><td>1,000,000</td><td>✓</td></tr><tr><td>ru</td><td>opus-06</td><td>1,000,000</td><td>✓</td></tr><tr><td>th</td><td>opus-09</td><td>1,000,000</td><td>✓</td></tr></table>
|
| 672 |
+
|
| 673 |
+
(Continued on next column)
|
| 674 |
+
|
| 675 |
+
745
|
| 676 |
+
|
| 677 |
+
750
|
| 678 |
+
|
| 679 |
+
755
|
| 680 |
+
|
| 681 |
+
756
|
| 682 |
+
|
| 683 |
+
<table><tr><td colspan="4">(Continued from previous column)</td></tr><tr><td>ISO 2</td><td>$\mathbf{{Dataset}}$</td><td>Train size</td><td>XNLI</td></tr><tr><td>tr</td><td>opus-09</td><td>1,000,000</td><td>✓</td></tr><tr><td>vi</td><td>opus-09</td><td>1,000,000</td><td>✓</td></tr><tr><td>bg</td><td>opus-12</td><td>1,000,000</td><td>✓</td></tr><tr><td>el</td><td>opus-12</td><td>1,000,000</td><td>✓</td></tr><tr><td>es</td><td>opus-12</td><td>1,000,000</td><td>✓</td></tr><tr><td>bn</td><td>opus-24</td><td>1,000,000</td><td>-</td></tr><tr><td>eu</td><td>opus-24</td><td>1,000,000</td><td>-</td></tr><tr><td>fa</td><td>opus-24</td><td>1,000,000</td><td>-</td></tr><tr><td>fi</td><td>opus-24</td><td>1,000,000</td><td>-</td></tr><tr><td>he</td><td>opus-24</td><td>1,000,000</td><td>-</td></tr><tr><td>id</td><td>opus-24</td><td>1,000,000</td><td>-</td></tr><tr><td>it</td><td>opus-24</td><td>1,000,000</td><td>-</td></tr><tr><td>ja</td><td>opus-24</td><td>1,000,000</td><td>-</td></tr><tr><td>ko</td><td>opus-24</td><td>1,000,000</td><td>-</td></tr><tr><td>lv</td><td>opus-24</td><td>1,000,000</td><td>-</td></tr><tr><td>mk</td><td>opus-24</td><td>1,000,000</td><td>-</td></tr><tr><td>SV</td><td>opus-24</td><td>1,000,000</td><td>-</td></tr><tr><td>bs</td><td>opus-36</td><td>1,000,000</td><td>-</td></tr><tr><td>CS</td><td>opus-36</td><td>1,000,000</td><td>-</td></tr><tr><td>et</td><td>opus-36</td><td>1,000,000</td><td>-</td></tr><tr><td>hu</td><td>opus-36</td><td>1,000,000</td><td>-</td></tr><tr><td>is</td><td>opus-36</td><td>1,000,000</td><td>-</td></tr><tr><td>lt</td><td>opus-36</td><td>1,000,000</td><td>-</td></tr><tr><td>mt</td><td>opus-36</td><td>1,000,000</td><td>-</td></tr><tr><td>ro</td><td>opus-36</td><td>1,000,000</td><td>-</td></tr><tr><td>sk</td><td>opus-36</td><td>1,000,000</td><td>-</td></tr><tr><td>sq</td><td>opus-36</td><td>1,000,000</td><td>-</td></tr><tr><td>sr</td><td>opus-36</td><td>1,000,000</td><td>-</td></tr><tr><td>uk</td><td>opus-36</td><td>1,000,000</td><td>-<
/td></tr><tr><td>ca</td><td>opus-48</td><td>1,000,000</td><td>-</td></tr><tr><td>da</td><td>opus-48</td><td>1,000,000</td><td>-</td></tr><tr><td>hr</td><td>opus-48</td><td>1,000,000</td><td>-</td></tr><tr><td>mg</td><td>opus-48</td><td>590,771</td><td>-</td></tr><tr><td>ml</td><td>opus-48</td><td>822,746</td><td>-</td></tr><tr><td>ms</td><td>opus-48</td><td>1,000,000</td><td>-</td></tr><tr><td>no</td><td>opus-48</td><td>1,000,000</td><td>-</td></tr><tr><td>pl</td><td>opus-48</td><td>1,000,000</td><td>-</td></tr><tr><td>pt</td><td>opus-48</td><td>1,000,000</td><td>-</td></tr><tr><td>si</td><td>opus-48</td><td>979,109</td><td>-</td></tr><tr><td>s1</td><td>opus-48</td><td>1,000,000</td><td>-</td></tr><tr><td>ur</td><td>opus-48</td><td>753,913</td><td>-</td></tr><tr><td>af</td><td>opus-60</td><td>275,512</td><td>-</td></tr><tr><td>cy</td><td>opus-60</td><td>289,521</td><td>-</td></tr><tr><td>eo</td><td>opus-60</td><td>337,106</td><td>-</td></tr><tr><td>ga</td><td>opus-60</td><td>289,524</td><td>-</td></tr><tr><td>gl</td><td>opus-60</td><td>515,344</td><td>-</td></tr><tr><td>gu</td><td>opus-60</td><td>318,306</td><td>-</td></tr></table>
|
| 684 |
+
|
| 685 |
+
757
|
| 686 |
+
|
| 687 |
+
758
|
| 688 |
+
|
| 689 |
+
759
|
| 690 |
+
|
| 691 |
+
760
|
| 692 |
+
|
| 693 |
+
761
|
| 694 |
+
|
| 695 |
+
762
|
| 696 |
+
|
| 697 |
+
763
|
| 698 |
+
|
| 699 |
+
764
|
| 700 |
+
|
| 701 |
+
765
|
| 702 |
+
|
| 703 |
+
766
|
| 704 |
+
|
| 705 |
+
767
|
| 706 |
+
|
| 707 |
+
768
|
| 708 |
+
|
| 709 |
+
769
|
| 710 |
+
|
| 711 |
+
770
|
| 712 |
+
|
| 713 |
+
771
|
| 714 |
+
|
| 715 |
+
772
|
| 716 |
+
|
| 717 |
+
773
|
| 718 |
+
|
| 719 |
+
774
|
| 720 |
+
|
| 721 |
+
775
|
| 722 |
+
|
| 723 |
+
776
|
| 724 |
+
|
| 725 |
+
777
|
| 726 |
+
|
| 727 |
+
778
|
| 728 |
+
|
| 729 |
+
779
|
| 730 |
+
|
| 731 |
+
780
|
| 732 |
+
|
| 733 |
+
781
|
| 734 |
+
|
| 735 |
+
782
|
| 736 |
+
|
| 737 |
+
783
|
| 738 |
+
|
| 739 |
+
784
|
| 740 |
+
|
| 741 |
+
785
|
| 742 |
+
|
| 743 |
+
786
|
| 744 |
+
|
| 745 |
+
787
|
| 746 |
+
|
| 747 |
+
788
|
| 748 |
+
|
| 749 |
+
789
|
| 750 |
+
|
| 751 |
+
790
|
| 752 |
+
|
| 753 |
+
791
|
| 754 |
+
|
| 755 |
+
792
|
| 756 |
+
|
| 757 |
+
793
|
| 758 |
+
|
| 759 |
+
794
|
| 760 |
+
|
| 761 |
+
795
|
| 762 |
+
|
| 763 |
+
796
|
| 764 |
+
|
| 765 |
+
797
|
| 766 |
+
|
| 767 |
+
798
|
| 768 |
+
|
| 769 |
+
799
|
| 770 |
+
|
| 771 |
+
800
|
| 772 |
+
|
| 773 |
+
801
|
| 774 |
+
|
| 775 |
+
802
|
| 776 |
+
|
| 777 |
+
803
|
| 778 |
+
|
| 779 |
+
804
|
| 780 |
+
|
| 781 |
+
805
|
| 782 |
+
|
| 783 |
+
806
|
| 784 |
+
|
| 785 |
+
807
|
| 786 |
+
|
| 787 |
+
808
|
| 788 |
+
|
| 789 |
+
809
|
| 790 |
+
|
| 791 |
+
(Continued on next column)
|
| 792 |
+
|
| 793 |
+
<table><tr><td colspan="4">(Continued from previous column)</td></tr><tr><td>ISO 2</td><td>Dataset</td><td>Train size</td><td>XNLI</td></tr><tr><td>hi</td><td>opus-60</td><td>534,319</td><td>-</td></tr><tr><td>ka</td><td>opus-60</td><td>377,306</td><td>-</td></tr><tr><td>ne</td><td>opus-60</td><td>406,381</td><td>-</td></tr><tr><td>nn</td><td>opus-60</td><td>486,055</td><td>-</td></tr><tr><td>sh</td><td>opus-60</td><td>267,211</td><td>-</td></tr><tr><td>xh</td><td>opus-60</td><td>439,671</td><td>-</td></tr><tr><td>as</td><td>opus-72</td><td>138,479</td><td>-</td></tr><tr><td>az</td><td>opus-72</td><td>262,089</td><td>-</td></tr><tr><td>br</td><td>opus-72</td><td>153,447</td><td>-</td></tr><tr><td>km</td><td>opus-72</td><td>111,483</td><td>-</td></tr><tr><td>ku</td><td>opus-72</td><td>144,844</td><td>-</td></tr><tr><td>nb</td><td>opus-72</td><td>142,906</td><td>-</td></tr><tr><td>pa</td><td>opus-72</td><td>107,296</td><td>-</td></tr><tr><td>rw</td><td>opus-72</td><td>173,823</td><td>-</td></tr><tr><td>ta</td><td>opus-72</td><td>227,014</td><td>-</td></tr><tr><td>tg</td><td>opus-72</td><td>193,882</td><td>-</td></tr><tr><td>uz</td><td>opus-72</td><td>173,157</td><td>-</td></tr><tr><td>wa</td><td>opus-72</td><td>104,496</td><td>-</td></tr></table>
Table 1: Languages selected, matched with the first sub-dataset in which they appear
## B Hyperparameters & Training details
Models were trained for a total of 100K steps to minimize the negative log-likelihood of the target translation. We accumulate gradients over all translation directions before back-propagation. We optimize our models using AdaFactor (Shazeer and Stern, 2018). Training occurred on SLURM clusters of A100 NVIDIA GPUs. Each GPU contains the parameters for 3 languages (i.e., 9 translation directions); groups of 4 GPUs form a node. In other words, models for opus-03 were trained on a single A100 GPU, whereas models for opus-72 were trained over 24 A100 GPUs, distributed across 6 nodes. We did not go beyond opus-72 because this matches the largest setup in the computing cluster we used for our experiments. Individual models were trained in under 36 hours.

Hyperparameters shared across all models are shown in Table 2; they were set a priori so as not to use the validation split of opus-100, as it has been reported to significantly overlap with the test set (Yang et al., 2021). Input data is pre-tokenized using language-specific SentencePiece models with 32,000 pieces, except for Chinese and Japanese, where we use 64,000 pieces.
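The accumulation scheme above (summing gradients over every translation direction before a single update) can be sketched as follows. This is an illustrative pure-Python mock of our own, not the paper's training code: the toy squared-error gradient and the parameter values are assumptions.

```python
# Illustrative mock of gradient accumulation over translation directions:
# every direction contributes a gradient, and the parameters are updated
# only once all directions have been processed.

def loss_grad(params, targets):
    # Hypothetical per-direction gradient: gradient of 0.5 * (p - t)^2.
    return [p - t for p, t in zip(params, targets)]

def accumulation_step(params, direction_batches, lr=0.1):
    # Accumulate gradients over all translation directions...
    accum = [0.0] * len(params)
    for targets in direction_batches:      # e.g. L->en, en->L, L->L denoising
        for i, g in enumerate(loss_grad(params, targets)):
            accum[i] += g
    # ...then apply a single parameter update (one back-propagation step).
    return [p - lr * a for p, a in zip(params, accum)]

params = [1.0, 2.0]
direction_batches = [[0.0, 0.0], [0.0, 0.0], [1.0, 2.0]]  # three toy directions
new_params = accumulation_step(params, direction_batches)
```

The single update after the loop mirrors the setup described above, where gradients from all directions are pooled before back-propagation rather than applied per direction.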
<table><tr><td>Parameter</td><td>Value</td></tr><tr><td>src. seq. length</td><td>200</td></tr><tr><td>tgt. seq. length</td><td>200</td></tr><tr><td>subword type</td><td>sentencepiece</td></tr><tr><td>mask ratio</td><td>0.2</td></tr><tr><td>replace length</td><td>1</td></tr><tr><td>batch size</td><td>4,096</td></tr><tr><td>batch type</td><td>tokens</td></tr><tr><td>normalization</td><td>tokens</td></tr><tr><td>valid batch size</td><td>4,096</td></tr><tr><td>max generator batches</td><td>2</td></tr><tr><td>encoder type</td><td>transformer</td></tr><tr><td>decoder type</td><td>transformer</td></tr><tr><td>rnn size</td><td>512</td></tr><tr><td>word vec size</td><td>512</td></tr><tr><td>transformer ff</td><td>2,048</td></tr><tr><td>heads</td><td>8</td></tr><tr><td>dec layers</td><td>6</td></tr><tr><td>dropout</td><td>0.1</td></tr><tr><td>label smoothing</td><td>0.1</td></tr><tr><td>param init</td><td>0.0</td></tr><tr><td>param init glorot</td><td>true</td></tr><tr><td>position encoding</td><td>true</td></tr><tr><td>valid steps</td><td>500,000</td></tr><tr><td>warmup steps</td><td>10,000</td></tr><tr><td>report every</td><td>50</td></tr><tr><td>save checkpoint steps</td><td>25,000</td></tr><tr><td>keep checkpoint</td><td>3</td></tr><tr><td>accum count</td><td>1</td></tr><tr><td>optim</td><td>adafactor</td></tr><tr><td>decay method</td><td>none</td></tr><tr><td>learning rate</td><td>3.0</td></tr><tr><td>max grad norm</td><td>0.0</td></tr><tr><td>seed</td><td>3435</td></tr><tr><td>model type</td><td>text</td></tr></table>
Table 2: Set of hyper-parameters shared across all our models
## C Classifiers training procedure
In Sections 3.2 and 3.3, we train classifier probes to investigate the information contained in the encoder spaces. All classifiers correspond to two-layer perceptrons with a hidden layer size of 128, dropout applied to the input layer, and trained with Adam (Kingma and Ba, 2015) to optimize cross-entropy. We define sentence embeddings by simply taking the sum of the encoder output vectors; the input features of the classifiers are the concatenation of these sentence embeddings. For each set of targets, we train 10 classifiers with different random seeds and report the mean and standard deviation of macro-f1 scores. In Section 3.2, we set the learning rate for XNLI to $5 \cdot 10^{-5}$ with a dropout of $p = 0.1$ and use minibatches of 100 examples. Note that we consider each language in XNLI as a different set of targets, and therefore use different classifiers to compute macro-f1 scores.
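The feature construction and score aggregation described above can be sketched as follows; this is a toy illustration of our own (the vector dimensions and f1 values are made up), not the actual probing code.

```python
from statistics import mean, stdev

def sentence_embedding(encoder_outputs):
    # Sentence embedding = element-wise sum of the encoder output vectors.
    dim = len(encoder_outputs[0])
    return [sum(vec[i] for vec in encoder_outputs) for i in range(dim)]

def probe_features(premise_outputs, hypothesis_outputs):
    # Classifier input = concatenation of the two sentence embeddings
    # (e.g. premise and hypothesis for XNLI).
    return sentence_embedding(premise_outputs) + sentence_embedding(hypothesis_outputs)

# Toy 3-dimensional encoder outputs: a 2-token premise, a 1-token hypothesis.
premise = [[1.0, 0.0, 2.0], [0.0, 1.0, 1.0]]
hypothesis = [[0.5, 0.5, 0.0]]
features = probe_features(premise, hypothesis)

# Per set of targets: 10 classifiers with different seeds, report mean/std macro-f1.
f1_per_seed = [0.71, 0.69, 0.70, 0.72, 0.68, 0.70, 0.71, 0.69, 0.70, 0.70]
f1_mean, f1_std = mean(f1_per_seed), stdev(f1_per_seed)
```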
<table><tr><td>Dataset</td><td>Task</td><td>Size</td></tr><tr><td>NSURL-2019 Task 8</td><td>question similarity</td><td>10,797</td></tr><tr><td>OSACT4 Task-A</td><td>offensive speech detection</td><td>6,839</td></tr><tr><td>OSACT4 Task-B</td><td>hate speech detection</td><td>6,839</td></tr><tr><td>COLA</td><td>linguistic acceptability</td><td>8,551</td></tr><tr><td>MRPC</td><td>sentence similarity</td><td>3,668</td></tr><tr><td>QNLI</td><td>NLI</td><td>104,743</td></tr><tr><td>QQP</td><td>question similarity</td><td>363,846</td></tr><tr><td>PAWSX</td><td>paraphrase detection</td><td>49,399</td></tr><tr><td>CSTSB</td><td>paraphrase detection</td><td>5,749</td></tr><tr><td>XNLI</td><td>NLI</td><td>392,702</td></tr><tr><td>AFQMC</td><td>question similarity</td><td>34,334</td></tr><tr><td>HCMNLI</td><td>NLI</td><td>391,783</td></tr><tr><td>TNEWS</td><td>news topic classification</td><td>53,360</td></tr></table>
Table 3: NLU monolingual classification tasks
The classification tasks selected for studying the semantic contents of encoder representations in Section 3.3 are shown in Table 3. Due to the limited number of usable tasks in FLUE, we also include a STSB French translation${}^{6}$ which we binarize by considering similarity judgments $> 3$ as indicating near-paraphrases. Classifiers discussed in Section 3.3 are trained for 10 epochs with a dropout of 0.1 and a learning rate of $5 \cdot 10^{-5}$, using minibatches of 100 datapoints. We reduced the number of epochs to 5 for all Arabic tasks and used minibatches of 10 examples for the OSACT4 shared tasks A & B due to the longer length of the training examples.
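The binarization rule for the French STSB similarity judgments amounts to thresholding the 0-5 similarity scale; a minimal sketch (the example scores are our own):

```python
# Similarity judgments strictly greater than 3 count as near-paraphrases.
def binarize(similarity_score):
    return 1 if similarity_score > 3 else 0

labels = [binarize(s) for s in [0.8, 3.0, 3.2, 4.5]]  # note: 3.0 is not > 3
```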
## D Limitations
### D.1 Material Limitations
As stated in the introduction, we make multiple explicit assumptions that limit the scope of this research. It is plausible that parameter-sharing in the decoder or that replicating our experiments in a non-English-centric scenario will yield a different set of conclusions.

Also worth highlighting are the computational requirements underlying this work: the most demanding experiments require up to 24 A100 NVIDIA GPUs. A side-effect of these demanding computational requirements is that we have not been able to replicate model training across multiple seeds, and therefore report results based on a single model per dataset and number of shared layers. It is also plausible that greatly scaling up the total number of parameters in the networks would affect the conclusions.

Lastly, our use of classifiers to probe for language independence and semantic contents of the representations can be discussed. We have avoided discussing the raw performances of our classifiers, and instead discussed the trends that we observed across our different MT models. Results from our classifiers should be taken as indicators of the aspects we are trying to probe, rather than accurate measures of said aspects: replication studies and further evidence from other settings would be required to establish our models' performances on the criteria we outlined.

---
${}^{6}$ https://huggingface.co/datasets/stsb_multi_mt
---
### D.2 Ethics Considerations
In the present paper, we have argued against adding languages if practical implementation costs are a relevant constraint. We acknowledge that this recommendation may push NLP researchers and engineers towards constructing models specifically for high-resource languages, which would further the coverage gap between low- and high-resource languages.

Nonetheless, it must be stressed that our experiments say nothing of linguistic diversity, as we have ensured that even our smallest dataset (opus-03) would contain maximally different languages. Also relevant to the discussion at hand is that one scenario where practical implementation costs are a known constraint is that of developing low-resource language systems and NLP tools. We believe that providing evidence as to which approach is most effective can prove valuable in such scenarios as well, so as to ensure that efforts can be focused on the most viable path towards endowing lower-resource languages with more efficient and suitable tools.
NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/1vkyEY-HeLY/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,405 @@
§ DOZENS OF TRANSLATION DIRECTIONS OR MILLIONS OF SHARED PARAMETERS? COMPARING TWO TYPES OF MULTILINGUALITY IN MODULAR MACHINE TRANSLATION
Anonymous Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymouser Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain

Anonymousest Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain
§ ABSTRACT
There are several ways of implementing multilingual NLP systems but little consensus as to whether different approaches exhibit similar effects. Are the trends that we observe when adding more languages the same as those we observe when sharing more parameters? We focus on encoder representations drawn from modular multilingual machine translation systems in an English-centric scenario, and study their quality from multiple aspects: how adequate they are for machine translation, how independent of the source language they are, and what semantic information they convey. Adding translation directions in English-centric scenarios does not conclusively lead to an increase in translation quality. Shared layers increase performance on zero-shot translation pairs and lead to more language-independent representations, but these improvements do not systematically align with more semantically accurate representations, from a monolingual standpoint.
§ 1 INTRODUCTION
Multilinguality, within the scope of neural NLP, can mean either ensuring that computations for different languages are homogeneous, or ensuring that models are trained with data coming from different languages. These two definitions are not as equivalent as they might appear: for instance, modular architectures, where some parameters are devoted to specific inputs, can only be conceived as multilingual under the latter definition.

Both of these trends have been explored across multiple works. Machine translation studies have looked into sharing no parameters at all (Escolano et al., 2021), sharing across linguistically informed groups (Fan et al., 2021; Purason and Tättar, 2022), sharing only some components across all languages (Dong et al., 2015; Firat et al., 2016; Vázquez et al., 2020; Liao et al., 2021; Zhu et al., 2020; Kong et al., 2021; Blackwood et al., 2018; Sachan and Neubig, 2018; Zhang et al., 2021), and sharing the entire model (Johnson et al., 2017). Concerns about multilinguality have spearheaded research on how to make representations and systems more reliable for typologically and linguistically diverse data (Bojanowski et al., 2017; Adelani et al., 2022), the distinction between multilingual and monolingual representations (Wu and Dredze, 2020), the specificity of massively-multilingual representations (Kudugunta et al., 2019) or the effects of having more diverse data (Arivazhagan et al., 2019; Aharoni et al., 2019; Costa-jussà et al., 2022; Siddhant et al., 2022; Kim et al., 2021; Voita et al., 2019). In this paper, we study whether these different implementations of multilinguality yield qualitatively different types of representations; in other words: are the effects of parameter sharing orthogonal to those of adding new languages?

To broach this question, we make three simplifying assumptions. First, we only consider the task of multilingual machine translation: an exhaustive study of the impact of all multilingual NLP tasks is beyond the scope of this paper. Moreover, massively multilingual language models are known to leverage parallel data to enhance semantic abstractions (Hu et al., 2021; Ouyang et al., 2021; Kale et al., 2021). Second, we only consider parameter sharing in the last layers of the encoders: we focus on the intermediary representations acquired directly after the encoder and leave decoders for future study. As language selection tokens would compromise the language independence of the representations, this rules out fully shared decoders. Third, we focus on an English-centric scenario: i.e., all translation directions seen during training contain English as a source or target language. While such an approach is not without issues (Gu et al., 2019; Zhang et al., 2020), it makes it possible to select translation directions for zero-shot evaluations in a principled manner. Furthermore, most multilingual translation datasets are highly skewed in any case and contain orders of magnitude more English examples (e.g., Costa-jussà et al., 2022).

We conduct our study by testing encoder outputs on three aspects: task fitness, language independence and semantic content. These features have been discussed in earlier literature: probing pretrained language models for semantic content in particular has proven very fecund (e.g., Rogers et al., 2021; Doddapaneni et al., 2021). As for machine translation, these studies are less numerous, although similar aspects have been investigated (Raganato and Tiedemann, 2018). For instance, Kudugunta et al. (2019) study how the learned representations evolve in a multilingual scenario, whereas Vázquez et al. (2020), Raganato et al. (2019) or Mareček et al. (2020) focus on the use of multilingual MT as a signal for learning language. As we will show, studying representations under different angles is required in order to highlight the differences underpinning distinct implementations of multilinguality.${}^{1}$
§ 2 EXPERIMENTAL SETUP
§ 2.1 DATASETS
We focus on datasets derived from the OPUS-100 corpus (Zhang et al., 2020), built by randomly sampling from the OPUS parallel text collection (Tiedemann, 2012). We construct datasets containing 3, 6, 9, 12, 24, 36, 48, 60 and 72 languages other than English and refer to them as opus-03, opus-06, and so on. To test the impact on the model performance when adding languages, we build the datasets with an incremental approach, so that smaller datasets are systematically contained in the larger ones. Languages are selected so as to maximize the number of available datapoints (for training, zero-shot evaluation and probing) as well as linguistic diversity. See Appendix A for details.
§ 2.2 MODELS
Figure 1: Example model architectures for varying number of shared encoder layers $s$. Modules with a light grey background are language-specific, modules with a dark grey background are fully shared.

We train modular sequence-to-sequence Transformer models (Escolano et al., 2021), with 6 layers in the encoder and the decoder. Decoders are systematically language-specific, whereas encoders contain $s \in \{0,\ldots,6\}$ fully-shared layers on top of $6 - s$ language-specific layers, as shown in Figure 1. We train distinct models for each value of $s$ and each dataset; due to the computational costs incurred, we consider $s \geq 2$ only in combination with datasets up to opus-12, as well as opus-36. Models vary along two axes: models trained on larger datasets are exposed to more languages, whereas models with higher values of $s$ share more parameters. When training models over a dataset, we consider the translation directions $L$-to-English, English-to-$L$, and an $L$-to-$L$ denoising task, for all languages $L$ in the dataset.${}^{2}$ An illustration of opus-03 models is shown in Figure 1. Training details are given in Appendix B.
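The resulting per-dataset task inventory (footnote 2: an opus-$n$ dataset yields $3n$ tasks) can be enumerated as follows; an illustrative sketch of our own, using the opus-03 languages (Arabic, French, Chinese), with tuples standing in for translation directions:

```python
def training_tasks(languages):
    # For each non-English language L: L->en, en->L, and an L->L
    # denoising task, i.e. 3n tasks for n languages.
    tasks = []
    for lang in languages:
        tasks.append((lang, "en"))   # L-to-English translation
        tasks.append(("en", lang))   # English-to-L translation
        tasks.append((lang, lang))   # L-to-L denoising
    return tasks

opus03 = ["ar", "fr", "zh"]          # opus-03 languages besides English
tasks = training_tasks(opus03)       # 9 tasks in total
```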
§ 3 EXPERIMENTS
§ 3.1 TASK FITNESS: MACHINE TRANSLATION
The first aspect we consider is the models' performance on machine translation. We report BLEU scores in Figure 2. Where relevant, we also include supervised results for translation directions present in opus-06 so as to provide comparable scores. ${}^{3}$
The most obvious trend present is that models trained on opus-03 with $s \geq 5$ underfit, and perform considerably worse than their $s < 5$ counterparts. Otherwise, models with an equivalent number of shared layers $s$ tend to perform very reliably across datasets: e.g., across all supervised translation directions we tested, we found that the maximum variation in BLEU scores for $s < 2$ was of $\pm 4.8$. In Figure 2b, we also observe consistent
---
${}^{1}$ Code, data, and full results of our experiments will be made available upon acceptance.
${}^{2}$ I.e., a model trained over the opus-$n$ dataset is trained over $3n$ tasks: $2n$ translation tasks, plus $n$ denoising tasks for languages other than English.
${}^{3}$ Note that all available zero-shot translation directions are systematically present in opus-06 and all larger datasets.
${}^{4}$ See also Aharoni et al. (2019) or Conneau et al. (2020).
---
Figure 2: Average BLEU scores. (a) Average BLEU scores per dataset size: BLEU against the number of languages (24 to 72), for $s = 0$ and $s = 1$, over supervised directions, zero-shot directions, and directions in opus-06. (b) Average BLEU scores per number of shared layers $s$, per dataset (opus-03 to opus-36).
improvement on zero-shot translation when increasing the number of shared layers $s$ from 0 to 4, and for opus-36 this trend only breaks when the full stack is shared $\left( {s = 6}\right)$. Lastly, results in Figure 2a suggest that adding more translation directions decreases zero-shot translation performances, but this trend seems to reverse when a significant number of layers are shared $\left( {s > 3}\right)$, as displayed in Figure 2b. In all, under the setup we consider here, it appears that task fitness and zero-shot generalization are best achieved by sharing more parameters, rather than adding translation directions, although excessive sharing also impacts performances.${}^{5}$
§ 3.2 LANGUAGE INDEPENDENCE: XNLI
To test to what degree encoder representations are language-independent, we train classifier probes on XNLI (Conneau et al., 2018). We train models on English and report results for all languages: the gap between English and non-English performances
Figure 3: Average XNLI macro-f1 scores
quantifies how language-dependent the representations are. We report macro-f1 on the validation split; if no such split is available, we randomly select 10% instead. See Appendix C for details.

Figure 3 underscores that our English-centric scenario prevents language-independent encoder representations: English targets fare better than their counterparts. Variation seems driven by the number of shared parameters: in Figure 3a, models with $s = 1$ outperform models with $s = 0$, whereas in Figure 3b, higher values of $s$ tend to close the gap between English and other targets. Interestingly, higher values of $s$ yield lower f1 scores in smaller datasets, both for English and other languages. In particular, we observe a drop for all languages on opus-03 with $s > 4$, matching the underfitting we saw in Section 3.1; this trend is also attested in all datasets except opus-36. But on the whole, a greater number of shared parameters leads to more language-independent representations.
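The fallback evaluation split mentioned above (a random 10% hold-out when no validation split exists) can be sketched as follows; a sketch of our own, with an arbitrary fixed seed:

```python
import random

def validation_fallback(examples, fraction=0.1, seed=0):
    # Hold out a random fraction (here 10%) of the examples to stand in
    # for a missing validation split.
    rng = random.Random(seed)
    held_out = set(rng.sample(range(len(examples)), k=int(len(examples) * fraction)))
    valid = [ex for i, ex in enumerate(examples) if i in held_out]
    train = [ex for i, ex in enumerate(examples) if i not in held_out]
    return train, valid

train, valid = validation_fallback(list(range(100)))
```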
§ 3.3 SEMANTIC CONTENT: NLU BENCHMARKS
To verify the semantic contents captured by our representations, we test them on monolingual GLUE-style benchmarks. We focus on benchmarks for languages present in opus-03: Arabic (ALUE, Seelawi et al. 2021), Chinese (CLUE, Xu et al. 2020), English (GLUE, Wang et al. 2018) and French (FLUE, Le et al. 2020). We select tasks that can be learned using a simple classifier; see Table 3 in Appendix C for a full list of the monolingual classification tasks considered. We follow the same methodology as in Section 3.2.

Figure 4: Average macro-f1 scores ($z$-scaled) on NLU monolingual tasks

---
${}^{5}$ Previous fully-shared models achieved high zero-shot performances, e.g. Johnson et al. (2017).
---
Results are displayed in Figure 4. Instead of plotting raw macro-f1 scores, we first $z$-normalize them so as to convert them to a comparable scale. Looking across datasets (Figures 4a to 4d), we do not see a clear variation; at best, we can argue that English performance improves when using more language pairs. This is consistent with the English-centric scenario under which we trained our models. Arabic and Chinese results would suggest that $s = 1$ models fare better than $s = 0$ models, but this trend does not carry on convincingly for French.
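The $z$-normalization step is plain standardization of the scores being compared: subtract their mean and divide by their standard deviation. A minimal sketch (the exact grouping used for scaling is an assumption; the paper only states that scores are converted to a comparable scale):

```python
import statistics

def z_scale(scores):
    """Standardize a list of macro-f1 scores to zero mean, unit variance."""
    mu = statistics.mean(scores)
    sigma = statistics.pstdev(scores)  # population std dev
    return [(x - mu) / sigma for x in scores]

f1 = [0.62, 0.70, 0.66]
z = z_scale(f1)
assert abs(sum(z)) < 1e-9  # zero mean after scaling
```

After scaling, scores from benchmarks with different raw difficulty can be averaged or compared directly.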
Comparing across the number of shared layers (Figures 4e to 4h) suggests this trend might be more complex: all languages tend to lose accuracy for higher values of $s$, and this effect is all the more pronounced for non-English languages and models trained on smaller datasets. For instance, the optimal number of shared layers for Chinese is either $s = 3$ or $s = 4$, depending on the task under consideration and the number of language pairs in the training dataset, but the gain over $s < 3$ models is minimal. This differs crucially from what we observed in Section 3.1, where only $s = 6$ impacted BLEU scores, and in Section 3.2, where there was a clear improvement from low to mid values of $s$. In sum, probing encoder representations for their semantic contents paints a more nuanced picture, one where semantic accuracy does not clearly align with task fitness or language-independence.
§ 4 CONCLUSIONS
We have studied whether different means of achieving multilinguality (sharing parameters and multiplying languages) bring about the same effects. What transpires from our experiments is that the two means are not equivalent: we generally observe higher performances and more reliable representations by setting the optimal number of shared parameters. Crucially, this optimum depends on the criteria chosen to evaluate representations: machine translation quality (Section 3.1), language independence (Section 3.2) and semantic accuracy (Section 3.3) all differed in that respect.

These two approaches are not dichotomous: it is possible to both scale the number of languages and select optimal parameter sharing. What is possible may however not be practical. As guidance to NLP practitioners, we recommend spending effort on tuning the level of parameter sharing for the task at hand. Sharing either too little ($0 - 1$ layers in our experiments) or too much (sharing the entire encoder) results in sub-optimal performance overall, but the optimal number of layers to share depends on the task. Spending significant effort on acquiring data for additional language pairs may not yield improved representations past the initial stages of data collection (opus-03 in our experiments).
NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/4CTnlIc1rhw/Initial_manuscript_md/Initial_manuscript.md
ADDED
@@ -0,0 +1,777 @@
# Estonian Named Entity Recognition: New Datasets and Models
## Abstract
This paper describes the annotation of two Estonian named entity recognition datasets. For this purpose, annotation guidelines for labeling eleven types of entities were created. In addition to the common entities of person names, organization names and locations, the annotation scheme includes geopolitical entities, product names, titles (or roles), events, dates, times, monetary values and percents. One annotation task involved the reannotation of an existing Estonian named entity recognition dataset, consisting mostly of news texts, with the new annotation scheme. The second annotated dataset includes new texts taken from both news and social media domains. Transformer-based models were trained on these datasets to establish the baseline predictive performance. The best results were obtained by training a single model on the joined dataset, suggesting that the domain differences between the datasets are relatively small.
## 1 Introduction
Named entity recognition (NER) is a useful natural language processing task that enables the extraction of information from text in the form of named entities, which can be useful for various downstream tasks, for instance anonymisation of documents or assigning thematic keywords to texts. Contemporary NER systems are usually trained as supervised tagging models, where annotated training data is used to train the model to tag the spans corresponding to named entities in the text.

For Estonian, previous efforts to develop NER systems involve the creation of an annotated corpus labeled with person, organization and location names (Tkachenko et al., 2013), and training CRF- and transformer-based models on this data (Tkachenko et al., 2013; Kittask et al., 2020; Tanvir et al., 2021). From a less common domain, a corpus of 19th century parish court records annotated with named entities was recently created (Orasmaa et al., 2022).
This paper describes our efforts to annotate additional Estonian data with named entities, with the purpose of advancing the development of general-purpose NER systems for Estonian. The data annotation part of the study involves creating two annotated datasets labelled with a rich annotation scheme developed as part of the study. First, we reannotated the existing NER dataset according to the new annotation scheme, and secondly, we annotated about ${130}\mathrm{\;K}$ tokens of new texts, mostly from news portals and social media sources.
The second part of the paper describes the experimental results of training transformer-based
|
| 80 |
+
|
| 81 |
+
predictive models on these datasets. Our main 090 goal was first to establish the baseline performance
|
| 82 |
+
|
| 83 |
+
for various entity types based on the new annota- 092 tion scheme. The second goal was to study the optimal ways of using these two datasets that originate from somewhat different domains. The experimental results showed that the baseline perfor-
|
| 84 |
+
|
| 85 |
+
mance on the newly annotated dataset is somewhat 097 lower than was on the less richly annotated Estonian NER dataset, which suggests that the new annotation can be on one hand more noisy and on the other hand more rich and complex. We also
|
| 86 |
+
|
| 87 |
+
found that the domains of the two datasets are sim- 102 ilar enough, such that a model trained on the joint dataset performs as well or better than two models trained on each dataset separately.
In short, our contributions in this paper are:

1. Two new Estonian NER datasets annotated with a rich set of entities;

2. Baseline performance assessment of a transformer-based model on these two datasets.
## 2 Dataset Creation
|
| 98 |
+
|
| 99 |
+
In this section, we describe the process of creating the two labelled NER datasets for Estonian.
|
| 100 |
+
|
| 101 |
+
### 2.1 Data Sources
The first dataset, which we subsequently call the Main NER dataset, is a reannotation of the existing Estonian NER dataset (Tkachenko et al., 2013). This dataset consists of ca ${220}\mathrm{\;K}$ words of news texts and is generally homogeneous in its domain. Previously, this dataset had been annotated with person, organization and location names. Previous works adopting this dataset have observed errors in its annotation (Tanvir et al., 2021), which was one of the reasons why this dataset was chosen for reannotation.

The second dataset, which we subsequently call the New NER dataset, is entirely new. To compile this dataset, the aim was to choose ca ${130}\mathrm{K}$ tokens from both news and social media domains, with roughly ${100}\mathrm{\;K}$ from news domains and ${30}\mathrm{\;K}$ from the social media domain. The underlying texts were sampled from the Estonian Web Corpus 2017 (Jakubíček et al., 2013). The metadata containing the URL and the title of the web page was used to choose the texts from both categories. For selecting the news sources, we looked for URLs and titles referring to the major Estonian news sites, such as Postimees, EPL, ERR and Delfi. For extracting social media texts, we searched for keywords that would point to well-known blogging and forum platforms, such as blogspot and foorum.
### 2.2 Annotation Guidelines
Annotation guidelines were developed to label the data. In addition to the commonly used person, organization and location names, we wanted to adopt a richer set of labels. First, we decided to separate locations into geopolitical entities and geographical locations. Following similar works in Finnish (Ruokolainen et al., 2020), we added events, products and dates. Finally, we also added titles, times, monetary values and percentages. The short description for each entity as used in the annotation guidelines was as follows:

- Persons (PER): This includes names referring to all kinds of real and fictional persons.

- Organizations (ORG): This includes all kinds of clearly and unambiguously identifiable organizations, for example companies and similar commercial institutions as well as administrative bodies.

- Locations (LOC): This includes all geographical locations that are not associated with a specific political organization, such as GPEs.

- Geopolitical entities (GPE): This includes all geographic locations associated with a political organization, such as countries, cities and empires.

- Titles (TITLE): This includes job titles, positions, scientific degrees, etc. Titles must be indicated only if a specific person behind the title can be identified based on the preceding text. The personal name immediately following the title is not part of the TITLE. If the title is preceded by the ORG tag, only the job title must be marked with the TITLE, not the words in the ORG.

- Products (PROD): This includes all clearly identifiable products, objects, works, etc., by name.

- Events (EVENT): This includes events with a specific name.

- Dates (DATE): This includes time expressions, both of the day/month/year type, e.g., "October 3rd", "in 2020", "2019", "in September", as well as general expressions (yesterday, last month, next year) if the expression has a clear referent. This means that based on the expression it must be possible to determine a specific point in time, i.e., a specific year or month or day. Thus, vague expressions such as a few years from now, a few months ago are not suitable, but more specific five years later, three months ago, the day before yesterday are suitable.

- Times (TIME): This includes time expressions that refer to an entity smaller than a day: times and parts of day that have a referent (analogous to DATE entities). General expressions without a referent are not marked. Durations are also not marked.

- Monetary values (MONEY): This includes expressions that refer to specific currencies and amounts in those currencies.

- Percentages (PERCENT): This includes entities expressing percentages. A percentage can be expressed both with a percentage mark $\left( \% \right)$ or verbally.

Similar to Ruokolainen et al. (2020), we decided to include nested entities in our annotation schema. An example of a nested entity is New York City Government, which itself is an ORG entity and which contains the GPE entity New York. We limited the maximum number of levels of nesting to three. Nested labelling of the same types of entities was not permitted, with the exception of ORG. For instance, in the phrase Republic of Ireland annotated as GPE, the further annotation of Ireland as a nested GPE was not allowed. However, in a phrase such as UN Department of Economic and Social Affairs labelled as ORG, the word token UN was allowed to be annotated as nested ORG.
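Nested annotations of this kind can be represented as spans that carry their own sub-entities. A minimal sketch (the field names are illustrative, not the format of the released datasets):

```python
# Sketch of a nested entity record, up to three levels deep.
# Field names are illustrative only.
entity = {
    "text": "New York City Government",
    "label": "ORG",
    "level": 1,
    "nested": [
        {"text": "New York", "label": "GPE", "level": 2, "nested": []},
    ],
}

def max_depth(e):
    """Deepest nesting level reached by one annotation."""
    if not e["nested"]:
        return e["level"]
    return max(max_depth(child) for child in e["nested"])

assert max_depth(entity) == 2
assert max_depth(entity) <= 3  # the guidelines cap nesting at three levels
```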
### 2.3 Annotation Process
The annotation process took place for both datasets separately. For the Main NER dataset, three annotators were recruited, who were graduate students in general or computational linguistics. All annotators labelled the dataset independently, according to the given guidelines. Two annotators completed the annotations of the full dataset; one annotator completed most of the annotations, except for a few documents. The annotation of the Main NER dataset was done with Label Studio, a free open-source data annotation platform.

The New NER dataset was annotated by a total of twelve annotators. Two annotators labelled the data in full: one was an undergraduate linguistics student and the other a graduate computer science student with an undergraduate degree in linguistics. The remaining ten annotators took part in a graduate-level NLP course, and each of them annotated ca ${12}\mathrm{\;K}$ word tokens as part of their course work. All annotators worked independently, without access to any other person's work, following the given annotation guidelines. Thus, each text received three independent annotations. The annotation of the New NER dataset was done with the now-defunct DataTurks annotation platform.
### 2.4 Label Harmonization
The annotations of the New NER dataset were harmonized using both automatic and manual approaches. First, automatic harmonization was applied according to the principle that if annotators A and B had agreed on an annotation but annotator C had not annotated anything, the final label was set to the annotation of A and B. After that, the entire corpus was manually reviewed by two people, one of whom was the original annotator A and the other an author of this paper, and the labels were disambiguated after discussion. Mostly, the final label was the one that was chosen by at least two annotators. However, in some cases the label was completely changed, or a span of words was annotated as an entity that had been left unmarked by all annotators.

The annotations of the Main NER dataset were disambiguated automatically. According to the automatic procedure, a word span was labelled as an entity if at least two annotators had marked it as an entity with the same tag.
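The automatic part of both procedures reduces to a two-out-of-three majority vote over the annotators' labels for a span. A minimal sketch (label names are illustrative; spans without a majority went to the manual review described above):

```python
from collections import Counter

def harmonize(labels):
    """Majority vote over three annotators' labels for one span.

    `None` means the annotator marked nothing. Returns the label chosen
    by at least two annotators, or None when no such majority exists
    (those cases were resolved manually).
    """
    counts = Counter(l for l in labels if l is not None)
    if counts:
        label, n = counts.most_common(1)[0]
        if n >= 2:
            return label
    return None  # no two-annotator agreement: manual review

assert harmonize(["PER", "PER", None]) == "PER"   # A and B agree, C empty
assert harmonize(["PER", "ORG", "ORG"]) == "ORG"  # two-out-of-three majority
assert harmonize(["PER", "ORG", None]) is None    # no majority
```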
### 2.5 Interannotator Agreement
In order to assess the reliability of the annotations, interannotator agreements were computed for the Main NER dataset; they are shown in Table 1. We computed the Fleiss $\kappa$, which is an extension of Cohen's $\kappa$ to more than two annotators. We followed the procedure described by Ruokolainen et al. (2020), where each entity in the running text is treated as an instance of a positive class. The exact-match annotation of this entity was checked for each annotator. If the annotator had marked this exact entity with the same label, then it was recorded as an instance of the positive class; otherwise it was recorded as an instance of the negative class.

<table><tr><td/><td>1st level</td><td>2nd level</td><td>3rd level</td></tr><tr><td>Overall</td><td>0.65</td><td>0.23</td><td>-0.16</td></tr><tr><td>PER</td><td>0.95</td><td>0.27</td><td>0.66</td></tr><tr><td>ORG</td><td>0.76</td><td>0.33</td><td>0.19</td></tr><tr><td>LOC</td><td>0.65</td><td>0.35</td><td>0.18</td></tr><tr><td>GPE</td><td>0.84</td><td>0.47</td><td>-0.08</td></tr><tr><td>TITLE</td><td>0.63</td><td>0.21</td><td>0.00</td></tr><tr><td>PROD</td><td>0.48</td><td>0.02</td><td>-</td></tr><tr><td>EVENT</td><td>0.43</td><td>0.53</td><td>-</td></tr><tr><td>DATE</td><td>0.72</td><td>0.06</td><td>-</td></tr><tr><td>TIME</td><td>0.53</td><td>0.00</td><td>-</td></tr><tr><td>MONEY</td><td>0.78</td><td>0.00</td><td>-</td></tr><tr><td>PERCENT</td><td>0.90</td><td>-</td><td>-</td></tr></table>

Table 1: Interannotator agreement of the Main NER dataset as measured with the Fleiss $\kappa$.

The overall agreement of the 1st level entities is in the range of substantial agreement, while the annotations on the second and third level do not agree considerably, as illustrated by the low or even negative Fleiss $\kappa$ values. On the first level, person names, geopolitical entities, and percentages obtain almost perfect agreement $\left( {\kappa > {0.8}}\right)$. Most other entities show substantial agreement $\left( {\kappa > {0.6}}\right)$. The lowest agreement scores were found for products and events, which still obtained moderate agreement $\left( {\kappa > {0.4}}\right)$.
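Fleiss' $\kappa$ can be computed from a table of per-item category counts. A self-contained sketch under the standard formulation ($N$ items, $r$ raters per item), not the exact tooling used in the paper:

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for a list of per-item category-count rows.

    Each row gives, for one item, how many of the r raters assigned
    each category; every row must sum to the same r.
    """
    n_items = len(counts)
    r = sum(counts[0])  # raters per item
    k = len(counts[0])  # number of categories

    # Per-item agreement P_i, averaged into P-bar.
    p_bar = sum(
        (sum(c * c for c in row) - r) / (r * (r - 1)) for row in counts
    ) / n_items
    # Chance agreement P_e from the marginal category proportions.
    p_j = [sum(row[j] for row in counts) / (n_items * r) for j in range(k)]
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)

# Perfect agreement: three raters, two items, unanimous each time.
assert fleiss_kappa([[3, 0], [0, 3]]) == 1.0
# Agreement below chance level yields a negative kappa.
assert fleiss_kappa([[2, 1], [2, 1]]) < 0
```

Negative values such as the third-level scores in Table 1 simply mean observed agreement fell below the chance level $P_e$.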
### 2.6 Final Datasets
After the label unification process, both final datasets were divided into train, validation and test splits. The datasets will be distributed with these prepared splits to allow for future comparison of developed models. The statistics of the final datasets are shown in Table 2.

Previously, the Main NER dataset was annotated only with PER, ORG and LOC entities (Tkachenko et al., 2013). The PER and ORG labels are still among the most frequent ones, while the bulk of the LOC annotations have changed to GPE according to the new annotation guidelines. The Main NER dataset also features a relatively large number of titles, dates and products. The event entity is the least frequent in this dataset.

Similar prevalence patterns can be observed in the New NER dataset. The PER, ORG and GPE entities are again the most frequent, followed by a relatively large number of titles, dates and products. Compared to the Main NER dataset, the New NER dataset contains considerably more event entities. The time, percent and money entities are the least frequent in the New NER dataset.
## 3 Experiments
We had two main goals when conducting the experiments. The first goal was to establish the baseline performance on both datasets. Although several previous results have been published on the old annotation of the Main NER dataset (Tkachenko et al., 2013; Kittask et al., 2020; Tanvir et al., 2021), the new annotations are much richer and were collected without looking at the old annotations. Therefore, the baseline performance on the new annotations might differ from that on the old annotations. As the New NER dataset contains new material, its baseline performance also had to be evaluated.

The second goal was related to the potential domain difference between the two datasets: the average document length of the New NER corpus (1281 word tokens) was more than three times higher than the average document length of the Main NER corpus (373 word tokens). Also, the New NER corpus contains at least ${30}\mathrm{\;K}$ tokens from the social media domain. Moreover, the documents from the news sources are not all formal news texts but also contain less formal opinion pieces. Thus, our goal was to determine how to use these datasets: whether two models should be trained, one for each dataset separately, or whether joining the data and training a single model would be more beneficial.
We only used the first-level annotations to train the models because, as shown in Table 2, far fewer entities were labelled on the 2nd and 3rd levels, and, as shown in Table 1, the inter-annotator agreements of the second- and third-level entities are lacking.

We adopted a transformer-based token classification model that, for each word token, assigns a label in the commonly-used BIO format, where the B-tag denotes the start of an entity, the I-tag denotes the continuation of an entity, and the O tag is assigned to all word tokens that are not part of any named entity. We used the TokenClassification implementation from the Huggingface transformers library (Wolf et al., 2020). The EstBERT model with 128 sequence length (Tanvir et al., 2021) was used as the base model and fine-tuned on the NER datasets.
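The BIO encoding described above maps each entity span to a B- tag on its first token and I- tags on the remaining tokens. A minimal sketch with a hypothetical Estonian example (the name and spans are invented for illustration):

```python
def to_bio(tokens, spans):
    """Encode entity spans as BIO tags.

    `spans` holds (start, end, label) with token indices, end exclusive.
    """
    tags = ["O"] * len(tokens)
    for start, end, label in spans:
        tags[start] = f"B-{label}"            # first token of the entity
        for i in range(start + 1, end):
            tags[i] = f"I-{label}"            # continuation tokens
    return tags

tokens = ["Peeter", "Mets", "elab", "Tallinnas", "."]
spans = [(0, 2, "PER"), (3, 4, "GPE")]
assert to_bio(tokens, spans) == ["B-PER", "I-PER", "O", "B-GPE", "O"]
```

The token classification model then predicts one such tag per token, and entities are read back off the tag sequence.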
|
| 296 |
+
|
| 297 |
+
The batch size was fixed to 16 , the Adam opti-
|
| 298 |
+
|
| 299 |
+
mizer was used with betas 0.9 and 0.98 and epsilon 426 1e-6. The models were trained 150 epochs in maximum, by stopping early if the overall F1-score on the validation set did not improve in 20 epochs
|
| 300 |
+
|
| 301 |
+
for more than ${0.0001}\mathrm{\;F}1$ -score points. The eval- 430
|
| 302 |
+
|
| 303 |
+
uations during the training and final testing were 431
|
| 304 |
+
|
| 305 |
+
432 486
|
| 306 |
+
|
| 307 |
+
<table><tr><td rowspan="2"/><td colspan="4">Main NER dataset</td><td colspan="4">New NER dataset</td></tr><tr><td>Train</td><td>Val</td><td>Test</td><td>Total</td><td>Train</td><td>Val</td><td>Test</td><td>Total</td></tr><tr><td>Documents</td><td>525</td><td>18</td><td>39</td><td>582</td><td>78</td><td>16</td><td>15</td><td>109</td></tr><tr><td>Sentences</td><td>9965</td><td>2415</td><td>1907</td><td>14287</td><td>7001</td><td>882</td><td>890</td><td>8773</td></tr><tr><td>Tokens</td><td>155983</td><td>32890</td><td>28370</td><td>217243</td><td>111858</td><td>13130</td><td>14686</td><td>139674</td></tr><tr><td>1st lvl entities</td><td>14944</td><td>2808</td><td>2522</td><td>20274</td><td>8078</td><td>541</td><td>1002</td><td>9594</td></tr><tr><td>2nd lvl entities</td><td>987</td><td>223</td><td>122</td><td>1332</td><td>571</td><td>44</td><td>59</td><td>674</td></tr><tr><td>3rd lvl entities</td><td>40</td><td>14</td><td>4</td><td>58</td><td>27</td><td>0</td><td>1</td><td>28</td></tr><tr><td>PER</td><td>3563</td><td>642</td><td>722</td><td>4927</td><td>2601</td><td>109</td><td>299</td><td>3009</td></tr><tr><td>ORG</td><td>3215</td><td>504</td><td>541</td><td>4260</td><td>1177</td><td>85</td><td>150</td><td>1412</td></tr><tr><td>LOC</td><td>328</td><td>118</td><td>61</td><td>507</td><td>449</td><td>31</td><td>35</td><td>515</td></tr><tr><td>GPE</td><td>3377</td><td>714</td><td>479</td><td>4570</td><td>1253</td><td>129</td><td>231</td><td>1613</td></tr><tr><td>TITLE</td><td>1302</td><td>171</td><td>209</td><td>1682</td><td>702</td><td>19</td><td>59</td><td>772</td></tr><tr><td>PROD</td><td>874</td><td>161</td><td>66</td><td>1101</td><td>624</td><td>60</td><td>117</td><td>801</td></tr><tr><td>EVENT</td><td>56</td><td>13</td><td>17</td><td>86</td><td>230</td><td>15</td><td>26</td><td>271</td></tr><tr><td>DATE</td><td>1346</td><td>308</td><td>186</td><td>1840</td><td>746</td><td>64</td><td>77</td><td>887</td></tr><tr><td>TIME</td><td>456</td><td>39</td><td>30</td><td>525</t
d><td>103</td><td>6</td><td>6</td><td>115</td></tr><tr><td>PERCENT</td><td>137</td><td>62</td><td>58</td><td>257</td><td>75</td><td>11</td><td>1</td><td>87</td></tr><tr><td>MONEY</td><td>291</td><td>76</td><td>153</td><td>520</td><td>118</td><td>12</td><td>1</td><td>131</td></tr></table>
|
| 308 |
+
|
| 309 |
+
Table 2: Statistics of the two new Estonian NER datasets.
Evaluation was done with the seqeval package.${}^{1}$ The learning rate was optimized on the validation set using the grid $\{5\mathrm{e}{-}6, 1\mathrm{e}{-}5, 3\mathrm{e}{-}5, 5\mathrm{e}{-}5, 1\mathrm{e}{-}4\}$. Each model was trained ten times with different random seeds, and the mean values with standard deviations are reported.
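seqeval scores at the entity level: a predicted entity counts as correct only when both its span and its type match the gold annotation exactly. A simplified sketch of that matching (the helper names are ours, not seqeval's API, and only the IOB2 scheme with `B-` openers is handled):

```python
def extract_entities(tags):
    """Collect (type, start, end) spans from a BIO tag sequence.
    Simplified: only B- opens an entity (seqeval also handles other schemes)."""
    entities, start, etype = [], None, None
    for i, tag in enumerate(list(tags) + ["O"]):  # sentinel flushes the last span
        inside = tag.startswith("I-") and tag[2:] == etype
        if not inside:                 # the current entity (if any) ends here
            if etype is not None:
                entities.append((etype, start, i))
            if tag.startswith("B-"):   # a new entity starts
                start, etype = i, tag[2:]
            else:
                start, etype = None, None
    return entities

def entity_f1(true_seqs, pred_seqs):
    """Micro-averaged entity-level F1 over parallel lists of tag sequences."""
    gold = {(n, e) for n, seq in enumerate(true_seqs) for e in extract_entities(seq)}
    pred = {(n, e) for n, seq in enumerate(pred_seqs) for e in extract_entities(seq)}
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```

Partial overlaps earn no credit under this metric, which is why rare types such as EVENT can score near zero even when the model tags part of the name.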
## 4 Results
We first trained and evaluated models on both datasets separately to assess the overall modeling performance on each dataset. Then, we also trained another model on the joint dataset and compared its performance on the evaluation sets of both datasets.
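The three training configurations described above can be laid out as follows (the corpus contents shown are illustrative placeholders, not real data):

```python
# Each corpus: a dict of splits, each split a list of (tokens, tags) sentence pairs.
main_ner = {"train": [(["Tallinn"], ["B-GPE"])], "val": [(["Kersti"], ["B-PER"])]}
new_ner  = {"train": [(["Delfi"], ["B-ORG"])],  "val": [(["Tartu"], ["B-GPE"])]}

configs = {
    "main-only": main_ner["train"],
    "new-only":  new_ner["train"],
    "joint":     main_ner["train"] + new_ner["train"],  # concatenated train sets
}
# Each config trains one model; every model is then scored on both validation
# sets so that joint vs. separate performance can be compared per dataset.
joint_val = main_ner["val"] + new_ner["val"]
```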
### 4.1 Separate Models
First, we trained predictive models on both datasets separately. The results of these experiments on the respective validation sets are shown in Table 3. The overall performance (bottom row) is on the same level on both datasets, showing that the annotation and modeling difficulty is comparable in the two datasets.
The most accurately predicted entities in both datasets are PER, GPE and PERCENT. The lowest accuracy is obtained when predicting LOC, EVENT and TIME for the Reannotated Main NER dataset and LOC, EVENT and PROD for the New NER dataset. The prediction of the EVENT names is especially poor in the Main NER dataset, probably because there are only 56 EVENT instances in the respective train set.
Comparing the results of the Reannotated Main dataset with the old annotations of the Main NER dataset (see Table 4, taken from Tanvir et al. (2021), Table 11) shows that the performance on all three entities (PER, ORG, LOC) used in the old annotations has decreased. Although the modeling results are not directly comparable, because Table 3 shows results on the validation set and Table 4 on the test set, the differences are large enough to suggest that the new annotation is considerably more complex for the models to learn.
### 4.2 Joint Model
The Joint model is trained on the concatenated train sets of both the Main NER and New NER datasets. Table 5 shows the F1-scores of the Joint model on the joined validation set as well as on the validation sets of both datasets separately. The overall F1-scores on each validation set are somewhat higher than for the separate models (0.766 vs 0.747 for the Main dataset and 0.752 vs 0.735 for the New dataset).
---

${}^{1}$ https://github.com/chakki-works/seqeval

---
<table><tr><td rowspan="2"/><td rowspan="2">#</td><td colspan="4">Reannoated Main NER</td><td colspan="3">New NER</td></tr><tr><td>Precision</td><td>Recall</td><td>$\mathbf{{F1} - {score}}$</td><td>#</td><td>Precision</td><td>Recall</td><td>F1-score</td></tr><tr><td>PER</td><td>642</td><td>.827 (.012)</td><td>.871 (.009)</td><td>.848 (.005)</td><td>109</td><td>.809 (.044)</td><td>.816 (.023)</td><td>.811 (.019)</td></tr><tr><td>ORG</td><td>504</td><td>.654 (.016)</td><td>.666 (.014)</td><td>.660 (.013)</td><td>85</td><td>.580 (.027)</td><td>.585 (.052)</td><td>.581 (.024)</td></tr><tr><td>LOC</td><td>118</td><td>.643 (.036)</td><td>.478 (.028)</td><td>.547 (.016)</td><td>31</td><td>.600 (.065)</td><td>.560 (.060)</td><td>.576 (.044)</td></tr><tr><td>GPE</td><td>714</td><td>.821 (.012)</td><td>.831 (.021)</td><td>.826 (.008)</td><td>129</td><td>.900 (.017)</td><td>.879 (.030)</td><td>.889 (.014)</td></tr><tr><td>TITLE</td><td>171</td><td>.676 (.023)</td><td>.814 (.014)</td><td>.739 (.011)</td><td>19</td><td>.750 (.062)</td><td>.718 (.064)</td><td>.731 (.048)</td></tr><tr><td>PROD</td><td>161</td><td>.572 (.033)</td><td>.628 (.026)</td><td>.598 (.024)</td><td>60</td><td>.509 (.043)</td><td>.474 (.052)</td><td>.488 (.029)</td></tr><tr><td>EVENT</td><td>13</td><td>.069 (.029)</td><td>.077 (.034)</td><td>.072 (.031)</td><td>16</td><td>.518 (.104)</td><td>.558 (.104)</td><td>.525 (.070)</td></tr><tr><td>DATE</td><td>308</td><td>.682 (.020)</td><td>.720 (.017)</td><td>.700 (.007)</td><td>64</td><td>.816 (.027)</td><td>.824 (.024)</td><td>.820 (.021)</td></tr><tr><td>TIME</td><td>39</td><td>.553 (.066)</td><td>.555 (.045)</td><td>.553 (.053)</td><td>6</td><td>.812 (.041)</td><td>.788 (.108)</td><td>.797 (.074)</td></tr><tr><td>PERCENT</td><td>62</td><td>.985 (.016)</td><td>.867 (.032)</td><td>.922 (.019)</td><td>11</td><td>.895 (.126)</td><td>1 (-)</td><td>.940 (.074)</td></tr><tr><td>MONEY</td><td>76</td><td>.636 (.040)</td><td>.568 (.030)</td><td>.600 
(.030)</td><td>12</td><td>.659 (.085)</td><td>.742 (.126)</td><td>.693 (.083)</td></tr><tr><td>Overall</td><td>2571</td><td>.737 (.010)</td><td>.757 (.009)</td><td>.747 (.004)</td><td>497</td><td>.736 (.014)</td><td>.734 (.017)</td><td>.735 (.006)</td></tr></table>
Table 3: Predictive performance of models trained on the two datasets, evaluated on the respective validation sets.
<table><tr><td/><td>Precision</td><td>Recall</td><td>F1-score</td></tr><tr><td>PER</td><td>.948</td><td>.958</td><td>.953</td></tr><tr><td>ORG</td><td>.784</td><td>.826</td><td>.805</td></tr><tr><td>LOC</td><td>.899</td><td>.914</td><td>.907</td></tr><tr><td>Overall</td><td>.891</td><td>.912</td><td>.901</td></tr></table>
Table 4: Results of the old annotations of the Main NER test set. Adapted from Table 11 (Tanvir et al., 2021).
These overall F1-scores can be compared with the bottom row of Table 3.

Figure 1 shows the entity-level comparison of the Joint models and the separate models on the respective validation sets. Figure 1a, which depicts the comparison on the validation set of the Main NER dataset, shows that the Joint model performs the same or better on all entities except the TIME entity, whose performance was already among the lowest on the Main dataset and which drops even further with the Joint model, from 0.553 to 0.433. On the other hand, the prediction accuracy of the EVENT entity, while still remaining quite low, improves considerably, from 0.072 to 0.310.

When comparing the Joint and separate model results on the New NER dataset (see Figure 1b), we observe that the Joint model performs the same or better on some entity types (PER, ORG, GPE, LOC, PROD, PERCENT, MONEY) and somewhat worse on the rest. The largest drop again occurs on the TIME entity, which falls from 0.797 to 0.627.

Overall, we conclude that training a Joint model instead of two separate models is justified. Although the prediction performance of the Joint model dropped for some entities, especially on the New NER dataset, its overall F1-score was better on the validation sets of both datasets than that of the separate models. Therefore we conduct the final evaluations on the test set with the Joint model.
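The per-entity deltas underlying Figure 1a can be recomputed directly from the F1 columns of Tables 3 and 5 for the Main NER validation set; a small sketch:

```python
# F1 on the Main NER validation set: separate model (Table 3) vs Joint model (Table 5)
separate = {"PER": .848, "ORG": .660, "LOC": .547, "GPE": .826, "TITLE": .739,
            "PROD": .598, "EVENT": .072, "DATE": .700, "TIME": .553,
            "PERCENT": .922, "MONEY": .600}
joint = {"PER": .872, "ORG": .702, "LOC": .541, "GPE": .843, "TITLE": .737,
         "PROD": .634, "EVENT": .310, "DATE": .699, "TIME": .433,
         "PERCENT": .969, "MONEY": .625}

# Gain of the Joint model over the separate one, per entity type
delta = {ent: round(joint[ent] - separate[ent], 3) for ent in separate}
for ent, d in sorted(delta.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{ent:8s} {d:+.3f}")
```

Sorting the deltas makes the text's observation explicit: EVENT gains the most (+0.238) and TIME loses the most (-0.120).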
### 4.3 Test Results
The test results of the Joint model on the joint test set are shown in the fourth column of Table 5. The overall F1-score is somewhat higher on the test set than on the validation set. For some entities (PER, ORG, TITLE, DATE, TIME, MONEY), the test score is higher than the validation score, and for others it is somewhat lower. The test F1-score drops the most for the EVENT entity (from 0.370 to 0.264).

All previous results were presented as averages over ten different runs. Finally, we also picked one model to make it publicly available. We chose the Joint model with the highest overall F1-score on the validation set. The test scores of this model are shown in the right-most block of Table 5. The overall F1-score of this best model is in line with the mean F1-score, which means that it was not the model with the highest test F1-score. However, as the standard deviations are small, all the results
Figure 1: Entity-level comparison of the Joint model with models trained on each dataset separately.
<table><tr><td rowspan="2"/><td rowspan="2">Main+New Val F1</td><td rowspan="2">Main Val F1</td><td rowspan="2">$\mathbf{{New}}$ Val F1</td><td rowspan="2">Main+New Test F1</td><td colspan="3">Main+New Test</td></tr><tr><td>$\mathbf{{Prec}}$</td><td>$\mathbf{{Rec}}$</td><td>$\mathbf{{F1}}$</td></tr><tr><td>PER</td><td>.868 (.007)</td><td>.872 (.008)</td><td>.854 (.012)</td><td>.879 (.007)</td><td>.840</td><td>.927</td><td>.882</td></tr><tr><td>ORG</td><td>.690 (.010)</td><td>.702 (.009)</td><td>.669 (.021)</td><td>.700 (.016)</td><td>.698</td><td>.693</td><td>.696</td></tr><tr><td>LOC</td><td>.549 (.019)</td><td>.541 (.021)</td><td>.599 (.043)</td><td>.526 (.025)</td><td>.478</td><td>.563</td><td>.517</td></tr><tr><td>GPE</td><td>.849 (.005)</td><td>.843 (.005)</td><td>.884 (.009)</td><td>.826 (.004)</td><td>.827</td><td>.830</td><td>.828</td></tr><tr><td>TITLE</td><td>.733 (.013)</td><td>.737 (.011)</td><td>.709 (.034)</td><td>.777 (.017)</td><td>.788</td><td>.758</td><td>.773</td></tr><tr><td>PROD</td><td>.598 (.018)</td><td>.634 (.028)</td><td>.481 (.042)</td><td>.568 (.020)</td><td>.576</td><td>.579</td><td>.578</td></tr><tr><td>EVENT</td><td>.370 (.053)</td><td>.310 (.043)</td><td>.504 (.053)</td><td>.264 (.034)</td><td>.306</td><td>.256</td><td>.278</td></tr><tr><td>DATE</td><td>.708 (.013)</td><td>.699 (.016)</td><td>.792 (.024)</td><td>.740 (.010)</td><td>.727</td><td>.768</td><td>.747</td></tr><tr><td>TIME</td><td>.451 (.065)</td><td>.433 (.075)</td><td>.627 (.057)</td><td>.463 (.043)</td><td>.548</td><td>.472</td><td>.507</td></tr><tr><td>PERCENT</td><td>.969 (.019)</td><td>.969 (.013)</td><td>.960 (.049)</td><td>.958 (.013)</td><td>.967</td><td>.983</td><td>.975</td></tr><tr><td>MONEY</td><td>.622 (.032)</td><td>.625 (.042)</td><td>.719 (.105)</td><td>.699 (.014)</td><td>.789</td><td>.614</td><td>.690</td></tr><tr><td>Overall</td><td>.761 (.004)</td><td>.766 (.002)</td><td>.752 (.010)</td><td>.773 
(.006)</td><td>.766</td><td>.783</td><td>.774</td></tr></table>
Table 5: Evaluations of the Joint models trained on the joined train sets of both datasets. Left block: F1-scores on the different parts of the validation sets. Middle block: F1-scores on the joined test set. Right block: test scores of the best Joint model.
of all models are in close range, with the highest F1-score obtained on the test set being 0.785.
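Releasing the checkpoint with the highest validation F1 among the ten runs, as described above, can be sketched as follows (the per-run scores are illustrative, not the paper's actual ten values):

```python
from statistics import mean, stdev

# (val_f1, test_f1) for each of the ten training runs; numbers are illustrative
runs = [(0.758, 0.770), (0.761, 0.773), (0.766, 0.774), (0.763, 0.785),
        (0.760, 0.771), (0.764, 0.776), (0.759, 0.769), (0.762, 0.772),
        (0.765, 0.778), (0.757, 0.768)]

# The released model is the best by validation F1 ...
best_val, best_test = max(runs)
# ... which need not be the run with the best test F1.
test_mean, test_sd = mean(t for _, t in runs), stdev(t for _, t in runs)
```

In this toy setup the run chosen by validation F1 has a test score of 0.774, while another run reaches 0.785 on the test set, mirroring the situation described in the text.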
## 5 Discussion
Although NER datasets have been created before for the Estonian language, this study presents the first attempt to annotate a richer set of entities beyond the most common person, organization and location names. As can be seen from the inter-annotator agreements, there were some entities (PER, GPE, PERCENT) that the annotators labelled with high consistency, while the reliability is lower for the other entities. The annotations of the EVENT entities had the lowest inter-annotator agreement, which suggests that the inconsistencies in the annotation of this and other entities could be analysed more thoroughly to understand the sources of confusion and improve the annotation guidelines.
Following previous attempts in other languages (notably Finnish), we decided to also annotate nested entities, allowing up to three levels of nesting. The data statistics showed that very few entities were annotated on the 3rd level, and even where a considerable number of entities were labelled on the 2nd level, their reliability in terms of inter-annotator agreement is not high enough, and thus using these labels for training predictive models might not be productive.
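Nesting of up to three levels can be represented as parallel BIO tag layers over the same tokens, one layer per depth. A small illustrative sketch (the example phrase and its spans are invented, not taken from the datasets):

```python
# One phrase, three parallel BIO layers (outermost entity on level 1).
tokens = ["Tartu", "Ülikooli", "rektor"]   # "rector of the University of Tartu"
layers = [
    ["B-ORG", "I-ORG", "O"],   # level 1: "Tartu Ülikooli" as an organization
    ["B-GPE", "O", "O"],       # level 2: "Tartu" nested inside it as a GPE
    ["O", "O", "O"],           # level 3: unused here, as for most sentences
]

def entities_per_level(layers):
    """Count entity starts (B- tags) on each nesting level."""
    return [sum(tag.startswith("B-") for tag in layer) for layer in layers]
```

Counting B- tags per layer is how the level-wise entity statistics in Table 2 (1st/2nd/3rd level entities) can be derived from such a representation.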
The experimental results with the BERT-based model showed that while there seems to be some domain shift between the two datasets on the level of some entities, overall training a single joint model on both datasets is justified. We only trained baseline models based on EstBERT; according to previous studies (Kittask et al., 2020; Tanvir et al., 2021), adopting other base models like Estonian WikiBERT (Pyysalo et al., 2021) or XLM-RoBERTa might lead to higher results.
## 6 Conclusions
We described the annotation process of two Estonian NER datasets, labelled with a rich annotation scheme involving eleven different entity types. The datasets also include nested annotations of up to three levels, although the nested annotations proved to be much less reliable than the first-level entities. To establish the baseline predictive accuracy, we experimented with two modeling scenarios on these newly annotated datasets: training two models, one for each dataset separately, and training a joint model on the joined dataset. Overall, the joint model performed better than the separate models, except for a few entity types, suggesting that the domain differences between these datasets are relatively small. Therefore, we suggest using these two datasets jointly as a single, more varied dataset.
## References
Miloš Jakubíček, Adam Kilgarriff, Vojtěch Kovář, Pavel Rychly, and Vít Suchomel. 2013. The TenTen corpus family. In 7th International Corpus Linguistics Conference CL, pages 125-127. Lancaster University.
Claudia Kittask, Kirill Milintsevich, and Kairit Sirts. 2020. Evaluating multilingual BERT for Estonian. In Baltic HLT, pages 19-26.
Siim Orasmaa, Kadri Muischnek, Kristjan Poska, and Anna Edela. 2022. Named entity recognition in Estonian 19th century parish court records. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 5304-5313.
Sampo Pyysalo, Jenna Kanerva, Antti Virtanen, and Filip Ginter. 2021. WikiBERT models: Deep transfer learning for many languages. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa), pages 1-10.
Teemu Ruokolainen, Pekka Kauppinen, Miikka Silfverberg, and Krister Lindén. 2020. A Finnish news corpus for named entity recognition. Language Resources and Evaluation, 54:247-272.
Hasan Tanvir, Claudia Kittask, Sandra Eiche, and Kairit Sirts. 2021. EstBERT: A pretrained language-specific BERT for Estonian. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa), pages 11-19.
Alexander Tkachenko, Timo Petmanson, and Sven Laur. 2013. Named entity recognition in Estonian. In Proceedings of the 4th Biennial International Workshop on Balto-Slavic Natural Language Processing, pages 78-83.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45.
NoDaLiDa/NoDaLiDa 2023/NoDaLiDa 2023 Conference/4CTnlIc1rhw/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,905 @@
§ ESTONIAN NAMED ENTITY RECOGNITION: NEW DATASETS AND MODELS
§ ABSTRACT
This paper describes the annotation of two Estonian named entity recognition datasets. For this purpose, annotation guidelines for labeling eleven types of entities were created. In addition to the common entities of person names, organization names and locations, the annotation scheme includes geopolitical entities, product names, titles (or roles), events, dates, times, monetary values and percentages. One annotation task involved the reannotation of an existing Estonian named entity recognition dataset, consisting mostly of news texts, with the new annotation scheme. The second annotated dataset includes new texts taken from both news and social media domains. Transformer-based models were trained on these datasets to establish the baseline predictive performance. The best results were obtained by training a single model on the joined dataset, suggesting that the domain differences between the datasets are relatively small.
§ 1 INTRODUCTION
Named entity recognition (NER) is a useful natural language processing task that enables extracting information from text in the form of named entities, which can be used for various downstream tasks, for instance anonymisation of documents or assigning thematic keywords to texts. Contemporary NER systems are usually trained as supervised tagging models, where annotated training data is used to teach the model to tag the spans in the text that correspond to named entities.
For Estonian, previous efforts to develop NER systems involve the creation of an annotated corpus labeled with person, organization and location names (Tkachenko et al., 2013), and training CRF- and transformer-based models on this data (Tkachenko et al., 2013; Kittask et al., 2020; Tanvir et al., 2021). From a less common domain, a corpus of 19th century parish court records annotated with named entities was recently created (Orasmaa et al., 2022).
This paper describes the efforts to annotate additional data in Estonian with named entities, with the purpose of advancing the development of general-purpose NER systems for Estonian. The data annotation part of the study involves creating two annotated datasets labelled with a rich annotation scheme developed as part of the study. First, we reannotated the existing NER dataset according to the new annotation scheme; second, we annotated about 130K tokens of new texts, mostly from news portals and social media sources.
The second part of the paper describes the experimental results of training transformer-based predictive models on these datasets. Our first goal was to establish the baseline performance for various entity types based on the new annotation scheme. The second goal was to study the optimal ways of using these two datasets, which originate from somewhat different domains. The experimental results showed that the baseline performance on the newly annotated dataset is somewhat lower than on the less richly annotated Estonian NER dataset, which suggests that the new annotation can be on one hand more noisy and on the other hand richer and more complex. We also found that the domains of the two datasets are similar enough that a model trained on the joint dataset performs as well as or better than two models trained on each dataset separately.
In short, our contributions in this paper are:

1. Two new Estonian NER datasets annotated with a rich set of entities;

2. Baseline performance assessment of a transformer-based model on these two datasets.
§ 2 DATASET CREATION
In this section, we describe the process of creating the two labelled NER datasets for Estonian.
§ 2.1 DATA SOURCES
The first dataset, which we subsequently call the Main NER dataset, is a reannotation of the existing Estonian NER dataset (Tkachenko et al., 2013). This dataset consists of ca 220K words of news texts and is generally homogeneous in its domain. Previously, this dataset had been annotated with person, organization and location names. Previous works adopting this dataset have observed errors in its annotation (Tanvir et al., 2021), which was one of the reasons why this dataset was chosen for reannotation.
The second dataset, which we subsequently call the New NER dataset, is entirely new. To compile this dataset, the aim was to choose ca 130K tokens from both the news and social media domains, with roughly 100K from the news domain and 30K from the social media domain. The underlying texts were sampled from the Estonian Web Corpus 2017 (Jakubíček et al., 2013). The metadata containing the URL and the title of the web page was used to choose the texts from both categories. For selecting the news sources, we looked for URLs and titles referring to the major Estonian news sites, such as Postimees, EPL, ERR and Delfi. For extracting social media texts, we searched for keywords that would point to well-known blogging and forum platforms, such as blogspot and foorum.
§ 2.2 ANNOTATION GUIDELINES
Annotation guidelines were developed to label the data. In addition to the commonly used person, organization and location names, we wanted to adopt a richer set of labels. First, we decided to separate locations into geopolitical entities and geographical locations. Following similar work in Finnish (Ruokolainen et al., 2020), we added events, products and dates. Finally, we also added titles, times, monetary values and percentages. The short description for each entity as used in the annotation guidelines was as follows:

* Persons (PER): This includes names referring to all kinds of real and fictional persons.

* Organizations (ORG): This includes all kinds of clearly and unambiguously identifiable organizations, for example companies and similar commercial institutions as well as administrative bodies.

* Locations (LOC): This includes all geographical locations that are not associated with a specific political organization, in contrast to GPEs.

* Geopolitical entities (GPE): This includes all geographic locations associated with a political organization, such as countries, cities and empires.

* Titles (TITLE): This includes job titles, positions, scientific degrees, etc. Titles must be annotated only if a specific person behind the title can be identified based on the preceding text. The personal name immediately following the title is not part of the TITLE. If the title is preceded by an ORG tag, only the job title must be marked with TITLE, not the words in the ORG.

* Products (PROD): This includes all clearly identifiable products, objects, works, etc., referred to by name.

* Events (EVENT): This includes events with a specific name.

* Dates (DATE): This includes time expressions, both of the day/month/year type, e.g., "October 3rd", "in 2020", "2019", "in September", and general expressions (yesterday, last month, next year), if the expression has a clear referent. This means that based on the expression it must be possible to determine a specific point in time, i.e., a specific year, month or day. Thus, vague expressions such as a few years from now or a few months ago are not suitable, but the more specific five years later, three months ago and the day before yesterday are.

* Times (TIME): This includes time expressions that refer to an entity smaller than a day: times and parts of the day that have a referent (analogous to DATE entities). General expressions without a referent are not marked. Durations are also not marked.

* Monetary values (MONEY): This includes expressions that refer to specific currencies and amounts in those currencies.

* Percentages (PERCENT): This includes entities expressing percentages. A percentage can be expressed either with a percentage mark (%) or verbally.
Similar to Ruokolainen et al. (2020), we decided to include nested entities in our annotation scheme. An example of a nested entity is New York City Government, which is itself an ORG entity and which contains the GPE entity New York. We limited the maximum depth of nesting to three levels. Nested labelling of the same entity type was not permitted, with the exception of ORG. For instance, in the phrase Republic of Ireland annotated as GPE, the further annotation of Ireland as a nested GPE was not allowed. However, in a phrase such as UN Department of Economic and Social Affairs labelled as ORG, the token UN was allowed to be annotated as a nested ORG.
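The nesting constraint described above (same-type nesting disallowed, except for ORG) can be sketched as a small validity check. The span representation and function name below are illustrative, not part of the released annotation tooling.

```python
def valid_nesting(outer, inner):
    """Check the guideline: an entity may be nested inside another entity
    of the same type only if both are ORG; different types nest freely.
    Spans are (start_token, end_token, label) with end exclusive."""
    o_start, o_end, o_label = outer
    i_start, i_end, i_label = inner
    contained = o_start <= i_start and i_end <= o_end
    return contained and (o_label != i_label or o_label == "ORG")

# "Republic of Ireland" (GPE) may not contain a nested GPE "Ireland" ...
print(valid_nesting((0, 3, "GPE"), (2, 3, "GPE")))  # False
# ... but ORG "UN Department of ..." may contain the nested ORG "UN"
print(valid_nesting((0, 7, "ORG"), (0, 1, "ORG")))  # True
```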
§ 2.3 ANNOTATION PROCESS
The annotation process took place for both datasets separately. For the Main NER dataset, three annotators were recruited, all graduate students in general or computational linguistics. All annotators labelled the dataset independently, according to the given guidelines. Two annotators completed the annotations of the full dataset; one annotator completed most of the annotations, except for a few documents. The annotation of the Main NER dataset was done with Label Studio, a free open-source data annotation platform.
The New NER dataset was annotated by a total of twelve annotators. Two annotators labelled the data in full: one was an undergraduate linguistics student and the other a graduate computer science student with an undergraduate degree in linguistics. The remaining ten annotators took part in a graduate-level NLP course, and each of them annotated ca 12K word tokens as part of their course work. All annotators worked independently, without access to any other person's work, following the given annotation guidelines. Thus, each text received three independent annotations. The annotation of the New NER dataset was done with the now-defunct DataTurks annotation platform.
§ 2.4 LABEL HARMONIZATION
The annotations of the New NER dataset were harmonized using both automatic and manual approaches. First, automatic harmonization was applied according to the principle that if annotators A and B had agreed on an annotation and annotator C had not annotated anything, the final label was set to the annotation of A and B. After that, the entire corpus was manually reviewed by two people, one of whom was the original annotator A and the other an author of this paper, and the labels were disambiguated after discussion. Mostly, the final label was the one that had been chosen by at least two annotators. However, in some cases the label was completely changed, or a span of words was annotated as an entity that had been left unmarked by all annotators.
The annotations of the Main NER dataset were disambiguated automatically. According to the automatic procedure, a word span was labelled as an entity if at least two annotators had marked it as an entity with the same tag.
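The automatic disambiguation rule amounts to a span-level majority vote: keep exactly those (span, label) pairs chosen by at least two annotators. A minimal sketch, where the span representation and function name are illustrative:

```python
from collections import Counter

def majority_vote(annotations, min_votes=2):
    """Resolve span annotations by majority: a (start, end, label) span is
    kept if at least `min_votes` annotators marked the exact same span with
    the same label. `annotations` is a list of per-annotator span sets."""
    counts = Counter(span for ann in annotations for span in set(ann))
    return sorted(span for span, n in counts.items() if n >= min_votes)

votes = [
    {(0, 2, "PER"), (5, 6, "GPE")},  # annotator A
    {(0, 2, "PER"), (5, 6, "LOC")},  # annotator B
    {(0, 2, "PER")},                 # annotator C
]
print(majority_vote(votes))  # [(0, 2, 'PER')]
```

Note that an exact-match vote discards spans whose boundaries or labels disagree (here, tokens 5-6), which is where the manual review step of the New NER dataset comes in.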
§ 2.5 INTERANNOTATOR AGREEMENT
In order to assess the reliability of the annotations, inter-annotator agreements were computed for the Main NER dataset; they are shown in Table 1. We computed the Fleiss $\kappa$, which is an extension of Cohen's $\kappa$ to more than two annotators. We followed the procedure described by Ruokolainen et al. (2020), where each entity in the running text is treated as an instance of a positive class. The exact-match annotation of this entity was checked for each annotator: if the annotator had marked this exact entity with the same label, it was recorded as an instance of the positive class; otherwise it was recorded as an instance of the negative class.
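For reference, Fleiss' $\kappa$ over such positive/negative ratings can be computed as follows. This is the generic textbook formulation, not the authors' code:

```python
def fleiss_kappa(table):
    """Fleiss' kappa for a ratings table where table[i][j] is the number
    of raters assigning item i to category j (rows sum to the same n)."""
    N = len(table)        # number of rated items (entity occurrences)
    n = sum(table[0])     # raters per item
    k = len(table[0])     # number of categories (here: positive/negative)
    total = N * n
    # proportion of all assignments falling into each category
    p = [sum(row[j] for row in table) / total for j in range(k)]
    # per-item observed agreement
    P = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in table]
    P_bar = sum(P) / N
    P_e = sum(pj * pj for pj in p)
    return (P_bar - P_e) / (1 - P_e)

# three raters, two categories (entity marked / not marked)
print(round(fleiss_kappa([[3, 0], [0, 3]]), 2))  # 1.0 (perfect agreement)
```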
The overall agreement on the first-level entities is in the range of substantial agreement, while the annotations on the second and third levels do not agree considerably, as illustrated by the low or even negative Fleiss $\kappa$ values. On the first level, person names, geopolitical entities and percentages obtain almost perfect agreement ($\kappa > 0.8$). Most other entities show substantial agreement ($\kappa > 0.6$). The lowest agreement scores were found for products and events, which still obtained moderate agreement ($\kappa > 0.4$).

| | 1st level | 2nd level | 3rd level |
|---|---|---|---|
| Overall | 0.65 | 0.23 | -0.16 |
| PER | 0.95 | 0.27 | 0.66 |
| ORG | 0.76 | 0.33 | 0.19 |
| LOC | 0.65 | 0.35 | 0.18 |
| GPE | 0.84 | 0.47 | -0.08 |
| TITLE | 0.63 | 0.21 | 0.00 |
| PROD | 0.48 | 0.02 | - |
| EVENT | 0.43 | 0.53 | - |
| DATE | 0.72 | 0.06 | - |
| TIME | 0.53 | 0.00 | - |
| MONEY | 0.78 | 0.00 | - |
| PERCENT | 0.90 | - | - |

Table 1: Interannotator agreement of the Main NER dataset as measured with the Fleiss $\kappa$.
§ 2.6 FINAL DATASETS
After the label unification process, both final datasets were divided into train, validation and test splits. The datasets will be distributed with these prepared splits to allow future comparison of developed models. The statistics of the final datasets are shown in Table 2.
Previously, the Main NER dataset was annotated only with PER, ORG and LOC entities (Tkachenko et al., 2013). PER and ORG labels are still among the most frequent ones, while the bulk of the LOC annotations have changed to GPE according to the new annotation guidelines. The Main NER dataset also features a relatively large number of titles, dates and products. The EVENT entity is the least frequent in this dataset.
Similar prevalence patterns can be observed in the New NER dataset. The PER, ORG and GPE entities are also the most frequent, followed by a relatively large number of titles, dates and products. Compared to the Main NER dataset, the New NER dataset contains considerably more EVENT entities. The TIME, PERCENT and MONEY entities are the least frequent in the New NER dataset.
§ 3 EXPERIMENTS
We had two main goals when conducting the experiments. The first goal was to establish the baseline performance on both datasets. Although several previous results have been published on the old annotation of the Main NER dataset (Tkachenko et al., 2013; Kittask et al., 2020; Tanvir et al., 2021), the new annotations are much richer and were collected without looking at the old annotations. Therefore, the baseline performance on the new annotations might differ from that on the old annotations. As the New NER dataset contains new material, its baseline performance also had to be evaluated.
The second goal was related to the potential domain difference between the two datasets: the average document length of the New NER corpus (1281 word tokens) was more than three times that of the Main NER corpus (373 word tokens). Also, the New NER corpus contains at least 30K tokens from the social media domain. Moreover, the documents from the news sources are not all formal news texts but also contain less formal opinion pieces. Thus, our goal was to determine how to use these datasets: whether two models should be trained, one for each dataset separately, or whether joining the data and training a single model would be more beneficial.
We only used the first-level annotations to train the models because, as shown in Table 2, far fewer entities were labelled on the 2nd and 3rd levels, and, as shown in Table 1, the inter-annotator agreements for the second- and third-level entities are lacking.
We adopted a transformer-based token classification model that assigns each word token a label in the commonly used BIO format, where the B-tag denotes the start of an entity, the I-tag denotes the continuation of an entity, and the O-tag is assigned to all word tokens that are not part of any named entity. We used the TokenClassification implementation from the Huggingface transformers library (Wolf et al., 2020). The EstBERT model with 128 sequence length (Tanvir et al., 2021) was used as the base model and fine-tuned on the NER datasets.
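The BIO encoding described above can be illustrated with a small helper that turns first-level entity spans into per-token tags. The example sentence and spans are invented for illustration:

```python
def spans_to_bio(n_tokens, spans):
    """Convert entity spans [(start, end, label), ...] over token indices
    (end exclusive) into a flat list of BIO tags, one per token."""
    tags = ["O"] * n_tokens
    for start, end, label in spans:
        tags[start] = f"B-{label}"          # B- marks the entity start
        for i in range(start + 1, end):
            tags[i] = f"I-{label}"          # I- marks its continuation
    return tags

tokens = ["Kersti", "Kaljulaid", "visited", "Tartu", "."]
print(spans_to_bio(len(tokens), [(0, 2, "PER"), (3, 4, "GPE")]))
# ['B-PER', 'I-PER', 'O', 'B-GPE', 'O']
```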
The batch size was fixed to 16; the Adam optimizer was used with betas 0.9 and 0.98 and epsilon 1e-6. The models were trained for a maximum of 150 epochs, stopping early if the overall F1-score on the validation set did not improve by more than 0.0001 F1-score points within 20 epochs. The evaluations during training and the final testing were done with the seqeval package.¹ The learning rate was optimized on the validation set using the grid {5e-6, 1e-5, 3e-5, 5e-5, 1e-4}. Each model was trained ten times with different random seeds, and the mean values with standard deviations are reported.

| | Main NER: Train | Val | Test | Total | New NER: Train | Val | Test | Total |
|---|---|---|---|---|---|---|---|---|
| Documents | 525 | 18 | 39 | 582 | 78 | 16 | 15 | 109 |
| Sentences | 9965 | 2415 | 1907 | 14287 | 7001 | 882 | 890 | 8773 |
| Tokens | 155983 | 32890 | 28370 | 217243 | 111858 | 13130 | 14686 | 139674 |
| 1st lvl entities | 14944 | 2808 | 2522 | 20274 | 8078 | 541 | 1002 | 9594 |
| 2nd lvl entities | 987 | 223 | 122 | 1332 | 571 | 44 | 59 | 674 |
| 3rd lvl entities | 40 | 14 | 4 | 58 | 27 | 0 | 1 | 28 |
| PER | 3563 | 642 | 722 | 4927 | 2601 | 109 | 299 | 3009 |
| ORG | 3215 | 504 | 541 | 4260 | 1177 | 85 | 150 | 1412 |
| LOC | 328 | 118 | 61 | 507 | 449 | 31 | 35 | 515 |
| GPE | 3377 | 714 | 479 | 4570 | 1253 | 129 | 231 | 1613 |
| TITLE | 1302 | 171 | 209 | 1682 | 702 | 19 | 59 | 772 |
| PROD | 874 | 161 | 66 | 1101 | 624 | 60 | 117 | 801 |
| EVENT | 56 | 13 | 17 | 86 | 230 | 15 | 26 | 271 |
| DATE | 1346 | 308 | 186 | 1840 | 746 | 64 | 77 | 887 |
| TIME | 456 | 39 | 30 | 525 | 103 | 6 | 6 | 115 |
| PERCENT | 137 | 62 | 58 | 257 | 75 | 11 | 1 | 87 |
| MONEY | 291 | 76 | 153 | 520 | 118 | 12 | 1 | 131 |

Table 2: Statistics of the two new Estonian NER datasets.
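The early-stopping criterion described above (patience of 20 epochs, minimum improvement of 0.0001 F1-score points) can be sketched as a simple loop. The function names and the toy score curve below are illustrative:

```python
def train_with_early_stopping(evaluate_epoch, max_epochs=150,
                              patience=20, min_delta=1e-4):
    """Call `evaluate_epoch(epoch) -> validation F1` for up to `max_epochs`
    epochs and stop once the F1 has not improved by more than `min_delta`
    for `patience` consecutive epochs. Returns (best_f1, best_epoch)."""
    best_f1, best_epoch = float("-inf"), -1
    for epoch in range(max_epochs):
        f1 = evaluate_epoch(epoch)
        if f1 > best_f1 + min_delta:
            best_f1, best_epoch = f1, epoch
        elif epoch - best_epoch >= patience:
            break
    return best_f1, best_epoch

# toy F1 curve: improves by 0.01 per epoch until it plateaus at 0.8
scores = [min(0.5 + 0.01 * e, 0.8) for e in range(150)]
print(train_with_early_stopping(lambda e: scores[e]))
```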
§ 4 RESULTS
We first trained and evaluated models on both datasets separately to assess the overall modeling performance on each dataset. Then, we also trained another model on the joint dataset and compared its performance on the evaluation sets of both datasets.
§ 4.1 SEPARATE MODELS
First, we trained predictive models on both datasets separately. The results of these experiments on the respective validation sets are shown in Table 3. The overall performance (bottom row) is on the same level on both datasets, showing that the annotation and modeling difficulty is comparable in the two datasets.
The most accurately predicted entities in both datasets are PER, GPE and PERCENT. The lowest accuracy is obtained when predicting LOC, EVENT and TIME for the reannotated Main NER dataset, and LOC, EVENT and PROD for the New NER dataset. The prediction of EVENT names is especially poor in the Main NER dataset, probably because there are only 56 EVENT instances in the respective train set.
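The per-entity scores reported here are exact-match entity-level metrics in the style of seqeval: an entity counts as correct only if both its span and label match. A minimal sketch of how such precision, recall and F1 are derived from gold and predicted spans (the spans themselves are illustrative):

```python
def entity_prf(gold, pred):
    """Exact-match entity-level precision, recall and F1: an entity is a
    true positive only if span boundaries and label both match."""
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = {(0, 2, "PER"), (3, 4, "GPE"), (6, 8, "ORG")}
pred = {(0, 2, "PER"), (3, 4, "LOC"), (6, 8, "ORG")}
# the mislabelled GPE/LOC span counts against both precision and recall
print(tuple(round(x, 3) for x in entity_prf(gold, pred)))  # (0.667, 0.667, 0.667)
```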
Comparing the results on the reannotated Main dataset with the old annotations of the Main NER dataset (see Table 4, taken from Tanvir et al. (2021), Table 11) shows that the performance on all three entity types (PER, ORG, LOC) used in the old annotations has decreased. Although the modeling results are not directly comparable, because Table 3 shows results on the validation set and Table 4 on the test set, the differences are large enough to suggest that the new annotation is considerably more complex for the models to learn.
§ 4.2 JOINT MODEL
The Joint model is trained on the concatenated train sets of both the Main NER and New NER datasets. Table 5 shows the F1-scores of the Joint model on the joined validation set as well as on the validation sets of both datasets separately. The overall F1-scores on each validation set are somewhat higher than for the separate models (0.766 vs 0.747 for the Main dataset and 0.752 vs 0.735 for the New dataset; compare with the bottom row of Table 3).

| | Reannotated Main NER: # | Precision | Recall | F1-score | New NER: # | Precision | Recall | F1-score |
|---|---|---|---|---|---|---|---|---|
| PER | 642 | .827 (.012) | .871 (.009) | .848 (.005) | 109 | .809 (.044) | .816 (.023) | .811 (.019) |
| ORG | 504 | .654 (.016) | .666 (.014) | .660 (.013) | 85 | .580 (.027) | .585 (.052) | .581 (.024) |
| LOC | 118 | .643 (.036) | .478 (.028) | .547 (.016) | 31 | .600 (.065) | .560 (.060) | .576 (.044) |
| GPE | 714 | .821 (.012) | .831 (.021) | .826 (.008) | 129 | .900 (.017) | .879 (.030) | .889 (.014) |
| TITLE | 171 | .676 (.023) | .814 (.014) | .739 (.011) | 19 | .750 (.062) | .718 (.064) | .731 (.048) |
| PROD | 161 | .572 (.033) | .628 (.026) | .598 (.024) | 60 | .509 (.043) | .474 (.052) | .488 (.029) |
| EVENT | 13 | .069 (.029) | .077 (.034) | .072 (.031) | 16 | .518 (.104) | .558 (.104) | .525 (.070) |
| DATE | 308 | .682 (.020) | .720 (.017) | .700 (.007) | 64 | .816 (.027) | .824 (.024) | .820 (.021) |
| TIME | 39 | .553 (.066) | .555 (.045) | .553 (.053) | 6 | .812 (.041) | .788 (.108) | .797 (.074) |
| PERCENT | 62 | .985 (.016) | .867 (.032) | .922 (.019) | 11 | .895 (.126) | 1 (-) | .940 (.074) |
| MONEY | 76 | .636 (.040) | .568 (.030) | .600 (.030) | 12 | .659 (.085) | .742 (.126) | .693 (.083) |
| Overall | 2571 | .737 (.010) | .757 (.009) | .747 (.004) | 497 | .736 (.014) | .734 (.017) | .735 (.006) |

Table 3: Predictive performance of models trained on the two datasets, evaluated on the respective validation sets.

| | Precision | Recall | F1-score |
|---|---|---|---|
| PER | .948 | .958 | .953 |
| ORG | .784 | .826 | .805 |
| LOC | .899 | .914 | .907 |
| Overall | .891 | .912 | .901 |

Table 4: Results on the old annotations of the Main NER test set. Adapted from Table 11 of Tanvir et al. (2021).

¹ https://github.com/chakki-works/seqeval
Figure 1 shows the entity-level comparison of the Joint models and the separate models on the respective validation sets. Figure 1a, which depicts the comparison on the validation set of the Main NER dataset, shows that the Joint model performs the same or better on all entities except the TIME entity, whose performance was already among the lowest on the Main dataset and which drops even further with the Joint model, from 0.553 to 0.433. On the other hand, the prediction accuracy of the EVENT entity, while still remaining quite low, improves considerably, from 0.072 to 0.310.

When comparing the Joint and separate model results on the New NER dataset (see Figure 1b), we observe that the Joint model performs the same or better on some entity types (PER, ORG, GPE, LOC, PROD, PERCENT, MONEY) and somewhat worse on the rest. The largest drop again occurs on the TIME entity, which falls from 0.797 to 0.627.
Overall, we conclude that training a Joint model instead of two separate models is justified. Although with the Joint model the prediction performance dropped for some entities, especially on the New NER dataset, the overall F1-score on the validation sets of both datasets was better than for the separate models. Therefore, we conduct the final evaluations on the test set with the Joint model.
§ 4.3 TEST RESULTS
The test results of the Joint model on the joined test set are shown in the fourth column of Table 5. The overall F1-score is somewhat higher on the test set than on the validation set. For some entities (PER, ORG, TITLE, DATE, TIME, MONEY), the test score is higher than the validation score, and for others it is somewhat lower. The test F1-score drops the most for the EVENT entity (from 0.370 to 0.264).

All previous results were presented as averages over ten different runs. Finally, we also picked one model to make publicly available. We chose the Joint model with the highest overall F1-score on the validation set. The test scores of this model are shown in the right-most block of Table 5. The overall F1-score of this best model is in line with the mean F1-score, which means that it was not the model with the highest test F1-score. However, as the standard deviations are small, the results of all models are in a close range, with the highest F1-score obtained on the test set being 0.785.

Figure 1: Entity-level comparison of the Joint model with models trained on each dataset separately.

| | Main+New Val F1 | Main Val F1 | New Val F1 | Main+New Test F1 | Best model Test: Prec | Rec | F1 |
|---|---|---|---|---|---|---|---|
| PER | .868 (.007) | .872 (.008) | .854 (.012) | .879 (.007) | .840 | .927 | .882 |
| ORG | .690 (.010) | .702 (.009) | .669 (.021) | .700 (.016) | .698 | .693 | .696 |
| LOC | .549 (.019) | .541 (.021) | .599 (.043) | .526 (.025) | .478 | .563 | .517 |
| GPE | .849 (.005) | .843 (.005) | .884 (.009) | .826 (.004) | .827 | .830 | .828 |
| TITLE | .733 (.013) | .737 (.011) | .709 (.034) | .777 (.017) | .788 | .758 | .773 |
| PROD | .598 (.018) | .634 (.028) | .481 (.042) | .568 (.020) | .576 | .579 | .578 |
| EVENT | .370 (.053) | .310 (.043) | .504 (.053) | .264 (.034) | .306 | .256 | .278 |
| DATE | .708 (.013) | .699 (.016) | .792 (.024) | .740 (.010) | .727 | .768 | .747 |
| TIME | .451 (.065) | .433 (.075) | .627 (.057) | .463 (.043) | .548 | .472 | .507 |
| PERCENT | .969 (.019) | .969 (.013) | .960 (.049) | .958 (.013) | .967 | .983 | .975 |
| MONEY | .622 (.032) | .625 (.042) | .719 (.105) | .699 (.014) | .789 | .614 | .690 |
| Overall | .761 (.004) | .766 (.002) | .752 (.010) | .773 (.006) | .766 | .783 | .774 |

Table 5: Evaluations of the Joint models trained on the joined train sets of both datasets. Left block: F1-scores on the different parts of the validation sets; middle: F1-scores on the joined test set; right block: test scores of the best Joint model.
§ 5 DISCUSSION
Although NER datasets have been created before for the Estonian language, this study presents the first attempt to annotate a richer set of entities beyond the most common person, organization and location names. As can be seen from the inter-annotator agreements, there were some entities (PER, GPE, PERCENT) that the annotators labelled with high consistency, while the reliability is lower for the other entities. The annotations of the EVENT entities had the lowest inter-annotator agreement, which suggests that the annotation inconsistencies for this and other entities could be analysed more thoroughly to understand the sources of confusion and improve the annotation guidelines.
Following previous attempts in other languages (notably Finnish), we also decided to annotate nested entities, allowing up to three levels of nesting. The data statistics showed that very few entities were annotated on the 3rd level, and even though a considerable number of entities were labelled on the 2nd level, their reliability in terms of inter-annotator agreement is not high enough, and thus using these labels for training predictive models might not be productive.
The experimental results with the BERT-based model showed that, while there seems to be some domain shift between the two datasets for some entities, overall, training a single joint model on both datasets is justified. We only trained baseline models based on EstBERT; according to previous studies (Kittask et al., 2020; Tanvir et al., 2021), adopting other base models such as Estonian WikiBERT (Pyysalo et al., 2021) or XLM-RoBERTa might lead to better results.
§ 6 CONCLUSIONS
We described the annotation process of two Estonian NER datasets, labelled with a rich annotation scheme involving eleven different entity types. The datasets also include nested annotations of up to three levels, although the nested annotations proved to be much less reliable than the first-level entities. To establish the baseline predictive accuracy, we experimented with two modeling scenarios on these newly annotated datasets: training two models, one for each dataset separately, and training a joint model on the joined dataset. Overall, the joint model performed better than the separate models, except for a few entity types, suggesting that the domain differences between these datasets are relatively small. Therefore, we suggest using these two datasets jointly as a single, more varied dataset.