Daoze committed on
Commit
5b363ab
·
verified ·
1 Parent(s): d8ce6de

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes.

Files changed (50):
  1. UAI/UAI 2022/UAI 2022 Conference/B0gGoUIiqx9/Initial_manuscript_md/Initial_manuscript.md +644 -0
  2. UAI/UAI 2022/UAI 2022 Conference/B0l8-wLjql5/Initial_manuscript_md/Initial_manuscript.md +688 -0
  3. UAI/UAI 2022/UAI 2022 Conference/B0l8-wLjql5/Initial_manuscript_tex/Initial_manuscript.tex +385 -0
  4. UAI/UAI 2022/UAI 2022 Conference/B0l_lDLs9gq/Initial_manuscript_md/Initial_manuscript.md +0 -0
  5. UAI/UAI 2022/UAI 2022 Conference/B0l_lDLs9gq/Initial_manuscript_tex/Initial_manuscript.tex +323 -0
  6. UAI/UAI 2022/UAI 2022 Conference/B0xLpILs5ec/Initial_manuscript_md/Initial_manuscript.md +487 -0
  7. UAI/UAI 2022/UAI 2022 Conference/B0xLpILs5ec/Initial_manuscript_tex/Initial_manuscript.tex +484 -0
  8. UAI/UAI 2022/UAI 2022 Conference/B248iw8jce5/Initial_manuscript_md/Initial_manuscript.md +529 -0
  9. UAI/UAI 2022/UAI 2022 Conference/B248iw8jce5/Initial_manuscript_tex/Initial_manuscript.tex +279 -0
  10. UAI/UAI 2022/UAI 2022 Conference/B3M4CS8oql9/Initial_manuscript_md/Initial_manuscript.md +533 -0
  11. UAI/UAI 2022/UAI 2022 Conference/B3M4CS8oql9/Initial_manuscript_tex/Initial_manuscript.tex +463 -0
  12. UAI/UAI 2022/UAI 2022 Conference/B5Lf6PUoqg5/Initial_manuscript_md/Initial_manuscript.md +0 -0
  13. UAI/UAI 2022/UAI 2022 Conference/B5Lf6PUoqg5/Initial_manuscript_tex/Initial_manuscript.tex +371 -0
  14. UAI/UAI 2022/UAI 2022 Conference/BAeO6LIjcec/Initial_manuscript_md/Initial_manuscript.md +349 -0
  15. UAI/UAI 2022/UAI 2022 Conference/BAeO6LIjcec/Initial_manuscript_tex/Initial_manuscript.tex +346 -0
  16. UAI/UAI 2022/UAI 2022 Conference/BAlqxvUs5lq/Initial_manuscript_md/Initial_manuscript.md +594 -0
  17. UAI/UAI 2022/UAI 2022 Conference/BAlqxvUs5lq/Initial_manuscript_tex/Initial_manuscript.tex +575 -0
  18. UAI/UAI 2022/UAI 2022 Conference/BCg4lD8ice5/Initial_manuscript_md/Initial_manuscript.md +575 -0
  19. UAI/UAI 2022/UAI 2022 Conference/BCg4lD8ice5/Initial_manuscript_tex/Initial_manuscript.tex +330 -0
  20. UAI/UAI 2022/UAI 2022 Conference/BElGwDLoqlc/Initial_manuscript_md/Initial_manuscript.md +702 -0
  21. UAI/UAI 2022/UAI 2022 Conference/BElGwDLoqlc/Initial_manuscript_tex/Initial_manuscript.tex +283 -0
  22. UAI/UAI 2022/UAI 2022 Conference/BElx3S8s5e9/Initial_manuscript_md/Initial_manuscript.md +480 -0
  23. UAI/UAI 2022/UAI 2022 Conference/BElx3S8s5e9/Initial_manuscript_tex/Initial_manuscript.tex +307 -0
  24. UAI/UAI 2022/UAI 2022 Conference/BFULBwUocxq/Initial_manuscript_md/Initial_manuscript.md +707 -0
  25. UAI/UAI 2022/UAI 2022 Conference/BFULBwUocxq/Initial_manuscript_tex/Initial_manuscript.tex +263 -0
  26. UAI/UAI 2022/UAI 2022 Conference/BFZL7ULicg5/Initial_manuscript_md/Initial_manuscript.md +913 -0
  27. UAI/UAI 2022/UAI 2022 Conference/BFZL7ULicg5/Initial_manuscript_tex/Initial_manuscript.tex +473 -0
  28. UAI/UAI 2022/UAI 2022 Conference/BGGevIUicl9/Initial_manuscript_md/Initial_manuscript.md +383 -0
  29. UAI/UAI 2022/UAI 2022 Conference/BGGevIUicl9/Initial_manuscript_tex/Initial_manuscript.tex +227 -0
  30. UAI/UAI 2022/UAI 2022 Conference/BGe6r8i9x5/Initial_manuscript_md/Initial_manuscript.md +334 -0
  31. UAI/UAI 2022/UAI 2022 Conference/BGe6r8i9x5/Initial_manuscript_tex/Initial_manuscript.tex +225 -0
  32. UAI/UAI 2022/UAI 2022 Conference/BGfLS_8j5eq/Initial_manuscript_md/Initial_manuscript.md +245 -0
  33. UAI/UAI 2022/UAI 2022 Conference/BGfLS_8j5eq/Initial_manuscript_tex/Initial_manuscript.tex +210 -0
  34. UAI/UAI 2022/UAI 2022 Conference/BKZCKwIjcl5/Initial_manuscript_md/Initial_manuscript.md +0 -0
  35. UAI/UAI 2022/UAI 2022 Conference/BKZCKwIjcl5/Initial_manuscript_tex/Initial_manuscript.tex +483 -0
  36. UAI/UAI 2022/UAI 2022 Conference/BKZIivLs9xc/Initial_manuscript_md/Initial_manuscript.md +381 -0
  37. UAI/UAI 2022/UAI 2022 Conference/BKZIivLs9xc/Initial_manuscript_tex/Initial_manuscript.tex +281 -0
  38. UAI/UAI 2022/UAI 2022 Conference/BKbcdPUs9ec/Initial_manuscript_md/Initial_manuscript.md +0 -0
  39. UAI/UAI 2022/UAI 2022 Conference/BKbcdPUs9ec/Initial_manuscript_tex/Initial_manuscript.tex +0 -0
  40. UAI/UAI 2022/UAI 2022 Conference/BbGv8Lsql9/Initial_manuscript_md/Initial_manuscript.md +0 -0
  41. UAI/UAI 2022/UAI 2022 Conference/BbGv8Lsql9/Initial_manuscript_tex/Initial_manuscript.tex +465 -0
  42. UAI/UAI 2022/UAI 2022 Conference/BcIfJuIscx5/Initial_manuscript_md/Initial_manuscript.md +721 -0
  43. UAI/UAI 2022/UAI 2022 Conference/BcIfJuIscx5/Initial_manuscript_tex/Initial_manuscript.tex +467 -0
  44. UAI/UAI 2022/UAI 2022 Conference/BcLqJUIs5x5/Initial_manuscript_md/Initial_manuscript.md +0 -0
  45. UAI/UAI 2022/UAI 2022 Conference/BcLqJUIs5x5/Initial_manuscript_tex/Initial_manuscript.tex +263 -0
  46. UAI/UAI 2022/UAI 2022 Conference/BcLwrLLi5xq/Initial_manuscript_md/Initial_manuscript.md +444 -0
  47. UAI/UAI 2022/UAI 2022 Conference/BcLwrLLi5xq/Initial_manuscript_tex/Initial_manuscript.tex +293 -0
  48. UAI/UAI 2022/UAI 2022 Conference/BcU_UIIjqg9/Initial_manuscript_md/Initial_manuscript.md +0 -0
  49. UAI/UAI 2022/UAI 2022 Conference/BcU_UIIjqg9/Initial_manuscript_tex/Initial_manuscript.tex +384 -0
  50. UAI/UAI 2022/UAI 2022 Conference/BfMuG_Iiqgc/Initial_manuscript_md/Initial_manuscript.md +892 -0
UAI/UAI 2022/UAI 2022 Conference/B0gGoUIiqx9/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,644 @@
# Multi-winner Approval Voting Goes Epistemic

## Abstract

Epistemic voting interprets votes as noisy signals about a ground truth. We consider contexts where the truth consists of a set of objective winners, knowing a lower and upper bound on its cardinality. A prototypical problem for this setting is the aggregation of multi-label annotations with prior knowledge on the size of the ground truth. We posit noise models, for which we define rules that output an optimal set of winners. We report on experiments on multi-label annotations (which we collected).

## 1 INTRODUCTION

The epistemic view of voting assumes the existence of a ground truth which, usually, is either an alternative or a ranking over alternatives. Votes reflect opinions or beliefs about this ground truth; the goal is to aggregate these votes so as to identify it. Usual methods define a noise model specifying the probability of each voting profile given the ground truth, and output the alternative that is the most likely state of the world, or the ranking that is most likely the true ranking.

Now, there are contexts where the ground truth consists neither of a single alternative nor of a ranking, but of a set of alternatives. Typical examples are multi-label crowdsourcing (find the items in a set that satisfy some property, e.g. the sport teams appearing on a picture) or finding the $k$ objectively best candidates (best papers at a conference, best performances in artistic sports, the $k$ patients with the highest probabilities of survival if assigned a scarce medical resource).

The alternatives that are truly in the ground truth are called 'winning' alternatives. Depending on the context, the number of winning alternatives can be fixed, unconstrained, or, more generally, constrained to lie in a given interval. This constraint expresses some prior knowledge on the cardinality of the ground truth. This prior knowledge is held by the central authority that aggregates the votes, and not necessarily by the voters themselves. Here are some examples:

- Picture annotation via crowdsourcing: participants are shown a picture taken from a soccer match and have to identify the team(s) appearing in it. The ground truth is known to contain one or two teams.

- Guitar chord transcription: voters are base classifier algorithms [Nguyen et al., 2020] which, for a given chord, select the set of notes that constitutes it. The true set of notes can contain three to six alternatives.

- Jury: participants are members of a jury which has to give an award to three papers presented at a conference: the number of objective winners is fixed to three. (In a variant, the number of awards would be at most three.)

- Resource allocation: participants are doctors and alternatives are Covid-19 patients in urgent need of intensive care; there is a limited number $k$ of intensive care units. The ground truth consists of those patients who most deserve to be cured (for example those with the $k$ highest probabilities of survival if cured).

We assume that voters provide a simple form of information: approval ballots, indicating which alternatives they consider plausible winners. These approval ballots are not subject to any cardinality constraint: a voter may approve any number of alternatives, even one that does not lie in the interval bearing on the output. This is typically the case for totally ignorant voters, who may plausibly approve all alternatives.

Sometimes, the aggregating mechanism has some prior information about the likelihood of alternatives and the reliability of voters. We first study a simple case where this information is specified in the input: in the noise model, each voter has a probability $p_i$ (resp. $q_i$) of approving a winning (resp. non-winning) alternative, and each alternative has a prior probability of being winning. This departs from classical voting, where voters are usually treated equally (anonymity), and similarly for alternatives (neutrality).

This simple case serves as a building block for the more complex case where these parameters are not known beforehand but estimated from the votes: votes allow us to infer information about plausibly winning alternatives, from which we infer information about voter reliabilities, which in turn leads us to revise the information about winning alternatives, and so on until the process converges. Here we move back to an anonymous and neutral setting, since all alternatives (resp. voters) are treated equally before the votes are known.

After discussing related work (Section 2), we introduce the model (Section 3) and give an estimation algorithm (Section 4), first in the case where the parameters are known, and then in the case where they are estimated from the votes. In Section 5 we present a data gathering task and analyse the results of the experiments. Section 6 concludes.

## 2 RELATED WORK

**Epistemic social choice** Epistemic social choice consists in recovering an objective ground truth from votes seen as noisy reports about the ground truth, using maximum likelihood estimation. It dates back to Condorcet's jury theorem [Condorcet, 1785]: $n$ independent, equally reliable voters vote on two alternatives that are a priori equally likely; if every vote is correct with probability $p > \frac{1}{2}$, then the majority outputs the correct alternative with a probability that increases with $n$ and tends to 1 as $n$ grows to infinity.

There are several extensions of Condorcet's jury theorem: Young [1988] for an arbitrary number of alternatives; Shapley and Grofman [1984] and Drissi-Bakhkhat and Truchon [2004] for voters with various competence degrees; Ben-Yashar and Nitzan [1997] and Ben-Yashar and Paroush [2001] for nonuniform priors over alternatives; Pivato [2013] and Pivato [2017] for dependent voters. Conitzer and Sandholm [2005] and Conitzer et al. [2009] characterize various voting rules as maximum likelihood estimators, each associated with a particular noise model. See Nitzan and Paroush [2017] and Elkind and Slinko [2016] for surveys on recent developments.

**Multi-winner voting** Multi-winner voting rules map voting profiles into sets of alternatives. A voting profile can be either a collection of subsets of alternatives (approval ballots) or a collection of rankings over alternatives (ordinal ballots). The output is often constrained to have a fixed cardinality, but not always: see Kilgour [2016], Faliszewski et al. [2020]. There have been a lot of recent developments in the field: see the recent surveys by Faliszewski et al. [2017] and Lackner and Skowron [2020]. They, however, deal only with the classical (non-epistemic) view of social choice, where votes express preferences.

**Multi-winner epistemic voting** Multi-winner epistemic voting has received only little attention so far. Procaccia et al. [2012] assume a ground truth ranking over alternatives, and identify rules that output the $k$ alternatives maximizing the likelihood of containing the best alternative, or the likelihood of coinciding with the top-$k$ alternatives. The last section of [Xia and Conitzer, 2011] defines a noise model where the ground truth is a set of $k$ alternatives (and the reported votes are partial orders). The only work we know of where the noise models produce random approval votes from a ground truth consisting of a set of alternatives is [Caragiannis et al., 2020]. They define a family of distance-based noise models, whose prototypical instance generates approval votes selecting an alternative in the ground truth (resp. not in the ground truth) with probability $p$ (resp. $1 - p$); as we will see further, this is a specific case of our noise model. Generalizing multi-winner voting, Xia et al. [2010] study epistemic voting on combinatorial (or multi-attribute) domains.

**Epistemic approval voting** Epistemic voting with approval ballots has scarcely been considered. Procaccia and Shah [2015] assume that the ground truth is a ranking over alternatives, and identify noise models for which approval voting is optimal given $k$-approval votes, in the sense that the objectively best alternative gets elected. Allouche et al. [2022] continue this line of research but assume instead that the ground truth consists of a single alternative. They define various noise models and show that those that work best on real datasets are those that give higher confidence to voters who approve few alternatives. Caragiannis and Micha [2017] study the number of samples needed to recover the ground truth ranking over alternatives with high enough probability from approval ballots; they show that it is exponential if ballots are required to approve $k$ candidates, but polynomial if the size of the ballots is randomized.

**Crowdsourcing and social choice** A social choice-theoretic study of collective annotation tasks was done by Kruger et al. [2014] and Qing et al. [2014]. Mechanisms for incentive-compatible elicitation with approval ballots in crowdsourcing applications have been designed by Shah and Zhou [2020]. Meir et al. [2019] define a method to aggregate votes weighted according to their average proximity to the other votes as an estimation of their reliability.

Prelec et al. [2017] introduce the Bayesian truth serum approach: eliciting, in addition to the voters' answers, their prediction of the distribution of answers gives much better results. This approach was generalized by Hosseini et al. [2021] to contexts where the ground truth is a ranking.

Beyond social choice, collective multi-label annotation was first addressed by Nowak and Rüger [2010], who study the agreement between experts and non-experts in some multi-labelling tasks, and by Deng et al. [2014], who solve the multi-label estimation problem with a scalable aggregation method.

## 3 THE MODEL

Let $\mathcal{N} = \{1, \ldots, n\}$ be a set of voters, and $\mathcal{A} = \{a_1, \ldots, a_m\}$ a set of alternatives (possible objects in images, notes in chords, papers, patients...). Consider a set of $L$ instances: an instance $z$ consists of an approval profile $A^z = (A_1^z, \ldots, A_n^z)$, where $A_i^z \subseteq \mathcal{A}$ is an approval ballot for every $i \in \mathcal{N}$. For example, in a crowdsourcing context, a task usually contains multiple questions, and an instance comprises the voters' answers to one of these questions.

For each instance $z \in \{1, \ldots, L\}$, there exists an unknown ground truth $S_z^* \in \mathcal{S} = 2^{\mathcal{A}}$, which is the set of objectively correct alternatives in instance $z$. The central authority (but not necessarily the voters) knows a priori that the number of alternatives in each ground truth lies in the interval $[l, u]$: $S_z^* \in \mathcal{S}_{l,u} = \{S \in \mathcal{S} : l \leq |S| \leq u\}$, for given bounds $0 \leq l \leq u \leq m$.

Our goal is to unveil the ground truth for each of these instances using the votes and the prior knowledge on the number of winning alternatives. We define a noise model consisting of two parametric distributions, namely, a conditional distribution of the approval ballots given the ground truth, and a prior distribution on the ground truth. Here we depart from classical noise models in epistemic social choice, as we suppose that the parameters of these distributions may be unknown and thus need to be estimated.

For each voter $i \in \mathcal{N}$, we suppose that there exist two unknown parameters $(p_i, q_i)$ in $(0, 1)$ such that the approval ballot $A_i^z$ on an instance $z$ is drawn according to the following distribution: for each $a \in \mathcal{A}$,

$$
P(a \in A_i^z \mid S_z^* = S) = \begin{cases} p_i & \text{if } a \in S \\ q_i & \text{if } a \notin S \end{cases}
$$

where $p_i$ (resp. $q_i$) is the (unknown) probability that voter $i$ approves a correct (resp. incorrect) alternative. Then we make the following assumptions:

(1) A voter's approvals of alternatives are mutually independent given the ground truth and parameters $(p_i, q_i)_{i \in \mathcal{N}}$.

(2) Voters' ballots are mutually independent given the ground truth.

(3) Instances are independent given the parameters $(p_i, q_i)_{i \in \mathcal{N}}$ and the ground truths.

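This generative process is easy to simulate. The sketch below is a minimal Python illustration under assumptions (1)–(3); the function names `sample_ballot` and `sample_profile` are ours, not the paper's:

```python
import random

def sample_ballot(alternatives, ground_truth, p_i, q_i, rng):
    """One approval ballot: each winning alternative is approved with
    probability p_i, each non-winning one with probability q_i
    (assumption (1): approvals independent given the ground truth)."""
    return {a for a in alternatives
            if rng.random() < (p_i if a in ground_truth else q_i)}

def sample_profile(alternatives, ground_truth, params, seed=0):
    """One ballot per voter (assumption (2): ballots independent)."""
    rng = random.Random(seed)
    return [sample_ballot(alternatives, ground_truth, p, q, rng)
            for (p, q) in params]
```

With $p_i = 1$ and $q_i = 0$ every ballot reproduces the ground truth exactly; intermediate values yield noisy ballots.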
To model the prior probability of any set $S$ to be the ground truth $S^*$, we define parameters $t_j = P(a_j \in S^*)$; $t_j$ can be understood as the prior probability of $a_j$ being in the ground truth set $S^*$ before the cardinality constraints are taken into account. This, together with an independence assumption on the events $\{a_j \in S^*\}$, gives $P(S^* = S) = \prod_{a_j \in S} t_j \prod_{a_j \notin S} (1 - t_j)$. Note that the choice of the parameters $t_j$ is not crucial when running the algorithm for estimating the ground truth: we will see in Section 4.3 that it converges whatever their values. The distribution conditional on the prior knowledge about the size of the ground truth can be seen as a projection on the constraints followed by a normalization:

$$
\widetilde{P}(S) = P\left(S^* = S \mid l \leq |S^*| \leq u\right) = \frac{P\left(S^* = S,\ |S^*| \in [l, u]\right)}{P\left(|S^*| \in [l, u]\right)}
$$

It follows:

$$
\widetilde{P}(S) = \begin{cases} \dfrac{1}{\beta(l, u, t)} \prod\limits_{a_j \in S} t_j \prod\limits_{a_j \notin S} (1 - t_j) & \text{if } S \in \mathcal{S}_{l,u} \\ 0 & \text{if } S \notin \mathcal{S}_{l,u} \end{cases}
$$

where $\beta(l, u, t) = \sum_{S \in \mathcal{S}_{l,u}} \prod_{a_j \in S} t_j \prod_{a_j \notin S} (1 - t_j)$.

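For small $m$, the constrained prior $\widetilde{P}$ and the normalizing constant $\beta(l, u, t)$ can be computed by brute-force enumeration of $\mathcal{S}_{l,u}$ (exponential in $m$, so for illustration only). The helper names below are ours, and alternatives are identified with indices $0, \ldots, m-1$:

```python
from itertools import combinations
from math import prod

def prior(S, t):
    """Unconstrained prior P(S* = S) under independent inclusion events."""
    return prod(t[j] if j in S else 1 - t[j] for j in range(len(t)))

def constrained_prior(S, t, l, u):
    """P~(S): zero outside S_{l,u}; renormalised by beta(l, u, t) inside."""
    beta = sum(prior(set(c), t)
               for k in range(l, u + 1)
               for c in combinations(range(len(t)), k))
    return prior(S, t) / beta if l <= len(S) <= u else 0.0
```

For instance, with $m = 3$, $t = (0.5, 0.5, 0.5)$ and $(l, u) = (1, 1)$, each singleton gets $\widetilde{P} = 1/3$.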
The ground truths associated with different instances are assumed to be mutually independent given the parameters.

Two particular cases are worth discussing. First, when $(l, u) = (0, m)$, the problem is unconstrained and we have $\beta(0, m, t) = P(|S^*| \in [0, m]) = 1$, so $\widetilde{P}(S) = P(S^* = S)$. In this case the problem degenerates into a series of independent binary label-wise estimations (see Subsection 4.1).

Second, in the single-winner case $(l, u) = (1, 1)$, we have $\widetilde{P}(\{a_j\}) = \frac{t_j \prod_{h \neq j} (1 - t_h)}{\beta(1, 1, t)}$; therefore, for any approval profile $A$, $P(S^* = \{a_j\} \mid A, |S^*| = 1) \propto \frac{t_j}{1 - t_j} P(A \mid S^* = \{a_j\})$. We recover the same estimation problem if we simply introduce $\alpha_j = P(S^* = \{a_j\})$ with $\sum_j \alpha_j = 1$ as in Ben-Yashar and Paroush [2001], in which case we have $P(S^* = \{a_j\} \mid A, |S^*| = 1) \propto \alpha_j P(A \mid S^* = \{a_j\})$.

## 4 ESTIMATING THE GROUND TRUTH

Our aim is the intertwined estimation of the ground truths and the parameters via maximizing the total likelihood of the instances:

$$
\mathcal{L}(A, S, p, q, t) = \prod_{z=1}^{L} \widetilde{P}(S_z) \prod_{i=1}^{n} P(A_i^z \mid S_z)
$$

where:

$$
P(A_i^z \mid S_z) = p_i^{|A_i^z \cap S_z|}\, q_i^{|A_i^z \cap \overline{S_z}|}\, (1 - p_i)^{|\overline{A_i^z} \cap S_z|}\, (1 - q_i)^{|\overline{A_i^z} \cap \overline{S_z}|}
$$

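The conditional probability $P(A_i^z \mid S_z)$ is just a product of per-alternative factors: $p_i$ or $1 - p_i$ for winning alternatives, $q_i$ or $1 - q_i$ for the others, depending on whether the voter approved them. A Python sketch (the function name `ballot_likelihood` is ours):

```python
def ballot_likelihood(A_i, S, alternatives, p_i, q_i):
    """P(A_i | S): multiply p_i or 1 - p_i for winning alternatives,
    and q_i or 1 - q_i for non-winning ones, according to whether
    the voter approved each alternative."""
    lik = 1.0
    for a in alternatives:
        if a in S:
            lik *= p_i if a in A_i else 1 - p_i
        else:
            lik *= q_i if a in A_i else 1 - q_i
    return lik
```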
To this aim, we will introduce an iterative algorithm whose two main steps will be presented in sequence in the next subsections, before the main algorithm is formally defined and its convergence shown. These two steps are:

- Estimating the ground truths given the parameters.

- Estimating the parameters given the ground truths.

Simply put, the algorithm consists in iterating these two steps until it converges to a fixed point.

### 4.1 ESTIMATING THE GROUND TRUTH GIVEN THE VOTES AND THE PARAMETERS

Since instances are independent given the parameters, we focus here on one instance with ground truth $S^*$ and profile $A = (A_1, \ldots, A_n)$. Before diving into maximum likelihood estimation (MLE), we introduce some notions and prove some lemmas. In this subsection, we suppose that the parameters $(p_i, q_i)_{i \in \mathcal{N}}$ and $(t_j)_{j \in \mathcal{A}}$ are known (later on, these parameters will be replaced by their estimates at each iteration of the algorithm). Thus, all in all, input and output are as follows:

- Input: approval profile $A$; parameters $(p_i, q_i)_{i \in \mathcal{N}}$ and $(t_j)_{j \in \mathcal{A}}$.

- Output: an MLE of the ground truth $S^*$.

Definition 1 (weighted approval score). Given an approval profile $(A_1, \ldots, A_n)$, noise parameters $(p_i, q_i)_{1 \leq i \leq n}$ and prior parameters $(t_j)_{1 \leq j \leq m}$, define:

$$
app_w(a_j) = \ln\left(\frac{t_j}{1 - t_j}\right) + \sum_{i : a_j \in A_i} \ln\left(\frac{p_i (1 - q_i)}{q_i (1 - p_i)}\right)
$$

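Computing the weighted approval scores of Definition 1 is a single pass over the ballots. A Python sketch (the function name is ours; alternatives are indices $0, \ldots, m-1$):

```python
from math import log

def weighted_approval_scores(profile, params, t):
    """app_w(a_j) = ln(t_j / (1 - t_j)) + sum, over voters i approving
    a_j, of ln(p_i (1 - q_i) / (q_i (1 - p_i)))  (Definition 1)."""
    scores = [log(tj / (1 - tj)) for tj in t]   # virtual-voter prior terms
    for A_i, (p, q) in zip(profile, params):
        w = log(p * (1 - q) / (q * (1 - p)))    # voter i's weight
        for j in A_i:
            scores[j] += w
    return scores
```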
The scores $app_w(a_j)$ can be interpreted as weighted approval scores for an $(n + m)$-voter profile where:

- for each voter $1 \leq i \leq n$: $i$ has a weight $w_i = \ln\left(\frac{p_i (1 - q_i)}{q_i (1 - p_i)}\right)$ and casts approval ballot $A_i$;

- for each $1 \leq j \leq m$: there is a virtual voter with weight $w_j = \ln\left(\frac{t_j}{1 - t_j}\right)$ who casts approval ballot $A_j = \{a_j\}$.

While the weight of each voter $i \in \mathcal{N}$ depends on her reliability, the prior information on each alternative plays the role of a virtual voter who approves only the concerned alternative, with a weight that increases as the prior parameter increases.

From now on, we suppose without loss of generality that the alternatives are ranked according to their score:

$$
app_w(a_1) \geq app_w(a_2) \geq \cdots \geq app_w(a_m)
$$

Definition 2 (threshold and partition). Define the threshold:

$$
\tau_n = \sum_{i=1}^{n} \ln\left(\frac{1 - q_i}{1 - p_i}\right)
$$

and the partition of the set of alternatives into three sets:

$$
\begin{cases} S_{max}^{\tau_n} = \{a \in \mathcal{A} : app_w(a) > \tau_n\} \\ S_{tie}^{\tau_n} = \{a \in \mathcal{A} : app_w(a) = \tau_n\} \\ S_{min}^{\tau_n} = \mathcal{A} \setminus (S_{max}^{\tau_n} \cup S_{tie}^{\tau_n}) \end{cases}
$$

and let $k_{max}^{\tau_n} = |S_{max}^{\tau_n}|$, $k_{tie}^{\tau_n} = |S_{tie}^{\tau_n}|$, $k_{min}^{\tau_n} = |S_{min}^{\tau_n}|$.

The next result characterizes the sets in $\mathcal{S}$ that are MLEs of the ground truth given the parameters.

Theorem 1. $\widetilde{S} \in \arg\max_{S \in \mathcal{S}} \mathcal{L}(A, S, p, q, t)$ if and only if there exists $k \in [l, u]$ such that $\widetilde{S}$ is the set of the $k$ alternatives with the highest values of $app_w$ and:

$$
\begin{cases} |\widetilde{S} \cap S_{max}^{\tau_n}| = \min(u, k_{max}^{\tau_n}) \\ |\widetilde{S} \cap S_{min}^{\tau_n}| = \max(0, l - k_{tie}^{\tau_n} - k_{max}^{\tau_n}) \end{cases} \tag{1}
$$

So the estimator $\widetilde{S}$ is made of some top-$k$ alternatives, where the possible values of $k$ are determined by Eq. (1). The first equation imposes that $\widetilde{S}$ includes as many elements as possible from $S_{max}^{\tau_n}$ (without exceeding the upper bound $u$), whereas the second one imposes that $\widetilde{S}$ includes as few elements as possible from $S_{min}^{\tau_n}$ (without going below the lower bound $l$). An example is included in the appendix.

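Theorem 1 yields a simple procedure for producing one maximum-likelihood winner set: rank the alternatives by $app_w$, compare scores against the threshold $\tau_n$, and clamp the number of winners to $[l, u]$. The sketch below (function name ours, alternatives identified with indices) returns the canonical maximizer that takes all above-threshold alternatives plus ties, clamped to the cardinality bounds:

```python
from math import log

def mle_winners(profile, params, t, l, u):
    """One maximum-likelihood winner set per Theorem 1."""
    scores = [log(tj / (1 - tj)) for tj in t]       # app_w, prior terms
    for A_i, (p, q) in zip(profile, params):
        w = log(p * (1 - q) / (q * (1 - p)))
        for j in A_i:
            scores[j] += w
    tau = sum(log((1 - q) / (1 - p)) for (p, q) in params)  # threshold
    k_max = sum(s > tau for s in scores)    # |S_max|
    k_tie = sum(s == tau for s in scores)   # |S_tie| (exact float match)
    # Any k in [l, u] satisfying Eq. (1) is optimal; take all strictly
    # above-threshold alternatives plus ties, clamped to [l, u].
    k = max(l, min(u, k_max + k_tie))
    ranking = sorted(range(len(t)), key=lambda j: -scores[j])
    return set(ranking[:k])
```

For example, with three equally reliable voters ($p_i = 0.8$, $q_i = 0.2$), flat priors $t_j = 0.5$, and ballots $\{a_0, a_1\}$, $\{a_0\}$, $\{a_0, a_1, a_2\}$, the bounds $(l, u) = (1, 2)$ select $\{a_0, a_1\}$, while $(l, u) = (1, 1)$ selects $\{a_0\}$.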
Proof. Since $\widetilde{P}(S) > 0 \Leftrightarrow S \in \mathcal{S}_{l,u}$, we have $\arg\max_{S \in \mathcal{S}} L(S) = \arg\max_{S \in \mathcal{S}_{l,u}} L(S)$. Moreover, for any $S \in \mathcal{S}_{l,u}$:

$$
L(S) = \widetilde{P}(S) \prod_{i=1}^{n} p_i^{|A_i \cap S|}\, q_i^{|A_i \cap \bar{S}|}\, (1 - p_i)^{|\overline{A_i} \cap S|}\, (1 - q_i)^{|\overline{A_i} \cap \bar{S}|}
$$

$$
= \widetilde{P}(S) \prod_{i=1}^{n} p_i^{|A_i \cap S|}\, q_i^{|A_i| - |A_i \cap S|}\, (1 - p_i)^{|S| - |A_i \cap S|}\, (1 - q_i)^{|\overline{A_i}| - |S| + |A_i \cap S|}
$$

$$
\propto \widetilde{P}(S) \prod_{i=1}^{n} \left[\frac{1 - p_i}{1 - q_i}\right]^{|S|} \left[\frac{p_i (1 - q_i)}{q_i (1 - p_i)}\right]^{|A_i \cap S|}
$$

$$
\propto \frac{1}{\beta} \prod_{a_j \in S} t_j \prod_{a_j \notin S} (1 - t_j) \prod_{i=1}^{n} \left[\frac{1 - p_i}{1 - q_i}\right]^{|S|} \left[\frac{p_i (1 - q_i)}{q_i (1 - p_i)}\right]^{|A_i \cap S|}
$$

$$
\propto \prod_{a_j \in S} \frac{t_j}{1 - t_j} \prod_{i=1}^{n} \left[\frac{1 - p_i}{1 - q_i}\right]^{|S|} \left[\frac{p_i (1 - q_i)}{q_i (1 - p_i)}\right]^{|A_i \cap S|}
$$

Thus the log-likelihood reads:

$$
l(S) = \sum_{a_j \in S} \ln \frac{t_j}{1 - t_j} + \sum_{i=1}^{n} \left[ |S| \ln \frac{1 - p_i}{1 - q_i} + |A_i \cap S| \ln \frac{p_i (1 - q_i)}{q_i (1 - p_i)} \right]
$$

$$
= \sum_{a_j \in S} \Bigg[ \underbrace{\ln \frac{t_j}{1 - t_j} + \sum_{i : a_j \in A_i} \ln \frac{p_i (1 - q_i)}{q_i (1 - p_i)}}_{app_w(a_j)} - \sum_{i=1}^{n} \ln \frac{1 - q_i}{1 - p_i} \Bigg]
$$

201
This means that $a \in {S}_{\max }^{{\tau }_{n}}$ if and only if $l\left( a\right) > 0$, $a \in {S}_{\min }^{{\tau }_{n}}$ if and only if $l\left( a\right) < 0$, and $a \in {S}_{\text{tie}}^{{\tau }_{n}}$ if and only if $l\left( a\right) = 0$. Now, let ${S}_{M}$ be a maximizer of the likelihood. Since $l\left( {a}_{j}\right) \geq l\left( {a}_{h}\right) \Leftrightarrow {app}_{w}\left( {a}_{j}\right) \geq {app}_{w}\left( {a}_{h}\right)$, we have that ${S}_{M}$, which maximizes $\mathop{\sum }\limits_{{{a}_{j} \in S}}l\left( {a}_{j}\right)$, consists of the top-$k$ alternatives for some $k \in \left\lbrack {l\ldots u}\right\rbrack$.

Furthermore, $\left| {{S}_{M} \cap {S}_{\min }^{{\tau }_{n}}}\right| = \max \left( {0, l - {k}_{\text{tie}}^{{\tau }_{n}} - {k}_{\max }^{{\tau }_{n}}}\right)$. Start by noticing that $\left| {{S}_{M} \cap {S}_{\min }^{{\tau }_{n}}}\right| \geq \max \left( {0, l - {k}_{\text{tie}}^{{\tau }_{n}} - {k}_{\max }^{{\tau }_{n}}}\right)$, since $\left| {{S}_{M} \cap {S}_{\min }^{{\tau }_{n}}}\right| \geq l - \left| {{S}_{M} \cap {S}_{\max }^{{\tau }_{n}}}\right| - \left| {{S}_{M} \cap {S}_{\text{tie}}^{{\tau }_{n}}}\right| \geq l - {k}_{\max }^{{\tau }_{n}} - {k}_{\text{tie}}^{{\tau }_{n}}$. Suppose now that $\left| {{S}_{M} \cap {S}_{\min }^{{\tau }_{n}}}\right| > \max \left( {0, l - {k}_{\text{tie}}^{{\tau }_{n}} - {k}_{\max }^{{\tau }_{n}}}\right)$. Then $\left| {S}_{M}\right| > l$, because otherwise, if $\left| {S}_{M}\right| = l$, then $\left| {{S}_{M} \cap {S}_{\max }^{{\tau }_{n}}}\right| + \left| {{S}_{M} \cap {S}_{\text{tie}}^{{\tau }_{n}}}\right| = l - \left| {{S}_{M} \cap {S}_{\min }^{{\tau }_{n}}}\right| < {k}_{\max }^{{\tau }_{n}} + {k}_{\text{tie}}^{{\tau }_{n}}$, which would mean that some elements of ${S}_{\text{tie}}^{{\tau }_{n}}$ and ${S}_{\max }^{{\tau }_{n}}$ are not in ${S}_{M}$, a contradiction since $\left| {{S}_{M} \cap {S}_{\min }^{{\tau }_{n}}}\right| > 0$ and ${S}_{M}$ is a top-$k$ set. Now consider $a \in {S}_{M} \cap {S}_{\min }^{{\tau }_{n}}$: we have $\left| {{S}_{M}\smallsetminus \{ a\} }\right| \geq l$ and $l\left( {S}_{M}\right) = l\left( {{S}_{M}\smallsetminus \{ a\} }\right) + l\left( a\right) < l\left( {{S}_{M}\smallsetminus \{ a\} }\right)$, which is a contradiction.

With the same idea we can prove that $\left| {{S}_{M} \cap {S}_{\max }^{{\tau }_{n}}}\right| = \min \left( {u,{k}_{\max }^{{\tau }_{n}}}\right)$.

Conversely, consider an admissible set $S$ of top-$k$ alternatives that verifies the constraints (1). Let ${S}_{M}$ be a MLE which, by the first part of the proof, is a top-${k}^{\prime }$ set that also satisfies the same constraints (1). Thus we have that $\left| {{S}_{M} \cap {S}_{\max }^{{\tau }_{n}}}\right| = \left| {S \cap {S}_{\max }^{{\tau }_{n}}}\right| = \min \left( {u,{k}_{\max }^{{\tau }_{n}}}\right)$, and since $S$ and ${S}_{M}$ are top-$k$ and top-${k}^{\prime }$ sets, we have that $S \cap {S}_{\max }^{{\tau }_{n}} = {S}_{M} \cap {S}_{\max }^{{\tau }_{n}}$. Similarly we have that $S \cap {S}_{\min }^{{\tau }_{n}} = {S}_{M} \cap {S}_{\min }^{{\tau }_{n}}$. This suffices to prove that $l\left( S\right) = l\left( {S}_{M}\right)$ is maximal.

Notice that when $\left( {l, u}\right) = \left( {0, m}\right)$, the problem degenerates into a collection of label-wise problems, one for each alternative: ${a}_{j}$ is selected if ${a}_{j} \in {S}_{\max }^{{\tau }_{n}}$, rejected if ${a}_{j} \in {S}_{\min }^{{\tau }_{n}}$, and those that are on the fence can be arbitrarily selected or not.

Example 1. Consider 5 alternatives $\mathcal{A} = \{ a, b, c, d, e\}$ and 10 voters $\mathcal{N}$, all sharing the same parameters $\left( {p, q}\right) = \left( {0.7},{0.4}\right)$. All voters thus share the same weight $w = \ln \left( \frac{p\left( {1 - q}\right) }{q\left( {1 - p}\right) }\right) = {1.25}$, and ${\tau }_{n} = \mathop{\sum }\limits_{{i = 1}}^{n}\ln \left( \frac{1 - q}{1 - p}\right) = {6.93}$. We consider the constraints $\left( {l, u}\right) = \left( {1,4}\right)$.

First, suppose that ${t}_{d} = {0.6}$ and that ${t}_{j} = {0.5}$ for all the remaining candidates. Consider also the approval counts (and weighted approval scores) in the table below.

<table><tr><td>Candidate</td><td>$a$</td><td>$b$</td><td>$c$</td><td>$d$</td><td>$e$</td></tr><tr><td>Approval count</td><td>9</td><td>8</td><td>7</td><td>5</td><td>5</td></tr><tr><td>${app}_{w}$</td><td>11.25</td><td>10</td><td>8.75</td><td>6.65</td><td>6.25</td></tr></table>

We can easily check, by Theorem 1, that $\widetilde{S} = \arg \mathop{\max }\limits_{{S \in \mathcal{S}}}P\left( {S = {S}^{ * } \mid A}\right) = \{ a, b, c\}$. We have that ${S}_{\max }^{{\tau }_{n}} = \{ a, b, c\}$, ${S}_{\text{tie}}^{{\tau }_{n}} = \varnothing$ and ${S}_{\min }^{{\tau }_{n}} = \{ d, e\}$. We know that there exists some $k \in \left\lbrack {1,4}\right\rbrack$ such that $\widetilde{S}$ consists of the top $k$ alternatives. We also have that:

$$
\left\{ \begin{array}{ll} \left| {\widetilde{S} \cap {S}_{max}^{{\tau }_{n}}}\right| & = \min \left( {u,{k}_{max}^{{\tau }_{n}}}\right) = 3 \Rightarrow \{ a, b, c\} \subseteq \widetilde{S} \\ \left| {\widetilde{S} \cap {S}_{min}^{{\tau }_{n}}}\right| & = \max \left( {0, l - {k}_{tie}^{{\tau }_{n}} - {k}_{max}^{{\tau }_{n}}}\right) = 0 \Rightarrow d, e \notin \widetilde{S} \end{array}\right.
$$

So the only possibility is $\widetilde{S} = \{ a, b, c\}$.
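The selection procedure behind Theorem 1 boils down to ranking alternatives by weighted approval score and clipping the cutoff into $[l, u]$. The following sketch (function and variable names are ours, not the paper's) reproduces Example 1:

```python
import math

def mle_ground_truth(scores, l, u):
    """Return a top-k set (by score) with k = min(max(k_max, l), u), where
    k_max is the number of alternatives with strictly positive score l(a_j).
    Ties are broken by the listing order of the alternatives."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    k_max = sum(1 for s in scores.values() if s > 0)
    return set(ranked[:min(max(k_max, l), u)])

# Example 1: 10 identical voters with (p, q) = (0.7, 0.4) and t_d = 0.6.
p, q, n = 0.7, 0.4, 10
w = math.log(p * (1 - q) / (q * (1 - p)))   # common voter weight, ~1.25
tau = n * math.log((1 - q) / (1 - p))       # threshold tau_n, ~6.93
counts = {"a": 9, "b": 8, "c": 7, "d": 5, "e": 5}
t = {"a": 0.5, "b": 0.5, "c": 0.5, "d": 0.6, "e": 0.5}
# score l(a_j) = app_w(a_j) - tau_n, with app_w including the prior term
scores = {a: math.log(t[a] / (1 - t[a])) + counts[a] * w - tau for a in counts}
S_tilde = mle_ground_truth(scores, l=1, u=4)
```

Here exactly three alternatives have a positive score, so $k = 3$ and the procedure returns $\widetilde{S} = \{ a, b, c\}$.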

### 4.2 ESTIMATING THE PARAMETERS GIVEN THE GROUND TRUTH

#### 4.2.1 Estimating the prior parameters over alternatives

Once the ground truths are estimated at one iteration of the algorithm, the next step consists in estimating the prior parameters ${\left( {t}_{j}\right) }_{j \in \mathcal{A}}$, the ground truths being given (in Subsection 4.3 the ground truth will be replaced by its estimation at each iteration). The next proposition makes explicit the closed-form expression of the MLE of the prior parameter of each alternative, given the ground truth of each instance ${S}_{z}^{ * }$, once the prior parameters of all other alternatives are fixed.

- Input: Approval profile $\left( {{A}_{1},\ldots ,{A}_{n}}\right)$, ground truths ${S}_{z}^{ * }$, and all prior parameters but one, ${\left( {t}_{h}\right) }_{h \neq j}$.

- Output: MLE of ${t}_{j}$.

Proposition 2. For every ${a}_{j} \in \mathcal{A}$:

$$
\underset{t \in \left( {0,1}\right) }{\arg \max }\mathcal{L}\left( {A, S, p, q, t,{t}_{-j}}\right) = \frac{{occ}\left( j\right) {\bar{\alpha }}_{j}}{\left( {L - {occ}\left( j\right) }\right) {\underline{\alpha }}_{j} + {occ}\left( j\right) {\bar{\alpha }}_{j}}
$$

$$
\text{where:}\left\{ \begin{array}{ll} {\bar{\alpha }}_{j} & = \mathop{\sum }\limits_{{S \in {\mathcal{S}}_{l, u} : {a}_{j} \in S}}\mathop{\prod }\limits_{{{a}_{h} \in S, h \neq j}}{t}_{h}\mathop{\prod }\limits_{{{a}_{h} \notin S}}\left( {1 - {t}_{h}}\right) \\ {\underline{\alpha }}_{j} & = \mathop{\sum }\limits_{{S \in {\mathcal{S}}_{l, u} : {a}_{j} \notin S}}\mathop{\prod }\limits_{{{a}_{h} \in S}}{t}_{h}\mathop{\prod }\limits_{{{a}_{h} \notin S, h \neq j}}\left( {1 - {t}_{h}}\right) \\ \operatorname{occ}\left( j\right) & = \left| \left\{ {z \in \{ 1,\ldots , L\} : {a}_{j} \in {S}_{z}}\right\} \right| \end{array}\right.
$$

Notice that ${\bar{\alpha }}_{j} = P\left( {l \leq \left| {S}^{ * }\right| \leq u \mid {a}_{j} \in {S}^{ * }}\right)$ and ${\underline{\alpha }}_{j} = P\left( {l \leq \left| {S}^{ * }\right| \leq u \mid {a}_{j} \notin {S}^{ * }}\right)$, so $\beta = {\bar{\alpha }}_{j}{t}_{j} + {\underline{\alpha }}_{j}\left( {1 - {t}_{j}}\right)$. $\operatorname{occ}\left( j\right)$ is the number of instances whose ground truth contains ${a}_{j}$. The proof is deferred to the Appendix.
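These conditional probabilities are tail sums of a Poisson-binomial distribution and can be computed by a standard counting dynamic program, after which the update of Proposition 2 is a one-liner. The sketch below uses names of our choosing, not the authors' code:

```python
def beta_prob(l, u, ts):
    """P(l <= X <= u) where X is a sum of independent Bernoulli(t_h) draws,
    computed by the usual O(len(ts)^2) Poisson-binomial counting DP."""
    dist = [1.0]                          # dist[k] = P(X = k) so far
    for t in ts:
        new = [0.0] * (len(dist) + 1)
        for k, pk in enumerate(dist):
            new[k] += pk * (1 - t)        # alternative not in S*
            new[k + 1] += pk * t          # alternative in S*
        dist = new
    return sum(dist[max(l, 0):u + 1])

def update_t(j, occ, L, l, u, t):
    """MLE update of t_j (Proposition 2), the other parameters being fixed."""
    others = [t[h] for h in range(len(t)) if h != j]
    a_bar = beta_prob(max(l - 1, 0), u - 1, others)   # alpha-bar_j
    a_und = beta_prob(l, u, others)                   # alpha-underline_j
    return occ * a_bar / ((L - occ) * a_und + occ * a_bar)
```

For instance, with $m = 5$ alternatives, bounds $(1, 2)$ and all ${t}_{h} = {0.5}$, `beta_prob(0, 1, [0.5] * 4)` yields ${\bar{\alpha }}_{1} = 5/16 = {0.3125}$.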

We will see later that the algorithm applies Proposition 2 sequentially to estimate the alternatives' parameters one by one (see Example 2).

#### 4.2.2 Estimating the voter parameters

Once the ground truths are known (or estimated), we can estimate the voters' parameters $\left( {p, q}\right)$.

- Input: Instances $\left( {{A}^{1},\ldots ,{A}^{L}}\right)$, ground truths $\left( {{S}_{1}^{ * },\ldots ,{S}_{L}^{ * }}\right)$.

- Output: MLE of the voter reliabilities $\left( {p, q}\right)$.

The next result simply states that the maximum likelihood estimate of ${p}_{i}$ for some voter is the fraction of ground-truth alternatives that the voter approves; the estimation of ${q}_{i}$ is similar, with the complement of the ground truth. See Example 2.

Proposition 3. Fix sets ${S}_{z} \in {\mathcal{S}}_{l, u}$ and prior parameters ${t}_{j}$. Then:

$$
\underset{\left( {p, q}\right) \in {\left( 0,1\right) }^{2n}}{\arg \max }\mathcal{L}\left( {A, S, p, q, t}\right) = \left( {\widehat{p},\widehat{q}}\right)
$$

where: ${\widehat{p}}_{i} = \frac{\mathop{\sum }\limits_{{z \in L}}\left| {{A}_{i}^{z} \cap {S}_{z}}\right| }{\mathop{\sum }\limits_{{z \in L}}\left| {S}_{z}\right| },{\widehat{q}}_{i} = \frac{\mathop{\sum }\limits_{{z \in L}}\left| {{A}_{i}^{z} \cap \overline{{S}_{z}}}\right| }{\mathop{\sum }\limits_{{z \in L}}\left| \overline{{S}_{z}}\right| }$

The (simple) proof is omitted.
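Proposition 3 amounts to simple counting. A minimal sketch for a single voter (helper and variable names are ours), checked against voter 1 of Example 2 below:

```python
def estimate_reliability(ballots, truths, alternatives):
    """MLE of (p_i, q_i) for one voter, given the ground truths:
    p_i = approved ground-truth labels / all ground-truth labels,
    q_i = approved non-ground-truth labels / all non-ground-truth labels."""
    L = len(truths)
    p_hat = sum(len(ballots[z] & truths[z]) for z in range(L)) \
          / sum(len(truths[z]) for z in range(L))
    q_hat = sum(len(ballots[z] - truths[z]) for z in range(L)) \
          / sum(len(alternatives - truths[z]) for z in range(L))
    return p_hat, q_hat

# Voter 1 of Example 2: 3 true positives out of 8 positive labels,
# and 2 false positives out of 12 negative labels.
p1, q1 = estimate_reliability(
    [{1, 4}, {1}, {3}, {1}],
    [{2, 4}, {2, 5}, {2, 3}, {1, 3}],
    {1, 2, 3, 4, 5},
)
```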

### 4.3 ALTERNATING MAXIMUM LIKELIHOOD ESTIMATION

Now the estimation of the ground truths and that of the parameters are intertwined to maximize the overall likelihood $\mathcal{L}\left( {A, S, p, q, t}\right)$ with the Alternating Maximum Likelihood Estimation (AMLE) algorithm. AMLE is an iterative procedure similar to the Expectation-Maximization procedure introduced in Baharad et al. [2011], but with a coordinate-steepest-ascent-like iteration, whose aim is to jointly estimate the voter reliabilities, the alternatives' prior parameters and the instances' ground truths. The idea behind this estimation consists in alternating a MLE of the ground truths given the current estimate of the parameters, and an update of these parameters via a MLE based on the current estimate of the ground truths. ${}^{1}$ Each of these steps has been discussed in the previous subsections, and they are now incorporated into Algo. 1.

Algorithm 1 AMLE procedure

---

Input: Approval ballots ${\left( {A}_{i}^{z}\right) }_{1 \leq z \leq L, i \in \mathcal{N}}$

Initial parameters ${\widehat{\theta }}^{\left( 0\right) }$, Bounds $\left( {l, u}\right)$, Tolerance $\varepsilon$

Output: Estimations $\left( {\widehat{S}}_{z}\right) ,\left( {{\widehat{p}}_{i},{\widehat{q}}_{i}}\right) ,\left( {\widehat{t}}_{j}\right)$

repeat

for $z = 1\ldots L$ do

Compute ${\widehat{S}}_{z}^{\left( v + 1\right) } = \left\{ {{a}_{1},\ldots ,{a}_{k}}\right\}$ with $k \in \left\lbrack {l, u}\right\rbrack$

and:

$$
\left\{ \begin{array}{ll} \left| {{\widehat{S}}_{z}^{\left( v + 1\right) } \cap {S}_{\max , z}^{\left( v\right) }}\right| & = \min \left( {u,{k}_{\max , z}^{\left( v\right) }}\right) \\ \left| {{\widehat{S}}_{z}^{\left( v + 1\right) } \cap {S}_{\min , z}^{\left( v\right) }}\right| & = \max \left( {0, l - {k}_{{tie}, z}^{\left( v\right) } - {k}_{\max , z}^{\left( v\right) }}\right) \end{array}\right.
$$

end for

for $i = 1\ldots n$ do

Update the parameters $\left( {{p}_{i},{q}_{i}}\right)$ given ${\widehat{S}}^{\left( v + 1\right) }$:

$$
{\widehat{p}}_{i}^{\left( v + 1\right) } = \frac{\mathop{\sum }\limits_{{z \in L}}\left| {{A}_{i}^{z} \cap {\widehat{S}}_{z}^{\left( v + 1\right) }}\right| }{\mathop{\sum }\limits_{{z \in L}}\left| {\widehat{S}}_{z}^{\left( v + 1\right) }\right| },{\widehat{q}}_{i}^{\left( v + 1\right) } = \frac{\mathop{\sum }\limits_{{z \in L}}\left| {{A}_{i}^{z} \cap \overline{{\widehat{S}}_{z}^{\left( v + 1\right) }}}\right| }{\mathop{\sum }\limits_{{z \in L}}\left| \overline{{\widehat{S}}_{z}^{\left( v + 1\right) }}\right| }
$$

end for

for $j = 1\ldots m$ do

Update ${\widehat{t}}_{j}^{\left( v + 1\right) }$ by:

$$
{\widehat{t}}_{j}^{\left( v + 1\right) } = \frac{{oc}{c}^{\left( v + 1\right) }\left( j\right) {\bar{\alpha }}_{j}^{\left( v + 1\right) }}{{oc}{c}^{\left( v + 1\right) }\left( j\right) {\bar{\alpha }}_{j}^{\left( v + 1\right) } + \left( {L - {oc}{c}^{\left( v + 1\right) }\left( j\right) }\right) {\underline{\alpha }}_{j}^{\left( v + 1\right) }}
$$

where:

$$
\left\{ \begin{array}{ll} {\operatorname{occ}}^{\left( v + 1\right) }\left( j\right) & = \mathop{\sum }\limits_{{z = 1}}^{L}\mathbb{1}\left\{ {{a}_{j} \in {\widehat{S}}_{z}^{\left( v + 1\right) }}\right\} \\ {\bar{\alpha }}_{j}^{\left( v + 1\right) } & = \beta \left( {{\left( l - 1\right) }^{ + }, u - 1,{\widehat{t}}_{ < j}^{\left( v + 1\right) },{\widehat{t}}_{ > j}^{\left( v\right) }}\right) \\ {\underline{\alpha }}_{j}^{\left( v + 1\right) } & = \beta \left( {l, u,{\widehat{t}}_{ < j}^{\left( v + 1\right) },{\widehat{t}}_{ > j}^{\left( v\right) }}\right) \end{array}\right.
$$

end for

until $\begin{Vmatrix}{{\widehat{\theta }}^{\left( v + 1\right) } - {\widehat{\theta }}^{\left( v\right) }}\end{Vmatrix} \leq \varepsilon$

---

The algorithm runs until a convergence criterion is met, in the form of a bound on the norm of the change in the parameters' estimations. In practice we chose ${\ell }_{\infty }$, but any other norm could be used in Algorithm 1 since, in finite dimensions, all norms are equivalent (a sequence that converges in one norm converges in every norm).

We define the vector of parameters ${\widehat{\theta }}^{\left( v\right) } = \left( {{\widehat{p}}^{\left( v\right) },{\widehat{q}}^{\left( v\right) },{\widehat{t}}^{\left( v\right) }}\right)$ containing the voters' estimated noise parameters as well as the estimated prior parameters at iteration $v$. In particular, ${\widehat{\theta }}^{\left( 0\right) }$ contains the initial values given as input. The choice of the exact initial values depends on the application at hand.

Note that at convergence, only local optimality is guaranteed, as is classical in optimization.

Theorem 4. For any initial values ${\widehat{\theta }}^{\left( 0\right) }$, AMLE converges to a fixed point after a finite number of iterations.

We only provide a sketch of the proof here and defer the full proof to the Appendix.

Proof. First we have by Theorem 1 that:

$$
\mathcal{L}\left( {A,{\widehat{S}}^{\left( v + 1\right) },{\widehat{\theta }}^{\left( v\right) }}\right) \geq \mathcal{L}\left( {A,{\widehat{S}}^{\left( v\right) },{\widehat{\theta }}^{\left( v\right) }}\right)
$$

By Proposition 2 and Proposition 3, we deduce that:

$$
\mathcal{L}\left( {A,{\widehat{S}}^{\left( v + 1\right) },{\widehat{\theta }}^{\left( v + 1\right) }}\right) \geq \mathcal{L}\left( {A,{\widehat{S}}^{\left( v + 1\right) },{\widehat{\theta }}^{\left( v\right) }}\right)
$$

Hence, the likelihood increases at every step. Since there is a finite number of possible values for the ground truths (at most ${2}^{mL}$), the convergence of the algorithm is guaranteed.

Because $\mathcal{L}\left( {A,{\widehat{S}}^{\left( v + 1\right) },{\widehat{\theta }}^{\left( v + 1\right) }}\right) \geq \mathcal{L}\left( {A,{\widehat{S}}^{\left( v + 1\right) },{\widehat{\theta }}^{\left( v\right) }}\right) \geq \mathcal{L}\left( {A,{\widehat{S}}^{\left( v\right) },{\widehat{\theta }}^{\left( v\right) }}\right)$, the likelihood increases at each step of the algorithm. This guarantees that whenever the execution stops, the likelihood is at least as high as it initially was. Therefore the algorithm can not only be run until convergence, but also as an anytime algorithm.

Example 2. Take $n = 3, m = 5, l = 1, u = 2, L = 4$, and the following profile and initial parameters:

$$
\left\{ \begin{array}{lll} {\widehat{p}}_{1}^{\left( 0\right) } = {0.5} & {\widehat{p}}_{2}^{\left( 0\right) } = {0.5} & {\widehat{p}}_{3}^{\left( 0\right) } = {0.5} \\ {\widehat{q}}_{1}^{\left( 0\right) } = {0.44} & {\widehat{q}}_{2}^{\left( 0\right) } = {0.41} & {\widehat{q}}_{3}^{\left( 0\right) } = {0.32} \\ {\widehat{t}}_{1}^{\left( 0\right) } = \cdots = {\widehat{t}}_{5}^{\left( 0\right) } & = {0.5} & \end{array}\right.
$$

<table><tr><td/><td>${A}^{1}$</td><td>${A}^{2}$</td><td>${A}^{3}$</td><td>${A}^{4}$</td></tr><tr><td>Voter 1</td><td>$\left\{ {{a}_{1},{a}_{4}}\right\}$</td><td>$\left\{ {a}_{1}\right\}$</td><td>$\left\{ {a}_{3}\right\}$</td><td>$\left\{ {a}_{1}\right\}$</td></tr><tr><td>Voter 2</td><td>$\left\{ {a}_{2}\right\}$</td><td>$\left\{ {a}_{5}\right\}$</td><td>$\left\{ {a}_{4}\right\}$</td><td>$\left\{ {a}_{1}\right\}$</td></tr><tr><td>Voter 3</td><td>$\left\{ {{a}_{2},{a}_{3},{a}_{4}}\right\}$</td><td>$\left\{ {{a}_{2},{a}_{3},{a}_{5}}\right\}$</td><td>$\left\{ {{a}_{2},{a}_{3}}\right\}$</td><td>$\left\{ {a}_{3}\right\}$</td></tr></table>

Estimating the ground truth: The first step is the application of Theorem 1 to estimate the ground truth of the instances given the initial parameters, yielding ${\widehat{S}}_{1}^{\left( 1\right) } = \left\{ {{a}_{2},{a}_{4}}\right\}$, ${\widehat{S}}_{2}^{\left( 1\right) } = \left\{ {{a}_{2},{a}_{5}}\right\}$, ${\widehat{S}}_{3}^{\left( 1\right) } = \left\{ {{a}_{2},{a}_{3}}\right\}$, ${\widehat{S}}_{4}^{\left( 1\right) } = \left\{ {{a}_{1},{a}_{3}}\right\}$.

---

${}^{1}$ In case of ties between subsets when estimating the ground truth, a tie-breaking priority over subsets is used. No ties occurred in our experiments.

---

Estimating the voter reliabilities: In the next step we use these estimates of the ground truths to compute the MLEs of the voter reliabilities. For instance, voter 1 has 2 false positive labels from a total of 12 negative labels so ${\widehat{q}}_{1}^{\left( 1\right) } = \frac{2}{12} = {0.17}$, and she has 3 true positive labels out of 8 positive ones so ${\widehat{p}}_{1}^{\left( 1\right) } = \frac{3}{8} = {0.38}$. In the end, we get:

$$
\left\{ \begin{array}{lll} {\widehat{p}}_{1}^{\left( 1\right) } = {0.38} & {\widehat{p}}_{2}^{\left( 1\right) } = {0.38} & {\widehat{p}}_{3}^{\left( 1\right) } = {0.88} \\ {\widehat{q}}_{1}^{\left( 1\right) } = {0.17} & {\widehat{q}}_{2}^{\left( 1\right) } = {0.08} & {\widehat{q}}_{3}^{\left( 1\right) } = {0.17} \end{array}\right.
$$

Estimating the prior parameters: The final step of this iteration consists in updating the estimations of the prior parameters by applying Proposition 2 sequentially. First we estimate ${\widehat{t}}_{1}^{\left( 1\right) }$ given ${\widehat{S}}^{\left( 1\right) }$ and ${\widehat{t}}_{2}^{\left( 0\right) },\ldots ,{\widehat{t}}_{5}^{\left( 0\right) }$ by maximum likelihood estimation. We first compute ${\bar{\alpha }}_{1} = \beta \left( {0,1,{t}_{2},\ldots ,{t}_{5}}\right) = {0.3125}$, ${\underline{\alpha }}_{1} = \beta \left( {1,2,{t}_{2},\ldots ,{t}_{5}}\right) = 1$ and $\operatorname{occ}\left( {a}_{1}\right) = 1$. Then the MLE of ${t}_{1}$ is:

$$
{\widehat{t}}_{1} = \frac{\operatorname{occ}\left( {a}_{1}\right) {\bar{\alpha }}_{1}}{\left( {L - \operatorname{occ}\left( {a}_{1}\right) }\right) {\underline{\alpha }}_{1} + \operatorname{occ}\left( {a}_{1}\right) {\bar{\alpha }}_{1}} = {0.09}
$$

The next steps are to estimate ${\widehat{t}}_{2}^{\left( 1\right) }$ given ${\widehat{t}}_{1}^{\left( 1\right) },{\widehat{t}}_{3}^{\left( 0\right) },{\widehat{t}}_{4}^{\left( 0\right) },{\widehat{t}}_{5}^{\left( 0\right) }$ and so on. Finally, we get:

$$
{\widehat{t}}_{1}^{\left( 1\right) } = {0.09},{\widehat{t}}_{2}^{\left( 1\right) } = {0.56},{\widehat{t}}_{3}^{\left( 1\right) } = {0.28},{\widehat{t}}_{4}^{\left( 1\right) } = {0.14},{\widehat{t}}_{5}^{\left( 1\right) } = {0.20}
$$

Fix $\varepsilon = {10}^{-5}$. We repeat all steps until convergence (according to ${\ell }_{\infty }$), which is reached after 5 full iterations. At the fixed point, the estimations of the ground truths are:

$$
{\widehat{S}}_{1} = \left\{ {{a}_{2},{a}_{3}}\right\} ,{\widehat{S}}_{2} = \left\{ {{a}_{2},{a}_{3}}\right\} ,{\widehat{S}}_{3} = \left\{ {{a}_{2},{a}_{3}}\right\} ,{\widehat{S}}_{4} = \left\{ {a}_{3}\right\}
$$
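The first iteration of Example 2 can be replayed end to end by chaining the ground-truth step (Theorem 1) and the reliability step (Proposition 3). The sketch below uses our own helper names and index-based tie-breaking:

```python
import math

ballots = {  # ballots[i][z]: voter i's approval set on instance z
    1: [{1, 4}, {1}, {3}, {1}],
    2: [{2}, {5}, {4}, {1}],
    3: [{2, 3, 4}, {2, 3, 5}, {2, 3}, {3}],
}
p = {1: 0.5, 2: 0.5, 3: 0.5}
q = {1: 0.44, 2: 0.41, 3: 0.32}
t = {j: 0.5 for j in range(1, 6)}
l, u, L, m = 1, 2, 4, 5

# Ground-truth step: top-k by score, with k clipped into [l, u].
w = {i: math.log(p[i] * (1 - q[i]) / (q[i] * (1 - p[i]))) for i in p}
tau = sum(math.log((1 - q[i]) / (1 - p[i])) for i in p)
S_hat = []
for z in range(L):
    score = {j: math.log(t[j] / (1 - t[j]))
                + sum(w[i] for i in ballots if j in ballots[i][z]) - tau
             for j in range(1, m + 1)}
    ranked = sorted(score, key=score.get, reverse=True)
    k = min(max(sum(s > 0 for s in score.values()), l), u)
    S_hat.append(set(ranked[:k]))

# Reliability step (Proposition 3), given the estimated ground truths.
p_hat = {i: sum(len(ballots[i][z] & S_hat[z]) for z in range(L))
          / sum(len(S_hat[z]) for z in range(L)) for i in ballots}
q_hat = {i: sum(len(ballots[i][z] - S_hat[z]) for z in range(L))
          / sum(m - len(S_hat[z]) for z in range(L)) for i in ballots}
```

This reproduces the estimates ${\widehat{S}}^{\left( 1\right) }$ and the reliabilities ${\widehat{p}}^{\left( 1\right) }, {\widehat{q}}^{\left( 1\right) }$ reported above.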

## 5 EXPERIMENTS

### 5.1 EXPERIMENT DESIGN AND DATA COLLECTION

We designed an image annotation task as a football quiz. ${}^{2}$ We selected 15 pictures taken during different matches between two of the following teams: Real Madrid, Inter Milan, Bayern Munich, Barcelona, Paris Saint-Germain. In each picture, it may be the case that players from both teams appear, or players from only one team, therefore $l = 1$ and $u = 2$. Each participant is shown the instances one by one, and is each time asked to select all the teams she can spot (see Figure 1). We designed a simple incentive for participants, consisting in ranking them according to the following principle:

- The participants get one point whenever their answer contains all correct alternatives for a picture. They are then ranked according to their cumulative points.

- To break ties, the participant who selected a smaller number of alternatives overall is ranked first.
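This incentive scheme induces a simple lexicographic ranking, sketched below (field and function names are ours):

```python
def rank_participants(results):
    """results[name] = (points, total_selected). Rank by points (descending),
    breaking ties in favor of the participant who selected fewer alternatives."""
    return sorted(results, key=lambda name: (-results[name][0], results[name][1]))

# Hypothetical scores: "cal" leads on points; "bob" beats "ann" on the tie-break.
standings = rank_participants({"ann": (10, 21), "bob": (10, 18), "cal": (12, 30)})
```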

![019639be-6f69-7943-ba80-eceb7179efdb_6_975_202_499_240_0.jpg](images/019639be-6f69-7943-ba80-eceb7179efdb_6_975_202_499_240_0.jpg)

Figure 1: Example of Annotation Task

We gathered the answers of 76 participants: only two of them spammed by simply selecting all the alternatives. Figure 2 shows that voters responded well to the incentives by mostly selecting one or two alternatives.

![019639be-6f69-7943-ba80-eceb7179efdb_6_954_753_531_239_0.jpg](images/019639be-6f69-7943-ba80-eceb7179efdb_6_954_753_531_239_0.jpg)

Figure 2: Histogram of answers' size

### 5.2 ANNA KARENINA'S INITIALIZATION

Inspired by the Anna Karenina Principle in Meir et al. [2019], we assign more weight to voters who are closer to the others on average, initializing the precision parameters $\left( {{p}_{i},{q}_{i}}\right)$ accordingly. This suits our context, where voter competence is highly polarized: some voters are experts and cast similar answers close to the ground truth, while the others are less reliable and their answers are dispersed among all combinations. We use the following heuristic (see Algorithm 2) for the initialization:

Algorithm 2 Initializing ${\left( {p}_{i},{q}_{i}\right) }_{i}$

---

Input: Approval ballots ${\left( {A}_{i}^{z}\right) }_{z, i}$

Output: Initialization $\left( {{\widehat{p}}_{i}^{\left( 0\right) },{\widehat{q}}_{i}^{\left( 0\right) }}\right)$

- Compute ${w}_{\max } = \frac{n}{1 + n},{w}_{\min } = \frac{1}{1 + n}$

- Compute ${d}_{i} = \mathop{\sum }\limits_{{j \neq i}}{d}_{Jacc}\left( {{A}_{i},{A}_{j}}\right)$ (Jaccard distance)

- Compute ${d}_{\max } = \max {d}_{i},{d}_{\min } = \min {d}_{i}$

- Compute ${w}_{i} = \left( {{w}_{\max } - {w}_{\min }}\right) \left( \frac{\frac{1}{{d}_{i}} - \frac{1}{{d}_{\max }}}{\frac{1}{{d}_{\min }} - \frac{1}{{d}_{\max }}}\right) + {w}_{\min }$

- Fix ${\widehat{p}}_{i}^{\left( 0\right) } = \frac{1}{2}$ and ${\widehat{q}}_{i}^{\left( 0\right) } = \frac{1 - \frac{{e}^{{w}_{i}} - 1}{{e}^{{w}_{i}} + 1}}{2}$

---

Algorithm 2 guarantees that the parameters $\left( {{\widehat{p}}_{i}^{\left( 0\right) },{\widehat{q}}_{i}^{\left( 0\right) }}\right)$ of a voter are such that her initial weight is equal to ${w}_{i}$, and that $\frac{{w}_{\max }}{{w}_{\min }} = n$: therefore, initially, the voter closest on average to the other voters carries $n$ times the weight of the voter with the largest average distance.
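Algorithm 2 can be sketched as follows (helper names are ours; the Jaccard distance between two ballots is one minus their intersection-over-union). One can check on the output that the weight $\ln \frac{{p}_{i}\left( {1 - {q}_{i}}\right) }{{q}_{i}\left( {1 - {p}_{i}}\right) }$ induced by $\left( {\widehat{p}}_{i}^{\left( 0\right) },{\widehat{q}}_{i}^{\left( 0\right) }\right)$ is exactly ${w}_{i}$:

```python
import math

def jaccard(A, B):
    """Jaccard distance between two approval sets."""
    union = A | B
    return 1 - len(A & B) / len(union) if union else 0.0

def init_reliabilities(ballots):
    """Anna-Karenina-style initialization (Algorithm 2): voters closer to the
    others on average get a larger initial weight w_i in [1/(n+1), n/(n+1)],
    mapped to (p_i, q_i) = (1/2, 1/(1 + exp(w_i))).
    Assumes the voters are not all at the same average distance."""
    n = len(ballots)
    w_max, w_min = n / (1 + n), 1 / (1 + n)
    d = [sum(jaccard(ballots[i], ballots[j]) for j in range(n) if j != i)
         for i in range(n)]
    d_min, d_max = min(d), max(d)
    p, q = [], []
    for di in d:
        wi = (w_max - w_min) * (1/di - 1/d_max) / (1/d_min - 1/d_max) + w_min
        p.append(0.5)
        q.append((1 - (math.exp(wi) - 1) / (math.exp(wi) + 1)) / 2)
    return p, q

p0, q0 = init_reliabilities([{1, 2}, {1, 2}, {1, 3}, {4}])
```

With ${\widehat{p}}_{i}^{\left( 0\right) } = \frac{1}{2}$, the induced weight reduces to $\ln \frac{1 - {\widehat{q}}_{i}^{\left( 0\right) }}{{\widehat{q}}_{i}^{\left( 0\right) }}$, which recovers ${w}_{\max } = \frac{n}{n + 1}$ for the closest voter and ${w}_{\min } = \frac{1}{n + 1}$ for the farthest one.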

---

${}^{2}$ The dataset and code are in the supplementary material.

---

![019639be-6f69-7943-ba80-eceb7179efdb_7_176_176_656_772_0.jpg](images/019639be-6f69-7943-ba80-eceb7179efdb_7_176_176_656_772_0.jpg)

Figure 3: Accuracies of different aggregation methods

In the Appendix we give an example illustrating this initialization, and an empirical comparison with other classical initializations.

### 5.3 RESULTS

To assess the importance of prior information on the size of the ground truth, we tested the AMLE algorithm with free bounds $\left( {l, u}\right) = \left( {0, m}\right)$ (referred to as ${\mathrm{AMLE}}_{f}$) and the ${\mathrm{AMLE}}_{c}$ algorithm with $\left( {l, u}\right) = \left( {1,2}\right)$. We also apply the modal rule of Caragiannis et al. [2020], which outputs the subset of alternatives that most frequently appears as an approval ballot, $\arg \mathop{\max }\limits_{{S \in \mathcal{S}}}\left| \left\{ {i \in \mathcal{N} : S = {A}_{i}}\right\} \right|$, and a variant of the label-wise majority rule which outputs the subset of alternatives $S$ such that $a \in S \Leftrightarrow \left| \left\{ {i \in \mathcal{N} : a \in {A}_{i}}\right\} \right| > \frac{n}{2}$. If this subset is empty it is replaced by the alternative with the highest approval count, and if it has more than two alternatives then we only keep the top-2 alternatives.

We took 20 batches of $n = {10}$ to $n = {74}$ randomly drawn voters and applied the four methods to all of them (see Figures 3a, 3b). As classically done in the literature Nguyen et al. [2020], we use the Hamming accuracy $\frac{1}{mL}\mathop{\sum }\limits_{{z = 1}}^{L}\left( \left| {{S}_{z}^{ * } \cap {\widehat{S}}^{z}}\right| + \left| {\overline{{S}_{z}^{ * }} \cap \overline{{\widehat{S}}^{z}}}\right| \right)$ and the $0/1$ accuracy $\frac{1}{L}\mathop{\sum }\limits_{{z = 1}}^{L}\mathbb{1}\left\{ {{S}_{z}^{ * } = {\widehat{S}}^{z}}\right\}$ as metrics, and report their 95% confidence intervals.
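Both metrics are straightforward to compute from the true and estimated sets; a small sketch (function names are ours):

```python
def hamming_accuracy(truths, estimates, m):
    """Fraction of (instance, alternative) labels on which the estimate and
    the ground truth agree: positives |S* & E| plus negatives m - |S* | E|."""
    agree = sum(len(S & E) + m - len(S | E) for S, E in zip(truths, estimates))
    return agree / (m * len(truths))

def zero_one_accuracy(truths, estimates):
    """Fraction of instances whose ground truth is recovered exactly."""
    return sum(S == E for S, E in zip(truths, estimates)) / len(truths)
```

For instance, with $m = 5$, truths $[\{1,2\}, \{3\}]$ and estimates $[\{1,3\}, \{3\}]$, the Hamming accuracy is $(3 + 5)/10 = 0.8$ and the $0/1$ accuracy is $0.5$.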

We notice that the majority and the modal rules are outperformed by AMLE, which can be explained by the fact that they do not take the voters' reliabilities into account. Comparing the performances of ${\mathrm{AMLE}}_{c}$ and ${\mathrm{AMLE}}_{f}$ emphasizes the importance of the prior knowledge on the committee size for improving the quality of the estimation.

We also compared the execution times of ${\mathrm{AMLE}}_{c}$ and ${\mathrm{AMLE}}_{f}$ (see Figure 4) when run on an Intel Core i7-10610U CPU @ 1.80 GHz (4 cores, 8 threads) with 32 GB RAM. Unsurprisingly, ${\mathrm{AMLE}}_{c}$ needs more running time, especially for more than 40 voters.

![019639be-6f69-7943-ba80-eceb7179efdb_7_909_629_646_274_0.jpg](images/019639be-6f69-7943-ba80-eceb7179efdb_7_909_629_646_274_0.jpg)

Figure 4: Execution time

## 6 CONCLUSION

We study multi-winner approval voting from an epistemic point of view. The specificity of our work is threefold: (a) the ground truth consists of a set of alternatives; (b) the input consists of approval votes; (c) the competence of the various voters is not known a priori but learnt from the input. We proposed a noise model that incorporates the prior belief about the size of the ground truth. We then derived an iterative algorithm to jointly estimate the ground-truth labels, the voter noise parameters and the prior belief parameters, and we proved its convergence. Our algorithm is based on a simplification of Expectation-Maximization (EM), and its simple steps are more easily explainable to voters than EM and other similar statistical learning approaches.

Although we mainly considered a general multi-instance task that fits the collective annotation framework, where each voter answers several questions on the same set of alternatives, we can nonetheless apply the same algorithm to single-instance problems (such as the allocation of scarce medical resources) where only one question is answered. In this case, the prior parameters cannot be updated and it suffices to fix them once and for all and alternate between the estimation of the ground truth and the voter parameters.

In some contexts (e.g., patients in a hospital), alternatives and votes are not observed at once but streamed. To cope with this online setup we consider extending our AMLE algorithm in the spirit of Cappé and Moulines [2009].

## REFERENCES

Tahar Allouche, Jérôme Lang, and Florian Yger. Truth-tracking via approval voting: Size matters. In AAAI, 2022.

Eyal Baharad, Jacob Goldberger, Moshe Koppel, and Shmuel Nitzan. Distilling the wisdom of crowds: weighted aggregation of decisions on multiple issues. Autonomous Agents and Multi-Agent Systems, 2011.

Ruth Ben-Yashar and Jacob Paroush. Optimal decision rules for fixed-size committees in polychotomous choice situations. Soc. Choice Welf., 18(4):737-746, 2001. doi: 10.1007/s003550000080. URL https://doi.org/10.1007/s003550000080.

Ruth C. Ben-Yashar and Shmuel I. Nitzan. The optimal decision rule for fixed-size committees in dichotomous choice situations: The general result. International Economic Review, 1997.

Olivier Cappé and Eric Moulines. On-line expectation-maximization algorithm for latent data models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 71(3):593-613, 2009.

Ioannis Caragiannis and Evi Micha. Learning a ground truth ranking using noisy approval votes. In IJCAI, 2017.

Ioannis Caragiannis, Christos Kaklamanis, Nikos Karanikolas, and George A. Krimpas. Evaluating approval-based multiwinner voting in terms of robustness to noise. In IJCAI, 2020.

Condorcet. Essai sur l'application de l'analyse à la probabilité des décisions rendues à la pluralité des voix. 1785.

Vincent Conitzer and Tuomas Sandholm. Common voting rules as maximum likelihood estimators. In UAI, 2005.

Vincent Conitzer, Matthew Rognlie, and Lirong Xia. Preference functions that score rankings and maximum likelihood estimation. In IJCAI, 2009.

Jia Deng, Olga Russakovsky, Jonathan Krause, Michael S. Bernstein, Alexander C. Berg, and Fei-Fei Li. Scalable multi-label annotation. In CHI Conference on Human Factors in Computing Systems, 2014.

Mohamed Drissi-Bakhkhat and Michel Truchon. Maximum likelihood approach to vote aggregation with variable probabilities. Social Choice and Welfare, 2004.

Edith Elkind and Arkadii Slinko. Rationalizations of voting rules. In Handbook of Computational Social Choice. 2016.

Piotr Faliszewski, Piotr Skowron, Arkadii Slinko, and Nimrod Talmon. Multiwinner voting: A new challenge for social choice theory. In Trends in Computational Social Choice. 2017.

Piotr Faliszewski, Arkadii Slinko, and Nimrod Talmon. Multiwinner rules with variable number of winners. In ECAI, volume 325 of Frontiers in Artificial Intelligence and Applications, pages 67-74. IOS Press, 2020.

Hadi Hosseini, Debmalya Mandal, Nisarg Shah, and Kevin Shi. Surprisingly popular voting recovers rankings, surprisingly! In IJCAI, pages 245-251. ijcai.org, 2021.

D. Marc Kilgour. Approval elections with a variable number of winners. Theory and Decision, 81(2):199-211, August 2016. doi: 10.1007/s11238-016-9535-2. URL https://ideas.repec.org/a/kap/theord/v81y2016i2d10.1007_s11238-016-9535-2.html.

Justin Kruger, Ulle Endriss, Raquel Fernández, and Ciyang Qing. Axiomatic analysis of aggregation methods for collective annotation. In AAMAS, 2014.

Martin Lackner and Piotr Skowron. Approval-based committee voting: Axioms, algorithms, and applications. CoRR, abs/2007.01795, 2020. URL https://arxiv.org/abs/2007.01795.

Reshef Meir, Ofra Amir, Gal Cohensius, Omer Ben-Porat, and Lirong Xia. Truth discovery via proxy voting. arXiv:1905.00629, 2019.

Vu-Linh Nguyen, Eyke Hüllermeier, Michael Rapp, Eneldo Loza Mencía, and Johannes Fürnkranz. On aggregation in ensembles of multilabel classifiers. In Discovery Science, 2020.

Shmuel Nitzan and Jacob Paroush. Collective decision making and jury theorems. The Oxford Handbook of Law and Economics, 1, 2017.

Stefanie Nowak and Stefan M. Rüger. How reliable are annotations via crowdsourcing: a study about inter-annotator agreement for multi-label image annotation. In MIR, 2010.

Marcus Pivato. Voting rules as statistical estimators. Social Choice and Welfare, 2013.

Marcus Pivato. Epistemic democracy with correlated voters. Journal of Mathematical Economics, 2017.

Dražen Prelec, H Sebastian Seung, and John McCoy. A solution to the single-question crowd wisdom problem. Nature, 541(7638):532-535, 2017.

Ariel D. Procaccia and Nisarg Shah. Is approval voting optimal given approval votes? In NIPS, 2015.

Ariel D. Procaccia, Sashank Jakkam Reddi, and Nisarg Shah. A maximum likelihood approach for selecting sets of alternatives. In UAI, 2012.

Ciyang Qing, Ulle Endriss, Raquel Fernández, and Justin Kruger. Empirical analysis of aggregation methods for collective annotation. In COLING, 2014.

Nihar B. Shah and Dengyong Zhou. Approval voting and incentives in crowdsourcing. ACM Transactions on Economics and Computation, 2020.

Lloyd Shapley and Bernard Grofman. Optimizing group judgmental accuracy in the presence of interdependencies. Public Choice, 1984.

Lirong Xia and Vincent Conitzer. A maximum likelihood approach towards aggregating partial orders. In IJCAI, 2011.

Lirong Xia, Vincent Conitzer, and Jérôme Lang. Aggregating preferences in multi-issue domains by using maximum likelihood estimators. In AAMAS, 2010.

H. Peyton Young. Condorcet's theory of voting. American Political Science Review, 1988.
+
543
+ ## A DATA COLLECTION AND INCENTIVES
+
+ To see how the participants behave given the ranking incentives that we defined in the football quiz, we plotted the histogram of the sizes of their answers (see Figure 5). Although the platform allows selecting every alternative, only two voters did so for all the questions. Moreover, Figures 5a and 5b show that the majority of the voters tend to select exactly the number of teams that appear in an image.
+
+ ![019639be-6f69-7943-ba80-eceb7179efdb_10_223_633_558_989_0.jpg](images/019639be-6f69-7943-ba80-eceb7179efdb_10_223_633_558_989_0.jpg)
+
+ Figure 5: Histogram of the ballots' sizes
+
+ ## B INITIALIZING VOTERS' RELIABILITIES
+
+ Inspired by the Anna Karenina Principle used in Meir et al. [2019], we devised an initialization strategy for the voters' reliabilities. In his novel, Leo Tolstoy states that "Happy families are all alike; every unhappy family is unhappy in its own way". In the same spirit, it seems reasonable to hypothesize that accurate users tend to give similar answers, whereas inaccurate users each have their own way of being inaccurate.
+
+ The following example illustrates the Anna Karenina initialization scheme.
+
+ Example 3. Consider the approval profile in Table 1, with 3 voters, 5 alternatives, and 4 instances.
+
+ <table><tr><td/><td>${A}^{1}$</td><td>${A}^{2}$</td><td>${A}^{3}$</td><td>${A}^{4}$</td></tr><tr><td>Voter 1</td><td>$\left\{ {{a}_{1},{a}_{4}}\right\}$</td><td>$\left\{ {a}_{1}\right\}$</td><td>$\left\{ {a}_{3}\right\}$</td><td>$\left\{ {a}_{1}\right\}$</td></tr><tr><td>Voter 2</td><td>$\left\{ {a}_{2}\right\}$</td><td>$\left\{ {a}_{5}\right\}$</td><td>$\left\{ {a}_{4}\right\}$</td><td>$\left\{ {a}_{1}\right\}$</td></tr><tr><td>Voter 3</td><td>$\left\{ {{a}_{2},{a}_{3},{a}_{4}}\right\}$</td><td>$\left\{ {{a}_{2},{a}_{3},{a}_{5}}\right\}$</td><td>$\left\{ {{a}_{2},{a}_{3}}\right\}$</td><td>$\left\{ {a}_{3}\right\}$</td></tr></table>
+
+ Table 1: Approval Ballots of 3 Voters on 4 Instances
+
+ Here we have that:
+
+ $$
+ {w}_{\max } = \frac{n}{n + 1} = {0.75},\;{w}_{\min } = \frac{1}{n + 1} = {0.25}
+ $$
+
+ First, compute the mean Jaccard distance of all voters: ${d}_{1} = {1.71},{d}_{2} = {1.69},{d}_{3} = {1.65}$ . So ${d}_{\max } = {d}_{1} = {1.71}$ and ${d}_{\min } = {d}_{3} = {1.65}$ , which means that voter 3 (on average the closest to all the voters) gets the biggest weight ${w}_{3} = {w}_{\max } = {0.75}$ and voter 1 gets the smallest weight ${w}_{1} = {w}_{\min }$ . Next, compute the weight assigned to each voter, for instance:
+
+ $$
+ {w}_{2} = \left( {{w}_{\max } - {w}_{\min }}\right) \frac{\frac{1}{{d}_{2}} - \frac{1}{{d}_{\max }}}{\frac{1}{{d}_{\min }} - \frac{1}{{d}_{\max }}} + {w}_{\min } = {0.38}
+ $$
+
+ Now we can set the initial values for the reliability parameters accordingly:
+
+ $$
+ {\widehat{p}}_{2}^{\left( 0\right) } = \frac{1}{2},\;{\widehat{q}}_{2}^{\left( 0\right) } = \frac{1 - \frac{{e}^{{w}_{2}} - 1}{{e}^{{w}_{2}} + 1}}{2}
+ $$
+
+ We can check that these parameters are such that:
+
+ $$
+ \ln \left\lbrack \frac{{p}_{2}\left( {1 - {q}_{2}}\right) }{{q}_{2}\left( {1 - {p}_{2}}\right) }\right\rbrack = {w}_{2}
+ $$
+
+ After proceeding in the same fashion with all the voters, we get the initial parameters:
+
+ $$
+ \left\{ \begin{array}{lll} {\widehat{p}}_{1}^{\left( 0\right) } = {0.5} & {\widehat{p}}_{2}^{\left( 0\right) } = {0.5} & {\widehat{p}}_{3}^{\left( 0\right) } = {0.5} \\ {\widehat{q}}_{1}^{\left( 0\right) } = {0.44} & {\widehat{q}}_{2}^{\left( 0\right) } = {0.41} & {\widehat{q}}_{3}^{\left( 0\right) } = {0.32} \end{array}\right.
+ $$
+
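The initialization above can be sketched in Python. This is a minimal sketch using the rounded mean distances $d_i$ from Example 3 as given; the function names are ours, and because the published $d_i$ are rounded, the printed weights may differ slightly from the rounded values in the text.

```python
import math

# Mean Jaccard distances of the three voters, taken as given in Example 3.
d = {1: 1.71, 2: 1.69, 3: 1.65}
n = len(d)
w_max, w_min = n / (n + 1), 1 / (n + 1)      # 0.75 and 0.25 for n = 3 voters
d_max, d_min = max(d.values()), min(d.values())

def weight(d_i):
    """Affine interpolation in 1/d: w_min at d_max, w_max at d_min."""
    return (w_max - w_min) * (1 / d_i - 1 / d_max) / (1 / d_min - 1 / d_max) + w_min

def init_reliability(w_i):
    """Initial (p, q) with p fixed to 1/2, chosen so ln[p(1-q) / (q(1-p))] = w_i."""
    p = 0.5
    q = (1 - (math.exp(w_i) - 1) / (math.exp(w_i) + 1)) / 2
    return p, q

for i, d_i in sorted(d.items()):
    w_i = weight(d_i)
    p, q = init_reliability(w_i)
    print(f"voter {i}: w = {w_i:.2f}, p0 = {p:.2f}, q0 = {q:.2f}")
```

The identity $\ln[p(1-q)/(q(1-p))] = w$ holds exactly by construction: with $p = 1/2$ the log-odds reduce to $\ln[(1-q)/q]$, and the chosen $q$ satisfies $(1-q)/q = e^{w}$.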
+ Since the AMLE only guarantees convergence to a local maximum, the result depends on the initial point. To motivate our choice, we compared the Anna Karenina initialization to two other procedures (see Figure 6):
+
+ - Uniform weights: initially, all the voters in the batch are given the same weight.
+
+ - Random weights: initially, for each voter in the batch, ${p}_{i}$ is randomly picked from (0.5, 1) and ${q}_{i}$ is randomly picked from (0, 0.5).
+
+ We can notice that these two baseline procedures show very similar performance, and that both are outperformed by the Anna Karenina initialization.
+
+ ![019639be-6f69-7943-ba80-eceb7179efdb_11_190_570_676_1183_0.jpg](images/019639be-6f69-7943-ba80-eceb7179efdb_11_190_570_676_1183_0.jpg)
+
+ Figure 6: Accuracies of different initializations
+
+ <table><tr><td/><td>${\mathrm{{AMLE}}}_{c}$</td><td>${\mathrm{{AMLE}}}_{f}$</td><td>Modal</td><td>Majority</td></tr><tr><td>Hamming</td><td>0.88</td><td>0.86</td><td>0.84</td><td>0.80</td></tr><tr><td>Harmonic</td><td>0.78</td><td>0.74</td><td>0.69</td><td>0.61</td></tr><tr><td>0/1</td><td>0.60</td><td>0.53</td><td>0.46</td><td>0.26</td></tr></table>
+
+ Table 2: Hamming, Harmonic, and 0/1 accuracies for the entire dataset
+
+ ## C LOSSES
+
+ ### C.1 HAMMING, HARMONIC AND 0-1 SUBSET METRICS
+
+ In addition to the Hamming and 0-1 subset accuracies, we introduce a new metric that can be considered intermediate between the two. The Hamming metric considers each label independently, and the 0-1 subset loss considers the labels jointly in a strict fashion, whereas the harmonic accuracy that we introduce considers all of an instance's labels jointly, but with convex weights that depend on the number of correctly predicted labels:
+
+ $$
+ T\left( {S,{S}^{ * }}\right) = \mathop{\sum }\limits_{{k = 1}}^{\left| S \cap {S}^{ * }\right| }\frac{1}{6 - k}
+ $$
+
+ So out of the 5 labels:
+
+ - if 0 labels are correct then $T = 0$ .
+
+ - if 1 label is correct then $T = \frac{1}{5}$ .
+
+ - if 2 labels are correct then $T = \frac{1}{5} + \frac{1}{4}$ .
+
+ - if 3 labels are correct then $T = \frac{1}{5} + \frac{1}{4} + \frac{1}{3}$ .
+
+ - if 4 labels are correct then $T = \frac{1}{5} + \frac{1}{4} + \frac{1}{3} + \frac{1}{2}$ .
+
+ - if 5 labels are correct then $T = \frac{1}{5} + \frac{1}{4} + \frac{1}{3} + \frac{1}{2} + 1$ .
+
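The metric is a one-liner in code. A minimal sketch follows; the helper name and the parameter $m$ for the number of labels are ours (the setting above fixes $m = 5$, giving the weights $1/(6-k)$):

```python
def harmonic_accuracy(S, S_star, m=5):
    """T(S, S*) = sum_{k=1}^{|S ∩ S*|} 1 / (m + 1 - k), for instances with m labels."""
    overlap = len(set(S) & set(S_star))
    return sum(1 / (m + 1 - k) for k in range(1, overlap + 1))

# All five labels correct: T = 1/5 + 1/4 + 1/3 + 1/2 + 1
print(harmonic_accuracy({1, 2, 3, 4, 5}, {1, 2, 3, 4, 5}))
```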
+ Defined as such, this accuracy favours estimators that correctly estimate most of an instance's labels, without being as rigid as the 0-1 subset accuracy.
+
+ This metric is reminiscent of the Proportional Approval Voting rule for multiwinner elections, which defines the score of a subset of candidates $W$ for a voter as $1 + \frac{1}{2} + \ldots + \frac{1}{j}$ , where $j$ is the number of candidates in $W$ approved by the voter. More generally, we could consider a class of metrics defined by a vector $\overrightarrow{w}$ , such that $T\left( {S,{S}^{ * }}\right) = {w}_{\left| S \cap {S}^{ * }\right| }$ . This class generalizes the Hamming, 0-1, and Harmonic metrics, and is reminiscent of the class of Thiele rules (see for instance Lackner and Skowron [2020] for an extended presentation of multiwinner approval-based committee rules).
+
+ ### C.2 RESULTS
+
+ We show in Table 2 the accuracies of the considered methods when applied to the entire annotation dataset. In Figure 7 we show the evolution of the Harmonic accuracies as the number of randomly picked voters in each batch increases.
+
+ ![019639be-6f69-7943-ba80-eceb7179efdb_12_172_874_771_577_0.jpg](images/019639be-6f69-7943-ba80-eceb7179efdb_12_172_874_771_577_0.jpg)
+
+ Figure 7: Normalized Harmonic accuracy
+
UAI/UAI 2022/UAI 2022 Conference/B0l8-wLjql5/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,688 @@
 
+ # Balancing Utility and Scalability in Metric Differential Privacy
+
+ ## Abstract
+
+ Metric differential privacy (mDP) is a modification of differential privacy that is more suitable when records can be represented in a general metric space, such as text data represented as word embeddings or geographical coordinates on a map. We consider the task of releasing elements of the metric space under metric differential privacy, where utility is measured as the distance of the released element to the original element. Linear programming (LP) can be used to construct a mechanism that achieves the optimal utility for a particular mDP constraint. However, these LPs suffer from a polynomial explosion of variables and constraints that renders them impractical for solving real-world problems. An important question is how to design rigorous mDP mechanisms that balance the utility-scalability tradeoff.
+
+ Our main contribution is a new method for reducing the size of the LP used to generate mDP mechanisms, by constraining the search space such that certain input and output pairs have transition probabilities derived from the exponential mechanism. Our method produces mDP mechanisms whose LPs are smaller than those of all prior work in this area. We also provide a lower bound on the best possible mechanism utility. Our experiments on real-world metric spaces highlight the superior utility-scalability tradeoff of our mechanism.
+
+ ## 1 INTRODUCTION
+
+ Privacy has emerged as a topic of strategic consequence across all computational fields. Differential privacy (DP), a mathematical formulation of privacy proposed by Dwork et al. [2006], provides provable protection guarantees against adversaries with arbitrary side information and computational power. See the book by Dwork and Roth [2013] for a primer on differential privacy and a survey of different techniques proposed in the literature.
+
+ More recently, researchers have noted that differential privacy does not take the underlying metric space of the data domain into account. Differential privacy provides the same level of protection to all perturbations of a single user's data, which makes it inflexible when these perturbations are not all the same. For example, if the data consists of locations on earth, there is a large difference between discerning whether a user is in a 1- or a 100-mile radius. In many scenarios, the former type of privacy breach is more significant because the user's location is more accurately determined. This has led to the development of metric DP (mDP), which provides different protections depending on an underlying metric space, and has been adopted in applications involving releasing sensitive geolocation data [Andrés et al., 2013, Bordenabe et al., 2014] and textual data [Fernandes et al., 2019, Feyisetan et al., 2019, 2020, Xu et al., 2020, Feyisetan and Kasiviswanathan, 2021].
+
+ Mechanism utility for mDP is less well-understood than that of general DP, as the metric strongly influences the permitted behavior of the mechanism. While it is possible to design an optimal mechanism under mDP, it is also a computationally challenging task that requires solving a linear program (LP) with $O\left( {n}^{2}\right)$ variables and $O\left( {n}^{3}\right)$ constraints [Bordenabe et al., 2014], where $n$ is the size of the metric space (i.e., the cardinality of the set). In fact, most mDP mechanisms [Feyisetan et al., 2020, 2019, Xu et al., 2020] do not provide any rigorous guarantees on the utility. Our key contributions are as follows:
+
+ (a) We present a general framework for designing mDP mechanisms which have a better tradeoff between mechanism utility and the size of the LP used to compute the mechanism (Section 3). This framework is based on adding to the optimal mDP LP new constraints requiring that certain transition probabilities equal a weighted version of those arising from the exponential mechanism. ${}^{1}$ As a concrete instantiation of this framework, we construct a LP based on the $r$ nearest neighbors of each point, under which the resulting LP has just $O\left( {nr}\right)$ variables and $O\left( {{n}^{2}r}\right)$ constraints, and in practice, $r$ can be set as a small constant. Therefore, our new mechanism substantially increases the size of the metric space on which mDP mechanisms can be practically applied.
+
+ ---
+
+ ${}^{1}$ The exponential mechanism [McSherry and Talwar, 2007] is a popular approach for differentially private selection.
+
+ ---
+
+ (b) We prove a lower bound, depending on the underlying metric space, on the loss that any mechanism must incur (Section 4). This provides the first non-trivial loss lower bound on any mDP mechanism, including the optimal one. This lower bound is valuable, especially in situations when the LP for the optimal mechanism is infeasible to solve.
+
+ (c) We perform extensive experiments comparing the utility and privacy of our proposed mechanism and existing mechanisms in text and geolocation applications (Section 5). These experiments indicate that our proposed mechanism performs more closely to the optimal mechanism than the others tested, and can result in a utility improvement of about ${25}\%$ compared to the non-optimal mechanisms. In terms of scalability, our results indicate that our proposed mechanism can scale to metric spaces four times larger than the optimal mechanism.
+
+ Related Work in Metric DP. Metric DP originated in the context of location privacy where, given a dataset of geolocation coordinates (longitude and latitude) on a plane, the notion of adjacency could be better captured using the Euclidean distance between the coordinates [Andrés et al., 2013, Chatzikokolakis et al., 2013]. Metric DP mechanisms have been investigated for various choices of metrics, including Euclidean, Manhattan, and Chebyshev metrics, among others [Chatzikokolakis et al., 2013, Andrés et al., 2013, Chatzikokolakis et al., 2015, Fernandes et al., 2019, Feyisetan et al., 2019, 2020]. Unlike our focus here, none of these results compare the loss of their proposed mechanisms to the optimal loss.
+
+ The most directly related work to ours is that of Bordenabe et al. [2014]. This paper proposes finding the optimal mDP mechanism using linear programming. They propose a method based on spanner graphs to reduce the size of the LP (outlined in Appendix A). A $\delta$ -spanner graph is a set of edges between points in a metric space such that the distance between two points in the graph approximates the metric up to a multiplicative factor $\delta$ . Bordenabe et al. [2014] use a 3-spanner, for which a construction using just $O\left( {n}^{1.5}\right)$ edges exists, to reduce the number of constraints in the LP from $O\left( {n}^{3}\right)$ to $O\left( {n}^{2.5}\right)$ .
+
+ Related Work in Privately Releasing Text Embeddings. Vector representations of words, sentences, and documents have all become basic building blocks in NLP pipelines and algorithms. Hence, it is natural to consider privacy mechanisms that target these representations in the underlying metric space [Fernandes et al., 2019, Feyisetan et al., 2019, Xu et al., 2020, Feyisetan et al., 2020]. The most relevant result to our setting is the mechanism of Feyisetan et al. [2020] (referred to as Madlib). In Section 5, we compare our mechanism to Madlib as a baseline. ${}^{2}$
+
+ ## 2 TECHNICAL PRELIMINARIES
+
+ Throughout this paper, we consider data that comes from a finite metric space $\left( {\mathcal{W},{d}_{\mathcal{W}}}\right)$ where $\mathcal{W}$ is the set of values the data may take. For example, in the text release use case, $\mathcal{W}$ consists of a vocabulary set, and in the geolocation use case, $\mathcal{W}$ consists of a set of locations. The metric ${d}_{\mathcal{W}} : \mathcal{W} \times \mathcal{W} \rightarrow \mathbb{R}$ captures dissimilarity between elements in the set. In NLP applications, it is very common to represent words via a high-dimensional text embedding $\phi : \mathcal{W} \rightarrow {\mathcal{W}}^{\prime } \subseteq {\mathbb{R}}^{d}$ . ${}^{3}$ Then we can define the distance between two words as the distance between the embedded words: i.e., for all ${w}_{1},{w}_{2} \in \mathcal{W}$ , we define ${d}_{\mathcal{W}}\left( {{w}_{1},{w}_{2}}\right) = {d}_{{\mathcal{W}}^{\prime }}\left( {\phi \left( {w}_{1}\right) ,\phi \left( {w}_{2}\right) }\right)$ .
+
+ ### 2.1 PRIVACY ON METRIC SPACES
+
+ Informally, a mechanism $\mathcal{M}$ satisfies metric DP ${}^{4}$ if its behavior is nearly the same on inputs that are close together in the metric space. This is formalized by the following notion of $\epsilon$ - ${d}_{\mathcal{W}}$ privacy.
+
+ Definition 1 (Metric DP (mDP)). Given a finite set $\mathcal{W}$ , a metric ${d}_{\mathcal{W}} : \mathcal{W} \times \mathcal{W} \rightarrow \mathbb{R}$ , and a privacy parameter $\epsilon > 0$ , a mechanism $\mathcal{M} : \mathcal{W} \rightarrow \mathcal{W}$ satisfies $\epsilon$ - ${d}_{\mathcal{W}}$ privacy if for all ${w}_{1},{w}_{2}, w \in \mathcal{W}$ :
+
+ $$
+ \mathbb{P}r\left\lbrack {\mathcal{M}\left( {w}_{1}\right) = w}\right\rbrack \leq \exp \left( {\epsilon {d}_{\mathcal{W}}\left( {{w}_{1},{w}_{2}}\right) }\right) \mathbb{P}r\left\lbrack {\mathcal{M}\left( {w}_{2}\right) = w}\right\rbrack .
+ $$
+
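For a finite $\mathcal{W}$, Definition 1 can be checked by brute force over all triples. Below is a small illustrative sketch; the checker name, the two-point metric, and the candidate matrices are our own, not from the paper.

```python
import math

def satisfies_mdp(M, D, eps):
    """Brute-force check of Definition 1: M[u][w] <= exp(eps * D[u][v]) * M[v][w]
    for all u, v, w (with a tiny tolerance for floating-point noise)."""
    n = len(M)
    return all(
        M[u][w] <= math.exp(eps * D[u][v]) * M[v][w] + 1e-12
        for u in range(n) for v in range(n) for w in range(n)
    )

# Two elements at distance 1; each row is the output distribution of a mechanism.
D = [[0.0, 1.0], [1.0, 0.0]]
eps = 1.0
M_ok = [[0.6, 0.4], [0.4, 0.6]]       # probability ratio 0.6/0.4 = 1.5 <= e^1
M_bad = [[0.99, 0.01], [0.01, 0.99]]  # probability ratio 99 > e^1
print(satisfies_mdp(M_ok, D, eps), satisfies_mdp(M_bad, D, eps))
```

Note the definition is symmetric in $w_1, w_2$, so the check must pass in both directions; the second matrix fails because its output distributions on the two close inputs are too far apart.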
+
+ The above definition is closely related to the definition of local ${DP}$ [Kasiviswanathan et al., 2011] in that we apply $\mathcal{M}$ to each element of some database $D \in {\mathcal{W}}^{m}$ independently. The difference between mDP and local DP is that, because of the ${d}_{\mathcal{W}}$ term (which is absent in the local DP formulation), an mDP mechanism guarantees indistinguishability for ${w}_{1},{w}_{2} \in \mathcal{W}$ based on the distance ${d}_{\mathcal{W}}\left( {{w}_{1},{w}_{2}}\right)$ between them. Similar to traditional differential privacy, mDP is preserved under post-processing and composition of mechanisms [Koufogiannis et al., 2016]. In metric spaces, a natural definition of the loss of a mechanism on an element $w \in \mathcal{W}$ is the expected distance between $w$ and $\mathcal{M}\left( w\right)$ : $\mathcal{L}\left( {\mathcal{M}, w}\right) = {\mathbb{E}}_{\mathcal{M}}\left\lbrack {{d}_{\mathcal{W}}\left( {w,\mathcal{M}\left( w\right) }\right) }\right\rbrack$ . Here, the expectation is over the random bits in $\mathcal{M}$ . We define the loss of $\mathcal{M}$ to be the worst-case loss of $\mathcal{M}$ on any particular element $w \in \mathcal{W}$ :
+
+ $$
+ \mathcal{L}\left( \mathcal{M}\right) = \mathop{\max }\limits_{{w \in \mathcal{W}}}\mathcal{L}\left( {\mathcal{M}, w}\right) \tag{1}
+ $$
+
+ ---
+
+ ${}^{2}$ While our mDP mechanism is applicable to any metric space, our experiments are over word embeddings and geolocations in the Euclidean space. Therefore, we do not directly compare with [Fernandes et al., 2019, Feyisetan et al., 2019, Xu et al., 2020], which work with embeddings in non-Euclidean spaces.
+
+ ${}^{3}$ Our results do not depend on the choice of the embedding.
+
+ ${}^{4}$ Metric DP is sometimes referred to as Lipschitz privacy [Koufogiannis et al., 2016], motivated by the fact that the privacy guarantee can be viewed as a Lipschitz condition on the mechanism, $\left| {\ln \left( {\mathbb{P}r\left\lbrack {\mathcal{M}\left( {w}_{1}\right) = w}\right\rbrack }\right) - \ln \left( {\mathbb{P}r\left\lbrack {\mathcal{M}\left( {w}_{2}\right) = w}\right\rbrack }\right) }\right| \leq \epsilon {d}_{\mathcal{W}}\left( {{w}_{1},{w}_{2}}\right)$ .
+
+ ---
+
+ Notice that $\mathcal{L}\left( \mathcal{M}\right)$ is non-negative due to ${d}_{\mathcal{W}}$ being a metric. Considering the loss as a worst case instead of an average has the advantage that there cannot exist "adversarial" elements $w \in \mathcal{W}$ such that $\mathcal{L}\left( {\mathcal{M}, w}\right)$ is much higher than $\mathcal{L}\left( \mathcal{M}\right)$ . Similar loss functions have been studied in other DP settings, such as in [Hardt and Talwar, 2009].
+
+ Optimal Mechanism with LP. It is easy to see that the constraints of mDP are linear. For a mechanism $\mathcal{M}$ , we can consider its stochastic matrix ${}^{5}$ $M$ given by $M = \left\{ {{M}_{uv} : u, v \in \mathcal{W}}\right\}$ with ${M}_{uv} = \Pr \left\lbrack {\mathcal{M}\left( u\right) = v}\right\rbrack$ . Then, $\mathcal{M}$ satisfies mDP if and only if $M$ is stochastic and satisfies the following constraints
+
+ $$
+ {M}_{uw} \leq {M}_{vw} \cdot \exp \left( {{d}_{\mathcal{W}}\left( {u, v}\right) \epsilon }\right) \;\forall u, v, w \in \mathcal{W} \tag{2}
+ $$
+
+ Since the constraints are linear, $\mathrm{{mDP}}$ constrains $M$ to be in a polytope. We will overload notation and write $\mathcal{L}\left( {M, w}\right)$ and $\mathcal{L}\left( M\right)$ for the losses of the mechanism given by transition matrix $M$ . These losses are given by:
+
+ $$
+ \mathcal{L}\left( M\right) = \mathop{\max }\limits_{{u \in \mathcal{W}}}\mathcal{L}\left( {M, u}\right)
+ $$
+
+ $$
+ = \mathop{\max }\limits_{{u \in \mathcal{W}}}\mathop{\sum }\limits_{{v \in \mathcal{W}}}{d}_{\mathcal{W}}\left( {u, v}\right) {M}_{uv}. \tag{3}
+ $$
+
+ Over the variables ${M}_{uv}$ , the loss $\mathcal{L}\left( M\right)$ is a maximum of linear functions. The optimal mechanism is given by the stochastic matrix $M$ that minimizes $\mathcal{L}\left( M\right)$ subject to the privacy constraints (2). Using standard techniques in linear programming, we can compute the best mechanism with the following LP over the variables $M, k$ :
+
+ $$
+ {\mathcal{P}}_{\text{OPTMECH}}\left( \epsilon \right) = \text{minimize } k \text{ subject to}
+ $$
+
+ $$
+ \mathcal{L}\left( {M, w}\right) \leq k,\;\forall w \in \mathcal{W}
+ $$
+
+ $M$ stochastic
+
+ $M$ satisfies (2)
+
+ This LP has $O\left( {n}^{2}\right)$ variables and $O\left( {n}^{3}\right)$ constraints, where $n = \left| \mathcal{W}\right|$ . Therefore, even with the state-of-the-art LP approaches, which all require $\Omega \left( {N}^{2}\right)$ time, where $N$ is the number of variables [Jiang et al., 2021], scalability is problematic (here $N = {n}^{2}$ ). This is the central motivation for our work.
+
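The LP above can be assembled directly for a toy metric space. The sketch below uses `scipy.optimize.linprog`; the helper name and the three-point line metric are our own illustration under stated assumptions, not the paper's implementation.

```python
import itertools
import math

import numpy as np
from scipy.optimize import linprog

def optimal_mechanism(D, eps):
    """Solve P_OPTMECH(eps): variables are the n*n entries of M (row-major) plus k."""
    n = len(D)
    nv = n * n + 1
    c = np.zeros(nv)
    c[-1] = 1.0                         # minimize k
    A_ub, b_ub = [], []
    # loss constraints: sum_v D[w][v] * M[w][v] - k <= 0 for every input w
    for w in range(n):
        row = np.zeros(nv)
        for v in range(n):
            row[w * n + v] = D[w][v]
        row[-1] = -1.0
        A_ub.append(row); b_ub.append(0.0)
    # privacy constraints (2): M[u][w] - exp(eps * D[u][v]) * M[v][w] <= 0
    for u, v, w in itertools.product(range(n), repeat=3):
        if u == v:
            continue
        row = np.zeros(nv)
        row[u * n + w] += 1.0
        row[v * n + w] -= math.exp(eps * D[u][v])
        A_ub.append(row); b_ub.append(0.0)
    # stochasticity: each row of M sums to 1
    A_eq = np.zeros((n, nv)); b_eq = np.ones(n)
    for u in range(n):
        A_eq[u, u * n:(u + 1) * n] = 1.0
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * nv)
    M = res.x[:-1].reshape(n, n)
    return M, res.x[-1]                 # mechanism and its worst-case loss k

# Three points on a line at coordinates 0, 1, 2 (Euclidean distances).
pts = [0.0, 1.0, 2.0]
D = [[abs(a - b) for b in pts] for a in pts]
M, loss = optimal_mechanism(D, eps=1.0)
print(np.round(M, 3), round(loss, 3))
```

The uniform mechanism (all entries $1/3$) is feasible here with $k = 1$, so the LP is feasible and bounded; its solution can only improve on that loss.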
+ ## 3 BALANCING UTILITY-SCALABILITY
+
+ Given the scalability issues in solving ${\mathcal{P}}_{\text{OPTMECH }}\left( \epsilon \right)$ , a natural idea is to reduce the LP size. In this section, we present a new method to reduce the size of the LP in ${\mathcal{P}}_{\text{OPTMECH }}$ while still maintaining the mDP guarantee (Definition 1). Our method is based on adding exponential mechanism (EXPMECH) [McSherry and Talwar, 2007] equality constraints to the LP. Before we do this, we make use of the observation, which may be of independent interest, that the EXPMECH is provably not optimal in mDP. Thus, the constraints we add come from a "weighted" version of the EXPMECH. All missing proofs are collected in Appendix B.
+
+ ### 3.1 IMPROVING THE EXPMECH IN MDP
+
+ Informally, the exponential mechanism [McSherry and Talwar, 2007] is a method for differentially private selection from a discrete set of candidate outputs. Due to its flexibility, the EXPMECH has become a popular tool for designing DP mechanisms. Furthermore, the EXPMECH is known to be optimal in DP for many choices of utility function [Hardt and Talwar, 2009, Aldà and Simon, 2017].
+
+ However, in the mDP setting, the exponential mechanism can be fooled by outlier elements. Informally, the dense areas of the metric space can act as a "black hole": the EXPMECH will output elements in the dense area with high probability, even for the outlier elements. This drives up the loss for the outlier elements.
+
+ For a metric space $\mathcal{W} = \left\{ {{w}_{1},\ldots ,{w}_{n}}\right\}$ and metric ${d}_{\mathcal{W}}$ , the EXPMECH has the following transition probability
+
+ $$
+ \mathbb{P}r\left\lbrack {\operatorname{EXPMECH}\left( {w}_{i}\right) = {w}_{j}}\right\rbrack = \frac{{e}^{-\epsilon {d}_{\mathcal{W}}\left( {{w}_{i},{w}_{j}}\right) /2}}{\mathop{\sum }\limits_{{k = 1}}^{n}{e}^{-\epsilon {d}_{\mathcal{W}}\left( {{w}_{i},{w}_{k}}\right) /2}}.
+ $$
+
+ To illustrate on a concrete example, consider the metric space where $\mathcal{W} = \left\{ {{w}_{1},\ldots ,{w}_{n}}\right\}$ and where ${d}_{\mathcal{W}}$ satisfies (1) ${d}_{\mathcal{W}}\left( {{w}_{1},{w}_{i}}\right) = 1$ for $i \geq 2$ , and (2) ${d}_{\mathcal{W}}\left( {{w}_{i},{w}_{j}}\right) < \delta$ when $i, j \geq 2$ , where $\delta$ is a small constant. As $\delta \rightarrow 0$ , for each $j \geq 2$ , the EXPMECH satisfies $\Pr \left\lbrack {\operatorname{EXPMECH}\left( {w}_{1}\right) = {w}_{j}}\right\rbrack = \frac{{e}^{-\epsilon /2}}{1 + \left( {n - 1}\right) {e}^{-\epsilon /2}}$ , and thus $\Pr \left\lbrack {\operatorname{EXPMECH}\left( {w}_{1}\right) \neq {w}_{1}}\right\rbrack = \frac{\left( {n - 1}\right) {e}^{-\epsilon /2}}{1 + \left( {n - 1}\right) {e}^{-\epsilon /2}}$ . As $n$ grows, the probability of this event approaches 1, and the loss does as well. The elements ${w}_{i}$ for $i \geq 2$ are acting as a "black hole".
+
+ This can be fixed by considering a more general mechanism that assigns weights to the output probabilities. For positive weights $\mathbf{Y} = \left( {{Y}_{1},\ldots ,{Y}_{n}}\right) \in {\left( {\mathbb{R}}^{ + }\right) }^{n}$ , consider the more general mechanism given by
+
+ $$
+ \mathbb{P}r\left\lbrack {{\operatorname{EXPMECH}}_{\mathbf{Y}}\left( {w}_{i}\right) = {w}_{j}}\right\rbrack = \frac{{Y}_{j}{e}^{-\epsilon {d}_{\mathcal{W}}\left( {{w}_{i},{w}_{j}}\right) /2}}{\mathop{\sum }\limits_{{k = 1}}^{n}{Y}_{k}{e}^{-\epsilon {d}_{\mathcal{W}}\left( {{w}_{i},{w}_{k}}\right) /2}}.
+ $$
+
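The black-hole example and its weighted fix can be simulated numerically. This is a sketch with our own helper names, taking the limit $\delta = 0$ for simplicity (so the cluster points coincide):

```python
import math

def expmech_row(i, dist, eps, Y=None):
    """Output distribution of EXPMECH_Y on input w_i; Y = all-ones gives plain EXPMECH."""
    n = len(dist)
    Y = Y or [1.0] * n
    scores = [Y[j] * math.exp(-eps * dist[i][j] / 2) for j in range(n)]
    total = sum(scores)
    return [s / total for s in scores]

def loss_on(i, dist, row):
    """Expected distance from w_i under the output distribution `row`."""
    return sum(dist[i][j] * p for j, p in enumerate(row))

# Black-hole metric: w_1 (index 0) at distance 1 from a tight cluster w_2..w_n.
n, eps = 50, 2.0
dist = [[0.0 if i == j else (1.0 if 0 in (i, j) else 0.0) for j in range(n)]
        for i in range(n)]

plain = loss_on(0, dist, expmech_row(0, dist, eps))
Y = [1.0] + [1.0 / (n - 1)] * (n - 1)   # down-weight the dense cluster
weighted = loss_on(0, dist, expmech_row(0, dist, eps, Y))
print(plain, weighted)
```

The plain EXPMECH loses roughly $(n-1)e^{-\epsilon/2} / (1 + (n-1)e^{-\epsilon/2})$ on the outlier, while the down-weighted version keeps the loss bounded away from 1, matching the intuition behind Lemma 1.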
+ This mechanism can be shown to satisfy $\epsilon$ - ${d}_{\mathcal{W}}$ privacy.
+
+ Proposition 1. For any metric space and any $\mathbf{Y} \in {\left( {\mathbb{R}}^{ + }\right) }^{n}$ , the mechanism ${\mathrm{{EXPMECH}}}_{\mathbf{Y}}$ satisfies $\epsilon$ - ${d}_{\mathcal{W}}$ privacy.
+
+ In our example, to avoid the problem encountered by the regular EXPMECH, we can weight ${w}_{1}$ higher than ${w}_{2},\ldots ,{w}_{n}$ . For example, there exists a weighting $\mathbf{Y}$ such that the following loss is possible:
+
+ ---
+
+ ${}^{5}$ A stochastic matrix is a square matrix whose rows are probability vectors.
+
+ ---
+
+ Lemma 1. With $\mathcal{W} = \left\{ {{w}_{1},\ldots ,{w}_{n}}\right\}$ and ${d}_{\mathcal{W}}$ defined as above, when $\mathbf{Y} = \left( {1,1/\left( {n - 1}\right) ,1/\left( {n - 1}\right) ,\ldots }\right)$ , we have that
+
+ $$
+ \mathcal{L}\left( {\operatorname{EXPMECH}}_{\mathbf{Y}}\right) \leq \frac{\frac{1}{n - 1} + {e}^{-\epsilon /2}}{1 + {e}^{-\epsilon /2}}\mathcal{L}\left( \operatorname{EXPMECH}\right) .
+ $$
+
+ When $\epsilon \geq 2\log n$ , we have $\mathcal{L}\left( {\operatorname{EXPMECH}}_{\mathbf{Y}}\right) \leq \frac{2}{n - 1}\mathcal{L}\left( \operatorname{EXPMECH}\right)$ .
+
141
+ This establishes that the EXPMECH is provably not optimal on our example metric space. However, one problem with the more general ${\mathrm{{EXPMECH}}}_{\mathbf{Y}}$ is that it is not clear how to set $\mathbf{Y}$ to optimize the loss other than the rule of thumb that dense elements should be weighted less. In the next section, we leave it to the LP solver to optimize these weights.
142
+
143
+ ### 3.2 BALANCING LP LOSS AND SCALABILITY
144
+
145
+ To reduce the number of LP constraints required to find the optimal mechanism, our key idea is to add equality constraints in such a way that many of the original constraints in ${\mathcal{P}}_{\text{OPTMECH }}$ are trivially satisfied. This results in potentially a much smaller LP; however, optimality is no longer guaranteed. The balance between optimality and LP size is decided by the number of equality constraints. In fact, we will develop a general framework for balancing this tradeoff, which we call ConstOPTMech (Algorithm 1).
146
+
147
+ Specifically, to obtain the LP describing ConstOPTMech, we start with ${\mathcal{P}}_{\text{OPTMECH }}\left( \epsilon \right)$ and add non-negative variables $\left\{ {{Y}_{w} : w \in \mathcal{W}}\right\}$ . Then, for certain variables ${M}_{uv}$ , we add additional "weighted exponential mechanism-like" constraints: ${M}_{uv} = {Y}_{v}{e}^{-\epsilon {d}_{\mathcal{W}}\left( {u, v}\right) }$ . We leave the weights ${Y}_{v}$ to be optimized by the LP solver.
148
+
149
+ We allow deviations from the additional constraints in the form of a replacement function $I\left( v\right) : \mathcal{W} \rightarrow {2}^{\mathcal{W}}$ that returns the elements $u \in \mathcal{W}$ for which the weighted exponential mechanism should not be used to set ${M}_{uv}$ . To encode this, we add the following constraints:
150
+
151
+ $$
152
+ {M}_{uv} = {Y}_{v}{e}^{-\epsilon {d}_{\mathcal{W}}\left( {u, v}\right) }\quad \forall u, v \in \mathcal{W}, u \notin I\left( v\right) \tag{4}
153
+ $$
154
+
155
+ The replacement function $I\left( v\right)$ indicates where we do not want to use the exponential mechanism, and there are many candidates. We will later consider the following instantiation of this replacement function
156
+
157
+ $$
+ {I}_{\mathrm{{NN}}, r}\left( v\right) = \{ u \in \mathcal{W} \mid v\text{ is an }r\text{-nearest neighbor of }u\} \tag{5}
+ $$
162
+
163
+ which returns the elements $u$ such that $v$ is one of the $r$ nearest neighbors of $u$ . We employ this function ${I}_{\mathrm{{NN}}, r}$ because the exponential mechanism already assigns exponentially-low probabilities of returning the farthest elements from a given element, and we conjecture there is not much improvement to be made for such scenarios.
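As an illustration (our own sketch, assuming the metric space is given as a distance matrix), ${I}_{\mathrm{NN}, r}$ can be computed by inverting each element's $r$-nearest-neighbor list:

```python
import numpy as np

def replacement_fn_nn(dist, r):
    """I_{NN,r}(v) = { u in W : v is among the r nearest neighbors of u },
    computed by inverting each row's nearest-neighbor list (self excluded)."""
    n = dist.shape[0]
    I = {v: set() for v in range(n)}
    for u in range(n):
        # u's r nearest neighbors, skipping u itself (assumes no distance ties).
        neighbors = [int(v) for v in np.argsort(dist[u]) if v != u][:r]
        for v in neighbors:
            I[v].add(u)
    return I
```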
164
+
165
+ Adding the constraints (4) to ${\mathcal{P}}_{\text{OPTMECH }}\left( \epsilon \right)$ will satisfy the privacy constraints (2), but it may be impossible for $M$ to be stochastic. One can see this in the extreme case of setting $I\left( v\right) = \varnothing$ : then (4) holds for all $u, v \in \mathcal{W}$ , and a nonnegative assignment to the ${Y}_{v}$ that makes $M$ stochastic need not exist. To fix this, we relax the constraint that $M$ be stochastic and only insist that its rows sum to at least 1. We add a penalization term involving a constant $\lambda > 0$ to the loss of each element to penalize the extent to which the rows sum to more than 1. As such, our loss function now takes the following form:
166
+
167
+ $$
168
+ \widetilde{\mathcal{L}}\left( {M, w}\right) = \mathop{\sum }\limits_{{u \in \mathcal{W}}}{M}_{wu}{d}_{\mathcal{W}}\left( {u, w}\right) + \lambda \mathop{\sum }\limits_{{u \in \mathcal{W}}}{M}_{wu} \tag{6}
169
+ $$
170
+
171
+ With the modified loss function and relaxed stochasticity requirement, we obtain the LP giving ConstOPTMech.
172
+
173
+ ${\mathcal{P}}_{\text{CONSTOPTMECH }}\left( \epsilon \right) =$ minimize $k$ subject to
174
+
175
+ $$
176
+ \widetilde{\mathcal{L}}\left( {M, w}\right) \leq k\forall w \in \mathcal{W}
177
+ $$
178
+
179
+ $$
180
+ {M}_{uv} \geq 0,\mathop{\sum }\limits_{{v \in \mathcal{W}}}{M}_{uv} \geq 1\;\forall u, v \in \mathcal{W}
181
+ $$
182
+
183
+ $$
184
+ M\text{satisfies (2) and (4)}
185
+ $$
186
+
187
+ $$
188
+ {Y}_{v} \geq 0\forall v \in \mathcal{W}
189
+ $$
190
+
191
+ Notice that this LP is always feasible because one valid solution is ${M}_{uv} = {Y}_{v}{e}^{-\epsilon {d}_{\mathcal{W}}\left( {u, v}\right) }$ for all $u, v$ : since there are no restrictions on the ${Y}_{v}$ variables, we can set them high enough so that $\mathop{\sum }\limits_{{v \in \mathcal{W}}}{M}_{uv} \geq 1$ .
192
+
193
+ The benefit of the equality constraints (4) is that we can drop a large number of the constraints in (2), as they are trivially satisfied. This allows us to find a solution to ${\mathcal{P}}_{\text{CONSTOPTMECH }}\left( \epsilon \right)$ much faster than ${\mathcal{P}}_{\text{OPTMECH }}\left( \epsilon \right)$ .
194
+
195
+ Theorem 1. ${\mathcal{P}}_{\text{CONSTOPTMECH }}$ is feasible, and it is possible to solve it using a linear program with $n + 1 + \mathop{\sum }\limits_{{v \in \mathcal{W}}}\left| {I\left( v\right) }\right|$ variables and ${2n} + \mathop{\sum }\limits_{{v \in \mathcal{W}}}2{\left| I\left( v\right) \right| }^{2} + 3\left| {I\left( v\right) }\right|$ constraints. The number of non-zero coefficients in the LP is at most $2{n}^{2} + \mathop{\sum }\limits_{{v \in \mathcal{W}}}2{\left| I\left( v\right) \right| }^{2} + 5\left| {I\left( v\right) }\right|$ .
196
+
197
+ Our choice to drop the stochasticity requirement of $M$ gave us feasibility of ${\mathcal{P}}_{\text{CONSTOPTMECH }}$ , but the solution $M$ is no longer a mechanism because it is not stochastic. We obtain ConstOPTMech by normalizing the solution to ${\mathcal{P}}_{\text{CONSTOPTMECH }}\left( \epsilon \right)$ . Furthermore, any choice of $\lambda$ gives rise to a valid mechanism.
198
+
199
+ Mechanism CONSTOPTMECH uses half of the privacy budget for solving ${\mathcal{P}}_{\text{CONSTOPTMECH }}\left( \frac{\epsilon }{2}\right)$ because normalization may increase the privacy parameter by a factor of 2. We
200
+
201
+ Algorithm 1: Mechanism ConstOPTMech
+
+ ---
+
+ Data: Universe $\mathcal{W}$ , metric ${d}_{\mathcal{W}}$ , budget $\epsilon$ , replacement function $I\left( v\right)$ , $\lambda \in {\mathbb{R}}^{ + }$ .
+
+ Result: Transition matrix $H$ .
+
+ $M$ , loss $\leftarrow$ Solve $\left( {{\mathcal{P}}_{\text{ConstOPTMech }}\left( \frac{\epsilon }{2}\right) }\right)$ with $\lambda$ ;
+
+ for $u, v \in \mathcal{W}$ do
+
+ $\quad {H}_{uv} \leftarrow \frac{{M}_{uv}}{\mathop{\sum }\limits_{{w \in \mathcal{W}}}{M}_{uw}}$ ;
+
+ return $H$
+
+ ---
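The normalization loop at the end of Algorithm 1 is a simple row normalization of $M$; a minimal numpy sketch (function name ours, with the LP solution $M$ assumed precomputed):

```python
import numpy as np

def finalize_mechanism(M, tol=1e-9):
    """Final step of Algorithm 1: the LP relaxation only guarantees that rows
    of M sum to at least 1, so divide each row by its sum to obtain the
    row-stochastic transition matrix H."""
    M = np.asarray(M, dtype=float)
    row_sums = M.sum(axis=1, keepdims=True)
    assert np.all(row_sums >= 1 - tol), "rows must sum to >= 1 per the LP"
    return M / row_sums
```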
220
+
221
+ are able to show the following privacy guarantee on CONSTOPTMECH. The proof is a generalization of Proposition 1.
222
+
223
+ Theorem 2. For any set $\mathcal{W}$ , metric ${d}_{\mathcal{W}}$ , budget $\epsilon$ , and replacement function $I\left( v\right) : \mathcal{W} \rightarrow {2}^{\mathcal{W}}$ , Mechanism CONSTOPTMECH satisfies $\epsilon$ - ${d}_{\mathcal{W}}$ privacy.
224
+
225
+ Setting the Replacement Function. In Theorem 1 when the replacement function $I\left( v\right) = {I}_{\mathrm{{NN}}, r}\left( v\right)$ , then the number of variables in ${\mathcal{P}}_{\text{CONSTOPTMECH }}$ is at most ${nr} + n + 1$ , the number of constraints is at most ${n}^{2}r + {3nr} + {2n}$ , and the number of non-zero elements is at most $2{n}^{2} + {5nr} + 2{n}^{2}r$ (see Corollary 1).
226
+
227
+ When we apply ${I}_{\mathrm{{NN}}, r}$ with $r = n - 1$ , we add no equality constraints, so the constraints (4) vanish and only (2) remains. The LP size of ${\mathcal{P}}_{\text{CONSTOPTMECH }}$ is then $O\left( {n}^{3}\right)$ , the same as that of ${\mathcal{P}}_{\text{OPTMECH }}$ . When $r$ is a constant much less than $n$ , then the number of variables is $O\left( {nr}\right)$ and the number of constraints is $O\left( {{n}^{2}r}\right)$ , each saving a factor of $n$ . We note that $O\left( {{n}^{2}r}\right)$ is a worst-case bound that is not always tight. If one assumes that there is no $v \in \mathcal{W}$ such that at least ${10r}$ other elements of $\mathcal{W}$ count $v$ as an $r$ -nearest neighbor, then $\mathop{\sum }\limits_{{v \in \mathcal{W}}}{\left| I\left( v\right) \right| }^{2} + \left| {I\left( v\right) }\right| \leq \mathop{\sum }\limits_{{v \in \mathcal{W}}}{110}{r}^{2} \leq O\left( {n{r}^{2}}\right) .$
228
+
229
+ In Table 1, we compare the number of variables, constraints, and non-zero coefficients arising in ${\mathcal{P}}_{\text{CONSTOPTMECH }}$ as compared to those in the optimal mechanism $\left( {\mathcal{P}}_{\text{OPTMECH }}\right)$ and the LP based on spanner graphs [Bordenabe et al., 2014] (referred to as ${\mathcal{P}}_{\text{SPANNERMECH }}$ , see Appendix A). We see that ${\mathcal{P}}_{\text{CONSTOPTMECH }}$ improves on all three of these quantities compared to the other two LPs when $r \ll n$ . However, we note that these are worst-case upper bounds, and in practice the LP complexity measures may be smaller. Depending on the setting of $r$ , we have a tradeoff between scalability and the increase in loss compared to the optimal mechanism. We perform an empirical analysis of this tradeoff in Section 5.
230
+
231
+ ## 4 LOWER BOUNDS
232
+
233
+ In this section, we propose an easy to compute lower bound for mechanism loss. Our lower bound builds on the intuition that for an element $w \in \mathcal{W}$ , if there are many elements that are far, but not too far, from $w$ , then mDP forces the distribution $\mathcal{M}\left( w\right)$ to place significant mass on the elements which are farther away. This gives a lower bound on the loss of $\mathcal{M}$ . To make this intuition formal, we define a packing of
234
+
235
+ $\mathcal{W}$ to be a set of elements which are at least a certain distance from each other. In the following, let $B\left( {x, r}\right)$ denote the elements $y \in \mathcal{W}$ such that ${d}_{\mathcal{W}}\left( {x, y}\right) \leq r$ .
236
+
237
+ Definition 2. Let $\mathcal{W}$ be a set. A finite set $S \subseteq \mathcal{W}$ is called a $\left( {c, r, Q}\right)$ -packing w.r.t. metric ${d}_{\mathcal{W}}$ if the following hold: $\left| S\right| = c$ ; for all $x,{x}^{\prime } \in S$ , $B\left( {x, r}\right) \cap B\left( {{x}^{\prime }, r}\right) = \varnothing$ , i.e., the balls of radius $r$ around the elements of $S$ are disjoint; and for all $x,{x}^{\prime } \in S$ , ${d}_{\mathcal{W}}\left( {x,{x}^{\prime }}\right) \leq Q$ , i.e., the maximum distance between any two elements of $S$ is at most $Q$ .
238
+
239
+ The lower bound we derive holds for any $\left( {c, r, Q}\right)$ -packing of the metric space $\mathcal{W}$ . The catch is that if a packing with a small $r$ or $c$ is used, the bound will not be strong. Our lower bound involves the quantity $N\left( {w, S}\right)$ that depends on a $w \in \mathcal{W}$ and a $\left( {c, r, Q}\right)$ -packing $S$ . $N\left( {w, S}\right)$ is given by:
240
+
241
+ $$
242
+ N\left( {w, S}\right) = \mathop{\sum }\limits_{{s \in S}}\exp \left( {-{d}_{\mathcal{W}}\left( {w, s}\right) \epsilon }\right) ,
243
+ $$
244
+
245
+ We also have the lower bound $N\left( {w, S}\right) \geq 1 + \left( {c - 1}\right) \exp \left( {-{Q\epsilon }}\right)$ , which follows because $S$ is a $\left( {c, r, Q}\right)$ -packing. Notice that $N\left( {w, S}\right) \geq 1$ (because $w \in S$ and ${d}_{\mathcal{W}}\left( {w, w}\right) = 0$ ), and $N\left( {w, S}\right)$ grows linearly with the number of elements in $S$ and grows exponentially as the elements in $S$ move closer to $w$ or as $\epsilon$ decreases. This represents the increasing amount of mass that must be placed on these elements according to mDP. Our lower bound grows stronger with increasing $N\left( {w, S}\right)$ . The lower bound is as follows (proof in Appendix B).
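A small numpy sketch of $N(w, S)$ (function name ours; the metric space is assumed given as a distance matrix, with $S$ a list of indices):

```python
import numpy as np

def packing_mass(dist, w, S, eps):
    """N(w, S) = sum over s in S of exp(-eps * d_W(w, s))."""
    return float(np.exp(-eps * dist[w, list(S)]).sum())
```

For a $w \in S$, the term for $s = w$ contributes 1 and each of the other $c - 1$ terms is at least $\exp(-Q\epsilon)$, which is exactly the bound above.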
246
+
247
+ Theorem 3. Consider an arbitrary set $\mathcal{W}$ and metric ${d}_{\mathcal{W}}$ : $\mathcal{W} \times \mathcal{W} \rightarrow \mathbb{R}$ . Then, for any $\epsilon > 0$ , any mechanism $\mathcal{M}$ satisfying $\epsilon$ - ${d}_{\mathcal{W}}$ privacy and any $\left( {c, r, Q}\right)$ -packing $S$ of $\mathcal{W}$ , it holds that
248
+
249
+ $$
250
+ \mathcal{L}\left( \mathcal{M}\right) \geq \mathop{\max }\limits_{{w \in \mathcal{W}}}r\left( {1 - \frac{1}{N\left( {w, S}\right) }}\right) . \tag{7}
251
+ $$
252
+
253
+ It follows that
254
+
255
+ $$
256
+ \mathcal{L}\left( \mathcal{M}\right) \geq r\left( {1 - \frac{1}{1 + \left( {c - 1}\right) \exp \left( {-{Q\epsilon }}\right) }}\right) . \tag{8}
257
+ $$
258
+
259
+ Both (7) and (8) have a simple interpretation. The $r$ term represents the minimum loss that must be incurred when a mechanism returns an element that is in a different ball of the packing than the starting ball. The $r$ term is multiplied by $P = 1 - \frac{1}{N\left( {w, S}\right) }$ , which can be interpreted as a probability since it is between 0 and 1 (because $N\left( {w, S}\right) \geq 1$ ). As we show in the theorem proof, $P$ is a lower bound on the probability that the mechanism returns an element in a different ball from $w$ and thus incurs the error $r$ . $P$ increases with $N\left( {w, S}\right)$ , which depends on the packing in the ways we identified above. $P$ is small only when $N\left( {w, S}\right)$ approaches 1, and using the bound $N\left( {w, S}\right) \geq 1 + \left( {c - 1}\right) \exp \left( {-{Q\epsilon }}\right)$ , we see the central term controlling its closeness to 1 is $\left( {c - 1}\right) \exp \left( {-{Q\epsilon }}\right)$ . Here the parameter $Q$ crucially comes into play: if the elements in the packing are too far apart, then mDP is a weak privacy guarantee, $N\left( {w, S}\right)$ will approach 1, and the lower bound will weaken.
260
+
261
+ <table><tr><td/><td>#Variables</td><td>#Constraints</td><td>#Non-Zeroes</td></tr><tr><td>Optimal LP: ${\mathcal{P}}_{\text{OPTMECH }}$</td><td>$O\left( {n}^{2}\right)$</td><td>$O\left( {n}^{3}\right)$</td><td>$O\left( {n}^{3}\right)$</td></tr><tr><td>Spanner-based LP: ${\mathcal{P}}_{\text{SpannerMECH }}$ [Bordenabe et al.,2014]</td><td>$O\left( {n}^{2}\right)$</td><td>$O\left( {n}^{2.5}\right)$</td><td>$O\left( {n}^{2.5}\right)$</td></tr><tr><td>Our Method: ${\mathcal{P}}_{\text{CONSTOPTMECH }}$ with function ${I}_{\mathrm{{NN}}, r}$</td><td>$O\left( {nr}\right)$</td><td>$O\left( {{n}^{2}r}\right)$</td><td>$O\left( {{n}^{2}r}\right)$</td></tr></table>
262
+
263
+ Table 1: Comparison of the number of variables and constraints for the various LP-based methods achieving mDP. Note that ${\mathcal{P}}_{\text{CONSTOPTMECH }}$ improves on existing methods when $r \ll n$ .
264
+
265
+ An important special case of our theorem occurs when we take $S$ to be the two farthest elements ${w}_{\max }^{1},{w}_{\max }^{2} \in \mathcal{W}$ . In this case, $S$ is a $\left( {2,\frac{{r}^{ * }}{2},{r}^{ * }}\right)$ -packing where ${r}^{ * } = {d}_{\mathcal{W}}\left( {{w}_{\max }^{1},{w}_{\max }^{2}}\right)$ . Our lower bound then reads $\mathcal{L}\left( \mathcal{M}\right) \geq \frac{{r}^{ * }}{2}\left( \frac{\exp \left( {-{r}^{ * }\epsilon }\right) }{1 + \exp \left( {-{r}^{ * }\epsilon }\right) }\right)$ .
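Both the general bound (7) and this two-point special case are cheap to evaluate; a sketch under our usual distance-matrix representation (function names ours):

```python
import numpy as np

def packing_lower_bound(dist, S, r, eps):
    """Theorem 3, eq. (7): L(M) >= max_w r * (1 - 1/N(w, S))
    for any (c, r, Q)-packing S (list of indices) of the metric space."""
    S = list(S)
    best = 0.0
    for w in range(dist.shape[0]):
        N = np.exp(-eps * dist[w, S]).sum()
        best = max(best, r * (1.0 - 1.0 / N))
    return best

def two_point_lower_bound(dist, eps):
    """Special case: S = the two farthest elements, a (2, r*/2, r*)-packing,
    giving L(M) >= (r*/2) * exp(-r* eps) / (1 + exp(-r* eps))."""
    r_star = dist.max()
    return (r_star / 2) * np.exp(-r_star * eps) / (1 + np.exp(-r_star * eps))
```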
266
+
267
+ ## 5 EXPERIMENTAL RESULTS
268
+
269
+ We investigate through experiments how the loss of our proposed mechanism, ConstOPTMech, compares to other state-of-the-art mDP mechanisms. We also include comparisons to our loss lower bound (derived in Section 4). Furthermore, we perform studies to compare ConstOPTMech to SpannerMech, as it is the most directly related work. To do this, we experimentally evaluate the complexity of solving the LPs used to compute both mechanisms. We focus on text embeddings and geolocation metric spaces because, as noted in Section 1, mDP mechanisms have primarily been used for privately releasing text and location data.
270
+
271
+ ### 5.1 EXPERIMENTAL SETUP
272
+
273
+ Our experiments consist of generating metric spaces in both application domains and then running two types of experiments. The first evaluates privacy vs. loss on a fixed metric space. The second evaluates scalability as the size of the metric space grows.
274
+
275
+ We measure utility (loss) of a mechanism based on (1). Since this loss is agnostic to any downstream modeling task performed on these private releases, we do not focus on any specific downstream task.
276
+
277
+ Metric Space Generation. To produce a metric space of a specific size, we sample metric spaces differently depending on the application.
278
+
279
+ For text embeddings, we sample from a base metric space consisting of the set of English words and the metric ${d}_{\mathcal{W}}$ induced by a text embedding $\phi : \mathcal{W} \rightarrow {\mathbb{R}}^{d}$ . Precisely, we used ${d}_{\mathcal{W}}\left( {u, v}\right) = d\left( {\phi \left( u\right) ,\phi \left( v\right) }\right)$ , where $d$ is the Euclidean distance. We used both the FastText [Bojanowski et al., 2017] and the GloVe embedding [Pennington et al., 2014] for our embedding $\phi$ . To sample a metric space from a base metric space, we selected a subset ${\mathcal{W}}^{\prime }$ of English words. Instead of selecting ${\mathcal{W}}^{\prime }$ at random, which would likely produce a set of completely unrelated words with roughly the same distance between each pair, we used a clustered approach. First, we let ${\mathcal{W}}^{\prime }$ consist of one random word. To sample another word, we add a random word to ${\mathcal{W}}^{\prime }$ with ${50}\%$ chance. Otherwise we select a random word $w$ in ${\mathcal{W}}^{\prime }$ and add one of $w$ ’s 50 closest English words according to ${d}_{\mathcal{W}}$ . This allows us to produce samples of the larger metric space that have representative clusters of words. We repeat this process until the metric space has the desired size.
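A sketch of this clustered sampling procedure (our own index-based implementation; the base space is assumed given as a distance matrix, and `k_close` plays the role of the 50 closest words):

```python
import numpy as np

def clustered_sample(dist, n_target, k_close=50, rng=None):
    """Clustered sampling of a sub-metric-space (returns indices into the
    base space): start from one random element; then, with probability 1/2,
    add a uniform random element, otherwise add one of the k_close nearest
    neighbors of an element already selected."""
    rng = rng or np.random.default_rng(0)
    n = dist.shape[0]
    chosen = {int(rng.integers(n))}
    while len(chosen) < n_target:
        if rng.random() < 0.5:
            chosen.add(int(rng.integers(n)))
        else:
            w = int(rng.choice(list(chosen)))
            # w's k_close nearest neighbors in the base space, excluding w.
            order = [int(v) for v in np.argsort(dist[w]) if v != w][:k_close]
            chosen.add(int(rng.choice(order)))
    return sorted(chosen)
```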
280
+
281
+ For the geolocation application, we used the method of Bordenabe et al. [2014], which uses the Geolife [Zheng et al., 2010] dataset. The Geolife dataset consists of 17621 location traces clustered around Beijing. We divide Beijing into rectangular regions of ${0.005}^{ \circ }$ (about ${0.6}\mathrm{\;{km}}$ ) in width and height. For each trace, we consider its top 30 regions, and we form a histogram of top regions across all traces. To form a metric space of size $n$ , we take the $n$ most popular regions in the histogram. Our metric is the Euclidean distance between the centers of the regions.
282
+
283
+ Performance Benchmarks. We use the following benchmarks to measure mechanism privacy, loss, and scalability.
284
+
285
+ Privacy: In practice it is usually acceptable to use $\left( {\epsilon ,\delta }\right)$ -DP for some small $\delta$ . We adopt $\left( {\epsilon ,\delta }\right)$ -mDP for our experiments, as we do not want to penalize an algorithm for having some small probability of two elements $u, v$ being distinguished. We say $M$ satisfies $\left( {\epsilon ,\delta }\right)$ -mDP if for any $u, v \in \mathcal{W}$ and $S \subseteq \mathcal{W}$ , we have
286
+
287
+ $$
288
+ \Pr \left\lbrack {M\left( u\right) \in S}\right\rbrack \leq {e}^{\epsilon {d}_{\mathcal{W}}\left( {u, v}\right) }\Pr \left\lbrack {M\left( v\right) \in S}\right\rbrack + \delta . \tag{9}
289
+ $$
290
+
291
+ For a fixed $\delta$ , we let ${\epsilon }_{\text{tight }}$ be the smallest $\epsilon$ such that $M$ satisfies $\left( {\epsilon ,\delta }\right)$ -mDP:
292
+
293
+ $$
294
+ {\epsilon }_{\text{tight }}\left( M\right) = \inf \left\{ {\epsilon \geq 0 : M\text{ satisfies }\left( {\epsilon ,\delta }\right) \text{-mDP}}\right\} \tag{10}
295
+ $$
296
+
297
+ In our experiments, we set $\delta = {0.001}$ .
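On a finite space, ${\epsilon }_{\text{tight }}$ can be computed numerically: for a fixed $\epsilon$ , the worst-case set $S$ in (9) for a pair $(u, v)$ collects exactly the outcomes $w$ with ${M}_{uw} > {e}^{\epsilon {d}_{\mathcal{W}}\left( {u, v}\right) }{M}_{vw}$ , and the resulting slack $\delta(\epsilon)$ is nonincreasing in $\epsilon$ , so a binary search applies. A sketch (our own implementation, not necessarily the one used in the paper):

```python
import numpy as np

def delta_at(M, dist, eps):
    """Smallest additive slack delta such that M satisfies (eps, delta)-mDP."""
    n = M.shape[0]
    worst = 0.0
    for u in range(n):
        for v in range(n):
            if u == v:
                continue
            # Worst set S keeps exactly the outcomes with positive gap.
            gap = M[u] - np.exp(eps * dist[u, v]) * M[v]
            worst = max(worst, float(np.clip(gap, 0.0, None).sum()))
    return worst

def eps_tight(M, dist, delta, hi=20.0, iters=60):
    """Binary search for eq. (10): smallest eps with delta_at(...) <= delta."""
    lo = 0.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if delta_at(M, dist, mid) <= delta:
            hi = mid
        else:
            lo = mid
    return hi
```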
298
+
299
+ Loss: For practical considerations, we use a more robust measurement of loss in our experiments, where it may not be problematic if the mechanism performs poorly on a small fraction of elements. Instead of using the maximum loss over all elements $\mathcal{L}\left( M\right)$ (1), we use the $q$ -th quantile of the per-element losses:
300
+
301
+ $$
302
+ {\mathcal{L}}_{q}\left( M\right) = {\operatorname{quantile}}_{q}\left( {\{ \mathcal{L}\left( {M, w}\right) : w \in \mathcal{W}\} }\right) \tag{11}
303
+ $$
304
+
305
+ ---
306
+
307
+ ${}^{6}$ In this section, we use ConstOPTMech to denote Algorithm 1 invoked with replacement function ${I}_{\mathrm{{NN}}, r}$ .
308
+
309
+ ---
310
+
311
+ ![019638e8-71d8-7466-89c8-4ab198d4c272_6_153_174_703_535_0.jpg](images/019638e8-71d8-7466-89c8-4ab198d4c272_6_153_174_703_535_0.jpg)
312
+
313
+ Figure 1: Loss of Madlib (EuclidMech), EXPMECH, Con-stOPTMech, SpannerMech, and OPTMech versus ${\epsilon }_{\text{tight }}$ on 50 and 200-size metric spaces generated from FastText and Geolife, along with the lower bound. The horizontal line indicates the loss of returning a uniform random element.
314
+
315
+ where the quantile is taken over the set $\{ \mathcal{L}\left( {M, w}\right) : w \in \mathcal{W}\}$ .
316
+
317
+ This loss estimate allows mechanisms to perform poorly on a small subset of the metric space, which in practice may be outlier or noisy data. In all experiments, we use $q = {95}\%$ so that mechanisms are evaluated based on their losses on the best 95% of elements.
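Computing (11) from a transition matrix is a one-liner with numpy (function name ours; `dist` is the distance matrix, assumed symmetric):

```python
import numpy as np

def quantile_loss(M, dist, q=0.95):
    """Eq. (11): the q-th quantile of the per-element expected losses
    L(M, w) = sum_u M_{w,u} d_W(w, u)."""
    per_element = (M * dist).sum(axis=1)  # L(M, w) for each w
    return float(np.quantile(per_element, q))
```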
318
+
319
+ LP Scalability: To measure the scalability of our mechanisms, we measured the time and number of nonzero coefficients (NNZ) used in the LPs. We report the number of nonzero coefficients rather than the number of variables or constraints, since LP solvers tend to be optimized toward solving sparse LPs. We also consider computation time to be an important measure, as it captures complexity beyond the NNZ. For mechanisms that do not require linear programs, the computational requirements are trivial and we do not test them.
320
+
321
+ Specific Details for Each Mechanism. We tested five mechanisms: a) our proposed ConstOPTMech, b) OPTMech (based on solving ${\mathcal{P}}_{\text{OPTMECH }}$ ), c) SpannerMech [Bordenabe et al., 2014], d) the Madlib mechanism [Feyisetan et al., 2020], and e) EXPMECH [McSherry and Talwar, 2007]. Madlib, EXPMECH, and OPTMech have no further parameters other than $\epsilon$ . For SpannerMech, we implement the algorithm as it is described in Bordenabe et al. [2014].
322
+
323
+ Mechanism ConstOPTMech takes $\lambda$ and $I\left( v\right)$ as parameters (see Algorithm 1). We optimize over $\lambda$ with possible values in $\{ {0.001},{0.1},{1.0}\}$ . For the replacement function $I\left( v\right)$ , we use ${I}_{\mathrm{{NN}}, r}\left( v\right)$ (5). We try both $r = 5$ and $r = {10}$ , and we will designate these values in our results.
324
+
325
+ Evaluating Lower Bound. For each metric space, we computed our lower bound according to Theorem 3. This theorem produces a lower bound for any $\left( {c, r, Q}\right)$ -packing of $\mathcal{W}$ and any $\epsilon$ . However, it is infeasible to try out every possible $\left( {c, r, Q}\right)$ -packing. Instead, we generated candidate $\left( {c, r, Q}\right)$ -packings using a $k$ -center algorithm, using values of $k$ that varied from 1 to the size of the metric space. For each value of $\epsilon$ that we tested, we used the strongest lower bound given by one of our generated $\left( {c, r, Q}\right)$ -packings.
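A sketch of this packing-generation step, using the classic greedy (Gonzalez-style) $k$-center heuristic. Turning the chosen centers into a valid $(c, r, Q)$-packing uses the fact that balls of radius $r$ with $2r$ strictly less than the minimum pairwise center distance are disjoint by the triangle inequality (function names and the 0.499 factor are ours):

```python
import numpy as np

def greedy_k_center(dist, k):
    """Gonzalez-style greedy k-center: pick a start, then repeatedly add the
    element farthest from the current centers. Used only to generate
    candidate packings."""
    centers = [0]
    for _ in range(k - 1):
        d_to_centers = dist[:, centers].min(axis=1)
        centers.append(int(d_to_centers.argmax()))
    return centers

def packing_params(dist, S):
    """Turn a candidate center set S into a valid (c, r, Q)-packing:
    c = |S|, Q = max pairwise distance, r = just under half the min pairwise
    distance, so radius-r balls around distinct centers are disjoint."""
    D = dist[np.ix_(S, S)]
    off_diag = D[~np.eye(len(S), dtype=bool)]
    return len(S), 0.499 * float(off_diag.min()), float(off_diag.max())
```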
326
+
327
+ Experimental Outline. The first experiments we conducted are utility experiments. We test which mechanisms are better at minimizing loss, subject to privacy constraints. To do this, we plot ${\mathcal{L}}_{q}$ versus ${\epsilon }_{\text{tight }}$ for a metric space consisting of 50 and 200 elements generated from FastText, GloVe, or Geolife. We also plot the lower bound. For ConstOPTMech, we use $r = {10}$ . We do not run OPTMech for metric spaces of size 200, as the number of constraints would be ${200}^{3}$ which is too large. We do not include comparisons to Madlib for the Geolife dataset, as it is designed for text embeddings.
328
+
329
+ Next, we conduct scalability experiments on the mechanisms that involve solving LPs. We do this by fixing a privacy constraint ${\epsilon }_{\text{tight }}$ and, for each mechanism, testing the NNZ and time taken as the size of the metric space grows. We increased the number of samples in the metric space starting at 50 and increasing in increments of 50 until we reached 400 elements or the time spent solving the LP exceeded 1800 seconds. We fix ${\epsilon }_{\text{tight }} = {2.0}$ for metric spaces sampled from FastText, at 1.0 for those sampled from GloVe, and at 0.3 for those sampled from Geolife.
330
+
331
+ ### 5.2 RESULTS
332
+
333
+ We discuss the experimental results for metric spaces generated from FastText and Geolife. The results for GloVe are similar, and they appear in Appendix C.
334
+
335
+ Utility Experiments: Plots appear in Figure 1. In all tests, ConstOPTMech has lower loss than all other non-optimal mechanisms at all values of $\epsilon$ . This includes high privacy regimes, where the loss is near the loss of returning a uniformly random element, and low privacy regimes, where the loss approaches zero. The most pronounced improvement in loss occurs in middle ranges of $\epsilon$ (about $\left\lbrack {{1.0},{3.5}}\right\rbrack$ for FastText and about $\left\lbrack {{0.3},{0.8}}\right\rbrack$ for Geolife), where ConstOPTMech offers an improvement of about ${15} - {30}\%$ over all other non-optimal mechanisms. For example, when $\epsilon = 3$ on the 200-size FastText sample, ConstOPTMech offers a loss of 1.0, while the next-best mechanism, EXPMECH, offers a loss of 1.3. This represents a ${23}\%$ reduction. On the metric space sampled from Geolife of size 50, the loss reduction is as high as ${50}\%$ at $\epsilon = {0.3}$ . The middle ranges of $\epsilon$ where ConstOPTMech is superior are the values with the most practical importance, since at these ranges the losses are far from the random baseline yet still nonzero: the mechanisms offer both utility and privacy.
336
+
337
+ On the sampled metric spaces of size 50, ConstOPTMech attains only slightly worse loss than OPTMech, the optimal mechanism. As $\epsilon$ grows past 1.5 for FastText and 0.7 for
338
+
339
+ <table><tr><td>FastText</td><td>50 elements</td><td>150 elements</td><td>300 elements</td><td>400 elements</td></tr><tr><td>ConstOPTMech (10)</td><td>1.89 sec, 3.23e4 nnz</td><td>24.91 sec, 1.47e5 nnz</td><td>163.75 sec, 3.79e5 nnz</td><td>234.63 sec, 5.71e5 nnz</td></tr><tr><td>ConstOPTMech (5)</td><td>0.47 sec, 1.21e4 nnz</td><td>12.85 sec, 6.60e4 nnz</td><td>65.56 sec, 2.19e5 nnz</td><td>141.90 sec, 3.70e5 nnz</td></tr><tr><td>SpannerMech</td><td>1.25 sec, 2.12e4 nnz</td><td>62.16 sec, 2.56e5 nnz</td><td>1001.82 sec, 1.19e6 nnz</td><td>> 1800 sec, — nnz</td></tr><tr><td>OPTMech</td><td>8.85 sec, 2.50e5 nnz</td><td colspan="3">> 1800 sec, — nnz</td></tr><tr><td>Geolife</td><td>50 elements</td><td>150 elements</td><td>300 elements</td><td>400 elements</td></tr><tr><td>ConstOPTMech (10)</td><td>1.41 sec, 1.74e4 nnz</td><td>13.85 sec, 8.05e4 nnz</td><td>80.24 sec, 2.51e5 nnz</td><td>169.52 sec, 4.13e5 nnz</td></tr><tr><td>ConstOPTMech (5)</td><td>0.40 sec, 8.30e3 nnz</td><td>12.53 sec, 5.47e4 nnz</td><td>66.28 sec, 1.99e5 nnz</td><td>145.74 sec, 3.46e5 nnz</td></tr><tr><td>SpannerMech</td><td>0.95 sec, 1.56e4 nnz</td><td>40.38 sec, 1.49e5 nnz</td><td>928.72 sec, 6.05e5 nnz</td><td>> 1800 sec, — nnz</td></tr><tr><td>OPTMech</td><td>25.39 sec, 2.50e5 nnz</td><td colspan="3">> 1800 sec, — nnz</td></tr></table>
340
+
341
+ Table 2: Computation times and memory requirements for computing ConstOPTMech when $r = {10},5$ ; SpannerMech; and OPTMech, for varying metric space sizes generated from the FastText and Geolife datasets. Each mechanism satisfies ${\epsilon }_{\text{tight }} = {2.0}$ (FastText) and 0.3 (Geolife).
342
+
343
+ ![019638e8-71d8-7466-89c8-4ab198d4c272_7_151_714_699_275_0.jpg](images/019638e8-71d8-7466-89c8-4ab198d4c272_7_151_714_699_275_0.jpg)
344
+
345
+ Figure 2: Loss of EXPMECH, ConstOPTMech (with $r =$ $5,{10})$ , and SpannerMech versus size of metric space for metric spaces generated from FastText and Geolife. Here, ${\epsilon }_{\text{tight }}$ is fixed at 2.0 (FastText) and 0.3 (Geolife).
346
+
347
+ Geolife, their losses become virtually the same. Because we are using $r = {10}$ , this means just ${50} \times {10} = {500}$ entries out of the 2500 entries in the transition matrix are not fixed. This suggests that the 10 nearest neighbors to an element play the largest role in minimizing the element's loss.
348
+
349
+ In all scenarios tested, there is a large gap between the lower bound and the losses of the mechanisms, even the optimal mechanism. Hence, it is uncertain how close ConstOPTMech is to OPTMech on the metric spaces of size 200.
350
+
351
+ Scalability Experiments: We were able to run ConstOPTMech until 400 elements, whereas OPTMech timed out at 100 elements and SpannerMech timed out at 350 elements. Table 2 shows some of the time and NNZ data for the mechanisms. These results indicate that computing ConstOPTMech is faster than computing SpannerMech. This is particularly evident for the metric spaces with size 150 (resp. 300), where ConstOPTMech with $r = {10}$ uses at most ${40}\%$ (resp. ${16}\%$ ) as much time as SpannerMech, and ConstOPTMech with $r = 5$ uses at most ${20}\%$ (resp. 6.5%) as much time. These faster times come despite our optimization of $\lambda$ in ConstOPTMech, which requires solving three LPs. In other words, the actual time to solve one LP used in ConstOPTMech is one third of the reported times.
352
+
353
+ In terms of NNZ, ConstOPTMech with $r = {10}$ uses at most ${57}\%$ (resp. ${32}\%$ ) as many non-zero coefficients as SpannerMech, and ConstOPTMech with $r = 5$ uses at most 37% (resp. 18%) as many, on the metric spaces of sizes 150 and 300. One reason these savings are smaller than the time improvements is that the LPs used by ConstOPTMech are structurally simpler than those used by SpannerMech, which, for example, have more variables ( $O\left( {n}^{2}\right)$ versus $O\left( {nr}\right)$ ).
354
+
355
+ All the previous mechanisms have improved performance over OPTMech, which uses moderate time and NNZ on metric spaces with 50 elements and does not scale to 100 elements and beyond.
356
+
357
+ In Figure 2, we show our plots of loss versus the size of the metric space when ${\epsilon }_{\text{tight }}$ is fixed. The plots confirm that ConstOPTMech also has lower loss than SpannerMech. At metric space sizes below 200, ConstOPTMech with $r = {10}$ has approximately a ${22}\%$ (resp. ${12}\%$ ) reduction in loss compared to SpannerMech on FastText and Geolife, though this reduces to about ${10}\%$ on FastText (resp. $< 5\%$ on Geolife) for metric space sizes greater than 200. ConstOPTMech with $r = 5$ similarly outperforms SpannerMech on smaller metric spaces, though on the larger FastText metric spaces it performs worse than SpannerMech, a symptom of using too few nearest neighbors. On large metric spaces, more nearest neighbors must be used to maintain low loss.
358
+
359
+ ## 6 CONCLUSION
360
+
361
+ We tackle the problem of designing scalable metric differential privacy mechanisms that achieve near optimal utility. Our new mechanism combines the optimal LP-based mechanism and the exponential mechanism to achieve a better utility-scalability tradeoff than existing mechanisms. We also provide a simple to compute lower bound that improves our understanding of the optimal utility. Our experiments show that our mechanism is computationally tractable on larger metric spaces while also almost matching the utility of the optimal LP-based mechanism. While our mechanism operates on any metric space, an interesting question is whether the geometry of the metric space can be leveraged to improve either utility or scalability.
362
+
363
+ Reyan Ahmed, Greg Bodwin, Faryad Darabi Sahneh, Keaton Hamm, Mohammad Javad Latifi Jebelli, Stephen Kobourov, and Richard Spence. Graph Spanners: A Tutorial Review. arXiv:1909.03152 [cs, math], March 2020. URL http://arxiv.org/abs/1909.03152.
364
+
365
+ Francesco Aldà and Hans Ulrich Simon. On the Optimality of the Exponential Mechanism. In Shlomi Dolev and Sachin Lodha, editors, Cyber Security Cryptography and Machine Learning, volume 10332, pages 68-85. Springer International Publishing, Cham, 2017. ISBN 978-3-319-60079-6, 978-3-319-60080-2. doi: 10.1007/978-3-319-60080-2_5. URL http://link.springer.com/10.1007/978-3-319-60080-2_5. Series Title: Lecture Notes in Computer Science.
366
+
367
+ Miguel E Andrés, Nicolás E Bordenabe, Konstantinos Chatzikokolakis, and Catuscia Palamidessi. Geo-indistinguishability: Differential privacy for location-based systems. In Proceedings of the 2013 ACM SIGSAC conference on Computer & communications security, pages 901-914, 2013.
368
+
369
+ Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. Enriching word vectors with subword information. TACL, 5, 2017.
370
+
371
+ Nicolás E. Bordenabe, Konstantinos Chatzikokolakis, and Catuscia Palamidessi. Optimal Geo-Indistinguishable Mechanisms for Location Privacy. Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, pages 251-262, November 2014. doi: 10.1145/2660267.2660345. URL http://arxiv.org/abs/1402.5029.
372
+
373
+ Konstantinos Chatzikokolakis, Miguel E Andrés, Nicolás Emilio Bordenabe, and Catuscia Palamidessi. Broadening the scope of differential privacy using metrics. In PETS, 2013.
374
+
375
+ Konstantinos Chatzikokolakis, Catuscia Palamidessi, and Marco Stronati. Constructing elastic distinguishability metrics for location privacy. PETS, 2015.
376
+
377
+ Cynthia Dwork and Aaron Roth. The algorithmic foundations of differential privacy. Theoretical Computer Science, 9(3-4), 2013.
378
+
379
+ Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. In ${TCC}$ , pages 265-284. Springer,2006.
380
+
381
+ Natasha Fernandes, Mark Dras, and Annabelle McIver. Generalised differential privacy for text document processing. Principles of Security and Trust, 2019.
382
+
383
+ Oluwaseyi Feyisetan and Shiva Kasiviswanathan. Private release of text embedding vectors. In Proceedings of the First Workshop on Trustworthy Natural Language Processing, pages 15-27, 2021.
384
+
385
+ Oluwaseyi Feyisetan, Tom Diethe, and Thomas Drake. Leveraging hierarchical representations for preserving privacy and utility in text. In IEEE ICDM, 2019.
386
+
387
+ Oluwaseyi Feyisetan, Borja Balle, Thomas Drake, and Tom Diethe. Privacy-and utility-preserving textual analysis via calibrated multivariate perturbations. In Proceedings of the 13th International Conference on Web Search and Data Mining, pages 178-186, 2020.
388
+
389
+ Moritz Hardt and Kunal Talwar. On the Geometry of Differential Privacy. arXiv:0907.3754 [cs], November 2009. URL http://arxiv.org/abs/0907 3754. arXiv: 0907.3754.
390
+
391
+ Shunhua Jiang, Zhao Song, Omri Weinstein, and Hengjie Zhang. A faster algorithm for solving general lps. In Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing, pages 823-832, 2021.
392
+
393
+ Shiva Kasiviswanathan, Homin Lee, Kobbi Nissim, Sofya Raskhodnikova, and Adam Smith. What can we learn privately? SIAM Journal on Computing, 40(3), 2011.
394
+
395
+ Fragkiskos Koufogiannis, Shuo Han, and George J Pappas. Gradual release of sensitive data under differential privacy. Journal of Privacy and Confidentiality, 7(2), 2016.
396
+
397
+ Frank McSherry and Kunal Talwar. Mechanism design via differential privacy. In FOCS, volume 7, pages 94-103, 2007.
398
+
399
+ Jeffrey Pennington, Richard Socher, and Christopher Manning. Glove: Global vectors for word representation. In EMNLP, pages 1532-1543, 2014.
400
+
401
+ Zekun Xu, Abhinav Aggarwal, Oluwaseyi Feyisetan, and Nathanael Teissier. A differentially private text perturbation method using regularized mahalanobis metric. In Proceedings of the Second Workshop on Privacy in NLP at EMNLP 2020, pages 7-17, 2020.
402
+
403
+ Yu Zheng, Xing Xie, and Wei-Ying Ma. Geolife: A collaborative social networking service among user, location and trajectory. IEEE Data(base) Engineering Bulletin, June 2010. URL https://www.microsoft com/en-us/research/publication/ geolife-a-collaborative-social-networking-serv
404
+
405
## A REDUCING THE SIZE OF THE LP WITH SPANNER GRAPHS

Bordenabe et al. [2014] observed that the number of privacy constraints in (2) can be reduced using a spanner graph with far fewer than $O(n^2)$ edges to approximate the metric $d_{\mathcal{W}}$. The privacy constraints (2) then only need to be applied to elements $u, v$ such that $(u, v)$ is an edge of the spanner graph, and the spanner property ensures that all remaining privacy constraints approximately hold. For a spanner graph with $|E|$ edges, this results in $O(n|E|)$ privacy constraints, rather than $O(n^3)$. We now provide the formal details.

Definition 3. (Spanner graph) Given a metric space $(\mathcal{W}, d_{\mathcal{W}})$, a graph $G = (\mathcal{W}, E)$ with edge weights given by $d_{\mathcal{W}}$ induces a metric $d_G(u, v)$ on $\mathcal{W}$ defined by the length of the shortest path from $u$ to $v$ in $G$. The graph $G$ is called a $\delta$-spanner if for all $u, v \in \mathcal{W}$, $d_{\mathcal{W}}(u, v) \leq d_G(u, v) \leq \delta d_{\mathcal{W}}(u, v)$.

To construct a spanner, one can use the folklore greedy algorithm (e.g., [Ahmed et al., 2020]). For a positive integer $k$, the greedy algorithm constructs a $(2k-1)$-spanner with $O(n^{1+1/k})$ edges. Instead of requiring that mDP holds for all pairs $u, v \in \mathcal{W}$, the spanner method only requires it to hold for the edges of $G$:

$$
M_{uw} \leq M_{vw} \exp\left(\epsilon d_{\mathcal{W}}(u, v)\right) \quad \forall (u, v) \in E, w \in \mathcal{W} \tag{12}
$$

The modified linear program for finding the optimal mechanism under just the spanner constraints is:

$$
\mathcal{P}_{\text{SPANNERMECH}}(\epsilon, G) = \text{minimize } k \text{ subject to}
$$

$$
\mathcal{L}(M, w) \leq k \quad \forall w \in \mathcal{W}
$$

$M$ stochastic

$M$ satisfies (12)

Bordenabe et al. [2014] prove that the mechanism produced by $\mathcal{P}_{\text{SPANNERMECH}}(\epsilon, G)$ satisfies $\delta\epsilon$-$d_{\mathcal{W}}$ privacy. The proof follows directly from the definition of a $\delta$-spanner.

Theorem 4. [Bordenabe et al., 2014] Let $G = (\mathcal{W}, E)$ be a $\delta$-spanner. The mechanism $M$ produced by $\mathcal{P}_{\text{SPANNERMECH}}(\epsilon, G)$ satisfies $\delta\epsilon$-$d_{\mathcal{W}}$ privacy.

The number of variables in $\mathcal{P}_{\text{SPANNERMECH}}$ is about the same as that of $\mathcal{P}_{\text{OPTMECH}}$, but the number of constraints is reduced from $n^3$ to $n|E|$. Using the greedy spanner algorithm, the number of constraints and non-zero coefficients is $O(n^{2+1/k})$, while the mDP guarantee is $(2k-1)\epsilon$. Following [Bordenabe et al., 2014], we instantiate the algorithm with a greedy spanner with $k = 2$, so the number of constraints and non-zero coefficients is bounded by $O(n^{2.5})$.

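For concreteness, the folklore greedy construction can be sketched as follows. This is a minimal illustration in Python, not the authors' implementation: the function names (`graph_dist`, `greedy_spanner`) and the Dijkstra-based distance query are our own. The greedy rule is the standard one: scan pairs by increasing distance and add an edge only when the current graph distance would otherwise violate the $(2k-1)$ stretch.

```python
import heapq
from itertools import combinations

def graph_dist(adj, src, dst):
    """Shortest-path distance between src and dst in a weighted adjacency dict (Dijkstra)."""
    best = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > best.get(u, float("inf")):
            continue  # stale queue entry
        if u == dst:
            return d
        for v, w in adj[u].items():
            nd = d + w
            if nd < best.get(v, float("inf")):
                best[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

def greedy_spanner(points, dist, k):
    """Greedy (2k-1)-spanner: process pairs by increasing d_W and add an edge
    only if the current graph distance exceeds (2k-1) times the metric distance."""
    t = 2 * k - 1
    adj = {p: {} for p in points}
    for u, v in sorted(combinations(points, 2), key=lambda e: dist(*e)):
        d = dist(u, v)
        if graph_dist(adj, u, v) > t * d:
            adj[u][v] = d
            adj[v][u] = d
    return adj
```

By construction, every pair is either connected by a short path at the moment it is scanned or gets a direct edge, so the final graph satisfies $d_G(u, v) \leq (2k-1)\, d_{\mathcal{W}}(u, v)$ for all pairs.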
## B OMITTED PROOFS

We first prove that a weighted form of the exponential mechanism satisfies mDP.

Proposition 1. For any metric space and any $\mathbf{Y} \in (\mathbb{R}^+)^n$, the mechanism $\operatorname{EXPMECH}_{\mathbf{Y}}$ satisfies $\epsilon$-$d_{\mathcal{W}}$ privacy.

Proof. Let $\mathbf{Y} \in (\mathbb{R}^+)^n$ be arbitrary. Our goal is to show, for all $u, u', v \in \mathcal{W}$,

$$
\Pr\left[\operatorname{EXPMECH}_{\mathbf{Y}}(u) = v\right] \leq \exp\left(\epsilon d_{\mathcal{W}}(u, u')\right) \Pr\left[\operatorname{EXPMECH}_{\mathbf{Y}}(u') = v\right]
$$

For positive real numbers $A, B, \delta$, let $A \approx_{\delta} B$ denote that $A \leq \exp(\delta) B$ and that $B \leq \exp(\delta) A$. We will show that

$$
\Pr\left[\operatorname{EXPMECH}_{\mathbf{Y}}(u) = v\right] \approx_{\epsilon d_{\mathcal{W}}(u, u')} \Pr\left[\operatorname{EXPMECH}_{\mathbf{Y}}(u') = v\right], \tag{13}
$$

and this will imply our goal.

It is easy to see that if $A_1 \approx_{\delta} B_1$ and $A_2 \approx_{\delta} B_2$, then (i) $A_1 + A_2 \approx_{\delta} B_1 + B_2$, (ii) $A_1 A_2 \approx_{2\delta} B_1 B_2$, and (iii) $\frac{1}{A_1} \approx_{\delta} \frac{1}{B_1}$. By the triangle inequality, we have that

$$
d_{\mathcal{W}}(u', v) \leq d_{\mathcal{W}}(u, u') + d_{\mathcal{W}}(u, v)
$$

$$
\frac{-d_{\mathcal{W}}(u, v)}{2} \leq \frac{d_{\mathcal{W}}(u, u') - d_{\mathcal{W}}(u', v)}{2}
$$

$$
e^{-\epsilon d_{\mathcal{W}}(u, v)/2} \leq e^{\epsilon d_{\mathcal{W}}(u, u')/2} e^{-\epsilon d_{\mathcal{W}}(u', v)/2}
$$

and symmetrically,

$$
d_{\mathcal{W}}(u, v) \leq d_{\mathcal{W}}(u', v) + d_{\mathcal{W}}(u, u')
$$

$$
\frac{-d_{\mathcal{W}}(u', v)}{2} \leq \frac{d_{\mathcal{W}}(u, u') - d_{\mathcal{W}}(u, v)}{2}
$$

$$
e^{-\epsilon d_{\mathcal{W}}(u', v)/2} \leq e^{\epsilon d_{\mathcal{W}}(u, u')/2} e^{-\epsilon d_{\mathcal{W}}(u, v)/2}
$$

This implies that $e^{-\epsilon d_{\mathcal{W}}(u, v)/2} \approx_{\epsilon d_{\mathcal{W}}(u, u')/2} e^{-\epsilon d_{\mathcal{W}}(u', v)/2}$ for any choice of $u, u', v \in \mathcal{W}$. Thus,

$$
Y_v e^{-\epsilon d_{\mathcal{W}}(u, v)/2} \approx_{\epsilon d_{\mathcal{W}}(u, u')/2} Y_v e^{-\epsilon d_{\mathcal{W}}(u', v)/2} \tag{14}
$$

Applying property (i), we also have

$$
\sum_{w \in \mathcal{W}} Y_w e^{-\epsilon d_{\mathcal{W}}(u, w)/2} \approx_{\epsilon d_{\mathcal{W}}(u, u')/2} \sum_{w \in \mathcal{W}} Y_w e^{-\epsilon d_{\mathcal{W}}(u', w)/2} \tag{15}
$$

Dividing (14) by (15) and applying properties (ii) and (iii), we obtain (13).

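The weighted mechanism in Proposition 1 can be written down directly. The following is a minimal Python sketch under our own naming (`exp_mech_probs`, `exp_mech_sample` are illustrative names, not the paper's code); it samples $v$ with probability proportional to $Y_v e^{-\epsilon d_{\mathcal{W}}(u, v)/2}$, which is exactly the distribution the proof analyzes.

```python
import math
import random

def exp_mech_probs(u, points, dist, eps, Y):
    """Output distribution of ExpMech_Y on input u:
    Pr[v] is proportional to Y[v] * exp(-eps * d_W(u, v) / 2)."""
    weights = {v: Y[v] * math.exp(-eps * dist(u, v) / 2) for v in points}
    total = sum(weights.values())
    return {v: w / total for v, w in weights.items()}

def exp_mech_sample(u, points, dist, eps, Y, rng=None):
    """Draw one private output for input u."""
    rng = rng or random.Random()
    probs = exp_mech_probs(u, points, dist, eps, Y)
    return rng.choices(list(probs), weights=list(probs.values()), k=1)[0]
```

The mDP guarantee of Proposition 1 can be checked numerically: for every $u, u', v$, the probability ratio is at most $\exp(\epsilon d_{\mathcal{W}}(u, u'))$.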
Lemma 1. With $\mathcal{W} = \{w_1, \ldots, w_n\}$ and $d_{\mathcal{W}}$ defined as above, when $\mathbf{Y} = (1, 1/(n-1), 1/(n-1), \ldots)$, we have that

$$
\mathcal{L}(\operatorname{EXPMECH}_{\mathbf{Y}}) \leq \frac{\frac{1}{n-1} + e^{-\epsilon/2}}{1 + e^{-\epsilon/2}} \mathcal{L}(\operatorname{EXPMECH}).
$$

When $\epsilon \geq 2 \log n$, we have $\mathcal{L}(\operatorname{EXPMECH}_{\mathbf{Y}}) \leq \frac{2}{n-1} \mathcal{L}(\operatorname{EXPMECH})$.

Proof. Recall that $d_{\mathcal{W}}(w_1, w_j) = 1$ for all $j \geq 2$ and $d_{\mathcal{W}}(w_i, w_j) = \delta$ for distinct $i, j \geq 2$.

Observe that as $\delta \rightarrow 0$, we have

$$
\mathcal{L}(\operatorname{EXPMECH}, w_1) \rightarrow \frac{(n-1) e^{-\epsilon/2}}{1 + (n-1) e^{-\epsilon/2}}, \quad\text{and}\quad \forall i \geq 2, \; \mathcal{L}(\operatorname{EXPMECH}, w_i) \rightarrow \frac{e^{-\epsilon/2}}{(n-1) + e^{-\epsilon/2}}. \tag{16}
$$

On the other hand, for the weighted EM, we have as $\delta \rightarrow 0$,

$$
\mathcal{L}(\operatorname{EXPMECH}_{\mathbf{Y}}, w_1) \rightarrow \frac{e^{-\epsilon/2}}{1 + e^{-\epsilon/2}}, \quad\text{and}\quad \forall i \geq 2, \; \mathcal{L}(\operatorname{EXPMECH}_{\mathbf{Y}}, w_i) \rightarrow \frac{e^{-\epsilon/2}}{1 + e^{-\epsilon/2}}. \tag{17}
$$

Dividing the two worst-case losses, we obtain

$$
\frac{\mathcal{L}(\operatorname{EXPMECH}_{\mathbf{Y}})}{\mathcal{L}(\operatorname{EXPMECH})} = \frac{1 + (n-1) e^{-\epsilon/2}}{(n-1)\left(1 + e^{-\epsilon/2}\right)} = \frac{\frac{1}{n-1} + e^{-\epsilon/2}}{1 + e^{-\epsilon/2}}.
$$

When $e^{-\epsilon/2} \leq \frac{1}{n-1}$, which holds whenever $\epsilon \geq 2 \log n$, we have

$$
\frac{\frac{1}{n-1} + e^{-\epsilon/2}}{1 + e^{-\epsilon/2}} \leq \frac{2}{n-1}.
$$

This establishes the lemma.

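The limits (16) and (17) can be checked numerically for a small $\delta$ on the star metric defined in the proof. The following is a quick sketch under our own naming (`em_loss` is an illustrative helper, not from the paper); it computes the expected-distance loss of the (possibly weighted) exponential mechanism from a given input.

```python
import math

def em_loss(i, n, eps, delta, Y):
    """Expected distance loss of the exponential mechanism with weights Y from
    input w_i on the star metric: d(w_1, w_j) = 1 and d(w_i, w_j) = delta for i, j >= 2."""
    def d(a, b):
        if a == b:
            return 0.0
        return 1.0 if 1 in (a, b) else delta  # w_1 is at distance 1 from everyone
    w = [Y[j - 1] * math.exp(-eps * d(i, j) / 2) for j in range(1, n + 1)]
    z = sum(w)
    return sum(w[j - 1] / z * d(i, j) for j in range(1, n + 1))
```

Evaluating at a tiny $\delta$ reproduces the limiting expressions in (16) and (17) up to $O(\delta)$ error.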
Theorem 1. $\mathcal{P}_{\text{CONSTOPTMECH}}$ is feasible, and it is possible to solve it using a linear program with $n + 1 + \sum_{v \in \mathcal{W}} |I(v)|$ variables and $2n + \sum_{v \in \mathcal{W}} \left(2|I(v)|^2 + 3|I(v)|\right)$ constraints. The number of non-zero coefficients in the LP is at most $2n^2 + \sum_{v \in \mathcal{W}} \left(2|I(v)|^2 + 5|I(v)|\right)$.

Proof. We will exhibit a valuation of the $M_{uv}$ and $\mathbf{Y} = \{Y_w : w \in \mathcal{W}\}$ variables such that $\mathcal{P}_{\text{CONSTOPTMECH}}$ is feasible. Our valuation sets $M_{uv} = Y_v e^{-\epsilon d_{\mathcal{W}}(u, v)}$, where $\mathbf{Y}$ is non-negative with values to be decided later. Notice that each $M_{uv}$ is at least 0, and (4) is satisfied for all choices of $I(v)$. Furthermore, we can show that (2) is satisfied for any choice of $\mathbf{Y}$:

$$
d_{\mathcal{W}}(u, w) \geq d_{\mathcal{W}}(v, w) - d_{\mathcal{W}}(u, v)
$$

$$
e^{-\epsilon d_{\mathcal{W}}(u, w)} \leq e^{-\epsilon\left(d_{\mathcal{W}}(v, w) - d_{\mathcal{W}}(u, v)\right)}
$$

$$
Y_w e^{-\epsilon d_{\mathcal{W}}(u, w)} \leq Y_w e^{-\epsilon d_{\mathcal{W}}(v, w)} e^{\epsilon d_{\mathcal{W}}(u, v)}
$$

$$
M_{uw} \leq M_{vw} e^{\epsilon d_{\mathcal{W}}(u, v)}
$$

Finally, we can set the entries of $\mathbf{Y}$ high enough so that $\sum_{v \in \mathcal{W}} M_{uv} \geq 1$ for all $u \in \mathcal{W}$. Since all the constraints are satisfied, this produces a feasible solution to $\mathcal{P}_{\text{CONSTOPTMECH}}$.

Now, we will show that by substituting variables into $\mathcal{P}_{\text{CONSTOPTMECH}}(\epsilon)$, we can eliminate many of the variables and constraints and obtain an LP of much smaller size. We do this by directly substituting the equalities of (4) into the remaining constraints. The constraints $\widetilde{\mathcal{L}}(M, w) \leq k$ for all $w \in \mathcal{W}$ become

$$
\sum_{v \in \mathcal{W}, w \in I(v)} M_{wv} d_{\mathcal{W}}(w, v) + \sum_{v \in \mathcal{W}, w \notin I(v)} Y_v e^{-\epsilon d_{\mathcal{W}}(w, v)} d_{\mathcal{W}}(w, v) \leq k \quad \forall w \in \mathcal{W} \tag{18}
$$

The constraints $M_{uv} \geq 0$ are implied by $Y_v \geq 0$ whenever $u \notin I(v)$. Thus, the only remaining non-negativity constraints are:

$$
M_{uv} \geq 0 \quad \forall v \in \mathcal{W}, u \in I(v) \tag{19}
$$

Substituting into $\sum_{v \in \mathcal{W}} M_{uv} \geq 1$, we obtain

$$
\sum_{v \in \mathcal{W}, u \in I(v)} M_{uv} + \sum_{v \in \mathcal{W}, u \notin I(v)} Y_v e^{-\epsilon d_{\mathcal{W}}(u, v)} \geq 1 \quad \forall u \in \mathcal{W} \tag{20}
$$

The difficult case is substituting into the privacy constraints (2). We split them into four sets:

$$
Y_w e^{-\epsilon d_{\mathcal{W}}(u, w)} \leq Y_w e^{-\epsilon d_{\mathcal{W}}(v, w)} e^{\epsilon d_{\mathcal{W}}(u, v)} \quad \forall w \in \mathcal{W}, u \notin I(w), v \notin I(w) \tag{21}
$$

$$
M_{uw} \leq M_{vw} e^{\epsilon d_{\mathcal{W}}(u, v)} \quad \forall w \in \mathcal{W}, u \in I(w), v \in I(w) \tag{22}
$$

$$
M_{uw} \leq Y_w e^{-\epsilon d_{\mathcal{W}}(v, w)} e^{\epsilon d_{\mathcal{W}}(u, v)} \quad \forall w \in \mathcal{W}, u \in I(w), v \notin I(w) \tag{23}
$$

$$
Y_w e^{-\epsilon d_{\mathcal{W}}(u, w)} \leq M_{vw} e^{\epsilon d_{\mathcal{W}}(u, v)} \quad \forall w \in \mathcal{W}, u \notin I(w), v \in I(w) \tag{24}
$$

The constraints in (21) are always satisfied, by an argument identical to the one used to show feasibility. Thus, we may drop these constraints. Focusing on (23), if we let $w \in \mathcal{W}$ and $u \in I(w)$ be arbitrary, then the set of inequalities $\left\{M_{uw} \leq Y_w e^{-\epsilon d_{\mathcal{W}}(v, w)} e^{\epsilon d_{\mathcal{W}}(u, v)} : v \notin I(w)\right\}$ is equivalent to $M_{uw} \leq \min_{v \notin I(w)} Y_w e^{-\epsilon d_{\mathcal{W}}(v, w)} e^{\epsilon d_{\mathcal{W}}(u, v)}$. Thus, (23) is equivalent to:

$$
M_{uw} \leq Y_w \min_{v \notin I(w)} e^{-\epsilon d_{\mathcal{W}}(v, w)} e^{\epsilon d_{\mathcal{W}}(u, v)} \quad \forall w \in \mathcal{W}, u \in I(w) \tag{25}
$$

Similarly, (24) is equivalent to:

$$
Y_w \max_{u \notin I(w)} e^{-\epsilon d_{\mathcal{W}}(u, w)} e^{-\epsilon d_{\mathcal{W}}(u, v)} \leq M_{vw} \quad \forall w \in \mathcal{W}, v \in I(w) \tag{26}
$$

After substituting, we have the following LP (which no longer involves the variables $M_{uv}$ such that $u \notin I(v)$):

$\mathcal{P}_{\text{CONSTOPTMECH},red}(\epsilon) =$ minimize $k$ subject to

$M$ satisfies (18)

$M$ satisfies (19), (20)

$M$ satisfies (22), (25), (26)

$Y_v \geq 0 \quad \forall v \in \mathcal{W}$

This LP has the variables $Y_v$ for $v \in \mathcal{W}$ and $M_{uv}$ for $v \in \mathcal{W}$, $u \in I(v)$, together with $k$, for a total of $n + \sum_{w \in \mathcal{W}} |I(w)| + 1$ variables. The number of constraints in (18) and (20) is $2n$; the number of constraints in (19) is $\sum_{w \in \mathcal{W}} |I(w)|$; the number of constraints in (25) and (26) is $2\sum_{w \in \mathcal{W}} |I(w)|$; and the number of constraints in (22) is $\sum_{w \in \mathcal{W}} |I(w)|^2$.

Similarly, the number of nonzero coefficients in (18) is $n^2$, in (19) is $\sum_{w \in \mathcal{W}} |I(w)|$, in (20) is $n^2$, in (25) and (26) is $4\sum_{w \in \mathcal{W}} |I(w)|$, and in (22) is $2\sum_{w \in \mathcal{W}} |I(w)|^2$.

Corollary 1. When $I(v) = I_{\mathrm{NN}, r}(v)$, the number of variables in $\mathcal{P}_{\text{CONSTOPTMECH}}$ is at most $nr + n + 1$, the number of constraints is at most $n^2 r + 3nr + 2n$, and the number of non-zero coefficients is at most $2n^2 + 5nr + 2n^2 r$.

Proof. When $I(w) = I_{\mathrm{NN}, r}(w)$, we have $\sum_{w \in \mathcal{W}} |I(w)| \leq rn$. Subject to this inequality, the maximum value of $\sum_{w \in \mathcal{W}} |I(w)|^2$ is $n^2 r$, achieved when $r$ of the $I(w)$'s have size $n$. Thus, the total number of constraints is at most $2n + 3nr + n^2 r$. The total number of nonzero coefficients is at most $2n^2 + 5nr + 2n^2 r$.

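For concreteness, the nearest-neighbor replacement sets $I_{\mathrm{NN}, r}(v)$ used in Corollary 1 can be built as follows. This is an illustrative sketch (`nn_replacement_sets` is our own name, not the paper's code): each point keeps the $r$ points of $\mathcal{W}$ closest to it, itself included.

```python
def nn_replacement_sets(points, dist, r):
    """I_NN,r(v): the r points of W closest to v (v itself included),
    the instantiation analyzed in Corollary 1."""
    I = {}
    for v in points:
        ranked = sorted(points, key=lambda u: dist(u, v))
        I[v] = set(ranked[:r])
    return I
```

This immediately gives $\sum_{w} |I(w)| \leq rn$, the bound the proof relies on.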
Theorem 2. For any set $\mathcal{W}$, metric $d_{\mathcal{W}}$, budget $\epsilon$, and replacement function $I : \mathcal{W} \rightarrow 2^{\mathcal{W}}$, Mechanism CONSTOPTMECH satisfies $\epsilon$-$d_{\mathcal{W}}$ privacy.

Proof. Notice that the variables $M_{uv}$ output by $\mathcal{P}_{\text{CONSTOPTMECH}}(\epsilon)$ satisfy $M_{uw} \leq e^{\epsilon d_{\mathcal{W}}(u, v)/2} M_{vw}$ for all $u, v, w \in \mathcal{W}$, which are exactly the constraints ensuring $\frac{\epsilon}{2}$ mDP. Using an argument identical to the proof of Proposition 1, the normalized quantities $H_{uv} = \frac{M_{uv}}{\sum_{w \in \mathcal{W}} M_{uw}}$ satisfy $H_{uw} \leq e^{\epsilon d_{\mathcal{W}}(u, v)} H_{vw}$ for all $u, v, w \in \mathcal{W}$. Thus, the mechanism output by Algorithm 1 satisfies $\epsilon$ mDP.

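The normalization step in this proof can be checked numerically. The sketch below uses our own names and our own small example: as unnormalized input we take $M_{uv} = e^{-\epsilon d(u, v)/2}$, which satisfies the $\epsilon/2$ constraints by the triangle inequality, and verifies that row normalization yields a stochastic matrix satisfying the full $\epsilon$ constraints.

```python
import math

def normalize_rows(M, points):
    """H_uv = M_uv / sum_w M_uw: turn nonnegative scores into a stochastic matrix,
    as in the proof of Theorem 2."""
    H = {}
    for u in points:
        z = sum(M[(u, w)] for w in points)
        for v in points:
            H[(u, v)] = M[(u, v)] / z
    return H
```

Normalization at most doubles the privacy parameter: the numerator and the denominator each change by a factor of at most $e^{\epsilon d/2}$ between neighboring inputs.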
Theorem 3. Consider an arbitrary set $\mathcal{W}$ and metric $d_{\mathcal{W}} : \mathcal{W} \times \mathcal{W} \rightarrow \mathbb{R}$. Then, for any $\epsilon > 0$, any mechanism $\mathcal{M}$ satisfying $\epsilon$-$d_{\mathcal{W}}$ privacy, and any $(c, r, Q)$-packing $S$ of $\mathcal{W}$, it holds that

$$
\mathcal{L}(\mathcal{M}) \geq \max_{w \in \mathcal{W}} r\left(1 - \frac{1}{N(w, S)}\right). \tag{7}
$$

It follows that

$$
\mathcal{L}(\mathcal{M}) \geq r\left(1 - \frac{1}{1 + (c-1)\exp(-Q\epsilon)}\right). \tag{8}
$$

Proof. Let $S$ be any $(c, r, Q)$-packing of $\mathcal{W}$, and let $\kappa = \max_{w \in \mathcal{W}} 1 - \frac{1}{N(w, S)}$. Suppose for the sake of contradiction that there exists a mechanism $\mathcal{M}$ satisfying $\epsilon$-mDP such that $\mathcal{L}(\mathcal{M}) < r\kappa$. By definition, this implies $\mathcal{L}(\mathcal{M}, w) < r\kappa$ for all $w \in \mathcal{W}$. By Markov's inequality, this means $\Pr\left[\mathcal{M}(w) \in B(w, r)\right] > 1 - \kappa$. Using the mDP property, the following holds for any $w \in \mathcal{W}$ and $s \in S$:

$$
\Pr\left[\mathcal{M}(w) \in B(s, r)\right] \geq \exp\left(-\epsilon d_{\mathcal{W}}(w, s)\right) \Pr\left[\mathcal{M}(s) \in B(s, r)\right]
$$

$$
> \exp\left(-\epsilon d_{\mathcal{W}}(w, s)\right)(1 - \kappa).
$$

For all distinct $s, s' \in S$, the events $\mathcal{M}(w) \in B(s, r)$ and $\mathcal{M}(w) \in B(s', r)$ are disjoint by the properties of a $(c, r, Q)$-packing. This implies, for all $w \in \mathcal{W}$,

$$
1 \geq \sum_{s \in S} \Pr\left[\mathcal{M}(w) \in B(s, r)\right]
$$

We know from above that $\Pr\left[\mathcal{M}(w) \in B(s, r)\right] > \exp\left(-\epsilon d_{\mathcal{W}}(w, s)\right)(1 - \kappa)$. Thus, for all $w$,

$$
1 > (1 - \kappa) \sum_{s \in S} \exp\left(-\epsilon d_{\mathcal{W}}(w, s)\right) = (1 - \kappa) N(w, S)
$$

$$
\kappa > 1 - \frac{1}{N(w, S)} \tag{27}
$$

However, since $\kappa = \max_{w \in \mathcal{W}} 1 - \frac{1}{N(w, S)}$, there is some $w$ for which $\kappa = 1 - \frac{1}{N(w, S)}$, contradicting (27).

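The bound (7) is simple to evaluate for any candidate packing. The following is a minimal sketch under our own naming (`packing_lower_bound` is an illustrative helper): it computes $N(w, S) = \sum_{s \in S} \exp(-\epsilon d_{\mathcal{W}}(w, s))$ and takes the maximum of $r(1 - 1/N(w, S))$ over $\mathcal{W}$.

```python
import math

def packing_lower_bound(W, S, dist, r, eps):
    """Lower bound (7): any eps-d_W private mechanism has loss at least
    max_w r * (1 - 1/N(w, S)), where N(w, S) = sum_s exp(-eps * d(w, s))."""
    def N(w):
        return sum(math.exp(-eps * dist(w, s)) for s in S)
    return max(r * (1.0 - 1.0 / N(w)) for w in W)
```

Since every $s \in S$ is within distance $Q$ of the others, (7) always dominates the closed form (8), which can be used as a sanity check.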
![019638e8-71d8-7466-89c8-4ab198d4c272_13_238_173_1271_513_0.jpg](images/019638e8-71d8-7466-89c8-4ab198d4c272_13_238_173_1271_513_0.jpg)

Figure 3: Loss of Madlib (EuclidMech), EM, ConstOPTMech, SpannerMech, and OPTMech versus $\epsilon_{\text{actual}}$ on metric spaces of size 50 and 200 generated from GloVe, along with the lower bound. The horizontal dotted line indicates the loss of returning a uniformly random element.

![019638e8-71d8-7466-89c8-4ab198d4c272_13_546_837_648_506_0.jpg](images/019638e8-71d8-7466-89c8-4ab198d4c272_13_546_837_648_506_0.jpg)

Figure 4: Loss of EM, ConstOPTMech (with $r = 5, 10$), and SpannerMech versus the size of the metric space, for metric spaces generated from GloVe. Here, $\epsilon_{\text{actual}}$ is fixed at 1.0.

## C ADDITIONAL EXPERIMENTAL RESULTS

Here, we include the omitted plots for metric spaces generated from the GloVe word embedding. In Figure 3, we plot the losses of the tested mechanisms for generated metric spaces of size 50 and 200. Our results indicate that ConstOPTMech achieves the best loss of all non-optimal mechanisms, by about 5 to 10% (size 50) and 20% (size 200) in the middle range of epsilon (approximately $[0.5, 1.0]$). This is consistent with the experiments on the other metric spaces. Furthermore, SpannerMech has much higher loss than the other mechanisms, particularly for higher values of $\epsilon$. This is likely due to distortions of the underlying metric introduced by the spanner.

Table 3 shows the time and number of non-zero coefficients (NNZ) required to compute the LPs in the LP-based mechanisms with $\epsilon_{\text{tight}} = 1.0$ and increasing metric space sample sizes. SpannerMech was unable to scale past 300 elements and timed out; unlike in the previous experiments, ConstOPTMech with $r = 10$ also timed out past 300 elements. However, ConstOPTMech with $r = 5$ was able to scale to 400 elements without timing out. While ConstOPTMech executed much more quickly than SpannerMech, there was not a large difference in NNZ between the two methods. These results still indicate that ConstOPTMech is more scalable than SpannerMech, though with the caveat that $r$ must be chosen carefully and the answer depends on the metric space.

Figure 4 indicates that ConstOPTMech with $r = 5, 10$ results in much lower loss, by about 50-75%, compared to SpannerMech with $\epsilon_{\text{tight}}$ fixed, with the gap larger for smaller metric spaces. This is consistent with the observations made for Figure 3.

<table><tr><td>GloVe</td><td>50</td><td>150</td><td>300</td><td>400</td></tr><tr><td>ConstOPTMech (10)</td><td>1.68 sec, 3.33e4 nnz</td><td>31.28 sec, 2.71e5 nnz</td><td>611.05 sec, 1.10e6 nnz</td><td>> 1800 sec, — nnz</td></tr><tr><td>ConstOPTMech (5)</td><td>0.80 sec, 1.48e4 nnz</td><td>15.76 sec, 1.56e5 nnz</td><td>127.07 sec, 4.59e5 nnz</td><td>447.10 sec, 7.76e5 nnz</td></tr><tr><td>SpannerMech</td><td>0.85 sec, 1.62e4 nnz</td><td>74.77 sec, 1.52e5 nnz</td><td>495.96 sec, 6.59e5 nnz</td><td>> 1800 sec, — nnz</td></tr><tr><td>OPTMech</td><td>8.56 sec, 2.50e5 nnz</td><td colspan="3">> 1800 sec, — nnz</td></tr></table>

Table 3: Running times and memory requirement (NNZ) of computing ConstOPTMech with $r = 10, 5$; SpannerMech; and OPTMech for varying sizes of metric spaces generated from GloVe. Each mechanism satisfies $\epsilon_{\text{tight}} = 1.0$.

UAI/UAI 2022/UAI 2022 Conference/B0l8-wLjql5/Initial_manuscript_tex/Initial_manuscript.tex ADDED
1
+ § BALANCING UTILITY AND SCALABILITY IN METRIC DIFFERENTIAL PRIVACY
2
+
3
+ § ABSTRACT
4
+
5
+ Metric differential privacy (mDP) is a modification of differential privacy that is more suitable when records can be represented in a general metric space, such as text data represented as word embed-dings or geographical coordinates on a map. We consider the task of releasing elements of the metric space under metric differential privacy where utility is measured as the distance of the released element to the original element. Linear programming (LP) can be used to construct a mechanism that achieves the optimal utility for a particular mDP constraint. However, these LPs suffer from a polynomial explosion of variables and constraints that render them impractical for solving real-world problems. An important question is how to design rigorous mDP mechanisms that balance the utility-scalability tradeoff.
6
+
7
+ Our main contribution is a new method for reducing the LP size used to generate mDP mechanisms by constraining the search space such that certain input and output pairs have transition probabilities derived from the exponential mechanism. Our method produces mDP mechanisms whose LPs are smaller that all prior work in this area. We also provide a lower bound on the best possible mechanism utility. Our experiments on real-world metric spaces highlight the superior utility-scalability tradeoff of our mechanism.
8
+
9
+ § 1 INTRODUCTION
10
+
11
+ Privacy has emerged as a topic of strategic consequence across all computational fields. Differential privacy (DP), a mathematical formulation of privacy proposed by Dwork et al. [2006], provides provable protection guarantees against adversaries with arbitrary side information and computational power. See the book by Dwork and Roth [2013] for a primer on differential privacy and a survey of different techniques proposed in the literature.
12
+
13
+ More recently, researchers have noted that differential privacy does not take the underlying metric space of the data domain into account. Differential privacy provides the same level of protection to all perturbations of a single user's data which makes it inflexible when these perturbations are not all the same. For example, if the data consists of locations on earth, there is a large difference between discerning whether a user is in a 1 or a 100-mile radius. In many scenarios, the former type of privacy beach is more significant because a user's location is more accurately determined. This has led to the development of metric DP (mDP) which provides different protections depending on an underlying metric space, and has been adopted in applications involving releasing sensitive geolocation data [Andrés et al., 2013, Bordenabe et al., 2014] and textual data [Fernandes et al., 2019, Feyisetan et al., 2019, 2020, Xu et al., 2020, Feyisetan and Kasiviswanathan, 2021].
14
+
15
+ Mechanism utility for mDP is less well-understood than that of general DP, as the metric strongly influences the permitted behavior of the mechanism. While it is possible to design an optimal mechanism under mDP, it is also a computationally challenging task that requires solving a linear program (LP) with $O\left( {n}^{2}\right)$ variables and $O\left( {n}^{3}\right)$ constraints [Bordenabe et al.,2014], where $n$ is the size of the metric space (i.e., cardinality of the set). In fact, most mDP mechanisms [Feyisetan et al., 2020, 2019, Xu et al., 2020] do not provide any rigorous guarantees on the utility. Our key contributions are as follows:
16
+
17
+ (a) We present a general framework for designing mDP mechanisms that offer a better tradeoff between mechanism utility and the size of the LP used to compute the mechanism (Section 3). This framework is based on adding constraints to the optimal mDP LP that force certain transition probabilities to equal a weighted version of those arising from the exponential mechanism. ${}^{1}$ As a concrete instantiation of this framework, we construct an LP based on the $r$ nearest neighbors of each point, under which the resulting LP has just $O\left( {nr}\right)$ variables and $O\left( {{n}^{2}r}\right)$ constraints, and in practice, $r$ can be set as a small constant. Therefore, our new mechanism substantially increases the size of the metric space on which mDP mechanisms can be practically applied.
18
+
19
+ ${}^{1}$ The exponential mechanism [McSherry and Talwar, 2007] is a popular approach for differentially private selection.
20
+
21
+ (b) We prove a lower bound, depending on the underlying metric space, on the loss that any mechanism must incur (Section 4). This provides the first non-trivial loss lower bound for any mDP mechanism, including the optimal one. This lower bound is valuable, especially in situations where the LP for the optimal mechanism is infeasible to solve.
22
+
23
+ (c) We perform extensive experiments comparing the utility and privacy of our proposed mechanism and existing mechanisms in text and geolocation applications (Section 5). These experiments indicate that our proposed mechanism performs more closely to the optimal mechanism than others tested and can result in a utility improvement of about ${25}\%$ compared to the non-optimal mechanisms. In terms of scalability, our results indicate that our proposed mechanism can scale to metric spaces four times larger than the optimal mechanism.
24
+
25
+ Related Work in Metric DP. Metric DP originated in the context of location privacy where given a dataset of geolocation coordinates (longitude and latitude) on a plane, the notion of adjacency could be better captured using the Euclidean distance between the coordinates [Andrés et al., 2013, Chatzikokolakis et al., 2013]. Metric DP mechanisms have been investigated for various choices of metrics, including Euclidean, Manhattan, and Chebyshev metrics, among others [Chatzikokolakis et al., 2013, Andrés et al., 2013, Chatzikokolakis et al., 2015, Fernandes et al., 2019, Feyisetan et al., 2019, 2020]. Unlike our focus here, none of these results compare the loss of their proposed mechanisms to the optimal loss.
26
+
27
+ The most directly related work to ours is that of Bordenabe et al. [2014]. This paper proposes finding the optimal mDP mechanism using linear programming. They propose a method based on spanner graphs to reduce the size of the LP (outlined in Appendix A). A $\delta$ -spanner graph is a set of edges between points in a metric space such that the distance between two points in the graph approximates the metric up to a multiplicative factor $\delta$ . Bordenabe et al. [2014] use a 3-spanner, for which a construction using just $O\left( {n}^{1.5}\right)$ edges exists, to reduce the number of constraints in the LP from $O\left( {n}^{3}\right)$ to $O\left( {n}^{2.5}\right)$ .
28
+
29
+ Related Work in Privately Releasing Text Embeddings. Vector representations of words, sentences, and documents have all become basic building blocks in NLP pipelines and algorithms. Hence, it is natural to consider privacy mechanisms that target these representations in the underlying metric space [Fernandes et al., 2019, Feyisetan et al., 2019, Xu et al., 2020, Feyisetan et al., 2020]. The most relevant result to our setting is the mechanism of Feyisetan et al. [2020] (referred to as Madlib). In Section 5, we compare our mechanism to Madlib as a baseline. ${}^{2}$
30
+
31
+ § 2 TECHNICAL PRELIMINARIES
32
+
33
+ Throughout this paper, we consider data that comes from a finite metric space $\left( {\mathcal{W},{d}_{\mathcal{W}}}\right)$ where $\mathcal{W}$ is the set of values the data may take. For example, in the text release use case, $\mathcal{W}$ consists of a vocabulary set, and in the geolocation use case, $\mathcal{W}$ consists of a set of locations. The metric ${d}_{\mathcal{W}} : \mathcal{W} \times \mathcal{W} \rightarrow \mathbb{R}$ captures dissimilarity between elements in the set. In NLP applications, it is very common to represent words via a high-dimensional text embedding $\phi : \mathcal{W} \rightarrow {\mathcal{W}}^{\prime } \subseteq {\mathbb{R}}^{d}$ . ${}^{3}$ Then we can define the distance between the words as the distance between the embedded words: i.e., for all ${w}_{1},{w}_{2} \in \mathcal{W}$ , we define ${d}_{\mathcal{W}}\left( {{w}_{1},{w}_{2}}\right) = {d}_{{\mathcal{W}}^{\prime }}\left( {\phi \left( {w}_{1}\right) ,\phi \left( {w}_{2}\right) }\right)$ .
34
+
35
+ § 2.1 PRIVACY ON METRIC SPACES
36
+
37
+ Informally, a mechanism $\mathcal{M}$ satisfies metric ${\mathrm{{DP}}}^{4}$ if its behavior is nearly the same on inputs that are close together in the metric space. This is formalized by the following notion of $\epsilon$ - ${d}_{\mathcal{W}}$ privacy.
38
+
39
+ Definition 1 (Metric DP (mDP)). Given a finite set $\mathcal{W}$ , a metric ${d}_{\mathcal{W}} : \mathcal{W} \times \mathcal{W} \rightarrow \mathbb{R}$ , and a privacy parameter $\epsilon > 0$ , a mechanism $\mathcal{M} : \mathcal{W} \rightarrow \mathcal{W}$ satisfies $\epsilon$ - ${d}_{\mathcal{W}}$ privacy if for all ${w}_{1},{w}_{2},w \in \mathcal{W}$ :
40
+
41
+ $$
42
+ \mathbb{P}r\left\lbrack {\mathcal{M}\left( {w}_{1}\right) = w}\right\rbrack \leq \exp \left( {\epsilon {d}_{\mathcal{W}}\left( {{w}_{1},{w}_{2}}\right) }\right) \mathbb{P}r\left\lbrack {\mathcal{M}\left( {w}_{2}\right) = w}\right\rbrack .
43
+ $$
44
+
45
+ The above definition is closely related to the definition of local ${DP}$ [Kasiviswanathan et al., 2011] in that we apply $\mathcal{M}$ to each element of some database $D \in {\mathcal{W}}^{m}$ independently. The difference between mDP and local DP is that, because of the ${d}_{\mathcal{W}}$ term (which is absent in the local DP formulation), an mDP mechanism guarantees indistinguishability for ${w}_{1},{w}_{2} \in \mathcal{W}$ only up to the distance ${d}_{\mathcal{W}}\left( {{w}_{1},{w}_{2}}\right)$ between them. Similar to traditional differential privacy, mDP is preserved under post-processing and composition of mechanisms [Koufogiannis et al., 2016]. In metric spaces, a natural definition of mechanism loss on an element $w \in \mathcal{W}$ is the expected distance between $w$ and $\mathcal{M}\left( w\right)$ : $\mathcal{L}\left( {\mathcal{M},w}\right) = {\mathbb{E}}_{\mathcal{M}}\left\lbrack {{d}_{\mathcal{W}}\left( {w,\mathcal{M}\left( w\right) }\right) }\right\rbrack$ . Here, the expectation is over the random bits in $\mathcal{M}$ . We define the loss of $\mathcal{M}$ to be the worst-case loss of $\mathcal{M}$ on any particular element $w \in \mathcal{W}$ :
46
+
47
+ $$
48
+ \mathcal{L}\left( \mathcal{M}\right) = \mathop{\max }\limits_{{w \in \mathcal{W}}}\mathcal{L}\left( {\mathcal{M},w}\right) \tag{1}
49
+ $$
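Definition 1 and the loss (1) can be checked directly for a finite space represented by a distance matrix. A minimal numpy sketch (the helper names `satisfies_mdp` and `loss` are ours, not from the paper):

```python
import numpy as np

def satisfies_mdp(M, D, eps, tol=1e-9):
    """Check Definition 1: M[u,w] <= exp(eps * D[u,v]) * M[v,w] for all u, v, w."""
    n = M.shape[0]
    for u in range(n):
        for v in range(n):
            if np.any(M[u] > np.exp(eps * D[u, v]) * M[v] + tol):
                return False
    return True

def loss(M, D):
    """Worst-case expected distance, eq. (1): max over w of E[d_W(w, M(w))]."""
    return float(np.max((M * D).sum(axis=1)))

# Two points at distance 1: the uniform mechanism ignores its input,
# so it satisfies eps-d_W privacy for every eps > 0, with loss 1/2.
D = np.array([[0.0, 1.0], [1.0, 0.0]])
M = np.full((2, 2), 0.5)
assert satisfies_mdp(M, D, eps=0.1)
assert loss(M, D) == 0.5
```

By contrast, the identity mechanism (always returning the input) has loss 0 but violates the privacy constraint for every finite $\epsilon$ .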
50
+
51
+ ${}^{2}$ While our mDP mechanism is applicable to any metric space, our experiments are over word embeddings and geolocations in the Euclidean space. Therefore, we do not directly compare with [Fernandes et al., 2019, Feyisetan et al., 2019, Xu et al., 2020], which work with embeddings in non-Euclidean spaces.
52
+
53
+ ${}^{3}$ Our results do not depend on the choice of the embedding.
54
+
55
+ ${}^{4}$ Metric DP is sometimes referred to as Lipschitz privacy [Koufogiannis et al., 2016], motivated by the fact that the privacy guarantee can be viewed as a Lipschitz condition on the mechanism, $\left| {\ln \left( {\mathbb{P}r\left\lbrack {\mathcal{M}\left( {w}_{1}\right) = w}\right\rbrack }\right) - \ln \left( {\mathbb{P}r\left\lbrack {\mathcal{M}\left( {w}_{2}\right) = w}\right\rbrack }\right) }\right| \leq$ $\epsilon {d}_{\mathcal{W}}\left( {{w}_{1},{w}_{2}}\right)$ .
56
+
57
+ Notice that $\mathcal{L}\left( \mathcal{M}\right)$ is non-negative due to ${d}_{\mathcal{W}}$ being a metric. Considering the loss as a worst-case instead of an average has the advantage that there cannot exist "adversarial" elements $w \in \mathcal{W}$ such that $\mathcal{L}\left( {\mathcal{M},w}\right)$ is much higher than $\mathcal{L}\left( \mathcal{M}\right)$ . Similar loss functions have been studied in other DP settings such as in [Hardt and Talwar, 2009].
58
+
59
+ Optimal Mechanism with LP. It is easy to see that the constraints of mDP are linear. For a mechanism $\mathcal{M}$ , we can consider its stochastic matrix ${}^{5}$ $M = \left\{ {{M}_{uv} : u,v \in \mathcal{W}}\right\}$ with ${M}_{uv} = \Pr \left\lbrack {\mathcal{M}\left( u\right) = v}\right\rbrack$ . Then, $\mathcal{M}$ satisfies mDP if and only if $M$ is stochastic and satisfies the following constraints
60
+
61
+ $$
62
+ {M}_{uw} \leq {M}_{vw} \cdot \exp \left( {{d}_{\mathcal{W}}\left( {u,v}\right) \epsilon }\right) \;\forall u,v,w \in \mathcal{W} \tag{2}
63
+ $$
64
+
65
+ Since the constraints are linear, $\mathrm{{mDP}}$ constrains $M$ to be in a polytope. We will overload notation and write $\mathcal{L}\left( {M,w}\right)$ and $\mathcal{L}\left( M\right)$ as the losses of the mechanism given by transition matrix $M$ . These losses are given by:
66
+
67
+ $$
68
+ \mathcal{L}\left( M\right) = \mathop{\max }\limits_{{u \in \mathcal{W}}}\mathcal{L}\left( {M,u}\right)
69
+ $$
70
+
71
+ $$
72
+ = \mathop{\max }\limits_{{u \in \mathcal{W}}}\mathop{\sum }\limits_{{v \in \mathcal{W}}}{d}_{\mathcal{W}}\left( {u,v}\right) {M}_{uv}. \tag{3}
73
+ $$
74
+
75
+ Over the variables ${M}_{uv}$ , $\mathcal{L}\left( M\right)$ is a maximum of linear functions. The optimal mechanism is given by the stochastic matrix $M$ that minimizes $\mathcal{L}\left( M\right)$ subject to the privacy constraints (2). Using standard techniques in linear programming, we can compute the best mechanism with the following LP over the variables $M,k$ :
76
+
77
+ $$
78
+ {\mathcal{P}}_{\text{ OPTMECH }}\left( \epsilon \right) = \text{ minimize }k\text{ subject to }
79
+ $$
80
+
81
+ $$
82
+ \mathcal{L}\left( {M,w}\right) \leq k,\;\forall w \in \mathcal{W}
83
+ $$
84
+
85
+ $M$ stochastic
86
+
87
+ $M$ satisfies (2)
88
+
89
+ This LP problem has $O\left( {n}^{2}\right)$ variables and $O\left( {n}^{3}\right)$ constraints, where $n = \left| \mathcal{W}\right|$ . Therefore, even with the state-of-the-art LP approaches, which all require $\Omega \left( {N}^{2}\right)$ time, where $N$ is the number of variables [Jiang et al., 2021], scalability is problematic (here $N = {n}^{2}$ ). This is the central motivation for our work.
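For small spaces, ${\mathcal{P}}_{\text{ OPTMECH }}$ can be assembled and solved explicitly. The sketch below, assuming `scipy` is available, encodes the privacy constraints (2), the loss constraints, and row-stochasticity; the variable layout (flattened $M$ followed by $k$) and the function name are our choices:

```python
import numpy as np
from scipy.optimize import linprog

def optimal_mechanism(D, eps):
    """Solve P_OPTMECH as an LP: variables are the n*n entries of M plus the scalar k."""
    n = D.shape[0]
    idx = lambda u, v: u * n + v
    A_ub, b_ub = [], []
    # Privacy constraints (2): M[u,w] - exp(eps * D[u,v]) * M[v,w] <= 0.
    for u in range(n):
        for v in range(n):
            if u == v:
                continue
            for w in range(n):
                row = np.zeros(n * n + 1)
                row[idx(u, w)] = 1.0
                row[idx(v, w)] = -np.exp(eps * D[u, v])
                A_ub.append(row)
                b_ub.append(0.0)
    # Loss constraints: sum_v D[w,v] * M[w,v] <= k for every w.
    for w in range(n):
        row = np.zeros(n * n + 1)
        row[idx(w, 0):idx(w, 0) + n] = D[w]
        row[-1] = -1.0
        A_ub.append(row)
        b_ub.append(0.0)
    # Stochasticity: each row of M sums to 1.
    A_eq = np.zeros((n, n * n + 1))
    for u in range(n):
        A_eq[u, idx(u, 0):idx(u, 0) + n] = 1.0
    res = linprog(np.r_[np.zeros(n * n), 1.0], A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=A_eq, b_eq=np.ones(n), bounds=[(0, None)] * (n * n + 1))
    return res.x[:-1].reshape(n, n), res.x[-1]

# Two points at distance 1, eps = 1: the optimal worst-case loss is 1 / (1 + e).
D = np.array([[0.0, 1.0], [1.0, 0.0]])
M, k = optimal_mechanism(D, eps=1.0)
assert abs(k - 1.0 / (1.0 + np.e)) < 1e-5
```

Even in this sketch the number of inequality rows grows as $O\left( {n}^{3}\right)$ , which is exactly the scalability problem discussed above.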
90
+
91
+ § 3 BALANCING UTILITY-SCALABILITY
92
+
93
+ Given the scalability issues in solving ${\mathcal{P}}_{\text{ OPTMECH }}\left( \epsilon \right)$ , a natural idea is to reduce the LP size. In this section, we present a new method to reduce the size of the LP in ${\mathcal{P}}_{\text{ OPTMECH }}$ while still maintaining the mDP guarantee (Definition 1). Our method is based on adding exponential mechanism (EXPMECH) [McSherry and Talwar, 2007] equality constraints to the LP. Before we do this, we make use of an observation, which may be of independent interest, that the EXPMECH is provably not optimal in mDP. Thus, the constraints we add come from a "weighted" version of the EXPMECH. All missing proofs are collected in Appendix B.
94
+
95
+ § 3.1 IMPROVING THE EXPMECH IN MDP
96
+
97
+ Informally, the exponential mechanism [McSherry and Talwar, 2007] is a method for differentially private selection from a discrete set of candidate outputs. Due to its flexibility, the EXPMECH has become a popular tool for designing DP mechanisms. Furthermore, the EXPMECH is known to be optimal in DP for many choices of utility function [Hardt and Talwar, 2009, Aldà and Simon, 2017].
98
+
99
+ However, in the mDP setting, the exponential mechanism can be fooled by outlier elements. Informally, the dense areas of the metric space can act as a "black hole" where the EXPMECH will output elements in the dense area with high probability, even for the outlier elements. This drives up the loss for the outlier elements.
100
+
101
+ For a metric space $\mathcal{W} = \left\{ {{w}_{1},\ldots ,{w}_{n}}\right\}$ and metric ${d}_{\mathcal{W}}$ , the EXPMECH has the following transition probability
102
+
103
+ $$
104
+ \mathbb{P}r\left\lbrack {\operatorname{EXPMECH}\left( {w}_{i}\right) = {w}_{j}}\right\rbrack = \frac{{e}^{-\epsilon {d}_{\mathcal{W}}\left( {{w}_{i},{w}_{j}}\right) /2}}{\mathop{\sum }\limits_{{k = 1}}^{n}{e}^{-\epsilon {d}_{\mathcal{W}}\left( {{w}_{i},{w}_{k}}\right) /2}}.
105
+ $$
106
+
107
+ To illustrate on a concrete example, consider the metric space where $\mathcal{W} = \left\{ {{w}_{1},\ldots ,{w}_{n}}\right\}$ and where ${d}_{\mathcal{W}}$ satisfies (1) ${d}_{\mathcal{W}}\left( {{w}_{1},{w}_{i}}\right) = 1$ for $i \geq 2$ , and (2) ${d}_{\mathcal{W}}\left( {{w}_{i},{w}_{j}}\right) < \delta$ when $i,j \geq 2$ , where $\delta$ is a small constant. For each $j \geq 2$ , the EXPMECH satisfies $\Pr \left\lbrack {\operatorname{EXPMECH}\left( {w}_{1}\right) = {w}_{j}}\right\rbrack = \frac{{e}^{-\epsilon /2}}{1 + \left( {n - 1}\right) {e}^{-\epsilon /2}}$ , and thus $\Pr \left\lbrack {\operatorname{EXPMECH}\left( {w}_{1}\right) \neq {w}_{1}}\right\rbrack = \frac{\left( {n - 1}\right) {e}^{-\epsilon /2}}{1 + \left( {n - 1}\right) {e}^{-\epsilon /2}}$ . As $n$ grows, this probability approaches 1, and so does the loss on ${w}_{1}$ . The elements ${w}_{i}$ for $i \geq 2$ are acting as a "black hole".
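The black-hole effect is easy to reproduce numerically. A sketch (the helper name `expmech_matrix` is ours) that builds the EXPMECH transition matrix on this example space, with the cluster distances set to a small positive $\delta$:

```python
import numpy as np

def expmech_matrix(D, eps):
    """Transition matrix of EXPMECH: row i is proportional to exp(-eps * D[i,:] / 2)."""
    W = np.exp(-eps * D / 2.0)
    return W / W.sum(axis=1, keepdims=True)

# The example space: w_1 at distance 1 from a tight cluster w_2, ..., w_n.
n, eps, delta = 200, 1.0, 1e-3
D = np.full((n, n), delta)
np.fill_diagonal(D, 0.0)
D[0, 1:] = D[1:, 0] = 1.0
M = expmech_matrix(D, eps)
# Starting from the outlier w_1, the mechanism lands in the cluster almost surely.
assert M[0, 0] < 0.01 and M[0, 1:].sum() > 0.99
```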
108
+
109
+ This can be fixed by considering a more general mechanism that assigns weights to the output probabilities. For positive weights $\mathbf{Y} = \left( {{Y}_{1},\ldots ,{Y}_{n}}\right) \in {\left( {\mathbb{R}}^{ + }\right) }^{n}$ , consider the more general mechanism given by
110
+
111
+ $$
112
+ \mathbb{P}r\left\lbrack {{\operatorname{EXPMECH}}_{\mathbf{Y}}\left( {w}_{i}\right) = {w}_{j}}\right\rbrack = \frac{{Y}_{j}{e}^{-\epsilon {d}_{\mathcal{W}}\left( {{w}_{i},{w}_{j}}\right) /2}}{\mathop{\sum }\limits_{{k = 1}}^{n}{Y}_{k}{e}^{-\epsilon {d}_{\mathcal{W}}\left( {{w}_{i},{w}_{k}}\right) /2}}.
113
+ $$
114
+
115
+ This mechanism can be shown to satisfy $\epsilon - {d}_{\mathcal{W}}$ privacy.
116
+
117
+ Proposition 1. For any metric space and any $\mathbf{Y} \in {\left( {\mathbb{R}}^{ + }\right) }^{n}$ , the mechanism ${\mathrm{{EXPMECH}}}_{\mathbf{Y}}$ satisfies $\epsilon$ - ${d}_{\mathcal{W}}$ privacy.
+
+ In our example, to avoid the problem encountered by the regular EXPMECH, we can weight ${w}_{1}$ higher than ${w}_{2},\ldots ,{w}_{n}$ . For example, there exists a weighting $\mathbf{Y}$ such that the following loss is possible:
118
+
119
+ ${}^{5}$ A stochastic matrix is a square matrix whose rows are probability vectors.
120
+
121
+ Lemma 1. With $\mathcal{W} = \left\{ {{w}_{1},\ldots ,{w}_{n}}\right\}$ and ${d}_{\mathcal{W}}$ defined as above, when $\mathbf{Y} = \left( {1,1/\left( {n - 1}\right) ,1/\left( {n - 1}\right) ,\ldots }\right)$ , we have that
122
+
123
+ $$
124
+ \mathcal{L}\left( {\operatorname{EXPMECH}}_{\mathbf{Y}}\right) \leq \frac{\frac{1}{n - 1} + {e}^{-\epsilon /2}}{1 + {e}^{-\epsilon /2}}\mathcal{L}\left( \text{ EXPMECH }\right) .
125
+ $$
126
+
127
+ When $\epsilon \geq 2\log n$ , we have $\mathcal{L}\left( {\mathrm{{EXPMECH}}}_{\mathbf{Y}}\right) \leq \frac{2}{n - 1}\mathcal{L}\left( \text{ EXPMECH }\right)$ .
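The improvement promised by Lemma 1 can be checked numerically on the example space from Section 3.1; uniform weights recover the plain EXPMECH (helper names are ours):

```python
import numpy as np

def expmech_matrix(D, eps, Y=None):
    """EXPMECH_Y with output weights Y; uniform Y recovers the plain EXPMECH."""
    Y = np.ones(D.shape[0]) if Y is None else np.asarray(Y, dtype=float)
    W = Y * np.exp(-eps * D / 2.0)
    return W / W.sum(axis=1, keepdims=True)

def worst_case_loss(M, D):
    return float(np.max((M * D).sum(axis=1)))

# The example space: outlier w_1 plus a tight cluster w_2, ..., w_n.
n, eps, delta = 200, 1.0, 1e-3
D = np.full((n, n), delta)
np.fill_diagonal(D, 0.0)
D[0, 1:] = D[1:, 0] = 1.0
# Down-weight the dense cluster as in Lemma 1: Y = (1, 1/(n-1), ..., 1/(n-1)).
Y = np.r_[1.0, np.full(n - 1, 1.0 / (n - 1))]
plain = worst_case_loss(expmech_matrix(D, eps), D)
weighted = worst_case_loss(expmech_matrix(D, eps, Y), D)
assert weighted < plain
```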
128
+
129
+ This establishes that the EXPMECH is provably not optimal on our example metric space. However, one problem with the more general ${\mathrm{{EXPMECH}}}_{\mathbf{Y}}$ is that it is not clear how to set $\mathbf{Y}$ to optimize the loss other than the rule of thumb that dense elements should be weighted less. In the next section, we leave it to the LP solver to optimize these weights.
130
+
131
+ § 3.2 BALANCING LP LOSS AND SCALABILITY
132
+
133
+ To reduce the number of LP constraints required to find the optimal mechanism, our key idea is to add equality constraints in such a way that many of the original constraints in ${\mathcal{P}}_{\text{ OPTMECH }}$ are trivially satisfied. This potentially results in a much smaller LP; however, optimality is no longer guaranteed. The balance between optimality and LP size is decided by the number of equality constraints. In fact, we will develop a general framework for balancing this tradeoff, which we call ConstOPTMech (Algorithm 1).
134
+
135
+ Specifically, to obtain the LP describing ConstOPTMech, we start with ${\mathcal{P}}_{\text{ OPTMECH }}\left( \epsilon \right)$ and add non-negative variables $\left\{ {{Y}_{w} : w \in \mathcal{W}}\right\}$ . Then, for certain variables ${M}_{uv}$ , we add additional "weighted exponential mechanism-like" constraints: ${M}_{uv} = {Y}_{v}{e}^{-\epsilon {d}_{\mathcal{W}}\left( {u,v}\right) }$ . We leave the weights ${Y}_{v}$ to be optimized by the LP solver.
136
+
137
+ We allow deviations from the additional constraints in the form of a replacement function $I\left( v\right) : \mathcal{W} \rightarrow {2}^{\mathcal{W}}$ that returns the elements $u \in \mathcal{W}$ for which the weighted exponential mechanism should not be used to set ${M}_{uv}$ . To encode this, we add the following constraints:
138
+
139
+ $$
140
+ {M}_{uv} = {Y}_{v}{e}^{-\epsilon {d}_{\mathcal{W}}\left( {u,v}\right) }\forall u,v \in \mathcal{W};u \notin I\left( v\right) \tag{4}
141
+ $$
142
+
143
+ The replacement function $I\left( v\right)$ indicates where we do not want to use the exponential mechanism, and there are many candidates. We will later consider the following instantiation of this replacement function
144
+
145
+ $$
146
+ {I}_{\mathrm{{NN}},r}\left( v\right) = \{ u \in \mathcal{W} \mid v\text{ is an }r\text{-nearest neighbor of }u\} , \tag{5}
147
+ $$
148
+
149
150
+
151
+ which returns the elements $u$ such that $v$ is one of the $r$ nearest neighbors of $u$ . We employ this function ${I}_{\mathrm{{NN}},r}$ because the exponential mechanism already assigns exponentially low probabilities to returning the farthest elements from a given element, and we conjecture there is not much improvement to be made in such scenarios.
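The replacement sets ${I}_{\mathrm{{NN}},r}$ in (5) can be computed by inverting each element's nearest-neighbor list. A sketch (the function name is ours):

```python
import numpy as np

def replacement_sets(D, r):
    """Compute I_{NN,r}(v) of eq. (5): all u such that v is among the r nearest
    neighbors of u (excluding u itself)."""
    n = D.shape[0]
    I = {v: set() for v in range(n)}
    for u in range(n):
        order = np.argsort(D[u])  # nearest first; position 0 is u itself
        for v in order[1:r + 1]:
            I[int(v)].add(u)
    return I

# Four points on a line at 0, 1, 2.2 and 4.
pts = np.array([0.0, 1.0, 2.2, 4.0])
D = np.abs(pts[:, None] - pts[None, :])
I = replacement_sets(D, r=1)
assert I[1] == {0, 2}  # w_1 is the nearest neighbor of both w_0 and w_2
assert I[2] == {3}     # w_2 is the nearest neighbor of w_3
```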
152
+
153
+ Adding the constraints (4) to ${\mathcal{P}}_{\text{ OPTMECH }}\left( \epsilon \right)$ will satisfy the privacy constraints (2), but it may be impossible for $M$ to be stochastic. One can see this in the extreme example of setting $I\left( v\right) = \varnothing$ ; then (4) holds for all $u,v \in \mathcal{W}$ , and a non-negative assignment to the ${Y}_{v}$ that makes $M$ stochastic need not exist. To fix this, we relax the constraint that $M$ be stochastic, and only insist that its rows sum to at least 1. We add a penalization term involving a constant $\lambda > 0$ to the loss of each element to penalize the extent to which the rows sum to more than 1. As such, our loss function now takes the following form:
154
+
155
+ $$
156
+ \widetilde{\mathcal{L}}\left( {M,w}\right) = \mathop{\sum }\limits_{{u \in \mathcal{W}}}{M}_{wu}{d}_{\mathcal{W}}\left( {u,w}\right) + \lambda \mathop{\sum }\limits_{{u \in \mathcal{W}}}{M}_{wu} \tag{6}
157
+ $$
158
+
159
+ With the modified loss function and relaxed stochasticity requirement, we obtain the LP giving ConstOPTMech.
160
+
161
+ ${\mathcal{P}}_{\text{ CONSTOPTMECH }}\left( \epsilon \right) =$ minimize $k$ subject to
162
+
163
+ $$
164
+ \widetilde{\mathcal{L}}\left( {M,w}\right) \leq k\forall w \in \mathcal{W}
165
+ $$
166
+
167
+ $$
168
+ {M}_{uv} \geq 0,\mathop{\sum }\limits_{{v \in \mathcal{W}}}{M}_{uv} \geq 1\;\forall u,v \in \mathcal{W}
169
+ $$
170
+
171
+ $$
172
+ M\text{ satisfies (2) and (4) }
173
+ $$
174
+
175
+ $$
176
+ {Y}_{v} \geq 0\forall v \in \mathcal{W}
177
+ $$
178
+
179
+ Notice that this LP is always feasible because one valid solution is ${M}_{uv} = {Y}_{v}{e}^{-\epsilon {d}_{\mathcal{W}}\left( {u,v}\right) }$ for all $u,v$ : since there are no restrictions on the ${Y}_{v}$ variables, we can set them high enough so that $\mathop{\sum }\limits_{{v \in \mathcal{W}}}{M}_{uv} \geq 1$ .
180
+
181
+ The benefit of the equality constraints (4) is that we can drop a large number of the constraints in (2), as they are trivially satisfied. This allows us to find a solution to ${\mathcal{P}}_{\text{ CONSTOPTMECH }}\left( \epsilon \right)$ much faster than ${\mathcal{P}}_{\text{ OPTMECH }}\left( \epsilon \right)$ .
182
+
183
+ Theorem 1. ${\mathcal{P}}_{\text{ CONSTOPTMECH }}$ is feasible, and it is possible to solve it using a linear program with $n + 1 + \mathop{\sum }\limits_{{v \in \mathcal{W}}}\left| {I\left( v\right) }\right|$ variables and ${2n} + \mathop{\sum }\limits_{{v \in \mathcal{W}}}\left( {2{\left| I\left( v\right) \right| }^{2} + 3\left| {I\left( v\right) }\right| }\right)$ constraints. The number of non-zero coefficients in the LP is at most $2{n}^{2} + \mathop{\sum }\limits_{{v \in \mathcal{W}}}\left( {2{\left| I\left( v\right) \right| }^{2} + 5\left| {I\left( v\right) }\right| }\right)$ .
184
+
185
+ Our choice to drop the stochasticity requirement of $M$ gave us feasibility of ${\mathcal{P}}_{\text{ CONSTOPTMECH }}$ , but the solution $M$ is no longer a mechanism because it is not stochastic. We obtain ConstOPTMech by normalizing the solution to ${\mathcal{P}}_{\text{ CONSTOPTMECH }}\left( \epsilon \right)$ . Furthermore, any choice of $\lambda$ gives rise to a valid mechanism.
186
+
187
+ Mechanism CONSTOPTMECH uses half of the privacy budget for solving ${\mathcal{P}}_{\text{ CONSTOPTMECH }}\left( \frac{\epsilon }{2}\right)$ because normalization may increase the privacy guarantee by a factor of 2.
+
+ Algorithm 1: Mechanism ConstOPTMech
+
+ Data: Universe $\mathcal{W}$ , metric ${d}_{\mathcal{W}}$ , budget $\epsilon$ , replacement function $I\left( v\right)$ , $\lambda \in {\mathbb{R}}^{ + }$ .
+
+ Result: Transition matrix $H$ .
+
+ $M$ , loss $\leftarrow$ Solve $\left( {{\mathcal{P}}_{\text{ ConstOPTMech }}\left( \frac{\epsilon }{2}\right) }\right)$ with $\lambda$ ;
+
+ for $u,v \in \mathcal{W}$ do
+
+ ${H}_{uv} \leftarrow \frac{{M}_{uv}}{\mathop{\sum }\limits_{{w \in \mathcal{W}}}{M}_{uw}};$
+
+ return $H$
+
+ We are able to show the following privacy guarantee on CONSTOPTMECH. The proof is a generalization of Proposition 1.
206
+
207
+ Theorem 2. For any set $\mathcal{W}$ , metric ${d}_{\mathcal{W}}$ , budget $\epsilon$ , and replacement function $I\left( v\right) : \mathcal{W} \rightarrow {2}^{\mathcal{W}}$ , Mechanism CONSTOPTMECH satisfies $\epsilon$ - ${d}_{\mathcal{W}}$ privacy.
208
+
209
+ Setting the Replacement Function. In Theorem 1, when the replacement function $I\left( v\right) = {I}_{\mathrm{{NN}},r}\left( v\right)$ , the number of variables in ${\mathcal{P}}_{\text{ CONSTOPTMECH }}$ is at most ${nr} + n + 1$ , the number of constraints is at most ${n}^{2}r + {3nr} + {2n}$ , and the number of non-zero elements is at most $2{n}^{2} + {5nr} + 2{n}^{2}r$ (see Corollary 1).
210
+
211
+ When we apply ${I}_{\mathrm{{NN}},r}$ with $r = n - 1$ , we add no effective equality constraints, and the LP size of ${\mathcal{P}}_{\text{ CONSTOPTMECH }}$ is $O\left( {n}^{3}\right)$ , the same as that of ${\mathcal{P}}_{\text{ OPTMECH }}$ . When $r$ is a constant much less than $n$ , the number of variables is $O\left( {nr}\right)$ and the number of constraints is $O\left( {{n}^{2}r}\right)$ , each saving a factor of $n$ . We note that $O\left( {{n}^{2}r}\right)$ is a worst-case bound that is not always tight. If one assumes that there is no $v \in \mathcal{W}$ such that at least ${10r}$ other elements of $\mathcal{W}$ count $v$ as an $r$ -nearest neighbor, then $\mathop{\sum }\limits_{{v \in \mathcal{W}}}\left( {{\left| I\left( v\right) \right| }^{2} + \left| {I\left( v\right) }\right| }\right) \leq \mathop{\sum }\limits_{{v \in \mathcal{W}}}{110}{r}^{2} = O\left( {n{r}^{2}}\right) .$
212
+
213
+ In Table 1, we compare the number of variables, constraints, and non-zero coefficients arising in ${\mathcal{P}}_{\text{ CONSTOPTMECH }}$ as compared to those in the optimal mechanism $\left( {\mathcal{P}}_{\text{ OPTMECH }}\right)$ and the LP based on spanner graphs [Bordenabe et al., 2014] (referred to as ${\mathcal{P}}_{\text{ SPANNERMECH }}$ , see Appendix A). We see that ${\mathcal{P}}_{\text{ CONSTOPTMECH }}$ improves on all three of these quantities compared to the other two LPs when $r \ll n$ . However, we note that these are worst-case upper bounds, and in practice the LP complexity measures may be smaller. Based on the setting of $r$ , we have a tradeoff between scalability and increase to the loss compared to the optimal mechanism. We perform an empirical analysis of this tradeoff in Section 5.
214
+
215
+ § 4 LOWER BOUNDS
216
+
217
+ In this section, we propose an easy-to-compute lower bound on mechanism loss. Our lower bound builds on the intuition that for an element $w \in \mathcal{W}$ , if there are many elements that are far, but not too far, from $w$ , then mDP forces the distribution $\mathcal{M}\left( w\right)$ to place significant mass on the elements which are farther away. This gives a lower bound on the loss of $\mathcal{M}$ . To make this intuition formal, we define a packing of $\mathcal{W}$ to be a set of elements which are at least a certain distance from each other. In the following, let $B\left( {x,r}\right)$ denote the set of elements $y \in \mathcal{W}$ such that ${d}_{\mathcal{W}}\left( {x,y}\right) \leq r$ .
220
+
221
+ Definition 2. Let $\mathcal{W}$ be a set. A finite set $S \subseteq \mathcal{W}$ is called a $\left( {c,r,Q}\right)$ -packing w.r.t. metric ${d}_{\mathcal{W}}$ if the following hold: $\left| S\right| = c$ ; for all $x,{x}^{\prime } \in S$ , $B\left( {x,r}\right) \cap B\left( {{x}^{\prime },r}\right) = \varnothing$ , i.e., the balls around the elements in $S$ of radius $r$ are disjoint; and for all $x,{x}^{\prime } \in S$ , ${d}_{\mathcal{W}}\left( {x,{x}^{\prime }}\right) \leq Q$ , i.e., the maximum distance between any two elements in $S$ is at most $Q$ .
222
+
223
+ The lower bound we derive holds for any $\left( {c,r,Q}\right)$ -packing of the metric space $\mathcal{W}$ . The catch is that if a packing with a small $r$ or $c$ is used, the bound will not be strong. Our lower bound involves the quantity $N\left( {w,S}\right)$ that depends on a $w \in \mathcal{W}$ and a $\left( {c,r,Q}\right)$ -packing $S$ . $N\left( {w,S}\right)$ is given by:
224
+
225
+ $$
226
+ N\left( {w,S}\right) = \mathop{\sum }\limits_{{s \in S}}\exp \left( {-{d}_{\mathcal{W}}\left( {w,s}\right) \epsilon }\right) ,
227
+ $$
228
+
229
+ We also have the lower bound $N\left( {w,S}\right) \geq 1 + \left( {c - 1}\right) \exp \left( {-{Q\epsilon }}\right)$ , which follows because $S$ is a $\left( {c,r,Q}\right)$ -packing. Notice that $N\left( {w,S}\right) \geq 1$ (because $w \in S$ and ${d}_{\mathcal{W}}\left( {w,w}\right) = 0$ ), and $N\left( {w,S}\right)$ grows linearly with the number of elements in $S$ and grows exponentially when the elements in $S$ are closer to $w$ or when $\epsilon$ decreases. This represents the increasing amount of mass that must be placed on these elements according to mDP. Our lower bound will grow stronger with increasing $N\left( {w,S}\right)$ . The lower bound is as follows (proof in Appendix B).
230
+
231
+ Theorem 3. Consider an arbitrary set $\mathcal{W}$ and metric ${d}_{\mathcal{W}} : \mathcal{W} \times \mathcal{W} \rightarrow \mathbb{R}$ . Then, for any $\epsilon > 0$ , any mechanism $\mathcal{M}$ satisfying $\epsilon$ - ${d}_{\mathcal{W}}$ privacy and any $\left( {c,r,Q}\right)$ -packing $S$ of $\mathcal{W}$ , it holds that
232
+
233
+ $$
234
+ \mathcal{L}\left( \mathcal{M}\right) \geq \mathop{\max }\limits_{{w \in \mathcal{W}}}r\left( {1 - \frac{1}{N\left( {w,S}\right) }}\right) . \tag{7}
235
+ $$
236
+
237
+ It follows that
238
+
239
+ $$
240
+ \mathcal{L}\left( \mathcal{M}\right) \geq r\left( {1 - \frac{1}{1 + \left( {c - 1}\right) \exp \left( {-{Q\epsilon }}\right) }}\right) . \tag{8}
241
+ $$
242
+
243
+ Both (7) and (8) have a simple interpretation. The $r$ term represents the minimum loss that must be incurred when a mechanism returns an element that is in a different ball than the starting ball in the packing. The $r$ term is multiplied by $P = 1 - \frac{1}{N\left( {w,S}\right) }$ , which can be interpreted as a probability since it is between 0 and 1 (since $N\left( {w,S}\right) \geq 1$ ). As we show in the theorem proof, $P$ is a lower bound on the probability that the mechanism returns an element in a different ball from $w$ and thus incurs the error $r$ . $P$ increases with $N\left( {w,S}\right)$ , which depends on the packing in the ways we identified above. $P$ is small only when $N\left( {w,S}\right)$ approaches 1, and using the bound $N\left( {w,S}\right) \geq 1 + \left( {c - 1}\right) \exp \left( {-{Q\epsilon }}\right)$ , we see the central term controlling its closeness to 1 is $\left( {c - 1}\right) \exp \left( {-{Q\epsilon }}\right)$ . Here the parameter $Q$ crucially comes into play because if the elements in the packing are too far apart, then mDP is a weak privacy guarantee, $N\left( {w,S}\right)$ will approach 1, and the lower bound will weaken.
244
+
245
+ | Method | #Variables | #Constraints | #Non-Zeroes |
+ | --- | --- | --- | --- |
+ | Optimal LP: ${\mathcal{P}}_{\text{ OPTMECH }}$ | $O\left( {n}^{2}\right)$ | $O\left( {n}^{3}\right)$ | $O\left( {n}^{3}\right)$ |
+ | Spanner-based LP: ${\mathcal{P}}_{\text{ SPANNERMECH }}$ [Bordenabe et al., 2014] | $O\left( {n}^{2}\right)$ | $O\left( {n}^{2.5}\right)$ | $O\left( {n}^{2.5}\right)$ |
+ | Our Method: ${\mathcal{P}}_{\text{ CONSTOPTMECH }}$ with function ${I}_{\mathrm{{NN}},r}$ | $O\left( {nr}\right)$ | $O\left( {{n}^{2}r}\right)$ | $O\left( {{n}^{2}r}\right)$ |
259
+
260
+ Table 1: Comparison of the number of variables and constraints for the various LP-based methods achieving mDP. Note that ${\mathcal{P}}_{\text{ CONSTOPTMECH }}$ improves on existing methods when $r \ll n$ .
261
+
262
+ An important special case of our theorem occurs when we take $S$ to be the two farthest elements ${w}_{\max }^{1},{w}_{\max }^{2} \in \mathcal{W}$ . In this case, $S$ is a $\left( {2,\frac{{r}^{ * }}{2},{r}^{ * }}\right)$ -packing where ${r}^{ * } = {d}_{\mathcal{W}}\left( {{w}_{\max }^{1},{w}_{\max }^{2}}\right)$ . Our lower bound then reads $\mathcal{L}\left( \mathcal{M}\right) \geq \frac{{r}^{ * }}{2}\left( \frac{\exp \left( {-{r}^{ * }\epsilon }\right) }{1 + \exp \left( {-{r}^{ * }\epsilon }\right) }\right)$ .
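This special case gives a lower bound computable from the distance matrix alone. A sketch (the function name is ours):

```python
import numpy as np

def diameter_lower_bound(D, eps):
    """Lower bound from the (2, r*/2, r*)-packing of the two farthest elements:
    L(M) >= (r*/2) * exp(-r* eps) / (1 + exp(-r* eps))."""
    r_star = float(D.max())
    return (r_star / 2.0) * np.exp(-r_star * eps) / (1.0 + np.exp(-r_star * eps))

# Two points at distance 1, eps = 1: every 1-d_W private mechanism
# has worst-case loss at least (1/2) * e^{-1} / (1 + e^{-1}).
D = np.array([[0.0, 1.0], [1.0, 0.0]])
lb = diameter_lower_bound(D, eps=1.0)
assert abs(lb - 0.5 * np.exp(-1.0) / (1.0 + np.exp(-1.0))) < 1e-12
```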
263
+
264
+ § 5 EXPERIMENTAL RESULTS
265
+
266
+ We investigate through experiments how the loss of our proposed mechanism, ConstOPTMech, compares to other state-of-the-art mDP mechanisms. We also include comparisons to our loss lower bound (derived in Section 4). Furthermore, we perform studies to compare ConstOPTMech to SpannerMech, as it is the most directly related work. To do this, we experimentally evaluate the complexity of solving the LPs used to compute both mechanisms. We focus on text embeddings and geolocation metric spaces because, as noted in Section 1, mDP mechanisms have primarily been used for privately releasing text and location data.
267
+
268
+ § 5.1 EXPERIMENTAL SETUP
269
+
270
+ Our experiments consist of generating metric spaces in both application domains and then running two types of experiments. The first evaluates privacy vs. loss on a fixed metric space. The second evaluates scalability as the size of the metric space grows.
271
+
272
+ We measure utility (loss) of a mechanism based on (1). Since this loss is agnostic to any downstream modeling task performed on these private releases, we do not focus on any specific downstream task.
273
+
274
+ Metric Space Generation. To produce a metric space of a specific size, we sample metric spaces differently depending on the application.
275
+
276
+ For text embeddings, we sample from a base metric space consisting of the set of English words and the metric ${d}_{\mathcal{W}}$ induced by a text embedding $\phi : \mathcal{W} \rightarrow {\mathbb{R}}^{d}$ . Precisely, we used ${d}_{\mathcal{W}}\left( {u,v}\right) = d\left( {\phi \left( u\right) ,\phi \left( v\right) }\right)$ , where $d$ is the Euclidean distance. We used both the FastText [Bojanowski et al., 2017] and the GloVe embedding [Pennington et al., 2014] for our embedding $\phi$ . To sample a metric space from a base metric space, we selected a subset ${\mathcal{W}}^{\prime }$ of English words. Instead of selecting ${\mathcal{W}}^{\prime }$ at random, which would likely produce a set of completely unrelated words with roughly the same distance between each pair, we used a clustered approach. First, we let ${\mathcal{W}}^{\prime }$ consist of one random word. To sample another word, we add a random word to ${\mathcal{W}}^{\prime }$ with ${50}\%$ chance. Otherwise we select a random word $w$ in ${\mathcal{W}}^{\prime }$ and add one of $w$ ’s 50 closest English words according to ${d}_{\mathcal{W}}$ . This allows us to produce samples of the larger metric space that have representative clusters of words. We repeat this process until the metric space has the desired size.
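The clustered sampling procedure can be sketched as follows; random vectors stand in for the FastText/GloVe embedding table here, and the function name is ours:

```python
import numpy as np

def sample_metric_space(emb, size, k=50, rng=None):
    """Clustered sampling: with prob. 1/2 add a uniformly random word; otherwise
    add one of the k closest words to a word already in the sample."""
    rng = np.random.default_rng(0) if rng is None else rng
    words = list(emb)
    V = np.array([emb[w] for w in words])
    chosen = {words[rng.integers(len(words))]}
    while len(chosen) < size:
        if rng.random() < 0.5:
            chosen.add(words[rng.integers(len(words))])
        else:
            w = list(chosen)[rng.integers(len(chosen))]
            dists = np.linalg.norm(V - emb[w], axis=1)
            nbrs = np.argsort(dists)[1:k + 1]  # skip w itself
            chosen.add(words[nbrs[rng.integers(len(nbrs))]])
    return sorted(chosen)

# Random vectors stand in for a real embedding table.
rng = np.random.default_rng(42)
emb = {f"w{i}": rng.normal(size=8) for i in range(300)}
W = sample_metric_space(emb, size=40)
assert len(W) == 40 and set(W) <= set(emb)
```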

For the geolocation application, we used the method of Bordenabe et al. [2014], which uses the Geolife [Zheng et al., 2010] dataset. The Geolife dataset consists of 17621 location traces clustered around Beijing. We divide Beijing into rectangular regions of ${0.005}^{ \circ }$ (about ${0.6}\mathrm{\;{km}}$ ) in width and height. For each trace, we consider its top 30 regions, and we form a histogram of top regions across all traces. To form a metric space of size $n$ , we take the $n$ most popular regions in the histogram. Our metric is the Euclidean distance between the centers of the regions.

Performance Benchmarks. We use the following benchmarks to measure mechanism privacy, loss, and scalability.

Privacy: In practice it is usually acceptable to use $\left( {\epsilon ,\delta }\right)$ -DP for some small $\delta$ . We adopt $\left( {\epsilon ,\delta }\right)$ -mDP for our experiments, as we do not want to penalize an algorithm for having some small probability of two elements $u,v$ being distinguished. We say $M$ satisfies $\left( {\epsilon ,\delta }\right)$ -mDP if for any $u,v \in \mathcal{W}$ and $S \subseteq \mathcal{W}$ , we have

$$
\Pr \left\lbrack {M\left( u\right) \in S}\right\rbrack \leq {e}^{\epsilon {d}_{\mathcal{W}}\left( {u,v}\right) }\Pr \left\lbrack {M\left( v\right) \in S}\right\rbrack + \delta . \tag{9}
$$

For a fixed $\delta$ , we let ${\epsilon }_{\text{tight}}$ be the smallest $\epsilon$ such that $M$ satisfies $\left( {\epsilon ,\delta }\right)$ -mDP:

$$
{\epsilon }_{\text{tight}}\left( M\right) = \inf \left\{ {\epsilon \geq 0 : M\text{ satisfies }\left( {\epsilon ,\delta }\right) \text{-mDP}}\right\} \tag{10}
$$

In our experiments, we set $\delta = {0.001}$ .
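For a mechanism with finite support, ${\epsilon }_{\text{tight}}$ can be computed directly from its transition matrix: for a fixed pair $(u, v)$ and fixed $\epsilon$, the worst-case output set $S$ in (9) is exactly the set of outputs where $\Pr[M(u) = y]$ exceeds ${e}^{\epsilon {d}_{\mathcal{W}}(u,v)}\Pr[M(v) = y]$. The following is our own sketch (not the paper's code) that binary-searches for the smallest such $\epsilon$:

```python
import numpy as np

def delta_at_eps(P, D, eps):
    """Worst (eps, delta)-mDP violation of a finite mechanism with
    transition matrix P (row u: output distribution of M(u)) under
    metric D. For each pair (u, v), the worst-case output set in (9)
    collects exactly the outputs where P[u] exceeds e^{eps d(u,v)} P[v]."""
    n = P.shape[0]
    worst = 0.0
    for u in range(n):
        for v in range(n):
            if u != v:
                gap = P[u] - np.exp(eps * D[u, v]) * P[v]
                worst = max(worst, gap.clip(min=0.0).sum())
    return worst

def eps_tight(P, D, delta=1e-3, hi=50.0, iters=60):
    """Binary search for the smallest eps whose violation is at most
    delta; valid because delta_at_eps is nonincreasing in eps."""
    lo = 0.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if delta_at_eps(P, D, mid) <= delta:
            hi = mid
        else:
            lo = mid
    return hi
```

The upper search bound `hi` is an arbitrary cap and would be chosen larger than any $\epsilon$ of interest.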

Loss: For practical considerations, we use a more robust measurement of loss in our experiments, where it may not be problematic if the mechanism performs poorly on a small fraction of elements. Instead of using the maximum loss over all elements $\mathcal{L}\left( M\right)$ (1), we use the $q$ th-quantile over the set $\{ \mathcal{L}\left( {M,w}\right) : w \in \mathcal{W}\}$ :

$$
{\mathcal{L}}_{q}\left( M\right) = {\operatorname{quantile}}_{q}\left( {\{ \mathcal{L}\left( {M,w}\right) : w \in \mathcal{W}\} }\right) \tag{11}
$$

${}^{6}$ In this section, we use ConstOPTMech to denote Algorithm 1 invoked with replacement function ${I}_{\mathrm{{NN}},r}$ .

Figure 1: Loss of Madlib (EuclidMech), EXPMECH, ConstOPTMech, SpannerMech, and OPTMech versus ${\epsilon }_{\text{tight}}$ on 50- and 200-size metric spaces generated from FastText and Geolife, along with the lower bound. The horizontal line indicates the loss of returning a uniform random element.

This loss estimate allows mechanisms to perform poorly on a small subset of the metric space, which in practice may be outlier or noisy data. In all experiments, we use $q = {95}\%$ so that mechanisms are evaluated based on their losses on the best 95% of elements.
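For a finite mechanism, (11) reduces to a one-line quantile computation. The sketch below is our own illustration and assumes the per-element loss $\mathcal{L}(M, w)$ is the expected output distance $\sum_y \Pr[M(w) = y]\, d_{\mathcal{W}}(w, y)$:

```python
import numpy as np

def quantile_loss(P, D, q=0.95):
    """L_q sketch for a finite mechanism: row w of P holds the output
    distribution of M(w). Assumes (see lead-in) that the per-element
    loss L(M, w) is the expected output distance sum_y P[w, y] D[w, y]."""
    per_element = (P * D).sum(axis=1)
    return float(np.quantile(per_element, q))
```

With `q = 0.95`, the 5% of elements with the largest loss are discounted, as described above.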

LP Scalability: To measure the scalability of our mechanisms, we measured the time taken and the number of nonzero coefficients (NNZ) used in the LPs. We use the number of nonzero coefficients rather than the number of variables or constraints since LP solvers tend to be optimized toward solving sparse LPs. We also consider computation time to be an important measure, as it captures complexity beyond the NNZ. For mechanisms that do not require linear programs, the computational requirements are trivial and we do not test them.

Specific Details for Each Mechanism. We tested five mechanisms: a) our proposed ConstOPTMech, b) OPTMech (based on solving ${\mathcal{P}}_{\text{OPTMECH}}$ ), c) SpannerMech [Bordenabe et al., 2014], d) the Madlib mechanism [Feyisetan et al., 2020], and e) EXPMECH [McSherry and Talwar, 2007]. Madlib, EXPMECH, and OPTMech have no parameters other than $\epsilon$ . For SpannerMech, we implement the algorithm as it is described in Bordenabe et al. [2014].

Mechanism ConstOPTMech takes $\lambda$ and $I\left( v\right)$ as parameters (see Algorithm 1). We optimize over $\lambda$ with possible values in $\{ {0.001},{0.1},{1.0}\}$ . For the replacement function $I\left( v\right)$ , we use ${I}_{\mathrm{{NN}},r}\left( v\right)$ (5). We try both $r = 5$ and $r = {10}$ , and we designate these values in our results.
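A minimal sketch of an $r$-nearest-neighbour replacement function in the spirit of ${I}_{\mathrm{{NN}},r}$ (our own illustration; whether the element itself is counted among its $r$ neighbours is an assumption noted in the comment):

```python
import numpy as np

def nn_replacement(D, r):
    """Sketch of an r-nearest-neighbour replacement function: for each
    element v, return the indices of its r closest elements under the
    metric D (here assumed to include v itself, which is at distance 0)."""
    order = np.argsort(D, axis=1, kind="stable")
    return {v: set(order[v, :r].tolist()) for v in range(D.shape[0])}
```

Restricting the LP's free transition-matrix entries to these neighbour sets is what keeps the number of variables at $O(nr)$ rather than $O(n^2)$.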

Evaluating Lower Bound. For each metric space, we computed our lower bound according to Theorem 3. This theorem produces a lower bound for any $(c, r, Q)$ -packing of $\mathcal{W}$ and any $\epsilon$ . However, it is infeasible to try every possible $(c, r, Q)$ -packing. Instead, we generated candidate $(c, r, Q)$ -packings using a $k$ -center algorithm, with values of $k$ varying from 1 to the size of the metric space. For each value of $\epsilon$ that we tested, we used the strongest lower bound given by one of our generated $(c, r, Q)$ -packings.

Experimental Outline. The first experiments we conducted are utility experiments. We test which mechanisms are better at minimizing loss, subject to privacy constraints. To do this, we plot ${\mathcal{L}}_{q}$ versus ${\epsilon }_{\text{tight}}$ for metric spaces consisting of 50 and 200 elements generated from FastText, GloVe, or Geolife. We also plot the lower bound. For ConstOPTMech, we use $r = {10}$ . We do not run OPTMech for metric spaces of size 200, as the number of constraints would be ${200}^{3}$ , which is too large. We do not include comparisons to Madlib for the Geolife dataset, as it is designed for text embeddings.

Next, we conduct scalability experiments on the mechanisms that involve solving LPs. We do this by fixing a privacy constraint ${\epsilon }_{\text{tight}}$ and, for each mechanism, measuring the NNZ and time taken as the size of the metric space grows. We increased the number of samples in the metric space starting at 50 and increasing in increments of 50 until we reached 400 elements or the time spent solving the LP exceeded 1800 seconds. We fix ${\epsilon }_{\text{tight}} = {2.0}$ for metric spaces sampled from FastText, at 1.0 for those sampled from GloVe, and at 0.3 for those sampled from Geolife.

§ 5.2 RESULTS

We discuss the experimental results for metric spaces generated from FastText and Geolife. The results for GloVe are similar, and they appear in Appendix C.

Utility Experiments: Plots appear in Figure 1. In all tests, ConstOPTMech has lower loss than all other non-optimal mechanisms at all values of $\epsilon$ . This includes high-privacy regimes, where the loss is near the loss of returning a uniformly random element, and low-privacy regimes, where the loss approaches zero. The most pronounced improvement in loss occurs in middle ranges of $\epsilon$ (about $\left\lbrack {{1.0},{3.5}}\right\rbrack$ for FastText and about $\left\lbrack {{0.3},{0.8}}\right\rbrack$ for Geolife), where ConstOPTMech offers an improvement of about ${15} - {30}\%$ over all other non-optimal mechanisms. For example, when $\epsilon = 3$ on the 200-size FastText sample, ConstOPTMech offers a loss of 1.0, while the next-best mechanism, EXPMECH, offers a loss of 1.3. This represents a ${23}\%$ reduction. On the metric space sampled from Geolife of size 50, the loss reduction is as high as ${50}\%$ at $\epsilon = {0.3}$ . The middle ranges of $\epsilon$ where ConstOPTMech is superior are the values with the most practical importance, since at these ranges the losses are far from the random baseline yet are still nonzero: the mechanisms are offering both utility and privacy.

On the sampled metric spaces of size 50, ConstOPTMech attains only slightly worse loss than OPTMech, the optimal mechanism. As $\epsilon$ grows past 1.5 for FastText and 0.7 for

| FastText | 50 elements | 150 elements | 300 elements | 400 elements |
| --- | --- | --- | --- | --- |
| ConstOPTMech (10) | 1.89 sec, 3.23e4 nnz | 24.91 sec, 1.47e5 nnz | 163.75 sec, 3.79e5 nnz | 234.63 sec, 5.71e5 nnz |
| ConstOPTMech (5) | 0.47 sec, 1.21e4 nnz | 12.85 sec, 6.60e4 nnz | 65.56 sec, 2.19e5 nnz | 141.90 sec, 3.70e5 nnz |
| SpannerMech | 1.25 sec, 2.12e4 nnz | 62.16 sec, 2.56e5 nnz | 1001.82 sec, 1.19e6 nnz | > 1800 sec, — nnz |
| OPTMech | 8.85 sec, 2.50e5 nnz | > 1800 sec, — nnz | > 1800 sec, — nnz | > 1800 sec, — nnz |

| Geolife | 50 elements | 150 elements | 300 elements | 400 elements |
| --- | --- | --- | --- | --- |
| ConstOPTMech (10) | 1.41 sec, 1.74e4 nnz | 13.85 sec, 8.05e4 nnz | 80.24 sec, 2.51e5 nnz | 169.52 sec, 4.13e5 nnz |
| ConstOPTMech (5) | 0.40 sec, 8.30e3 nnz | 12.53 sec, 5.47e4 nnz | 66.28 sec, 1.99e5 nnz | 145.74 sec, 3.46e5 nnz |
| SpannerMech | 0.95 sec, 1.56e4 nnz | 40.38 sec, 1.49e5 nnz | 928.72 sec, 6.05e5 nnz | > 1800 sec, — nnz |
| OPTMech | 25.39 sec, 2.50e5 nnz | > 1800 sec, — nnz | > 1800 sec, — nnz | > 1800 sec, — nnz |

Table 2: Computation times and memory requirements for computing ConstOPTMech when $r = {10},5$ ; SpannerMech; and OPTMech, for varying metric space sizes generated from the FastText and Geolife datasets. Each mechanism satisfies ${\epsilon }_{\text{tight}} = {2.0}$ (FastText) and 0.3 (Geolife).

Figure 2: Loss of EXPMECH, ConstOPTMech (with $r = 5, {10}$ ), and SpannerMech versus size of metric space for metric spaces generated from FastText and Geolife. Here, ${\epsilon }_{\text{tight}}$ is fixed at 2.0 (FastText) and 0.3 (Geolife).

Geolife, their losses become virtually the same. Because we are using $r = {10}$ , this means just ${50} \times {10} = {500}$ entries out of the 2500 entries in the transition matrix are not fixed. This suggests that the 10 nearest neighbors of an element play the largest role in minimizing the element's loss.

In all scenarios tested, there is a large gap between the lower bound and the losses of the mechanisms, even the optimal mechanism. Hence, it is uncertain how close ConstOPTMech is to OPTMech on the metric spaces of size 200.

Scalability Experiments: We were able to run ConstOPTMech up to 400 elements, whereas OPTMech timed out at 100 elements and SpannerMech timed out at 350 elements. Table 2 shows some of the time and NNZ data for the mechanisms. These results indicate that computing ConstOPTMech is faster than computing SpannerMech. This is particularly evident for the metric spaces with size 150 (resp. 300), where ConstOPTMech with $r = {10}$ uses at most ${40}\%$ (resp. ${16}\%$ ) as much time as SpannerMech, and ConstOPTMech with $r = 5$ uses at most ${20}\%$ (resp. 6.5%) as much time. These faster times come despite our optimization of $\lambda$ in ConstOPTMech, which requires solving three LPs. In other words, the actual time to solve one LP used in ConstOPTMech is one third of the reported times.

In terms of NNZ, on the metric spaces with size 150 (resp. 300), ConstOPTMech with $r = {10}$ uses at most ${57}\%$ (resp. ${32}\%$ ) as many nonzero coefficients as SpannerMech, and ConstOPTMech with $r = 5$ uses at most 37% (resp. 18%) as many. One reason these savings are smaller than the time improvements is that ConstOPTMech uses LPs that are simpler than the LPs used by SpannerMech, which, for example, have more variables ( $O\left( {n}^{2}\right)$ variables versus $O\left( {nr}\right)$ ).

All the previous mechanisms improve on OPTMech, which uses moderate time and NNZ on metric spaces with 50 elements and does not scale to 100 elements and beyond.

In Figure 2, we show our plots of loss versus the size of the metric space when ${\epsilon }_{\text{tight}}$ is fixed. The plots confirm that ConstOPTMech also has lower loss than SpannerMech. At metric space sizes below 200, ConstOPTMech with $r = {10}$ has approximately a ${22}\%$ (resp. ${12}\%$ ) reduction in loss compared to SpannerMech on FastText (resp. Geolife), though this reduces to about ${10}\%$ on FastText (resp. $< 5\%$ on Geolife) for metric space sizes greater than 200. ConstOPTMech with $r = 5$ similarly outperforms SpannerMech on smaller metric spaces, though on larger FastText metric spaces, ConstOPTMech with $r = 5$ performs worse than SpannerMech, a symptom of using too few nearest neighbors. On large metric spaces, more nearest neighbors must be used to maintain low loss.

§ 6 CONCLUSION

We tackle the problem of designing scalable metric differential privacy mechanisms that achieve near-optimal utility. Our new mechanism combines the optimal LP-based mechanism and the exponential mechanism to achieve a better utility-scalability tradeoff than existing mechanisms. We also provide a simple-to-compute lower bound that improves our understanding of the optimal utility. Our experiments show that our mechanism is computationally tractable on larger metric spaces while almost matching the utility of the optimal LP-based mechanism. While our mechanism operates on any metric space, an interesting question is whether the geometry of the metric space can be leveraged to improve either utility or scalability.
UAI/UAI 2022/UAI 2022 Conference/B0l_lDLs9gq/Initial_manuscript_md/Initial_manuscript.md ADDED
The diff for this file is too large to render. See raw diff
 
UAI/UAI 2022/UAI 2022 Conference/B0l_lDLs9gq/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,323 @@
§ INTERVENTION TARGET ESTIMATION IN THE PRESENCE OF LATENT VARIABLES

§ ABSTRACT

This paper considers the problem of estimating unknown intervention targets in causal directed acyclic graphs from observational and interventional data in the presence of latent variables. The focus is on linear structural equation models with soft interventions. The existing approaches to this problem involve extensive conditional independence tests, and they estimate the unknown intervention targets alongside learning the structure of the causal model in its entirety. This joint learning approach to estimating the intervention targets results in algorithms that are not scalable as graph sizes grow. This paper proposes an approach that does not necessitate learning the entire causal model and focuses on learning only the intervention targets. The key idea of this approach is the property that interventions impose sparse changes in the precision matrix of a linear model. Leveraging this property, the proposed framework consists of a sequence of precision difference estimation steps. Furthermore, the necessary knowledge to refine an observational Markov equivalence class (MEC) to an interventional MEC is inferred. Simulation results are provided to illustrate the scalability of the proposed algorithm and compare it with those of the existing approaches.

§ 1 INTRODUCTION

Enabling modern machine learning systems to reason involves predicting the effect of an intervention and counterfactual estimation [Pearl, 2009]. Forming such predictions crucially depends on the knowledge of causal models [Pearl and Mackenzie, 2018]. One approach to represent causal knowledge is through a causal Bayesian network, which is a directed graphical model specified by a directed acyclic graph (DAG). The nodes of a DAG represent random variables, and its directed edges represent the cause-and-effect relationships among the random variables. Such a model facilitates factorizing the observed distribution, where each factor is a conditional distribution of a variable given its causal parents. These conditionals specify the local causal mechanisms of the variables. Based on purely observational data, a causal DAG is identifiable only up to an equivalence class of DAGs. This is due to the fact that different DAGs can encode different ways of factorizing the same observed distribution into conditionals. The equivalence class of DAGs that can be identified from the observational data alone is called the Markov equivalence class (MEC) [Peters et al., 2017].

To reduce the ambiguity in the MEC obtained from the observational data, interventional data can be leveraged. An intervention on a variable refers to modifying the causal mechanism (the conditional distribution) that connects this variable and its parents in the true causal DAG while leaving the other factors unchanged. The combination of observational and interventional data reduces the number of possible factorizations that are consistent with both data types. In this paper, we perform soft interventions. A soft intervention induces a change in the causal mechanism by replacing it with a different one, without requiring the causal effects on the target node to be removed. While hard interventions, e.g., assigning fixed values to intervention targets, can be performed too, they can be difficult in practice [Eaton and Murphy, 2007].

In many applications, when interventional data is available, the set of variables whose causal mechanisms have been changed, called the intervention targets, is unknown. For instance, there is a recent growing interest in using causal discovery for fault localization in systems of microservices in cloud-native applications [Bogatinovski et al., 2021, Aggarwal et al., 2020]. During the faulty operation of these systems, it is imperative to localize the faults quickly. The faults are modeled as soft interventions in a microservice causing a delay or a lack of response. Hence, the data is collected under (unknown) faults, rendering fault localization a causal discovery task from interventional data with unknown intervention targets. Another example is the gene knockout experiment in biology. In these experiments, a target set of genes is knocked out in an assay and gene expressions are collected. Such knockouts are known to have effects on off-target genome sites [Fu et al., 2013]. Sometimes drugs are injected into protein signaling networks, and the expression levels are measured. In these settings, the intervention targets are unknown [Sachs et al., 2005, Ness et al., 2017].

Identifying unknown intervention targets in fully observed graphs was recently explored [Varici et al., 2021]. In this paper, however, not all variables of the true causal DAG are observed. This induces confounding between observed variables due to unobserved or latent variables. A model with such confounding is called a causally insufficient model. Recent studies have characterized the interventional MEC for causally insufficient models and have provided algorithms for learning their structures. These algorithms leverage invariance testing and conditional independence testing by using both interventional and observational data, and accommodate both settings of known and unknown intervention targets [Mooij et al., 2020, Kocaoglu et al., 2019, Jaber et al., 2020]. In these algorithms, the intervention targets are usually learned along with the interventional MEC. In this paper, we focus on the following question: is there an efficient way to learn only the intervention targets given interventional and observational data?

Our Contributions: We address the above question in linear structural equation models (SEMs) under soft interventions. To this end, we first show that the difference in the precision matrices of the interventional datasets can be used to deduce the intervention status of a node. Next, we use the fact that these precision differences have sparse support to narrow our interest to the nodes that are directly affected by the interventions. Then, we show how to further refine this sparse set by repeated precision difference estimations to obtain the intervention targets. In the process, we also infer the causal knowledge newly induced by the interventions. Finally, using these results, we propose a scalable algorithm to estimate the intervention targets.

There are two works closely related to this paper. Jaber et al. [2020] characterizes the interventional MEC for soft interventions, and proves that the intervention targets can be identified only up to a superset, which they describe graphically. Hence, in this paper, we focus on estimating this superset, which we call the effective intervention targets.

Varici et al. [2021] addresses a similar problem. However, their method is limited to causally sufficient models. We present theoretical results for causally insufficient models. Our results can be seen as a non-trivial generalization that combines the precision difference approach to the problem with the graphical characterization of soft interventions on causally insufficient models.

The existing interventional causal discovery algorithms for insufficient systems learn the causal structure and the intervention targets jointly. This approach requires a significant number of conditional independence and invariance tests, and prevents them from being scalable to large graphs [Jaber et al., 2020]. However, unlike interventional causal discovery, there exist highly efficient algorithms for causal discovery with observational data. One byproduct of our results is that our scalable algorithm for intervention target discovery can be used in conjunction with any observational learning algorithm for insufficient systems to efficiently refine the observational MEC to an interventional MEC. Finally, we perform experiments on real and synthetic datasets to illustrate the scalability of the proposed algorithm.

§ 2 RELATED WORK

Interventional causal learning for causally sufficient systems. There is extensive literature on interventional learning for causally sufficient models. Among them, Eaton and Murphy [2007] was the earliest work that proposed a dynamic programming approach to interventional learning. Hauser and Bühlmann [2012] considers the interventional MEC under hard interventions and provides a score-based algorithm for interventional learning. Rothenhäusler et al. [2015] learns causal cyclic graphs using shift interventions. Yang et al. [2018] characterizes the interventional MEC under hard and soft interventions using invariance testing, and it provides an algorithm for learning when the intervention targets are known. Squires et al. [2020] greedily searches over the space of permutations to score DAGs when the intervention targets are unknown. Ke et al. [2019] and Brouillard et al. [2020] leverage differentiable methods through continuous optimization to learn the causal structure from interventional data. For linear SEMs and causally sufficient models, Wang et al. [2018] proposes to learn the difference graph, which is the set of edge weights in the linear SEM that have been changed across two environments. Ghoshal et al. [2021] leverages precision difference estimates to address the same problem under more stringent assumptions. Varici et al. [2021] uses precision difference estimates and achieves a higher level of scalability through a hierarchical grouping of the nodes.

Learning from observational data for causally insufficient systems. The fast causal inference (FCI) algorithm of Spirtes et al. [2000] is a classic constraint-based method for learning causally insufficient models from observational data. Many efficient variants, such as the really fast causal inference (RFCI) algorithm of Colombo et al. [2012] and the greedy fast causal inference (GFCI) algorithm of Ogarrio et al. [2016], have been proposed to improve scalability. Bernstein et al. [2020] extends the greedy permutation search to partially ordered sets to include the effects of latent variables in the ordering.

Learning from interventions on causally insufficient systems. Triantafillou and Tsamardinos [2015] considers multiple interventions for causally insufficient systems. It applies ideal hard interventions and provides a solution based on constraint satisfaction and conditional independence testing. Mooij et al. [2020] proposes a joint causal inference framework to pool interventional datasets to learn the causal graph. Jaber et al. [2020] characterizes the interventional MEC and proposes a variant of FCI to learn from soft interventional data in causally insufficient systems. The key shortcoming of these methods is that their runtime becomes prohibitive for large graphs.

§ 3 PRELIMINARIES

We introduce some essential concepts and notations pertinent to causal discovery in causally insufficient systems.

Let $\mathcal{D} \triangleq \left( {\mathbf{W},\mathbf{E}}\right)$ denote a causal graph in which $\mathbf{W}$ represents the set of nodes and $\mathbf{E}$ represents the set of edges. Denote the number of nodes by $p \triangleq \left| \mathbf{W}\right|$ . We associate the random variable ${X}_{i}$ with node $i$ , for $i \in \left\lbrack p\right\rbrack \triangleq \{ 1,\ldots ,p\}$ , and accordingly define the random vector $X \triangleq {\left( {X}_{1},\ldots ,{X}_{p}\right) }^{\top }$ .${}^{1}$ We consider a linear SEM, according to which

$$
X = {B}^{\top }X + \epsilon , \tag{1}
$$

where $B \in {\mathbb{R}}^{p \times p}$ is the edge weights matrix in which ${B}_{i,j} \neq 0$ if and only if ${X}_{i} \rightarrow {X}_{j} \in \mathbf{E}$ . The random noise vector $\epsilon \in {\mathbb{R}}^{p \times 1}$ has zero mean with covariance matrix $\Omega \triangleq \operatorname{diag}\left( {{\sigma }_{1}^{2},\ldots ,{\sigma }_{p}^{2}}\right)$ . We denote the covariance matrix of $X$ by $\Sigma$ , and the precision matrix by $\Theta$ , which satisfies $\Theta = \left( {I - B}\right) {\Omega }^{-1}{\left( I - B\right) }^{\top }$ . For the entries of $\Theta$ we have

$$
{\Theta }_{i,j} = - \frac{{B}_{i,j}}{{\sigma }_{j}^{2}} - \frac{{B}_{j,i}}{{\sigma }_{i}^{2}} + \mathop{\sum }\limits_{{k \in \operatorname{ch}\left( i\right) \cap \operatorname{ch}\left( j\right) }}\frac{{B}_{i,k}{B}_{j,k}}{{\sigma }_{k}^{2}},\;\forall i \neq j, \tag{2}
$$

$$
{\Theta }_{i,i} = \frac{1}{{\sigma }_{i}^{2}} + \mathop{\sum }\limits_{{j \in \operatorname{ch}\left( i\right) }}{\sigma }_{j}^{-2}{B}_{i,j}^{2},\;\forall i \in \left\lbrack p\right\rbrack , \tag{3}
$$

where $\operatorname{ch}\left( i\right)$ denotes the set of children of node $i \in \left\lbrack p\right\rbrack$ . In the causal graph $\mathcal{D}$ , we have two sets of nodes: the set of observed variables, denoted by $\mathbf{V}$ , and the set of latent variables, denoted by $\mathbf{L}$ . Clearly, $\mathbf{V} \cup \mathbf{L} = \mathbf{W}$ . The observational data, consequently, is represented by $\left\{ {{X}_{i} : i \in \mathbf{V}}\right\}$ .
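The entrywise formulas (2) and (3) can be checked numerically against the matrix identity $\Theta = \left( {I - B}\right) {\Omega }^{-1}{\left( I - B\right) }^{\top }$. The following is a small sketch of ours (the weights and variances are arbitrary):

```python
import numpy as np

def precision_from_sem(B, sigma2):
    """Theta = (I - B) Omega^{-1} (I - B)^T for the linear SEM (1)."""
    p = B.shape[0]
    I = np.eye(p)
    return (I - B) @ np.diag(1.0 / np.asarray(sigma2)) @ (I - B).T

def theta_entry(B, sigma2, i, j):
    """Entrywise formulas (2) and (3), with ch(k) read off from B."""
    ch = lambda k: [m for m in range(B.shape[0]) if B[k, m] != 0]
    if i == j:
        return 1.0 / sigma2[i] + sum(B[i, m] ** 2 / sigma2[m] for m in ch(i))
    common = set(ch(i)) & set(ch(j))
    return (-B[i, j] / sigma2[j] - B[j, i] / sigma2[i]
            + sum(B[i, m] * B[j, m] / sigma2[m] for m in common))
```

Note that (2) vanishes unless $i$ and $j$ are adjacent or share a child, which is the graphical reason the precision matrix reflects local structure.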

From the observational data alone, a DAG with only observed variables can be identified up to its MEC [Verma and Pearl, 1992]. For causally insufficient systems with latent variables $\mathbf{L}$ , we can only describe the MEC in terms of a family of graphs called maximal ancestral graphs (MAGs), which we formally specify later in this section. The MAG associated with $\mathbf{V}$ represents the pairwise ancestral and confounding relationships among the observed variables $\left\{ {{X}_{i} : i \in \mathbf{V}}\right\}$ that cannot be made conditionally independent. Therefore, for the true causal graph $\mathcal{D}$ , there exists a unique MAG. This MAG cannot be identified uniquely; however, it is possible to recover it up to a family of equivalent MAGs that contains the true one. Next, we describe how a MAG is obtained from a DAG and then proceed to describe the MEC of MAGs and how they are represented.

Mixed Graphs: From a structure learning perspective, causally insufficient systems are often represented by mixed graphs. A mixed graph can contain both directed $\left( \rightarrow \right)$ and bi-directed $\left( \leftrightarrow \right)$ edges. In our notation, we use $* \rightarrow$ to denote an edge that is either directed or bi-directed. If there is a directed path from node $A$ to node $B$ , then $A$ is an ancestor of $B$ and $B$ is a descendant of $A$ . Bi-directed edges create spouses, that is, $A$ is a spouse of $B$ if $A \leftrightarrow B$ is present. A node on a path is a collider if both of its edges on the path are into the node. A triple $\langle X,Y,Z\rangle$ is an unshielded collider if $X \circ \rightarrow Y \leftarrow \circ Z$ , and $X$ and $Z$ are not adjacent. A path $\langle X,\ldots ,W,Z,Y\rangle$ is a discriminating path for $Z$ if every node between $X$ and $Z$ is a collider on the path and is also a parent of $Y$ . An inducing path relative to $\mathbf{L}$ is a path on $\mathcal{D}$ such that every non-endpoint node $X \notin \mathbf{L}$ on the path is a collider, and every collider is an ancestor of an endpoint of the path.

Maximal Ancestral Graphs: Consider the causal graph $\mathcal{D} = \left( {\mathbf{V} \cup \mathbf{L},\mathbf{E}}\right)$ . A unique mixed graph called the MAG [Richardson and Spirtes, 2002] ${\mathcal{M}}_{\mathcal{D}}$ over $\mathbf{V}$ has the following three properties: (i) in a MAG, there exists an edge between two nodes if and only if their associated variables cannot be made conditionally independent (or d-separated) by conditioning on any subset of observed variables in the true $\mathcal{D}$ ; (ii) if an edge in the skeleton represents an ancestral relationship among the variables in $\mathbf{V}$ in the true $\mathcal{D}$ [Zhang, 2008], then a directed edge is used to represent it; and (iii) if an edge in a MAG connects two variables that do not have any ancestral relationship in $\mathcal{D}$ , then a bi-directed edge $\leftrightarrow$ is used to represent it. We note that the relationship between DAGs and MAGs is many-to-one, i.e., different DAGs can have the same MAG. Similarly to DAGs, a MAG can be identified only up to a family of MAGs that are Markov equivalent. This Markov equivalence class is represented by what is called a partial ancestral graph (PAG).

Markov Equivalence: Two MAGs are Markov equivalent if and only if they have (i) the same adjacencies; (ii) the same unshielded colliders; and (iii) if a path $\pi$ is a discriminating path for $Z$ in both graphs, then $Z$ is a collider on $\pi$ in one graph if and only if it is a collider on $\pi$ in the other graph as well. A PAG represents the MEC of a MAG that can be learned from the observational data. The skeletons of all MAGs in the MEC are identical. Therefore, the PAG has the same skeleton as all members of the MEC. If an edge is oriented as $\rightarrow$ or $\leftrightarrow$ , this orientation is fixed for that edge in all MAGs of the MEC. If an edge in a PAG is oriented as $\circ \rightarrow$ , this implies that there are at least two MAGs in the MEC, such that for the first MAG this edge is oriented as $\leftrightarrow$ and for the second MAG this edge is oriented as $\rightarrow$ . An edge with circles on both ends $\left( \circ - \circ \right)$ means there are three MAGs in the MEC with three distinct orientations $\leftarrow , \rightarrow$ , and $\leftrightarrow$ .

${}^{1}$ Throughout the paper, we use ${X}_{i}$ to represent node $i \in \left\lbrack p\right\rbrack$ .

We denote the MAG corresponding to the DAG $\mathcal{D} = \left( {\mathbf{V} \cup \mathbf{L},\mathbf{E}}\right)$ by ${\mathcal{M}}_{\mathcal{D}}$ . Let $\operatorname{pa}\left( A\right) ,\operatorname{ch}\left( A\right) ,\operatorname{sp}\left( A\right) ,\operatorname{an}\left( A\right)$ , and $\operatorname{de}\left( A\right)$ denote the sets of parents, children, spouses, ancestors, and descendants of a node $A$ . We also define the set $\operatorname{ps}\left( A\right) = \operatorname{pa}\left( A\right) \cup \operatorname{sp}\left( A\right)$ to denote the union of parents and spouses of a node $A$ . We denote these relationships with respect to a graph, e.g., ${\operatorname{pa}}_{\mathcal{D}}\left( A\right)$ . The subscript is dropped if the specified graph is clear from the context.
66
+
67
+ § 4 PROBLEM STATEMENT
68
+
69
+ Interventions on causal models improve the identifiability of the underlying causal structure. We consider a soft intervention model, which changes the conditional distributions of an intervention target node given its true parents (both observed and unobserved), in the causal DAG $\mathcal{D}$ without completely removing the causal effects of its parents.
70
+
71
+ Soft Intervention Model. Assume that we have $n$ interventional settings, and denote the collection of the intervention target sets by $\mathcal{I} \triangleq \left\{ {{\mathbf{I}}^{\left( j\right) } : j \in \left\lbrack n\right\rbrack }\right\}$. In the $j$-th setting, the nodes in ${\mathbf{I}}^{\left( j\right) } \subset \mathbf{V}$ are targeted for intervention. Soft interventions in the linear SEM specified in (1) change the conditional distributions of the variables $\left\{ {{X}_{i} : i \in {\mathbf{I}}^{\left( j\right) }}\right\}$. Under these changes, (i) the variances of the noise terms $\left\{ {{\epsilon }_{i} : i \in {\mathbf{I}}^{\left( j\right) }}\right\}$ change, and (ii) the weights from the parents of the intervened nodes $\left\{ {{X}_{i} : i \in {\mathbf{I}}^{\left( j\right) }}\right\}$ in the linear SEM may change. In other words, $\left\{ {{B}_{\mathrm{{pa}}\left( i\right) ,i} : i \in {\mathbf{I}}^{\left( j\right) }}\right\}$, where ${B}_{\mathrm{{pa}}\left( i\right) ,i} \triangleq \left\{ {{B}_{u,i} : {X}_{u} \in \operatorname{pa}\left( {X}_{i}\right) }\right\}$, may vary freely.
72
+
73
+ Post-intervention linear SEMs have new parameters. We denote the linear SEM parameters associated with interventions ${\mathbf{I}}^{\left( j\right) }$ by ${B}^{\left( j\right) }$ and ${\Omega }^{\left( j\right) } \triangleq \operatorname{diag}\left( {{\left( {\sigma }_{1}^{\left( j\right) }\right) }^{2},\ldots ,{\left( {\sigma }_{p}^{\left( j\right) }\right) }^{2}}\right)$ . Since the noise variance terms change under soft interventions, ${\mathbf{I}}^{\left( j\right) }$ is described as follows:
74
+
75
+ $$
76
+ {\mathbf{I}}^{\left( j\right) } \triangleq \left\{ {i : i \in \mathbf{V},{\sigma }_{i}^{\left( j\right) } \neq {\sigma }_{i}}\right\} . \tag{4}
77
+ $$
78
+
79
+ We assume that interventions on different target sets have different mechanisms such that upon interventions on sets ${\mathbf{I}}^{\left( j\right) }$ and ${\mathbf{I}}^{\left( l\right) }$ , we have ${\sigma }_{i}^{\left( j\right) } \neq {\sigma }_{i}^{\left( l\right) }$ for all $i \in {\mathbf{I}}^{\left( j\right) } \cup {\mathbf{I}}^{\left( l\right) }$ . We note that this assumption is purely for simplicity in the notation, and can be dropped with a higher level of parameterization.
80
+
81
+ Identifiability conditions of the causal graphs with unknown soft interventions and the corresponding graphical characterization are established in Jaber et al. [2020]. Importantly, causal graphs with the same observed variables but different latent variables and intervention targets can still belong to the same MEC.
82
+
83
84
+
85
+ Figure 1: An example of a $\langle \mathcal{D},\mathcal{I}\rangle$ with $\mathbf{L} = \{ L\}$, and the corresponding $I$-MAG, $\mathcal{M} = \operatorname{MAG}\left( {{\operatorname{Aug}}_{I}\left( \mathcal{D}\right) }\right)$. Note that $F \rightarrow X$ is constructed in ${\operatorname{Aug}}_{I}\left( \mathcal{D}\right)$. The $F \rightarrow W$ edge in $\mathcal{M}$ is due to the inducing path $F \rightarrow X \leftarrow L \rightarrow W$. Similarly, $Z \rightarrow W$ is due to the inducing path $Z \rightarrow X \leftarrow L \rightarrow W$.
86
+
87
+ Definition 1 (Augmented Graph) Consider a causal graph $\mathcal{D} = \left( {\mathbf{V} \cup \mathbf{L},\mathbf{E}}\right)$ and a set of intervention targets $\mathcal{I}$. Define the multiset $\mathcal{H}$ as $\mathcal{H} = \{ \mathbf{I} \cup \mathbf{J} : \mathbf{I},\mathbf{J} \in \mathcal{I}\}$. Given $\mathcal{H}$, generate $h \triangleq \left| \mathcal{H}\right|$ nodes $\mathcal{F} \triangleq \left\{ {{F}_{i} : i \in \left\lbrack h\right\rbrack }\right\}$ and define the augmented graph of $\mathcal{D}$ as ${\operatorname{Aug}}_{I}\left( \mathcal{D}\right) \triangleq \left( {\mathbf{V} \cup \mathbf{L} \cup \mathcal{F},\mathbf{E} \cup \mathcal{E}}\right)$, where $\mathcal{E} \triangleq \left\{ {\left( {{F}_{i},V}\right) : i \in \left\lbrack h\right\rbrack ,V \in {\mathbf{H}}_{i}}\right\}$ and ${\mathbf{H}}_{i}$ denotes the $i$-th element of $\mathcal{H}$.
88
+
89
+ In other words, for each pair of intervention targets $\mathbf{I},\mathbf{J} \in \mathcal{I}$, the augmented graph appends the causal graph $\mathcal{D}$ with an auxiliary node and assigns directed edges from this node to each node in $\mathbf{H} = \mathbf{I} \cup \mathbf{J}$. We denote the set of these auxiliary nodes by $\mathcal{F}$, and refer to the members of $\mathcal{F}$ as $F$-nodes. Jaber et al. [2020] shows that the augmented graph exactly represents the separation statements among the random variables in the interventional settings. Similarly to obtaining a unique MAG from a DAG, a corresponding maximal ancestral graph for the augmented graph is constructed next.
90
+
91
+ Definition 2 (I-MAG) Given a causal graph $\mathcal{D} = (\mathbf{V} \cup$ $\mathbf{L},\mathbf{E})$ and a set of intervention targets $\mathcal{I}$, we define the $I$-MAG to be the maximal ancestral graph constructed over $\mathbf{V}$ from ${\operatorname{Aug}}_{I}\left( \mathcal{D}\right)$, i.e., $\operatorname{MAG}\left( {{\operatorname{Aug}}_{I}\left( \mathcal{D}\right) }\right)$, and denote its edges by ${\mathcal{E}}_{I}$.
92
+
93
+ Corresponding to every pair of intervention sets ${\mathbf{I}}^{\left( j\right) }$ and ${\mathbf{I}}^{\left( l\right) }$ , define the set ${\mathbf{I}}_{jl} \triangleq {\mathbf{I}}^{\left( j\right) } \cup {\mathbf{I}}^{\left( l\right) }$ . Denote the single $F$ -node associated with ${\mathbf{I}}^{\left( j\right) }$ and ${\mathbf{I}}^{\left( l\right) }$ by ${F}_{jl} \in \mathcal{F}$ , and denote the set of nodes adjacent to ${F}_{jl}$ in $\mathcal{I}$ -MAG by
94
+
95
+ $$
96
+ {\mathbf{K}}_{jl} \triangleq \left\{ {i : \left( {{F}_{jl},i}\right) \in {\mathcal{E}}_{I}}\right\} . \tag{5}
97
+ $$
98
+
99
+ We remark that, in causally sufficient systems, ${\mathbf{K}}_{jl} = {\mathbf{I}}_{jl}$ . However, in the presence of latent variables, one cannot distinguish between the nodes in ${\mathbf{K}}_{jl} \smallsetminus {\mathbf{I}}_{jl}$ and ${\mathbf{I}}_{jl}$ according to the $I$ -MAG. Therefore, we will focus on estimating ${\mathbf{K}}_{jl}$ , which we call the effective intervention targets.
100
+
101
+ We note that the observational setting can be considered as an interventional setting with an empty target set. When there exist more than two interventional settings, there are multiple $F$ -nodes and intervention targets. Accordingly, we denote the set of intervention targets by
102
+
103
+ $$
104
+ \mathcal{K} \triangleq \left\{ {{\mathbf{K}}_{jl} : \forall j,l \in \left\lbrack n\right\rbrack ,j \neq l}\right\} . \tag{6}
105
+ $$
106
+
107
+ Problem Statement. We focus on two estimation problems. In the first problem, we estimate the set of intervention targets $\mathcal{K}$ given the data from linear SEMs with latent variables under soft interventions. We denote the estimate of $\mathcal{K}$ by $\widehat{\mathcal{K}}$. Our objective is to design the estimator $\phi : {\left( {\mathbb{R}}^{m \times \left| \mathbf{V}\right| }\right) }^{n} \rightarrow {\left( {2}^{\mathbf{V}}\right) }^{n}$, in which $\left| \mathbf{V}\right|$ denotes the number of observed variables, $n$ denotes the number of interventional settings, and $m$ denotes the number of samples per setting.
108
+
109
+ In the second problem, based on the estimate $\widehat{\mathcal{K}}$, for any set $\mathbf{K} \in \widehat{\mathcal{K}}$, we consider the problem of estimating the parents and spouses of $\mathbf{K}$ in the augmented MAG ($I$-MAG). For any $\mathbf{K} \in \widehat{\mathcal{K}}$, we denote the set of parents and spouses of the nodes in $\mathbf{K}$ by $\mathrm{{ps}}\left( \mathbf{K}\right)$, and denote its estimate by $\widehat{\mathrm{{ps}}}\left( \mathbf{K}\right)$. Therefore, our second objective is to design the estimator ${\phi }_{\mathrm{{ps}}\left( \mathbf{K}\right) } : {2}^{\mathbf{V}} \rightarrow {\left( {2}^{\mathbf{V}}\right) }^{\left| \mathbf{K}\right| }$. These estimates (i.e., $\widehat{\mathcal{K}}$ and $\{ \widehat{\mathrm{{ps}}}\left( \mathbf{K}\right) : \mathbf{K} \in \widehat{\mathcal{K}}\}$) are sufficient to refine the observational PAG to the $\psi$-PAG, which represents the MEC of the $I$-MAG.
110
+
111
+ § 5 MAIN RESULTS AND ALGORITHM
112
+
113
+ Overview. In this section, we provide our theoretical results and our Precision Difference-based Intervention Target Estimator (PreDITEr) algorithm. With scalability as the central objective, we focus on estimating only the effective intervention targets. This is a computationally simpler task compared to the task of learning the causal structure of a DAG, and, consequently, facilitates scalability.
114
+
115
+ The pivotal idea in the design of our algorithm is the fact that soft interventions result in only sparse changes in the precision matrix of the linear SEM. Hence, the precision matrix differences have traces of the identities of the intervention sites. We analytically establish how we can use the precision matrix differences between a pair of interventional settings to identify the underlying intervened sites. Upon establishing this property, we then devise an algorithm that successively identifies pairs of intervention settings and estimates the difference between their associated precision matrices. These successive estimates are aggregated to identify the intervention targets. Given the extensive literature on estimating precision matrix differences, we can adopt any generic precision difference estimation (PDE) algorithm to generate the estimates that we need in our algorithm.
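As a quick numerical illustration of this sparsity (a sketch of ours, not code from the paper; the five-node graph and its weights are made up), the precision matrix of the linear SEM factors as $\Theta = (I - B)\,\Omega^{-1}(I - B)^{\top}$, so a soft intervention on a node perturbs $\Theta$ only on the entries indexed by that node and its parents:

```python
import numpy as np

p = 5
B = np.zeros((p, p))                     # B[u, v] = weight of edge X_u -> X_v
B[0, 2], B[1, 2], B[2, 3], B[3, 4] = 0.8, -0.6, 0.7, 0.5

def precision(B, sigma2):
    # Theta = (I - B) diag(sigma2)^{-1} (I - B)^T for X = B^T X + eps
    A = np.eye(len(sigma2)) - B
    return A @ np.diag(1.0 / sigma2) @ A.T

Theta1 = precision(B, np.ones(p))        # observational setting

B2 = B.copy(); B2[0, 2], B2[1, 2] = 0.3, -0.2    # soft intervention on node 2:
s2 = np.ones(p); s2[2] = 2.0                     # incoming weights and noise variance change
Theta2 = precision(B2, s2)

Delta = Theta1 - Theta2
support = {(int(u), int(v)) for u, v in np.argwhere(np.abs(Delta) > 1e-10)}
print(sorted(support))   # only entries indexed by node 2 and its parents {0, 1}
```

The nonzero pattern of the difference thus points directly at the intervened node and its parents, which is exactly the trace the algorithm exploits.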
116
+
117
+ Once we estimate the target intervention sites, we also provide an estimate for the set of parents and spouses of each of the nodes deemed to be an intervened node. Theoretically, this information enables the increased identifiability of the causal structure due to the interventions. We start describing the details by introducing the precision difference estimation procedure.
118
+
119
+ Precision Difference Estimation (PDE). Consider two intervention target sets ${\mathbf{I}}^{\left( j\right) }$ and ${\mathbf{I}}^{\left( l\right) }$. We use the algorithm of Jiang et al. [2018] to estimate the difference between their precision matrices ${\Delta }_{jl} \triangleq {\Theta }^{\left( j\right) } - {\Theta }^{\left( l\right) }$. The algorithm computes sample covariance matrices ${\widehat{\Sigma }}^{\left( j\right) }$ and ${\widehat{\Sigma }}^{\left( l\right) }$ from the data. Then, it solves the following convex optimization problem with the alternating direction method of multipliers (ADMM):
120
+
121
+ $$
+ {\widehat{\Delta }}_{jl} = \mathop{\operatorname{argmin}}\limits_{{\Delta }_{jl}}\left\{ \frac{1}{2}\operatorname{Tr}\left( {\Delta }_{jl}^{\top }{\widehat{\Sigma }}^{\left( j\right) }{\Delta }_{jl}{\widehat{\Sigma }}^{\left( l\right) }\right) - \operatorname{Tr}\left( {\Delta }_{jl}\left( {\widehat{\Sigma }}^{\left( j\right) } - {\widehat{\Sigma }}^{\left( l\right) }\right) \right) + \lambda {\begin{Vmatrix}{\Delta }_{jl}\end{Vmatrix}}_{1}\right\} , \tag{7}
+ $$
128
+
129
+ where $\lambda$ is a tuning parameter. Next, we define the marginal SEM over a subset of observed variables.
130
+
131
+ Definition 3 (Marginal SEM) Corresponding to a subset of nodes $S \subseteq \mathbf{V}$, we define $\left( {{B}_{S},{\epsilon }_{S}}\right)$ as the marginal SEM that characterizes the relationship among the random variables ${X}_{S} \triangleq \left\{ {{X}_{i} : i \in S}\right\}$. Accordingly, the corresponding precision matrix is denoted by ${\Theta }_{S}$. The parametrization of a marginal SEM is given by the following lemma.
132
+
133
+ Lemma 1 (Ghoshal et al. [2021]) Corresponding to a subset $S \subseteq \mathbf{W}$ , denote the removed set of nodes by $U \triangleq \mathbf{W} \smallsetminus S$ and define ${U}_{i} \triangleq U \cap \mathrm{{an}}\left( i\right)$ , for $i \in S$ . For $i,j \in S$ , we have
134
+
135
+ $$
136
+ {\sigma }_{S,i}^{2} = {\sigma }_{i}^{4}{\left( {\sigma }_{i}^{2} - {B}_{{U}_{i},i}^{\top }{\left\lbrack {\Theta }_{\mathrm{{an}}\left( i\right) }\right\rbrack }_{{U}_{i},{U}_{i}}^{-1}{B}_{{U}_{i},i}\right) }^{-1}, \tag{8}
137
+ $$
138
+
139
+ $$
140
+ {\left\lbrack {B}_{S}\right\rbrack }_{j,i} = \frac{{\sigma }_{S,i}^{2}}{{\sigma }_{i}^{2}}\left( {{B}_{j,i} - {B}_{{U}_{i},i}^{\top }{\left\lbrack {\Theta }_{\mathrm{{an}}\left( i\right) }\right\rbrack }_{{U}_{i},{U}_{i}}^{-1}{\left\lbrack {\Theta }_{\mathrm{{an}}\left( i\right) }\right\rbrack }_{{U}_{i},j}}\right) . \tag{9}
141
+ $$
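Lemma 1 rests on the standard fact that marginalizing jointly Gaussian variables acts on the precision matrix as a Schur complement; the following quick check (our own, using an arbitrary random upper-triangular $B$ with unit noise variances) verifies this numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 6
B = np.triu(rng.uniform(0.25, 1.0, (p, p)) * rng.choice([-1, 1], (p, p)), k=1)
A = np.eye(p) - B
Theta = A @ A.T                          # precision of the SEM with unit noise variances
Sigma = np.linalg.inv(Theta)

S, U = [0, 2, 4], [1, 3, 5]              # kept nodes S, marginalized-out nodes U
Theta_S = np.linalg.inv(Sigma[np.ix_(S, S)])     # marginal precision over S
schur = (Theta[np.ix_(S, S)]
         - Theta[np.ix_(S, U)] @ np.linalg.inv(Theta[np.ix_(U, U)]) @ Theta[np.ix_(U, S)])
print(np.allclose(Theta_S, schur))       # True: marginalization = Schur complement
```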
142
+
143
+ Before stating the theoretical results, we need the following faithfulness assumption. This assumption rules out the pathological cases in which the effect of an intervention is canceled by other changes in the system. Faithfulness assumptions are generally needed for successful learning.
144
+
145
+ Assumption 1 (I-faithfulness) For any choice of $i,j \in$ $S \subseteq \mathbf{V}$ , we have the following properties:
146
+
147
+ * If ${\sigma }_{i}^{\left( 1\right) } \neq {\sigma }_{i}^{\left( 2\right) }$ , then ${\sigma }_{S,i}^{\left( 1\right) } \neq {\sigma }_{S,i}^{\left( 2\right) }$ .
148
+
149
+ * If ${\sigma }_{S,i}^{\left( 1\right) } \neq {\sigma }_{S,i}^{\left( 2\right) }$ , then ${\left\lbrack {\Theta }_{S}^{\left( 1\right) }\right\rbrack }_{i,i} \neq {\left\lbrack {\Theta }_{S}^{\left( 2\right) }\right\rbrack }_{i,i}$ . If further ${\left\lbrack {B}_{S}\right\rbrack }_{j,i} \neq 0$ in either model, then ${\left\lbrack {\Theta }_{S}^{\left( 1\right) }\right\rbrack }_{i,j} \neq {\left\lbrack {\Theta }_{S}^{\left( 2\right) }\right\rbrack }_{i,j}$ .
150
+
151
+ § 5.1 THEORETICAL RESULTS
152
+
153
+ For the rest of the discussion, we consider a pair of interventional settings. Without loss of generality, let them be ${\mathbf{I}}^{\left( 1\right) }$ and ${\mathbf{I}}^{\left( 2\right) }$. Denote the difference in their precision matrices by ${\Delta }_{12} = {\Theta }^{\left( 1\right) } - {\Theta }^{\left( 2\right) }$, and the difference in marginal precision matrices for a set $S$ by ${\Delta }_{{12}_{S}} = {\Theta }_{S}^{\left( 1\right) } - {\Theta }_{S}^{\left( 2\right) }$. For simplicity in the notation, we denote the corresponding $F$-node ${F}_{12}$ by $F$, ${\mathbf{K}}_{12}$ by $\mathbf{K}$, ${\Delta }_{12}$ by $\Delta$, and ${\Delta }_{{12}_{S}}$ by ${\Delta }_{S}$. We also denote the set of affected nodes among the observed variables by ${S}_{\Delta } \triangleq \left\{ {i : {\left\lbrack {\Delta }_{\mathbf{V}}\right\rbrack }_{i,i} \neq 0}\right\}$.
154
+
155
+ Separation Property for Invariance. For a non-intervened node $i \in \mathbf{V} \smallsetminus \mathbf{K}$, there is no edge between $F$ and $i$ in the $I$-MAG. Therefore, there exists a set $S$ that separates $F$ and $i$, and the conditional probability distribution of ${X}_{i}$ given ${X}_{S} \smallsetminus \left\{ {X}_{i}\right\}$ is invariant across the two settings. Then, the conditional mean and variance of ${X}_{i}$, and subsequently ${\sigma }_{S,i}$, are invariant. Finally, applying the result of Wang et al. [2018], ${\left\lbrack {\Theta }_{S}\right\rbrack }_{i,i} = {\sigma }_{S,i}^{-2}$ is also invariant. Therefore, the set $S$ that separates $F$ and $i$ yields ${\left\lbrack {\Delta }_{S}\right\rbrack }_{i,i} = 0$ by the definition of ${\Delta }_{S}$.
156
+
157
+ Theorem 1 Consider an $F \in \mathcal{F}$ and an observed node $V \in \mathbf{V}$ in the augmented MAG (I-MAG). Then, $\left( {F,V}\right) \in {\mathcal{E}}_{I}$ if and only if $\nexists S \subseteq \mathbf{V}$ such that ${\left\lbrack {\Delta }_{S}\right\rbrack }_{V,V} = 0$ .
158
+
159
+ Theorem 1 states that for any non-intervened node $V$ there exists a conditioning set $S$ that makes the corresponding diagonal entry of the marginal precision matrix invariant. In the following lemma, we show that the ancestors of $V$ within the set of affected nodes ${S}_{\Delta }$ suffice to separate $F$ and $V$.
160
+
161
+ Lemma 2 For a node $V \in {S}_{\Delta } \smallsetminus \mathbf{K}$, consider the set $S = {S}_{\Delta } \cap \operatorname{an}\left( V\right)$. The diagonal entry corresponding to $V$ in the precision matrix of the marginal SEM over $S$ is invariant, i.e., ${\left\lbrack {\Delta }_{S}\right\rbrack }_{V,V} = 0$.
162
+
163
+ Lemma 2 implies that we can eliminate all the non-intervened nodes (i.e., nodes not in the effective intervention target set $\mathbf{K}$) in ${S}_{\Delta }$ by computing a PDE for each subset of ${S}_{\Delta }$. Therefore, we can identify $\mathbf{K}$ with at most ${2}^{\left| {S}_{\Delta }\right| }$ PDE computations. Now that we have a way to recover $\mathbf{K}$, we show how to identify the parents and/or spouses of the intervened nodes. This property will play a critical role in improving the identifiability of the MAGs under interventions.
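To see Lemma 2 in action on a toy case (our own example, not from the paper: a three-node chain $0 \rightarrow 1 \rightarrow 2$ with a soft intervention on node 1), the parent node 0 lands in $S_{\Delta}$, yet its diagonal entry becomes invariant in the marginal SEM over $S = S_{\Delta} \cap \operatorname{an}(0) = \{0\}$, while the intervened node 1 stays variant in every marginal:

```python
import numpy as np

def precision(B, sigma2):
    # Theta = (I - B) diag(sigma2)^{-1} (I - B)^T for X = B^T X + eps
    A = np.eye(len(sigma2)) - B
    return A @ np.diag(1.0 / sigma2) @ A.T

B1 = np.zeros((3, 3)); B1[0, 1] = 0.8; B1[1, 2] = 0.6    # chain 0 -> 1 -> 2
s1 = np.ones(3)
B2 = B1.copy(); B2[0, 1] = 0.4                           # soft intervention on node 1:
s2 = s1.copy(); s2[1] = 2.0                              # incoming weight and noise variance change

Delta = precision(B1, s1) - precision(B2, s2)
S_delta = [v for v in range(3) if abs(Delta[v, v]) > 1e-10]

Sig1 = np.linalg.inv(precision(B1, s1))
Sig2 = np.linalg.inv(precision(B2, s2))
marg = lambda Sig, S: np.linalg.inv(Sig[np.ix_(S, S)])   # marginal precision over S

d0 = marg(Sig1, [0])[0, 0] - marg(Sig2, [0])[0, 0]       # node 0: invariant for S = {0}
d1 = marg(Sig1, [1])[0, 0] - marg(Sig2, [1])[0, 0]       # node 1: variant even for S = {1}
print(S_delta, d0, d1)   # S_delta = [0, 1]; d0 vanishes, d1 does not
```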
164
+
165
+ Lemma 3 Consider $K \in \mathbf{K}$ and $J \in \mathbf{V} \smallsetminus \mathbf{K}$. If $K \leftarrow J$ or $K \leftrightarrow J$ in the $I$-MAG, there does not exist $S \subseteq {S}_{\Delta }$ containing $\{ K,J\}$ such that ${\left\lbrack {\Delta }_{S}\right\rbrack }_{K,J} = 0$. On the other hand, if $K \rightarrow J$, or there is no edge between them in the $I$-MAG, there exists a set $S \subseteq {S}_{\Delta }$ containing $\{ K,J\}$ such that ${\left\lbrack {\Delta }_{S}\right\rbrack }_{K,J} = 0$.
166
+
167
+ Lemma 2 and Lemma 3 are sufficient to design our algorithm for learning $\mathcal{K}$ .
168
+
169
+ § 5.2 LEARNING ALGORITHM
170
+
171
+ We leverage the results in Lemma 2 and Lemma 3 to learn the intervention targets $\mathcal{K}$ from a tuple of interventional distributions generated by some unknown pair $\langle \mathcal{D},\mathcal{I}\rangle$ . Algorithm 1 presents our main learning algorithm PreDITEr that uses the results to learn $\mathcal{K}$ , and subsequently $\operatorname{ps}\left( \mathbf{K}\right)$ for $\mathbf{K} \in \mathcal{K}$ . We briefly describe PreDITEr and the rationale underlying its design.
172
+
173
+ Algorithm 1 (PreDITEr) takes sample covariance matrices of interventional data as inputs. Since the intervention targets $\mathbf{K}$ are estimated independently for each pair of interventional settings, we investigate each pair individually. For each pair of interventional distributions (or the corresponding $F$-node), we first estimate the set of affected nodes ${S}_{\Delta }$ (lines 7 and 8). Then, we estimate the precision difference ${\Delta }_{S}$ for each subset $S$ of ${S}_{\Delta }$. If there does not exist a set $S$ for a node $V \in {S}_{\Delta }$ such that ${\left\lbrack {\Delta }_{S}\right\rbrack }_{V,V} = 0$, then by Lemma 2, $V$ is an intervened node and belongs to $\mathbf{K}$ (lines 10-15).
174
+
175
+ Algorithm 1 Precision Difference-based Intervention Target Estimator (PreDITEr)
+
+ 1: Input: sample covariance matrices ${\widehat{\Sigma }}^{\left( 1\right) },\ldots ,{\widehat{\Sigma }}^{\left( n\right) }$
+ 2: Output: intervention targets $\mathcal{K}$, and $\operatorname{ps}\left( K\right)$ $\forall K \in \mathbf{K}$, $\forall \mathbf{K} \in \mathcal{K}$
+ 3: $\mathcal{K} \leftarrow \varnothing ,\mathcal{F} \leftarrow \varnothing$
+ 4: for $V \in \mathbf{V}$ do $\operatorname{ps}\left( V\right) \leftarrow \varnothing$ end for
+ 5: for all pairs $j,l \in \left\lbrack n\right\rbrack$ do
+ 6: $\quad \mathcal{F} \leftarrow \mathcal{F} \cup \left\{ {F}_{jl}\right\} ,{\mathbf{K}}_{jl} \leftarrow \varnothing$
+ 7: $\quad$ Estimate ${\Delta }_{jl} \leftarrow \operatorname{PDE}\left( {{\widehat{\Sigma }}^{\left( j\right) },{\widehat{\Sigma }}^{\left( l\right) }}\right)$
+ 8: $\quad {S}_{\Delta } \leftarrow \left\{ {V : V \in \mathbf{V},{\left\lbrack {\Delta }_{jl}\right\rbrack }_{V,V} \neq 0}\right\}$
+ 9: $\quad$ For all $S \subseteq {S}_{\Delta }$, estimate ${\Delta }_{j{l}_{S}} \leftarrow \operatorname{PDE}\left( {{\widehat{\Sigma }}_{S,S}^{\left( j\right) },{\widehat{\Sigma }}_{S,S}^{\left( l\right) }}\right)$
+ 10: $\quad$ for $V \in {S}_{\Delta }$ do
+ 11: $\quad \quad$ if $\nexists S \subseteq {S}_{\Delta }$ such that $V \in S$ and ${\left\lbrack {\Delta }_{S}\right\rbrack }_{V,V} = 0$ then
+ 12: $\quad \quad \quad {\mathbf{K}}_{jl} \leftarrow {\mathbf{K}}_{jl} \cup \{ V\}$
+ 13: $\quad \quad$ end if
+ 14: $\quad$ end for
+ 15: $\quad \mathcal{K} \leftarrow \mathcal{K} \cup \left\{ {\mathbf{K}}_{jl}\right\}$
+ 16: $\quad$ for all pairs $K \in {\mathbf{K}}_{jl},J \in {S}_{\Delta } \smallsetminus {\mathbf{K}}_{jl}$ do
+ 17: $\quad \quad$ if $\nexists S \subseteq {S}_{\Delta }$ such that $K,J \in S$ and ${\left\lbrack {\Delta }_{S}\right\rbrack }_{K,J} = 0$ then
+ 18: $\quad \quad \quad \operatorname{ps}\left( K\right) \leftarrow \operatorname{ps}\left( K\right) \cup \{ J\}$
+ 19: $\quad \quad$ end if
+ 20: $\quad$ end for
+ 21: end for
+
+ Precision Difference Estimation (PDE) $\left( {{\widehat{\Sigma }}^{\left( j\right) },{\widehat{\Sigma }}^{\left( l\right) }}\right)$:
+
+ Estimate ${\Delta }_{jl} = {\left( {\widehat{\Sigma }}^{\left( j\right) }\right) }^{-1} - {\left( {\widehat{\Sigma }}^{\left( l\right) }\right) }^{-1}$ using the algorithm of Jiang et al. [2018]. Symmetrize ${\Delta }_{jl}$: set ${\Delta }_{jl} = \left( {{\Delta }_{jl} + {\Delta }_{jl}^{\top }}\right) /2$. Threshold ${\Delta }_{jl}$: set ${\left\lbrack {\Delta }_{jl}\right\rbrack }_{u,v} = 0$ if $\left| {\left\lbrack {\Delta }_{jl}\right\rbrack }_{u,v}\right| < \varepsilon$. Return ${\Delta }_{jl}$.
236
+
237
+ After identifying $\mathbf{K}$ , consider a $K \in \mathbf{K}$ and $J \in {S}_{\Delta } \smallsetminus \mathbf{K}$ . If there does not exist a set $S$ such that ${\left\lbrack {\Delta }_{S}\right\rbrack }_{K,J} = 0$ , by Lemma 3, $J$ belongs to $\operatorname{ps}\left( K\right)$ (lines 16-20).
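At the population level, the per-pair loop can be sketched in a few lines. The code below is our own illustration, not the paper's implementation: it substitutes direct inversion of exact covariances (plus symmetrization and thresholding) for the finite-sample estimator of Jiang et al. [2018], and the four-node demo graph is made up:

```python
import numpy as np
from itertools import combinations

def pde(Sig_j, Sig_l, eps=1e-8):
    # population-level stand-in for the PDE subroutine:
    # invert, difference, symmetrize, hard-threshold
    D = np.linalg.inv(Sig_j) - np.linalg.inv(Sig_l)
    D = (D + D.T) / 2
    D[np.abs(D) < eps] = 0.0
    return D

def prediter_pair(Sig_j, Sig_l, eps=1e-8):
    # the per-pair body of Algorithm 1 for one pair of settings
    p = Sig_j.shape[0]
    D = pde(Sig_j, Sig_l, eps)
    S_delta = [v for v in range(p) if D[v, v] != 0]
    deltas = {S: pde(Sig_j[np.ix_(S, S)], Sig_l[np.ix_(S, S)], eps)
              for r in range(1, len(S_delta) + 1)
              for S in combinations(S_delta, r)}
    K = [v for v in S_delta
         if all(deltas[S][S.index(v), S.index(v)] != 0 for S in deltas if v in S)]
    ps = {k: [j for j in S_delta if j not in K and
              all(deltas[S][S.index(k), S.index(j)] != 0
                  for S in deltas if k in S and j in S)]
          for k in K}
    return K, ps

# demo: 0 -> 2 <- 1, 2 -> 3, with a soft intervention on node 2
def prec(B, s2):
    A = np.eye(len(s2)) - B
    return A @ np.diag(1.0 / s2) @ A.T

B1 = np.zeros((4, 4)); B1[0, 2] = 0.8; B1[1, 2] = -0.6; B1[2, 3] = 0.7
B2 = B1.copy(); B2[0, 2] = 0.3; B2[1, 2] = -0.2
Sig_obs = np.linalg.inv(prec(B1, np.ones(4)))
Sig_int = np.linalg.inv(prec(B2, np.array([1.0, 1.0, 2.0, 1.0])))
K, ps = prediter_pair(Sig_obs, Sig_int)
print(K, ps)   # node 2 is the target; its non-intervened parents 0 and 1 end up in ps(2)
```

Note how the child node 3 never enters $S_{\Delta}$, so the subset enumeration stays small regardless of the total graph size.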
238
+
239
+ Algorithm 1 uses PDE as a subroutine. Hence, the quality of the estimate formed by Algorithm 1 hinges on that of the precision difference estimates. To assess the accuracy of Algorithm 1 in estimating the intervention targets irrespective of the PDE subroutine used, we provide population-level results. In the following theorem, we establish that Algorithm 1 has perfect estimation if the underlying PDE subroutine performs perfectly. This result allows decoupling the accuracy of Algorithm 1 from that of the PDE subroutine used. In practice, however, PDE subroutines are imperfect, owing to having access to only finite samples. To address the convergence to the correct estimates, we discuss the sample complexity and convergence guarantees of the algorithm of Jiang et al. [2018] in Appendix B.
240
+
241
+ Theorem 2 When the covariance estimates are perfect and Assumption 1 holds, Algorithm 1 perfectly estimates the set of effective intervention targets $\mathcal{K}$ under soft interventions with probability 1. Furthermore, Algorithm 1 recovers non-intervened parents and/or spouses (i.e., $\operatorname{ps}\left( K\right)$) of an intervened node $K$ with probability 1.
242
+
243
+ § 5.3 RECOVERING ψ-MARKOV EQUIVALENCE
244
+
245
+ Next, we show how we can use the intervention target recovery of Algorithm 1 to refine the observational MEC, represented by a PAG, to the interventional MEC for soft interventions, the $\psi$-PAG. We first review the interventional equivalence characterization approaches in the existing literature.
246
+
247
+ The $\psi$-Markov equivalence property, i.e., the conditions for two $\mathcal{I}$-MAGs to be Markov equivalent, is characterized in Jaber et al. [2020, Theorem 1]. For two MAGs ${\mathcal{M}}_{1}$ and ${\mathcal{M}}_{2}$ to be $\psi$-Markov equivalent:
248
+
249
+ * ${\mathcal{M}}_{1}$ and ${\mathcal{M}}_{2}$ must have the same skeleton.
250
+
251
+ * ${\mathcal{M}}_{1}$ and ${\mathcal{M}}_{2}$ must have the same unshielded colliders.
252
+
253
+ * If a path $\pi$ is a discriminating path for a node $V$ in both ${\mathcal{M}}_{1}$ and ${\mathcal{M}}_{2}$ , then $V$ is a collider on the path in one graph if and only if it is a collider on the path in the other.
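The first two of these conditions are purely mechanical to check. Below is a small sketch of ours (the tuple encoding of edge marks is an assumption, not the paper's notation): each edge is a tuple (a, mark_at_a, mark_at_b, b), with '>' for an arrowhead and '-' for a tail; the third (discriminating-path) condition is omitted for brevity:

```python
def skeleton(edges):
    # ignore edge marks; keep only adjacencies
    return {frozenset((a, b)) for a, _, _, b in edges}

def arrowhead_at(edges, x, y):
    # True if the edge between x and y carries an arrowhead at y
    for a, ma, mb, b in edges:
        if (a, b) == (x, y):
            return mb == '>'
        if (a, b) == (y, x):
            return ma == '>'
    return False

def unshielded_colliders(edges):
    sk = skeleton(edges)
    nodes = {v for e in sk for v in e}
    cols = set()
    for z in nodes:
        nbrs = sorted(v for v in nodes if frozenset((v, z)) in sk)
        for i, a in enumerate(nbrs):
            for b in nbrs[i + 1:]:
                if (frozenset((a, b)) not in sk
                        and arrowhead_at(edges, a, z) and arrowhead_at(edges, b, z)):
                    cols.add((frozenset((a, b)), z))
    return cols

# M1: A -> Z <-> B ;  M2: A -> Z <- B  (same skeleton, same unshielded collider at Z)
M1 = {('A', '-', '>', 'Z'), ('B', '>', '>', 'Z')}
M2 = {('A', '-', '>', 'Z'), ('B', '-', '>', 'Z')}
same = skeleton(M1) == skeleton(M2) and unshielded_colliders(M1) == unshielded_colliders(M2)
print(same)   # True: the first two equivalence conditions hold for this pair
```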
254
+
255
+ The following theorem builds on the results of Theorem 2 and Lemma 3 to obtain $\psi$ -PAG.
256
+
257
+ Theorem 3 ($\psi$-PAG) Given the PAG for the MAG $\mathcal{M}$, and the results of Algorithm 1, i.e., the sets $\mathcal{K}$ and $\operatorname{ps}\left( \mathbf{K}\right) \forall \mathbf{K} \in \mathcal{K}$, we can obtain the $\psi$-PAG of the $I$-MAG.
258
+
259
+ § 6 EMPIRICAL RESULTS
260
+
261
+ First, we run our PreDITEr algorithm on synthetically generated data from linear SEMs to recover intervention targets. Next, we provide comparisons with the state-of-the-art method. Finally, we apply our method on a biological dataset to illustrate its applicability to real data.
262
+
263
+ § 6.1 SYNTHETIC DATA
264
+
265
+ We test the efficiency of PreDITEr for recovering the intervention targets. We generate 100 realizations of Erdős-Rényi random DAGs with the expected neighborhood size $c = 2$ .
266
+
267
268
+
269
+ Figure 2: Average F1 scores at estimating $\mathbf{K}$ for $\left| \mathbf{L}\right| = 5$ latent variables and $\left| \mathbf{I}\right| = 5$ intervention targets.
270
+
271
+ We consider one interventional setting in addition to the observational one, i.e., $I = \langle \varnothing ,\mathbf{I}\rangle$ . Therefore, we are estimating a single target set $\mathbf{K}$ . For each model, we set the number of latent variables to $\left| \mathbf{L}\right| = 5$ , and the number of intervened nodes to $\left| \mathbf{I}\right| = 5$ . Edge weights of the causal model, i.e., entries of $B$ , are sampled independently at random according to the uniform distribution on $\left\lbrack {-1, - {0.25}}\right\rbrack \cup \left\lbrack {{0.25},1}\right\rbrack$ . The additive Gaussian noise terms have distribution $\mathcal{N}\left( {0,{I}_{p}}\right)$ . The intervention targets are selected randomly from the observed variables $\mathbf{V}$ . For the intervened nodes $I \in \mathbf{I}$ , upon intervention the variance of the noise term ${\epsilon }_{I}$ changes to 2 .
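The synthetic setup above can be reproduced with a short generator; this is our own sketch (the seed, the latents-first node ordering, and the function name are arbitrary choices), following the stated parameters: expected neighborhood size $c$, weights drawn from $[-1, -0.25] \cup [0.25, 1]$, unit noise variances, and noise variance 2 on the intervened nodes:

```python
import numpy as np

rng = np.random.default_rng(7)

def sample_settings(p=100, n_latent=5, n_targets=5, c=2, m=5000):
    # Erdos-Renyi DAG over latent + observed nodes (latents first, so they can
    # act as confounders), with expected neighborhood size c
    q = n_latent + p
    mask = np.triu(rng.random((q, q)) < c / (q - 1), k=1)
    weights = rng.choice([-1.0, 1.0], size=(q, q)) * rng.uniform(0.25, 1.0, (q, q))
    B = np.where(mask, weights, 0.0)
    targets = rng.choice(p, size=n_targets, replace=False)   # observed-node indices
    s2_obs = np.ones(q)
    s2_int = s2_obs.copy()
    s2_int[n_latent + targets] = 2.0                         # noise variance 1 -> 2 on targets
    Ainv = np.linalg.inv(np.eye(q) - B)
    def draw(s2):
        eps = rng.normal(0.0, np.sqrt(s2), size=(m, q))
        return (eps @ Ainv)[:, n_latent:]                    # x = eps (I - B)^{-1}; drop latents
    return draw(s2_obs), draw(s2_int), sorted(int(t) for t in targets)

X_obs, X_int, targets = sample_settings(p=20, m=2000)
print(X_obs.shape, X_int.shape, targets)
```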
272
+
273
+ We run PreDITEr with a varying number of samples on graphs with varying size $p$. Figure 2 illustrates the target recovery performance. Specifically, it shows that our method recovers the intervention targets with high F1 scores. We emphasize that PreDITEr can easily process large graphs (e.g., $p = {100}$ nodes) as shown in Figure 2. This scalability is due to its computational complexity of $O\left( {2}^{\left| {S}_{\Delta }\right| }\right)$. Since the size of ${S}_{\Delta }$ is determined only by the number of intervened nodes and their parents/spouses, our method is not directly affected by the graph size $p$.
274
+
275
+ § 6.2 COMPARISON TO THE RELATED WORK
276
+
277
+ We compare the scalability and accuracy of PreDITEr to those of two competing methods under various settings: the $\psi$ -FCI algorithm of Jaber et al. [2020], and the FCI-JCI123 algorithm of Mooij et al. [2020]. Jaber et al. [2020] does not provide simulations for graphs that have more than a few nodes since $\psi$ -FCI requires an exponentially growing number of conditional independence and invariance tests. Mooij et al. [2020] reports experiments with larger graphs and we compare our algorithm to their FCI-JCI123 algorithm. We focus on the scalability aspect and provide additional experiments on small graphs and MEC refinement results in Appendix D.2.
278
+
279
+ To enable comparisons under soft interventions, we adopt the mechanism changes of Mooij et al. [2020], in which a constant offset is added to the intervention targets (see page 53, Section 5.2 therein for details). We note that this is different from our model of soft interventions and slightly degrades the performance of our algorithm, but since we are using FCI-JCI123 as our benchmark, we adopt its setting.
280
+
281
+ We consider two environments and one intervention target for simplicity of the comparisons. We generate 30 Erdős-Rényi random DAGs. The probability of the presence of an edge in the random graphs is set to $2/p$, where $p$ is the number of observed and latent variables. We report the precision and recall rates of both algorithms along with their runtimes in Table 1. While both methods have similar performance, there is a significant discrepancy in their runtimes. More importantly, the runtime of FCI-JCI123 becomes prohibitive very quickly, even for graphs with as few as 40 nodes. In contrast, PreDITEr has a significantly lower runtime even though the considered setting (i.e., mechanism changes) is not the setting for which it is designed. In Appendix D.2, we provide further results on settings that match our model. All simulations are run on a computer with an i7-4960HQ and 16GB 1600MHz RAM.
282
+
283
+ Table 1: Intervention recovery results and median runtime.
284
+
285
+ | Method | $p$ | Precision | Recall | Runtime (s) |
+ | --- | --- | --- | --- | --- |
+ | PreDITEr | 20 | 1.0 | 0.83 | < 1 |
+ | FCI-JCI123 | 20 | 1.0 | 1.0 | 80.9 |
+ | PreDITEr | 30 | 1.0 | 0.80 | < 1 |
+ | FCI-JCI123 | 30 | 1.0 | 0.97 | 318.0 |
+ | PreDITEr | 40 | 1.0 | 0.87 | < 1 |
+ | FCI-JCI123 | 40 | 0.96 | 0.96 | 1301.9 |
308
+
309
+ § 6.3 BIOLOGICAL DATA
310
+
311
+ We apply the PreDITEr algorithm to a real dataset with data from observational and multiple interventional settings. Since PreDITEr estimates the intervention targets, and their corresponding parent-spouse sets, for each pair of available settings, we combine the findings from all pairs to yield a mixed-graph estimate of the associated causal structure.
312
+
313
+ Protein signaling data. We consider the dataset of Sachs et al. [2005], which is a standard benchmark in the causal inference literature. The data are obtained from measurements of the proteins involved in T-cell signaling. The protein signaling network consists of 11 nodes. In each interventional setting, various drugs are added to the cells to inhibit or activate different signaling proteins. The target proteins are considered as sites of intervention. Data from observational and five interventional settings are provided. The ground-truth network is not exactly known, and the accepted ground truth has been updated over the years. Importantly, it is represented by a DAG without latent confounders. We use the recent version of Ness et al. [2017], which consists of 16 edges, and use the preprocessed real data provided by Squires et al. [2020].
314
+
315
316
+
317
+ Figure 3: Recovered causal structure using Algorithm 1. Blue edges represent the edges that are in the skeleton of the consensus network.
318
+
319
+ Figure 3 shows the output of our algorithm. For a pair of nodes, if each is found to be in the parent-spouse set of the other, they must be spouses, and we assign a bi-directed edge. If only one of them lies in the parent-spouse set of the other, it must be a parent, and we assign a directed edge. If neither of the above holds, the relationship can be either parent or spouse, and we denote it by $\circ\!\!\rightarrow$ on the graph. The recovered edges that are also present in the skeleton of the ground-truth DAG are marked in blue. This result illustrates that even though our algorithm is designed for linear models, it can be applied to real datasets with non-linear models, and it recovers most of the skeleton correctly.
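The orientation rule just described can be coded directly; here is a small sketch of ours (the dictionary encoding and function name are assumptions, not the paper's), where `ps` maps each estimated intervened node to its estimated parent-spouse set:

```python
def orient(ps):
    """ps: dict mapping each intervened node K to its estimated parent-spouse set ps(K)."""
    edges = {}
    for k, parents_spouses in ps.items():
        for j in parents_spouses:
            pair = frozenset((j, k))
            if j in ps and k in ps[j]:
                edges[pair] = f"{j} <-> {k}"    # each in the other's ps set: spouses
            elif j in ps:
                edges[pair] = f"{j} -> {k}"     # only j in ps(k): j must be a parent of k
            else:
                edges[pair] = f"{j} o-> {k}"    # j was not intervened: parent or spouse
    return edges

# toy example: nodes 'a' and 'b' were intervened; 'c' was not
marks = orient({'a': {'b', 'c'}, 'b': {'a'}})
print(marks[frozenset(('a', 'c'))])   # c o-> a
```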
320
+
321
+ § 7 CONCLUSION
322
+
323
+ In this paper, we have considered the problem of estimating intervention targets for causally insufficient systems in linear structural equation models (SEMs). We have assumed a soft intervention model, which is more realistic than hard interventions that eradicate all causal effects on the targets. We have shown how the invariance of precision-matrix entries can be used and proposed an algorithm to identify intervention targets. The algorithm can also be used to refine the observational MEC to the interventional MEC for maximal ancestral graphs; since efficient algorithms exist for the former, our algorithm provides scalability for the latter as well. We substantiate our claims through simulations and comparisons with competing methods. A limitation of our approach is that it only applies to linear SEMs; however, we have demonstrated strong performance on real as well as synthetic datasets, which shows the applicability of our method to various settings.
UAI/UAI 2022/UAI 2022 Conference/B0xLpILs5ec/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,487 @@
1
+ # Individual Fairness in Feature-Based Pricing for Monopoly Markets
2
+
3
+ ## Abstract
4
+
5
+ We study fairness in the context of feature-based price discrimination in monopoly markets. We propose a new notion of individual fairness, namely $\alpha$-fairness, which guarantees that individuals with similar features face similar prices. First, we study a discrete valuation space and give an analytical solution for optimal fair feature-based pricing. We show that the cost of fairness (CoF), defined as the ratio of the expected revenue of an optimal feature-based pricing to the expected revenue of an optimal fair feature-based pricing, can be arbitrarily large in general. When the revenue function is continuous and concave with respect to the prices, we show that one can achieve a CoF strictly less than 2, irrespective of the model parameters. Finally, we provide an algorithm to compute a fair feature-based pricing strategy that achieves this CoF.
6
+
7
+ ## 1 INTRODUCTION
8
+
9
+ The Internet has transformed the way markets function. Today's Internet-based ecosystems such as entertainment and e-commerce marketplaces are more consumer-centric and information-driven than ever before. Data and AI systems are primarily used to power advertising, consumer retention, and personalized experience. These AI systems are deployed to aggregate individual choices and preferences to make personalized experiences possible. It is a common practice to use aggregated information about consumers to offer different prices to different consumers or segments of the market; this practice is commonly termed price discrimination [Varian, 1992].
10
+
11
+ Price discrimination has come under ethical scrutiny on multiple occasions in the recent past. For example, it was found that Orbitz, an online travel agency, charges Mac users more than Windows users [Mattioli, 2012]. Uber's strategy of charging personalized prices came under heavy consumer backlash [Dholakia, 2015, Mahadawi, 2018], and thanks to fine-grained analysis of consumer behavior, several such instances were reported in the e-commerce and retail industry [Hinz et al., 2011]. More recently, Pandey and Caliskan [2021] showed that neighborhoods with high non-white populations, higher poverty, younger residents, and high education levels faced higher cab trip fares in Chicago. Not surprisingly, regulatory bodies and the research community have taken notice. Economists have raised concerns about the fairness of personalized pricing [Michel, 2016]. Price discrimination based on nationality or residence has been made illegal in the EU [2020]. In the USA, a White House report provides guidelines for enforcing existing anti-discrimination, privacy, and consumer protection laws while practicing discriminatory pricing [White House, 2015]. Given the overwhelming evidence and rising concerns, there is an urgent need to study price discrimination and fairness formally.
12
+
13
+ Sellers or firms use price discrimination for multiple reasons, including increasing revenue, covering transportation and storage costs, increasing market reach, rewarding loyal consumers, promoting a social cause, and so on [Cassady, 1946]. In general, price discrimination does not always raise ethical and fairness issues and hence requires a careful inspection to categorize situations where this practice may lead to treatment disparity and invite regulatory intervention [Alan, 2020]. In this work, we focus on designing the pricing strategies for a seller (monopolist) who wants to maximize the revenue via price discrimination while ensuring fairness amongst the consumers.
14
+
15
+ A revenue-maximizing seller with complete knowledge of consumer valuations and without fairness considerations would charge each consumer her valuation for the product. This pricing strategy, otherwise called first-degree price discrimination, may result in wild fluctuations in prices and is considered unfair in general [Moriarty, 2021]. Also, in practice, sellers do not have full access to individual consumer valuations but may have a distribution over valuations through features. In such feature-based pricing (FP), the seller segregates the market into segments through the consumer features, and the seller's problem then reduces to finding optimal pricing for each segment [Bergemann et al., 2015, Cummings et al., 2020]. Such FP is referred to as third-degree price discrimination in the literature. In this paper, our goal is to address fairness issues in feature-based personalized pricing.
16
+
17
+ Our Contributions. We introduce the notion of $\alpha$-fairness in price discrimination, which ensures that similar individuals face similar prices. We emphasize that if individuals with similar features are charged differently by segregating them into different segments, interpersonal price comparison based on their features raises fairness issues. With this, we introduce a model for optimal fair feature-based pricing (FFP) as the problem of maximizing revenue while ensuring $\alpha$-fairness. We begin with two market segments and discrete valuations and propose an optimal FFP scheme (Section 4.2). To quantify the loss in revenue due to fairness, we then introduce the cost of fairness (CoF): the ratio of the expected revenue of an optimal FP to the expected revenue of an optimal FFP. We prove that a constant upper bound on CoF is impossible to achieve in general.
18
+
19
+ Next, in Section 5.1, under the assumption that the revenue function is concave in the offered prices [Bergemann et al., 2021]${}^{1}$, we show that one can achieve a constant upper bound on CoF. Here, we first show that the seller can compute an optimal FFP using a convex program if it has access to distributional information (i.e., knows all consumers' valuation distribution functions). We then identify a class of FFP strategies, namely LINP-FFP, that satisfy $\alpha$-fairness. With the help of these pricing strategies, we show that the CoF is strictly less than 2 irrespective of the model parameters. Finally, we propose OPT-LINP-FFP, an $O\left( {K\log K}\right)$-time algorithm, where $K$ is the number of segments, that does not need access to complete distributional information and computes an $\alpha$-fair pricing that achieves the aforementioned CoF (Algorithm 1 and Theorem 7).
20
+
21
+ ## 2 RELATED WORK
22
+
23
+ The impact of discriminatory pricing on consumer and seller surplus was first considered by Bergemann et al. [2015] for the case where consumer characteristics are known to the seller; the authors proposed a method to obtain the optimal market segmentation. Cummings et al. [2020] then considered a generalized problem, extending the work of Bergemann et al. [2015] to the case where only partial information about consumers' valuations is known to the seller.
24
+
25
+ When the valuations of the consumers are not known, Elmachtoub et al. [2019, 2021] propose feature-based pricing and provide bounds on the value generated by idealized personalized pricing and feature-based pricing over uniform pricing. The value of feature-based pricing depends on the correlation between valuations and consumer features. Huang et al. [2019] consider first-degree price discrimination over a social network, where centrality measures in the social network determine the features of the consumers. They provide bounds on the value of network-based personalized pricing in large random social networks with varying edge densities. Our work follows a similar approach in that we derive personalized pricing from features. However, naive feature-based pricing can be very unfair to the consumers, as we show in Proposition 2. Our focus is to design feature-based pricing that is fair at the same time.
26
+
27
+ Recently, many questions have been raised about the ethics of price discrimination. Moriarty [2021] strongly criticizes online personalized pricing and suggests that personalized prices compete unfairly for the social surplus created by transactions. Gerlick and Liozu [2020] point out the need to design personalized pricing with ethical considerations, which can provide win-win outcomes for both organizations and consumers. Richards et al. [2016] discuss how discriminatory pricing leads to a perception of unfairness among consumers, which undermines the stability of retail platforms; they observe that when consumers are involved in forming the prices, the fairness perception improves, leading to better retention. Levy and Barocas [2017] discuss how web-based platforms typically use many private features of user profiles to connect buyers and sellers; when users interact on such platforms, this leads to discrimination with regard to race, gender, and possibly other protected characteristics. All these studies motivate understanding optimal price-discrimination strategies under fairness constraints, which is the focus of our work.
28
+
29
+ Finally, Kallus and Zhou [2021] present a list of metrics, such as price disparity, equal access, and allocative-efficiency fairness, to measure and analyze fairness in feature-based pricing and study its interplay with welfare. The metrics discussed are mainly group fairness notions, which are entirely different from the $\alpha$-fairness discussed in this paper. We emphasize that though the above papers discuss the ethical issues in price discrimination, none of them provides a systematic approach to designing a pricing strategy that maximizes revenue while ensuring a fairness guarantee.
30
+
31
+ ## 3 PRELIMINARIES
32
+
33
+ We consider a market with a monopolist seller seeking to price a single product available in infinite supply. The market is divided into a finite number of segments $\mathcal{X} = \left\{ {{x}_{1},{x}_{2},\ldots ,{x}_{K}}\right\}$, where ${x}_{i}$ represents the ${i}^{\text{th }}$ segment. The seller, given access to $\mathcal{X}$, can choose to price discriminate across segments to extract maximum revenue.
34
+
35
+ ---
36
+
37
+ ${}^{1}$ This assumption is standard in economics, as a large class of probability distributions satisfies it.
38
+
39
+ ---
40
+
41
+ Consumers' valuations for the single product are non-negative random variables drawn from the set $\mathcal{V}$ (the same across all segments). Let ${\mathcal{F}}_{i}\left( \cdot \right)$ be the cumulative distribution function of the valuations of consumers in the ${i}^{\text{th }}$ segment, and ${f}_{i}\left( \cdot \right)$ the corresponding probability density function (probability mass function when $\mathcal{V}$ is discrete). In this paper, we consider the following two cases separately: (a) $\mathcal{V}$ is discrete and finite, and (b) $\mathcal{V}$ is continuous. Next, we present the feature-based pricing model.
42
+
43
+ ### 3.1 FEATURE-BASED PRICING MODEL
44
+
45
+ In feature-based pricing (FP), one can consider, without loss of generality, that the consumer feature is a representative of the market segment to which she belongs. Note that multiple consumers may have the same feature vector, and all the consumers having identical features belong to the same market segment. For simplicity, we will write ${p}_{i} \mathrel{\text{:=}}$ price offered to the consumer in the ${i}^{\text{th }}$ segment. A consumer makes the purchase only if her valuation is equal to or more than the offered price. The expected revenue per consumer generated from the ${i}^{\text{th }}$ segment with a price ${p}_{i} \in {\mathbb{R}}_{ + }$ is given by
46
+
47
+ $$
48
+ {\pi }_{i}\left( {p}_{i}\right) = {p}_{i} \cdot \left( {1 - {\mathcal{F}}_{i}\left( {p}_{i}\right) }\right) \tag{1}
49
+ $$
50
+
51
+ Whenever it is clear from the context, we refer to the expected revenue per consumer from a segment simply as the expected revenue from that segment. Let ${\beta }_{i}$ be the fraction of consumers in the ${i}^{\text{th }}$ segment; then the expected revenue per consumer generated across all segments is $\Pi \left( \mathbf{p}\right) = \mathop{\sum }\limits_{{{x}_{i} \in \mathcal{X}}}{\beta }_{i}{\pi }_{i}\left( {p}_{i}\right)$. We assume that the ${\beta }_{i}$'s are known to the seller. We call the seller's revenue-maximization problem ${\mathrm{{OPT}}}_{FP}\left( {\mathcal{V},\mathcal{X},\mathcal{F},\beta }\right)$, where $\mathcal{F} = \left( {{\mathcal{F}}_{1},\ldots ,{\mathcal{F}}_{K}}\right)$ and $\beta = \left( {{\beta }_{1},\ldots ,{\beta }_{K}}\right)$.
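As a quick illustrative sketch (not part of the paper), Eq. (1) and the aggregate objective $\Pi(\mathbf{p})$ can be evaluated from empirical valuation samples, estimating $1 - \mathcal{F}_i(p_i)$ by the fraction of sampled consumers in the segment whose valuation is at least the offered price; the sample lists below are hypothetical.

```python
def segment_revenue(price, valuations):
    """Empirical version of Eq. (1): pi_i(p) = p * (1 - F_i(p)),
    where 1 - F_i(p) is estimated as the fraction of sampled
    consumers whose valuation is at least the price."""
    buy_prob = sum(v >= price for v in valuations) / len(valuations)
    return price * buy_prob

def total_revenue(prices, segments, betas):
    """Pi(p) = sum_i beta_i * pi_i(p_i) over all segments."""
    return sum(beta * segment_revenue(p, seg)
               for p, seg, beta in zip(prices, segments, betas))
```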
52
+
53
+ In the absence of fairness constraints, ${\mathrm{{OPT}}}_{FP}\left( \cdot \right)$ reduces to pricing each segment separately, and the optimal FP strategy $\widehat{\mathbf{p}}$, consisting of ${\widehat{p}}_{i}$ for segment $i$, is given by ${\widehat{p}}_{i} \in \operatorname{argmax}{\pi }_{i}\left( {p}_{i}\right)$.
54
+
55
+ Fairness in Feature-based Pricing. Let $d : \mathcal{X} \times \mathcal{X} \rightarrow {\mathbb{R}}_{ + }$ be a distance function over $\mathcal{X}$. We assume that such a function exists and is well defined on $\mathcal{X}$, i.e., $(\mathcal{X}, d)$ is a metric space. The distance function quantifies the dissimilarity between the feature vectors of individuals belonging to different market segments. For simplicity, we write $d\left( {{x}_{i},{x}_{j}}\right) \mathrel{\text{:=}} {d}_{ij}$. Individual fairness of an FP strategy is defined as:
56
+
57
+ Definition 1 ( $\alpha$ -fairness). A price function $\mathbf{p} : \mathcal{X} \rightarrow {\mathbb{R}}_{ + }^{K}$ is $\alpha$ -fair with respect to $d$ iff for all ${x}_{i},{x}_{j} \in \mathcal{X}$ , we have
58
+
59
+ $$
60
+ \left| {{p}_{i} - {p}_{j}}\right| \leq \alpha \cdot {d}_{ij} \tag{2}
61
+ $$
62
+
63
+ We call a pricing strategy that satisfies Eq. (2) for a given value of $\alpha$ an $\alpha$-Fair Feature-based Pricing ($\alpha$-FFP) strategy. It is easy to see from the definition that any $\alpha$-FFP is also an ${\alpha }^{\prime }$-FFP for any ${\alpha }^{\prime } \geq \alpha$. We drop the quantifier $\alpha$ and write FFP when it is clear from the context.
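Definition 1 is straightforward to verify mechanically. A minimal sketch, where `dist` is a hypothetical $K \times K$ matrix with `dist[i][j]` $= d_{ij}$:

```python
def is_alpha_fair(prices, dist, alpha):
    """Check Definition 1: |p_i - p_j| <= alpha * d_ij for all pairs."""
    K = len(prices)
    return all(abs(prices[i] - prices[j]) <= alpha * dist[i][j]
               for i in range(K) for j in range(i + 1, K))
```

Note that a strategy that passes the check for some $\alpha$ also passes for any $\alpha' \geq \alpha$, since the right-hand side only grows.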
64
+
65
+ Cost of Fairness (CoF). Next, we define the CoF as the deviation from optimality due to the fairness constraints in Eq. (2): the ratio of the expected revenue generated by optimal feature-based pricing to that generated by fair feature-based pricing.
66
+
67
+ Definition 2 (COST OF FAIRNESS (COF)). Cost of fairness for an FFP strategy $\mathbf{p}$ is defined as
68
+
69
+ $$
70
+ \operatorname{CoF} = \frac{\Pi \left( \widehat{\mathbf{p}}\right) }{\Pi \left( \mathbf{p}\right) }. \tag{3}
71
+ $$
72
+
73
+ In the following sections, we analyze FP and FFP strategies and their CoF when $\mathcal{V}$ is discrete (Section 4) and continuous (Section 5).
74
+
75
+ ## 4 FFP FOR DISCRETE VALUATIONS
76
+
77
+ We want to ensure $\alpha$-fairness in the pricing strategy, which is achieved by maximizing revenue while satisfying the fairness constraints. In this section, we derive the optimal FP (Section 4.1), show how to achieve $\alpha$-fairness (Section 4.2), and analyze the CoF (Section 4.3) in the discrete valuation setting.
78
+
79
+ We consider the simplest setting, described as follows. Let the consumer segments be $\mathcal{X} = \left\{ {{x}_{1},{x}_{2}}\right\}$, with valuations drawn from a discrete set $\mathcal{V} = \left\{ {{v}_{1},{v}_{2}}\right\}$; we assume ${v}_{1} < {v}_{2}$ without loss of generality. Let ${\beta }_{1} = \beta$ and ${\beta }_{2} = 1 - \beta$. Further, let ${f}_{1}\left( {v}_{1}\right) = {q}_{1}$ (respectively, ${f}_{2}\left( {v}_{1}\right) = {q}_{2}$) denote the probability that a consumer in segment 1 (segment 2) has valuation ${v}_{1}$. The expected revenue generated by $\mathbf{p}$ is given by:
80
+
81
+ $$
+ \Pi \left( \mathbf{p}\right) = \beta {p}_{1}\left\lbrack {{q}_{1}\mathbb{1}\left( {{v}_{1} \geq {p}_{1}}\right) + \left( {1 - {q}_{1}}\right) \mathbb{1}\left( {{v}_{2} \geq {p}_{1}}\right) }\right\rbrack + \left( {1 - \beta }\right) {p}_{2}\left\lbrack {{q}_{2}\mathbb{1}\left( {{v}_{1} \geq {p}_{2}}\right) + \left( {1 - {q}_{2}}\right) \mathbb{1}\left( {{v}_{2} \geq {p}_{2}}\right) }\right\rbrack \tag{4}
+ $$
90
+
91
+ ### 4.1 OPTIMAL FEATURE-BASED PRICING
92
+
93
+ As discussed earlier, $\Pi \left( \mathbf{p}\right)$ can be maximized by maximizing ${\pi }_{i}\left( {p}_{i}\right)$ for each market segment independently if there are no fairness constraints. This problem is an integer program with the price for each segment as a discrete variable. The revenue generated depends on ${\beta }_{i}$ and ${f}_{i}\left( \cdot \right)$ ($\beta ,{q}_{1},{q}_{2}$ in the current simplest case). The optimal FP is then given as
94
+
95
+ $$
96
+ \text{For}i \in \{ 1,2\} : {\widehat{p}}_{i} = \left\{ \begin{array}{ll} {v}_{1} & \text{ if }{q}_{i} \geq 1 - \frac{{v}_{1}}{{v}_{2}} \\ {v}_{2} & \text{ otherwise } \end{array}\right. \tag{5}
97
+ $$
98
+
99
+ <table><tr><td>Notation</td><td>Description</td></tr><tr><td>FP</td><td>Feature-based Pricing</td></tr><tr><td>FFP</td><td>Fair Feature-based Pricing</td></tr><tr><td>${\mathcal{F}}_{k}, {f}_{k}\left( \cdot \right)$</td><td>Valuation CDF and PDF for the ${k}^{\text{th }}$ consumer segment</td></tr><tr><td>$\mathcal{X}$</td><td>Set of all consumer features/types</td></tr><tr><td>$\mathcal{V}$</td><td>Support set of consumers' valuations</td></tr><tr><td>${x}_{k}$</td><td>Consumer feature of the ${k}^{\text{th }}$ segment</td></tr><tr><td>${\beta }_{k}$</td><td>The fraction of consumers in the ${k}^{\text{th }}$ segment</td></tr><tr><td>$\mathbf{p} = \left( {{p}_{1},{p}_{2},\ldots ,{p}_{K}}\right)$</td><td>Feature-based price vector</td></tr><tr><td>${\pi }_{k}\left( {p}_{k}\right)$</td><td>Expected revenue per consumer in the ${k}^{\text{th }}$ segment</td></tr><tr><td>$\Pi \left( \mathbf{p}\right)$</td><td>Expected revenue generated by $\mathbf{p}$ across all consumer segments</td></tr><tr><td>$\widehat{\mathbf{p}} = \left( {{\widehat{p}}_{1},{\widehat{p}}_{2},\ldots ,{\widehat{p}}_{K}}\right)$</td><td>Price vector under optimal price discrimination</td></tr><tr><td>${d}_{ij} \mathrel{\text{:=}} d\left( {{x}_{i},{x}_{j}}\right)$</td><td>A real-valued metric on the consumer feature space $\mathcal{X}$</td></tr><tr><td>$\alpha$</td><td>Fairness parameter</td></tr><tr><td>${\mathbf{p}}^{ \star } = \left( {{p}_{1}^{ \star },{p}_{2}^{ \star },\ldots ,{p}_{K}^{ \star }}\right)$</td><td>Optimal fair feature-based price vector</td></tr><tr><td>$\widetilde{\mathbf{p}} = \left( {{\widetilde{p}}_{1},{\widetilde{p}}_{2},\ldots ,{\widetilde{p}}_{K}}\right)$</td><td>Price vector for OPT-LINP-FFP</td></tr><tr><td>CoF</td><td>Cost of Fairness</td></tr><tr><td>${L}_{m}$</td><td>Linear approximation of the concave revenue curve with pivot $m$ as parameter</td></tr></table>
100
+
101
+ Table 1: Notation Table
102
+
103
+ Proof. For a market segment $i$, ${\pi }_{i}\left( {v}_{1}\right) = {v}_{1}$ and ${\pi }_{i}\left( {v}_{2}\right) = {v}_{2}\left( {1 - {q}_{i}}\right)$. So ${\widehat{p}}_{i} = {v}_{1}$ if
104
+
105
+ $$
106
+ {\pi }_{i}\left( {v}_{1}\right) \geq {\pi }_{i}\left( {v}_{2}\right) \Rightarrow {v}_{1} \geq {v}_{2}\left( {1 - {q}_{i}}\right) \Rightarrow {q}_{i} \geq 1 - \frac{{v}_{1}}{{v}_{2}}
107
+ $$
108
+
109
+ otherwise, ${\widehat{p}}_{i} = {v}_{2}$ .
110
+
111
+ Next, we analyze the fairness aspects of the above pricing strategy.
112
+
113
+ ### 4.2 OPTIMAL FAIR FEATURE-BASED PRICING
114
+
115
+ Let $(\mathcal{X}, d)$ be a metric space. We model the optimal fair feature-based pricing (FFP) problem as an integer program that maximizes $\Pi \left( \mathbf{p}\right)$ subject to the $\alpha$-fairness constraints described in Eq. (2). We denote this problem ${\mathrm{{OPT}}}_{FFP}\left( {\mathcal{V},\mathcal{X}, d,\mathcal{F},\beta ,\alpha }\right)$ and the corresponding optimal FFP strategy ${\mathbf{p}}^{ \star }$. First, we make a useful claim for binary valuations.
116
+
117
+ Lemma 1. When $\mathcal{V} = \left\{ {{v}_{1},{v}_{2}}\right\}$, if $\widehat{\mathbf{p}}$ is not $\alpha$-fair, then ${\mathrm{{OPT}}}_{FFP}\left( {\mathcal{V},\mathcal{X}, d,\mathcal{F},\beta ,\alpha }\right)$ reduces to ${\mathrm{{OPT}}}_{FP}\left( {\widetilde{\mathcal{V}},\mathcal{X},\mathcal{F},\beta }\right)$, where $\widetilde{\mathcal{V}}$ is either $\left\{ {v}_{1}\right\}$, $\left\{ {v}_{2}\right\}$, or $\left\{ {{v}_{1},{v}_{1} + \alpha {d}_{12}}\right\}$.
118
+
119
+ Proof. Let $\left( {{p}_{1},{p}_{2}}\right)$ be the tuple of offered prices. Note that if ${v}_{2} - {v}_{1} \leq \alpha {d}_{12}$ or ${\widehat{p}}_{1} = {\widehat{p}}_{2}$, then the optimal ${\mathbf{p}}^{ \star } = \widehat{\mathbf{p}}$ has support $\left\{ {{v}_{1},{v}_{2}}\right\}$ and $\widehat{\mathbf{p}}$ is trivially fair. We consider the more interesting case where ${v}_{2} - {v}_{1} > \alpha {d}_{12}$ and ${\widehat{p}}_{1} \neq {\widehat{p}}_{2}$. In this case, the only candidate support sets for an optimal fair pricing strategy are $\left\{ {v}_{1}\right\}$, $\left\{ {v}_{2}\right\}$, $\left\{ {{v}_{1},{v}_{1} + \alpha {d}_{12}}\right\}$, and $\left\{ {{v}_{2} - \alpha {d}_{12},{v}_{2}}\right\}$. The optimal FFP does not take values from the set $\left\{ {{v}_{2} - \alpha {d}_{12},{v}_{2}}\right\}$, as the consumers with valuation ${v}_{1}$ would not make any purchase; hence, the expected revenue with support $\left\{ {{v}_{2} - \alpha {d}_{12},{v}_{2}}\right\}$ is at most the expected revenue with support $\left\{ {v}_{2}\right\}$.
120
+
121
+ We now relax the constraint of binary valuations and analyze the optimal fair pricing scheme for $n$ valuations. The consumer segments are $\mathcal{X} = \left\{ {{x}_{1},{x}_{2}}\right\}$ with ${\beta }_{1} = \beta$ and ${\beta }_{2} = 1 - \beta$, the valuations are drawn from the set $\mathcal{V} = \left\{ {{v}_{1},{v}_{2},\ldots ,{v}_{n}}\right\}$, and ${f}_{1}\left( {v}_{i}\right) = {q}_{i,1}$ and ${f}_{2}\left( {v}_{i}\right) = {q}_{i,2}$. This is a simple extension of the pricing problem ${\mathrm{{OPT}}}_{FP}\left( {\mathcal{V},\mathcal{X},\mathcal{F},\beta }\right)$, modelled as an integer program where the prices are drawn from the set $\mathcal{V}$. If $\widehat{\mathbf{p}}$ is not $\alpha$-fair, then the corresponding ${\mathrm{{OPT}}}_{FFP}\left( {\mathcal{V},\mathcal{X}, d,\mathcal{F},\beta ,\alpha }\right)$ can be solved by reducing it to ${\mathrm{{OPT}}}_{FP}\left( {\widetilde{\mathcal{V}},\mathcal{X},\mathcal{F},\beta }\right)$ with $\widetilde{\mathcal{V}}$ given by:
122
+
123
+ $$
124
+ \widetilde{\mathcal{V}} = \left\{ \begin{array}{ll} \left\{ {v}_{i}\right\} ,{v}_{i} \in \mathcal{V} & \text{ if }{p}_{1}^{ \star } = {p}_{2}^{ \star } \\ \left\{ {{v}_{j},{v}_{j} + \alpha {d}_{12},{v}_{j} - \alpha {d}_{12}}\right\} ,{v}_{j} \in \mathcal{V} & \text{ if }{p}_{1}^{ \star } \neq {p}_{2}^{ \star } \end{array}\right.
125
+ $$
126
+
127
+ Given the set $\widetilde{\mathcal{V}}$, the pricing problem ${\operatorname{OPT}}_{FP}\left( {\widetilde{\mathcal{V}},\mathcal{X},\mathcal{F},\beta }\right)$ can be solved in constant time. It is easy to see that computing $\widetilde{\mathcal{V}}$ takes $\mathcal{O}\left( {n}^{2}\right)$ time for $n$ valuations and two consumer segments. Therefore, the fair pricing problem ${\operatorname{OPT}}_{FFP}\left( {\mathcal{V},\mathcal{X}, d,\mathcal{F},\beta ,\alpha }\right)$ can be solved in $\mathcal{O}\left( {n}^{2}\right)$ time.
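The reduction can be sketched as a brute-force search: collect the $\mathcal{O}(n)$ candidate prices $\{v_j, v_j \pm \alpha d_{12}\}$ and evaluate every fair pair, giving $\mathcal{O}(n^2)$ pair evaluations. This is an illustrative sketch, not the paper's algorithm; `pmf1`/`pmf2` are hypothetical segment PMFs over `vals`.

```python
def brute_force_ffp(vals, pmf1, pmf2, beta, alpha_d12):
    """Return (revenue, p1, p2) maximizing Eq. (4) over candidate
    prices, subject to the fairness constraint |p1 - p2| <= alpha*d12."""
    def rev(p, pmf):
        # price times the probability that the valuation is >= price
        return p * sum(q for v, q in zip(vals, pmf) if v >= p)

    cands = {c for v in vals
             for c in (v, v + alpha_d12, v - alpha_d12) if c > 0}
    return max((beta * rev(p1, pmf1) + (1 - beta) * rev(p2, pmf2), p1, p2)
               for p1 in cands for p2 in cands
               if abs(p1 - p2) <= alpha_d12 + 1e-12)
```

For example, with a large $\alpha d_{12}$ the fairness constraint is non-binding and the search recovers the unconstrained optimum; shrinking $\alpha d_{12}$ forces the two prices together.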
128
+
129
+ ### 4.3 COF ANALYSIS
130
+
131
+ For $n = 2$ , based on the values of ${q}_{1},{q}_{2}$ we have the following cases:
132
+
133
+ 1. ${p}_{1}^{ \star } = {p}_{2}^{ \star } = {v}_{1}$
+ 2. ${p}_{1}^{ \star } = {p}_{2}^{ \star } = {v}_{2}$
+ 3. ${p}_{1}^{ \star } = {v}_{1} + \alpha {d}_{12},{p}_{2}^{ \star } = {v}_{1}$
+ 4. ${p}_{1}^{ \star } = {v}_{1},{p}_{2}^{ \star } = {v}_{1} + \alpha {d}_{12}$
136
+
137
+ In cases 1 and 2, the optimal fair pricing is equivalent to uniform pricing and is therefore 'trivially' fair with $\mathrm{{CoF}} = 1$, i.e., $\Pi \left( \widehat{\mathbf{p}}\right) = \Pi \left( {\mathbf{p}}^{ \star }\right)$. For case 3, $\Pi \left( \widehat{\mathbf{p}}\right)$ and $\Pi \left( {\mathbf{p}}^{ \star }\right)$ are given as:
138
+
139
+ $$
140
+ \Pi \left( \widehat{\mathbf{p}}\right) = \beta \left( {v}_{2}\right) \left( {1 - {q}_{1}}\right) + \left( {1 - \beta }\right) {v}_{1}
141
+ $$
142
+
143
+ $$
144
+ \Pi \left( {\mathbf{p}}^{ \star }\right) = \beta \left( {{v}_{1} + \alpha {d}_{12}}\right) \left( {1 - {q}_{1}}\right) + \left( {1 - \beta }\right) {v}_{1}
145
+ $$
146
+
147
+ Then the cost of fairness for case 3 is given as:
148
+
149
+ $$
150
+ \mathrm{{CoF}} = \frac{\Pi \left( \widehat{\mathbf{p}}\right) }{\Pi \left( {\mathbf{p}}^{ \star }\right) } = \frac{\beta \left( {v}_{2}\right) \left( {1 - {q}_{1}}\right) + \left( {1 - \beta }\right) {v}_{1}}{\beta \left( {{v}_{1} + \alpha {d}_{12}}\right) \left( {1 - {q}_{1}}\right) + \left( {1 - \beta }\right) {v}_{1}}
151
+ $$
152
+
153
+ $$
154
+ = \frac{\beta \left( {{v}_{2} - {v}_{1}}\right) + {v}_{1} - \beta {v}_{2}{q}_{1}}{{\beta \alpha }{d}_{12}\left( {1 - {q}_{1}}\right) - \beta {v}_{1}{q}_{1} + {v}_{1}}
155
+ $$
156
+
157
+ $$
158
+ = \frac{\beta \left( {1 - \frac{{v}_{1}}{{v}_{2}}}\right) + \frac{{v}_{1}}{{v}_{2}} - \beta {q}_{1}}{\beta \left( \frac{\alpha {d}_{12}}{{v}_{2}}\right) \left( {1 - {q}_{1}}\right) - \beta \left( \frac{{v}_{1}}{{v}_{2}}\right) {q}_{1} + \frac{{v}_{1}}{{v}_{2}}} \tag{6}
159
+ $$
160
+
161
+ Replacing $\beta$ with $\left( {1 - \beta }\right)$ and ${q}_{1}$ with ${q}_{2}$ in the above expression yields the corresponding CoF for case 4.
162
+
163
+ Proposition 2. The cost of fairness with discrete valuations can be arbitrarily large.
164
+
165
+ Proof. From Eq. (6), when $\frac{{v}_{1}}{{v}_{2}} \rightarrow 0$, we have $\operatorname{CoF} = \frac{{v}_{2}}{\alpha {d}_{12}}$. The CoF (in case 3 and/or case 4) is thus arbitrarily large when ${d}_{12} > 0$ and there is a large difference between ${v}_{1}$ and ${v}_{2}$. Note that ${d}_{12} = 0$ is uninteresting, as the seller is then unable to distinguish between the two segments.
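A quick numeric illustration of the proof, with hypothetical parameter values; we take as given a parameter regime where case 3 arises, i.e., the unconstrained optimum charges segment 1 the high price:

```python
def cof_case3(v1, v2, q1, beta, alpha_d12):
    """CoF in case 3, computed directly from the two revenues above:
    optimal FP charges (v2, v1); optimal FFP charges (v1 + a*d12, v1)."""
    opt = beta * v2 * (1 - q1) + (1 - beta) * v1
    fair = beta * (v1 + alpha_d12) * (1 - q1) + (1 - beta) * v1
    return opt / fair
```

With $v_1 \approx 0$, the ratio approaches $v_2 / (\alpha d_{12})$, and fixing $\alpha d_{12}$ while growing $v_2$ drives the CoF without bound.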
166
+
167
+ Note that ${v}_{2}$ being arbitrarily large need not be a commonly occurring setting. Hence, in the backdrop of the above negative result, we work with bounded-support valuations. In the next section, we make assumptions, standard in the economics literature, on the revenue functions ${\pi }_{i}\left( \cdot \right)$: concave revenue functions and common support [Bergemann et al., 2021]. As argued in Section 3 of Dhangwatnotai et al. [2015], valuation distributions satisfying the Monotone Hazard Rate (MHR) condition satisfy these assumptions on the revenue functions. The revenue functions are also concave for another commonly analyzed family of distributions, the regular distributions, in which the virtual valuation is non-decreasing (Section 4.3 of Bergemann et al. [2021]). MHR is a common assumption in Econ-CS [Hartline and Roughgarden, 2009].
168
+
169
+ Therefore, in the following section, we analyze the cost of fairness for such valuation distributions and the associated concave revenue functions.
170
+
171
+ ## 5 FFP FOR CONTINUOUS VALUATIONS
172
+
173
+ In this section, we consider feature-based pricing with continuous valuations. We impose a standard restriction on the revenue functions ${\pi }_{i}\left( \cdot \right)$: they are concave on the common support $\mathcal{V} = \left\lbrack {\underline{v},\bar{v}}\right\rbrack$ [Bergemann et al., 2021]. The consumer segments are identified by the associated feature vectors ${x}_{i} \in \mathcal{X}$. Here, $\underline{v}$ is the marginal cost, defined as the minimum feasible valuation at which the seller is willing to sell the product; the marginal cost may include the cost of production, transportation, etc. On the other hand, $\bar{v}$ is the maximum consumer valuation. Without loss of generality, we consider that the maximum consumer valuation is greater than the marginal cost, i.e., trade occurs.
174
+
175
+ We begin with a tight upper bound on the CoF under the conditions mentioned above (Section 5.1), followed by two pricing schemes based on the available information about the revenue functions (Section 5.2); finally, we present an algorithm that achieves the CoF bound in Section 5.3.
176
+
177
+ ### 5.1 OPTIMAL FFP FOR CONTINUOUS VALUATIONS
178
+
179
+ The problem of determining the optimal FFP can be modeled as a convex program with the $\alpha$-fairness conditions as linear constraints. The convex program below describes the ${\mathrm{{OPT}}}_{FFP}\left( {\mathcal{V},\mathcal{X}, d,\mathcal{F},\beta ,\alpha }\right)$ model, assuming complete knowledge of the revenue functions ${\pi }_{i}\left( \cdot \right)$.
180
+
181
+ $$
182
+ \mathop{\max }\limits_{{{p}_{k} \in \mathcal{V},\forall k}}\Pi \left( \mathbf{p}\right) = \mathop{\sum }\limits_{{k = 1}}^{K}{\beta }_{k}{\pi }_{k}\left( {p}_{k}\right)
183
+ $$
184
+
185
+ $$
186
+ \text{subject to,}\left| {{p}_{i} - {p}_{j}}\right| \leq {\alpha d}\left( {{x}_{i},{x}_{j}}\right) ,\forall i \neq j
187
+ $$
188
+
189
+ $$
190
+ {p}_{i} \geq 0,\forall i \in \left\lbrack K\right\rbrack
191
+ $$
192
+
193
+ Let ${\mathbf{p}}^{ \star }$ be a solution to the above problem.
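For small $K$, the program above can be approximated by exhaustive search over a price grid. This is a naive illustrative sketch, not the paper's method (the paper solves a convex program); the concave revenue functions in the example are hypothetical.

```python
import itertools

def grid_ffp(revenues, betas, dist, alpha, grid):
    """Approximate OPT_FFP by brute force over a price grid.
    revenues: list of (assumed concave) revenue functions pi_i.
    dist: K x K matrix of feature distances d_ij."""
    K = len(revenues)
    best_val, best_p = float("-inf"), None
    for p in itertools.product(grid, repeat=K):
        # alpha-fairness check, with a small tolerance for float error
        fair = all(abs(p[i] - p[j]) <= alpha * dist[i][j] + 1e-9
                   for i in range(K) for j in range(i + 1, K))
        if fair:
            val = sum(b * f(pi) for b, f, pi in zip(betas, revenues, p))
            if val > best_val:
                best_val, best_p = val, p
    return best_p, best_val
```

With a wide fairness budget, the search recovers each segment's unconstrained optimum; tightening the budget pulls the prices together, trading revenue for fairness.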
194
+
195
+ ### 5.2 LINP-FFP AND COF ANALYSIS
196
+
197
+ Let ${D}_{i} \mathrel{\text{:=}} \mathop{\min }\limits_{{j \neq i}}{d}_{ij}$ . With the following proposition, we propose a class of $\alpha$ -fair pricing strategies.
198
+
199
+ Proposition 3. For a given $m \in \left\lbrack {\underline{v},\bar{v}}\right\rbrack$, if the price function satisfies $\left| {{p}_{i} - m}\right| \leq \frac{\alpha }{2}{D}_{i}$ for all $i \in \left\lbrack K\right\rbrack$, then it satisfies $\alpha$-fairness.
200
+
201
+ Proof. From the triangle inequality, we have $\left| {{p}_{i} - {p}_{j}}\right| \leq \left| {{p}_{i} - m}\right| + \left| {{p}_{j} - m}\right| \leq \frac{\alpha }{2}{D}_{i} + \frac{\alpha }{2}{D}_{j} \leq \alpha {d}_{ij}$. The last inequality follows from the facts that ${D}_{i} = \mathop{\min }\limits_{{k \neq i}}{d}_{ik} \leq {d}_{ij}$ and ${D}_{j} = \mathop{\min }\limits_{{k \neq j}}{d}_{jk} \leq {d}_{ji} = {d}_{ij}$.
202
+
203
+ ![0196393f-b595-7b23-a67f-33ff572df8b4_5_211_182_583_356_0.jpg](images/0196393f-b595-7b23-a67f-33ff572df8b4_5_211_182_583_356_0.jpg)
204
+
205
+ Figure 1: Concave revenue function ${\pi }_{i}\left( \cdot \right)$ and its linear approximation ${L}_{i}\left( \cdot \right)$ (arrows show equations for ${L}_{i}\left( \cdot \right)$ ). Figure represents the case ${\widehat{p}}_{i} - m \geq \alpha {D}_{i}/2$ for which LINP-FFP assigns ${p}_{i} = m + \alpha {D}_{i}/2$ . The case $m - {\widehat{p}}_{i} \geq \alpha {D}_{i}/2$ is similar.
206
+
207
+ In other words, to ensure that the prices for different segments are not too different, it is enough to ensure that the price for each segment is not too different from some common point $m$ . The prices for all the segments would hence lie around this point and can be determined with respect to it. We term this point the pivot. We now present the second FFP model, a pivot-based $\alpha$ -fair pricing strategy that satisfies the condition in Proposition 3 with access to only ${\widehat{p}}_{i}$ for a given $m$ .
208
+
209
+ $$
210
+ {p}_{i} = \left\{ \begin{array}{ll} m + \alpha {D}_{i}/2 & \text{ if }{\widehat{p}}_{i} - m \geq \alpha {D}_{i}/2 \\ m - \alpha {D}_{i}/2 & \text{ if }m - {\widehat{p}}_{i} \geq \alpha {D}_{i}/2 \\ {\widehat{p}}_{i} & \text{ otherwise } \end{array}\right. \tag{8}
211
+ $$
212
+
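The three cases of Eq. (8) amount to clipping each ${\widehat{p}}_{i}$ into the band $\left\lbrack {m - \alpha {D}_{i}/2, m + \alpha {D}_{i}/2}\right\rbrack$ , so the rule can be sketched in a few lines (illustrative code, not the authors'):

```python
def linp_ffp(m, p_hat, D, alpha):
    """LINP-FFP (Eq. 8): clip each segment's unconstrained optimal price
    p_hat[i] into the band [m - alpha*D[i]/2, m + alpha*D[i]/2]."""
    return [min(max(ph, m - alpha * Di / 2), m + alpha * Di / 2)
            for ph, Di in zip(p_hat, D)]
```

By Proposition 3, the resulting prices are $\alpha$ -fair for any pivot $m$ , since each price stays within $\alpha {D}_{i}/2$ of $m$ .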
213
+ We call this pricing scheme LINP-FFP. It is easy to see that the above pricing strategy is $\alpha$ -fair. We now present the CoF bound for LINP-FFP.
214
+
215
+ Theorem 4. The Cost of Fairness for optimal fair price discrimination with concave revenue functions satisfies
216
+
217
+ $$
218
+ \operatorname{CoF} \leq \frac{2}{1 + \min \left\{ {\alpha \frac{\mathop{\min }\limits_{i}{D}_{i}}{\bar{v} - \underline{v}},1}\right\} }
219
+ $$
220
+
221
+ Proof. We prove that the above CoF is satisfied by LINP-FFP and hence the theorem. Let $m \in \left\lbrack {\underline{v},\bar{v}}\right\rbrack$ be a pivot point (See Figure 1). Let
222
+
223
+ $$
224
+ {\gamma }_{i} \mathrel{\text{:=}} \left\{ \begin{array}{ll} \frac{\left( {m - \underline{v}}\right) + \alpha {D}_{i}/2}{{\widehat{p}}_{i} - \underline{v}} & \text{ if }{\widehat{p}}_{i} - m \geq \alpha {D}_{i}/2 \\ \frac{\left( {\bar{v} - m}\right) + \alpha {D}_{i}/2}{\bar{v} - {\widehat{p}}_{i}} & \text{ if }m - {\widehat{p}}_{i} \geq \alpha {D}_{i}/2 \\ 1 & \text{ otherwise } \end{array}\right. \tag{9}
225
+ $$
226
+
227
+ Let ${\widehat{\pi }}_{i}$ be the expected revenue generated from the ${i}^{\text{th }}$ segment under $\widehat{\mathbf{p}}$ . We now show the following supporting lemma.
228
+
229
+ Lemma 5. The pricing strategy given in Eq. 8 guarantees at least a ${\gamma }_{i}$ fraction of the optimal revenue from segment $i$ , i.e., ${\pi }_{i}\left( {p}_{i}\right) \geq {\gamma }_{i}{\widehat{\pi }}_{i}$ .
230
+
231
+ Proof. A lower bound on the concave revenue function ${\pi }_{i}\left( \cdot \right)$ of any segment $i$ is given by its piecewise linear approximation ${L}_{i}$ (see Figure 1):
232
+
233
+ $$
234
+ {L}_{i}\left( p\right) = \left\{ \begin{array}{ll} \frac{{\widehat{\pi }}_{i}}{{\widehat{p}}_{i} - \underline{v}}\left( {p - \underline{v}}\right) , & p \leq {\widehat{p}}_{i} \\ \frac{-{\widehat{\pi }}_{i}}{\bar{v} - {\widehat{p}}_{i}}\left( {p - \bar{v}}\right) , & p > {\widehat{p}}_{i} \end{array}\right. \tag{10}
235
+ $$
236
+
237
+ So, for each consumer segment $i$ we have,
238
+
239
+ $$
240
+ {L}_{i}\left( p\right) \leq {\pi }_{i}\left( p\right) ,\forall p \in \left\lbrack {\underline{v},\bar{v}}\right\rbrack
241
+ $$
242
+
243
+ The expected revenue generated per consumer in segment $i$ by the pricing rule in Eq. 8, for the cases ${\widehat{p}}_{i} - m \geq \alpha {D}_{i}/2$ , $m - {\widehat{p}}_{i} \geq \alpha {D}_{i}/2$ , and the remaining case, is given below in the respective order:
244
+
245
+ $$
246
+ {\pi }_{i}\left( {p}_{i}\right) \geq {L}_{i}\left( {p}_{i}\right) = \frac{{\widehat{\pi }}_{i}}{{\widehat{p}}_{i} - \underline{v}}\left( {m + \alpha {D}_{i}/2 - \underline{v}}\right) = {\widehat{\pi }}_{i}{\gamma }_{i}
247
+ $$
248
+
249
+ $$
250
+ {\pi }_{i}\left( {p}_{i}\right) \geq {L}_{i}\left( {p}_{i}\right) = \frac{-{\widehat{\pi }}_{i}}{\bar{v} - {\widehat{p}}_{i}}\left( {m - \alpha {D}_{i}/2 - \bar{v}}\right) = {\widehat{\pi }}_{i}{\gamma }_{i}
251
+ $$
252
+
253
+ $$
254
+ {\pi }_{i}\left( {p}_{i}\right) = {L}_{i}\left( {\widehat{p}}_{i}\right) = {\widehat{\pi }}_{i}
255
+ $$
256
+
257
+ This proves the lemma.
258
+
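As a quick numerical sanity check of the lemma (illustrative; it assumes Uniform $\left\lbrack {0,1}\right\rbrack$ valuations so that $\pi$ is concave with peak $\widehat{p} = 1/2$ ):

```python
# Assumed concave revenue model: valuations Uniform[v_lo, v_hi], so
# pi(p) = p * (v_hi - p) / (v_hi - v_lo), peaking at p_hat = v_hi / 2.
v_lo, v_hi = 0.0, 1.0

def pi(p):
    return p * (v_hi - p) / (v_hi - v_lo)

p_hat = 0.5
pi_hat = pi(p_hat)        # peak revenue, here 0.25
m, tau = 0.3, 0.05        # pivot and half-width tau = alpha * D_i / 2

# Price from Eq. (8) and the matching gamma_i from Eq. (9)
if p_hat - m >= tau:      # peak lies above the band around m
    p, gamma = m + tau, (m - v_lo + tau) / (p_hat - v_lo)
elif m - p_hat >= tau:    # peak lies below the band
    p, gamma = m - tau, (v_hi - m + tau) / (v_hi - p_hat)
else:                     # peak inside the band: charge it unchanged
    p, gamma = p_hat, 1.0

assert pi(p) >= gamma * pi_hat - 1e-12   # Lemma 5: pi(p_i) >= gamma_i * pi_hat_i
```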
259
+ Let ${\pi }_{i}^{ \star }$ denote the expected revenue generated from the ${i}^{\text{th }}$ segment by ${\mathbf{p}}^{ \star }$ . So, the CoF for the optimal FFP is given by:
260
+
261
+ $$
262
+ \operatorname{CoF} = \frac{\mathop{\sum }\limits_{{i \in \left\lbrack K\right\rbrack }}{\beta }_{i}{\widehat{\pi }}_{i}}{\mathop{\sum }\limits_{{i \in \left\lbrack K\right\rbrack }}{\beta }_{i}{\pi }_{i}^{ \star }} \leq \frac{\mathop{\sum }\limits_{{i \in \left\lbrack K\right\rbrack }}{\beta }_{i}{\widehat{\pi }}_{i}}{\mathop{\sum }\limits_{{i \in \left\lbrack K\right\rbrack }}{\beta }_{i}{\pi }_{i}}\;\text{ (Optimality of }{\pi }_{i}^{ \star }\text{) }
263
+ $$
264
+
265
+ $$
266
+ \leq \frac{\mathop{\sum }\limits_{{i \in \left\lbrack K\right\rbrack }}{\beta }_{i}{\widehat{\pi }}_{i}}{\mathop{\sum }\limits_{{i \in \left\lbrack K\right\rbrack }}{\beta }_{i}{\gamma }_{i}{\widehat{\pi }}_{i}}
267
+ $$
268
+
269
+ (Lemma 5)
270
+
271
+ In order to prove the stated CoF bound, it suffices to show that there exists an $m$ (and hence a corresponding pricing strategy using Eq. (8)) for which the bound is satisfied. Taking $m = \left( {\underline{v} + \bar{v}}\right) /2$ and replacing the denominators in Eq. (9) by $\bar{v} - \underline{v}$ , we have that
272
+
273
+ $$
274
+ \mathrm{{CoF}} \leq \frac{\mathop{\sum }\limits_{{i \in \left\lbrack K\right\rbrack }}{\beta }_{i}{\widehat{\pi }}_{i}}{\mathop{\sum }\limits_{{i \in \left\lbrack K\right\rbrack }}{\beta }_{i}{\widehat{\pi }}_{i}\left( {\frac{1}{2} + \min \left\{ {\frac{\alpha {D}_{i}}{2\left( {\bar{v} - \underline{v}}\right) },\frac{1}{2}}\right\} }\right) }
275
+ $$
276
+
277
+ $$
278
+ \leq \frac{\mathop{\sum }\limits_{{i \in \left\lbrack K\right\rbrack }}{\beta }_{i}{\widehat{\pi }}_{i}}{\left( {\mathop{\sum }\limits_{{i \in \left\lbrack K\right\rbrack }}{\beta }_{i}{\widehat{\pi }}_{i}}\right) \left( {\frac{1}{2} + \min \left\{ {\frac{\alpha \mathop{\min }\limits_{j}{D}_{j}}{2\left( {\bar{v} - \underline{v}}\right) },\frac{1}{2}}\right\} }\right) }
279
+ $$
280
+
281
+ $$
282
+ = \frac{2}{1 + \min \left\{ {\alpha \frac{\mathop{\min }\limits_{j}{D}_{j}}{\bar{v} - \underline{v}},1}\right\} }
283
+ $$
284
+
285
+ It is worth noting that the cost of fairness depends neither on the number of segments nor on the distribution of the population among them. So, as long as the segments are well separated in terms of the distance between the features of consumers across segments, the number of segments and the distribution of the consumer population among them do not affect the revenue guarantee. Also, if the admissible prices are supported over a large interval, the revenue guarantee becomes weaker; this insight discourages pricing schemes with wildly varying prices across segments. Finally, if $\alpha = 0$ , i.e., without any fairness constraints, we recover the bound of 2 proved in Bergemann et al. [2021].
286
+
287
+ We emphasize that the bound is strictly less than 2: under fairness constraints, $\alpha \neq 0$ , and the consumer types are typically well separated in the feature space according to the metric $d$ (otherwise, they are indistinguishable to the seller); hence, ${d}_{ij} \neq 0$ for all $i, j \in \left\lbrack K\right\rbrack$ . This improves the CoF bound given in Bergemann et al. [2021].
288
+
289
+ Tightness of CoF bound: We claim that the CoF bound presented above is tight. In the following example, equality holds and proves the tightness of the bound.
290
+
291
+ Example 1 (Tightness of the CoF bound). Consider $K = 2$ where ${\beta }_{1} = {\beta }_{2} = \frac{1}{2}$ . Let ${\mathcal{F}}_{i}$ be such that ${\pi }_{i}\left( \cdot \right) = {L}_{i}\left( \cdot \right)$ with ${\widehat{p}}_{1} = \underline{v} + \varepsilon ,{\widehat{p}}_{2} = \bar{v} - \varepsilon$ , where $\varepsilon \rightarrow 0$ , and ${\widehat{\pi }}_{1} = {\widehat{\pi }}_{2}$ . It can be seen that if $\alpha$ is such that $\alpha {d}_{12} < \bar{v} - \underline{v}$ , any FP satisfying ${p}_{2} - {p}_{1} = \alpha {d}_{12}$ and ${p}_{1},{p}_{2} \in \left\lbrack {{\widehat{p}}_{1},{\widehat{p}}_{2}}\right\rbrack$ is an optimal FFP (fair FP), and the corresponding $\mathrm{{CoF}} = \frac{2}{1 + \frac{\alpha {d}_{12}}{\bar{v} - \underline{v}}}$ . If $\alpha {d}_{12} \geq \bar{v} - \underline{v}$ , the optimal FP is $\alpha$ -fair and so, $\mathrm{{CoF}} = 1$ . Hence, for this example, $\mathrm{{CoF}} = \frac{2}{1 + \min \left\{ {\alpha \frac{{d}_{12}}{\bar{v} - \underline{v}},1}\right\} }$ . This shows the tightness of the CoF bound derived in Theorem 4.
292
+
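The example can be checked numerically. The sketch below uses assumed values $\underline{v} = 0$ , $\bar{v} = 1$ , $\alpha {d}_{12} = {0.4}$ , $\varepsilon = {10}^{-3}$ , grid-searches the optimal fair price pair, and compares the resulting CoF with the closed form:

```python
# Example 1 with assumed numbers: K = 2, beta = (1/2, 1/2), and piecewise
# linear revenues L_i peaking at v_lo + eps and v_hi - eps with equal peaks.
v_lo, v_hi, eps = 0.0, 1.0, 1e-3
alpha_d12 = 0.4           # alpha * d(x1, x2) < v_hi - v_lo
pi_hat = 1.0              # common peak revenue

def L(p, p_peak):
    if p <= p_peak:
        return pi_hat * (p - v_lo) / (p_peak - v_lo)
    return pi_hat * (v_hi - p) / (v_hi - p_peak)

p1_hat, p2_hat = v_lo + eps, v_hi - eps
n = 400                   # grid search for the optimal fair price pair
best = 0.0
for i in range(n + 1):
    for j in range(n + 1):
        p1 = v_lo + (v_hi - v_lo) * i / n
        p2 = v_lo + (v_hi - v_lo) * j / n
        if abs(p1 - p2) <= alpha_d12 + 1e-12:   # alpha-fairness
            best = max(best, 0.5 * L(p1, p1_hat) + 0.5 * L(p2, p2_hat))

cof = (0.5 * pi_hat + 0.5 * pi_hat) / best     # optimal FP earns pi_hat per segment
predicted = 2.0 / (1.0 + alpha_d12 / (v_hi - v_lo))
assert abs(cof - predicted) < 0.02             # matches as eps -> 0
```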
293
+ We now present an algorithm, OPT-LINP-FFP, to find the optimal pivot ${m}^{ \star }$ in the above LINP-FFP strategy when only $\widehat{\mathbf{p}}$ and the ${\widehat{\pi }}_{i}$ 's are known.
294
+
295
+ ### 5.3 PROPOSED ALGORITHM
296
+
297
+ As LINP-FFP satisfies $\alpha$ -fairness (Proposition 3) and also achieves the CoF bound in Theorem 4, we look for a pricing strategy that is optimal within the class of LINP-FFP. This reduces to finding an optimal pivot that maximizes revenue. In this section, we propose a binary-search-based algorithm for the same. For pricing $\mathbf{p}$ , the expected revenue generated per consumer is given by $\Pi \left( \mathbf{p}\right) = \mathop{\sum }\limits_{{i = 1}}^{K}{\beta }_{i}{\pi }_{i}\left( {p}_{i}\right)$ . Let ${\tau }_{i} \mathrel{\text{:=}} \frac{\alpha }{2}{D}_{i}$ . Observe from Lemma 5 that $\Pi \left( \mathbf{p}\right)$ is lower bounded as:
298
+
299
+ $$
300
+ \Pi \left( \mathbf{p}\right) \geq {\Pi }_{m}\left( \mathbf{L}\right) = \mathop{\sum }\limits_{{i = 1}}^{K}{\beta }_{i}{\gamma }_{i}{\widehat{\pi }}_{i} = \mathop{\sum }\limits_{{i : \left| {{\widehat{p}}_{i} - m}\right| < {\tau }_{i}}}{\beta }_{i}{\widehat{\pi }}_{i} + \mathop{\sum }\limits_{{i : {\widehat{p}}_{i} - m \geq {\tau }_{i}}}{\beta }_{i}{\widehat{\pi }}_{i}\frac{m + {\tau }_{i} - \underline{v}}{{\widehat{p}}_{i} - \underline{v}} + \mathop{\sum }\limits_{{i : m - {\widehat{p}}_{i} \geq {\tau }_{i}}}{\beta }_{i}{\widehat{\pi }}_{i}\frac{\bar{v} - m + {\tau }_{i}}{\bar{v} - {\widehat{p}}_{i}} \tag{13}
301
+ $$
306
+
307
+
308
+
309
+ ## Determining Optimal Pivot $m$
310
+
311
+ As we can see, the revenue generated by LINP-FFP is lower bounded by a piecewise linear function of $m$ . With the aim of achieving a better lower bound, we now address the problem of determining an optimal pivot ${m}^{ \star } \in \mathop{\operatorname{argmax}}\limits_{{m \in \left\lbrack {\underline{v},\bar{v}}\right\rbrack }}{\Pi }_{m}\left( \mathbf{L}\right)$ .
312
+
313
+ ## Pricing Algorithm
314
+
315
+ In what follows, we refer to the candidate points $m$ for the optimal pivot, i.e., for maximizing ${\Pi }_{m}\left( \mathbf{L}\right)$ , as critical points. We denote the set of these critical points by $\mathcal{M}$ .
316
+
317
+ Lemma 6. ${\Pi }_{m}\left( \mathbf{L}\right)$ as a function of $m$ is concave and piecewise linear with the set of critical points $\mathcal{M} =$ $\left( {{\left\{ {\widehat{p}}_{i} - \frac{\alpha }{2}{D}_{i},{\widehat{p}}_{i} + \frac{\alpha }{2}{D}_{i}\right\} }_{i \in \left\lbrack K\right\rbrack } \cap \left\lbrack {\underline{v},\bar{v}}\right\rbrack }\right) \cup \{ \underline{v},\bar{v}\} .$
318
+
319
+ Proof. It is easy to see that for a segment $i,{\gamma }_{i}$ as a function of $m$ is continuous and piecewise linear with breakpoints (i.e., points at which piecewise linear function changes slope): ${\widehat{p}}_{i} - \frac{\alpha }{2}{D}_{i}$ and ${\widehat{p}}_{i} + \frac{\alpha }{2}{D}_{i}$ provided they are in the range $\left\lbrack {\underline{v},\bar{v}}\right\rbrack$ . The set of breakpoints is hence $\left\{ {{\widehat{p}}_{i} - \frac{\alpha }{2}{D}_{i},{\widehat{p}}_{i} + \frac{\alpha }{2}{D}_{i}}\right\} \cap \left\lbrack {\underline{v},\bar{v}}\right\rbrack$ . Also, the slope monotonically decreases at the breakpoints, i.e., ${\gamma }_{i}$ is a concave function of $m$ .
320
+
321
+ From Eq. (13), we can see that ${\Pi }_{m}\left( \mathbf{L}\right)$ is a weighted sum over all segments, of ${\gamma }_{i}$ ’s with constant weights ${\beta }_{i}{\widehat{\pi }}_{i}$ . So, ${\Pi }_{m}\left( \mathbf{L}\right)$ as a function of $m$ is concave and piecewise linear with breakpoints belonging to the following set: ${\left\{ {\widehat{p}}_{i} - \frac{\alpha }{2}{D}_{i},{\widehat{p}}_{i} + \frac{\alpha }{2}{D}_{i}\right\} }_{i \in \left\lbrack K\right\rbrack } \cap \left\lbrack {\underline{v},\bar{v}}\right\rbrack$ . Hence, a point $m$ that maximizes ${\Pi }_{m}\left( \mathbf{L}\right)$ belongs to either the aforementioned set of breakpoints, or the set of its boundary points $\{ \underline{v},\bar{v}\}$ . Thus, the set of critical points $\mathcal{M} =$ $\left( {{\left\{ {\widehat{p}}_{i} - \frac{\alpha }{2}{D}_{i},{\widehat{p}}_{i} + \frac{\alpha }{2}{D}_{i}\right\} }_{i \in \left\lbrack K\right\rbrack } \cap \left\lbrack {\underline{v},\bar{v}}\right\rbrack }\right) \cup \{ \underline{v},\bar{v}\} .$
322
+
323
+ Our algorithm OPT-LINP-FFP (Optimal Linearized Pivot-based Fair Feature-based Pricing) which determines an optimal pivot ${m}^{ \star }$ and provides an $\alpha$ -fair pricing strategy $\left( \widetilde{\mathbf{p}}\right)$ is presented in Algorithm 1.
324
+
325
+ Theorem 7. The OPT-LINP-FFP algorithm (a) returns optimal pivot point ${m}^{ \star }$ and runs in $\mathcal{O}\left( {K\log \left( K\right) }\right)$ time, and (b) achieves the CoF bound given in Theorem 4.
326
+
327
+ Proof. (a) The first module is the creation and sorting of the set of critical points $\mathcal{M}$ , which takes $\mathcal{O}\left( {K\log \left( K\right) }\right)$ time. Owing to Lemma 6, we can find an optimal pivot ${m}^{ \star }$ using binary search over $\mathcal{M}$ . Here, the number of critical points is at most ${2K} + 2$ , i.e., $\left| \mathcal{M}\right| \leq {2K} + 2$ . So, in the second module, which finds an optimal pivot, the binary search in the outer (while) loop runs for $\mathcal{O}\left( {\log \left( \left| \mathcal{M}\right| \right) }\right)$ iterations, and the inner (for) loops run for $\mathcal{O}\left( K\right)$ iterations in each step of the binary search. Thus, the running time of the second module is $\mathcal{O}\left( {K\log \left( K\right) }\right)$ . The third module, which computes the prices for the different segments, runs in $\mathcal{O}\left( K\right)$ time. So, the total running time of Algorithm 1 is $\mathcal{O}\left( {K\log \left( K\right) }\right)$ .
328
+
329
+ Algorithm 1: OPT-LINP-FFP
330
+
331
+ ---
332
+
333
+ Input: $\alpha ,\widehat{\mathbf{p}},\left( {{\widehat{\pi }}_{1},\ldots ,{\widehat{\pi }}_{K}}\right) ,\left( {{\beta }_{1},\ldots ,{\beta }_{K}}\right) ,\left( {{D}_{1},\ldots ,{D}_{K}}\right)$
334
+
335
+ Output: ${m}^{ \star },\widetilde{\mathbf{p}}$
336
+
337
+ /* Creating and sorting the set of critical points */
338
+
339
+
340
+
341
+ $\mathcal{M} \leftarrow \{ \underline{v},\bar{v}\}$
342
+
343
+ for $i \in \left\lbrack K\right\rbrack$ do
344
+
345
+ ${\tau }_{i} \leftarrow \frac{\alpha }{2}{D}_{i}$
346
+
347
+ if ${\widehat{p}}_{i} - {\tau }_{i} > \underline{v}$ then
348
+
349
+ $\mathcal{M} \leftarrow \mathcal{M} \cup \left\{ {{\widehat{p}}_{i} - {\tau }_{i}}\right\}$
350
+
351
+ if ${\widehat{p}}_{i} + {\tau }_{i} < \bar{v}$ then
352
+
353
+ $\mathcal{M} \leftarrow \mathcal{M} \cup \left\{ {{\widehat{p}}_{i} + {\tau }_{i}}\right\}$
354
+
355
+ $\operatorname{sort}\left( \mathcal{M}\right)$
356
+
357
+ /* Binary search for optimal pivot */
358
+
359
+ $\ell \leftarrow 0, r \leftarrow \left| \mathcal{M}\right| - 1$
360
+
361
+ while $\ell \leq r$ do
362
+
363
+ $z \leftarrow \left\lfloor \frac{\ell + r}{2}\right\rfloor \;//\mathcal{M}\left\lbrack z\right\rbrack$ is the current pivot
364
+
365
+ /* Computing the expression in Eq. (13) at the current and adjacent critical points */
366
+
367
+
368
+
369
+
370
+
371
+ ${\Pi }_{\mathcal{M}\left\lbrack {z - 1}\right\rbrack } \leftarrow 0,{\Pi }_{\mathcal{M}\left\lbrack z\right\rbrack } \leftarrow 0,{\Pi }_{\mathcal{M}\left\lbrack {z + 1}\right\rbrack } \leftarrow 0$
372
+
373
+ for $y \leftarrow \{ z - 1, z, z + 1\}$ do
374
+
375
+ for $i \leftarrow 1$ to $K$ do
376
+
377
+ if ${\widehat{p}}_{i} \geq \mathcal{M}\left\lbrack y\right\rbrack + {\tau }_{i}$ then
378
+
379
+ ${\gamma }_{i} \leftarrow \frac{\mathcal{M}\left\lbrack y\right\rbrack - \underline{v} + {\tau }_{i}}{{\widehat{p}}_{i} - \underline{v}}$
380
+
381
+ else if ${\widehat{p}}_{i} \leq \mathcal{M}\left\lbrack y\right\rbrack - {\tau }_{i}$ then
382
+
383
+ ${\gamma }_{i} \leftarrow \frac{\bar{v} - \mathcal{M}\left\lbrack y\right\rbrack + {\tau }_{i}}{\bar{v} - {\widehat{p}}_{i}}$
384
+
385
+ else
386
+
387
+ ${\gamma }_{i} \leftarrow 1$
388
+
389
+ ${\Pi }_{\mathcal{M}\left\lbrack y\right\rbrack } \leftarrow {\Pi }_{\mathcal{M}\left\lbrack y\right\rbrack } + {\beta }_{i}{\gamma }_{i}{\widehat{\pi }}_{i}$
390
+
391
+ if ${\Pi }_{\mathcal{M}\left\lbrack {z - 1}\right\rbrack } \leq {\Pi }_{\mathcal{M}\left\lbrack z\right\rbrack } \leq {\Pi }_{\mathcal{M}\left\lbrack {z + 1}\right\rbrack }$ then
392
+
393
+ $\ell \leftarrow z + 1$
394
+
395
+ else if ${\Pi }_{\mathcal{M}\left\lbrack {z - 1}\right\rbrack } \geq {\Pi }_{\mathcal{M}\left\lbrack z\right\rbrack } \geq {\Pi }_{\mathcal{M}\left\lbrack {z + 1}\right\rbrack }$ then
396
+
397
+ $r \leftarrow z - 1$
398
+
399
+ else
400
+
401
+ ${m}^{ \star } \leftarrow \mathcal{M}\left\lbrack z\right\rbrack$
402
+
403
+ break
404
+
405
+ /* Pricing for the different segments */
406
+
407
+ for $i \in \left\lbrack K\right\rbrack$ do
408
+
409
+ if ${\widehat{p}}_{i} \geq {m}^{ \star } + {\tau }_{i}$ then
410
+
411
+ ${\widetilde{p}}_{i} \leftarrow {m}^{ \star } + {\tau }_{i}$
412
+
413
+ else if ${\widehat{p}}_{i} \leq {m}^{ \star } - {\tau }_{i}$ then
414
+
415
+ ${\widetilde{p}}_{i} \leftarrow {m}^{ \star } - {\tau }_{i}$
416
+
417
+ else
418
+
419
+ ${\widetilde{p}}_{i} \leftarrow {\widehat{p}}_{i}$
420
+
421
+ ---
422
+
423
+ (b) From Theorem 4, for $m = \left( {\underline{v} + \bar{v}}\right) /2$ , the CoF bound holds. Also, ${\Pi }_{{m}^{ \star }}\left( \mathbf{L}\right) \geq {\Pi }_{m}\left( \mathbf{L}\right)$ for all $m \neq {m}^{ \star }$ . We have:
424
+
425
+
426
+
427
+ $$
428
+ \operatorname{CoF} = \frac{\Pi \left( \widehat{\mathbf{p}}\right) }{\Pi \left( \widetilde{\mathbf{p}}\right) } \leq \frac{\Pi \left( \widehat{\mathbf{p}}\right) }{{\Pi }_{{m}^{ \star }}\left( \mathbf{L}\right) } \leq \frac{\Pi \left( \widehat{\mathbf{p}}\right) }{{\Pi }_{m}\left( \mathbf{L}\right) }
429
+ $$
430
+
431
+ This completes the proof of the theorem.
432
+
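A compact reference implementation of the overall scheme can be sketched as follows. For brevity it evaluates ${\Pi }_{m}\left( \mathbf{L}\right)$ at every critical point (taking $\mathcal{O}\left( {K}^{2}\right)$ time) instead of using Algorithm 1's binary search, which only sharpens the running time; all inputs and values are illustrative.

```python
def pi_m_lower_bound(m, p_hat, pi_hat, beta, tau, v_lo, v_hi):
    """Evaluate the piecewise-linear lower bound Pi_m(L) of Eq. (13)."""
    total = 0.0
    for ph, pih, b, t in zip(p_hat, pi_hat, beta, tau):
        if ph - m >= t:                      # peak above the band around m
            g = (m - v_lo + t) / (ph - v_lo)
        elif m - ph >= t:                    # peak below the band
            g = (v_hi - m + t) / (v_hi - ph)
        else:                                # peak inside the band
            g = 1.0
        total += b * g * pih
    return total

def opt_linp_ffp(alpha, p_hat, pi_hat, beta, D, v_lo, v_hi):
    """Pick the critical point maximizing Pi_m(L) (Lemma 6), then price by Eq. (8)."""
    tau = [alpha * d / 2.0 for d in D]
    crit = {v_lo, v_hi}
    for ph, t in zip(p_hat, tau):
        crit.update(c for c in (ph - t, ph + t) if v_lo < c < v_hi)
    m_star = max(sorted(crit),
                 key=lambda m: pi_m_lower_bound(m, p_hat, pi_hat, beta,
                                                tau, v_lo, v_hi))
    # LINP-FFP pricing: clip each p_hat[i] into [m_star - tau_i, m_star + tau_i]
    prices = [min(max(ph, m_star - t), m_star + t) for ph, t in zip(p_hat, tau)]
    return m_star, prices
```

Because ${\Pi }_{m}\left( \mathbf{L}\right)$ is concave and piecewise linear in $m$ (Lemma 6), its maximum is attained at a critical point, so exhaustive evaluation over $\mathcal{M}$ is correct, and by Proposition 3 the returned prices are $\alpha$ -fair.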
433
+ ## 6 DISCUSSION
434
+
435
+ This paper built a foundation for the design of fair feature-based pricing by proposing a new fairness notion called $\alpha$ -fairness. Our impossibility result for the discrete valuation setting shows that a finite cost of fairness (CoF) cannot be attained in general. Interestingly, in the continuous valuation setting with concave revenue functions, we showed that a family of pricing schemes, LINP-FFP, achieves a CoF strictly less than 2. Finally, we proposed an algorithm, OPT-LINP-FFP, which yields an optimal pricing strategy within this family. It is worth noting that the algorithm does not require the complete distribution functions; the peaks of the revenue functions are sufficient statistics for computing an optimal fair feature-based pricing.
436
+
437
+ We leave the problem of finding an optimal segmentation (optimal value of $K$ and corresponding $K$ -partition of the market) as interesting future work. We assumed a monopoly market. It will be interesting to study optimal fair pricing in the face of competition and other constraints such as finite supply, non-linear production cost, and variable demand.
438
+
439
+ ## References
440
+
441
+ Kaplan Alan. Hospital price discrimination is deepening racial health inequity. https://catalyst.nejm.org/doi/full/10.1056/CAT.20.0593, 2020.
442
+
443
+ Dirk Bergemann, Benjamin Brooks, and Stephen Morris. The limits of price discrimination. American Economic Review, 105(3):921-57, 2015.
444
+
445
+ Dirk Bergemann, Francisco Castro, and Gabriel Weintraub. Third-degree price discrimination versus uniform pricing, 2021.
446
+
447
+ Ralph Cassady. Techniques and purposes of price discrimination. Journal of Marketing (pre-1986), 11(000002): 135, 10 1946.
448
+
449
+ Rachel Cummings, Nikhil R. Devanur, Zhiyi Huang, and Xiangning Wang. Algorithmic price discrimination. In Proceedings of the 2020 ACM-SIAM Symposium on Discrete Algorithms (SODA). SIAM, 2020. doi: 10.1137/1. 9781611975994.149.
450
+
451
+ Peerapong Dhangwatnotai, Tim Roughgarden, and Qiqi Yan. Revenue maximization with a single sample. Games and Economic Behavior, 91:318-333, 2015.
452
+
453
+ Utpal Dholakia. Everyone hates Uber's surge pricing - here's how to fix it. https://hbr.org/2015/12/everyone-hates-ubers-surge-pricing-heres-how-to-fix-it, 2015.
454
+
455
+ Adam N. Elmachtoub, V. Gupta, and Michael L. Hamilton. The value of personalized pricing. Revenue & Yield Management eJournal, 2019.
456
+
457
+ Adam N Elmachtoub, Vishal Gupta, and Michael L Hamilton. The value of personalized pricing. Management Science, 2021. doi: 10.1287/mnsc.2020.3821.
458
+
459
+ Communications Authority EU. Pricing and payments. https://europa.eu/youreurope/citizens/consumers/shopping/pricing-payments/index_en.htm, 2020.
460
+
461
+ Joshua A. Gerlick and Stephan M. Liozu. Ethical and legal considerations of artificial intelligence and algorithmic decision-making in personalized pricing. Journal of Revenue and Pricing Management, 19(2), 2020. ISSN 1477-657X. doi: 10.1057/s41272-019-00225-2.
462
+
463
+ Jason D Hartline and Tim Roughgarden. Simple versus optimal mechanisms. In Proceedings of the 10th ACM conference on Electronic commerce, pages 225-234, 2009.
464
+
465
+ Oliver Hinz, II-Horn Hann, and Martin Spann. Price discrimination in e-commerce? an examination of dynamic pricing in name-your-own price markets. MIS Quarterly, 35(1), 2011. ISSN 02767783.
466
+
467
+ Jiali Huang, Ankur Mani, and Zizhuo Wang. The value of price discrimination in large random networks. In Proceedings of the 2019 ACM Conference on Economics and Computation, EC '19, New York, NY, USA, 2019. Association for Computing Machinery. ISBN 9781450367929. doi: 10.1145/3328526.3329617.
468
+
469
+ Nathan Kallus and Angela Zhou. Fairness, welfare, and equity in personalized pricing. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT '21, New York, NY, USA, 2021. Association for Computing Machinery. ISBN 9781450383097. doi: 10.1145/3442188.3445895.
470
+
471
+ Karen Levy and Solon Barocas. Designing against discrimination in online markets. Berkeley Technology Law Journal, 32(3), 2017. ISSN 10863818, 23804742.
472
+
473
+ Arwa Mahadawi. Is your friend getting a cheaper uber fare than you are? https://www.theguardian.com/commentisfree/2018/apr/13/uber-lyft-prices-personalized-data, 2018.
474
+
475
+ Dana Mattioli. On Orbitz, Mac users steered to pricier hotels. https://www.wsj.com/articles/SB10001424052702304458604577488822667325882, 2012.
476
+
477
+ Stefan Michel. Is personalized pricing fair? https://www.imd.org/research-knowledge/articles/ is-personalized-pricing-fair/, 2016.
478
+
479
+ Jeffrey Moriarty. Why online personalized pricing is unfair. Ethics and Information Technology, 23:1-9, 09 2021. doi: 10.1007/s10676-021-09592-0.
480
+
481
+ Akshat Pandey and Aylin Caliskan. Disparate Impact of Artificial Intelligence Bias in Ridehailing Economy's Price Discrimination Algorithms, page 822-833. Association for Computing Machinery, New York, NY, USA, 2021. ISBN 9781450384735.
482
+
483
+ Timothy J. Richards, Jura Liaukonyte, and Nadia A. Streletskaya. Personalized pricing and price fairness. International Journal of Industrial Organization, 44, 2016. ISSN 0167-7187. doi: 10.1016/j.ijindorg.2015.11.004.
484
+
485
+ Hal R. Varian. Microeconomic Analysis. Norton, New York, third edition, 1992. ISBN 0393957357 9780393957358.
486
+
487
+ Obama White House. Big data and differential pricing. https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/docs/Big_Data_Report_Nonembargo_v2.pdf, 2015.
UAI/UAI 2022/UAI 2022 Conference/B0xLpILs5ec/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,484 @@
1
+ § INDIVIDUAL FAIRNESS IN FEATURE-BASED PRICING FOR MONOPOLY MARKETS
2
+
3
+ § ABSTRACT
4
+
5
+ We study fairness in the context of feature-based price discrimination in monopoly markets. We propose a new notion of individual fairness, namely, $\alpha$ -fairness, which guarantees that individuals with similar features face similar prices. First, we study the discrete valuation space and give an analytical solution for optimal fair feature-based pricing. We show that the cost of fairness, defined as the ratio of the expected revenue in an optimal feature-based pricing to the expected revenue in an optimal fair feature-based pricing (CoF), can be arbitrarily large in general. When the revenue function is continuous and concave with respect to the prices, we show that one can achieve CoF strictly less than 2, irrespective of the model parameters. Finally, we provide an algorithm to compute a fair feature-based pricing strategy that achieves this CoF.
6
+
7
+ § 1 INTRODUCTION
8
+
9
+ The Internet has transformed the way markets function. Today's Internet-based ecosystems such as entertainment and e-commerce marketplaces are more consumer-centric and information-driven than ever before. Data and AI systems are primarily used to power advertising, consumer retention, and personalized experience. These AI systems are deployed to aggregate individual choices and preferences to make personalized experiences possible. It is a common practice to use aggregated information about consumers to offer different prices to different consumers or segments of the market; this practice is commonly termed price discrimination [Varian, 1992].
10
+
11
+ Price discrimination has come under ethical scrutiny on multiple occasions in the recent past. For example, it was found that Orbitz, an online travel agency, charges Mac users more than Windows users [Mattioli, 2012]. Uber's strategy to charge personalized prices came under heavy consumer backlash [Dholakia, 2015, Mahadawi, 2018], and thanks to fine-grained data analysis of consumer behavior, several such instances were reported in the e-commerce and retail industry [Hinz et al., 2011]. More recently, Pandey and Caliskan [2021] showed that neighborhoods with high non-white populations, higher poverty, younger residents, and high education levels faced higher cab trip fares in Chicago. Not surprisingly, the regulatory bodies and the research community have taken notice. Economists have raised concerns about the fairness issues of personalized pricing [Michel, 2016]. Price discrimination based on nationality or residence has been made illegal in the EU [2020]. In the USA, a White House report provides guidelines for enforcing existing anti-discrimination, privacy, and consumer protection laws while practicing discriminatory pricing [White House, 2015]. Given the overwhelming evidence and rising concerns, there is an urgent need to study price discrimination and fairness formally.
12
+
13
+ Sellers or firms use price discrimination for multiple reasons, including increasing revenue, covering transportation and storage costs, increasing market reach, rewarding loyal consumers, promoting a social cause, and so on [Cassady, 1946]. In general, price discrimination does not always raise ethical and fairness issues and hence requires a careful inspection to categorize situations where this practice may lead to treatment disparity and invite regulatory intervention [Alan, 2020]. In this work, we focus on designing the pricing strategies for a seller (monopolist) who wants to maximize the revenue via price discrimination while ensuring fairness amongst the consumers.
14
+
15
+ A revenue-maximizing seller with complete knowledge of consumer valuations and without fairness considerations would charge each consumer her valuation for the product. This pricing strategy, otherwise called first-degree price discrimination, may result in wild fluctuations in prices and is considered unfair in general [Moriarty, 2021]. Also, in practice, sellers do not have full access to individual consumer valuations but may have a distribution over valuations through features. In such feature-based pricing (FP), the seller segregates the market into segments based on consumer features. The seller's problem then reduces to finding optimal pricing for each segment [Bergemann et al., 2015, Cummings et al., 2020]. Such FP is referred to as third-degree price discrimination in the literature. In this paper, our goal is to address fairness issues in feature-based personalized pricing.
16
+
17
+ Our Contributions We introduce the notion of $\alpha$ -fairness in price discrimination, which ensures that similar individuals face similar prices. We emphasize that if individuals with similar features are charged differently by segregating them into different segments, the interpersonal price comparison based on their features raises fairness issues. With this, we introduce a model for optimal fair feature-based pricing (FFP) as the problem of maximizing revenue while ensuring $\alpha$ -fairness. We begin with two segments in the market and discrete valuations and propose an optimal FFP scheme (Section 4.2). To quantify the loss in revenue due to fairness, we then introduce the cost of fairness (CoF): the ratio of the expected revenue in an optimal FP to the expected revenue in an optimal FFP. We prove that a constant upper bound on the CoF is impossible to achieve in general.
18
+
19
+ Next, in Section 5.1, under the assumption that the revenue function is concave in the offered prices [Bergemann et al., 2021], we show that one can achieve a constant upper bound on CoF. Here, first, we show that the seller can compute an optimal FFP using a convex program if it has access to distributional information (i.e., knows all consumers' valuation distribution functions). We then identify a class of FFP strategies, namely LINP-FFP, that satisfy $\alpha$ -fairness. With the help of these pricing strategies, we then show that the CoF is strictly less than 2 irrespective of the model parameters. Finally, we propose OPT-LINP-FFP, an $O\left( {K\log \left( K\right) }\right)$ -time algorithm, where $K$ is the number of segments, that does not need access to complete distributional information and computes an $\alpha$ -fair pricing that achieves the aforementioned CoF (Algorithm 1 and Theorem 7).
20
+
21
+ § 2 RELATED WORK
22
+
23
+ The impact of discriminatory pricing on consumer and seller surplus was first considered by Bergemann et al. [2015] for the setting where consumer characteristics are known to the seller. The authors proposed a method to obtain the optimal market segmentation. Cummings et al. [2020] then considered a generalized problem, extending the work of Bergemann et al. [2015] to the case where only partial information about consumer valuations is known to the seller.
24
+
25
+ When the valuations of the consumers are not known, Elmachtoub et al. [2019, 2021] propose feature-based pricing and provide bounds on the value generated by idealized personalized pricing and feature-based pricing over uniform pricing. The value of feature-based pricing depends on the correlation between valuations and consumer features. Huang et al. [2019] consider first-degree price discrimination over social networks, where centrality measures determine the features of the consumers. They provide bounds on the value of network-based personalized pricing in large random social networks with varying edge densities. Our work follows a similar approach in that we derive personalized prices from features. However, naive feature-based pricing can be very unfair to the consumers, as we show in Proposition 2. Our focus is to design feature-based pricing that is fair at the same time.
26
+
27
+ Recently, many questions have been raised about the ethical side of price discrimination. Moriarty [2021] strongly criticizes online personalized pricing and argues that personalized prices compete unfairly for the social surplus created by transactions. Gerlick and Liozu [2020] point out the need to design personalized pricing with ethical considerations, which can provide win-win outcomes for both organizations and consumers. Richards et al. [2016] observe that discriminatory pricing leads to a perception of unfairness amongst consumers, which undermines the stability of retail platforms; they find that when consumers are involved in forming the prices, the fairness perception improves, leading to better retention. Levy and Barocas [2017] discuss how web-based platforms typically use many private features of user profiles to connect buyers and sellers; when users interact on such platforms, this can lead to discrimination with respect to race, gender, and possibly other protected characteristics. All these studies motivate understanding optimal price-discrimination strategies under fairness constraints, which is the focus of our work.
28
+
29
+ Finally, Kallus and Zhou [2021] present a list of metrics, such as price disparity, equal access, and allocative efficiency, to measure and analyze fairness in feature-based pricing and study its interplay with welfare. The metrics discussed are mainly group fairness notions, which are entirely different from the $\alpha$-fairness discussed in this paper. We emphasize that though the above papers discuss ethical issues in price discrimination, none of them provides a systematic approach to designing a pricing strategy that maximizes revenue while ensuring a fairness guarantee.
30
+
31
+ § 3 PRELIMINARIES
32
+
33
+ We consider a market with a monopolist seller seeking to price a single product available in infinite supply. The market is divided into a finite number of segments $\mathcal{X} =$ $\left\{ {{x}_{1},{x}_{2},\ldots ,{x}_{K}}\right\}$ , where ${x}_{i}$ represents the ${i}^{\text{ th }}$ segment. The seller, given access to $\mathcal{X}$ , can choose to price discriminate across segments to extract maximum revenue.
34
+
35
+ ${}^{1}$ This assumption is standard in economics, as a large number of probability distributions satisfy it.
36
+
37
+ Consumers' valuations for the single product are nonnegative random variables drawn from the set $\mathcal{V}$ (the same across all segments). Let ${\mathcal{F}}_{i}\left( \cdot \right)$ be the cumulative distribution function of the valuations of consumers in the ${i}^{\text{ th }}$ segment, and ${f}_{i}\left( \cdot \right)$ be the corresponding probability density function (probability mass function when $\mathcal{V}$ is discrete). In this paper, we consider the following two cases separately: (a) $\mathcal{V}$ is discrete and finite, and (b) $\mathcal{V}$ is continuous. Next, we present the feature-based pricing model.
38
+
39
+ § 3.1 FEATURE-BASED PRICING MODEL
40
+
41
+ In feature-based pricing (FP), one can consider, without loss of generality, that the consumer feature is a representative of the market segment to which she belongs. Note that multiple consumers may have the same feature vector, and all the consumers having identical features belong to the same market segment. For simplicity, we will write ${p}_{i} \mathrel{\text{ := }}$ price offered to the consumer in the ${i}^{\text{ th }}$ segment. A consumer makes the purchase only if her valuation is equal to or more than the offered price. The expected revenue per consumer generated from the ${i}^{\text{ th }}$ segment with a price ${p}_{i} \in {\mathbb{R}}_{ + }$ is given by
42
+
43
+ $$
44
+ {\pi }_{i}\left( {p}_{i}\right) = {p}_{i} \cdot \left( {1 - {\mathcal{F}}_{i}\left( {p}_{i}\right) }\right) \tag{1}
45
+ $$
46
+
47
+ Whenever it is clear from the context, we refer to the expected revenue per consumer from a segment as the expected revenue from that segment. Let ${\beta }_{i}$ be the fraction of consumers in the ${i}^{\text{ th }}$ segment; then the expected revenue per consumer generated across all segments is given as $\Pi \left( \mathbf{p}\right) = \mathop{\sum }\limits_{{{x}_{i} \in \mathcal{X}}}{\beta }_{i}{\pi }_{i}\left( {p}_{i}\right)$ . We assume that the ${\beta }_{i}$ 's are known to the seller. We call the seller's revenue maximization problem ${\mathrm{{OPT}}}_{FP}\left( {\mathcal{V},\mathcal{X},\mathcal{F},\beta }\right)$ where $\mathcal{F} = \left( {{\mathcal{F}}_{1},\ldots ,{\mathcal{F}}_{K}}\right)$ and $\beta = \left( {{\beta }_{1},\ldots ,{\beta }_{K}}\right)$ .
48
+
49
+ In the absence of fairness constraints, ${\mathrm{{OPT}}}_{FP}\left( \cdot \right)$ reduces to pricing each segment separately, and the optimal FP strategy $\widehat{\mathbf{p}}$ , consisting of ${\widehat{p}}_{i}$ for segment $i$ , is given by ${\widehat{p}}_{i} \in \mathop{\operatorname{argmax}}\limits_{{p}_{i}}{\pi }_{i}\left( {p}_{i}\right)$ .
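As a quick illustration (ours, not part of the paper's formalism; function names are hypothetical), the per-segment revenue of Eq. (1) and the unconstrained optimizer $\widehat{p}_i$ can be computed directly for discrete valuations, where it suffices to search over $\mathcal{V}$ since revenue only drops at valuation points:

```python
def expected_revenue(p, valuations, probs):
    """pi_i(p) = p * Pr(v >= p), as in Eq. (1), for a discrete valuation distribution."""
    return p * sum(q for v, q in zip(valuations, probs) if v >= p)

def optimal_price(valuations, probs):
    """Unconstrained optimal feature-based price: argmax of pi_i over V."""
    return max(valuations, key=lambda p: expected_revenue(p, valuations, probs))
```

For instance, with $\mathcal{V} = \{1, 10\}$ and a 0.95 probability of the low valuation, charging 1 sells to everyone and earns 1 per consumer, while charging 10 earns only 0.5, so the optimal price is 1.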
50
+
51
+ Fairness in Feature-based Pricing Let $d : \mathcal{X} \times \mathcal{X} \rightarrow$ ${\mathbb{R}}_{ + }$ be a distance function over $\mathcal{X}$ . We assume that such a function exists and is well defined on $\mathcal{X}$ , i.e., $\left( {\mathcal{X}, d}\right)$ is a metric space. The distance function quantifies the dissimilarity between the feature vectors of individuals belonging to market segments. For simplicity, we write $d\left( {{x}_{i},{x}_{j}}\right) \mathrel{\text{ := }} {d}_{ij}$ . Individual fairness in an FP strategy is defined as:
52
+
53
+ Definition 1 ( $\alpha$ -fairness). A price function $\mathbf{p} : \mathcal{X} \rightarrow {\mathbb{R}}_{ + }^{K}$ is $\alpha$ -fair with respect to $d$ iff for all ${x}_{i},{x}_{j} \in \mathcal{X}$ , we have
54
+
55
+ $$
56
+ \left| {{p}_{i} - {p}_{j}}\right| \leq \alpha \cdot {d}_{ij} \tag{2}
57
+ $$
58
+
59
+ We call a pricing strategy that satisfies Eq. (2) for a given value of $\alpha$ a Fair Feature-based Pricing ( $\alpha$ -FFP) strategy. It is easy to see from the definition that any $\alpha$ -FFP is also an ${\alpha }^{\prime }$ -FFP for any ${\alpha }^{\prime } \geq \alpha$ . We will drop the quantifier $\alpha$ and call it FFP when it is clear from the context.
60
+
61
+ Cost of Fairness (CoF) Next, we define CoF, which captures the deviation from optimality due to the fairness constraints in Eq. (2). It is the ratio of the expected revenue generated by optimal feature-based pricing to that generated by fair feature-based pricing.
62
+
63
+ Definition 2 (COST OF FAIRNESS (COF)). Cost of fairness for an FFP strategy $\mathbf{p}$ is defined as
64
+
65
+ $$
66
+ \operatorname{CoF} = \frac{\Pi \left( \widehat{\mathbf{p}}\right) }{\Pi \left( \mathbf{p}\right) }. \tag{3}
67
+ $$
68
+
69
+ In the following sections, we analyze FP and FFP strategies and their CoF when $\mathcal{V}$ is discrete (Section 4) and continuous (Section 5).
70
+
71
+ § 4 FFP FOR DISCRETE VALUATIONS
72
+
73
+ We want to ensure $\alpha$ -fairness in the pricing strategy given the optimal FP; $\alpha$ -fairness is achieved by maximizing revenue while satisfying the fairness constraints. In this section, we derive the optimal FP (Section 4.1), propose how to achieve $\alpha$ -fairness (Section 4.2), and analyze the CoF (Section 4.3) for the discrete valuation setting.
74
+
75
+ We consider the simplest setting, described as follows. Let the consumer segments be given by $\mathcal{X} = \left\{ {{x}_{1},{x}_{2}}\right\}$ and let their valuations be drawn from a discrete set $\mathcal{V} = \left\{ {{v}_{1},{v}_{2}}\right\}$ ; we assume ${v}_{1} < {v}_{2}$ without loss of generality. Let ${\beta }_{1} = \beta$ and ${\beta }_{2} = 1 - \beta$ . Further, let ${f}_{1}\left( {v}_{1}\right) = {q}_{1}$ (respectively ${f}_{2}\left( {v}_{1}\right) = {q}_{2}$ ) denote the probability that a consumer in segment 1 (segment 2) has valuation ${v}_{1}$ . The expected revenue generated by $\mathbf{p}$ is given by:
76
+
77
+ $$
78
+ \Pi \left( \mathbf{p}\right) = \beta {p}_{1}\left\lbrack {{q}_{1}\mathbb{1}\left( {{v}_{1} \geq {p}_{1}}\right) + \left( {1 - {q}_{1}}\right) \mathbb{1}\left( {{v}_{2} \geq {p}_{1}}\right) }\right\rbrack
79
+ $$
80
+
81
+ $$
82
+ + \left( {1 - \beta }\right) {p}_{2}\left\lbrack {{q}_{2}\mathbb{1}\left( {{v}_{1} \geq {p}_{2}}\right) + \left( {1 - {q}_{2}}\right) \mathbb{1}\left( {{v}_{2} \geq {p}_{2}}\right) }\right\rbrack
83
+ $$
84
+
85
+ (4)
86
+
87
+ § 4.1 OPTIMAL FEATURE-BASED PRICING
88
+
89
+ As discussed earlier, in the absence of fairness constraints, $\Pi \left( \mathbf{p}\right)$ can be maximized by maximizing ${\pi }_{i}\left( {p}_{i}\right)$ for each market segment independently. This problem is an integer program in which the price for each consumer type is a discrete variable. The revenue generated depends on ${\beta }_{i}$ and ${f}_{i}\left( \cdot \right)$ ( $\beta ,{q}_{1},{q}_{2}$ in the current simplest case). The optimal FP is then given as
90
+
91
+ $$
92
+ \text{ For }i \in \{ 1,2\} : {\widehat{p}}_{i} = \left\{ \begin{array}{ll} {v}_{1} & \text{ if }{q}_{i} \geq 1 - \frac{{v}_{1}}{{v}_{2}} \\ {v}_{2} & \text{ otherwise } \end{array}\right. \tag{5}
93
+ $$
94
+
95
+ | Notation | Description |
+ | --- | --- |
+ | FP | Feature-based Pricing |
+ | FFP | Fair Feature-based Pricing |
+ | ${\mathcal{F}}_{k}, {f}_{k}\left( \cdot \right)$ | Valuation CDF and PDF for the ${k}^{\text{ th }}$ consumer segment, respectively |
+ | $\mathcal{X}$ | Set of all consumer features/types |
+ | $\mathcal{V}$ | Support set of consumers' valuations |
+ | ${x}_{k}$ | Consumer feature of the ${k}^{\text{ th }}$ segment |
+ | ${\beta }_{k}$ | Fraction of consumers in the ${k}^{\text{ th }}$ segment |
+ | $\mathbf{p} = \left( {{p}_{1},{p}_{2},\ldots ,{p}_{K}}\right)$ | Feature-based price vector |
+ | ${\pi }_{k}\left( {p}_{k}\right)$ | Revenue generated per consumer in the ${k}^{\text{ th }}$ segment |
+ | $\Pi \left( \mathbf{p}\right)$ | Revenue generated by $\mathbf{p}$ across all consumer segments |
+ | $\widehat{\mathbf{p}} = \left( {{\widehat{p}}_{1},{\widehat{p}}_{2},\ldots ,{\widehat{p}}_{K}}\right)$ | Price function in optimal price discrimination |
+ | ${d}_{ij} \mathrel{\text{ := }} d\left( {{x}_{i},{x}_{j}}\right)$ | A real-valued metric on the consumer feature space $\mathcal{X}$ |
+ | $\alpha$ | Fairness parameter |
+ | ${\mathbf{p}}^{ \star } = \left( {{p}_{1}^{ \star },{p}_{2}^{ \star },\ldots ,{p}_{K}^{ \star }}\right)$ | Optimal fair feature-based price function |
+ | $\widetilde{\mathbf{p}} = \left( {{\widetilde{p}}_{1},{\widetilde{p}}_{2},\ldots ,{\widetilde{p}}_{K}}\right)$ | Price vector for OPT-LINP-FFP |
+ | CoF | Cost of Fairness |
+ | ${L}_{m}$ | Linear approximation of concave revenue curve with $m$ as parameter |
+
+ Table 1: Notation Table
153
+
154
+ Proof. For a market segment $i$ , ${\pi }_{i}\left( {v}_{1}\right) = {v}_{1}$ (every consumer buys) and ${\pi }_{i}\left( {v}_{2}\right) = {v}_{2}\left( {1 - {q}_{i}}\right)$ . So, ${\widehat{p}}_{i} = {v}_{1}$ if
155
+
156
+ $$
157
+ {\pi }_{i}\left( {v}_{1}\right) \geq {\pi }_{i}\left( {v}_{2}\right) \Rightarrow {v}_{1} \geq {v}_{2}\left( {1 - {q}_{i}}\right) \Rightarrow {q}_{i} \geq 1 - \frac{{v}_{1}}{{v}_{2}}
158
+ $$
159
+
160
+ otherwise, ${\widehat{p}}_{i} = {v}_{2}$ .
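The threshold rule of Eq. (5) can be sanity-checked against a direct revenue comparison. The following sketch (ours, for illustration; function names are hypothetical) does so for one segment:

```python
def opt_fp_threshold(v1, v2, q):
    """Eq. (5): charge v1 iff q >= 1 - v1/v2, where q = Pr(valuation = v1)."""
    return v1 if q >= 1 - v1 / v2 else v2

def opt_fp_compare(v1, v2, q):
    """Direct comparison of revenues: pi(v1) = v1 (everyone buys), pi(v2) = v2 * (1 - q)."""
    return v1 if v1 >= v2 * (1 - q) else v2
```

Both rules agree on a grid of values of $q$, including the boundary $q = 1 - v_1/v_2$.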
161
+
162
+ Next, we analyze the fairness aspects of the above pricing strategy.
163
+
164
+ § 4.2 OPTIMAL FAIR FEATURE-BASED PRICING
165
+
166
+ Let $\left( {\mathcal{X}, d}\right)$ be a metric space. We model the optimal fair feature-based pricing (FFP) problem as an integer program that maximizes $\Pi \left( \mathbf{p}\right)$ subject to the $\alpha$ -fairness constraints described in Eq. (2). We denote this problem as ${\mathrm{{OPT}}}_{FFP}\left( {\mathcal{V},\mathcal{X},d,\mathcal{F},\beta ,\alpha }\right)$ and the corresponding optimal FFP strategy as ${\mathbf{p}}^{ \star }$ . First, we make an interesting and very useful claim for binary valuations.
167
+
168
+ Lemma 1. When $\mathcal{V} = \left\{ {{v}_{1},{v}_{2}}\right\}$ , and if $\widehat{\mathbf{p}}$ is not $\alpha$ -fair, ${\mathrm{{OPT}}}_{FFP}\left( {\mathcal{V},\mathcal{X},d,\mathcal{F},\beta ,\alpha }\right)$ reduces to ${\mathrm{{OPT}}}_{FP}\left( {\widetilde{\mathcal{V}},\mathcal{X},\mathcal{F},\beta }\right)$ where $\widetilde{\mathcal{V}}$ is either $\left\{ {v}_{1}\right\}$ , or $\left\{ {v}_{2}\right\}$ , or $\left\{ {{v}_{1},{v}_{1} + \alpha {d}_{12}}\right\}$ .
169
+
170
+ Proof. Let $\left( {{p}_{1},{p}_{2}}\right)$ be the tuple of offered prices. Note that if ${v}_{2} - {v}_{1} \leq \alpha {d}_{12}$ or ${\widehat{p}}_{1} = {\widehat{p}}_{2}$ , then the optimal ${\mathbf{p}}^{ \star } = \widehat{\mathbf{p}}$ with support $\left\{ {{v}_{1},{v}_{2}}\right\}$ , and $\widehat{\mathbf{p}}$ is trivially fair. We consider the more interesting case where ${v}_{2} - {v}_{1} > \alpha {d}_{12}$ and ${\widehat{p}}_{1} \neq {\widehat{p}}_{2}$ . In this case, the only candidate support sets for an optimal fair pricing strategy are: $\left\{ {v}_{1}\right\} ,\left\{ {v}_{2}\right\} ,\left\{ {{v}_{1},{v}_{1} + \alpha {d}_{12}}\right\} ,\left\{ {{v}_{2} - \alpha {d}_{12},{v}_{2}}\right\}$ . The optimal FFP does not take values from the set $\left\{ {{v}_{2} - \alpha {d}_{12},{v}_{2}}\right\}$ , as the consumers with valuation ${v}_{1}$ would not make any purchase. Hence, the expected revenue with support $\left\{ {{v}_{2} - \alpha {d}_{12},{v}_{2}}\right\}$ is at most the expected revenue with support $\left\{ {v}_{2}\right\}$ .
171
+
172
+ We now relax the restriction to binary valuations and analyze the optimal fair pricing scheme for $n$ valuations. The consumer segments are $\mathcal{X} = \left\{ {{x}_{1},{x}_{2}}\right\}$ with ${\beta }_{1} = \beta$ and ${\beta }_{2} = 1 - \beta$ , the valuations are drawn from the set $\mathcal{V} = \left\{ {{v}_{1},{v}_{2},\ldots ,{v}_{n}}\right\}$ , and ${f}_{1}\left( {v}_{i}\right) = {q}_{i,1}$ and ${f}_{2}\left( {v}_{i}\right) = {q}_{i,2}$ . This is a simple extension of the pricing problem ${\mathrm{{OPT}}}_{FP}\left( {\mathcal{V},\mathcal{X},\mathcal{F},\beta }\right)$ modelled as an integer program where the prices are drawn from the set $\mathcal{V}$ . If $\widehat{\mathbf{p}}$ is not $\alpha$ -fair, then the corresponding ${\mathrm{{OPT}}}_{FFP}\left( {\mathcal{V},\mathcal{X},d,\mathcal{F},\beta ,\alpha }\right)$ can be solved by reducing it to ${\mathrm{{OPT}}}_{FP}\left( {\widetilde{\mathcal{V}},\mathcal{X},\mathcal{F},\beta }\right)$ with $\widetilde{\mathcal{V}}$ given by:
173
+
174
+ $$
175
+ \widetilde{\mathcal{V}} = \left\{ \begin{array}{ll} \left\{ {v}_{i}\right\} ,{v}_{i} \in \mathcal{V} & \text{ if }{p}_{1}^{ \star } = {p}_{2}^{ \star } \\ \left\{ {{v}_{j},{v}_{j} + \alpha {d}_{12},{v}_{j} - \alpha {d}_{12}}\right\} ,{v}_{j} \in \mathcal{V} & \text{ if }{p}_{1}^{ \star } \neq {p}_{2}^{ \star } \end{array}\right.
176
+ $$
177
+
178
+ Given the set $\widetilde{\mathcal{V}}$ , the pricing problem ${\operatorname{OPT}}_{FP}\left( {\widetilde{\mathcal{V}},\mathcal{X},\mathcal{F},\beta }\right)$ can be solved in constant time. It is easy to see that computing $\widetilde{\mathcal{V}}$ takes $\mathcal{O}\left( {n}^{2}\right)$ time for $n$ valuations and 2 consumer types. Therefore, the fair pricing problem ${\operatorname{OPT}}_{FFP}\left( {\mathcal{V},\mathcal{X},d,\mathcal{F},\beta ,\alpha }\right)$ can be solved in $\mathcal{O}\left( {n}^{2}\right)$ time.
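A brute-force version of this reduction for two segments can be sketched as follows (illustrative code, ours, not the paper's implementation; function names are hypothetical). It enumerates the candidate supports above and keeps the best $\alpha$-fair price pair:

```python
from itertools import product

def revenue(p, valuations, probs):
    # p * Pr(v >= p) for a discrete valuation distribution
    return p * sum(q for v, q in zip(valuations, probs) if v >= p)

def opt_ffp_two_segments(valuations, probs1, probs2, beta, alpha_d12):
    """Best alpha-fair price pair over the candidate supports of Section 4.2."""
    candidates = set(valuations)
    for v in valuations:                        # unequal-price candidates v_j +/- alpha*d12
        candidates.update({v + alpha_d12, v - alpha_d12})
    best_rev, best_pair = 0.0, None
    for p1, p2 in product(candidates, repeat=2):
        if min(p1, p2) < 0 or abs(p1 - p2) > alpha_d12 + 1e-12:
            continue                            # infeasible price or unfair pair
        rev = beta * revenue(p1, valuations, probs1) \
            + (1 - beta) * revenue(p2, valuations, probs2)
        if rev > best_rev:
            best_rev, best_pair = rev, (p1, p2)
    return best_rev, best_pair
```

For example, with $\mathcal{V} = \{1, 10\}$, segment 1 valuing the product at 10 with probability 1, segment 2 at 1 with probability 1, $\beta = 1/2$, and $\alpha d_{12} = 2$, the unfair optimum $(10, 1)$ earns 5.5 per consumer, while the best fair pair earns 5.0.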
179
+
180
+ § 4.3 COF ANALYSIS
181
+
182
+ For $n = 2$ , based on the values of ${q}_{1},{q}_{2}$ we have the following cases:
183
+
184
+ 1. ${p}_{1}^{ \star } = {p}_{2}^{ \star } = {v}_{1}$
+ 2. ${p}_{1}^{ \star } = {p}_{2}^{ \star } = {v}_{2}$
+ 3. ${p}_{1}^{ \star } = {v}_{1} + \alpha {d}_{12},{p}_{2}^{ \star } = {v}_{1}$
+ 4. ${p}_{1}^{ \star } = {v}_{1},{p}_{2}^{ \star } = {v}_{1} + \alpha {d}_{12}$
187
+
188
+ In cases 1 and 2, the optimal fair pricing is equivalent to uniform pricing and is therefore 'trivially' fair with $\mathrm{{CoF}} = 1$ , i.e., $\Pi \left( \widehat{\mathbf{p}}\right) = \Pi \left( {\mathbf{p}}^{ \star }\right)$ . For case 3, $\Pi \left( \widehat{\mathbf{p}}\right)$ and $\Pi \left( {\mathbf{p}}^{ \star }\right)$ are given as:
189
+
190
+ $$
191
+ \Pi \left( \widehat{\mathbf{p}}\right) = \beta \left( {v}_{2}\right) \left( {1 - {q}_{1}}\right) + \left( {1 - \beta }\right) {v}_{1}
192
+ $$
193
+
194
+ $$
195
+ \Pi \left( {\mathbf{p}}^{ \star }\right) = \beta \left( {{v}_{1} + \alpha {d}_{12}}\right) \left( {1 - {q}_{1}}\right) + \left( {1 - \beta }\right) {v}_{1}
196
+ $$
197
+
198
+ Then the cost of fairness for case 3 is given as:
199
+
200
+ $$
201
+ \mathrm{{CoF}} = \frac{\Pi \left( \widehat{\mathbf{p}}\right) }{\Pi \left( {\mathbf{p}}^{ \star }\right) } = \frac{\beta \left( {v}_{2}\right) \left( {1 - {q}_{1}}\right) + \left( {1 - \beta }\right) {v}_{1}}{\beta \left( {{v}_{1} + \alpha {d}_{12}}\right) \left( {1 - {q}_{1}}\right) + \left( {1 - \beta }\right) {v}_{1}}
202
+ $$
203
+
204
+ $$
205
+ = \frac{\beta \left( {{v}_{2} - {v}_{1}}\right) + {v}_{1} - \beta {v}_{2}{q}_{1}}{{\beta \alpha }{d}_{12}\left( {1 - {q}_{1}}\right) - \beta {v}_{1}{q}_{1} + {v}_{1}}
206
+ $$
207
+
208
+ $$
209
+ = \frac{\beta \left( {1 - \frac{{v}_{1}}{{v}_{2}}}\right) + \frac{{v}_{1}}{{v}_{2}} - \beta {q}_{1}}{\beta \left( \frac{\alpha {d}_{12}}{{v}_{2}}\right) \left( {1 - {q}_{1}}\right) - \beta \left( \frac{{v}_{1}}{{v}_{2}}\right) {q}_{1} + \frac{{v}_{1}}{{v}_{2}}} \tag{6}
210
+ $$
211
+
212
+ Replacing $\beta$ with $\left( {1 - \beta }\right)$ and ${q}_{1}$ with ${q}_{2}$ in the above expression, we get a similar expression for the CoF in case 4.
213
+
214
+ Proposition 2. Cost of fairness with discrete valuations can go arbitrarily bad.
215
+
216
+ Proof. From Eq. (6), when $\frac{{v}_{1}}{{v}_{2}} \rightarrow 0$ , we have $\operatorname{CoF} \rightarrow \frac{{v}_{2}}{\alpha {d}_{12}}$ . The CoF (in Case 3 and/or Case 4) is thus arbitrarily bad for any ${d}_{12} > 0$ when there is a large difference between ${v}_{1}$ and ${v}_{2}$ . Note that ${d}_{12} = 0$ is uninteresting, as the seller is then unable to distinguish between the two segments.
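The blow-up in Proposition 2 is easy to reproduce numerically. The sketch below (ours, purely illustrative; the function name is hypothetical) evaluates the case-3 ratio from Eq. (6):

```python
def cof_case3(v1, v2, q1, beta, alpha_d12):
    """CoF for case 3: p1* = v1 + alpha*d12, p2* = v1, per Eq. (6)."""
    opt_rev = beta * v2 * (1 - q1) + (1 - beta) * v1                  # Pi(p_hat)
    fair_rev = beta * (v1 + alpha_d12) * (1 - q1) + (1 - beta) * v1   # Pi(p*)
    return opt_rev / fair_rev
```

At $v_1 = 0$ the ratio equals $v_2 / (\alpha d_{12})$, so it grows without bound as $v_2$ grows.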
217
+
218
+ Note that an arbitrarily large ${v}_{2}$ need not be a commonly occurring setting. Hence, in the backdrop of the above negative result, we work with valuations of bounded support. In the next section, we make assumptions, standard in the economics literature, about the revenue functions ${\pi }_{i}\left( \cdot \right)$ : concave revenue functions and common support [Bergemann et al., 2021]. As argued in Section 3 of Dhangwatnotai et al. [2015], valuation distributions satisfying the Monotone Hazard Rate (MHR) condition satisfy the above assumptions on revenue functions. The revenue functions are also concave for another commonly analyzed family of distributions, the regular distributions, in which the virtual valuation is non-decreasing (Section 4.3 of Bergemann et al. [2021]). MHR is a common assumption in Econ-CS [Hartline and Roughgarden, 2009].
219
+
220
+ Therefore, in the following section, we analyze the cost of fairness for such valuation distributions and the associated concave revenue functions.
221
+
222
+ § 5 FFP FOR CONTINUOUS VALUATIONS
223
+
224
+ In this section, we consider feature-based pricing with continuous valuations. We impose a standard restriction that the revenue functions ${\pi }_{i}\left( \cdot \right)$ are concave on the common support $\mathcal{V} = \left\lbrack {\underline{v},\bar{v}}\right\rbrack$ [Bergemann et al., 2021]. The consumer segments are identified by the associated feature vectors ${x}_{i} \in \mathcal{X}$ . Here, $\underline{v}$ is the marginal cost, defined as the minimum feasible valuation at which the seller is willing to sell the product; the marginal cost may include the cost of production, transportation, etc. On the other hand, $\bar{v}$ is the maximum consumer valuation. Without loss of generality, we assume that the maximum consumer valuation is greater than the marginal cost, i.e., trade occurs.
225
+
226
+ We begin with a tight upper bound on the CoF under the conditions mentioned above (Section 5.1), followed by two pricing schemes based on the available information about the revenue functions (Section 5.2); finally, we present an algorithm that achieves the CoF bound in Section 5.3.
227
+
228
+ § 5.1 OPTIMAL FFP FOR CONTINUOUS VALUATIONS
229
+
230
+ The problem of determining optimal FFP can be modeled as a convex program with $\alpha$ -fairness as linear constraints. The convex program below describes ${\mathrm{{OPT}}}_{FFP}\left( {\mathcal{V},\mathcal{X},d,\mathcal{F},\beta ,\alpha }\right)$ model with complete knowledge of revenue functions ${\pi }_{i}\left( \cdot \right)$ .
231
+
232
+ $$
233
+ \mathop{\max }\limits_{{{p}_{k} \in \mathcal{V},\forall k}}\Pi \left( \mathbf{p}\right) = \mathop{\sum }\limits_{{k = 1}}^{K}{\beta }_{k}{\pi }_{k}\left( {p}_{k}\right)
234
+ $$
235
+
236
+ $$
237
+ \text{ subject to, }\left| {{p}_{i} - {p}_{j}}\right| \leq {\alpha d}\left( {{x}_{i},{x}_{j}}\right) ,\forall i \neq j
238
+ $$
239
+
240
+ $$
241
+ {p}_{i} \geq 0,\forall i \in \left\lbrack K\right\rbrack
242
+ $$
243
+
244
+ Let ${\mathbf{p}}^{ \star }$ be a solution to the above problem.
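When the revenue curves are available only as black boxes, the convex program above can be approximated by a dense grid search over the feasible set. The following sketch (ours, for $K = 2$ only; a real implementation would use a convex solver, and the function name is hypothetical) illustrates the objective and the $\alpha$-fairness constraint:

```python
def opt_ffp_grid(pi1, pi2, beta, alpha_d, v_lo, v_hi, steps=200):
    """Grid-search approximation of OPT_FFP for two segments.

    pi1, pi2: (assumed concave) revenue functions on [v_lo, v_hi];
    alpha_d = alpha * d(x1, x2); beta = fraction of consumers in segment 1.
    """
    grid = [v_lo + (v_hi - v_lo) * t / steps for t in range(steps + 1)]
    best_rev, best_pair = -1.0, None
    for p1 in grid:
        for p2 in grid:
            if abs(p1 - p2) > alpha_d:          # alpha-fairness constraint
                continue
            rev = beta * pi1(p1) + (1 - beta) * pi2(p2)
            if rev > best_rev:
                best_rev, best_pair = rev, (p1, p2)
    return best_rev, best_pair
```

With a loose fairness budget the search recovers the per-segment optima; tightening $\alpha d_{12}$ pulls the two prices together and weakly lowers revenue, exactly the trade-off the CoF measures.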
245
+
246
+ § 5.2 LINP-FFP AND COF ANALYSIS
247
+
248
+ Let ${D}_{i} \mathrel{\text{ := }} \mathop{\min }\limits_{{j \neq i}}{d}_{ij}$ . With the following proposition, we propose a class of $\alpha$ -fair pricing strategies.
249
+
250
+ Proposition 3. For a given $m \in \left\lbrack {\underline{v},\bar{v}}\right\rbrack$ , if the price function satisfies $\left| {{p}_{i} - m}\right| \leq \frac{\alpha }{2}{D}_{i}$ for all $i \in \left\lbrack K\right\rbrack$ , then it satisfies $\alpha$ -fairness.
251
+
252
+ Proof. From the triangle inequality, we have $\left| {{p}_{i} - {p}_{j}}\right| \leq \left| {{p}_{i} - m}\right| + \left| {{p}_{j} - m}\right| \leq \frac{\alpha }{2}{D}_{i} + \frac{\alpha }{2}{D}_{j} \leq \alpha {d}_{ij}$ . The last inequality follows from the fact that ${D}_{i} = \mathop{\min }\limits_{{k \neq i}}{d}_{ik} \leq {d}_{ij}$ and ${D}_{j} = \mathop{\min }\limits_{{k \neq j}}{d}_{jk} \leq {d}_{ji} = {d}_{ij}$ .
253
+
254
255
+
256
+ Figure 1: Concave revenue function ${\pi }_{i}\left( \cdot \right)$ and its linear approximation ${L}_{i}\left( \cdot \right)$ (arrows show the equations for ${L}_{i}\left( \cdot \right)$ ). The figure represents the case ${\widehat{p}}_{i} - m \geq \alpha {D}_{i}/2$ , for which LINP-FFP assigns ${p}_{i} = m + \alpha {D}_{i}/2$ . The case $m - {\widehat{p}}_{i} \geq \alpha {D}_{i}/2$ is similar.
257
+
258
+ In other words, to ensure that the prices for different segments are not too different, it is enough to ensure that the price for each segment is not too far from some common point $m$ . The prices for all the segments would hence lie around this point and can be determined with respect to it. We term this point the pivot. We now present the second FFP model, a pivot-based $\alpha$ -fair pricing strategy that satisfies the condition in Proposition 3 and requires access only to ${\widehat{p}}_{i}$ for a given $m$ .
259
+
260
+ $$
261
+ {p}_{i} = \left\{ \begin{array}{ll} m + \alpha {D}_{i}/2 & \text{ if }{\widehat{p}}_{i} - m \geq \alpha {D}_{i}/2 \\ m - \alpha {D}_{i}/2 & \text{ if }m - {\widehat{p}}_{i} \geq \alpha {D}_{i}/2 \\ {\widehat{p}}_{i} & \text{ otherwise } \end{array}\right. \tag{8}
262
+ $$
263
+
264
+ We call this pricing scheme LINP-FFP. It is easy to see that the above pricing strategy is $\alpha$ -fair. We now present the CoF bound for LINP-FFP.
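Operationally, LINP-FFP is just a clipping step: each unconstrained price $\widehat{p}_i$ is projected onto the band of half-width $\alpha D_i / 2$ around the pivot $m$. A minimal sketch (ours; the function name is hypothetical):

```python
def linp_ffp(p_hat, D, alpha, m):
    """Eq. (8): clip each p_hat_i into the band [m - alpha*D_i/2, m + alpha*D_i/2]."""
    return [min(max(ph, m - alpha * Di / 2), m + alpha * Di / 2)
            for ph, Di in zip(p_hat, D)]
```

By Proposition 3, the resulting price vector is $\alpha$-fair for any choice of pivot $m$.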
265
+
266
+ Theorem 4. The Cost of Fairness for optimal fair price discrimination with concave revenue functions satisfies
267
+
268
+ $$
269
+ \operatorname{CoF} \leq \frac{2}{1 + \min \left\{ {\alpha \frac{\mathop{\min }\limits_{i}{D}_{i}}{\bar{v} - \underline{v}},1}\right\} }
270
+ $$
271
+
272
+ Proof. We prove that LINP-FFP satisfies the above CoF bound, and hence the theorem follows. Let $m \in \left\lbrack {\underline{v},\bar{v}}\right\rbrack$ be a pivot point (see Figure 1). Let
273
+
274
+ $$
275
+ {\gamma }_{i} \mathrel{\text{ := }} \left\{ \begin{array}{ll} \frac{\left( {m - \underline{v}}\right) + \alpha {D}_{i}/2}{{\widehat{p}}_{i} - \underline{v}} & \text{ if }{\widehat{p}}_{i} - m \geq \alpha {D}_{i}/2 \\ \frac{\left( {\bar{v} - m}\right) + \alpha {D}_{i}/2}{\bar{v} - {\widehat{p}}_{i}} & \text{ if }m - {\widehat{p}}_{i} \geq \alpha {D}_{i}/2 \\ 1 & \text{ otherwise } \end{array}\right. \tag{9}
276
+ $$
277
+
278
+ Let ${\widehat{\pi }}_{i}$ be the expected revenue generated from the ${i}^{\text{ th }}$ segment under $\widehat{\mathbf{p}}$ . We now show the following supporting lemma.
279
+
280
+ Lemma 5. The pricing strategy given in Eq. (8) guarantees at least a ${\gamma }_{i}$ fraction of the optimal revenue from segment $i$ , i.e., ${\pi }_{i}\left( {p}_{i}\right) \geq {\gamma }_{i}{\widehat{\pi }}_{i}$ .
281
+
282
+ Proof. A lower bound on the concave revenue function ${\pi }_{i}\left( \cdot \right)$ for any segment $i$ is its piecewise linear approximation ${L}_{i}$ , given by (see Figure 1):
283
+
284
+ $$
285
+ {L}_{i}\left( p\right) = \left\{ \begin{array}{ll} \frac{{\widehat{\pi }}_{i}}{{\widehat{p}}_{i} - \underline{v}}\left( {p - \underline{v}}\right) , & p \leq {\widehat{p}}_{i} \\ \frac{-{\widehat{\pi }}_{i}}{\bar{v} - {\widehat{p}}_{i}}\left( {p - \bar{v}}\right) , & p > {\widehat{p}}_{i} \end{array}\right. \tag{10}
286
+ $$
287
+
288
+ So, for each consumer segment $i$ we have,
289
+
290
+ $$
291
+ {L}_{i}\left( p\right) \leq {\pi }_{i}\left( p\right) ,\forall p \in \left\lbrack {\underline{v},\bar{v}}\right\rbrack
292
+ $$
293
+
294
+ The expected revenue generated per consumer in segment $i$ by the pricing rule in Eq. (8) for the cases ${\widehat{p}}_{i} - m \geq \alpha {D}_{i}/2$ , $m - {\widehat{p}}_{i} \geq \alpha {D}_{i}/2$ , and the remaining case is given below, in the respective order:
295
+
296
+ $$
297
+ {\pi }_{i}\left( {p}_{i}\right) \geq {L}_{i}\left( {p}_{i}\right) = \frac{{\widehat{\pi }}_{i}}{{\widehat{p}}_{i} - \underline{v}}\left( {m + \alpha {D}_{i}/2 - \underline{v}}\right) = {\widehat{\pi }}_{i}{\gamma }_{i}
298
+ $$
299
+
300
+ $$
301
+ {\pi }_{i}\left( {p}_{i}\right) \geq {L}_{i}\left( {p}_{i}\right) = \frac{-{\widehat{\pi }}_{i}}{\bar{v} - {\widehat{p}}_{i}}\left( {m - \alpha {D}_{i}/2 - \bar{v}}\right) = {\widehat{\pi }}_{i}{\gamma }_{i}
302
+ $$
303
+
304
+ $$
305
+ {\pi }_{i}\left( {p}_{i}\right) = {L}_{i}\left( {\widehat{p}}_{i}\right) = {\widehat{\pi }}_{i}
306
+ $$
307
+
308
+ This proves the lemma.
309
+
310
+ Let ${\pi }_{i}^{ \star }$ denote the expected revenue generated from the ${i}^{\text{ th }}$ segment by ${\mathbf{p}}^{ \star }$ . So, the CoF for the optimal FFP is given by:
311
+
312
+ $$
313
+ \operatorname{CoF} = \frac{\mathop{\sum }\limits_{{i \in \left\lbrack K\right\rbrack }}{\beta }_{i}{\widehat{\pi }}_{i}}{\mathop{\sum }\limits_{{i \in \left\lbrack K\right\rbrack }}{\beta }_{i}{\pi }_{i}^{ \star }} \leq \frac{\mathop{\sum }\limits_{{i \in \left\lbrack K\right\rbrack }}{\beta }_{i}{\widehat{\pi }}_{i}}{\mathop{\sum }\limits_{{i \in \left\lbrack K\right\rbrack }}{\beta }_{i}{\pi }_{i}}\;\text{ (Optimality of }{\pi }_{i}^{ \star }\text{ ) }
314
+ $$
315
+
316
+ $$
317
+ \leq \frac{\mathop{\sum }\limits_{{i \in \left\lbrack K\right\rbrack }}{\beta }_{i}{\widehat{\pi }}_{i}}{\mathop{\sum }\limits_{{i \in \left\lbrack K\right\rbrack }}{\beta }_{i}{\gamma }_{i}{\widehat{\pi }}_{i}}
318
+ $$
319
+
320
+ (Lemma 5)
321
+
322
+ In order to prove the said CoF bound, it suffices to show that there exists an $m$ (and hence a corresponding pricing strategy using Eq. (8)) for which the said CoF bound is satisfied. It can be seen that for $m = \left( {\underline{v} + \bar{v}}\right) /2$ , and replacing denominators in Eq. (9) by $\bar{v} - \underline{v}$ , we have that
323
+
324
+ $$
325
+ \mathrm{{CoF}} \leq \frac{\mathop{\sum }\limits_{{i \in \left\lbrack K\right\rbrack }}{\beta }_{i}{\widehat{\pi }}_{i}}{\mathop{\sum }\limits_{{i \in \left\lbrack K\right\rbrack }}{\beta }_{i}{\widehat{\pi }}_{i}\left( {\frac{1}{2} + \min \left\{ {\frac{\alpha {D}_{i}}{2\left( {\bar{v} - \underline{v}}\right) },\frac{1}{2}}\right\} }\right) }
326
+ $$
327
+
328
+ $$
329
+ \leq \frac{\mathop{\sum }\limits_{{i \in \left\lbrack K\right\rbrack }}{\beta }_{i}{\widehat{\pi }}_{i}}{\left( {\mathop{\sum }\limits_{{i \in \left\lbrack K\right\rbrack }}{\beta }_{i}{\widehat{\pi }}_{i}}\right) \left( {\frac{1}{2} + \min \left\{ {\frac{\alpha \mathop{\min }\limits_{j}{D}_{j}}{2\left( {\bar{v} - \underline{v}}\right) },\frac{1}{2}}\right\} }\right) }
330
+ $$
331
+
332
+ $$
333
+ = \frac{2}{1 + \min \left\{ {\alpha \frac{\mathop{\min }\limits_{j}{D}_{j}}{\bar{v} - \underline{v}},1}\right\} }
334
+ $$
335
+
336
+ It is worth noting that this bound on the cost of fairness does not depend on the number of segments or on the distribution of the population among them. So, if the segments are well separated in terms of the distance between consumer features across segments, the number of segments and the distribution of the consumer population among them do not affect the revenue guarantee. Also, if the admissible prices are supported over a large interval, the revenue guarantee becomes weaker; this insight discourages pricing schemes with wildly varying prices across segments. Finally, if $\alpha = 0$ , i.e., without any fairness constraints, we recover the bound of 2 proved in Bergemann et al. [2021].
337
+
338
+ We emphasize that the bound is strictly less than 2 because, under fairness constraints, $\alpha \neq 0$ , and the consumer types are typically well separated in the feature space according to the metric $d$ (otherwise, the consumer types would be indistinguishable to the seller); hence, ${d}_{ij} \neq 0$ for all $i,j \in \left\lbrack K\right\rbrack$ . This is an improvement over the CoF bound given in Bergemann et al. [2021].
339
+
340
+ Tightness of CoF bound: We claim that the CoF bound presented above is tight. In the following example, equality holds and proves the tightness of the bound.
341
+
342
+ Example 1 (Tightness of the CoF bound). Consider $K = 2$ with ${\beta }_{1} = {\beta }_{2} = \frac{1}{2}$ . Let ${\mathcal{F}}_{i}$ be such that ${\pi }_{i}\left( \cdot \right) = {L}_{i}\left( \cdot \right)$ with ${\widehat{p}}_{1} = \underline{v} + \varepsilon ,{\widehat{p}}_{2} = \bar{v} - \varepsilon$ , where $\varepsilon \rightarrow 0$ , and ${\widehat{\pi }}_{1} = {\widehat{\pi }}_{2}$ . It can be seen that if $\alpha$ is such that $\alpha {d}_{12} < \bar{v} - \underline{v}$ , any FP satisfying ${p}_{2} - {p}_{1} = \alpha {d}_{12}$ and ${p}_{1},{p}_{2} \in \left\lbrack {{\widehat{p}}_{1},{\widehat{p}}_{2}}\right\rbrack$ is an optimal FFP (fair FP), and the corresponding $\mathrm{{CoF}} = \frac{2}{1 + \frac{\alpha {d}_{12}}{\bar{v} - \underline{v}}}$ . If $\alpha {d}_{12} \geq \bar{v} - \underline{v}$ , the optimal FP is $\alpha$ -fair and so $\mathrm{{CoF}} = 1$ . Hence, for this example, $\mathrm{{CoF}} = \frac{2}{1 + \min \left\{ {\alpha \frac{{d}_{12}}{\bar{v} - \underline{v}},1}\right\} }$ . This shows the tightness of the CoF bound derived in Theorem 4.
343
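As a quick numerical check of Example 1, the closed-form CoF can be evaluated directly. A minimal Python sketch (the parameter values below are illustrative, not taken from the paper):

```python
def cost_of_fairness(alpha, d12, v_lo, v_hi):
    """CoF for the two-segment example: 2 / (1 + min(alpha*d12/(v_hi - v_lo), 1))."""
    return 2.0 / (1.0 + min(alpha * d12 / (v_hi - v_lo), 1.0))

# alpha = 0: no fairness constraint, recovering the bound of 2.
print(cost_of_fairness(0.0, 1.0, 0.0, 1.0))
# alpha * d12 >= v_hi - v_lo: the optimal FP is already alpha-fair, CoF = 1.
print(cost_of_fairness(2.0, 1.0, 0.0, 1.0))
```

For intermediate values of $\alpha {d}_{12}$ , the CoF interpolates strictly between 1 and 2, as claimed.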
+
344
+ We now present an algorithm, OPT-LINP-FFP, to find the optimal pivot ${m}^{ \star }$ in the above LINP-FFP strategy when only $\widehat{\mathbf{p}}$ and the ${\widehat{\pi }}_{i}$ ’s are known.
345
+
346
+ § 5.3 PROPOSED ALGORITHM
347
+
348
+ As LINP-FFP satisfies $\alpha$ -fairness (Proposition 3) and achieves the CoF bound in Theorem 4, we look for a pricing strategy that is optimal within the class of LINP-FFP, which reduces to finding an optimal pivot that maximizes revenue. In this section, we propose a binary-search-based algorithm for this purpose. For pricing $\mathbf{p}$ , the expected revenue generated per consumer is given by $\Pi \left( \mathbf{p}\right) = \mathop{\sum }\limits_{{i = 1}}^{K}{\beta }_{i}{\pi }_{i}\left( {p}_{i}\right)$ . Let ${\tau }_{i} \mathrel{\text{ := }} \frac{\alpha }{2}{D}_{i}$ . Observe from Lemma 5 that $\Pi \left( \mathbf{p}\right)$ is lower bounded as:
349
+
350
+ $$
+ \Pi \left( \mathbf{p}\right) \geq {\Pi }_{m}\left( \mathbf{L}\right) = \mathop{\sum }\limits_{{i = 1}}^{K}{\beta }_{i}{\gamma }_{i}{\widehat{\pi }}_{i} = \mathop{\sum }\limits_{{i : \left| {{\widehat{p}}_{i} - m}\right| < {\tau }_{i}}}{\beta }_{i}{\widehat{\pi }}_{i} + \mathop{\sum }\limits_{{i : {\widehat{p}}_{i} - m \geq {\tau }_{i}}}{\beta }_{i}{\widehat{\pi }}_{i}\frac{m + {\tau }_{i} - \underline{v}}{{\widehat{p}}_{i} - \underline{v}} + \mathop{\sum }\limits_{{i : m - {\widehat{p}}_{i} \geq {\tau }_{i}}}{\beta }_{i}{\widehat{\pi }}_{i}\frac{\bar{v} - m + {\tau }_{i}}{\bar{v} - {\widehat{p}}_{i}} \tag{13}
+ $$
357
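The three cases of this lower bound translate directly into code. A minimal Python sketch (segment data below are illustrative placeholders):

```python
def lower_bound(m, alpha, p_hat, pi_hat, beta, D, v_lo, v_hi):
    """Piecewise-linear lower bound Pi_m(L) on the revenue of a LINP-FFP
    strategy with pivot m, following the three cases above."""
    total = 0.0
    for p, pi, b, Di in zip(p_hat, pi_hat, beta, D):
        tau = 0.5 * alpha * Di
        if abs(p - m) < tau:      # peak price kept: gamma_i = 1
            gamma = 1.0
        elif p - m >= tau:        # peak above the band: price moved down to m + tau_i
            gamma = (m + tau - v_lo) / (p - v_lo)
        else:                     # peak below the band: price moved up to m - tau_i
            gamma = (v_hi - m + tau) / (v_hi - p)
        total += b * gamma * pi
    return total

# Two segments on [0, 1] with peak prices 0.2 and 0.8 (illustrative values).
print(lower_bound(0.5, 1.0, [0.2, 0.8], [1.0, 1.0], [0.5, 0.5], [0.4, 0.4], 0.0, 1.0))
```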
+
358
+
359
+
360
+ § DETERMINING OPTIMAL PIVOT $m$
361
+
362
+ As we can see, the revenue generated by LINP-FFP is lower bounded by a piecewise linear function in $m$ . With the aim of achieving a better lower bound, we now address the problem of determining an optimal pivot ${m}^{ \star } \in \mathop{\operatorname{argmax}}\limits_{{m \in \left\lbrack {\underline{v},\bar{v}}\right\rbrack }}{\Pi }_{m}\left( \mathbf{L}\right)$ .
363
+
364
+ § PRICING ALGORITHM
365
+
366
+ In what follows, we refer to the candidate points $m$ for the optimal pivot, i.e., for maximizing ${\Pi }_{m}\left( \mathbf{L}\right)$ , as critical points. We denote the set of these critical points by $\mathcal{M}$ .
367
+
368
+ Lemma 6. ${\Pi }_{m}\left( \mathbf{L}\right)$ as a function of $m$ is concave and piecewise linear with the set of critical points $\mathcal{M} =$ $\left( {{\left\{ {\widehat{p}}_{i} - \frac{\alpha }{2}{D}_{i},{\widehat{p}}_{i} + \frac{\alpha }{2}{D}_{i}\right\} }_{i \in \left\lbrack K\right\rbrack } \cap \left\lbrack {\underline{v},\bar{v}}\right\rbrack }\right) \cup \{ \underline{v},\bar{v}\} .$
369
+
370
+ Proof. It is easy to see that for a segment $i,{\gamma }_{i}$ as a function of $m$ is continuous and piecewise linear with breakpoints (i.e., points at which piecewise linear function changes slope): ${\widehat{p}}_{i} - \frac{\alpha }{2}{D}_{i}$ and ${\widehat{p}}_{i} + \frac{\alpha }{2}{D}_{i}$ provided they are in the range $\left\lbrack {\underline{v},\bar{v}}\right\rbrack$ . The set of breakpoints is hence $\left\{ {{\widehat{p}}_{i} - \frac{\alpha }{2}{D}_{i},{\widehat{p}}_{i} + \frac{\alpha }{2}{D}_{i}}\right\} \cap \left\lbrack {\underline{v},\bar{v}}\right\rbrack$ . Also, the slope monotonically decreases at the breakpoints, i.e., ${\gamma }_{i}$ is a concave function of $m$ .
371
+
372
+ From Eq. (13), we can see that ${\Pi }_{m}\left( \mathbf{L}\right)$ is a weighted sum over all segments, of ${\gamma }_{i}$ ’s with constant weights ${\beta }_{i}{\widehat{\pi }}_{i}$ . So, ${\Pi }_{m}\left( \mathbf{L}\right)$ as a function of $m$ is concave and piecewise linear with breakpoints belonging to the following set: ${\left\{ {\widehat{p}}_{i} - \frac{\alpha }{2}{D}_{i},{\widehat{p}}_{i} + \frac{\alpha }{2}{D}_{i}\right\} }_{i \in \left\lbrack K\right\rbrack } \cap \left\lbrack {\underline{v},\bar{v}}\right\rbrack$ . Hence, a point $m$ that maximizes ${\Pi }_{m}\left( \mathbf{L}\right)$ belongs to either the aforementioned set of breakpoints, or the set of its boundary points $\{ \underline{v},\bar{v}\}$ . Thus, the set of critical points $\mathcal{M} =$ $\left( {{\left\{ {\widehat{p}}_{i} - \frac{\alpha }{2}{D}_{i},{\widehat{p}}_{i} + \frac{\alpha }{2}{D}_{i}\right\} }_{i \in \left\lbrack K\right\rbrack } \cap \left\lbrack {\underline{v},\bar{v}}\right\rbrack }\right) \cup \{ \underline{v},\bar{v}\} .$
373
+
374
+ Our algorithm OPT-LINP-FFP (Optimal Linearized Pivot-based Fair Feature-based Pricing), which determines an optimal pivot ${m}^{ \star }$ and provides an $\alpha$ -fair pricing strategy $\left( \widetilde{\mathbf{p}}\right)$ , is presented in Algorithm 1.
375
+
376
+ Theorem 7. The OPT-LINP-FFP algorithm (a) returns optimal pivot point ${m}^{ \star }$ and runs in $\mathcal{O}\left( {K\log \left( K\right) }\right)$ time, and (b) achieves the CoF bound given in Theorem 4.
377
+
378
+ Proof. (a) The first module is the creation and sorting of the set of critical points $\mathcal{M}$ , which takes $\mathcal{O}\left( {K\log \left( K\right) }\right)$ time. Owing to Lemma 6, we can find an optimal pivot ${m}^{ \star }$ using binary search over $\mathcal{M}$ . Here, the number of critical points is at most ${2K} + 2$ , i.e., $\left| \mathcal{M}\right| \leq {2K} + 2$ . So, in the second module, which finds an optimal pivot, the binary search in the outer (while) loop runs for $\mathcal{O}\left( {\log \left( \left| \mathcal{M}\right| \right) }\right)$ iterations, and the inner (for) loops run for $\mathcal{O}\left( K\right)$ iterations overall. Thus, the running time of the second module is $\mathcal{O}\left( {K\log \left( K\right) }\right)$ . The third module, which computes the pricing for the different segments, runs in $\mathcal{O}\left( K\right)$ time. So, the total running time of Algorithm 1 is $\mathcal{O}\left( {K\log \left( K\right) }\right)$ .
379
+
380
+ Algorithm 1: OPT-LINP-FFP
381
+
382
+ Input: $\alpha ,\widehat{\mathbf{p}},\left( {{\widehat{\pi }}_{1},\ldots ,{\widehat{\pi }}_{K}}\right) ,\left( {{\beta }_{1},\ldots ,{\beta }_{K}}\right) ,\left( {{D}_{1},\ldots ,{D}_{K}}\right)$
383
+
384
+ Output: ${m}^{ \star },\widetilde{\mathbf{p}}$
385
+
386
+ /* Creating and sorting the set of
387
+
388
+ critical points */
389
+
390
+ $\mathcal{M} \leftarrow \{ \underline{v},\bar{v}\}$
391
+
392
+ for $i \in \left\lbrack K\right\rbrack$ do
393
+
394
+ ${\tau }_{i} \leftarrow \frac{\alpha }{2}{D}_{i}$
395
+
396
+ if ${\widehat{p}}_{i} - {\tau }_{i} > \underline{v}$ then
397
+
398
+ $\mathcal{M} \leftarrow \mathcal{M} \cup \left\{ {{\widehat{p}}_{i} - {\tau }_{i}}\right\}$
399
+
400
+ if ${\widehat{p}}_{i} + {\tau }_{i} < \bar{v}$ then
401
+
402
+ $\mathcal{M} \leftarrow \mathcal{M} \cup \left\{ {{\widehat{p}}_{i} + {\tau }_{i}}\right\}$
403
+
404
+ $\operatorname{sort}\left( \mathcal{M}\right)$
405
+
406
+ /* Binary search for optimal pivot */
407
+
408
+ $\ell \leftarrow 0,r \leftarrow \left| \mathcal{M}\right| - 1$
409
+
410
+ while $\ell \leq r$ do
411
+
412
+ $z \leftarrow \left\lfloor \frac{\ell + r}{2}\right\rfloor \;//\mathcal{M}\left\lbrack z\right\rbrack$ is the current pivot
413
+
414
+ /* Computing the expression in
415
+
416
+ Eq. (13) at current and adjacent
417
+
418
+ critical points */
419
+
420
+ ${\Pi }_{\mathcal{M}\left\lbrack {z - 1}\right\rbrack } \leftarrow 0,{\Pi }_{\mathcal{M}\left\lbrack z\right\rbrack } \leftarrow 0,{\Pi }_{\mathcal{M}\left\lbrack {z + 1}\right\rbrack } \leftarrow 0$
421
+
422
+ for $y \leftarrow \{ z - 1,z,z + 1\}$ do
423
+
424
+ for $i \leftarrow 1$ to $K$ do
425
+
426
+ if ${\widehat{p}}_{i} \geq \mathcal{M}\left\lbrack y\right\rbrack + {\tau }_{i}$ then
427
+
428
+ ${\gamma }_{i} \leftarrow \frac{\mathcal{M}\left\lbrack y\right\rbrack + {\tau }_{i} - \underline{v}}{{\widehat{p}}_{i} - \underline{v}}$
429
+
430
+ else if ${\widehat{p}}_{i} \leq \mathcal{M}\left\lbrack y\right\rbrack - {\tau }_{i}$ then
431
+
432
+ ${\gamma }_{i} \leftarrow \frac{\bar{v} - \mathcal{M}\left\lbrack y\right\rbrack + {\tau }_{i}}{\bar{v} - {\widehat{p}}_{i}}$
433
+
434
+ else
435
+
436
+ ${\gamma }_{i} \leftarrow 1$
437
+
438
+ ${\Pi }_{\mathcal{M}\left\lbrack y\right\rbrack } \leftarrow {\Pi }_{\mathcal{M}\left\lbrack y\right\rbrack } + {\beta }_{i}{\gamma }_{i}{\widehat{\pi }}_{i}$
439
+
440
+ if ${\Pi }_{\mathcal{M}\left\lbrack {z - 1}\right\rbrack } \leq {\Pi }_{\mathcal{M}\left\lbrack z\right\rbrack } \leq {\Pi }_{\mathcal{M}\left\lbrack {z + 1}\right\rbrack }$ then
441
+
442
+ $\ell \leftarrow z + 1$
443
+
444
+ else if ${\Pi }_{\mathcal{M}\left\lbrack {z - 1}\right\rbrack } \geq {\Pi }_{\mathcal{M}\left\lbrack z\right\rbrack } \geq {\Pi }_{\mathcal{M}\left\lbrack {z + 1}\right\rbrack }$ then
445
+
446
+ $r \leftarrow z - 1$
447
+
448
+ else
449
+
450
+ ${m}^{ \star } \leftarrow \mathcal{M}\left\lbrack z\right\rbrack$
451
+
452
+ break
453
+
454
+ /* Pricing for the different segments */
455
+
456
+ for $i \in \left\lbrack K\right\rbrack$ do
457
+
458
+ if ${\widehat{p}}_{i} \geq {m}^{ \star } + {\tau }_{i}$ then
459
+
460
+ ${\widetilde{p}}_{i} \leftarrow {m}^{ \star } + {\tau }_{i}$
461
+
462
+ else if ${\widehat{p}}_{i} \leq {m}^{ \star } - {\tau }_{i}$ then
463
+
464
+ ${\widetilde{p}}_{i} \leftarrow {m}^{ \star } - {\tau }_{i}$
465
+
466
+ else
467
+
468
+ ${\widetilde{p}}_{i} \leftarrow {\widehat{p}}_{i}$
469
+
470
+ (b) From Theorem 4, for $m = \left( {\underline{v} + \bar{v}}\right) /2$ , the CoF bound
471
+
472
+ holds. Also, ${\Pi }_{{m}^{ \star }}\left( \mathbf{L}\right) \geq {\Pi }_{m}\left( \mathbf{L}\right)$ for all $m \neq {m}^{ \star }$ . We have:
473
+
474
+ $$
475
+ \operatorname{CoF} = \frac{\Pi \left( \widehat{\mathbf{p}}\right) }{\Pi \left( \widetilde{\mathbf{p}}\right) } \leq \frac{\Pi \left( \widehat{\mathbf{p}}\right) }{{\Pi }_{{m}^{ \star }}\left( \mathbf{L}\right) } \leq \frac{\Pi \left( \widehat{\mathbf{p}}\right) }{{\Pi }_{m}\left( \mathbf{L}\right) }
476
+ $$
477
+
478
+ This completes the proof of the theorem.
479
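To make the pieces above concrete, here is a compact Python sketch of the procedure. For readability it scans all $\mathcal{O}\left( K\right)$ critical points instead of binary searching, which yields the same pivot in $\mathcal{O}\left( {K}^{2}\right)$ time; Algorithm 1 is the $\mathcal{O}\left( {K\log K}\right)$ refinement. All input values below are illustrative:

```python
def opt_linp_ffp(alpha, p_hat, pi_hat, beta, D, v_lo, v_hi):
    """Sketch of OPT-LINP-FFP: pick the critical point maximising Pi_m(L),
    then project each peak price onto the band [m* - tau_i, m* + tau_i]."""
    tau = [0.5 * alpha * d for d in D]
    # Critical points: per-segment breakpoints inside [v_lo, v_hi] plus boundaries.
    M = {v_lo, v_hi}
    for p, t in zip(p_hat, tau):
        if p - t > v_lo:
            M.add(p - t)
        if p + t < v_hi:
            M.add(p + t)

    def bound(m):  # the lower bound Pi_m(L) from Eq. (13)
        total = 0.0
        for p, pi, b, t in zip(p_hat, pi_hat, beta, tau):
            if abs(p - m) < t:
                g = 1.0
            elif p - m >= t:
                g = (m + t - v_lo) / (p - v_lo)
            else:
                g = (v_hi - m + t) / (v_hi - p)
            total += b * g * pi
        return total

    m_star = max(M, key=bound)
    prices = [min(max(p, m_star - t), m_star + t) for p, t in zip(p_hat, tau)]
    return m_star, prices
```

On a symmetric two-segment instance the optimal pivot sits at one of the inner breakpoints, and the returned prices differ by exactly $2{\tau }_{i}$, i.e., the widest gap the fairness constraint permits.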
+
480
+ § 6 DISCUSSION
481
+
482
+ This paper built a foundation for the design of fair feature-based pricing by proposing a new fairness notion called $\alpha$ -fairness. Our impossibility result for the discrete valuation setting precludes attaining a finite cost of fairness (CoF) in general settings. Interestingly, in the continuous valuation setting with concave revenue functions, we showed that a family of pricing schemes, LINP-FFP, achieves a CoF strictly less than 2. Finally, we proposed an algorithm, OPT-LINP-FFP, which yields an optimal pricing strategy within this family. It is worth noting that the algorithm does not require the complete distribution functions; the peaks of the revenue curves are sufficient statistics for computing an optimal fair feature-based pricing.
483
+
484
+ We leave the problem of finding an optimal segmentation (optimal value of $K$ and corresponding $K$ -partition of the market) as interesting future work. We assumed a monopoly market. It will be interesting to study optimal fair pricing in the face of competition and other constraints such as finite supply, non-linear production cost, and variable demand.
UAI/UAI 2022/UAI 2022 Conference/B248iw8jce5/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,529 @@
1
+ ## Bayesian Quantile and Expectile Optimisation
2
+
3
+ ## Abstract
4
+
5
+ Bayesian optimisation (BO) is widely used to optimise stochastic black-box functions. While most BO approaches focus on optimising conditional expectations, many applications require risk-averse strategies, and alternative criteria accounting for the tails of the distribution need to be considered. In this paper, we propose new variational models for Bayesian quantile and expectile regression that are well suited to heteroscedastic noise settings. Our models consist of two latent Gaussian processes accounting respectively for the conditional quantile (or expectile) and the scale parameter of an asymmetric likelihood function. Furthermore, we propose two BO strategies based on entropy search and Thompson sampling that are tailored to such models and can accommodate large batches of points. Contrary to existing BO approaches for risk-averse optimisation, our strategies can directly optimise for the quantile and expectile, without requiring replicated observations or assuming a parametric form for the noise. As illustrated in the experimental section, the proposed approach clearly outperforms the state of the art in the heteroscedastic, non-Gaussian case.
6
+
7
+ ## 1 INTRODUCTION
8
+
9
+ Let $\Psi : \mathcal{X} \times \Omega \rightarrow \mathbb{R}$ be an unknown function, where $\mathcal{X} \subset {\left\lbrack 0,1\right\rbrack }^{D}$ and $\Omega$ denotes a probability space representing some uncontrolled variables. For any fixed $x \in \mathcal{X},{Y}_{x} = \Psi \left( {x, \cdot }\right)$ is a random variable of distribution ${\mathbb{P}}_{x}$ . We assume here a classical black-box optimisation framework: $\Psi$ is available only through (costly) pointwise evaluations of ${Y}_{x}$ . Typical examples may include stochastic simulators in physics or biology (see Skullerud (1968) for simulations of ion motion and Székely Jr and Burrage (2014) for simulations of heterogeneous natural systems), but $\Psi$ can also represent the performance of a machine learning algorithm according to some hyperparameters (see Bergstra et al. (2011) for instance). In the latter case, the randomness can come from the use of minibatching in the training procedure, the choice of a stochastic optimiser or the randomness in the initialisation of the optimiser.
10
+
11
+ Let $g\left( x\right) = \rho \left( {\mathbb{P}}_{x}\right)$ be the objective function we want to maximise, where $\rho$ is a real-valued functional defined on probability measures. The canonical choice for $\rho$ is the expectation, which is sensible when the exposure to extreme values is not a significant aspect of the decision. However, in a large variety of fields such as agronomy, medicine or finance, decision makers have an incentive to protect themselves against extreme events, since these may lead to severe consequences. To take such rare events into account, one should consider alternative choices for $\rho$ that can capture the behaviour of the tails of ${\mathbb{P}}_{x}$ , such as the quantile (Rostek, 2010), conditional value-at-risk (CVaR, see Rockafellar et al. (2000)) or expectile (Bellini and Di Bernardino, 2017). In this paper we focus on the modelling and optimisation of quantiles and expectiles.
12
+
13
+ Given an estimate of $g$ based on available data, global optimisation algorithms define a policy that finds a trade-off between exploration and intensification. More precisely, the algorithm has to explore the input space in order to avoid getting trapped in a local optimum, but it also has to concentrate its budget on input regions identified as having high potential. The latter results in accurate estimates of $g$ in the region of interest and allows the algorithm to return an optimal input value with high precision.
14
+
15
+ In the context of Bayesian optimisation (BO), such trade-offs were initially studied by Mockus et al. (1978) and Jones et al. (1998) in a noise-free setting. Their framework has later been extended to the optimisation of the conditional expectation of a stochastic black box (see e.g. Frazier et al. (2009); Srinivas et al. (2009) or Picheny et al. (2013) for a review). Recently, strategies optimising risk measures have been proposed. In particular, Cakmak et al. (2020) proposed new algorithms to optimise the quantile and CVaR for a slightly different use case, where the space $\Omega$ is actually controllable. Browne et al. (2016) and Makarova et al. (2021) proposed algorithms to optimise quantiles and CVaRs, but both rely on intensively repeating observations, which hinders their efficiency in relatively low-budget scenarios.
16
+
17
+ Contributions The contributions of this paper are the following: 1) We propose a new model based on two latent Gaussian processes (GPs) to estimate quantiles or expectiles that is tailored to heteroscedastic noise. 2) We use sparse GP posteriors and variational inference to support potentially large datasets. 3) We propose a new Bayesian algorithm suited to optimising conditional quantiles or expectiles in a data-efficient manner. Two batch-sequential acquisition strategies are designed to find a good trade-off between exploration and intensification. The ability of our algorithm to optimise quantiles is illustrated on multiple test problems.
18
+
19
+ ## 2 BAYESIAN METAMODELS OF RISK MEASURES
20
+
21
+ For a given input point $x$ , the quantile of order $\tau \in \left( {0,1}\right)$ of ${Y}_{x}$ can be defined as
22
+
23
+ $$
24
+ {q}_{\tau }\left( x\right) = \underset{q \in \mathbb{R}}{\arg \min }\mathbb{E}\left\lbrack {{l}_{\tau }\left( {{Y}_{x} - q}\right) }\right\rbrack , \tag{1}
25
+ $$
26
+
27
+ where ${l}_{\tau }$ is the pinball loss (Koenker and Bassett Jr, 1978)
28
+
29
+ $$
30
+ {l}_{\tau }\left( \xi \right) = \left( {\tau - {\mathbb{1}}_{\left( \xi < 0\right) }}\right) \xi ,\;\xi \in \mathbb{R}. \tag{2}
31
+ $$
32
+
33
+ Similarly, Newey and Powell (1987) introduced the expectile as the minimiser of an asymmetric quadratic loss:
34
+
35
+ $$
36
+ {e}_{\tau }\left( x\right) = \underset{q \in \mathbb{R}}{\arg \min }\mathbb{E}\left\lbrack {{l}_{\tau }^{e}\left( {{Y}_{x} - q}\right) }\right\rbrack , \tag{3}
37
+ $$
38
+
39
+ $$
40
+ {l}_{\tau }^{e}\left( \xi \right) = \left| {\tau - {\mathbb{1}}_{\left( \xi < 0\right) }}\right| {\xi }^{2},\;\xi \in \mathbb{R}. \tag{4}
41
+ $$
42
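The defining property of these losses can be checked empirically: the minimiser of the average pinball loss over a sample is the empirical $\tau$-quantile, and the symmetric case $\tau = 0.5$ of the expectile loss recovers the mean. A small NumPy illustration (grid search is used here purely for transparency):

```python
import numpy as np

def pinball(xi, tau):
    """Pinball loss l_tau(xi) = (tau - 1[xi < 0]) * xi."""
    return (tau - (xi < 0)) * xi

def expectile_loss(xi, tau):
    """Asymmetric quadratic loss |tau - 1[xi < 0]| * xi**2."""
    return np.abs(tau - (xi < 0)) * xi**2

rng = np.random.default_rng(0)
y = rng.normal(size=100_000)

# Scan a grid of candidate values q and keep the pinball-risk minimiser.
grid = np.linspace(-3.0, 3.0, 601)
risks = [pinball(y - q, 0.9).mean() for q in grid]
q_hat = grid[int(np.argmin(risks))]
print(q_hat)  # close to the standard normal 0.9-quantile (~1.28)
```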
+
43
+ We detail in the next section how these losses can be used to obtain an estimate of the objective function $g\left( x\right)$ using a dataset ${\mathcal{D}}_{n} = \left( {\left( {{x}_{1},{y}_{1}}\right) ,\ldots ,\left( {{x}_{n},{y}_{n}}\right) }\right) = \left( {{\mathcal{X}}_{n},{\mathcal{Y}}_{n}}\right)$ that does not necessarily require replicated observations at the same input location.
44
+
45
+ ### 2.1 QUANTILE AND EXPECTILE METAMODEL
46
+
47
+ Different metamodels have been proposed to estimate a quantile function, such as artificial neural networks (Cannon, 2011), random forests (Meinshausen, 2006) or nonparametric estimation in reproducing kernel Hilbert spaces (Takeuchi et al., 2006). While the literature on expectile regression is less extensive, neural network (Jiang et al., 2017) and SVM-like approaches (Farooq and Steinwart, 2017) have been developed as well. All the approaches cited above define an estimator of $g$ as the function that minimises (optionally with a regularisation term)
48
+
49
+ $$
50
+ {\mathcal{R}}_{e}\left\lbrack g\right\rbrack = \frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}l\left( {{y}_{i} - g\left( {x}_{i}\right) }\right) , \tag{5}
51
+ $$
52
+
53
+ with $l = {l}_{\tau }$ for the quantile estimation and $l = {l}_{\tau }^{e}$ for the expectile. This framework makes sense because asymptotically minimising (5) is equivalent to minimising (1) or (3).
54
+
55
+ These approaches however share a common drawback: they do not capture the uncertainty associated with each prediction. This is a significant problem in our setting since quantifying this uncertainty is of paramount importance to define the exploration/intensification trade-off. This limitation can be overcome by using a probabilistic model such as
56
+
57
+ $$
58
+ y = g\left( x\right) + \epsilon \left( x\right) ,
59
+ $$
60
+
61
+ where $g$ is either an unknown parametric function (Yu and Moyeed, 2001) or a Gaussian process (Boukouvalas et al., 2012; Abeywardana and Ramos, 2015), and where the distribution of $\epsilon$ depends on the quantity to be estimated. For modelling a quantile, $\epsilon$ should follow an asymmetric Laplace distribution:
62
+
63
+ $$
64
+ {p}_{\epsilon }\left( e\right) = \frac{\tau \left( {1 - \tau }\right) }{\sigma }\exp \left( {-\frac{{l}_{\tau }\left( e\right) }{\sigma }}\right) .
65
+ $$
66
+
67
+ For approximating an expectile, one can use the asymmetric Gaussian distribution:
68
+
69
+ $$
70
+ {p}_{\epsilon }\left( e\right) = C\left( {\tau ,\sigma }\right) \exp \left( {-\frac{{l}_{\tau }^{e}\left( e\right) }{2{\sigma }^{2}}}\right) , \tag{6}
71
+ $$
72
+
73
+ with $C\left( {\tau ,\sigma }\right) = \frac{\sqrt{{2\tau }\left( {1 - \tau }\right) }}{\sigma \sqrt{\pi }\left( {\sqrt{\tau } + \sqrt{1 - \tau }}\right) }$ .
74
+
75
+ In both cases, the associated likelihood is given by
76
+
77
+ $$
78
+ p\left( {{\mathcal{Y}}_{n} \mid g}\right) = \mathop{\prod }\limits_{{i = 1}}^{n}{p}_{\epsilon }\left( {{y}_{i} - g\left( {x}_{i}\right) }\right) . \tag{7}
79
+ $$
80
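As a sanity check, both densities integrate to one. The following NumPy sketch evaluates the two log-densities and verifies the normalisation numerically (the function names are ours, and the Riemann-sum check is purely illustrative):

```python
import numpy as np

def ald_logpdf(e, tau, sigma):
    """Log-density of the asymmetric Laplace distribution (quantile case)."""
    pinball = (tau - (e < 0)) * e
    return np.log(tau * (1 - tau) / sigma) - pinball / sigma

def agauss_logpdf(e, tau, sigma):
    """Log-density of the asymmetric Gaussian distribution (expectile case)."""
    C = np.sqrt(2 * tau * (1 - tau)) / (sigma * np.sqrt(np.pi) * (np.sqrt(tau) + np.sqrt(1 - tau)))
    return np.log(C) - np.abs(tau - (e < 0)) * e**2 / (2 * sigma**2)

# Riemann-sum check that both densities integrate to ~1.
e, de = np.linspace(-60.0, 60.0, 600_001, retstep=True)
print(np.exp(ald_logpdf(e, 0.5, 1.0)).sum() * de)     # ~1.0
print(np.exp(agauss_logpdf(e, 0.9, 1.0)).sum() * de)  # ~1.0
```

Note that for the asymmetric Laplace density, the mass below zero equals $\tau$ exactly, which is what makes the posterior mode a $\tau$-quantile.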
+
81
+ Although the Bayesian quantile model presented above is well known (Yu and Moyeed, 2001; Boukouvalas et al., 2012; Abeywardana and Ramos, 2015), the Bayesian expectile model we just introduced is new to the best of our knowledge. It is worth noting that the non-conjugacy between the prior on $g$ and the likelihood functions implies that the posterior distribution of $g$ given the data is not available in closed form. To overcome this, Boukouvalas et al. (2012) use expectation propagation, whereas Abeywardana and Ramos (2015) favour variational inference. The latter appears to be one of the most competitive approaches in the benchmark presented in Torossian et al. (2019), so we embrace the variational inference framework in the remainder of the paper.
82
+
83
+ ![01963967-c4f0-7c27-9b8c-2fa4e1d2a1a0_2_191_174_1311_450_0.jpg](images/01963967-c4f0-7c27-9b8c-2fa4e1d2a1a0_2_191_174_1311_450_0.jpg)
84
+
85
+ Figure 1: GP quantile model from Abeywardana and Ramos (2015) (left) and ours (right) on data with high heteroscedasticity. The left model cannot compromise between very small observation variances around $x = 4$ and very large variances $\left( {x \leq 2}\right)$ ; it largely overfits on half of the domain and returns overconfident confidence intervals. In contrast, our model captures both the low- and high-variance regions, while returning well-calibrated confidence intervals.
86
+
87
+ One limitation of the aforementioned methods is that they can result in overconfident predictions in heteroscedastic settings, as illustrated in Figure 1. The main reason is that they only use a single parameter $\sigma$ to capture the spread of the likelihood function, which amounts to assuming that the noise amplitude does not change over the input space. We believe this can be a severe limitation in the context of quantile optimisation, since the variation of the quantile value over the input space is largely driven by the noise distribution itself being non-stationary.
88
+
89
+ To overcome this issue, we propose to build quantile and expectile models where the spread of the asymmetric Laplace and Gaussian likelihoods varies across the input space. For both distributions, this can be achieved by redefining $\sigma$ in Equations (6) and (7) as a function of the input parameters. Intuitively, a small value of $\sigma \left( x\right)$ means that there is a high penalty for having an estimate of $g\left( x\right)$ that is far away from the data, whereas a large value of $\sigma \left( x\right)$ means that this penalty is limited, which leads to more regularity in the model predictions. In practice, we choose a Gaussian prior for $g$ and a log-Gaussian prior for $\sigma$ ,
90
+
91
+ $$
92
+ g\left( x\right) \sim \mathcal{G}\mathcal{P}\left( {{\mu }_{g}\left( x\right) ,{k}_{\theta }^{g}\left( {x,{x}^{\prime }}\right) }\right) , \tag{8}
93
+ $$
94
+
95
+ $$
96
+ \log \sigma \left( x\right) \sim \mathcal{G}\mathcal{P}\left( {{\mu }_{\sigma }\left( x\right) ,{k}_{\theta }^{\sigma }\left( {x,{x}^{\prime }}\right) }\right) . \tag{9}
97
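A generative sketch of this two-latent-GP model in NumPy (the RBF kernel, its hyperparameters, and the asymmetric Laplace sampler are illustrative assumptions, not the paper's exact setup). By construction, $g(x)$ is the $\tau$-quantile of the simulated observations at $x$:

```python
import numpy as np

rng = np.random.default_rng(2)

def rbf(X1, X2, ls=0.3):
    d = X1[:, None] - X2[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def sample_ald(tau, sigma, rng):
    """Asymmetric Laplace noise with zero tau-quantile: positive exponential
    (scale sigma/tau) with prob 1 - tau, negative (scale sigma/(1-tau)) with prob tau."""
    pos = rng.random(sigma.shape) > tau
    e = rng.exponential(size=sigma.shape)
    return np.where(pos, e * sigma / tau, -e * sigma / (1 - tau))

# Joint prior draws of the quantile g and the log noise scale, then observations.
X = np.linspace(0.0, 5.0, 300)
L = np.linalg.cholesky(rbf(X, X) + 1e-8 * np.eye(len(X)))
g = L @ rng.standard_normal(len(X))
log_sigma = -1.0 + 0.5 * (L @ rng.standard_normal(len(X)))
y = g + sample_ald(0.9, np.exp(log_sigma), rng)
```

Where $\sigma(x)$ is large, the observations spread far above $g(x)$; where it is small, they hug the quantile curve.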
+ $$
98
+
99
+ This model can be compared to the heteroscedastic GP model introduced by Saul et al. (2016), but with a different likelihood function so that the posterior mode corresponds to a quantile or an expectile.
100
+
101
+ ### 2.2 INFERENCE PROCEDURE
102
+
103
+ Although one can obtain a reasonable estimate of a mean value using only a handful of samples, inferring quantiles or expectiles tends to require a much larger number of observations, since these quantities depend on information from the tails of the distribution. The inference procedure for the proposed probabilistic model must thus be able to cope with relatively large datasets, with a number of observations on the order of a few thousand to tens of thousands of data points.
104
+
105
+ A well-established method that supports both large datasets and non-conjugate likelihoods is the sparse variational GP framework (Titsias, 2009; Hensman et al., 2013). It consists in approximating the intractable or computationally expensive posterior distribution $p\left( {g,\sigma \mid {\mathcal{Y}}_{n}}\right)$ by a distribution $p\left( {g,\sigma \mid g\left( Z\right) = {u}_{g},\sigma \left( Z\right) = {u}_{\sigma }}\right)$ , where $Z \in {\mathcal{X}}^{N}$ and ${u}_{g},{u}_{\sigma }$ are $N$ -dimensional random variables:
106
+
107
+ $$
108
+ {u}_{g} \sim \mathcal{N}\left( {{u}_{g} \mid {\mu }_{g},{S}_{g}}\right) \text{ and }{u}_{\sigma } \sim \mathcal{N}\left( {{u}_{\sigma } \mid {\mu }_{\sigma },{S}_{\sigma }}\right) .
109
+ $$
110
+
111
+ The parameters $Z,{\mu }_{g},{S}_{g},{\mu }_{\sigma },{S}_{\sigma }$ , are referred to as the variational parameters. The $Z$ ’s are often called inducing points. Intuitively, ${u}_{g}$ are random variables that act as pseudo-observations at the inducing point locations.
112
+
113
+ The variational parameters can be optimised jointly with the model parameters (e.g. mean function coefficients or kernel hyperparameters) such that the Kullback–Leibler divergence between the approximate and the true posterior is as small as possible. In practice, this is achieved by maximising the Evidence Lower Bound (ELBO):
114
+
115
+ $$
+ \mathop{\sum }\limits_{{i = 1}}^{n}\int \log p\left( {{y}_{i} \mid {g}_{i},{\sigma }_{i}}\right) \widetilde{p}\left( {g}_{i}\right) \widetilde{p}\left( {\sigma }_{i}\right) d{g}_{i}d{\sigma }_{i} - \operatorname{KL}\left( {\widetilde{p}\left( {u}_{g}\right) \parallel p\left( {u}_{g}\right) }\right) - \operatorname{KL}\left( {\widetilde{p}\left( {u}_{\sigma }\right) \parallel p\left( {u}_{\sigma }\right) }\right) ,
+ $$
122
+
123
+ where $\widetilde{p}\left( {g}_{i}\right)$ and $\widetilde{p}\left( {\sigma }_{i}\right)$ are shorthands for the variational posterior distributions at ${x}_{i}$ :
124
+
125
+ $$
126
+ \widetilde{p}\left( {g}_{i}\right) = \int p\left( {g\left( {x}_{i}\right) \mid g\left( Z\right) = {u}_{g}}\right) p\left( {u}_{g}\right) d{u}_{g}
127
+ $$
128
+
129
+ $$
130
+ = \mathcal{N}\left( {{g}_{i} \mid {K}_{{x}_{i},{u}_{g}}{K}_{{u}_{g},{u}_{g}}^{-1}{\mu }_{g},{K}_{{x}_{i},{x}_{i}} + {Q}_{g}}\right) ,
131
+ $$
132
+
133
+ where ${Q}_{g} = {K}_{{x}_{i},{u}_{g}}{K}_{{u}_{g},{u}_{g}}^{-1}\left( {{S}_{g} - {K}_{{u}_{g},{u}_{g}}}\right) {K}_{{u}_{g},{u}_{g}}^{-1}{K}_{{u}_{g},{x}_{i}}$ .
134
+
135
+ The proposed inference scheme is similar to the one used in Saul et al. (2016), with the notable difference that the non-differentiability of the pinball loss at the origin means we must resort to a first-order optimiser such as Adam (Kingma and Ba, 2014), which can handle a non-differentiable objective function.
136
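To see why a first-order method suffices despite the kink in the loss, note that the subgradient of the expected pinball loss with respect to a candidate quantile $q$ is simply $P\left( {Y < q}\right) - \tau$. A toy NumPy illustration (plain subgradient descent on samples here, rather than Adam or the full sparse variational model):

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.exponential(size=50_000)  # toy observations, Exp(1)

tau, q, lr = 0.8, 0.0, 0.05
for _ in range(1000):
    # Subgradient of the mean pinball loss wrt q: mean(1[y < q]) - tau.
    q -= lr * (np.mean(y < q) - tau)

print(q)  # approaches -log(0.2) ~ 1.609, the 0.8-quantile of Exp(1)
```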
+
137
+ ## 3 BAYESIAN OPTIMISATION
138
+
139
+ Classical BO algorithms work as follows. First, a posterior distribution on $g$ is inferred from an initial set of experiments ${\mathcal{D}}_{n}$ (typically obtained using a space-filling design). Then the next input point to evaluate is chosen as the maximiser of an acquisition function ${\alpha }_{n} : \mathcal{X} \rightarrow \mathbb{R}$ computed from the posterior. The objective function is sampled at the chosen input and the posterior on $g$ is updated. These steps are repeated until the budget is exhausted. The efficiency of such strategies depends both on how informative the posterior on $g$ is and on the exploration/exploitation trade-off provided by the acquisition function. Many acquisition functions have been designed to control this trade-off, among them expected improvement (EI, Jones et al., 1998), upper confidence bound (UCB, Srinivas et al., 2009), knowledge gradient (KG, Frazier et al., 2009) and entropy search (PES, Hernández-Lobato et al., 2014).
140
+
141
+ In the case of quantiles and expectiles, adding points one at a time is impractical, since many points are typically necessary to significantly modify the posterior on $g$ . Hence, we focus here on batch-BO strategies, for which the acquisition recommends a batch of $B > 1$ points instead of a single one. The above-mentioned acquisition functions have been extended to handle batches: see for instance Marmin et al. (2015) for EI, Wu and Frazier (2016) for KG or Desautels et al. (2014) for UCB. However, none actually fits our setting, for two main reasons. First, most parallel acquisitions make use of explicit update equations for the GP moments and assume access to a Gaussian posterior for observations, neither of which is available for our model. Second, most are designed for small batches (say, $B \leq 5$ ) and become numerically intractable for the larger batches (say, $B > {50}$ ) that are more in line with the data volumes necessary for quantile and expectile estimation.
142
+
143
+ We propose in the following the first acquisition functions that can be applied to our quantile GP surrogate model, one based on Thompson sampling and one on entropy search.
144
+
145
+ ### 3.1 THOMPSON SAMPLING
146
+
147
+ Thompson sampling (TS) is becoming increasingly popular in BO, in particular because of its embarrassingly parallel nature allowing full scalability with the batch size (Hernández-Lobato et al., 2017; Kandasamy et al., 2018;
148
+
149
+ Vakili et al., 2021).
150
+
151
+ Given the posterior on $g$ , an intuitive approach is to sample an input $x$ according to the probability that it is the location of the maximum of $g$ . Although this distribution is usually intractable, one may achieve the same result by sampling a trajectory from the posterior of $g$ and then selecting the input that corresponds to its maximiser. Such an approach extends directly to batches of inputs, by drawing several trajectories and selecting all of their maximisers.
152
+
153
+ The main drawback of GP-based TS is the cost of sampling a trajectory, which can only be done exactly at a finite number of input locations at a cubic cost in the number of locations. An alternative is to rely on a finite rank approximation of the kernel, but this has been found to have an undesirable effect known as variance starvation (Wang et al., 2018).
154
+
155
+ Wilson et al. (2020) showed that pairing sparse GP models with the so-called decoupled sampling formulation avoids the variance starvation issue. Vakili et al. (2021) then demonstrated that such an approach delivered excellent empirical performance on high noise, large budget, large batch scenarios, while enjoying the same theoretical guarantees as the vanilla TS approach. Here, we build upon Vakili et al. (2021), and apply their algorithm to the variational posterior of $g$ to obtain draws directly from the quantile or expectile model. The posterior over $\sigma$ , which controls the observation noise, is not used during the TS algorithm.
156
+
157
+ The procedure for generating quantile samples from the variational posterior of $g$ can be summarised as follows. First, a continuous sample $s\left( x\right)$ from the prior of $g$ is generated using Random Fourier Features (see supplementary material B). Second, we sample from the inducing variables ${u}_{g}$ . Third, we compute the mean function $m\left( x\right)$ of a GPR model that interpolates the dataset $\left\{ {Z,{u}_{g} - s\left( Z\right) }\right\}$ . Finally, the posterior sample is obtained by correcting the prior sample with this mean function: $v\left( x\right) = s\left( x\right) + m\left( x\right)$ .
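The four steps above can be sketched in a few lines. The code below is a minimal, self-contained illustration rather than the paper's gpflux implementation: it uses an RBF kernel instead of Matérn 5/2, assumes the variational posterior over the inducing variables has already been sampled, and all function names are our own.

```python
import numpy as np

def rbf(A, B, lengthscale=1.0):
    """Squared-exponential kernel matrix between row-sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def rff_prior_sample(rng, dim, n_features=1000, lengthscale=1.0):
    """Step 1: a continuous prior sample s(.) via Random Fourier Features."""
    omega = rng.normal(0.0, 1.0 / lengthscale, size=(n_features, dim))
    phase = rng.uniform(0.0, 2 * np.pi, size=n_features)
    w = rng.normal(size=n_features)
    return lambda X: np.sqrt(2.0 / n_features) * np.cos(X @ omega.T + phase) @ w

def decoupled_posterior_sample(Z, u_g, kernel, s):
    """Steps 2-4: given sampled inducing variables u_g at locations Z,
    interpolate the residuals u_g - s(Z) with a noiseless-GPR mean m(.)
    and return the corrected sample v(x) = s(x) + m(x)."""
    Kzz = kernel(Z, Z) + 1e-8 * np.eye(len(Z))  # jitter for stability
    alpha = np.linalg.solve(Kzz, u_g - s(Z))
    return lambda X: s(X) + kernel(X, Z) @ alpha
```

By construction, the returned sample interpolates the inducing variables, i.e. $v(Z) = u_g$ up to jitter; a Thompson-sampling batch is then obtained by drawing $B$ such trajectories and maximising each one (e.g. with multi-start BFGS).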
158
+
159
+ ### 3.2 INFORMATION-THEORETIC QUANTILE OPTIMISATION WITH GIBBON
160
+
161
+ Another particularly intuitive search strategy for BO is to choose the evaluations that will maximally reduce the uncertainty in the minimiser of the objective, an approach known as max-value (or min-value) entropy search (MES, Wang and Jegelka, 2017). For quantile optimisation, MES corresponds to reducing uncertainty in the minimal quantile value ${g}^{ * } = \mathop{\min }\limits_{{x \in \mathcal{X}}}g\left( x\right)$ . Following the arguments of Wang and Jegelka (2017), a meaningful measure of uncertainty reduction in this context is the gain in mutual information between a set of candidate evaluations and ${g}^{ * }$ (see Cover and Thomas, 2012, for an introduction to information theory). Principled information-theoretic optimisation then corresponds to finding batches of $B$ input points ${\left\{ {x}_{i}\right\} }_{i = 1}^{B}$ that maximise
162
+
163
+ $$
164
+ {\alpha }_{n}\left( {\left\{ {x}_{i}\right\} }_{i = 1}^{B}\right) = \operatorname{MI}\left( {{g}^{ * };{\left\{ {y}_{{x}_{i}}\right\} }_{i = 1}^{B} \mid {\mathcal{D}}_{n}}\right) , \tag{10}
165
+ $$
166
+
167
+ where ${y}_{{x}_{i}}$ are not-yet-observed evaluations of the batch that are estimated with the GP surrogate model.
168
+
169
+ Although calculating the acquisition function (10) is challenging, there exist effective approximation strategies for GP models with conjugate likelihoods (Moss et al., 2020b; Takeno et al., 2020). In the remainder of this section we show that the approach used in General-purpose Information-Based Bayesian-OptimisatioN (GIBBON; Moss et al., 2021) can be adapted to support asymmetric Laplace or Gaussian likelihoods, so that information-theoretic acquisition functions can be used with our quantile and expectile models.
170
+
171
+ Following the derivations of Moss et al. (2021), the application of three well-known information-theoretic inequalities provides the following lower-bound for the mutual information (10):
172
+
173
+ $$
+ \operatorname{MI}\left( {{g}^{ * };{\left\{ {y}_{{x}_{i}}\right\} }_{i = 1}^{B} \mid {\mathcal{D}}_{n}}\right) \geq \mathrm{H}\left( {{\left\{ {y}_{{x}_{i}}\right\} }_{i = 1}^{B} \mid {\mathcal{D}}_{n}}\right) - \frac{1}{2}\mathop{\sum }\limits_{{i = 1}}^{B}{\mathbb{E}}_{{g}^{ * } \mid {\mathcal{D}}_{n}}\left\lbrack {\log \left( {2\pi e\operatorname{Var}\left( {{y}_{{x}_{i}} \mid {g}^{ * },{\mathcal{D}}_{n}}\right) }\right) }\right\rbrack , \tag{11}
+ $$
180
+
181
+ where $\mathrm{H}\left( A\right) = - {\mathbb{E}}_{A}\left\lbrack {\log p\left( A\right) }\right\rbrack$ denotes differential entropy. Although calculating the expectation in the second term of (11) is intractable (i.e. no closed-form expression exists for $p\left( {{g}^{ * } \mid {\mathcal{D}}_{n}}\right)$ ), we follow another approximation common among information-theoretic acquisition functions and approximate the integral using Monte-Carlo over a set of $M$ sampled minimum values. In particular, we use the Gumbel sampler proposed by Wang and Jegelka (2017), which provides a cheap set of samples ${\mathcal{M}}_{n} = \left\{ {{g}_{1}^{ * },..,{g}_{M}^{ * }}\right\}$ from $p\left( {{g}^{ * } \mid {\mathcal{D}}_{n}}\right)$ .
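As a concrete (if crude) stand-in for the Gumbel sampler, minimum values can also be drawn by the following scheme (our own illustration, not the sampler of Wang and Jegelka (2017)): sample the posterior marginals at a dense candidate set as if they were independent, and keep the minimum of each joint draw.

```python
import numpy as np

def sample_min_values(mu, sigma, n_samples, rng):
    """Approximate draws of g* = min_x g(x): sample the Gaussian posterior
    marginals (means mu, stds sigma) at a set of candidate inputs as if
    independent, then keep the minimum of each joint draw."""
    draws = rng.normal(mu[None, :], sigma[None, :], size=(n_samples, len(mu)))
    return draws.min(axis=1)
```

This ignores the posterior correlations that the Gumbel sampler approximates more cheaply, but makes explicit what the samples ${\mathcal{M}}_{n}$ represent.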
182
+
183
+ When calculating the original GIBBON acquisition function, all the terms in the lower bound (11) are tractable: the conjugacy of their Gaussian likelihood means that $\mathrm{H}\left( {{\left\{ {y}_{{x}_{i}}\right\} }_{i = 1}^{B} \mid {\mathcal{D}}_{n}}\right)$ is just the differential entropy of a multivariate Gaussian which, alongside each $\operatorname{Var}\left( {{y}_{{x}_{i}} \mid {g}^{ * },{\mathcal{D}}_{n}}\right)$ , has a closed-form expression (see Moss et al. (2021) for details). Consequently, this lower bound itself is used as a closed-form approximation to the mutual information. However, in our quantile setting, we no longer have an expression for the first term of (11) - the joint differential entropy of $B$ -dimensional asymmetric Laplace variables with a complex correlation structure given by our two latent GPs.
184
+
185
+ To build an information-theoretic acquisition function suitable for our quantile model, we must apply an additional approximation. In particular, by using a moment-matching approximation, we can replace the intractable joint differential entropy with the differential entropy of a multivariate Gaussian of the same covariance, leading to our proposed Quantile GIBBON (Q-GIBBON) acquisition function
188
+
189
+ $$
190
+ {\alpha }_{n}^{\mathrm{Q}\text{-GIBBON }} = \frac{1}{2}\log \left| C\right| - \frac{1}{2M}\mathop{\sum }\limits_{{{g}^{ * } \in {\mathcal{M}}_{n}}}\mathop{\sum }\limits_{{i = 1}}^{B}\log {V}_{i}\left( {g}^{ * }\right) ,
191
+ $$
192
+
193
+ where $\left| C\right|$ is the determinant of the $B \times B$ predictive covariance matrix with elements ${C}_{i, j} = \operatorname{Cov}\left( {{y}_{{x}_{i}},{y}_{{x}_{j}}}\right)$ and $V\left( {g}^{ * }\right)$ denotes the conditional variances ${V}_{i}\left( {g}^{ * }\right) =$ $\operatorname{Var}\left( {{y}_{{x}_{i}} \mid {g}^{ * },{\mathcal{D}}_{n}}\right)$ . Crucially, all the terms of Q-GIBBON have closed-form expressions (see appendix A for a derivation of $C$ and $V$ from our quantile GP).
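Assuming $C$ and the conditional variances ${V}_{i}\left( {g}^{ * }\right)$ have already been computed, assembling the acquisition value is straightforward; the helper below is our own sketch, taking $V$ as an $M \times B$ array with one row per sampled minimum ${g}^{ * } \in {\mathcal{M}}_{n}$ .

```python
import numpy as np

def q_gibbon(C, V):
    """Q-GIBBON score from the B x B predictive covariance C and an M x B
    array V whose (m, i) entry is Var(y_xi | g*_m, D_n)."""
    sign, logdet = np.linalg.slogdet(C)  # numerically stable log-determinant
    if sign <= 0:
        raise ValueError("C must be positive definite")
    M = V.shape[0]
    return 0.5 * logdet - 0.5 * np.log(V).sum() / M
```

For $B = 1$, if conditioning on ${g}^{ * }$ does not change the predictive variance (so $V = C$), the score is zero: the candidate carries no information about the minimum.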
194
+
195
+ Although applying an additional moment-matching approximation means that Q-GIBBON is no longer a lower bound on the true mutual information, we found that it provides very efficient optimisation (see Section 4). In fact, we tried much more expensive but unbiased Monte-Carlo approximations, which did not result in any noticeable difference in performance.
196
+
197
+ In practice, directly searching for the set of $B$ points that maximise ${\alpha }_{n}^{\mathrm{Q}\text{-GIBBON }}$ is a very challenging task, due to the dimensionality $\left( {B \times D}\right)$ and multimodality of the acquisition function. However, the Q-GIBBON formulation makes it particularly well-suited for a greedy approach, where we first optimise Q-GIBBON for $B = 1$ , then optimise for $B = 2$ while fixing the first point to the previously found value, etc. until $B$ points are found.
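Such a greedy loop can be sketched generically as follows (our own illustration; in practice each inner step optimises continuously over $\mathcal{X}$ rather than over a fixed candidate list):

```python
import numpy as np

def greedy_batch(acq, candidates, B):
    """Build a batch of size B by repeatedly adding the candidate that
    maximises the acquisition value of the partially fixed batch."""
    batch = []
    for _ in range(B):
        scores = [acq(batch + [x]) for x in candidates]
        batch.append(candidates[int(np.argmax(scores))])
    return batch
```

Because Q-GIBBON's log-determinant term penalises near-duplicate points, this greedy procedure naturally spreads the batch across the input space.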
198
+
199
+ ## 4 EXPERIMENTS
200
+
201
+ We now evaluate our proposed model and acquisition functions on a set of synthetic tasks and two real-world optimisation problems. While all that follows could equivalently be applied to expectiles, the experiments focus on quantile optimisation to streamline the exposition. The results presented in this section can be replicated using the code available at www.github.com/obfuscated-url.
202
+
203
+ ### 4.1 ALGORITHM BASELINES
204
+
205
+ To our knowledge, there is no other existing BO algorithm dedicated to optimising quantiles in our considered setting. The most similar algorithms are those of Cakmak et al. (2020) and Makarova et al. (2021). However, Cakmak et al. (2020) requires precise control over the noise generation process, while Makarova et al. (2021) seek to find solutions with low levels of observation noise but do not provide a method for optimising a specific quantile level.
206
+
207
+ We can, however, apply standard BO methods to perform quantile optimisation if direct observations of the quantiles are available. This is achievable by using repeated observations, which allows computing a (pointwise) empirical quantile. As direct observations are available, a standard GP Regression model (GPR) can be used to provide a posterior on $g$ (Plumlee and Tuo,2014). One can also bootstrap the repeated observations to obtain variance estimates of the empirical quantiles, to improve further the model by accounting for varying observation noise. Next, a BO procedure can be defined based on any classical acquisition function. Here we choose the vanilla EI one. With this strategy, each batch consists of a single point in the input space, repeated a number of times. In the following experiments we use this baseline (denoted GPR-EI) to compare with our two proposed methods using TS and Q-GIBBON over a quantile GP.
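The replicate-based preprocessing behind this baseline can be sketched as follows (function name is our own):

```python
import numpy as np

def empirical_quantile_and_variance(y_reps, tau, n_boot=500, rng=None):
    """Pointwise empirical tau-quantile from repeated observations at a
    single input, plus a bootstrap estimate of its sampling variance
    (usable as a heteroscedastic noise level in the GPR baseline)."""
    rng = rng or np.random.default_rng()
    q = np.quantile(y_reps, tau)
    resamples = rng.choice(y_reps, size=(n_boot, len(y_reps)), replace=True)
    boot_q = np.quantile(resamples, tau, axis=1)
    return q, boot_q.var()
```

Each batch of the baseline thus collapses its repeated observations into a single (quantile, variance) pair per input before fitting the GPR model.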
208
+
209
+ ### 4.2 IMPLEMENTATION
210
+
211
+ All models are built using the gpflux library (Dutordoir et al., 2021), and the BO procedure is run using trieste (Berkeley et al., 2022). All models use a Matérn 5/2 kernel, and all acquisition functions (or GP samples in the case of TS) are optimised using a multi-start BFGS scheme.
212
+
213
+ Our quantile model requires a design choice for the placement of the inducing points; these are reinitialised for each model fit. We follow the findings of Vakili et al. (2021) and use the centroids of a k-means procedure on the data points, which tends to concentrate the inducing points near the optimal areas as more data is collected by BO. Our implementation of decoupled Thompson sampling uses 1000 random Fourier features (see supplementary material for detailed expressions). To sample minimum values for Q-GIBBON we use the Gumbel sampler of Wang and Jegelka (2017) with $10{,}000 \times D$ random initial points.
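A minimal version of this k-means initialisation might look as follows (our own sketch of Lloyd's algorithm; a library implementation would normally be used):

```python
import numpy as np

def kmeans_inducing_points(X, m, n_iter=20, rng=None):
    """Place m inducing points at the centroids of a k-means clustering of
    the data X, so they concentrate where observations accumulate."""
    rng = rng or np.random.default_rng(0)
    Z = X[rng.choice(len(X), size=m, replace=False)].astype(float)
    for _ in range(n_iter):
        # assign each point to its nearest centroid, then recompute means
        d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        for k in range(m):
            if np.any(labels == k):
                Z[k] = X[labels == k].mean(axis=0)
    return Z
```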
214
+
215
+ ### 4.3 SYNTHETIC PROBLEMS
216
+
217
+ Problem description We generated a set of synthetic problems based on the Generalised Lambda Distribution (GLD, Freimer et al., 1988), a highly flexible four-parameter probability distribution function designed to approximate several well-known parametric distributions. The four parameters define the location, scale, left and right shape of the distribution, respectively. By varying the value of each parameter as a function of $x$ , one can create a black-box with high noise, heteroscedasticity and non-Gaussianity:
218
+
219
+ $$
+ {Y}_{x} \sim \operatorname{GLD}\left( {{\lambda }_{1}\left( x\right) ,\ldots ,{\lambda }_{4}\left( x\right) }\right) . \tag{12}
+ $$
222
+
223
+ To generate a large set of problems with varying dimensionality while controlling the multimodality of the problem at hand, we used GP random draws for the ${\lambda }_{i}$ ’s. See appendix for a full description. Figure 2 shows examples of marginal distributions (for different $x$ values) for one such problem.
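Because the GLD is defined through a closed-form quantile function (we assume the FKML parameterisation of Freimer et al. (1988), with $\lambda_3, \lambda_4 \neq 0$), sampling reduces to inverse-transform sampling; the helpers below are our own sketch:

```python
import numpy as np

def gld_quantile(u, l1, l2, l3, l4):
    """Closed-form GLD quantile function (FKML parameterisation),
    valid for l3 != 0 and l4 != 0."""
    return l1 + ((u**l3 - 1) / l3 - ((1 - u) ** l4 - 1) / l4) / l2

def gld_sample(lam, size, rng=None):
    """Inverse-transform sampling: push uniforms through the quantile fn."""
    rng = rng or np.random.default_rng()
    return gld_quantile(rng.uniform(size=size), *lam)
```

A convenient property for benchmarking quantile optimisation is that the true $\tau$ -quantile of ${Y}_{x}$ is available exactly as the quantile function evaluated at $\tau$ , so regret can be computed without Monte-Carlo.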
224
+
225
+ We consider two input space dimensions, $D = 3$ and $D = 6$ , and two quantile levels, $\tau = 0.75$ and $\tau = 0.95$ . We use an initial budget of ${50D}$ observations, uniformly distributed across the input space, and a total budget of ${250D}$ observations, acquired in batches of either $B = 10$ or $B = 50$ points. Each strategy is run on 50 different problems. Figure 3 reports the simple regret, averaged over the 50 problems, with confidence intervals.
226
+
227
+ ![01963967-c4f0-7c27-9b8c-2fa4e1d2a1a0_5_896_179_696_253_0.jpg](images/01963967-c4f0-7c27-9b8c-2fa4e1d2a1a0_5_896_179_696_253_0.jpg)
228
+
229
+ Figure 2: Examples of marginal distributions for one GLD-based problem at three different locations of the input space.
230
+
231
+ Results In almost all cases, our approaches largely outperform the GPR baseline, the exception being on the simpler problem (small dimension and batch size) for which the GPR baseline is comparable to TS (GIBBON being substantially better for the 0.75 quantile). Comparing acquisition strategies, GIBBON clearly outperforms TS for $D = 3$ . In dimension 6, both approaches are roughly comparable.
232
+
233
+ ### 4.4 LUNAR LANDER
234
+
235
+ Problem description The Lunar Lander problem is a popular benchmark for noisy BO (Moss et al., 2020a; Eriksson et al., 2019). In this well-known reinforcement learning task, we must control three engines (left, main and right) to successfully land a rocket. The learning environment and a hard-coded PID controller are provided in the OpenAI gym. ${}^{1}$ We seek to optimise 6 thresholds present in the description of the controller to provide the largest expected reward: finding those thresholds defines the BO task. Our RL environment is exactly as provided by OpenAI. We lose 0.3 points per second of fuel use and 100 points if we crash. We gain 10 points each time a leg makes contact with the ground, 100 points for any successful landing, and 200 points for a successful landing in the specified landing zone. Each individual run of the environment tests a controller on a specific random seed.
236
+
237
+ This problem is particularly well-suited for a quantile approach, since reward is stochastic, highly non-Gaussian, and the landing problem is a clear case for which one would want guarantees against risk.
238
+
239
+ Results For this problem, we ran each algorithm 10 times (starting from different initial conditions), with batches of $B = 25$ points, 300 initial observations and 1,500 observations in total. We aim to maximise the ${10}\%$ quantile of the reward. Due to the high cost of calculating the true quantiles of the lunar lander experiment (i.e. they must be calculated empirically across a large collection of runs), we only report the reward quantile obtained after half and all of the iterations (see Table 1) and only run one of our two proposed acquisition functions. We choose TS over Q-GIBBON as our synthetic GLD experiments suggest that TS outperforms Q-GIBBON on problems with larger (i.e. 6) dimensions. We can see that TS largely outperforms the baseline, as it robustly identifies a much better solution.
240
+
241
+ ---
242
+
243
+ ${}^{1}$ https://gym.openai.com/
244
+
245
+ ---
246
+
247
+ ![01963967-c4f0-7c27-9b8c-2fa4e1d2a1a0_6_141_173_1424_918_0.jpg](images/01963967-c4f0-7c27-9b8c-2fa4e1d2a1a0_6_141_173_1424_918_0.jpg)
248
+
249
+ Figure 3: The mean and 95% confidence intervals of regret on synthetic problems in dimension 3 (top) and 6 (bottom), for two quantile levels $\left( {\tau = {0.75},{0.95}}\right)$ and medium $\left( {B = {10}\text{, left}}\right)$ and large $\left( {B = {50}\text{, right}}\right)$ batch sizes.
250
+
251
+ <table><tr><td/><td>750 obs</td><td>1500 obs</td></tr><tr><td>GPR-EI</td><td>94.6 (106.1)</td><td>159.5 (110.9)</td></tr><tr><td>TS</td><td>204.3 (53.8)</td><td>255.2 (8.0)</td></tr></table>
252
+
253
+ Table 1: Mean and standard deviation over 10 runs for the ${10}\%$ quantile of the reward on the lunar lander problem.
254
+
255
+ ### 4.5 LASER TUNING
256
+
257
+ Problem Description For our final experiment, we test our quantile optimisation in a real-world setting inspired by the Free-Electron Laser (FEL) tuning example of McIntire et al. (2016). This is a challenging 16-dimensional optimisation task where we must configure the strengths of magnets manipulating the shape of the FEL's electron beam, seeking to build a powerful and stable beam suitable for use in scientific experiments. Due to the high levels of observation noise in this problem, and as stability of the resulting beam is of critical importance for conducting reliable experiments, it is clearly beneficial to encode a level of risk-aversion into the optimisation. Therefore, there are clear advantages to using quantile optimisation for FEL calibration.
258
+
259
+ As we do not have access to the FEL directly, we follow McIntire et al. (2016) and use their 4,074 observed X-ray pulse energy measurements to build a Gaussian process surrogate model from which we can simulate pulse energy at any new magnet configuration. To simulate the effect of observation noise, McIntire et al. (2016) add additional Gaussian perturbations to the simulated values. However, we found that the noise in this system was actually skew Gaussian and varied in scale and skew across the search space. Consequently, we simulate observation noise from a skew Gaussian distribution with location, scale and shape parameters also modelled with additional GPs (i.e. a setup similar to our GLD examples). As many of the 4,074 energy measurements are evaluated at very similar input locations, rounding these inputs to four decimal places provides us with many repeated evaluations, allowing the empirical estimation of each parameter of the skew Gaussian distribution at each of these inputs. The location, scale and shape GPs are then trained to predict the parameters of the skew Gaussian noise distribution for any candidate magnet configuration.
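Skew-Gaussian noise of this kind can be simulated with the standard Azzalini construction (our own sketch; in our experiments the location, scale and shape values would come from the fitted GPs):

```python
import numpy as np

def skew_gaussian_sample(loc, scale, shape, size, rng=None):
    """Azzalini skew-normal draws: delta*|U| + sqrt(1-delta^2)*V is a
    standard skew-normal with slant `shape`, where delta = shape/sqrt(1+shape^2)
    and U, V are independent standard normals."""
    rng = rng or np.random.default_rng()
    delta = shape / np.sqrt(1.0 + shape**2)
    u = np.abs(rng.normal(size=size))
    v = rng.normal(size=size)
    return loc + scale * (delta * u + np.sqrt(1.0 - delta**2) * v)
```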
260
+
261
+ ![01963967-c4f0-7c27-9b8c-2fa4e1d2a1a0_7_413_182_922_490_0.jpg](images/01963967-c4f0-7c27-9b8c-2fa4e1d2a1a0_7_413_182_922_490_0.jpg)
262
+
263
+ Figure 4: The mean and 95% confidence intervals of best 0.3 quantile found across 10 repetitions of the FEL tuning task.
264
+
265
+ Results Figure 4 shows the performance of each algorithm over 10 repetitions, seeking to maximise the ${30}\%$ quantile of pulse energy. The models are initialised with 400 data points randomly chosen from the full dataset, and a further 1,200 points are collected with BO in batches of 100 points. Our algorithms based on quantile GP models substantially outperform the replicate-based GPR baseline. In fact, by using TS with a quantile GP, we are able to find solutions very close to the optimal value (4.8). We hypothesise that the relatively poor performance of our Q-GIBBON acquisition function is due to the high dimension of this problem. The Gumbel sampler used by Q-GIBBON for sampling minimum values is based on random sampling, so its performance likely degrades as the input dimension increases. Since the performance of information-theoretic BO is sensitive to the quality of these samples (Moss et al., 2021), extending information-theoretic BO to high-dimensional problems like FEL tuning remains an open question.
266
+
267
+ ### 4.6 CONCLUDING COMMENTS
268
+
269
+ We have presented a new model to estimate quantiles and expectiles of stochastic black-box functions that is well suited to heteroscedastic cases. We then used the proposed model to create two BO algorithms designed for the optimisation of conditional quantiles and expectiles without repetitions in the experimental design. These algorithms outperform the state of the art on several test problems with different dimensions, quantile orders, budgets and batch sizes.
270
+
271
+ Overall, our experiments clearly show that the performance gap between our approaches and the GPR-EI baseline increases with the batch size and problem dimension. Since GPR-EI relies on repetitions, it is much more limited in terms of exploration, while our approaches can evaluate $B$ unique points at each BO iteration. Hence, our approach is much less sensitive to the curse of dimensionality.
272
+
273
+ Experiments also show that for low-dimensional, smaller batches, Q-GIBBON is the best alternative, while with increasing dimension and batch size, the simpler Thompson sampling seems to perform best. Depending on the available hardware, the parallel nature of TS might also provide substantial advantages in terms of wall-clock time.
274
+
275
+ ## Bibliography
276
+
277
+ Abeywardana, S. and Ramos, F. (2015). Variational inference for nonparametric bayesian quantile regression. In AAAI, pages 1686-1692.
278
+
279
+ Bellini, F. and Di Bernardino, E. (2017). Risk management with expectiles. The European Journal of Finance, 23(6):487-506.
280
+
281
+ Bergstra, J. S., Bardenet, R., Bengio, Y., and Kégl, B. (2011). Algorithms for hyper-parameter optimization. In Advances in neural information processing systems, pages 2546-2554.
282
+
283
+ Berkeley, J., Moss, H. B., Artemev, A., Pascual-Diaz, S., Granta, U., Stojic, H., Couckuyt, I., Qing, J., Loka, N., Paleyes, A., Ober, S. W., and Picheny, V. (2022). Trieste v.0.10.0. https://github.com/secondmind-labs/trieste.
284
+
285
+ Boukouvalas, A., Barillec, R., and Cornford, D. (2012). Gaussian process quantile regression using expectation propagation. arXiv preprint arXiv:1206.6391.
286
+
287
+ Browne, T., Iooss, B., Gratiet, L. L., Lonchampt, J., and Remy, E. (2016). Stochastic simulators based optimization by gaussian process metamodels-application to maintenance investments planning issues. Quality and Reliability Engineering International, 32(6):2067-2080.
288
+
289
+ Cakmak, S., Astudillo Marban, R., Frazier, P., and Zhou, E. (2020). Bayesian optimization of risk measures. Advances in Neural Information Processing Systems, 33:20130-20141.
290
+
291
+ Cannon, A. J. (2011). Quantile regression neural networks: Implementation in R and application to precipitation downscaling. Computers & Geosciences, 37(9):1277-1284.
292
+
293
+ Cover, T. M. and Thomas, J. A. (2012). Elements of information theory. John Wiley & Sons.
294
+
295
+ Desautels, T., Krause, A., and Burdick, J. W. (2014). Parallelizing exploration-exploitation tradeoffs in gaussian process bandit optimization. The Journal of Machine Learning Research, 15(1):3873-3923.
296
+
297
+ Dutordoir, V., Salimbeni, H., Hambro, E., McLeod, J., Leibfried, F., Artemev, A., van der Wilk, M., Hensman, J., Deisenroth, M. P., and John, S. (2021). Gpflux: A library for deep gaussian processes. arXiv preprint arXiv:2104.05674.
298
+
299
+ Eriksson, D., Pearce, M., Gardner, J., Turner, R. D., and Poloczek, M. (2019). Scalable global optimization via local bayesian optimization. Advances in Neural Information Processing Systems, 32.
300
+
301
+ Farooq, M. and Steinwart, I. (2017). An svm-like approach for expectile regression. Computational Statistics & Data Analysis, 109:159-181.
302
+
303
+ Frazier, P., Powell, W., and Dayanik, S. (2009). The knowledge-gradient policy for correlated normal beliefs. INFORMS Journal on Computing, 21(4):599-613.
306
+
307
+ Freimer, M., Kollia, G., Mudholkar, G. S., and Lin, C. T. (1988). A study of the generalized Tukey lambda family. Communications in Statistics-Theory and Methods, 17(10):3547-3567.
308
+
309
+ Hensman, J., Fusi, N., and Lawrence, N. D. (2013). Gaussian processes for big data. arXiv preprint arXiv:1309.6835.
310
+
311
+ Hernández-Lobato, J. M., Hoffman, M. W., and Ghahramani, Z. (2014). Predictive entropy search for efficient global optimization of black-box functions. In Advances in neural information processing systems, pages 918-926.
312
+
313
+ Hernández-Lobato, J. M., Requeima, J., Pyzer-Knapp, E. O., and Aspuru-Guzik, A. (2017). Parallel and distributed thompson sampling for large-scale accelerated exploration of chemical space. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1470-1479. JMLR. org.
314
+
315
+ Jiang, C., Jiang, M., Xu, Q., and Huang, X. (2017). Expectile regression neural network model with applications. Neurocomputing, 247:73-86.
316
+
317
+ Jones, D. R., Schonlau, M., and Welch, W. J. (1998). Efficient global optimization of expensive black-box functions. Journal of Global optimization, 13(4):455-492.
318
+
319
+ Kandasamy, K., Krishnamurthy, A., Schneider, J., and Póczos, B. (2018). Parallelised bayesian optimisation via thompson sampling. In International Conference on Artificial Intelligence and Statistics, pages 133-142. PMLR.
320
+
321
+ Kingma, D. P. and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
322
+
323
+ Koenker, R. and Bassett Jr, G. (1978). Regression quantiles. Econometrica: journal of the Econometric Society, pages 33-50.
324
+
325
+ Makarova, A., Usmanova, I., Bogunovic, I., and Krause, A. (2021). Risk-averse heteroscedastic bayesian optimization. Advances in Neural Information Processing Systems, 34.
326
+
327
+ Marmin, S., Chevalier, C., and Ginsbourger, D. (2015). Differentiating the multipoint expected improvement for optimal batch design. In International Workshop on Machine Learning, Optimization and Big Data, pages 37- 48. Springer.
328
+
329
+ McIntire, M., Ratner, D., and Ermon, S. (2016). Sparse gaussian processes for bayesian optimization. In UAI.
330
+
331
+ Meinshausen, N. (2006). Quantile regression forests. Journal of Machine Learning Research, 7(Jun):983-999.
332
+
333
+ Mockus, J., Tiesis, V., and Zilinskas, A. (1978). The application of bayesian methods for seeking the extremum. Towards global optimization, 2(117-129):2.
334
+
335
+ Moss, H. B., Leslie, D. S., Gonzalez, J., and Rayson, P. (2021). Gibbon: General-purpose information-based bayesian optimisation. Journal of Machine Learning Research, 22(235):1-49.
336
+
337
+ Moss, H. B., Leslie, D. S., and Rayson, P. (2020a). Bosh: Bayesian optimization by sampling hierarchically. arXiv preprint arXiv:2007.00939.
338
+
339
+ Moss, H. B., Leslie, D. S., and Rayson, P. (2020b). Mumbo: Multi-task max-value bayesian optimization. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 447-462. Springer.
340
+
341
+ Newey, W. K. and Powell, J. L. (1987). Asymmetric least squares estimation and testing. Econometrica: Journal of the Econometric Society, pages 819-847.
342
+
343
+ Picheny, V., Wagner, T., and Ginsbourger, D. (2013). A benchmark of kriging-based infill criteria for noisy optimization. Structural and Multidisciplinary Optimization, 48(3):607-626.
344
+
345
+ Plumlee, M. and Tuo, R. (2014). Building accurate emulators for stochastic simulations via quantile kriging. Technometrics, 56(4):466-473.
346
+
347
+ Rasmussen, C. E. (2003). Gaussian processes in machine learning. In Summer School on Machine Learning, pages 63-71. Springer.
348
+
349
+ Rockafellar, R. T., Uryasev, S., et al. (2000). Optimization of conditional value-at-risk. Journal of risk, 2:21-42.
350
+
351
+ Rostek, M. (2010). Quantile maximization in decision theory. The Review of Economic Studies, 77(1):339-371.
352
+
353
+ Saul, A. D., Hensman, J., Vehtari, A., and Lawrence, N. D. (2016). Chained gaussian processes. In Artificial Intelligence and Statistics, pages 1431-1440.
354
+
355
+ Skullerud, H. (1968). The stochastic computer simulation of ion motion in a gas subjected to a constant electric field. Journal of Physics D: Applied Physics, 1(11):1567.
356
+
357
+ Srinivas, N., Krause, A., Kakade, S. M., and Seeger, M. (2009). Gaussian process optimization in the bandit setting: No regret and experimental design. arXiv preprint arXiv:0912.3995.
358
+
359
+ Székely Jr, T. and Burrage, K. (2014). Stochastic simulation in systems biology. Computational and structural biotechnology journal, 12(20-21):14-25.
360
+
361
+ Takeno, S., Fukuoka, H., Tsukada, Y., Koyama, T., Shiga, M., Takeuchi, I., and Karasuyama, M. (2020). Multi-fidelity bayesian optimization with max-value entropy search and its parallelization. In International Conference on Machine Learning, pages 9334-9345. PMLR.
362
+
363
+ Takeuchi, I., Le, Q. V., Sears, T. D., and Smola, A. J. (2006). Nonparametric quantile estimation. Journal of machine learning research, 7(Jul):1231-1264.
364
+
365
+ Titsias, M. (2009). Variational learning of inducing variables in sparse gaussian processes. In Artificial Intelligence and Statistics, pages 567-574.
366
+
367
+ Torossian, L., Picheny, V., Faivre, R., and Garivier, A. (2019). A review on quantile regression for stochastic computer experiments. arXiv preprint arXiv:1901.07874.
368
+
369
+ Vakili, S., Moss, H., Artemev, A., and Picheny, V. (2021). Scalable thompson sampling using sparse gaussian process models. In Advances in neural information processing systems. PMLR.
370
+
371
+ Wang, Z., Gehring, C., Kohli, P., and Jegelka, S. (2018). Batched large-scale bayesian optimization in high-dimensional spaces. In International Conference on Artificial Intelligence and Statistics, pages 745-754. PMLR.
372
+
373
+ Wang, Z. and Jegelka, S. (2017). Max-value entropy search for efficient bayesian optimization. In International Conference on Machine Learning, pages 3627-3635. PMLR.
374
+
375
+ Wilson, J., Borovitskiy, V., Terenin, A., Mostowsky, P., and Deisenroth, M. (2020). Efficiently sampling functions from gaussian process posteriors. In International Conference on Machine Learning, pages 10292-10302. PMLR.
376
+
377
+ Wu, J. and Frazier, P. (2016). The parallel knowledge gradient method for batch bayesian optimization. In Advances in Neural Information Processing Systems, pages 3126- 3134.
378
+
379
+ Yu, K. and Moyeed, R. A. (2001). Bayesian quantile regression. Statistics & Probability Letters, 54(4):437-447.
380
+
381
+ ## A SUPPLEMENTARY MATERIAL: CALCULATION OF Q-GIBBON
382
+
383
+ We derive here the analytical form of our proposed Q-GIBBON acquisition function. For simplicity, we focus on the quantile setting, but the expectile case only requires a straightforward modification of the following derivation.
384
+
385
+ Recall that Q-GIBBON is defined as
386
+
387
+ $$
388
+ {\alpha }_{n}^{\mathrm{Q}\text{-GIBBON }} = \frac{1}{2}\log \left| C\right| - \frac{1}{2M}\mathop{\sum }\limits_{{{g}^{ * } \in {\mathcal{M}}_{n}}}\mathop{\sum }\limits_{{i = 1}}^{B}\log {V}_{i}\left( {g}^{ * }\right) ,
389
+ $$
390
+
391
+ where $\left| C\right|$ is the determinant of the $B \times B$ predictive covariance matrix with elements ${C}_{i, j} = \operatorname{Cov}\left( {{y}_{{x}_{i}},{y}_{{x}_{j}} \mid {\mathcal{D}}_{n}}\right)$ and $V\left( {g}^{ * }\right)$ denotes the conditional variances ${V}_{i}\left( {g}^{ * }\right) =$ $\operatorname{Var}\left( {{y}_{{x}_{i}} \mid {g}^{ * },{\mathcal{D}}_{n}}\right)$ . Therefore, calculating Q-GIBBON boils down to being able to calculate ${V}_{i}\left( {g}^{ * }\right)$ and ${C}_{i, j}$ across any candidate batch of points (i.e. for all $i, j \in \{ 1,.., B\}$ ). We now derive closed-form expressions for ${V}_{i}\left( {g}^{ * }\right)$ and ${C}_{i, j}$ .
392
+
393
+ ### A.1 REQUIRED PREDICTIVE QUANTITIES
394
+
395
+ For ease of notation, we will consider just a single pair of input values ${x}_{1}$ and ${x}_{2}$ and show how to calculate ${V}_{1}\left( {g}^{ * }\right)$ and ${C}_{1,2}$ . Denote the quantiles, scales and (noisy) observations at these two locations as ${g}_{1} = g\left( {x}_{1}\right) \mid {\mathcal{D}}_{n}$ , ${g}_{2} = g\left( {x}_{2}\right) \mid {\mathcal{D}}_{n}$ , ${\sigma }_{1} = \sigma \left( {x}_{1}\right) \mid {\mathcal{D}}_{n}$ , ${\sigma }_{2} = \sigma \left( {x}_{2}\right) \mid {\mathcal{D}}_{n}$ , ${y}_{1} = y\left( {x}_{1}\right) \mid {\mathcal{D}}_{n}$ and ${y}_{2} = y\left( {x}_{2}\right) \mid {\mathcal{D}}_{n}$ , respectively. Then, from our underlying GP models we can extract our current beliefs about these random variables:
396
+
397
+ $$
+ \left( \begin{array}{l} {g}_{1} \\ {g}_{2} \end{array}\right) \sim N\left\lbrack {\left( \begin{matrix} {\mu }_{1}^{g} \\ {\mu }_{2}^{g} \end{matrix}\right) ,\left( \begin{matrix} {\left( {\sigma }_{1}^{g}\right) }^{2} & {\Sigma }_{1,2}^{g} \\ {\Sigma }_{1,2}^{g} & {\left( {\sigma }_{2}^{g}\right) }^{2} \end{matrix}\right) }\right\rbrack ,
+ $$
+
+ $$
+ \left( \begin{array}{l} \log \left( {\sigma }_{1}\right) \\ \log \left( {\sigma }_{2}\right) \end{array}\right) \sim N\left\lbrack {\left( \begin{array}{l} {\mu }_{1}^{\sigma } \\ {\mu }_{2}^{\sigma } \end{array}\right) ,\left( \begin{matrix} {\left( {\sigma }_{1}^{\sigma }\right) }^{2} & {\Sigma }_{1,2}^{\sigma } \\ {\Sigma }_{1,2}^{\sigma } & {\left( {\sigma }_{2}^{\sigma }\right) }^{2} \end{matrix}\right) }\right\rbrack .
+ $$
404
+
405
+ For closed form expressions of ${\mu }_{1}^{g},{\sigma }_{1}^{g},\ldots$ see any GP textbook, e.g. Rasmussen (2003).
406
+
407
+ Before deriving expressions for ${V}_{1}\left( {g}^{ * }\right)$ and ${C}_{1,2}$ , it is convenient to write the conditional mean and variance of our noisy observations ${y}_{1}$ and ${y}_{2}$ . Following Yu and Moyeed (2001), we have
408
+
409
+ $$
410
+ \mathbb{E}\left\lbrack {{y}_{1} \mid {g}_{1},{\sigma }_{1}}\right\rbrack = {g}_{1} + \frac{1 - {2\tau }}{\tau \left( {1 - \tau }\right) }{\sigma }_{1}, \tag{13}
411
+ $$
412
+
413
+ $$
414
+ \operatorname{Var}\left( {{y}_{1} \mid {g}_{1},{\sigma }_{1}}\right) = \frac{1 - {2\tau } + 2{\tau }^{2}}{{\tau }^{2}{\left( 1 - \tau \right) }^{2}}{\sigma }_{1}^{2}, \tag{14}
415
+ $$
416
+
417
+ with similar expressions for the moments of ${y}_{2} \mid {g}_{2},{\sigma }_{2}$ .
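The moments (13) and (14) can be checked numerically. The sketch below (helper names are ours) samples the asymmetric Laplace noise by inverting its closed-form CDF, sets ${g}_{1} = 0$, and compares the empirical moments against the formulas:

```python
import numpy as np

def sample_ald(tau, sigma, size, rng):
    """Draw from the asymmetric Laplace distribution centred at 0,
    p(e) = tau*(1-tau)/sigma * exp(-l_tau(e)/sigma), by inverting its CDF."""
    u = rng.uniform(size=size)
    neg = sigma / (1.0 - tau) * np.log(u / tau)            # branch u < tau  -> e < 0
    pos = -sigma / tau * np.log((1.0 - u) / (1.0 - tau))   # branch u >= tau -> e >= 0
    return np.where(u < tau, neg, pos)

rng = np.random.default_rng(0)
tau, sigma = 0.9, 0.5
e = sample_ald(tau, sigma, 1_000_000, rng)

mean_formula = (1 - 2 * tau) / (tau * (1 - tau)) * sigma                     # Eq. (13), g = 0
var_formula = (1 - 2 * tau + 2 * tau**2) / (tau * (1 - tau))**2 * sigma**2   # Eq. (14)

print(e.mean(), mean_formula)   # empirical vs analytic mean
print(e.var(), var_formula)     # empirical vs analytic variance
print(np.quantile(e, tau))      # ~0: the tau-quantile of the noise sits at g
```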
418
+
419
+ ### A.2 CALCULATING THE CONDITIONAL VARIANCE V
420
+
421
+ We now have all the quantities required to calculate ${V}_{1}\left( {g}^{ * }\right) = \operatorname{Var}\left( {y \mid {g}^{ * }}\right)$ . Recall that ${g}^{ * }$ denotes the minimal value attained by the quantile function $g\left( x\right)$ . First, we use the law of total variance to decompose ${V}_{1}$ into two terms:
422
+
423
+ $$
424
+ {V}_{1} = {\operatorname{Var}}_{{g}_{1},{\sigma }_{1} \mid {g}^{ * }}\left( {\mathbb{E}\left\lbrack {{y}_{1} \mid {g}_{1},{\sigma }_{1},{g}^{ * }}\right\rbrack }\right)
425
+ $$
426
+
427
+ $$
428
+ + {\mathbb{E}}_{{g}_{1},{\sigma }_{1} \mid {g}^{ * }}\left\lbrack {\operatorname{Var}\left( {{y}_{1} \mid {g}_{1},{\sigma }_{1},{g}^{ * }}\right) }\right\rbrack \text{.} \tag{15}
429
+ $$
430
+
431
+ Note that conditioning on ${g}_{1},{\sigma }_{1},{g}^{ * }$ is equivalent to conditioning on ${g}_{1},{\sigma }_{1}$ only, as knowing that ${g}^{ * } = \min g\left( x\right)$ provides no additional information about ${y}_{1}$ once ${g}_{1}$ itself is known. Therefore, we can insert our expressions for the moments of the asymmetric Laplace (13) and (14) into (15) which, after some simple manipulation, gives:
432
+
433
+ $$
434
+ {V}_{1}\left( {g}^{ * }\right) = {\operatorname{Var}}_{{g}_{1} \mid {g}^{ * }}\left( {g}_{1}\right) + \frac{3{\left( 1 - 2\tau \right) }^{2} + 1}{2{\tau }^{2}{\left( 1 - \tau \right) }^{2}}{\mathrm{e}}^{2\left( {{\mu }_{1}^{\sigma } + {\left( {\sigma }_{1}^{\sigma }\right) }^{2}}\right) }
435
+ $$
436
+
437
+ $$
438
+ - \frac{{\left( 1 - 2\tau \right) }^{2}}{{\tau }^{2}{\left( 1 - \tau \right) }^{2}}{\mathrm{e}}^{2{\mu }_{1}^{\sigma } + {\left( {\sigma }_{1}^{\sigma }\right) }^{2}}. \tag{16}
439
+ $$
440
+
441
+ All that remains for the calculation of ${V}_{1}\left( {g}^{ * }\right)$ is an expression for ${\operatorname{Var}}_{{g}_{1} \mid {g}^{ * }}\left( {g}_{1}\right)$ . Fortunately, as shown by Wang and Jegelka (2017), ${g}_{1} \mid {g}^{ * }$ is simply a truncated Gaussian variable (here truncated below at ${g}^{ * }$ , since ${g}^{ * }$ is the minimum of $g$ ). Therefore, using the well-known expression for the variance of a truncated Gaussian, we have
442
+
443
+ $$
444
+ {\operatorname{Var}}_{{g}_{1} \mid {g}^{ * }}\left( {g}_{1}\right) = {\left( {\sigma }_{1}^{g}\right) }^{2}\left( {1 + \frac{\phi \left( {\gamma }_{{g}^{ * }}\right) }{\Psi \left( {\gamma }_{{g}^{ * }}\right) }\left( {{\gamma }_{{g}^{ * }} - \frac{\phi \left( {\gamma }_{{g}^{ * }}\right) }{\Psi \left( {\gamma }_{{g}^{ * }}\right) }}\right) }\right) , \tag{17}
446
+ $$
447
+
448
+
449
+ where ${\gamma }_{{g}^{ * }} = \frac{{g}^{ * } - {\mu }_{1}^{g}}{{\sigma }_{1}^{g}}$ , and $\phi$ and $\Psi$ are the probability density function and the complementary cumulative distribution function (survival function) of a standard Gaussian variable, respectively.
450
+
451
+ Finally, inserting (17) into (16) yields a closed-form expression for ${V}_{1}\left( {g}^{ * }\right)$.
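Equation (17) can be checked against a standard library implementation. Since ${g}^{ * }$ is a minimum, we read $\Psi$ as the standard Gaussian survival function ( ${g}_{1}$ is truncated below at ${g}^{ * }$ ); under that assumption the formula matches `scipy.stats.truncnorm` (values below are arbitrary test values):

```python
import numpy as np
from scipy.stats import norm, truncnorm

def var_trunc(mu, sd, g_star):
    """Eq. (17): variance of g1 | g1 >= g_star, with Psi read as the
    standard Gaussian survival function."""
    gamma = (g_star - mu) / sd
    r = norm.pdf(gamma) / norm.sf(gamma)   # phi(gamma) / Psi(gamma)
    return sd**2 * (1.0 + r * (gamma - r))

mu, sd, g_star = 0.3, 1.2, -0.5
# scipy parameterises the truncation bounds on the standardised scale
a = (g_star - mu) / sd
reference = truncnorm.var(a, np.inf, loc=mu, scale=sd)
print(var_trunc(mu, sd, g_star), reference)
```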
452
+
453
+ ### A.3 CALCULATING THE PREDICTIVE COVARIANCE C
454
+
455
+ Just as when calculating the conditional variance ${V}_{1}$ , we begin our decomposition of ${C}_{1,2} = \operatorname{Cov}\left( {{y}_{1},{y}_{2}}\right)$ by applying the law of total covariance to get the following two-term expansion:
456
+
457
+ $$
458
+ {C}_{1,2} = {\operatorname{Cov}}_{{g}_{1},{g}_{2},{\sigma }_{1},{\sigma }_{2}}\left( {\mathbb{E}\left\lbrack {{y}_{1} \mid {g}_{1},{\sigma }_{1}}\right\rbrack ,\mathbb{E}\left\lbrack {{y}_{2} \mid {g}_{2},{\sigma }_{2}}\right\rbrack }\right)
459
+ $$
460
+
461
+ $$
462
+ + {\mathbb{E}}_{{g}_{1},{g}_{2},{\sigma }_{1},{\sigma }_{2}}\left\lbrack {\operatorname{Cov}\left( {{y}_{1},{y}_{2} \mid {g}_{1},{g}_{2},{\sigma }_{1},{\sigma }_{2}}\right) }\right\rbrack . \tag{18}
463
+ $$
464
+
465
+ Now, as ${y}_{1} \mid {g}_{1},{\sigma }_{1}$ and ${y}_{2} \mid {g}_{2},{\sigma }_{2}$ are independent (all that remains after this conditioning is observation noise), the second term of (18) is in fact zero (at least for distinct ${x}_{1}$ and ${x}_{2}$ ). To calculate the first term of (18), we insert the expression for the first moment of $y \mid g,\sigma$ (i.e. Equation (13)) which, after recalling the independence of $g$ and $\sigma$ , yields
466
+
467
+ $$
468
+ {C}_{1,2} = {\operatorname{Cov}}_{{g}_{1},{g}_{2}}\left( {{g}_{1},{g}_{2}}\right)
469
+ $$
470
+
471
+ $$
472
+ + \frac{{\left( 1 - 2\tau \right) }^{2}}{{\tau }^{2}{\left( 1 - \tau \right) }^{2}}{\operatorname{Cov}}_{{\sigma }_{1},{\sigma }_{2}}\left( {{\sigma }_{1},{\sigma }_{2}}\right) . \tag{19}
473
+ $$
474
+
475
+ Finally, we can extract $\operatorname{Cov}\left( {{g}_{1},{g}_{2}}\right)$ and $\operatorname{Cov}\left( {{\sigma }_{1},{\sigma }_{2}}\right)$ from our underlying GP models as ${\Sigma }_{1,2}^{g}$ and ${\mathrm{e}}^{{\mu }_{1}^{\sigma } + {\mu }_{2}^{\sigma } + {0.5}\left( {{\left( {\sigma }_{1}^{\sigma }\right) }^{2} + {\left( {\sigma }_{2}^{\sigma }\right) }^{2}}\right) }\left( {{\mathrm{e}}^{{\Sigma }_{1,2}^{\sigma }} - 1}\right)$ (using the formula for the covariance of jointly log-Gaussian variables). Inserting these two covariances into (19) provides a closed-form expression for ${C}_{1,2}$ .
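The covariance formula for jointly log-Gaussian variables used here can be verified by Monte Carlo (a quick sanity check, with arbitrary test values):

```python
import numpy as np

rng = np.random.default_rng(1)
mu = np.array([0.1, -0.2])
S = np.array([[0.30, 0.12],
              [0.12, 0.25]])          # joint covariance of (log s1, log s2)

w = rng.multivariate_normal(mu, S, size=1_000_000)
s = np.exp(w)                          # jointly log-Gaussian (s1, s2)

# Closed form: Cov(s1, s2) = exp(mu1 + mu2 + 0.5*(S11 + S22)) * (exp(S12) - 1)
closed = np.exp(mu.sum() + 0.5 * (S[0, 0] + S[1, 1])) * np.expm1(S[0, 1])
empirical = np.cov(s[:, 0], s[:, 1])[0, 1]
print(closed, empirical)
```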
476
+
477
+ ## B SUPPLEMENTARY MATERIAL: RFF FOR MATÉRN KERNELS
478
+
479
+ We present in this section how to use RFFs to generate samples from $d$ -dimensional Matérn kernels with regularity $\nu$ , variance ${\sigma }^{2}$ and lengthscales $\theta \in {\mathbb{R}}^{d}$ . We start from the spectral density of a Matérn kernel:
480
+
481
+ $$
482
+ s\left( w\right) = {\sigma }^{2}{\left| \Lambda \right| }^{1/2}\frac{\Gamma \left( {\frac{d}{2} + \nu }\right) }{\Gamma \left( \nu \right) }\frac{{\left( 2\sqrt{\pi }\right) }^{d}}{{\left( 1 + {w}^{T}\Lambda w\right) }^{\frac{d}{2} + \nu }},
483
+ $$
484
+
485
+ where $\Lambda = \operatorname{diag}\left( {{\theta }_{1},\cdots ,{\theta }_{d}}\right)$ is the diagonal matrix containing the lengthscale hyperparameters. Using the change of variable ${\Lambda }^{\prime } = {2\nu } \times \Lambda$ and introducing a rescaling factor ${\sigma }^{2}{\left( \sqrt{2}\pi \right) }^{d}$ , one recognises the probability density function of the multivariate $t$ -distribution:
486
+
487
+ $$
488
+ p\left( w\right) = {\left| \Lambda \right| }^{1/2}\frac{\Gamma \left( {\frac{d}{2} + \nu }\right) }{\Gamma \left( \nu \right) {\pi }^{d/2}{\nu }^{d/2}}\frac{1}{{\left( 1 + \frac{1}{2\nu }{w}^{T}\Lambda w\right) }^{\frac{d}{2} + \nu }}.
489
+ $$
490
+
491
+ As a consequence, prior samples can be generated by computing
492
+
493
+ $$
494
+ g\left( x\right) = \sigma \sqrt{2{\left( \sqrt{2}\pi \right) }^{d}/m}\mathop{\sum }\limits_{{i = 1}}^{m}{\omega }_{i}\cos \left( {{w}_{i}^{T}x + {b}_{i}}\right)
495
+ $$
496
+
497
+ where ${\omega }_{i} \sim \mathcal{N}\left( {0,1}\right) ,{w}_{i} \sim p,{b}_{i} \sim \mathcal{U}\left( {0,{2\pi }}\right)$ , and $m$ is the number of features.
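A minimal 1-D sketch of this sampler follows, using the standard $\sqrt{2/m}$ RFF normalisation (the paper's extra ${\left( \sqrt{2}\pi \right) }^{d}$ factor comes from its spectral-density convention and is omitted here). The frequencies are drawn from a Student- $t$ with ${2\nu }$ degrees of freedom scaled by $1/\theta$ , whose characteristic function is the Matérn kernel; we check the induced kernel approximation against the closed-form Matérn 5/2:

```python
import numpy as np

rng = np.random.default_rng(2)
nu, theta, var, m = 2.5, 0.7, 1.0, 200_000

# Frequencies: Student-t with 2*nu degrees of freedom, scaled by 1/theta
w = rng.standard_t(2 * nu, size=m) / theta
b = rng.uniform(0.0, 2.0 * np.pi, size=m)

def features(x):
    return np.sqrt(2.0 * var / m) * np.cos(w * x + b)

def matern52(r, theta, var):
    z = np.sqrt(5.0) * np.abs(r) / theta
    return var * (1.0 + z + z**2 / 3.0) * np.exp(-z)

x1, x2 = 0.0, 0.3
approx = features(x1) @ features(x2)          # k(x1, x2) ~ phi(x1)^T phi(x2)
print(approx, matern52(x2 - x1, theta, var))

# A prior function draw: g(x) = omega^T phi(x), omega ~ N(0, I_m)
omega = rng.standard_normal(m)
g = lambda x: omega @ features(x)
```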
498
+
499
+ ## C SUPPLEMENTARY MATERIAL: DESCRIPTION OF THE GLD SYNTHETIC CASE
500
+
501
+ Several formulations of the GLD exist; we use here the parameterisation of Freimer et al. (1988). The GLD is defined by its quantile function:
502
+
503
+ $$
504
+ Q\left( u\right) = {\lambda }_{0} + {\lambda }_{1}\left( {{T}_{1} - {T}_{2}}\right) , \tag{20}
505
+ $$
506
+
507
+ with:
508
+
509
+ $$
510
+ {T}_{1} = \left\{ \begin{array}{ll} \frac{{u}^{{\lambda }_{2}} - 1}{{\lambda }_{2}} & \text{ if }{\lambda }_{2} \neq 0 \\ \log \left( u\right) & \text{ if }{\lambda }_{2} = 0 \end{array}\right.
511
+ $$
512
+
513
+ $$
514
+ {T}_{2} = \left\{ \begin{array}{ll} \frac{{\left( 1 - u\right) }^{{\lambda }_{3}} - 1}{{\lambda }_{3}} & \text{ if }{\lambda }_{3} \neq 0 \\ \log \left( {1 - u}\right) & \text{ if }{\lambda }_{3} = 0 \end{array}\right.
515
+ $$
516
+
517
+ Here, the only constraint for the parameter values is ${\lambda }_{1} > 0$ .
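A direct implementation of (20) (function name is ours) also gives an inverse-transform sampler, since $Y = Q\left( U\right)$ with $U \sim \mathcal{U}\left( {0,1}\right)$ has quantile function $Q$ ; note that ${\lambda }_{1} > 0$ guarantees $Q$ is non-decreasing:

```python
import numpy as np

def gld_quantile(u, lam0, lam1, lam2, lam3):
    """Quantile function Q(u) of the GLD, Freimer et al. (1988)
    parameterisation, Eq. (20); requires lam1 > 0."""
    u = np.asarray(u, dtype=float)
    t1 = np.log(u) if lam2 == 0 else (u**lam2 - 1.0) / lam2
    t2 = np.log(1.0 - u) if lam3 == 0 else ((1.0 - u)**lam3 - 1.0) / lam3
    return lam0 + lam1 * (t1 - t2)

u = np.linspace(0.01, 0.99, 99)
q = gld_quantile(u, lam0=1.0, lam1=2.0, lam2=0.5, lam3=-0.3)
print(bool(np.all(np.diff(q) > 0)))   # quantile function is non-decreasing

# lam2 = lam3 = 0 recovers the logistic case Q(u) = lam0 + lam1*log(u/(1-u)),
# so the median equals lam0
print(gld_quantile(0.5, 1.0, 2.0, 0.0, 0.0))

# Inverse-transform sampling: Y = Q(U), U ~ Uniform(0, 1)
rng = np.random.default_rng(3)
y = gld_quantile(rng.uniform(size=10_000), 1.0, 2.0, 0.5, -0.3)
```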
518
+
519
+ To define an experiment, each ${\lambda }_{j}$ is a realisation of a GP, except for ${\lambda }_{1}$ for which we use a softplus transform to ensure positivity:
520
+
521
+ $$
522
+ {\lambda }_{j}\left( x\right) \sim \mathcal{G}\mathcal{P}\left( {0, k\left( {\cdot , \cdot }\right) }\right) ,\;j \in \{ 0,2,3\} ,
523
+ $$
524
+
525
+ $$
526
+ \phi \left( {{\lambda }_{1}\left( x\right) }\right) \sim \mathcal{G}\mathcal{P}\left( {0, k\left( {\cdot , \cdot }\right) }\right) ,
527
+ $$
528
+
529
+ with ${\phi }^{-1}\left( w\right) = \log \left( {1 + {e}^{w}}\right)$ . All GPs have a Matérn 5/2 kernel $k$ with unit variance. We add to ${\lambda }_{0}\left( x\right)$ a small quadratic mean function to avoid having the optimum located on the edges of the domain. We use a lengthscale of 0.5 in dimension 3 and 1.0 in dimension 6. These settings ensure that the 6-dimensional test cases do not have too many local optima.
UAI/UAI 2022/UAI 2022 Conference/B248iw8jce5/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,279 @@
1
+ § BAYESIAN QUANTILE AND EXPECTILE OPTIMISATION
2
+
3
+ § ABSTRACT
4
+
5
+ Bayesian optimisation (BO) is widely used to optimise stochastic black box functions. While most BO approaches focus on optimising conditional expectations, many applications require risk-averse strategies and alternative criteria accounting for the distribution tails need to be considered. In this paper, we propose new variational models for Bayesian quantile and expectile regression that are well-suited for heteroscedastic noise settings. Our models consist of two latent Gaussian processes accounting respectively for the conditional quantile (or expectile) and the scale parameter of an asymmetric likelihood function. Furthermore, we propose two BO strategies based on entropy search and Thompson sampling, that are tailored to such models and that can accommodate large batches of points. Contrary to existing BO approaches for risk-averse optimisation, our strategies can directly optimise for the quantile and expectile, without requiring replicating observations or assuming a parametric form for the noise. As illustrated in the experimental section, the proposed approach clearly outperforms the state of the art in the heteroscedastic, non-Gaussian case.
6
+
7
+ § 1 INTRODUCTION
8
+
9
+ Let $\Psi : \mathcal{X} \times \Omega \rightarrow \mathbb{R}$ be an unknown function, where $\mathcal{X} \subset {\left\lbrack 0,1\right\rbrack }^{D}$ and $\Omega$ denotes a probability space representing some uncontrolled variables. For any fixed $x \in \mathcal{X},{Y}_{x} = \Psi \left( {x, \cdot }\right)$ is a random variable of distribution ${\mathbb{P}}_{x}$ . We assume here a classical black-box optimisation framework: $\Psi$ is available only through (costly) pointwise evaluations of ${Y}_{x}$ . Typical examples may include stochastic simulators in physics or biology (see Skullerud (1968) for simulations of ion motion and Székely Jr and Burrage (2014) for simulations of heterogeneous natural systems), but $\Psi$ can also represent the performance of a machine learning algorithm according to some hyperparameters (see Bergstra et al. (2011) for instance). In the latter case, the randomness can come from the use of minibatching in the training procedure, the choice of a stochastic optimiser or the randomness in the initialisation of the optimiser.
10
+
11
+ Let $g\left( x\right) = \rho \left( {\mathbb{P}}_{x}\right)$ be the objective function we want to maximise, where $\rho$ is a real-valued functional defined on probability measures. The canonical choice for $\rho$ is the expectation, which is sensible when the exposition to extreme values is not a significant aspect of the decision. However, in a large variety of fields such as agronomy, medicine or finance, decision makers have an incentive to protect themselves against extreme events since they may lead to severe consequences. To take these rare events into account, one should consider alternative choices for $\rho$ that can capture the behaviour of the tails of ${\mathbb{P}}_{x}$ , such as the quantile (Rostek, 2010), conditional value-at-risk (CVaR, see Rockafellar et al. (2000)) or expectile (Bellini and Di Bernardino, 2017). In this paper we focus on the modelling and optimisation of quantiles and expectiles.
12
+
13
+ Given an estimate of $g$ based on available data, global optimisation algorithms define a policy that finds a trade-off between exploration and intensification. More precisely, the algorithm has to explore the input space in order to avoid getting trapped in a local optimum, but it also has to concentrate its budget on input regions identified as having a high potential. The latter results in accurate estimates of $g$ in the region of interest and allows the algorithm to return an optimal input value with high precision.
14
+
15
+ In the context of Bayesian optimisation (BO), such trade-offs have been initially studied by Mockus et al. (1978) and Jones et al. (1998) in a noise-free setting. Their framework has later been extended to optimisation of the conditional expectation of a stochastic black box (see e.g. Frazier et al. (2009); Srinivas et al. (2009) or Picheny et al. (2013) for a review). Recently, strategies optimising risk measures have been proposed. In particular, Cakmak et al. (2020) proposed new algorithms to optimise for the quantile and CVaR for a slightly different use case, where the space $\Omega$ is actually controllable. Browne et al. (2016) and Makarova et al. (2021) proposed algorithms to optimise quantiles and CVaRs, but both rely on intensively repeating observations, which hinders their efficiency in a relatively low budget scenario.
16
+
17
+ Contributions The contributions of this paper are the following: 1) We propose a new model based on two latent Gaussian Processes (GPs) to estimate quantiles or expectiles that is tailored to heteroscedastic noise. 2) We use sparse variational inference to support potentially large datasets. 3) We propose a new Bayesian algorithm suited to optimise conditional quantiles or expectiles in a data-efficient manner. Two batch-sequential acquisition strategies are designed to find a good trade-off between exploration and intensification. The ability of our algorithm to optimise quantiles is illustrated on multiple test problems.
18
+
19
+ § 2 BAYESIAN METAMODELS OF RISK MEASURES
20
+
21
+ For a given input point $x$ , the quantile of order $\tau \in \left( {0,1}\right)$ of ${Y}_{x}$ can be defined as
22
+
23
+ $$
24
+ {q}_{\tau }\left( x\right) = \underset{q \in \mathbb{R}}{\arg \min }\mathbb{E}\left\lbrack {{l}_{\tau }\left( {{Y}_{x} - q}\right) }\right\rbrack , \tag{1}
25
+ $$
26
+
27
+ where ${l}_{\tau }$ is the pinball loss (Koenker and Bassett Jr, 1978)
28
+
29
+ $$
30
+ {l}_{\tau }\left( \xi \right) = \left( {\tau - {\mathbb{1}}_{\left( \xi < 0\right) }}\right) \xi ,\;\xi \in \mathbb{R}. \tag{2}
31
+ $$
32
+
33
+ Similarly, Newey and Powell (1987) introduced the expectile as the minimiser of an asymmetric quadratic loss:
34
+
35
+ $$
36
+ {e}_{\tau }\left( x\right) = \underset{q \in \mathbb{R}}{\arg \min }\mathbb{E}\left\lbrack {{l}_{\tau }^{e}\left( {{Y}_{x} - q}\right) }\right\rbrack , \tag{3}
37
+ $$
38
+
39
+ $$
40
+ {l}_{\tau }^{e}\left( \xi \right) = \left| {\tau - {\mathbb{1}}_{\left( \xi < 0\right) }}\right| {\xi }^{2},\;\xi \in \mathbb{R}. \tag{4}
41
+ $$
42
+
43
+ We detail in the next section how these losses can be used to get an estimate of the objective function $g\left( x\right)$ using a dataset ${\mathcal{D}}_{n} = \left( {\left( {{x}_{1},{y}_{1}}\right) \cdots ,\left( {{x}_{n},{y}_{n}}\right) }\right) = \left( {{\mathcal{X}}_{n},{\mathcal{Y}}_{n}}\right)$ that does not necessarily require replicates of observations at the same input location.
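As a quick numerical illustration of definitions (1)-(4), the empirical risk minimisers below recover the sample quantile, and at $\tau = {0.5}$ the expectile reduces to the mean (a brute-force grid search, for illustration only):

```python
import numpy as np

def pinball(xi, tau):
    # Pinball loss l_tau, Eq. (2)
    return (tau - (xi < 0)) * xi

def asym_square(xi, tau):
    # Asymmetric quadratic loss l^e_tau, Eq. (4)
    return np.abs(tau - (xi < 0)) * xi**2

rng = np.random.default_rng(4)
y = rng.gamma(shape=2.0, scale=1.0, size=20_000)   # draws of a skewed Y_x
tau = 0.8
grid = np.linspace(0.0, 10.0, 1001)

q_hat = grid[np.argmin([pinball(y - q, tau).mean() for q in grid])]
e_hat = grid[np.argmin([asym_square(y - q, tau).mean() for q in grid])]
e50 = grid[np.argmin([asym_square(y - q, 0.5).mean() for q in grid])]

print(q_hat, np.quantile(y, tau))  # minimiser of (1) is the tau-quantile
print(e_hat)                       # the tau-expectile
print(e50, y.mean())               # at tau = 0.5 the expectile is the mean
```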
44
+
45
+ § 2.1 QUANTILE AND EXPECTILE METAMODEL
46
+
47
+ Different metamodels have been proposed to estimate a quantile function, such as artificial neural networks (Cannon, 2011), random forests (Meinshausen, 2006) or nonparametric estimation in reproducing kernel Hilbert spaces (Takeuchi et al., 2006). While the literature on expectile regression is less extensive, neural network (Jiang et al., 2017) or SVM-like approaches (Farooq and Steinwart, 2017) have been developed as well. All the approaches cited above define an estimator of $g$ as the function that minimises (optionally with a regularisation term)
48
+
49
+ $$
50
+ {\mathcal{R}}_{e}\left\lbrack g\right\rbrack = \frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}l\left( {{y}_{i} - g\left( {x}_{i}\right) }\right) , \tag{5}
51
+ $$
52
+
53
+ with $l = {l}_{\tau }$ for the quantile estimation and $l = {l}_{\tau }^{e}$ for the expectile. This framework makes sense because, asymptotically, minimising (5) is equivalent to minimising (1) or (3).
54
+
55
+ These approaches however share a common drawback: they do not capture the uncertainty associated with each prediction. This is a significant problem in our setting since quantifying this uncertainty is of paramount importance to define the exploration/intensification trade-off. This limitation can be overcome by using a probabilistic model such as
56
+
57
+ $$
58
+ y = g\left( x\right) + \epsilon \left( x\right) ,
59
+ $$
60
+
61
+ where $g$ is either an unknown parametric function (Yu and Moyeed, 2001) or a Gaussian process (Boukouvalas et al., 2012; Abeywardana and Ramos, 2015), and where the distribution of $\epsilon$ depends on the quantity to be estimated. For modelling a quantile, $\epsilon$ should follow an asymmetric Laplace distribution:
62
+
63
+ $$
64
+ {p}_{\epsilon }\left( e\right) = \frac{\tau \left( {1 - \tau }\right) }{\sigma }\exp \left( {-\frac{{l}_{\tau }\left( e\right) }{\sigma }}\right) .
65
+ $$
66
+
67
+ For approximating an expectile, one can use the asymmetric Gaussian distribution:
68
+
69
+ $$
70
+ {p}_{\epsilon }\left( e\right) = C\left( {\tau ,\sigma }\right) \exp \left( {-\frac{{l}_{\tau }^{e}\left( e\right) }{2{\sigma }^{2}}}\right) , \tag{6}
71
+ $$
72
+
73
+ with $C\left( {\tau ,\sigma }\right) = \frac{\sqrt{{2\tau }\left( {1 - \tau }\right) }}{\sigma \sqrt{\pi }\left( {\sqrt{\tau } + \sqrt{1 - \tau }}\right) }$ .
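The normalising constant $C\left( {\tau ,\sigma }\right)$ can be sanity-checked by quadrature (a sketch with arbitrary test values; the integral is split at the kink of the loss):

```python
import numpy as np
from scipy.integrate import quad

def asym_gauss_pdf(e, tau, sigma):
    # Asymmetric Gaussian density of Eq. (6), with C(tau, sigma) as above
    C = np.sqrt(2 * tau * (1 - tau)) / (sigma * np.sqrt(np.pi) * (np.sqrt(tau) + np.sqrt(1 - tau)))
    return C * np.exp(-abs(tau - (e < 0)) * e**2 / (2 * sigma**2))

sigma = 0.7
totals = [quad(asym_gauss_pdf, -np.inf, 0, args=(tau, sigma))[0]
          + quad(asym_gauss_pdf, 0, np.inf, args=(tau, sigma))[0]
          for tau in (0.1, 0.5, 0.9)]
print(totals)   # each ~1.0, so the density integrates to one
```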
74
+
75
+ In both cases, the associated likelihood is given by
76
+
77
+ $$
78
+ p\left( {{\mathcal{Y}}_{n} \mid g}\right) = \mathop{\prod }\limits_{{i = 1}}^{n}{p}_{\epsilon }\left( {{y}_{i} - g\left( {x}_{i}\right) }\right) . \tag{7}
79
+ $$
80
+
81
+ Although the Bayesian quantile model presented above is well known (Yu and Moyeed, 2001; Boukouvalas et al., 2012; Abeywardana and Ramos, 2015), the Bayesian expectile model we just introduced is new to the best of our knowledge. It is worth noting that the non-conjugacy between the prior on $g$ and the likelihood functions implies that the posterior distribution of $g$ given the data is not available in closed form. To overcome this, Boukouvalas et al. (2012) use expectation propagation whereas Abeywardana and Ramos (2015) favour variational inference. The latter appears to be one of the most competitive approaches on the benchmark presented in Torossian et al. (2019), so we embrace the variational inference framework for the remainder of the paper.
82
+
83
+
84
+
85
+ Figure 1: GP quantile model from Abeywardana and Ramos (2015) (left) and ours (right) on data with high heteroscedasticity. The left model cannot compromise between the very small observation variances around $x = 4$ and the very large variances $\left( {x \leq 2}\right)$ : it largely overfits on half of the domain and returns overconfident confidence intervals. In contrast, our model captures both the low and high variance regions, while returning well-calibrated confidence intervals.
86
+
87
+ One limitation of the aforementioned methods is that they can result in overconfident predictions in heteroscedastic settings, as illustrated in Figure 1. The main reason is that they use only a single parameter $\sigma$ to capture the spread of the likelihood function, which amounts to assuming that the noise amplitude does not change over the input space. We believe this can be a severe limitation in the context of quantile optimisation, since the fluctuation of the quantile value over the input space is likely to be dictated by the noise distribution itself not being stationary.
88
+
89
+ To overcome this issue, we propose to build quantile and expectile models where the spread of the asymmetric Laplace and Gaussian likelihoods varies across the input space. For both distributions, this can be achieved by redefining $\sigma$ in Equations (6) and (7) as a function of the input parameters. Intuitively, a small value of $\sigma \left( x\right)$ means that there is a high penalty for having an estimate of $g\left( x\right)$ that is far away from the data, whereas a large value of $\sigma \left( x\right)$ means that this penalty is limited and thus leads to more regularity in the model predictions. In practice, we choose a Gaussian prior for $g$ and a log-Gaussian prior for $\sigma$ ,
90
+
91
+ $$
92
+ g\left( x\right) \sim \mathcal{G}\mathcal{P}\left( {{\mu }_{g}\left( x\right) ,{k}_{\theta }^{g}\left( {x,{x}^{\prime }}\right) }\right) , \tag{8}
93
+ $$
94
+
95
+ $$
96
+ \log \sigma \left( x\right) \sim \mathcal{G}\mathcal{P}\left( {{\mu }_{\sigma }\left( x\right) ,{k}_{\theta }^{\sigma }\left( {x,{x}^{\prime }}\right) }\right) . \tag{9}
97
+ $$
98
+
99
+ This model can be compared to the Heteroskedastic GP model introduced by Saul et al. (2016), but with a different likelihood function so that the posterior mode corresponds to a quantile or an expectile.
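A generative sketch of the model (8)-(9) follows (all settings are illustrative, not the paper's experimental values): draw $g$ and $\log \sigma$ from independent GP priors on a grid, then generate observations with asymmetric Laplace noise, so that the $\tau$ -quantile of $y\left( x\right)$ is $g\left( x\right)$ :

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(0.0, 1.0, 200)

def matern52_K(x, theta=0.2, var=1.0):
    r = np.abs(x[:, None] - x[None, :])
    z = np.sqrt(5.0) * r / theta
    return var * (1.0 + z + z**2 / 3.0) * np.exp(-z)

K = matern52_K(x) + 1e-6 * np.eye(x.size)
L = np.linalg.cholesky(K)
g = L @ rng.standard_normal(x.size)                     # latent quantile, Eq. (8)
sigma = np.exp(L @ rng.standard_normal(x.size) - 1.0)   # latent scale, Eq. (9)

# Asymmetric Laplace noise (inverse-CDF sampling) with input-dependent scale
tau = 0.7
u = rng.uniform(size=x.size)
eps = np.where(u < tau,
               sigma / (1.0 - tau) * np.log(u / tau),
               -sigma / tau * np.log((1.0 - u) / (1.0 - tau)))
y = g + eps
print(y.shape)
```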
100
+
101
+ § 2.2 INFERENCE PROCEDURE
102
+
103
+ Although one can obtain a reasonable estimate of a mean value using only a handful of samples, inferring quantiles or expectiles tends to require a much larger number of observations, since they capture information from the tails of the distribution. The inference procedure for the proposed probabilistic model must thus be able to cope with relatively large datasets, with a number of observations on the order of a few thousand to tens of thousands of data points.
104
+
105
+ A well established method that supports both large datasets and non-conjugate likelihoods is the Sparse Variational GP framework (Titsias, 2009; Hensman et al., 2013). It consists in approximating the intractable or computationally expensive posterior distribution $p\left( {g,\sigma \mid {\mathcal{Y}}_{n}}\right)$ by a distribution $p\left( {g,\sigma \mid g\left( Z\right) = {u}_{g},\sigma \left( Z\right) = {u}_{\sigma }}\right)$ , where $Z \in {\mathcal{X}}^{N}$ and ${u}_{g},{u}_{\sigma }$ are $N$ -dimensional random variables:
106
+
107
+ $$
108
+ {u}_{g} \sim \mathcal{N}\left( {{u}_{g} \mid {\mu }_{g},{S}_{g}}\right) \text{ and }{u}_{\sigma } \sim \mathcal{N}\left( {{u}_{\sigma } \mid {\mu }_{\sigma },{S}_{\sigma }}\right) .
109
+ $$
110
+
111
+ The parameters $Z,{\mu }_{g},{S}_{g},{\mu }_{\sigma },{S}_{\sigma }$ are referred to as the variational parameters. The $Z$ ’s are often called inducing points. Intuitively, ${u}_{g}$ and ${u}_{\sigma }$ are random variables that act as pseudo-observations at the inducing point locations.
112
+
113
+ The variational parameters can be optimised jointly with the model parameters (e.g. mean function coefficients or kernel hyperparameters) such that the Kullback-Leibler divergence between the approximate and the true posterior is as small as possible. In practice, this is achieved by maximising the Evidence Lower Bound (ELBO):
114
+
115
+ $$
116
+ \mathop{\sum }\limits_{{i = 1}}^{n}\int \log p\left( {{y}_{i} \mid {g}_{i},{\sigma }_{i}}\right) \widetilde{p}\left( {g}_{i}\right) \widetilde{p}\left( {\sigma }_{i}\right) d{g}_{i}d{\sigma }_{i}
117
+ $$
118
+
119
+ $$
120
+ - \operatorname{kl}\left( {\widetilde{p}\left( {u}_{g}\right) \parallel p\left( {u}_{g}\right) }\right) - \operatorname{kl}\left( {\widetilde{p}\left( {u}_{\sigma }\right) \parallel p\left( {u}_{\sigma }\right) }\right) ,
121
+ $$
122
+
123
+ where $\widetilde{p}\left( {g}_{i}\right)$ and $\widetilde{p}\left( {\sigma }_{i}\right)$ are shorthands for the variational posterior distributions at ${x}_{i}$ :
124
+
125
+ $$
126
+ \widetilde{p}\left( {g}_{i}\right) = \int p\left( {g\left( {x}_{i}\right) \mid g\left( Z\right) = {u}_{g}}\right) p\left( {u}_{g}\right) d{u}_{g}
127
+ $$
128
+
129
+ $$
130
+ = \mathcal{N}\left( {{g}_{i} \mid {K}_{{x}_{i},{u}_{g}}{K}_{{u}_{g},{u}_{g}}^{-1}{\mu }_{g},{K}_{{x}_{i},{x}_{i}} + {Q}_{g}}\right) ,
131
+ $$
132
+
133
+ where ${Q}_{g} = {K}_{{x}_{i},{u}_{g}}{K}_{{u}_{g},{u}_{g}}^{-1}\left( {{S}_{g} - {K}_{{u}_{g},{u}_{g}}}\right) {K}_{{u}_{g},{u}_{g}}^{-1}{K}_{{u}_{g},{x}_{i}}$ .
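The predictive equations above can be written in a few lines of linear algebra. The sketch below (RBF kernel and values chosen for illustration) also checks the standard sanity property that setting $q\left( {u}_{g}\right) = p\left( {u}_{g}\right)$ recovers the prior:

```python
import numpy as np

def rbf(a, b, var=1.0, ell=0.3):
    return var * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

Z = np.linspace(0.0, 1.0, 10)          # inducing points
Xs = np.linspace(0.0, 1.0, 50)         # prediction points
Kuu = rbf(Z, Z) + 1e-8 * np.eye(Z.size)
Kxu = rbf(Xs, Z)
Kxx = rbf(Xs, Xs)

def svgp_predict(mu_g, S_g):
    """Marginal mean/variance of the variational posterior ~p(g(x))."""
    A = Kxu @ np.linalg.solve(Kuu, np.eye(Z.size))
    mean = A @ mu_g
    cov = Kxx + A @ (S_g - Kuu) @ A.T   # Kxx + Q_g
    return mean, np.diag(cov)

# Sanity check: with q(u_g) equal to the prior p(u_g) = N(0, Kuu),
# the variational posterior collapses back to the GP prior
mean, var = svgp_predict(np.zeros(Z.size), Kuu)
print(np.allclose(mean, 0.0), np.allclose(var, np.diag(Kxx)))
```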
134
+
135
+ The proposed inference scheme is similar to the one used in Saul et al. (2016), with the notable difference that the non-differentiability of the pinball loss at the origin means we must resort to a first-order optimiser, such as ADAM (Kingma and Ba, 2014), that can handle a non-differentiable objective function.
136
+
137
+ § 3 BAYESIAN OPTIMISATION
138
+
139
+ Classical BO algorithms work as follows. First, a posterior distribution on $g$ is inferred from an initial set of experiments ${\mathcal{D}}_{n}$ (typically obtained using a space-filling design). Then the next input point to evaluate is chosen as the maximiser of an acquisition function ${\alpha }_{n} : \mathcal{X} \rightarrow \mathbb{R}$ , computed from the posterior. The objective function is sampled at the chosen input and the posterior on $g$ is updated. These steps are repeated until the budget is exhausted. The efficiency of such strategies depends on how informative the $g$ posterior is, but also on the exploration/exploitation trade-off provided by the acquisition function. Many acquisition functions have been designed to control this trade-off, among them the expected improvement (EI, Jones et al., 1998), upper confidence bound (UCB, Srinivas et al., 2009), knowledge gradient (KG, Frazier et al., 2009) and predictive entropy search (PES, Hernández-Lobato et al., 2014).
140
+
141
+ In the case of quantiles and expectiles, adding points one at a time is impractical since many points are typically necessary to modify the $g$ posterior significantly. Hence, we focus here on batch-BO strategies, for which the acquisition recommends a batch of $B > 1$ points instead of a single one. The above-mentioned acquisition functions have been extended to handle batches: see for instance Marmin et al. (2015) for EI, Wu and Frazier (2016) for KG or Desautels et al. (2014) for UCB. However, none actually fits our setting, for two main reasons. First, most parallel acquisitions make use of explicit update equations for the GP moments and assume access to a Gaussian posterior for observations, neither of which are available for our model. Secondly, most are designed for small batches (say, $B \leq 5$ ) and become numerically intractable for the larger batches (say, $B > {50}$ ) that are more in line with the data volumes necessary for quantile and expectile estimation.
142
+
143
+ We propose in the following the first acquisition functions that can be applied to our quantile GP surrogate model, one based on Thompson sampling and one on entropy search.
144
+
145
+ § 3.1 THOMPSON SAMPLING
146
+
147
+ Thompson sampling (TS) is becoming increasingly popular in BO, in particular because of its embarrassingly parallel nature, allowing full scalability with the batch size (Hernández-Lobato et al., 2017; Kandasamy et al., 2018; Vakili et al., 2021).
148
+
150
+
151
+ Given the posterior on $g$ , an intuitive approach is to sample $x$ according to the probability that $x$ is the location of the maximum of $g$ . Despite this distribution usually being intractable, one may achieve the same result by sampling a trajectory from the posterior of $g$ and then selecting the input that corresponds to its maximiser. Such an approach directly extends to batches of inputs, by drawing several trajectories and selecting all the maximisers.
152
+
153
+ The main drawback of GP-based TS is the cost of sampling a trajectory, which can only be done exactly at a finite number of input locations at a cubic cost in the number of locations. An alternative is to rely on a finite rank approximation of the kernel, but this has been found to have an undesirable effect known as variance starvation (Wang et al., 2018).
154
+
155
+ Wilson et al. (2020) showed that pairing sparse GP models with the so-called decoupled sampling formulation avoids the variance starvation issue. Vakili et al. (2021) then demonstrated that such an approach delivered excellent empirical performance on high noise, large budget, large batch scenarios, while enjoying the same theoretical guarantees as the vanilla TS approach. Here, we build upon Vakili et al. (2021), and apply their algorithm to the variational posterior of $g$ to obtain draws directly from the quantile or expectile model. The posterior over $\sigma$ , which controls the observation noise, is not used during the TS algorithm.
156
+
157
+ The procedure for generating quantile samples from the variational posterior of $g$ can be summarised as follows. First, a continuous sample $s\left( x\right)$ from the prior of $g$ is generated using Random Fourier Features (see supplementary material B). Second, we sample the inducing variables ${u}_{g}$ . Third, we compute the mean function $m\left( x\right)$ of a noiseless GPR model that interpolates the dataset $\left\{ {Z,{u}_{g} - s\left( Z\right) }\right\}$ . Finally, the posterior sample is obtained by correcting the prior sample with this mean function: $v\left( x\right) = s\left( x\right) + m\left( x\right)$ .
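These steps can be sketched in 1-D as follows (a minimal illustration: the inducing variables are drawn from the prior rather than from a fitted variational distribution, and kernel settings are arbitrary). The key property is that the resulting sample $v$ interpolates ${u}_{g}$ at $Z$ :

```python
import numpy as np

rng = np.random.default_rng(6)

def matern52(a, b, theta=0.25):
    r = np.abs(a[:, None] - b[None, :])
    z = np.sqrt(5.0) * r / theta
    return (1.0 + z + z**2 / 3.0) * np.exp(-z)

# Step 1: continuous prior sample s(x) via RFFs (cf. supplement B)
m, theta = 2000, 0.25
w = rng.standard_t(5, size=m) / theta            # 2*nu = 5 for Matern 5/2
b = rng.uniform(0.0, 2.0 * np.pi, size=m)
omega = rng.standard_normal(m)
def s(x):
    return np.sqrt(2.0 / m) * (omega @ np.cos(np.outer(w, x) + b[:, None]))

# Step 2: a draw of the inducing variables u_g (here from the prior, for
# illustration; in the algorithm it would come from q(u_g))
Z = np.linspace(0.0, 1.0, 5)
Kzz = matern52(Z, Z) + 1e-10 * np.eye(Z.size)
u_g = rng.multivariate_normal(np.zeros(Z.size), Kzz)

# Steps 3-4: noiseless-GPR correction m(x) interpolating {Z, u_g - s(Z)},
# then the posterior sample v(x) = s(x) + m(x)
alpha = np.linalg.solve(Kzz, u_g - s(Z))
def v(x):
    return s(x) + matern52(x, Z) @ alpha

print(np.allclose(v(Z), u_g, atol=1e-5))   # v interpolates u_g at Z

xs = np.linspace(0.0, 1.0, 500)
x_next = xs[np.argmin(v(xs))]              # the Thompson-sampling pick
```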
158
+
159
+ § 3.2 INFORMATION-THEORETIC QUANTILE OPTIMISATION WITH GIBBON
160
+
161
+ Another particularly intuitive search strategy for $\mathrm{{BO}}$ is to choose the evaluations that will maximally reduce the uncertainty in the minimiser of the objective, an approach known as max-value (or min-value) entropy search (MES, Wang and Jegelka, 2017). For quantile optimisation, MES corresponds to reducing uncertainty in the minimal quantile value ${g}^{ * } = \mathop{\min }\limits_{{x \in \mathcal{X}}}g\left( x\right)$ . Following the arguments of Wang and Jegelka (2017), a meaningful measure of uncertainty reduction in this context is taken as the gain in mutual information between a set of candidate evaluations and ${g}^{ * }$ (see Cover and Thomas, 2012, for an introduction to information theory). Principled information-theoretic optimisation then corresponds to finding batches of $B$ input points ${\left\{ {x}_{i}\right\} }_{i = 1}^{B}$ that maximise
162
+
163
+ $$
164
+ {\alpha }_{n}\left( {\left\{ {x}_{i}\right\} }_{i = 1}^{B}\right) = \operatorname{MI}\left( {{g}^{ * };{\left\{ {y}_{{x}_{i}}\right\} }_{i = 1}^{B} \mid {\mathcal{D}}_{n}}\right) , \tag{10}
165
+ $$
166
+
167
+ where ${y}_{{x}_{i}}$ are not-yet-observed evaluations of the batch that are estimated with the GP surrogate model.
168
+
169
+ Although calculating the acquisition function (10) is challenging, there exist effective approximation strategies for GP models with conjugate likelihoods (Moss et al., 2020b; Takeno et al., 2020). In the remainder of this section we show that the approach used in General-purpose Information Based Bayesian-OptimisatioN (GIBBON; Moss et al., 2021) can be adapted to support asymmetric Laplace or Gaussian likelihoods, so that information-theoretic acquisition functions can be used for our quantile and expectile models.
170
+
171
+ Following the derivations of Moss et al. (2021), applying three well-known information-theoretic inequalities provides the following lower bound on the mutual information (10):
172
+
173
+ $$
+ \operatorname{MI}\left( {g}^{ * };{\left\{ {y}_{{x}_{i}}\right\} }_{i = 1}^{B} \mid {\mathcal{D}}_{n}\right) \geq \mathrm{H}\left( {\left\{ {y}_{{x}_{i}}\right\} }_{i = 1}^{B} \mid {\mathcal{D}}_{n}\right) - \frac{1}{2}\mathop{\sum }\limits_{{i = 1}}^{B}{\mathbb{E}}_{{g}^{ * } \mid {\mathcal{D}}_{n}}\left\lbrack \log \left( 2\pi e\operatorname{Var}\left( {y}_{{x}_{i}} \mid {g}^{ * },{\mathcal{D}}_{n}\right) \right) \right\rbrack , \tag{11}
+ $$
180
+
181
+ where $\mathrm{H}\left( A\right) = - {\mathbb{E}}_{A}\left\lbrack {\log p\left( A\right) }\right\rbrack$ denotes differential entropy. Although calculating the expectation in the second term of (11) is intractable (i.e. no closed-form expression exists for $p\left( {{g}^{ * } \mid {\mathcal{D}}_{n}}\right)$ ), we follow another approximation common among information-theoretic acquisition functions and approximate the integral using Monte-Carlo over a set of $M$ sampled minimum values. In particular, we use the Gumbel sampler proposed by Wang and Jegelka (2017), which provides a cheap set of samples ${\mathcal{M}}_{n} = \left\{ {{g}_{1}^{ * },\ldots ,{g}_{M}^{ * }}\right\}$ from $p\left( {{g}^{ * } \mid {\mathcal{D}}_{n}}\right)$ .
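A minimal sketch of a Gumbel-style min-value sampler follows. It assumes the posterior is summarised by marginal means `mu` and standard deviations `sigma` on a set of representer points and treats the marginals as independent; the quantile-matching details are illustrative rather than the exact recipe of Wang and Jegelka (2017).

```python
from math import erf, sqrt, log
import numpy as np

def phi(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def p_min_gt(y, mu, sigma):
    # P(min_x g(x) > y) under an independent-marginals approximation.
    return float(np.prod([phi((m - y) / s) for m, s in zip(mu, sigma)]))

def gumbel_min_samples(rng, mu, sigma, M=10):
    """Approximate samples of g* = min_x g(x): locate two quantiles of the
    minimum numerically, fit a minimum-domain Gumbel CDF to them, then
    sample from the fitted Gumbel by inverse transform."""
    lo = float(np.min(mu - 6.0 * sigma))
    hi = float(np.max(mu + 6.0 * sigma))
    def quantile(r):                       # y such that P(g* <= y) = r
        a, b = lo, hi
        for _ in range(60):                # bisection on the monotone CDF
            m = 0.5 * (a + b)
            if 1.0 - p_min_gt(m, mu, sigma) < r:
                a = m
            else:
                b = m
        return 0.5 * (a + b)
    q1, q2 = quantile(0.25), quantile(0.75)
    # Minimum-domain Gumbel: F(y) = 1 - exp(-exp((y - a) / b)).
    c1, c2 = log(-log(1.0 - 0.25)), log(-log(1.0 - 0.75))
    b = (q2 - q1) / (c2 - c1)
    a = q1 - b * c1
    u = rng.uniform(size=M)
    return a + b * np.log(-np.log(1.0 - u))
```

For example, with 50 representer points all having mean 0 and unit variance, the sampled minima concentrate around the expected minimum of 50 standard normals (roughly -2).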
182
+
183
+ When calculating the original GIBBON acquisition function, all the terms in the lower bound (11) are tractable, i.e. the conjugacy of their Gaussian likelihood means that $\mathrm{H}\left( {{\left\{ {y}_{{x}_{i}}\right\} }_{i = 1}^{B} \mid {\mathcal{D}}_{n}}\right)$ is just the differential entropy of a multivariate Gaussian which, alongside each $\operatorname{Var}\left( {{y}_{{x}_{i}} \mid {g}^{ * },{\mathcal{D}}_{n}}\right)$ , has a closed-form expression (see Moss et al. (2021) for details). Consequently, this lower bound itself is used as a closed-form approximation to the mutual information. However, in our quantile setting, we no longer have an expression for the first term of (11) - the joint differential entropy of $B$ -dimensional asymmetric Laplace variables with a complex correlation structure given by our two latent GPs.
184
+
185
+ To build an information-theoretic acquisition function suitable for our quantile model, we must apply an additional approximation. In particular, by using a moment-matching approximation, we can replace the intractable joint differential entropy with the differential entropy of a multivariate Gaussian with the same covariance, leading to our proposed Quantile GIBBON (Q-GIBBON) acquisition function
188
+
189
+ $$
190
+ {\alpha }_{n}^{\mathrm{Q}\text{ -GIBBON }} = \frac{1}{2}\log \left| C\right| - \frac{1}{2M}\mathop{\sum }\limits_{{{g}^{ * } \in {\mathcal{M}}_{n}}}\mathop{\sum }\limits_{{i = 1}}^{B}\log {V}_{i}\left( {g}^{ * }\right) ,
191
+ $$
192
+
193
+ where $\left| C\right|$ is the determinant of the $B \times B$ predictive covariance matrix with elements ${C}_{i,j} = \operatorname{Cov}\left( {{y}_{{x}_{i}},{y}_{{x}_{j}}}\right)$ and $V\left( {g}^{ * }\right)$ denotes the conditional variances ${V}_{i}\left( {g}^{ * }\right) =$ $\operatorname{Var}\left( {{y}_{{x}_{i}} \mid {g}^{ * },{\mathcal{D}}_{n}}\right)$ . Crucially, all the terms of Q-GIBBON have closed-form expressions (see appendix A for a derivation of $C$ and $V$ from our quantile GP).
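Given `C` and `V`, evaluating Q-GIBBON takes only a few lines; the sketch below assumes these quantities have already been computed in closed form from the quantile GP.

```python
import numpy as np

def q_gibbon(C, V):
    """Q-GIBBON for a candidate batch: 0.5 * log|C| minus the average over
    the M sampled minima of 0.5 * sum_i log V_i(g*).
    C: (B, B) predictive covariance matrix of the batch.
    V: (M, B) conditional variances, one row per sampled minimum g*."""
    sign, logdet = np.linalg.slogdet(C)
    assert sign > 0, "C must be positive definite"
    return 0.5 * logdet - 0.5 * np.mean(np.sum(np.log(V), axis=1))
```

As a sanity check, when the conditioning on $g^*$ leaves the variances unchanged (so $|C| = \prod_i V_i$), the acquisition is zero: conditioning carries no information.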
194
+
195
+ Although applying an additional moment-matching approximation means that Q-GIBBON is no longer a lower bound on the true mutual information, we found that it provides very efficient optimisation (see Section 4). In fact, we tried much more expensive but unbiased Monte-Carlo approximations, which did not yield a noticeable difference in performance.
196
+
197
+ In practice, directly searching for the set of $B$ points that maximise ${\alpha }_{n}^{\mathrm{Q}\text{ -GIBBON }}$ is a very challenging task, due to the dimensionality $\left( {B \times D}\right)$ and multimodality of the acquisition function. However, the Q-GIBBON formulation makes it particularly well-suited for a greedy approach, where we first optimise Q-GIBBON for $B = 1$ , then optimise for $B = 2$ while fixing the first point to the previously found value, etc. until $B$ points are found.
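The greedy scheme can be sketched over a finite candidate set as below; in practice each greedy step would instead optimise over the continuous domain with multi-start BFGS, and `acq` stands in for the joint Q-GIBBON score of a candidate batch.

```python
import numpy as np

def greedy_batch(acq, candidates, B):
    """Greedily build a batch of size B: at each step, fix the points
    already chosen and add the candidate maximising the joint score."""
    batch = []
    for _ in range(B):
        scores = [acq(batch + [x]) for x in candidates]
        batch.append(candidates[int(np.argmax(scores))])
    return batch
```

With a toy diversity score that counts distinct points, the greedy loop picks two different candidates, mimicking how a joint information measure discourages duplicated evaluations.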
198
+
199
+ § 4 EXPERIMENTS
200
+
201
+ We now evaluate our proposed model and acquisition functions on a set of synthetic tasks and two real-world optimisation problems. Although all that follows applies equally to expectiles, our experiments focus on quantile optimisation to streamline the exposition. The results presented in this section can be replicated using the code available at www.github.com/obfuscated-url.
202
+
203
+ § 4.1 ALGORITHM BASELINES
204
+
205
+ To our knowledge, there is no other existing BO algorithm dedicated to optimising quantiles in our considered setting. The most similar algorithms are those of Cakmak et al. (2020) and Makarova et al. (2021). However, Cakmak et al. (2020) requires precise control over the noise generation process, while Makarova et al. (2021) seek to find solutions with low levels of observation noise but do not provide a method for optimising a specific quantile level.
206
+
207
+ We can, however, apply standard BO methods to perform quantile optimisation if direct observations of the quantiles are available. This is achievable by using repeated observations, which allow computing a (pointwise) empirical quantile. As direct observations are available, a standard GP regression model (GPR) can be used to provide a posterior on $g$ (Plumlee and Tuo, 2014). One can also bootstrap the repeated observations to obtain variance estimates of the empirical quantiles, further improving the model by accounting for varying observation noise. Next, a BO procedure can be defined based on any classical acquisition function; here we choose vanilla Expected Improvement (EI). With this strategy, each batch consists of a single point in the input space, repeated a number of times. In the following experiments we use this baseline (denoted GPR-EI) to compare against our two proposed methods using TS and Q-GIBBON over a quantile GP.
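The data preparation for this baseline can be sketched as follows; `simulate` is a stand-in for the stochastic black box and the rep/bootstrap counts are illustrative.

```python
import numpy as np

def quantile_observations(rng, x_batch, simulate, tau, reps=20, n_boot=200):
    """Empirical tau-quantile at each repeated design point, plus a
    bootstrap estimate of its variance, usable as heteroscedastic
    observation noise in a GPR model."""
    q_hat, q_var = [], []
    for x in x_batch:
        ys = simulate(rng, x, reps)              # repeated noisy evaluations
        q_hat.append(np.quantile(ys, tau))
        boots = [np.quantile(rng.choice(ys, size=reps, replace=True), tau)
                 for _ in range(n_boot)]
        q_var.append(np.var(boots))              # bootstrap variance estimate
    return np.array(q_hat), np.array(q_var)
```

The resulting pairs (empirical quantile, bootstrap variance) are exactly what a GPR model with per-point noise levels would consume before running EI.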
208
+
209
+ § 4.2 IMPLEMENTATION
210
+
211
+ All models are built using the gpflux library (Dutordoir et al., 2021), and the BO procedure is run using trieste (Berkeley et al., 2022). All models use a Matérn 5/2 kernel, and all acquisition functions (or GP samples in the case of TS) are optimised using a multi-start BFGS scheme.
212
+
213
+ Our quantile model requires a design choice for the placement of the inducing points, which are reinitialised for each model fit. We follow the findings of Vakili et al. (2021) and use the centroids of a k-means procedure on the data points, which tends to concentrate the inducing points near the optimal areas as more data is collected by BO. Our implementation of decoupled Thompson sampling uses 1,000 random Fourier features (see supplementary material for detailed expressions). To sample minimum values for Q-GIBBON we use the Gumbel sampler of Wang and Jegelka (2017) with $10{,}000 \times D$ random initial points.
214
+
215
+ § 4.3 SYNTHETIC PROBLEMS
216
+
217
+ Problem description We generated a set of synthetic problems based on the Generalised Lambda Distribution (GLD, Freimer et al., 1988), a highly flexible four-parameter probability distribution designed to approximate several well-known parametric distributions. The four parameters define the location, scale, left shape and right shape of the distribution, respectively. By varying the value of each parameter as a function of $x$ , one can create a black box with high noise, heteroscedasticity and non-Gaussianity:
218
+
219
+ $$
220
+ {Y}_{x} \sim \operatorname{GLD}\left( {{\lambda }_{1}\left( x\right) ,\ldots ,{\lambda }_{4}\left( x\right) }\right) . \tag{12}
221
+ $$
222
+
223
+ To generate a large set of problems with varying dimensionality while controlling the multimodality of the problem at hand, we used GP random draws for the ${\lambda }_{i}$ ’s. See appendix for a full description. Figure 2 shows examples of marginal distributions (for different $x$ values) for one such problem.
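Sampling from a GLD is a one-line inverse transform, since the distribution is defined through its quantile function. The sketch below uses the FKML parameterisation of Freimer et al. (1988) and assumes nonzero shape parameters; in the benchmark, each $\lambda_i$ would be a GP draw evaluated at $x$.

```python
import numpy as np

def gld_quantile(u, l1, l2, l3, l4):
    # FKML quantile function of the Generalised Lambda Distribution:
    # location l1, inverse scale l2, left/right shape l3, l4 (both nonzero).
    return l1 + ((u ** l3 - 1.0) / l3 - ((1.0 - u) ** l4 - 1.0) / l4) / l2

def sample_gld(rng, n, l1, l2, l3, l4):
    # Inverse-transform sampling: push uniforms through the quantile function.
    return gld_quantile(rng.uniform(size=n), l1, l2, l3, l4)
```

For symmetric shapes ($\lambda_3 = \lambda_4$) the median equals $\lambda_1$, which gives a quick check that the simulator is wired up correctly.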
224
+
225
+ We consider two input space dimensions, $D = 3$ and $D = 6$ , and two quantile levels, $\tau = {0.75}$ and $\tau = {0.95}$ . We use an initial budget of ${50D}$ observations, uniformly distributed across the input space, and a total budget of ${250D}$ observations, acquired in batches of either $B = {10}$ or $B = {50}$ points. Each strategy is run on 50 different problems. Figure 3 reports the simple regret, averaged over the 50 problems, with confidence intervals.
226
+
227
228
+
229
+ Figure 2: Examples of marginal distributions for one GLD-based problem at three different locations of the input space.
230
+
231
+ Results In almost all cases, our approaches largely outperform the GPR baseline, the exception being the simplest problem (small dimension and batch size), for which the GPR baseline is comparable to TS (Q-GIBBON being substantially better for the 0.75 quantile). Comparing acquisition strategies, Q-GIBBON clearly outperforms TS for $D = 3$ ; in dimension 6, both approaches are roughly comparable.
232
+
233
+ § 4.4 LUNAR LANDER
234
+
235
+ Problem description The Lunar Lander problem is a popular benchmark for noisy BO (Moss et al., 2020a; Eriksson et al., 2019). In this well-known reinforcement learning task, we must control three engines (left, main and right) to successfully land a rocket. The learning environment and a hard-coded PID controller are provided in the OpenAI gym. ${}^{1}$ We seek to optimise the 6 thresholds present in the description of the controller to provide the largest expected reward: finding those thresholds defines the BO task. Our RL environment is exactly as provided by OpenAI. We lose 0.3 points per second of fuel use and 100 points if we crash. We gain 10 points each time a leg makes contact with the ground, 100 points for any successful landing, and 200 points for a successful landing in the specified landing zone. Each individual run of the environment tests a controller on a specific random seed.
236
+
237
+ This problem is particularly well-suited for a quantile approach, since reward is stochastic, highly non-Gaussian, and the landing problem is a clear case for which one would want guarantees against risk.
238
+
239
+ Results For this problem, we ran each algorithm 10 times (starting from different initial conditions), with batches of $B = {25}$ points, 300 initial observations and 1,500 observations in total. We aim to maximise the ${10}\%$ quantile of the reward. Due to the high cost of calculating the true quantiles of the lunar lander experiment (i.e. they must be calculated empirically across a large collection of runs), we report the reward quantile obtained only after half of and after all the iterations (see Table 1), and only run one of our two proposed acquisition functions. We choose TS over Q-GIBBON as our synthetic GLD experiments suggest that TS outperforms Q-GIBBON on problems with larger (i.e. 6) dimensions. We can see that TS largely outperforms the baseline, as it robustly identifies a much better solution.
240
+
241
+ ${}^{1}$ https://gym.openai.com/
242
+
243
244
+
245
+ Figure 3: The mean and 95% confidence intervals of regret on synthetic problems in dimension 3 (top) and 6 (bottom), for two quantile levels $\left( {\tau = {0.75},{0.95}}\right)$ and medium $\left( {B = {10}\text{ , left }}\right)$ and large $\left( {B = {50}\text{ , right }}\right)$ batch sizes.
246
+
247
+ | | 750 obs | 1500 obs |
+ | --- | --- | --- |
+ | GPR-EI | 94.6 (106.1) | 159.5 (110.9) |
+ | TS | 204.3 (53.8) | 255.2 (8.0) |
258
+
259
+ Table 1: Mean and standard deviation over 10 runs for the ${10}\%$ quantile of the reward on the lunar lander problem.
260
+
261
+ § 4.5 LASER TUNING
262
+
263
+ Problem Description For our final experiment, we test our quantile optimisation in a real-world setting inspired by the Free-Electron Laser (FEL) tuning example of McIntire et al. (2016). This is a challenging 16-dimensional optimisation task where we must configure the strengths of magnets manipulating the shape of the FEL's electron beam, seeking to build a powerful and stable beam suitable for use in scientific experiments. Due to the high levels of observation noise in this problem, and as stability of the resulting beam is of critical importance for conducting reliable experiments, it is clearly beneficial to encode a level of risk-aversion into the optimisation. Therefore, there are clear advantages to using quantile optimisation for FEL calibration.
264
+
265
+ As we do not have access to the FEL directly, we follow McIntire et al. (2016) and use their 4,074 observed X-ray pulse energy measurements to build a Gaussian process surrogate model from which we can simulate pulse energy at any new magnet configuration. To simulate the effect of observation noise, McIntire et al. (2016) add additional Gaussian perturbations to the simulated values. However, we found that the noise in this system was actually skew Gaussian and varied in scale and skew across the search space. Consequently, we simulate observation noise from a skew Gaussian distribution with location, scale and shape parameters also modelled with additional GPs (i.e. a setup similar to our GLD examples). As many of the 4,074 energy measurements are evaluated at very similar input locations, rounding these inputs to four decimal places provides us with many repeated evaluations, allowing the empirical estimation of each parameter of the skew Gaussian distribution at each of these inputs. The location, scale and shape GPs are then used to predict the parameters of the skew Gaussian noise distribution for any candidate magnet configuration.
266
+
267
268
+
269
+ Figure 4: The mean and 95% confidence intervals of best 0.3 quantile found across 10 repetitions of the FEL tuning task.
270
+
271
+ Results Figure 4 shows the performance of each algorithm over 10 repetitions, seeking to maximise the ${30}\%$ quantile of pulse energy. The models are initialised with 400 data points randomly chosen from the full dataset, and a further 1,200 points are collected with BO in batches of 100 points. Our algorithms based on quantile GP models substantially outperform the replicate-based GPR baseline. In fact, by using TS with a quantile GP, we are able to find solutions very close to the optimal value (4.8). We hypothesise that the relatively poor performance of our Q-GIBBON acquisition function is due to the high dimension of this problem. The Gumbel sampler used by Q-GIBBON for sampling minimal values is based on random sampling, and so its performance likely degrades as the input dimension increases. Since the performance of information-theoretic BO is sensitive to the quality of these samples (Moss et al., 2021), extending information-theoretic BO to high-dimensional problems like FEL tuning remains an open question.
272
+
273
+ § 4.6 CONCLUDING COMMENTS
274
+
275
+ We have presented a new approach to estimate quantiles and expectiles of stochastic black-box functions that is well suited to heteroscedastic cases. We then used the proposed model to create two BO algorithms designed for the optimisation of conditional quantiles and expectiles without repetitions in the experimental design. These algorithms outperform the state of the art on several test problems with different dimensions, quantile orders, budgets and batch sizes.
276
+
277
+ Overall, our experiments clearly show that the performance gap between our approaches and the GPR-EI baseline increases with the batch size and problem dimension. Since GPR-EI relies on repetitions, it is much more limited in terms of exploration, while our approaches can evaluate $B$ unique points at each BO iteration. Hence, our approach is much less sensitive to the curse of dimensionality.
278
+
279
+ Experiments also show that for low-dimensional, smaller batches, Q-GIBBON is the best alternative, while with increasing dimension and batch size, the simpler Thompson sampling seems to perform best. Depending on the available hardware, the parallel nature of TS might also provide substantial advantages in terms of wall-clock time.
UAI/UAI 2022/UAI 2022 Conference/B3M4CS8oql9/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,533 @@
1
+ # Residual Bootstrap Exploration for Stochastic Linear Bandit
2
+
3
+ ## Abstract
4
+
5
+ We propose a new bootstrap-based online algorithm for stochastic linear bandit problems. The key idea is to adopt residual bootstrap exploration, in which the agent estimates the next-step reward by re-sampling the residuals of the mean reward estimate. Our algorithm, residual bootstrap exploration for stochastic linear bandit (LinReBoot), estimates the linear reward from its re-sampling distribution and pulls the arm with the highest reward estimate. In particular, we contribute a theoretical framework to demystify residual bootstrap-based exploration mechanisms in stochastic linear bandit problems. The key insight is that the strength of bootstrap exploration is based on collaborative optimism between the online-learned model and the re-sampling distribution of residuals. This observation enables us to show that the proposed LinReBoot secures a high-probability $\widetilde{O}\left( {d\sqrt{n}}\right)$ sub-linear regret under mild conditions. Our experiments support the easy generalizability of the ReBoot principle in various formulations of linear bandit problems and show the significant computational efficiency of LinReBoot.
6
+
7
+ ## 1 INTRODUCTION
8
+
9
+ Stochastic linear bandit is an online learning problem in which the learning agent acts by pulling arms, where each arm is associated with a feature vector, and learns the arms' information from the corresponding random rewards. In such problems, the typical goal of a learning agent is to maximize its cumulative reward. Learning more about an arm (explore) or pulling the arm with the highest estimated reward (exploit) leads to the well-known exploration-exploitation tradeoff, which is the central trade-off captured in many decision-making applications in modern online service industries. Consequently, the design of stochastic linear bandit algorithms demands an easily generalizable implementation across various contextualised action sets and reward generation processes.
10
+
11
+ In the past decade of bandit literature, such demands have invited researchers to investigate bootstrap-based exploration-exploitation trade-offs, drawing rising attention [Baransi et al., 2014, Eckles and Kaptein, 2014, Osband and Van Roy, 2015, Vaswani et al., 2018, Hao et al., 2019, Kveton et al., 2019c, Wang et al., 2020]. Yet, prior works on bootstrap-based bandit algorithms focus on provable multi-armed bandit algorithms and only provide a limited empirical evaluation of bootstrap-based stochastic linear bandit algorithms, whose theoretical counterpart remains unknown. This knowledge gap motivates our investigation of provable bootstrap-based stochastic linear bandits: can we theoretically and empirically support the validity and easy generalizability of the bootstrapping procedure in the design of stochastic linear bandit algorithms? In particular, we aim to deliver a generic framework to demystify bootstrap optimism in stochastic linear bandit problems and validate the easy generalizability of the bootstrap principle across various contextual linear bandit problems.
12
+
13
+ Contributions. We introduce the LinReBoot algorithm, which implements Residual Bootstrap Exploration for the stochastic linear bandit problem with sub-linear regret. We theoretically show that LinReBoot secures $\widetilde{O}\left( {d\sqrt{n}}\right)$ regret, where $d$ is the dimension of the features. This matches the order of the regret bounds known for Linear Thompson Sampling algorithms. The key to achieving such a sub-linear regret guarantee is to carefully manage and collaborate sample and bootstrap optimism (Section 4.1). In particular, by measuring the "sample-bootstrap optimistic estimated discrepancy ratio" of the optimal arm, LinReBoot successfully avoids over- or under-exploration and theoretically secures sub-linear mean regret with high probability. To our knowledge, this is the first theoretical analysis supporting the validity and efficiency of residual bootstrap-based procedures for stochastic linear bandit problems. We empirically show that LinReBoot rivals or exceeds competing algorithms including Linear Thompson Sampling, Linear PHE, Linear GIRO and Linear UCB under the stochastic linear bandit problem as well as more complicated linear bandit settings. These significant results support the easy generalizability of the proposed LinReBoot. In summary, our contributions are as follows:
14
+
15
+ - Propose the LinReBoot algorithm, which implements Residual Bootstrap Exploration in linear bandit problems without a boundedness assumption on rewards.
16
+
17
+ - Theoretically show that LinReBoot secures $\widetilde{O}\left( {d\sqrt{n}}\right)$ regret, matching the order of the regret bounds known for Linear Thompson Sampling algorithms.
18
+
19
+ - Empirically show that LinReBoot rivals or exceeds baseline algorithms, supporting that LinReBoot generalizes easily across linear bandit problems.
20
+
21
+ Related Works. Bootstrap-based contextual bandit algorithm design has been actively studied in the last half-decade and has drawn a surge of interest from both theoretical studies and industrial practice [Elmachtoub et al., 2017, Eckles and Kaptein, 2014, Osband et al., 2016, Kveton et al., 2019c, Hao et al., 2019]. Bootstrap-based bandit algorithm design is a paradigm of sequential decision-making based on an exploration mechanism with no pre-defined mean reward model. Such a paradigm enjoys a decisive advantage: engineers are free to deploy any reward model of interest without painful adaptation to the problem structure [Kveton et al., 2019c,b]. ReBoot [Wang et al., 2020] provided a theoretical logarithmic regret guarantee for multi-armed bandits (MAB) and an empirical investigation to validate the easy generalizability of the ReBoot principle. Our work aims to provide a theoretical guarantee for bootstrap-based linear bandit algorithms and empirically investigates more general contextual linear bandit settings to validate the ReBoot principle.
22
+
23
+ One closely related work is [Kveton et al., 2019a], which introduces perturbation of past samples for exploration under the stochastic linear bandit problem. The limitation of [Kveton et al., 2019a] is the boundedness of rewards, meaning that many broader classes of rewards, such as Gaussian rewards, are not covered by the theoretical guarantee. In contrast, the proposed LinReBoot algorithms relax the boundedness assumption on rewards and thus validate bootstrap-based bandit algorithms in wider bandit environments with a broader class of reward generation processes. Early works about exploration in bandit problems [Abbasi-Yadkori et al., 2011, Langford and Zhang, 2007, Dani et al., 2008] are practical but offer no guarantee of optimality. Other works [Wang et al., 2020, Kveton et al., 2019c,b, Thompson, 1933, Auer et al., 2002] provide well-designed exploration for bandit problems and have their own principles for adapting to more general problems. Among these, three principles, ReBoot [Wang et al., 2020], GIRO [Kveton et al., 2019c] and PHE [Kveton et al., 2019b], devise exploration mechanisms based on the up-to-now history, instead of on a pre-defined reward model as in the other two principles, TS [Thompson, 1933] and UCB [Auer et al., 2002]. Our work generalizes ReBoot to stochastic linear bandit problems.
24
+
25
+ Notations. Let $\left\lbrack n\right\rbrack$ denote the set $\{ 1,2,\ldots , n\}$ . $\mathbf{1}$ is a vector of all ones and $\mathbf{I}$ is the identity matrix. For a vector $\mathbf{v}$ , $\parallel \mathbf{v}{\parallel }_{2}$ is the 2-norm of $\mathbf{v}$ and $\parallel \mathbf{v}{\parallel }_{\mathbf{A}} \mathrel{\text{:=}} \sqrt{{\mathbf{v}}^{\top }\mathbf{A}\mathbf{v}}$ for a semidefinite matrix $\mathbf{A}$ . Let $\langle \cdot , \cdot \rangle$ be the inner product operation. Denote ${\mathcal{F}}_{t}$ as the history of randomness up to round $t$ . ${\mathbb{E}}_{t}\left\lbrack \cdot \right\rbrack \mathrel{\text{:=}} \mathbb{E}\left\lbrack {\cdot \mid {\mathcal{F}}_{t - 1}}\right\rbrack$ is the conditional expectation given ${\mathcal{F}}_{t - 1}$ , and ${\mathbb{P}}_{t}\left( \cdot \right) \mathrel{\text{:=}} \mathbb{P}\left( {\cdot \mid {\mathcal{F}}_{t - 1}}\right)$ is the conditional probability given ${\mathcal{F}}_{t - 1}$ . $\mathbb{I}\{ \cdot \}$ is the indicator function. For a set or event $E$ , we denote its complement by $\bar{E}$ . $N\left( {\mu ,{\sigma }^{2}}\right)$ is the Gaussian distribution with mean $\mu$ and variance ${\sigma }^{2}$ . We use $\widetilde{O}$ for big-$O$ notation up to logarithmic factors.
26
+
27
+ ## 2 STOCHASTIC LINEAR BANDIT
28
+
29
+ Contextualised Action Set. In the stochastic linear bandit problem, we identify the actions with $d$ -dimensional features from $\mathcal{A} \subset {\mathbb{R}}^{d}$ and assume $\left| \mathcal{A}\right|$ , the size of the action set, is finite. Let $K \mathrel{\text{:=}} \left| \mathcal{A}\right|$ be the number of actions (arms) and ${\mathbf{x}}_{k} \in {\mathbb{R}}^{d}$ be the context vector of the $k$ -th arm, that is, $\mathcal{A} = \left\{ {{\mathbf{x}}_{1},\ldots ,{\mathbf{x}}_{K}}\right\}$ .
30
+
31
+ Reward generating mechanism. The reward function is parameterized by $\mathbf{\theta } \in {\mathbb{R}}^{d}$ such that, when at time $t$ the agent chooses an action ${I}_{t} \in \left\lbrack K\right\rbrack$ with feature ${X}_{t} = {\mathbf{x}}_{{I}_{t}} \in \mathcal{A}$ , the reward is generated by
32
+
33
+ $$
34
+ {Y}_{t} \equiv \left\langle {{X}_{t},\mathbf{\theta }}\right\rangle + {\epsilon }_{t}. \tag{1}
35
+ $$
36
+
37
+ Specifically, the reward obtained by the agent at round $t$ when pulling arm ${I}_{t} = k$ is generated from a distribution with mean ${\mu }_{k} \mathrel{\text{:=}} {\mathbf{x}}_{k}^{\top }\mathbf{\theta }$ , conditioned on the context ${\mathbf{x}}_{k}$ . The properties of the noise ${\epsilon }_{t}$ are described in Assumption 2. Furthermore, denote the received reward by ${r}_{{I}_{t}}$ and the reward random variable by ${Y}_{t}$ at round $t$ .
38
+
39
+ Regret. Without loss of generality, assume that arm 1 is the unique optimal arm, that is, ${\mu }_{1} \geq {\mu }_{k}$ for all $k$ . The optimality gap of the $k$ -th arm is ${\Delta }_{k} \mathrel{\text{:=}} {\mu }_{1} - {\mu }_{k} \geq 0$ . The expected $n$ -round regret is denoted as
40
+
41
+ $$
42
+ {R}_{n} \mathrel{\text{:=}} \mathop{\sum }\limits_{{k = 2}}^{K}{\Delta }_{k}\mathbb{E}\left\lbrack {\mathop{\sum }\limits_{{t = 1}}^{n}\mathbb{I}\left\{ {{I}_{t} = k}\right\} }\right\rbrack . \tag{2}
43
+ $$
44
+
45
+ The goal of the agent is to maximize the expected cumulative reward in $n$ rounds, which is equivalent to minimizing the expected regret ${R}_{n}$ .
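The reward model in Eq. (1) and the expected regret in Eq. (2) can be sketched as follows; the arm features `arms`, the parameter `theta` and the pull counts are illustrative values, not from the paper.

```python
import numpy as np

def expected_regret(theta, arms, pull_counts):
    """Expected n-round regret of Eq. (2): sum over arms of the
    optimality gap Delta_k times the expected number of pulls of arm k."""
    mu = arms @ theta                 # mean rewards mu_k = x_k^T theta
    gaps = mu.max() - mu              # Delta_k = mu_1 - mu_k >= 0
    return float(gaps @ pull_counts)

rng = np.random.default_rng(0)
theta = np.array([1.0, -0.5])
arms = rng.normal(size=(4, 2))        # K = 4 arms with d = 2 features

def pull(k):
    # One reward draw: Y_t = <x_{I_t}, theta> + eps_t, as in Eq. (1).
    return arms[k] @ theta + rng.normal()
```

In particular, a policy that always pulls the optimal arm incurs zero expected regret, since its gap is zero.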
46
+
47
+ Assumption 1. (Boundedness assumptions) We make the following boundedness assumptions: (1) the true parameter $\mathbf{\theta }$ is bounded, $\parallel \mathbf{\theta }{\parallel }_{2} \leq {S}_{2}$ ; (2) the context vectors are bounded, in the sense that ${\begin{Vmatrix}{\mathbf{x}}_{k}\end{Vmatrix}}_{2} \leq L$ for all $k \in \left\lbrack K\right\rbrack$ .
48
+
49
+ Assumption 1 states the standard boundedness assumptions of the stochastic linear bandit literature and ensures that the regret is bounded if the agent pulls any sub-optimal action (see Section 5 in Abbasi-Yadkori et al., 2011).
50
+
51
+ Assumption 2. (Noise Clipping assumption) Noise process ${\left\{ {\epsilon }_{t}\right\} }_{t = 1}^{\infty }$ described in [1] satisfies that for some ${L}_{1},{L}_{2} > 0$ ,
52
+
53
+ $$
54
+ {e}^{{L}_{1}{\eta }^{2}} \leq \mathbb{E}\left\lbrack {{e}^{\eta {\epsilon }_{t}} \mid {\mathcal{F}}_{t - 1}}\right\rbrack \leq {e}^{{L}_{2}{\eta }^{2}},\forall \eta \geq 0, \tag{3}
55
+ $$
56
+
57
+ where ${\mathcal{F}}_{t - 1} = \left\{ {{\epsilon }_{1},{I}_{1},\cdots ,{\epsilon }_{t - 1},{I}_{t - 1}}\right\}$ .
58
+
59
+ Assumption 2 implies that the stochastic process ${\left\{ {\epsilon }_{t}\right\} }_{t = 1}^{\infty }$ is conditionally sub-Gaussian with constant ${L}_{2}$ . The constant ${L}_{1}$ contributes to the lower bound on the moment generating function suggested by [Zhang and Zhou, 2020]. Note that Assumption 2 allows heteroscedasticity among different arms by choosing ${L}_{2}$ as the largest variance among arms. Such heteroscedasticity arises and has been identified as a challenge in applications of Bayesian optimization [Kirschner, 2021, Cowen-Rivers et al., 2020].
60
+
61
+ ## 3 RESIDUAL BOOTSTRAP EXPLORATION
62
+
63
+ ### 3.1 REBOOT PRINCIPLE
64
+
65
+ This section presents the essential concepts needed to implement the ReBoot principle [Wang et al., 2020]. In general, in each round of interaction, the decision policy executes four subroutines to implement the ReBoot principle: 1) Learning, 2) Fitting, 3) Bootstrapping, and 4) Exploring. The following elaborates on each subroutine:
66
+
67
+ 1) Model Learning. The first subroutine outputs a learned model based on current collected data. Our implementation learns the parameter $\mathbf{\theta }$ in Eq. 1 by some user-specified model.
68
+
69
+ 2) Data Fitting. The second subroutine fits the current data set with the learned model in the previous subroutine and then outputs the residual set. Intuitively, the residuals measure the goodness of fit of the learned model and should drop a hint on the right amount of exploration. In other words, the residuals should suggest a right magnitude of exploration bonus in decision policy (8). How to manage and integrate uncertainty behind residuals into the exploration mechanism of policy is the main challenge.
70
+
71
+ 3) Residual Bootstrapping. The third subroutine associates the residuals obtained in the previous subroutine with a bootstrapping distribution. Instead of maintaining a belief distribution on a parameter as in the Bayesian approach, the ReBoot principle maintains a bootstrapping distribution on the statistical error based on residuals. The challenge is to justify the efficacy of residual-based optimism construction in both theory and practice.
72
+
73
+ 4) Action Exploring. The fourth subroutine samples the exploration bonus from the bootstrapping distribution and outputs an index for each action. Such a bootstrap procedure is more computationally efficient than prior efforts, since it only requires drawing a sample from the bootstrapping distribution. The challenge is to prove that such a bootstrap procedure secures sub-linear regret in theory.
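A minimal sketch of one such round with a ridge-regression learner follows. The uniform resampling of a single residual per arm is only illustrative; the exact bootstrap weighting and exploration aid used by LinReBoot (Algorithm 1) differ.

```python
import numpy as np

def reboot_round(rng, X_hist, y_hist, arms, lam=1.0):
    """One ReBoot-style round: (1) learn theta by ridge regression,
    (2) fit the data to obtain residuals, (3) bootstrap an exploration
    bonus from the residuals, (4) act greedily on the perturbed indices."""
    d = arms.shape[1]
    V = X_hist.T @ X_hist + lam * np.eye(d)                  # 1) learning
    theta_hat = np.linalg.solve(V, X_hist.T @ y_hist)
    resid = y_hist - X_hist @ theta_hat                      # 2) fitting
    bonus = rng.choice(resid, size=len(arms), replace=True)  # 3) bootstrapping
    index = arms @ theta_hat + bonus                         # 4) exploring
    return int(np.argmax(index)), theta_hat
```

When the fit is nearly exact, the residuals (and hence the bonus) are close to zero, so the round reduces to greedy play on the estimated mean rewards, illustrating how the residuals calibrate the amount of exploration.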
74
+
75
+ ### 3.2 LINREBOOT ALGORITHM
76
+
77
+ We propose the Linear Residual Bootstrap Exploration algorithm (LinReBoot, Algorithm 1) for stochastic linear bandit problems. This section elaborates on the four subroutines of Section 3.1 for the proposed LinReBoot.
78
+
79
+ 1) LinReBoot uses a ridge regression procedure, whose learned parameter is ${\widehat{\mathbf{\theta }}}_{t}$ (4b) and whose estimated mean reward for arm $k$ is ${\widehat{\mu }}_{k, t}$ . The confidence of this mean-reward estimate is easy to manage [Abbasi-Yadkori et al., 2011]. Thus, we focus on confidence management for the bootstrap-based exploration.
80
+
81
+ Ridge Regression Procedure. LinReBoot fits a linear model at round $t$ as follows,
82
+
83
+ $$
84
+ {\mathbf{V}}_{t} = {\mathbf{X}}_{t - 1}^{\top }{\mathbf{X}}_{t - 1} + \lambda \mathbf{I}, \tag{4a}
85
+ $$
86
+
87
+ $$
88
+ {\widehat{\mathbf{\theta }}}_{t} = {\mathbf{V}}_{t}^{-1}{\mathbf{X}}_{t - 1}^{\top }{\mathbf{Y}}_{t - 1}, \tag{4b}
89
+ $$
90
+
91
+ $$
92
+ {\widehat{\mu }}_{k, t} = {\mathbf{x}}_{k}^{\top }{\widehat{\mathbf{\theta }}}_{t},\forall k \in \left\lbrack K\right\rbrack , \tag{4c}
93
+ $$
94
+
95
+ where ${\mathbf{X}}_{t - 1} = {\left( {X}_{1},\ldots ,{X}_{t - 1}\right) }^{\top } \in {\mathbb{R}}^{\left( {t - 1}\right) \times d}$ . The $\tau$ -th row of ${\mathbf{X}}_{t - 1}$ is the context ${X}_{\tau }^{\top }$ for $\tau \in \left\lbrack {t - 1}\right\rbrack$ , and ${\mathbf{Y}}_{t - 1} = {\left( {Y}_{1},\ldots ,{Y}_{t - 1}\right) }^{\top }$ is the reward vector collecting the rewards up to round $t - 1$ . $\lambda$ denotes the regularization level, ${\mathbf{V}}_{t}$ denotes the sample covariance matrix up to round $t$ , and ${\widehat{\mathbf{\theta }}}_{t}$ is the ridge estimate of the target parameter $\mathbf{\theta }$ in (1). ${\widehat{\mu }}_{k, t}$ denotes the estimated mean of arm $k$ based on the history. Note that in the first $K$ rounds, LinReBoot fully explores by pulling each arm once. In other words, ${I}_{t} = t$ when $t \in \left\lbrack K\right\rbrack$ , so that ${\mathbf{X}}_{K} \mathrel{\text{:=}} {\left( {\mathbf{x}}_{1},\ldots ,{\mathbf{x}}_{K}\right) }^{\top } \in {\mathbb{R}}^{K \times d}$ . We call ${\mathbf{X}}_{K}$ the context matrix, with rank $r \leq \min \left( {K, d}\right)$ and singular values ${\sigma }_{1},\ldots ,{\sigma }_{r}$ . Also define ${\sigma }_{\min }^{2} \leq {\sigma }_{i}^{2} \leq {\sigma }_{\max }^{2},\forall i \in \left\lbrack r\right\rbrack$ . With these definitions, we make a mild assumption about the shrinkage effect of ridge regression:
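The ridge regression step in Eqs. (4a)-(4c) can be sketched in a few lines of numpy (the function name and argument layout are ours, for illustration):

```python
import numpy as np

def ridge_estimate(X, Y, contexts, lam=1.0):
    """Ridge regression step of LinReBoot, following Eqs. (4a)-(4c).

    X        : (t-1, d) matrix whose rows are the contexts of pulled arms
    Y        : (t-1,)  reward vector
    contexts : (K, d)  matrix of arm contexts x_1, ..., x_K
    lam      : regularization level lambda
    """
    d = X.shape[1]
    V = X.T @ X + lam * np.eye(d)              # (4a) regularized covariance V_t
    theta_hat = np.linalg.solve(V, X.T @ Y)    # (4b) ridge estimate of theta
    mu_hat = contexts @ theta_hat              # (4c) estimated mean reward per arm
    return V, theta_hat, mu_hat
```

With a tiny $\lambda$ and an orthonormal design, the ridge estimate reduces to the ordinary least squares solution, which makes the sketch easy to sanity-check.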
96
+
97
+ Assumption 3. (Validity of Ridge Regression) The singular value decomposition of the context matrix ${\mathbf{X}}_{K}$ is denoted ${\mathbf{X}}_{K} \mathrel{\text{:=}} \mathbf{G}\mathbf{\Sigma }\mathbf{U}$ where $\mathbf{G} \in {\mathbb{R}}^{K \times K}$ , $\mathbf{\Sigma } \in {\mathbb{R}}^{K \times d}$ and $\mathbf{U} \in {\mathbb{R}}^{d \times d}$ . Define $\mathbf{\Omega } \mathrel{\text{:=}} \mathbf{\Sigma }{\left( {\mathbf{\Sigma }}^{\top }\mathbf{\Sigma } + \lambda \mathbf{I}\right) }^{-1}{\mathbf{\Sigma }}^{\top } \in {\mathbb{R}}^{K \times K}$ and $\mathbf{Z} \mathrel{\text{:=}} \mathbf{G}\mathbf{\Omega }\mathbf{\Sigma }\mathbf{U} \in {\mathbb{R}}^{K \times d}$ . Let ${\mathbf{z}}_{1} \in {\mathbb{R}}^{d}$ be the first row of $\mathbf{Z}$ . Given any $\lambda > 0$ , there exists a corresponding positive scalar ${S}_{1}$ such that $\left| {{\mathbf{x}}_{1}^{\top }\mathbf{\theta } - {\mathbf{z}}_{1}^{\top }\mathbf{\theta }}\right| \geq {S}_{1}$ for the $\mathbf{\theta }$ in (1).
98
+
99
+ Remark 1. Assumption 3 provides a lower bound on the absolute difference between the true mean ${\mathbf{x}}_{1}^{\top }\mathbf{\theta }$ and the normalized mean ${\mathbf{z}}_{1}^{\top }\mathbf{\theta }$ of the optimal arm. Note that if $\lambda \rightarrow 0$ , then ${\mathbf{z}}_{1} \rightarrow {\mathbf{x}}_{1}$ and ${S}_{1} \rightarrow 0$ . Thus the scalar ${S}_{1}$ measures the small perturbation of the optimal arm's mean when the ridge regression procedure is applied. The matrix $\mathbf{Z}$ can be interpreted as a ridge-shrunken context matrix [Goldstein and Smith, 1974]. An important phenomenon of online ridge regression is that even though the ridge estimator is biased, the shrinkage effect of ridge estimation provides exploration that leads the agent to correct decisions. The positive scalar ${S}_{1}$ describes the shrinkage effect on the context: the existence of ${S}_{1}$ indicates that the ridge procedure is valid and its shrinkage effect is present.
100
+
101
+ 2) The fitting part of LinReBoot outputs the residuals under the linear model framework,
102
+
103
+ $$
104
+ {e}_{k, t, i} = {r}_{k, i} - {\widehat{\mu }}_{k, t},\forall i \in \left\lbrack {s}_{k, t - 1}\right\rbrack , \tag{5}
105
+ $$
106
+
107
+ where ${s}_{k, t - 1} \mathrel{\text{:=}} \mathop{\sum }\limits_{{\tau = 1}}^{{t - 1}}\mathbb{I}\left\{ {{I}_{\tau } = k}\right\}$ is the number of times arm $k$ has been pulled by round $t - 1$ , and ${r}_{k, i}$ is the $i$ -th reward of arm $k$ by round $t - 1$ . The goodness of fit of the learned ridge regression model can be summarized by the Residual Sum of Squares (RSS) [Archdeacon, 1994], defined as
108
+
109
+ $$
110
+ {RS}{S}_{k, t} \mathrel{\text{:=}} \mathop{\sum }\limits_{{i = 1}}^{{s}_{k, t - 1}}{e}_{k, t, i}^{2}. \tag{6}
111
+ $$
112
+
113
+ This measure plays an important role in the residual bootstrap exploration mechanism.
114
+
115
+ 3) The third part is Residual Bootstrapping. This subroutine is independent of the model, which underlies the generalizability of the ReBoot principle. The ReBoot principle requires the computation of the exploration bonus [Mammen, 1993], which is ${s}_{k, t - 1}^{-1}\mathop{\sum }\limits_{{i = 1}}^{{s}_{k, t - 1}}{\omega }_{k, t, i}{e}_{k, t, i}$ , where ${\left\{ {\omega }_{k, t, i}\right\} }_{i = 1}^{{s}_{k, t - 1}}$ are the residual bootstrap weights for arm $k$ at round $t$ .
116
+
117
+ Choice of Bootstrapping Weights. The bootstrap weights considered in this work are i.i.d. with zero mean and variance ${\sigma }_{\omega }^{2}$ , and are independent of the noise process ${\left\{ {\epsilon }_{t}\right\} }_{t = 1}^{\infty }$ . In the bootstrap literature [Mammen, 1993], choices of the bootstrap weight distribution include Gaussian weights, Rademacher weights and skew-correcting weights. In LinReBoot, we adopt Gaussian bootstrap weights to enable the efficient implementation described in Section 3.3.
118
+
119
+ 4) The last subroutine is action exploring based on the residual bootstrap. More specifically, for arm $k$ at round $t$ , LinReBoot adds the exploration bonus from residual bootstrapping to the estimated mean ${\widehat{\mu }}_{k, t}$ as follows,
120
+
121
+ $$
122
+ {\widetilde{\mu }}_{k, t} = {\widehat{\mu }}_{k, t} + \frac{1}{{s}_{k, t - 1}}\mathop{\sum }\limits_{{i = 1}}^{{s}_{k, t - 1}}{\omega }_{k, t, i}{e}_{k, t, i}, \tag{7}
123
+ $$
124
+
125
+ then the agent pulls the arm with the highest bootstrapped
126
+
127
+ Algorithm 1 LinReBoot
128
+
129
+ ---
130
+
131
+ Require: $\lambda ,{s}_{1,0} = \ldots = {s}_{K,0} = 0$
132
+
133
+ for $t = 1,\ldots , n$ do
134
+
135
+ if $t < K + 1$ then
136
+
137
+ ${I}_{t} \leftarrow t$
138
+
139
+ else
140
+
141
+ ${\mathbf{V}}_{t} \leftarrow {\mathbf{X}}_{t - 1}^{\top }{\mathbf{X}}_{t - 1} + \lambda \mathbf{I}$
142
+
143
+ ${\widehat{\mathbf{\theta }}}_{t} \leftarrow {\mathbf{V}}_{t}^{-1}{\mathbf{X}}_{t - 1}^{\top }{\mathbf{Y}}_{t - 1}$
144
+
145
+ for $k = 1,\ldots , K$ do
146
+
147
+ ${e}_{k, t, i} \leftarrow {r}_{k, i} - {\mathbf{x}}_{k}^{\top }{\widehat{\mathbf{\theta }}}_{t},\forall i \in \left\lbrack {s}_{k, t - 1}\right\rbrack$
148
+
149
+ Generate ${\left\{ {\omega }_{k, t, i}\right\} }_{i = 1}^{{s}_{k, t - 1}}$
150
+
151
+ ${\widetilde{\mu }}_{k} \leftarrow {\mathbf{x}}_{k}^{\top }{\widehat{\mathbf{\theta }}}_{t} + {s}_{k, t - 1}^{-1}\mathop{\sum }\limits_{{i = 1}}^{{s}_{k, t - 1}}{\omega }_{k, t, i}{e}_{k, t, i}$
152
+
153
+ end for
154
+
155
+ ${I}_{t} \leftarrow \arg \mathop{\max }\limits_{{k \in \left\lbrack K\right\rbrack }}{\widetilde{\mu }}_{k}$
156
+
158
+
159
+ end if
160
+
161
+ ${s}_{{I}_{t}, t} \leftarrow {s}_{{I}_{t}, t - 1} + 1$ and ${s}_{k, t} \leftarrow {s}_{k, t - 1},\forall k \neq {I}_{t}$
162
+
163
+ Pull arm ${I}_{t}$ and get reward ${r}_{{I}_{t},{s}_{{I}_{t}}}$
164
+
165
+ ${\mathbf{X}}_{t} \leftarrow \left\lbrack \begin{matrix} {\mathbf{X}}_{t - 1} \\ {\mathbf{x}}_{{I}_{t}}^{\top } \end{matrix}\right\rbrack$ and ${\mathbf{Y}}_{t} \leftarrow \left\lbrack \begin{matrix} {\mathbf{Y}}_{t - 1} \\ {r}_{{I}_{t},{s}_{{I}_{t}}} \end{matrix}\right\rbrack$
166
+
167
+ end for
168
+
169
+ ---
170
+
171
+ mean,
172
+
173
+ $$
174
+ {I}_{t} \equiv \arg \mathop{\max }\limits_{{k \in \left\lbrack K\right\rbrack }}{\widetilde{\mu }}_{k, t} \tag{8}
175
+ $$
176
+
177
+ Note that the variance of the bootstrapped mean ${\widetilde{\mu }}_{k, t}$ is ${\sigma }_{\omega }^{2}{s}_{k, t - 1}^{-2}{RS}{S}_{k, t}$ , indicating that the amount of extra exploration adapts to ${s}_{k, t - 1}$ and ${RS}{S}_{k, t}$ .
178
+
179
+ Short Summary. Our proposed LinReBoot takes the following steps at round $t > K$ :
180
+
181
+ 1) Ridge estimation: compute ${\mathbf{V}}_{t},{\widehat{\mathbf{\theta }}}_{t}$ .
182
+
183
+ 2) Finding residuals for each arm: for arm $k$ , compute ${\widehat{\mu }}_{k, t}$ and ${\left\{ {e}_{k, t, i}\right\} }_{i = 1}^{{s}_{k, t - 1}}$ .
184
+
185
+ 3) Compute Bootstrapped mean for each arm: for arm $k$ , generate ${\left\{ {\omega }_{k, t, i}\right\} }_{i = 1}^{{s}_{k, t - 1}}$ and compute ${\widetilde{\mu }}_{k, t}$ (7).
186
+
187
+ 4) Pull arm with the highest ${\widetilde{\mu }}_{k, t}$ then observe reward.
188
+
189
+ Algorithm 1 describes LinReBoot. The strength of LinReBoot is its easy generalizability across different bandit problems, including linear bandits and even more complicated structured problems (Appendix D.1).
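A single round of the loop in Algorithm 1 (for $t > K$) can be sketched directly from the four summary steps above; this is a minimal numpy rendering in which the function signature and bookkeeping containers are our own choices, not the paper's reference code:

```python
import numpy as np

def linreboot_round(X, Y, pulls, contexts, rewards_by_arm,
                    lam=1.0, sigma_w=1.0, rng=None):
    """One round (t > K) of LinReBoot, following Algorithm 1.

    X              : (t-1, d) contexts of the arms pulled so far
    Y              : (t-1,)  observed rewards so far
    pulls          : pulls[k] = s_{k,t-1}, pull count of arm k (all > 0)
    contexts       : (K, d)  arm contexts x_1, ..., x_K
    rewards_by_arm : rewards_by_arm[k] = array of rewards {r_{k,i}}
    Returns the chosen arm index I_t and the bootstrapped means.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = X.shape[1]
    V = X.T @ X + lam * np.eye(d)                      # Eq. (4a)
    theta_hat = np.linalg.solve(V, X.T @ Y)            # Eq. (4b)
    mu_hat = contexts @ theta_hat                      # Eq. (4c)
    mu_tilde = np.empty(len(contexts))
    for k in range(len(contexts)):
        e = rewards_by_arm[k] - mu_hat[k]              # residuals, Eq. (5)
        w = rng.normal(0.0, sigma_w, size=pulls[k])    # Gaussian bootstrap weights
        mu_tilde[k] = mu_hat[k] + (w @ e) / pulls[k]   # bootstrapped mean, Eq. (7)
    return int(np.argmax(mu_tilde)), mu_tilde          # greedy policy, Eq. (8)
```

After the pull, the caller appends $\mathbf{x}_{I_t}$ to `X`, the new reward to `Y` and to `rewards_by_arm[I_t]`, and increments `pulls[I_t]`, mirroring the last lines of Algorithm 1.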
190
+
191
+ Remark 2. (LinTS perturbs the system parameter estimate; LinReBoot perturbs the expected reward estimates) Compare with LinTS [Agrawal and Goyal, 2013b], which samples a perturbed parameter ${\widetilde{\mathbf{\theta }}}_{t}^{\text{LinTS }} = {\widehat{\mathbf{\theta }}}_{t} + {\beta }_{t}{\mathbf{V}}_{t}^{-1/2}{\mathbf{\eta }}_{t}$ with scaling ${\beta }_{t}$ and appropriate independent noise ${\mathbf{\eta }}_{t}$ (defined in [Agrawal and Goyal, 2013b]). Our proposed LinReBoot samples a perturbed expected reward ${\widetilde{\mu }}_{k, t}^{\text{LinReBoot}} = \left\langle {{\widehat{\mathbf{\theta }}}_{t},{\mathbf{x}}_{k}}\right\rangle + \frac{1}{{s}_{k, t - 1}}\mathop{\sum }\limits_{{i = 1}}^{{s}_{k, t - 1}}{\omega }_{k, t, i}{e}_{k, t, i}.$ That is, LinReBoot perturbs the expected reward estimate via the prediction-error uncertainty, which is supervised by the real rewards. In contrast, LinTS perturbs the system parameter, which can be wrong if the system modeling is wrong.
192
+
193
+ ### 3.3 EFFICIENT IMPLEMENTATION
194
+
195
+ By the attractive computational properties of the Gaussian distribution, the computational cost of LinReBoot can be reduced significantly when Gaussian bootstrap weights are used. Formally, assume ${\omega }_{k, t, i} \sim N\left( {0,{\sigma }_{\omega }^{2}}\right) ,\forall k, t, i$ . Recalling (7), for $k \in \left\lbrack K\right\rbrack$ and any $t \geq 1$ , the bootstrapped mean ${\widetilde{\mu }}_{k, t}$ follows a Gaussian distribution,
196
+
197
+ $$
198
+ {\widetilde{\mu }}_{k, t} \mid {\mathcal{F}}_{t - 1} \sim N\left( {{\widehat{\mu }}_{k, t},{\sigma }_{\omega }^{2}{s}_{k, t - 1}^{-2}{RS}{S}_{k, t}}\right) . \tag{9}
199
+ $$
200
+
201
+ This Gaussian distribution of ${\widetilde{\mu }}_{k, t}$ indicates that if we can update ${\widehat{\mu }}_{k, t}$ , ${s}_{k, t - 1}$ and ${RS}{S}_{k, t}$ incrementally for arm $k$ , then the bootstrapped mean ${\widetilde{\mu }}_{k, t}$ can be drawn from a Gaussian generator without an inner loop for generating weights. The first two quantities, ${\widehat{\mu }}_{k, t}$ and ${s}_{k, t - 1}$ , are naturally updated in an incremental manner. For ${RS}{S}_{k, t}$ , the following decomposition ensures an incremental update,
202
+
203
+ $$
204
+ {RS}{S}_{k, t} = \mathop{\sum }\limits_{{i = 1}}^{{s}_{k, t - 1}}{r}_{k, i}^{2} + {s}_{k, t - 1}{\widehat{\mu }}_{k, t}^{2} - 2{\widehat{\mu }}_{k, t}\mathop{\sum }\limits_{{i = 1}}^{{s}_{k, t - 1}}{r}_{k, i}.
205
+ $$
206
+
207
+ Then efficient generation of ${\widetilde{\mu }}_{k, t} \mid {\mathcal{F}}_{t - 1}$ is ensured by incremental updates of ${\widehat{\mu }}_{k, t}$ , ${s}_{k, t - 1}$ , $\mathop{\sum }\limits_{{i = 1}}^{{s}_{k, t - 1}}{r}_{k, i}^{2}$ and $\mathop{\sum }\limits_{{i = 1}}^{{s}_{k, t - 1}}{r}_{k, i}$ . Furthermore, since the residual bootstrap weights are generated independently, the ${\widetilde{\mu }}_{k, t}$ are also independent across arms given the historical randomness and can be sampled simultaneously in a single multivariate Gaussian draw. Formally, ${\widetilde{\mathbf{\mu }}}^{\left( t\right) } = {\left( {\widetilde{\mu }}_{1, t},\ldots ,{\widetilde{\mu }}_{K, t}\right) }^{\top }$ is conditionally distributed as
208
+
209
+ $$
210
+ {\widetilde{\mathbf{\mu }}}^{\left( t\right) } \mid {\mathcal{F}}_{t - 1} \sim {N}_{K}\left( {{\widehat{\mathbf{\mu }}}^{\left( t\right) },{\mathbf{\Sigma }}_{\omega }^{\left( t\right) }}\right) , \tag{10}
211
+ $$
212
+
213
+ where ${\widehat{\mathbf{\mu }}}^{\left( t\right) } = {\left( {\widehat{\mu }}_{1, t},\ldots ,{\widehat{\mu }}_{K, t}\right) }^{\top }$ and ${\mathbf{\Sigma }}_{\omega }^{\left( t\right) }$ is a diagonal matrix with diagonal elements ${\sigma }_{\omega }^{2}{s}_{k, t - 1}^{-2}{RS}{S}_{k, t}$ . Detailed steps and further illustration of the efficient implementation are provided in Appendix D.7.1. Moreover, an empirical study of computational efficiency is conducted in Appendix D.7.2, and Table 3 provides the computational cost of our proposed LinReBoot as well as the baseline algorithms.
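The incremental bookkeeping above can be sketched as a small per-arm statistics object plus a single Gaussian draw per round; class and function names here are illustrative, not from the paper:

```python
import numpy as np

class ArmStats:
    """Per-arm running statistics enabling O(1) sampling of the
    bootstrapped mean in Eq. (9), with no inner loop over weights."""
    def __init__(self):
        self.s = 0          # pull count s_{k,t-1}
        self.sum_r = 0.0    # running sum of rewards
        self.sum_r2 = 0.0   # running sum of squared rewards

    def update(self, r):
        self.s += 1
        self.sum_r += r
        self.sum_r2 += r * r

    def rss(self, mu_hat):
        # RSS_{k,t} = sum r^2 + s * mu_hat^2 - 2 * mu_hat * sum r
        return self.sum_r2 + self.s * mu_hat ** 2 - 2.0 * mu_hat * self.sum_r

def sample_bootstrapped_means(stats, mu_hat, sigma_w=1.0, rng=None):
    """One (diagonal) multivariate Gaussian draw for all arms, Eq. (10)."""
    rng = np.random.default_rng() if rng is None else rng
    var = np.array([sigma_w ** 2 * st.rss(m) / st.s ** 2
                    for st, m in zip(stats, mu_hat)])
    return rng.normal(mu_hat, np.sqrt(np.maximum(var, 0.0)))
```

The `rss` method is exactly the displayed decomposition, so the residuals themselves never need to be stored once the two running sums are kept.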
214
+
215
+ ## 4 OPTIMISM DESIGN
216
+
217
+ Optimistic Estimated Discrepancy. This section identifies and demystifies the technical challenge of implementing the ReBoot principle in the stochastic linear bandit problem. The key is a detailed investigation producing probabilistic control on the behavior of the 'Optimistic Estimate Discrepancy (OED)' of the LinReBoot policy (8). In principle, the OED is given by
218
+
219
+ $$
220
+ \mathbf{{OED}} = \text{Optimism} \times \text{Action Context Norm,} \tag{11}
221
+ $$
222
+
223
+ where the Action Context Norm is ${\begin{Vmatrix}{\mathbf{x}}_{k}\end{Vmatrix}}_{{\mathbf{V}}_{t}^{-1}}$ and the Optimism is ${c}_{t, k}$ for the $k$ -th action at time $t$ , defined in (14). The design of ${c}_{t, k}$ is elaborated in Section 4.1.
224
+
225
+ Sufficiently Explored Arms. We define the concept of sufficiently explored arms to facilitate the formal regret analysis of LinReBoot. Intuitively, an arm is sufficiently explored if the index produced for it by the policy (8) is less than the mean reward of the optimal arm. Technically, we say an arm $k$ is sufficiently explored at time $t$ if its OED $\left( {{c}_{t, k}{\begin{Vmatrix}{\mathbf{x}}_{k}\end{Vmatrix}}_{{\mathbf{V}}_{t}^{-1}}}\right)$ is bounded by its optimality gap $\left( {\Delta }_{k}\right)$ .
226
+
227
+ The above notion defines the "set of sufficiently explored arms" ${\mathcal{S}}_{t}$ , formally
228
+
229
+ $$
230
+ {\mathcal{S}}_{t} \mathrel{\text{:=}} \left\{ {k \in \left\lbrack K\right\rbrack : {c}_{t, k}{\begin{Vmatrix}{\mathbf{x}}_{k}\end{Vmatrix}}_{{\mathbf{V}}_{t}^{-1}} < {\Delta }_{k}}\right\} , \tag{12}
231
+ $$
232
+
233
+ where ${c}_{t, k}$ is the collaborated optimism and ${c}_{t, k}{\begin{Vmatrix}{\mathbf{x}}_{k}\end{Vmatrix}}_{{\mathbf{V}}_{t}^{-1}}$ is an optimistic estimate of the discrepancy of the policy index (8).
234
+
235
+ The key consequence of the set (12) is that any member of ${\mathcal{S}}_{t}$ enjoys the property
236
+
237
+ $$
238
+ \forall j \in {\mathcal{S}}_{t} \cap \left\lbrack K\right\rbrack : {\widetilde{\mu }}_{j, t} < {\mu }_{1}; \tag{13}
239
+ $$
240
+
241
+ that is, unless all arms are sufficiently explored, the LinReBoot policy (8) avoids pulling any arm from the sufficiently explored subset, whose bootstrapped mean is less than the optimal mean reward (see equation (82) in the proof of Lemma A.1 in Section B.1 for technical details).
242
+
243
+ ### 4.1 COLLABORATE OPTIMISM
244
+
245
+ Here we elaborate on the collaborated optimism adopted in the definition of sufficiently explored arms (12). Concretely, the collaborated optimism has the form
246
+
247
+ $$
248
+ {c}_{t, k} = {c}_{1}\left( {t, k}\right) + {c}_{2}\left( {t, k}\right) , \tag{14}
249
+ $$
250
+
251
+ where ${c}_{1}\left( {t, k}\right)$ is called sample optimism and ${c}_{2}\left( {t, k}\right)$ is called bootstrap optimism for arm $k$ at time $t$ .
252
+
253
+ Sample Optimism. The sample optimism ${c}_{1}\left( {t, k}\right)$ serves as a control on the event that "the realized sample estimate discrepancy (ED) is bounded by sample OED":
254
+
255
+ $$
256
+ {E}_{t, k} \mathrel{\text{:=}} \left\{ {\left| {{\widehat{\mu }}_{k, t} - {\mu }_{k}}\right| \leq {c}_{1}\left( {t, k}\right) {\begin{Vmatrix}{\mathbf{x}}_{k}\end{Vmatrix}}_{{\mathbf{V}}_{t}^{-1}}}\right\} , \tag{15a}
257
+ $$
258
+
259
+ $$
260
+ {E}_{t} \mathrel{\text{:=}} \mathop{\bigcap }\limits_{{k = 1}}^{K}{E}_{t, k} \tag{15b}
261
+ $$
262
+
263
+ where ${c}_{1}\left( {t, k}\right)$ is a constant that can be tuned in our LinReBoot algorithm, making the bad events ${\bar{E}}_{t, k}$ and ${\bar{E}}_{t}$ unlikely. In fact, ${E}_{t, k}$ is the event that the least squares estimate is "close" to the true mean reward of arm $k$ at round $t$ . In Section 5, the probability of the bad event ${\bar{E}}_{t}$ is controlled by a user-tuned parameter based on Lemma 5.1.
264
+
265
+ Bootstrap Optimism. The bootstrap optimism ${c}_{2}\left( {t, k}\right)$ serves as a control on the event that "the realized bootstrap ED is bounded by bootstrap OED":
266
+
267
+ $$
268
+ {E}_{t, k}^{\prime } \mathrel{\text{:=}} \left\{ {\left| {{\widetilde{\mu }}_{k, t} - {\widehat{\mu }}_{k, t}}\right| \leq {c}_{2}\left( {t, k}\right) {\begin{Vmatrix}{\mathbf{x}}_{k}\end{Vmatrix}}_{{\mathbf{V}}_{t}^{-1}}}\right\} , \tag{16a}
269
+ $$
270
+
271
+ $$
272
+ {E}_{t}^{\prime } \mathrel{\text{:=}} \mathop{\bigcap }\limits_{{k = 1}}^{K}{E}_{t, k}^{\prime } \tag{16b}
273
+ $$
274
+
275
+ where ${c}_{2}\left( {t, k}\right)$ is also a constant, controlling the conditional probability of the bad event ${\bar{E}}_{t}^{\prime }$ ; it can likewise be tuned in our LinReBoot algorithm. Similar to ${E}_{t, k}$ , the event ${E}_{t, k}^{\prime }$ says that the residual-bootstrap-based estimate is "close" to the least squares estimate ${\widehat{\mu }}_{k, t}$ for arm $k$ at round $t$ . In Section 5, the probability of the bad event ${\bar{E}}_{t}^{\prime }$ is controlled by a user-tuned parameter based on Lemma 5.2.
276
+
277
+ ### 4.2 OPTIMISM DESIGN
278
+
279
+ Choice of sample optimism $\left( \alpha \right)$ . The goal of this part is to illustrate how to pick the sample OED such that the event (15) holds with probability at least $1 - \alpha$ for a given confidence budget $\alpha \in \left( {0,1}\right)$ . Formally, the goal is to find a sample OED function ${c}_{1}\left( {t, k}\right) : \left\lbrack n\right\rbrack \times \left\lbrack K\right\rbrack \mapsto \mathbb{R}$ such that the event (15a) holds with probability at least $1 - {\alpha }_{k}$ . To meet the purpose of risk control, we specify the sample OED function as
280
+
281
+ $$
282
+ {c}_{1}\left( {t, k}\right) \mathrel{\text{:=}} {L}_{2}\sqrt{d\log \left( {\left( {1 + t{L}^{2}/\lambda }\right) /{\alpha }_{k}}\right) } + {\lambda }^{1/2}{S}_{2}. \tag{17}
283
+ $$
284
+
285
+ Lemma 5.1 gives the formal result on why this choice has confidence budget at most ${\alpha }_{k}$ . For the regret analysis, define ${\alpha }_{\min } = \mathop{\min }\limits_{{k \in \left\lbrack K\right\rbrack }}{\alpha }_{k}$ and $\mathbf{\alpha } = {\left( {\alpha }_{1},\ldots ,{\alpha }_{K}\right) }^{\top }$ .
286
+
287
+ Choice of bootstrap optimism $\left( \beta \right)$ . The goal of this part is to pick the bootstrap OED such that the event (16) holds with probability at least $1 - \beta$ for a given confidence budget $\beta \in \left( {0,1}\right)$ . Formally, the goal is to find a bootstrap OED function ${c}_{2}\left( {t, k}\right) : \left\lbrack n\right\rbrack \times \left\lbrack K\right\rbrack \mapsto \mathbb{R}$ such that the event (16a) holds with probability at least $1 - {\beta }_{k}$ . To meet the purpose of risk control, we specify the bootstrap OED function as
288
+
289
+ $$
290
+ {c}_{2}\left( {t, k}\right) \mathrel{\text{:=}} \sqrt{\frac{2{\sigma }_{\omega }^{2}{RS}{S}_{k, t}\log \left( {2/{\beta }_{k}}\right) }{{s}_{k, t - 1}^{2}{\begin{Vmatrix}{\mathbf{x}}_{k}\end{Vmatrix}}_{{\mathbf{V}}_{t}^{-1}}^{2}}}. \tag{18}
291
+ $$
292
+
294
+
295
+ Lemma 5.2 gives the formal result on why this choice has confidence budget at most ${\beta }_{k}$ . For the regret analysis, let ${\beta }_{\min }$ be the smallest ${\beta }_{k},\forall k \in \left\lbrack K\right\rbrack$ , and $\mathbf{\beta } = {\left( {\beta }_{1},\ldots ,{\beta }_{K}\right) }^{\top }$ .
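The two optimism terms (17)-(18) and the sufficient-exploration test (12) are plain formulas, so they can be computed directly; the function and parameter names below are ours (with `R2` standing for the sub-Gaussian scale constant appearing in (17) and `S2` for the norm bound on $\mathbf{\theta}$):

```python
import numpy as np

def c1(t, alpha_k, d, L, lam, R2, S2):
    """Sample optimism, Eq. (17)."""
    return R2 * np.sqrt(d * np.log((1 + t * L ** 2 / lam) / alpha_k)) \
        + np.sqrt(lam) * S2

def c2(rss_kt, s_k, x_norm_vinv, beta_k, sigma_w):
    """Bootstrap optimism, Eq. (18); x_norm_vinv is ||x_k||_{V_t^{-1}}."""
    return np.sqrt(2 * sigma_w ** 2 * rss_kt * np.log(2 / beta_k)
                   / (s_k ** 2 * x_norm_vinv ** 2))

def sufficiently_explored(c_tk, x_norm_vinv, gap_k):
    """Membership test for the set S_t in Eq. (12)."""
    return c_tk * x_norm_vinv < gap_k
```

As expected from (17), shrinking the confidence budget $\alpha_k$ enlarges the sample optimism, so the sufficiently-explored set $\mathcal{S}_t$ becomes harder to enter.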
296
+
297
+ ### 4.3 OPTIMISM FOR OPTIMAL ARM
298
+
299
+ Sample-Bootstrap OED ratio of the optimal arm (b). As indicated by the regret analysis in [Kveton et al., 2019a], instead of controlling the two explorations independently, the relation between the two sources of exploration needs to be considered, because this relation is critical for finding the optimal action. To capture this, we define a good event,
300
+
301
+ $$
302
+ {E}_{t}^{\prime \prime } \mathrel{\text{:=}} \left\{ {{\widetilde{\mu }}_{1, t} - {\widehat{\mu }}_{1, t} > {c}_{1}\left( {t,1}\right) {\begin{Vmatrix}{\mathbf{x}}_{1}\end{Vmatrix}}_{{\mathbf{V}}_{t}^{-1}}}\right\} . \tag{19}
303
+ $$
304
+
305
+ <table><tr><td>Notation</td><td>Definition</td></tr><tr><td>${\zeta }_{1}\left( {n, d}\right)$</td><td>$\left( {{L}_{2}\sqrt{d\log \left( \frac{1 + n{L}^{2}/\lambda }{{\alpha }_{\min }}\right) } + {\lambda }^{1/2}{S}_{2}}\right) \times$ $\sqrt{2\left( {n - K}\right) d\log \left( {1 + \mathop{\sum }\limits_{{i = 1}}^{r}{\sigma }_{i}^{2}/{d\lambda }}\right) }$</td></tr><tr><td>${\zeta }_{2}\left( {n, d}\right)$</td><td>$\sqrt{2{\sigma }_{\omega }^{2}\log \left( \frac{2}{{\beta }_{\min }}\right) } \times$ $\sqrt{2\left( {n - K}\right) d\log \left( {1 + \mathop{\sum }\limits_{{i = 1}}^{r}{\sigma }_{i}^{2}/{d\lambda }}\right) }$</td></tr><tr><td>${\zeta }_{3}\left( n\right)$</td><td>${2K}\sqrt{4{L}_{2}{\sigma }_{\omega }^{2}\log \left( \frac{2}{{\beta }_{\min }}\right) \left( {\log n + 1}\right) }$</td></tr><tr><td>${\zeta }_{4}\left( n\right)$</td><td>$2{S}_{2}L\left( {\left( {n - K}\right) \left( {\alpha + \beta }\right) + K - 1}\right)$</td></tr></table>
306
+
307
+ Table 1: Notations in Regret Analysis
308
+
309
+ Given the good event ${E}_{t}^{\prime \prime }$ , the policy index ${\widetilde{\mu }}_{1, t}$ of the optimal arm enjoys further positive bias; hence the agent has a better chance of taking the optimal action.
310
+
311
+ In particular, we highlight a constant $b$ used to measure the ratio of the sample optimism (17) to the bootstrap optimism (18); formally, we require that $b$ satisfy
312
+
313
+ $$
314
+ {c}_{1}\left( {t,1}\right) /{c}_{2}\left( {t,1}\right) \geq b \cdot \sqrt{2\log \left( {2/{\beta }_{1}}\right) }\text{.} \tag{20}
315
+ $$
316
+
317
+ Intuitively, the constant $b$ measures the relation between the sample OED and the bootstrap OED of the optimal arm. This $b$ plays an important role in the probability lower bound of the event (19) (see Lemma 5.3). Note that if (20) holds, we have the lower bound (26); otherwise, we have the lower bound (27). In both cases, we have a lower bound for the event (19).
318
+
319
+ Good event for the optimal arm $\left( \gamma \right)$ . Here we introduce the event that over-exploration and under-exploration of the optimal arm are avoided simultaneously. Formally, the constant $\gamma$ controls the probability that the bandit index (8) neither over-explores (event ${E}_{t}^{\prime }$ ) nor under-explores (event ${E}_{t}^{\prime \prime }$ ):
320
+
321
+ $$
322
+ \left\{ {{c}_{1}\left( {t,1}\right) < \left( {{\widetilde{\mu }}_{1, t} - {\widehat{\mu }}_{1, t}}\right) /{\begin{Vmatrix}{\mathbf{x}}_{1}\end{Vmatrix}}_{{\mathbf{V}}_{t}^{-1}} < {c}_{2}\left( {t,1}\right) }\right\} . \tag{21}
323
+ $$
324
+
325
+ Technically, we can show that the probability of the event (21) is lower bounded by the term
326
+
327
+ $$
328
+ {\mathbb{P}}_{t}\left( {E}_{t}^{\prime \prime }\right) - {\mathbb{P}}_{t}\left( {\bar{E}}_{t}^{\prime }\right) , \tag{22}
329
+ $$
330
+
331
+ with probability at least $1 - \gamma$ (Lemma 5.4). This lower bound is translated into an upper bound in the regret analysis.
332
+
333
+ ## 5 FORMAL RESULTS
334
+
335
+ ### 5.1 REGRET BOUND FOR LINREBOOT
336
+
337
+ Theorem 5.1. Under Assumptions 1, 2, 3 and technical conditions (32) and (74), with probability at least $1 - \left( {\delta + \gamma }\right)$ , the expected regret of Algorithm 1 is bounded as,
338
+
339
+ $$
340
+ {R}_{n} \leq {C}_{1}\left( {{\alpha }_{1},\mathbf{\beta },\gamma , b}\right) {\zeta }_{1}\left( {n, d}\right)
341
+ $$
342
+
343
+ $$
344
+ + {C}_{2}\left( {\mathbf{\alpha },\mathbf{\beta },\gamma , b,\delta }\right) {\zeta }_{2}\left( {n, d}\right) \tag{23}
345
+ $$
346
+
347
+ $$
348
+ + {C}_{1}\left( {{\alpha }_{1},\mathbf{\beta },\gamma , b}\right) {\zeta }_{3}\left( n\right) + {\zeta }_{4}\left( n\right) ,
349
+ $$
350
+
351
+ where ${\zeta }_{1},{\zeta }_{2},{\zeta }_{3}$ and ${\zeta }_{4}$ are defined in Table 1, and ${C}_{1}$ , ${C}_{2},{M}_{1},{M}_{2}$ are described in Table 2.
352
+
353
+ Proof. See appendix A.1.
354
+
355
+ Corollary 5.2. Let $\mathbf{\alpha } = \mathbf{\beta } = \frac{1}{\sqrt{n}}\mathbf{1}$ ; then the order of the high-probability upper bound in Theorem 5.1 is $\widetilde{O}\left( {d\sqrt{n}}\right)$ .
356
+
357
+ Proof. See appendix A.2.
358
+
359
+ Corollary 5.2 shows that our regret bound scales as the regret bounds of Linear Thompson Sampling [Agrawal and Goyal, 2013b] and Linear PHE [Kveton et al., 2019a].
360
+
361
+ ### 5.2 VALIDATE SAMPLE OPTIMISM
362
+
363
+ Lemma 5.1. Under Assumptions 1, 2, 3, and choosing ${c}_{1}\left( {t, k}\right)$ as in (17), the probability $\mathbb{P}\left( {\bar{E}}_{t, k}\right)$ of the bad event corresponding to the least squares estimate described in (15) is controlled. Formally, $\forall k \in \left\lbrack K\right\rbrack ,\forall {\alpha }_{k} > 0,\forall t \geq 1$ ,
364
+
365
+ $$
366
+ \mathbb{P}\left( {\left| {{\widehat{\mu }}_{k, t} - {\mu }_{k}}\right| \leq {c}_{1}\left( {t, k}\right) {\begin{Vmatrix}{\mathbf{x}}_{k}\end{Vmatrix}}_{{\mathbf{V}}_{t}^{-1}}}\right) \geq 1 - {\alpha }_{k}. \tag{24}
367
+ $$
368
+
369
+ Consequently, we have $\mathbb{P}\left( {\bar{E}}_{t}\right) \leq \alpha \mathrel{\text{:=}} \mathop{\sum }\limits_{{k = 1}}^{K}{\alpha }_{k}$ .
370
+
371
+ Proof. See appendix A.3.
372
+
373
+ Lemma 5.1 shows that the choice of ${c}_{1}\left( {t, k}\right)$ in (17) for the sample optimism event (15) is valid with confidence budget $\alpha$ .
374
+
375
+ ### 5.3 VALIDATE BOOTSTRAP OPTIMISM
376
+
377
+ Lemma 5.2. Suppose the bootstrap weights are Gaussian and pick ${c}_{2}\left( {t, k}\right)$ as in (18). Then the conditional probability ${\mathbb{P}}_{t}\left( {\bar{E}}_{t, k}^{\prime }\right)$ of the bad event corresponding to residual bootstrap exploration described in (16) is controlled. Formally, $\forall k \in \left\lbrack K\right\rbrack ,\forall {\beta }_{k} > 0,\forall t \geq 1$ ,
378
+
379
+ $$
380
+ {\mathbb{P}}_{t}\left( {\left| {{\widetilde{\mu }}_{k, t} - {\widehat{\mu }}_{k, t}}\right| \leq {c}_{2}\left( {t, k}\right) {\begin{Vmatrix}{\mathbf{x}}_{k}\end{Vmatrix}}_{{\mathbf{V}}_{t}^{-1}}}\right) \geq 1 - {\beta }_{k}. \tag{25}
381
+ $$
382
+
383
+ Consequently, we have ${\mathbb{P}}_{t}\left( {\bar{E}}_{t}^{\prime }\right) \leq \beta \mathrel{\text{:=}} \mathop{\sum }\limits_{{k = 1}}^{K}{\beta }_{k}$ .
384
+
385
+ Proof. See appendix A.4.
386
+
387
+ Lemma 5.2 shows that the choice of ${c}_{2}\left( {t, k}\right)$ in (18) for the bootstrap optimism event (16) is valid with confidence budget $\beta$ .
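The Gaussian-weight concentration behind Lemma 5.2 is easy to check numerically: the bonus $s^{-1}\sum_i \omega_i e_i$ is Gaussian with variance $\sigma_\omega^2\,RSS/s^2$, so $|\widetilde\mu - \widehat\mu|$ should stay below $\sqrt{2\sigma_\omega^2\,RSS\log(2/\beta)}/s$ with probability at least $1-\beta$. A small Monte Carlo sketch (residual values and function name are our own choices):

```python
import numpy as np

def bootstrap_coverage(residuals, beta=0.1, sigma_w=1.0, n_mc=20000, seed=0):
    """Empirical frequency of the event in Eq. (25): the residual-bootstrap
    bonus stays within the threshold implied by c_2(t,k) of Eq. (18)."""
    rng = np.random.default_rng(seed)
    s = len(residuals)
    rss = float(np.sum(residuals ** 2))
    thresh = np.sqrt(2 * sigma_w ** 2 * rss * np.log(2 / beta)) / s
    w = rng.normal(0.0, sigma_w, size=(n_mc, s))   # Gaussian bootstrap weights
    bonus = (w @ residuals) / s                    # Eq. (7) exploration bonus
    return float(np.mean(np.abs(bonus) <= thresh))

cov = bootstrap_coverage(np.array([0.5, -0.3, 0.8, -0.1, 0.2]), beta=0.1)
```

The sub-Gaussian tail bound is conservative, so the empirical coverage typically lands well above the nominal $1-\beta$.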
388
+
389
+ ### 5.4 SAMPLE-BOOTSTRAP RATIO
390
+
391
+ Lemma 5.3. Under Assumptions 1, 2, 3, suppose the bootstrap weights are Gaussian. The conditional probability ${\mathbb{P}}_{t}\left( {E}_{t}^{\prime \prime }\right)$ of the anti-concentration event for the optimal arm described in (19) has a lower bound. Formally, if $b$ satisfies (20),
392
+
393
+ $$
394
+ {\mathbb{P}}_{t}\left( {E}_{t}^{\prime \prime }\right) \geq \frac{b}{\sqrt{2\pi }}\exp \left( {-\frac{3{c}_{1}^{2}\left( {t,1}\right) {s}_{1, t - 1}^{2}{\begin{Vmatrix}{\mathbf{x}}_{1}\end{Vmatrix}}_{{\mathbf{V}}_{t}^{-1}}^{2}}{2{\sigma }_{\omega }^{2}{RS}{S}_{1, t}}}\right) . \tag{26}
395
+ $$
396
+
398
+
399
+ Otherwise,
400
+
401
+ $$
402
+ {\mathbb{P}}_{t}\left( {E}_{t}^{\prime \prime }\right) \geq \Phi \left( {-b}\right) , \tag{27}
403
+ $$
404
+
405
+ where $\Phi$ is the CDF of the standard normal distribution.
406
+
407
+ Proof. See appendix A.5.
408
+
409
+ Lemma 5.3 provides the lower bound for the good event ${E}_{t}^{\prime \prime }$ . The result indicates that if the bootstrap optimism is not 'too large', then the LinReBoot procedure enjoys additional regret reduction.
410
+
411
+ ### 5.5 VALIDATE GOOD EVENT
412
+
413
+ Lemma 5.4. Under Assumptions 1, 2, 3, suppose the bootstrap weights are Gaussian and assume $b$ satisfies the technical condition (74). Then, with probability at least $1 - \gamma$ , ${\mathbb{P}}_{t}\left( {E}_{t}^{\prime \prime }\right) - {\mathbb{P}}_{t}\left( {\bar{E}}_{t}^{\prime }\right)$ is lower bounded by
414
+
415
+ $$
416
+ \frac{b}{\sqrt{2\pi }}\exp \left( {-\frac{3{s}_{1, t - 1}^{3/2}{c}_{1}^{2}\left( {t,1}\right) {\begin{Vmatrix}{\mathbf{x}}_{1}\end{Vmatrix}}_{2}^{2}}{8{\sigma }_{\omega }^{2}\left( {{\sigma }_{\min }^{2} + \lambda }\right) \sqrt{\frac{1}{{M}_{2}}\log \left( \frac{{M}_{1}}{1 - \gamma }\right) }}}\right) - \beta , \tag{28}
417
+ $$
418
+
420
+
421
+ where ${M}_{1}$ and ${M}_{2}$ are defined in Table 2.
422
+
423
+ Proof. See appendix A.6.
424
+
425
+ Lemma 5.4 provides a high-probability lower bound for the difference between the probability of the anti-concentration event ${E}_{t}^{\prime \prime }$ and the probability of the bad event discussed under bootstrap optimism in Section 4.1. This lower bound also applies to the probability of the 'neither under- nor over-exploration' event (21). Lemma 5.4 links the sample optimism and the bootstrap optimism and maintains the right amount of exploration of the optimal arm.
426
+
427
+ ## 6 EXPERIMENTS
428
+
429
+ In this section, we conduct empirical studies under three settings: Stochastic Linear Bandit, Contextual Linear Bandit and Linear Bandit with Covariates. Our LinReBoot is compared to several baselines including LinTS-G [Agrawal and Goyal, 2013b, Lattimore and Szepesvári, 2020], LinTS-IG [Honda and Takemura, 2014, Riquelme et al., 2018], LinPHE [Kveton et al., 2019a], LinGIRO [Kveton et al., 2019c] and LinUCB [Abbasi-Yadkori et al., 2011, Lattimore and Szepesvári, 2020]. More details about the baselines can be found in Appendix D.6.
430
+
431
+ ### 6.1 STOCHASTIC LINEAR BANDIT
432
+
433
+ We compare LinReBoot to other linear bandit algorithms under the stochastic linear bandit setting described in Section 2. We experiment with several dimensions $d$ , including 5, 10 and 20; $K$ is chosen as 100. Synthetic data generation for this setting is deferred to Appendix D.2 in the supplementary material. Results. The first row of Figure 1 reports the results for the Stochastic Linear Bandit setting. Our LinReBoot rivals LinTS-G and LinTS-IG while substantially exceeding LinGIRO, LinPHE and LinUCB. As $d$ increases, the performance of LinReBoot rivals and exceeds the best of the other methods.
434
+
435
+ ![01963979-8e67-7d36-841e-5241e9060320_7_266_145_1211_756_0.jpg](images/01963979-8e67-7d36-841e-5241e9060320_7_266_145_1211_756_0.jpg)
436
+
437
+ Figure 1: Comparison of LinReBoot with Gaussian bootstrap weights to baselines under three linear bandit problems and three different context dimensions $d$ . The first row refers to the setting in Section 6.1, the second row to Section 6.2 and the last row to Section 6.3. The three columns correspond to $d = 5$ , $d = 10$ and $d = 20$ , respectively.
438
+
439
+ ### 6.2 CONTEXTUAL LINEAR BANDIT
440
+
441
+ In the second experiment, we compare LinReBoot to other linear bandit algorithms under Contextual Linear Bandit, where the contexts are generated arm-by-arm from some distributions. Note that this setting matches previous work [Chu et al., 2011]. Linear bandit algorithms can also be applied in this kind of environment. In our experiment, LinReBoot is implemented as Algorithm 2 in Appendix D.1. As in Section 6.1, the dimension $d$ is chosen as 5, 10 or 20, and the synthetic data generation for this setting is described in Appendix D.2. Results. The second row of Figure 1 reports the results for Contextual Linear Bandit. Our LinReBoot rivals LinTS-G and substantially exceeds LinTS-IG, LinGIRO, LinPHE and LinUCB. When $d$ increases, the performance of LinReBoot rivals LinTS-IG and exceeds the others.
442
+
443
+ ### 6.3 BANDIT WITH COVARIATES
444
+
445
+ Our last experiment is conducted under the setting of linear bandit with covariates, also called the linearly parameterized bandit by Rusmevichientong and Tsitsiklis [2010]. This problem differs from the previous two problems in the following ways. Each arm has its own true parameter ${\mathbf{\theta }}_{k}$ ; that is, each arm has its own estimate ${\widehat{\mathbf{\theta }}}_{k}$ from the ridge regression procedure in Section 3.2. Also, unlike the setting in Section 6.2, the contexts are generated from a distribution that is independent of arms. Thus the overall task in this setting is not only the estimation of the target parameter $\mathbf{\theta }$ , but also the detection of which arm a context belongs to. This case is also referred to as online decision-making under covariates [Bastani and Bayati, 2020]. For LinReBoot in this setting, the detailed algorithm is provided as Algorithm 3 in Appendix D.1. $d$ is chosen as 5, 10 or 20 and $K = 10$ . Synthetic data generation for this setting is described in Appendix D.2. Results. The third row of Figure 1 reports the results for Linear Bandit with Covariates. Our LinReBoot exceeds all competing algorithms: LinTS-G, LinTS-IG, LinGIRO, LinPHE and LinUCB.
446
+
447
+ Summary. From Figure 1, the proposed LinReBoot is always among the top 3 algorithms under all settings and all choices of dimension $d$ . More specifically, LinReBoot is clearly comparable to the state-of-the-art Linear Thompson Sampling algorithms (LinTS-G, LinTS-IG) and even outperforms them in many cases. Regarding computational cost, from Table 3, our proposed LinReBoot is consistently computationally efficient compared to LinTS-G, LinTS-IG and LinUCB under all three settings.
448
+
449
+ ## 7 CONCLUSION
450
+
451
+ We propose the LinReBoot algorithm for stochastic linear bandit problems. In theory, we prove that LinReBoot secures $\widetilde{O}\left( {d\sqrt{n}}\right)$ high-probability expected regret. Empirically, we show LinReBoot rivals LinTS-G and LinTS-IG and exceeds LinPHE, LinGIRO and LinUCB, which supports the easy generalizability of the ReBoot principle [Wang et al., 2020] under various contextual bandit settings including Stochastic Linear Bandit, Contextual Linear Bandit, and Linear Bandit with Covariates.
452
+
453
+ Yasin Abbasi-Yadkori, Dávid Pál, and Csaba Szepesvári. Improved algorithms for linear stochastic bandits. Advances in neural information processing systems, 24:2312-2320, 2011.
454
+
455
+ Deepak Agarwal, Bee-Chung Chen, Pradheep Elango, Nitin Motgi, Seung-Taek Park, Raghu Ramakrish-nan, Scott Roy, and Joe Zachariah. Online models for content optimization. In Advances in Neural Information Processing Systems, pages 17-24, 2009.
456
+
457
+ Shipra Agrawal and Navin Goyal. Further optimal regret bounds for thompson sampling. In Artificial Intelligence and Statistics, pages 99-107. PMLR, 2013a.
458
+
459
+ Shipra Agrawal and Navin Goyal. Thompson sampling for contextual bandits with linear payoffs. In International Conference on Machine Learning, pages 127-135. PMLR, 2013b.
460
+
461
+ Thomas J Archdeacon. Correlation and regression analysis: a historian's guide. Univ of Wisconsin Press, 1994.
462
+
463
+ Peter Auer, Nicolo Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine learning, 47(2):235-256, 2002.
464
+
465
+ Akram Baransi, Odalric-Ambrym Maillard, and Shie Mannor. Sub-sampling for multi-armed bandits. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 115-131. Springer, 2014.
466
+
467
+ Hamsa Bastani and Mohsen Bayati. Online decision making with high-dimensional covariates. Operations Research, 68(1):276-294, 2020.
468
+
469
+ Christopher M Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
470
+
471
+ Wei Chu, Lihong Li, Lev Reyzin, and Robert Schapire. Contextual bandits with linear payoff functions. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pages 208-214. JMLR Workshop and Conference Proceedings, 2011.
472
+
473
+ Alexander I Cowen-Rivers, Wenlong Lyu, Rasul Tutunov, Zhi Wang, Antoine Grosnit, Ryan Rhys Griffiths, Hao Jianye, Jun Wang, and Haitham Bou Ammar. An empirical study of assumptions in bayesian optimisation. arXiv preprint arXiv:2012.03826, 2020.
474
+
475
+ Varsha Dani, Thomas P Hayes, and Sham M Kakade. Stochastic linear optimization under bandit feedback. In Conference on Learning Theory, 2008.
476
+
477
+ Dean Eckles and Maurits Kaptein. Thompson sampling with the online bootstrap. arXiv preprint arXiv:1410.4009, 2014.
478
+
480
+
481
+ Adam N Elmachtoub, Ryan McNellis, Sechan Oh, and Marek Petrik. A practical method for solving contextual bandit problems using decision trees. arXiv preprint arXiv:1706.04687, 2017.
482
+
483
+ Aurélien Garivier and Emilie Kaufmann. Optimal best arm identification with fixed confidence. In Conference on Learning Theory, pages 998-1027. PMLR, 2016.
484
+
485
+ M Goldstein and Adrian FM Smith. Ridge-type estimators for regression analysis. Journal of the Royal Statistical Society: Series B (Methodological), 36(2): 284-291, 1974.
486
+
487
+ Botao Hao, Yasin Abbasi-Yadkori, Zheng Wen, and Guang Cheng. Bootstrapping upper confidence bound. arXiv preprint arXiv:1906.05247, 2019.
488
+
489
+ Botao Hao, Tor Lattimore, and Csaba Szepesvari. Adaptive exploration in linear contextual bandit. In International Conference on Artificial Intelligence and Statistics, pages 3536-3545. PMLR, 2020.
490
+
491
+ Junya Honda and Akimichi Takemura. Optimality of thompson sampling for gaussian bandits depends on priors. In Artificial Intelligence and Statistics, pages 375-383. PMLR, 2014.
492
+
493
+ Jean Jacod and Albert Shiryaev. Limit theorems for stochastic processes, volume 288. Springer Science & Business Media, 2013.
494
+
495
+ Johannes Kirschner. Information-Directed Sampling-Frequentist Analysis and Applications. PhD thesis, ETH Zurich, 2021.
496
+
497
+ Branislav Kveton, Csaba Szepesvari, Mohammad Ghavamzadeh, and Craig Boutilier. Perturbed-history exploration in stochastic linear bandits. arXiv preprint arXiv:1903.09132, 2019a.
498
+
499
+ Branislav Kveton, Csaba Szepesvari, Mohammad Ghavamzadeh, and Craig Boutilier. Perturbed-history exploration in stochastic multi-armed bandits. arXiv preprint arXiv:1902.10089, 2019b.
500
+
501
+ Branislav Kveton, Csaba Szepesvari, Sharan Vaswani, Zheng Wen, Tor Lattimore, and Mohammad Ghavamzadeh. Garbage in, reward out: Bootstrapping exploration in multi-armed bandits. In International Conference on Machine Learning, pages 3601-3610. PMLR, 2019c.
502
+
503
+ Branislav Kveton, Manzil Zaheer, Csaba Szepesvari, Lihong Li, Mohammad Ghavamzadeh, and Craig Boutilier. Randomized exploration in generalized linear bandits. In International Conference on Artificial Intelligence and Statistics, pages 2066-2076. PMLR, 2020.
504
+
506
+
507
+ John Langford and Tong Zhang. The epoch-greedy algorithm for contextual multi-armed bandits. Advances in neural information processing systems, 20 (1):96-1, 2007.
508
+
509
+ Tor Lattimore and Csaba Szepesvári. Bandit algorithms. Cambridge University Press, 2020.
510
+
511
+ Lihong Li, Wei Chu, John Langford, and Robert E Schapire. A contextual-bandit approach to personalized news article recommendation. In Proceedings of the 19th international conference on World wide web, pages 661-670, 2010.
512
+
513
+ Enno Mammen. Bootstrap and wild bootstrap for high dimensional linear models. The Annals of Statistics, pages 255-285, 1993.
514
+
515
+ Ian Osband and Benjamin Van Roy. Bootstrapped thompson sampling and deep exploration. arXiv preprint arXiv:1507.00300, 2015.
516
+
517
+ Ian Osband, Charles Blundell, Alexander Pritzel, and Benjamin Van Roy. Deep exploration via bootstrapped dqn. Advances in neural information processing systems, 29:4026-4034, 2016.
518
+
519
+ Carlos Riquelme, George Tucker, and Jasper Snoek. Deep bayesian bandits showdown: An empirical comparison of bayesian deep networks for thompson sampling. arXiv preprint arXiv:1802.09127, 2018.
520
+
521
+ Paat Rusmevichientong and John N Tsitsiklis. Linearly parameterized bandits. Mathematics of Operations Research, 35(2):395-411, 2010.
522
+
523
+ Daniel Russo, Benjamin Van Roy, Abbas Kazerouni, Ian Osband, and Zheng Wen. A tutorial on thompson sampling. arXiv preprint arXiv:1707.02038, 2017.
524
+
525
+ Liang Tang, Yexi Jiang, Lei Li, Chunqiu Zeng, and Tao Li. Personalized recommendation via parameter-free contextual bandits. In Proceedings of the 38th international ACM SIGIR conference on research and development in information retrieval, pages 323- 332, 2015.
526
+
527
+ William R Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3/4):285-294, 1933.
528
+
529
+ Sharan Vaswani, Branislav Kveton, Zheng Wen, Anup Rao, Mark Schmidt, and Yasin Abbasi-Yadkori. New insights into bootstrapping for bandits. arXiv preprint arXiv:1805.09793, 2018.
530
+
531
+ Chi-Hua Wang, Yang Yu, Botao Hao, and Guang Cheng. Residual bootstrap exploration for bandit algorithms. arXiv preprint arXiv:2002.08436, 2020.
532
+
533
+ Anru R Zhang and Yuchen Zhou. On the non-asymptotic and sharp lower tail bounds of random variables. Stat, 9(1):e314, 2020.
UAI/UAI 2022/UAI 2022 Conference/B3M4CS8oql9/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,463 @@
1
+ § RESIDUAL BOOTSTRAP EXPLORATION FOR STOCHASTIC LINEAR BANDIT
2
+
3
+ § ABSTRACT
4
+
5
+ We propose a new bootstrap-based online algorithm for stochastic linear bandit problems. The key idea is to adopt residual bootstrap exploration, in which the agent estimates the next-step reward by re-sampling the residuals of the mean reward estimate. Our algorithm, residual bootstrap exploration for stochastic linear bandit (LinReBoot), estimates the linear reward from its re-sampling distribution and pulls the arm with the highest reward estimate. In particular, we contribute a theoretical framework to demystify residual bootstrap-based exploration mechanisms in stochastic linear bandit problems. The key insight is that the strength of bootstrap exploration is based on collaborated optimism between the online-learned model and the re-sampling distribution of residuals. Such an observation enables us to show that the proposed LinReBoot secures a high-probability $\widetilde{O}\left( {d\sqrt{n}}\right)$ sub-linear regret under mild conditions. Our experiments support the easy generalizability of the ReBoot principle in various formulations of linear bandit problems and show the significant computational efficiency of LinReBoot.
6
+
7
+ § 1 INTRODUCTION
8
+
9
+ Stochastic linear bandit is an online learning problem in which the learning agent acts by pulling arms, where each arm is associated with a feature vector, and then learns about the arms from the corresponding random rewards. In such problems, the typical goal of a learning agent is to maximize its cumulative reward. Learning more about an arm (explore) or pulling the arm with the highest estimated reward (exploit) leads to the well-known exploration-exploitation tradeoff, which is the central trade-off captured in many decision-making applications in modern online service industries. Consequently, the design of stochastic linear bandit algorithms demands an easily generalizable implementation across various contextualized actions and reward generation processes.
10
+
11
+ In the past decade of bandit literature, such demands have invited researchers to investigate bootstrap-based exploration-exploitation trade-offs and have drawn rising attention [Baransi et al., 2014, Eckles and Kaptein, 2014, Osband and Van Roy, 2015, Vaswani et al., 2018, Hao et al., 2019, Kveton et al., 2019c, Wang et al., 2020]. Yet, prior works on bootstrap-based bandit algorithms focus on provable multi-armed bandit algorithms and only provide a limited empirical evaluation of bootstrap-based stochastic linear bandit algorithms, whose theoretical counterpart remains unknown. Such a knowledge gap in bootstrapping stochastic linear bandits motivates our investigation of provable bootstrap-based stochastic linear bandits: can we theoretically and empirically support the validity and easy generalizability of the bootstrapping procedure in the design of stochastic linear bandit algorithms? In particular, we aim to deliver a generic framework to demystify bootstrap optimism in stochastic linear bandit problems and validate the easy generalizability of the bootstrap principle across various contextual linear bandit problems.
12
+
13
+ Contributions. We introduce the LinReBoot algorithm, which implements Residual Bootstrap Exploration for the stochastic linear bandit problem with sub-linear regret. We theoretically show that LinReBoot secures $\widetilde{O}\left( {d\sqrt{n}}\right)$ regret, where $d$ is the dimension of the features. This sub-linear regret bound matches, in order, the theoretical regret bounds of Linear Thompson Sampling algorithms. The key to achieving such a sub-linear regret guarantee is to carefully manage and collaborate sample and bootstrap optimism (Section 4.1). In particular, by measuring the "sample-bootstrap optimistic estimated discrepancy ratio" of the optimal arm, LinReBoot successfully avoids over- or under-exploration and theoretically secures sub-linear mean regret with high probability. To our knowledge, this is the first theoretical analysis to support the validity and efficiency of a residual bootstrap-based procedure for stochastic linear bandit problems. We empirically show that LinReBoot rivals or exceeds competing algorithms including Linear Thompson Sampling, Linear PHE, Linear GIRO, and Linear UCB under the stochastic linear bandit problem as well as more complicated linear bandit settings. These results support the easy generalizability of the proposed LinReBoot. In summary, our contributions are as follows:
14
+
15
+ * Propose the LinReBoot algorithm, which implements Residual Bootstrap Exploration in linear bandit problems without a boundedness assumption on rewards.
16
+
17
+ * Theoretically show that LinReBoot secures $\widetilde{O}\left( {d\sqrt{n}}\right)$ regret, matching in order the theoretical regret bounds of Linear Thompson Sampling algorithms.
18
+
19
+ * Empirically show that LinReBoot rivals or exceeds baseline algorithms, supporting that LinReBoot is easily generalizable among linear bandit problems.
20
+
21
+ Related Works. Bootstrap-based contextual bandit algorithm design has been actively studied in the last half-decade and has drawn a surge of interest from both theoretical studies and industrial practice [Elmachtoub et al., 2017, Eckles and Kaptein, 2014, Osband et al., 2016, Kveton et al., 2019c, Hao et al., 2019]. Bootstrap-based bandit algorithm design is a paradigm of sequential decision-making based on an exploration mechanism with no pre-defined mean reward model. Such a paradigm enjoys a decisive advantage in that engineers are free to deploy any reward model of interest without painful adaptation to the problem structure [Kveton et al., 2019c,b]. ReBoot [Wang et al., 2020] provided a theoretical logarithmic regret guarantee for the multi-armed bandit (MAB) and an empirical investigation to validate the easy generalizability of the ReBoot principle. Our work aims to provide a theoretical guarantee for bootstrap-based linear bandit algorithms and to empirically investigate more general contextual linear bandit settings to validate the ReBoot principle.
22
+
23
+ One closely related work is [Kveton et al., 2019a], which introduces perturbation of past samples for exploration under the stochastic linear bandit problem. The limitation of [Kveton et al., 2019a] is the boundedness of rewards, indicating that many broader classes of rewards, such as Gaussian rewards, are not covered by its theoretical guarantee. In contrast, the proposed LinReBoot algorithm relaxes the bounded-reward assumption and thus validates bootstrap-based bandit algorithms in wider bandit environments with a broader class of reward generation processes. Early works about exploration in bandit problems [Abbasi-Yadkori et al., 2011, Langford and Zhang, 2007, Dani et al., 2008] are practical but carry no guarantee of optimality. Some works [Wang et al., 2020, Kveton et al., 2019c,b, Thompson, 1933, Auer et al., 2002] provide well-designed exploration for bandit problems and have their own principles for adapting to more general problems. Among these works, three principles, ReBoot [Wang et al., 2020], GIRO [Kveton et al., 2019c] and PHE [Kveton et al., 2019b], devise exploration mechanisms based on the up-to-now history, instead of on a pre-defined reward model as in the other two principles, TS [Thompson, 1933] and UCB [Auer et al., 2002]. Our work generalizes ReBoot to stochastic linear bandit problems.
24
+
25
+ Notations. Let $\left\lbrack n\right\rbrack$ be the set $\{ 1,2,\ldots ,n\}$ . $\mathbf{1}$ is a vector with all ones and $\mathbf{I}$ is the identity matrix. For a vector $\mathbf{v}$ , $\parallel \mathbf{v}{\parallel }_{2}$ is the 2-norm of $\mathbf{v}$ and $\parallel \mathbf{v}{\parallel }_{\mathbf{A}} \mathrel{\text{ := }} \sqrt{{\mathbf{v}}^{\top }\mathbf{A}\mathbf{v}}$ for a positive semidefinite matrix $\mathbf{A}$ . Let $\langle \cdot , \cdot \rangle$ be the inner product operation. Denote ${\mathcal{F}}_{t}$ as the history of randomness up to round $t$ . ${\mathbb{E}}_{t}\left\lbrack \cdot \right\rbrack \mathrel{\text{ := }} \mathbb{E}\left\lbrack {\cdot \mid {\mathcal{F}}_{t - 1}}\right\rbrack$ is defined as the conditional expectation given ${\mathcal{F}}_{t - 1}$ and ${\mathbb{P}}_{t}\left( \cdot \right) \mathrel{\text{ := }} \mathbb{P}\left( {\cdot \mid {\mathcal{F}}_{t - 1}}\right)$ is defined as the conditional probability given ${\mathcal{F}}_{t - 1}$ . $\mathbb{I}\{ \cdot \}$ is the indicator function. For a set or event $E$ , we denote its complement as $\bar{E}$ . $N\left( {\mu ,{\sigma }^{2}}\right)$ is the Gaussian distribution with mean $\mu$ and variance ${\sigma }^{2}$ . We use $\widetilde{O}$ for big-$O$ notation up to logarithmic factors.
26
+
27
+ § 2 STOCHASTIC LINEAR BANDIT
28
+
29
+ Contextualized Action Set. In the stochastic linear bandit problem, we identify the actions with $d$ -dimensional features from $\mathcal{A} \subset {\mathbb{R}}^{d}$ and assume $\left| \mathcal{A}\right|$ , the size of the action set, is finite. Let $K \mathrel{\text{ := }} \left| \mathcal{A}\right|$ be the number of actions (arms) and ${\mathbf{x}}_{k} \in {\mathbb{R}}^{d}$ be the context vector of the $k$ -th arm, that is, $\mathcal{A} = \left\{ {{\mathbf{x}}_{1},\ldots ,{\mathbf{x}}_{K}}\right\}$ .
30
+
31
+ Reward generating mechanism. The reward function is parameterized by $\mathbf{\theta } \in {\mathbb{R}}^{d}$ such that, when the agent chooses an action ${I}_{t} \in \left\lbrack K\right\rbrack$ with feature ${X}_{t} = {\mathbf{x}}_{{I}_{t}} \in \mathcal{A}$ at time $t$ , the reward is generated by
32
+
33
+ $$
34
+ {Y}_{t} \equiv \left\langle {{X}_{t},\mathbf{\theta }}\right\rangle + {\epsilon }_{t}. \tag{1}
35
+ $$
36
+
37
+ Specifically, the reward obtained by the agent at round $t$ when pulling arm ${I}_{t} = k$ is generated from a distribution with mean ${\mu }_{k} \mathrel{\text{ := }} {\mathbf{x}}_{k}^{\top }\mathbf{\theta }$ , conditional on context ${\mathbf{x}}_{k}$ . The property of the noise ${\epsilon }_{t}$ is described in Assumption 2. Furthermore, denote the received reward by ${r}_{{I}_{t}}$ and the reward random variable by ${Y}_{t}$ at round $t$ .
38
+
39
+ Regret. Without loss of generality, assume that arm 1 is the unique optimal arm, that is, ${\mu }_{1} \geq {\mu }_{k}$ for all $k$ . The optimality gap of the $k$ -th arm is ${\Delta }_{k} \mathrel{\text{ := }} {\mu }_{1} - {\mu }_{k} \geq 0$ . The expected $n$ -round regret is denoted as
40
+
41
+ $$
42
+ {R}_{n} \mathrel{\text{ := }} \mathop{\sum }\limits_{{k = 2}}^{K}{\Delta }_{k}\mathbb{E}\left\lbrack {\mathop{\sum }\limits_{{t = 1}}^{n}\mathbb{I}\left\{ {{I}_{t} = k}\right\} }\right\rbrack . \tag{2}
43
+ $$
44
+
45
+ The goal of the agent is to maximize the expected cumulative reward in $n$ rounds, which is equivalent to minimizing the expected regret ${R}_{n}$ .
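As a toy illustration of (2) (arm means and pull counts below are hypothetical, not from the paper), the expected regret follows from the optimality gaps and the expected pull counts:

```python
# Toy sketch (hypothetical values) of the expected n-round regret in (2):
# R_n = sum_{k >= 2} Delta_k * E[ sum_t 1{I_t = k} ].
def expected_regret(mus, expected_pulls):
    """mus[k]: true mean mu_k; expected_pulls[k]: E[number of pulls of arm k]."""
    mu_1 = max(mus)  # arm 1 is the optimal arm, so mu_1 = max_k mu_k
    return sum((mu_1 - mu_k) * n_k for mu_k, n_k in zip(mus, expected_pulls))

# n = 100 rounds; the optimal arm is pulled 80 times in expectation
r_n = expected_regret([0.9, 0.5, 0.2], [80, 15, 5])  # 0.4*15 + 0.7*5 = 9.5
```

The optimal arm contributes zero regret, so minimizing $R_n$ amounts to keeping the expected pulls of sub-optimal arms small.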
46
+
47
+ Assumption 1. (Boundedness assumptions) We make the following boundedness assumptions: (1) The true parameter $\mathbf{\theta }$ is bounded: $\parallel \mathbf{\theta }{\parallel }_{2} \leq {S}_{2}$ . (2) The context vectors are bounded in the sense that ${\begin{Vmatrix}{\mathbf{x}}_{k}\end{Vmatrix}}_{2} \leq L$ for all $k \in \left\lbrack K\right\rbrack$ .
48
+
49
+ Assumption 1 refers to the standard boundedness assumptions in the stochastic linear bandit literature and ensures the regret is bounded even if the agent pulls sub-optimal actions (see Section 5 in [Abbasi-Yadkori et al., 2011]).
50
+
51
+ Assumption 2. (Noise Clipping assumption) The noise process ${\left\{ {\epsilon }_{t}\right\} }_{t = 1}^{\infty }$ described in (1) satisfies that for some ${L}_{1},{L}_{2} > 0$ ,
52
+
53
+ $$
54
+ {e}^{{L}_{1}{\eta }^{2}} \leq \mathbb{E}\left\lbrack {{e}^{\eta {\epsilon }_{t}} \mid {\mathcal{F}}_{t - 1}}\right\rbrack \leq {e}^{{L}_{2}{\eta }^{2}},\forall \eta \geq 0, \tag{3}
55
+ $$
56
+
57
+ where ${\mathcal{F}}_{t - 1} = \left\{ {{\epsilon }_{1},{I}_{1},\cdots ,{\epsilon }_{t - 1},{I}_{t - 1}}\right\}$ .
58
+
59
+ Assumption 2 implies that the stochastic process ${\left\{ {\epsilon }_{t}\right\} }_{t = 1}^{\infty }$ is conditionally sub-Gaussian with constant ${L}_{2}$ . ${L}_{1}$ contributes to the lower bound on the moment generating function suggested by [Zhang and Zhou, 2020]. Note that Assumption 2 allows heteroscedasticity among different arms by choosing ${L}_{2}$ as the largest variance among arms. Such heteroscedasticity arises in practice and has been identified as a challenge in applications of Bayesian optimization [Kirschner, 2021, Cowen-Rivers et al., 2020].
60
+
61
+ § 3 RESIDUAL BOOTSTRAP EXPLORATION
62
+
63
+ § 3.1 REBOOT PRINCIPLE
64
+
65
+ This section presents the essential proof of concept for implementing the ReBoot principle [Wang et al., 2020]. In each round of interaction, the decision policy admits four subroutines to implement the ReBoot principle: 1) Learning, 2) Fitting, 3) Bootstrapping, and 4) Exploring. The following elaborates on each subroutine:
66
+
67
+ 1) Model Learning. The first subroutine outputs a learned model based on the currently collected data. Our implementation learns the parameter $\mathbf{\theta }$ in Eq. (1) with some user-specified model.
68
+
69
+ 2) Data Fitting. The second subroutine fits the current data set with the model learned in the previous subroutine and then outputs the residual set. Intuitively, the residuals measure the goodness of fit of the learned model and should hint at the right amount of exploration. In other words, the residuals should suggest the right magnitude of the exploration bonus in the decision policy (8). How to manage and integrate the uncertainty behind the residuals into the exploration mechanism of the policy is the main challenge.
70
+
71
+ 3) Residuals Bootstrapping. The third subroutine associates the residuals obtained in the previous subroutine with a bootstrapping distribution. Instead of maintaining a belief distribution on a parameter as in the Bayesian approach, the ReBoot principle maintains a bootstrapping distribution on the statistical error based on the residuals. The challenge is to justify the efficacy of residual-based optimism construction in both theory and practice.
72
+
73
+ 4) Actions Exploring. The fourth subroutine samples the exploration bonus from the bootstrapping distribution and outputs an index for each action. Such a bootstrap procedure is more computationally efficient than prior efforts since it only requires drawing a sample from the bootstrapping distribution. The challenge is to prove that such a bootstrap procedure secures sub-linear regret in theory.
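The four subroutines above can be sketched, at a very high level, as one round of a generic ReBoot-style policy. This is an illustrative skeleton, not the paper's Algorithm 1; the `fit` interface, the `reboot_round` name, and the forced exploration of never-pulled arms are assumptions made for the sketch:

```python
import random

def reboot_round(arms, fit, history, rng=None):
    """One round of a generic ReBoot-style policy (illustrative sketch).
    fit(history) is assumed to return (predict, residuals), where predict(k)
    is the estimated mean of arm k and residuals[k] is its residual list."""
    rng = rng or random.Random(0)
    predict, residuals = fit(history)          # subroutines 1) learning, 2) fitting
    index = {}
    for k in arms:
        e = residuals.get(k, [])
        if not e:                              # never-pulled arm: force exploration
            index[k] = float("inf")
        else:                                  # subroutines 3) bootstrap, 4) explore
            index[k] = predict(k) + sum(rng.gauss(0, 1) * ei for ei in e) / len(e)
    return max(index, key=index.get)           # pull the arm with the highest index
```

With zero residuals the bonus vanishes, so the policy greedily picks the arm with the highest estimated mean; an arm with no history is always tried first.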
74
+
75
+ § 3.2 LINREBOOT ALGORITHM
76
+
77
+ We propose the Linear Residual Bootstrap Exploration algorithm (LinReBoot, Algorithm 1) for stochastic linear bandit problems. This section elaborates on the four subroutines of Section 3.1 for the proposed LinReBoot.
78
+
79
+ 1) LinReBoot uses the ridge regression procedure, whose learned parameter is ${\widehat{\mathbf{\theta }}}_{t}$ (4b) and whose estimated mean reward for arm $k$ is ${\widehat{\mu }}_{k,t}$ . Estimating the mean reward this way makes confidence easy to manage [Abbasi-Yadkori et al., 2011]. Thus, we focus on confidence management for the bootstrap-based exploration.
80
+
81
+ Ridge Regression Procedure. LinReBoot fits the linear model at round $t$ as follows,
82
+
83
+ $$
84
+ {\mathbf{V}}_{t} = {\mathbf{X}}_{t - 1}^{\top }{\mathbf{X}}_{t - 1} + \lambda \mathbf{I}, \tag{4a}
85
+ $$
86
+
87
+ $$
88
+ {\widehat{\mathbf{\theta }}}_{t} = {\mathbf{V}}_{t}^{-1}{\mathbf{X}}_{t - 1}^{\top }{\mathbf{Y}}_{t - 1}, \tag{4b}
89
+ $$
90
+
91
+ $$
92
+ {\widehat{\mu }}_{k,t} = {\mathbf{x}}_{k}^{\top }{\widehat{\mathbf{\theta }}}_{t},\forall k \in \left\lbrack K\right\rbrack , \tag{4c}
93
+ $$
94
+
95
+ where ${\mathbf{X}}_{t - 1} = {\left( {X}_{1},\ldots ,{X}_{t - 1}\right) }^{\top } \in {\mathbb{R}}^{\left( {t - 1}\right) \times d}$ . The $\tau$ - th row of ${\mathbf{X}}_{t - 1}$ is the context ${X}_{\tau }^{\top }$ for $\tau \in \left\lbrack {t - 1}\right\rbrack$ , ${\mathbf{Y}}_{t - 1} = {\left( {Y}_{1},\ldots ,{Y}_{t - 1}\right) }^{\top }$ is reward vector whose elements are rewards up to round $t - 1.\lambda$ denotes the regularization level. ${\mathbf{V}}_{t}$ denotes the sample covariance matrix up to round $t$ and ${\widehat{\mathbf{\theta }}}_{t}$ is the ridge estimation of target parameter $\mathbf{\theta }$ in (1). ${\widehat{\mu }}_{k,t}$ denotes the estimated mean of arm $k$ based on history. Note that the first $K$ rounds in proposed LinReBoot is fully exploring each arm once. In other words, ${I}_{t} = t$ when $t \in \left\lbrack K\right\rbrack$ , indicating ${\mathbf{X}}_{K} \mathrel{\text{ := }} {\left( {\mathbf{x}}_{1},\ldots ,{\mathbf{x}}_{K}\right) }^{\top } \in {\mathbb{R}}^{K \times d}$ . We call this ${\mathbf{X}}_{K}$ the context matrix with rank $r \leq \min \left( {K,d}\right)$ and singular values ${\sigma }_{1},\ldots ,{\sigma }_{r}$ . Also define ${\sigma }_{\min }^{2} \leq {\sigma }_{i}^{2} \leq {\sigma }_{\max }^{2},\forall i \in \left\lbrack r\right\rbrack$ . With these definitions, we make a mild assumption about the shrinkage effect of ridge regression:
96
+
97
+ Assumption 3. (Validity of Ridge Regression) The singular value decomposition of the context matrix ${\mathbf{X}}_{K}$ is denoted as ${\mathbf{X}}_{K} \mathrel{\text{ := }} \mathbf{G}\mathbf{\Sigma }\mathbf{U}$ where $\mathbf{G} \in {\mathbb{R}}^{K \times K}$ , $\mathbf{\Sigma } \in {\mathbb{R}}^{K \times d}$ and $\mathbf{U} \in {\mathbb{R}}^{d \times d}$ . Define $\mathbf{\Omega } \mathrel{\text{ := }} \mathbf{\Sigma }{\left( {\mathbf{\Sigma }}^{\top }\mathbf{\Sigma } + \lambda \mathbf{I}\right) }^{-1}{\mathbf{\Sigma }}^{\top } \in {\mathbb{R}}^{K \times K}$ and $\mathbf{Z} \mathrel{\text{ := }} \mathbf{G}\mathbf{\Omega }\mathbf{\Sigma }\mathbf{U} \in {\mathbb{R}}^{K \times d}$ . Let ${\mathbf{z}}_{1} \in {\mathbb{R}}^{d}$ be the first row of $\mathbf{Z}$ . Given any $\lambda > 0$ , there exists a corresponding positive scalar ${S}_{1}$ such that $\left| {{\mathbf{x}}_{1}^{\top }\mathbf{\theta } - {\mathbf{z}}_{1}^{\top }\mathbf{\theta }}\right| \geq {S}_{1}$ for the $\mathbf{\theta }$ in (1).
98
+
99
+ Remark 1. Assumption 3 provides a lower bound on the absolute difference between the true mean ${\mathbf{x}}_{1}^{\top }\mathbf{\theta }$ and the normalized mean ${\mathbf{z}}_{1}^{\top }\mathbf{\theta }$ of the optimal arm. Note that if $\lambda \rightarrow 0$ , then ${\mathbf{z}}_{1} \rightarrow {\mathbf{x}}_{1}$ and ${S}_{1} \rightarrow 0$ . Thus the scalar ${S}_{1}$ measures the small perturbation of the mean of the optimal arm when the ridge regression procedure is applied. $\mathbf{Z}$ can be interpreted as a ridge shrinkage context matrix [Goldstein and Smith, 1974]. One important phenomenon of online ridge regression is that even though the ridge estimator is biased, the shrinkage effect of ridge estimation provides exploration that leads the agent to correct decisions. The positive scalar ${S}_{1}$ describes the shrinkage effect on the context; that is, the existence of ${S}_{1}$ indicates the ridge procedure is valid and its shrinkage effect exists.
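To make the role of $S_1$ concrete, consider the scalar toy case $K = d = 1$ (our simplification, not from the paper): the SVD collapses to $\Sigma = x_1$, so $z_1 = x_1 \cdot x_1^2/(x_1^2 + \lambda)$ and the gap is $|x_1\theta| \cdot \lambda/(x_1^2 + \lambda)$, which vanishes as $\lambda \to 0$:

```python
# Scalar (K = d = 1) toy computation of the shrinkage gap |x_1 theta - z_1 theta|.
# Function name and input values are hypothetical, for illustration only.
def shrinkage_gap(x1, theta, lam):
    z1 = x1 * (x1 * x1) / (x1 * x1 + lam)  # z_1 = x_1 * sigma^2 / (sigma^2 + lambda)
    return abs(x1 * theta - z1 * theta)

gap = shrinkage_gap(x1=1.0, theta=2.0, lam=1.0)  # |2.0 - 1.0| = 1.0
```

Setting `lam=0.0` gives a gap of exactly zero, matching the limit $z_1 \to x_1$, $S_1 \to 0$ noted in the remark.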
100
+
101
+ 2) The fitting part of LinReBoot outputs the residuals under the linear model framework,
102
+
103
+ $$
104
+ {e}_{k,t,i} = {r}_{k,i} - {\widehat{\mu }}_{k,t},\forall i \in \left\lbrack {s}_{k,t - 1}\right\rbrack , \tag{5}
105
+ $$
106
+
107
+ where ${s}_{k,t - 1} \mathrel{\text{ := }} \mathop{\sum }\limits_{{\tau = 1}}^{{t - 1}}\mathbb{I}\left\{ {{I}_{\tau } = k}\right\}$ is the number of times arm $k$ has been pulled by round $t - 1$ and ${r}_{k,i}$ is the $i$ -th reward of arm $k$ by round $t - 1$ . The goodness of fit of the learned ridge regression model can be summarised by the Residual Sum of Squares (RSS) [Archdeacon, 1994], which is defined as
108
+
109
+ $$
110
+ {RS}{S}_{k,t} \mathrel{\text{ := }} \mathop{\sum }\limits_{{i = 1}}^{{s}_{k,t - 1}}{e}_{k,t,i}^{2}. \tag{6}
111
+ $$
112
+
113
+ Such a measure plays an important role in the residual bootstrap exploration mechanism.
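A small sketch (with assumed toy rewards) of the residuals (5) and the RSS (6) for a single arm:

```python
# Toy sketch of residuals (5) and RSS (6) for one arm k at round t.
def residuals_and_rss(rewards_k, mu_hat_kt):
    e = [r - mu_hat_kt for r in rewards_k]   # (5): e_{k,t,i} = r_{k,i} - mu_hat_{k,t}
    rss = sum(ei * ei for ei in e)           # (6): RSS_{k,t} = sum_i e_{k,t,i}^2
    return e, rss

e, rss = residuals_and_rss([1.0, 0.6, 0.8], mu_hat_kt=0.8)  # e approx. [0.2, -0.2, 0.0]
```

A well-fitting model leaves small residuals and hence a small RSS, which in turn yields small exploration bonuses in the next subroutine.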
114
+
115
+ 3) The third part is Residuals Bootstrapping. This subroutine is independent of the model, which demonstrates the generalizability of the ReBoot principle. The ReBoot principle requires the computation of the exploration bonus [Mammen, 1993], which is ${s}_{k,t - 1}^{-1}\mathop{\sum }\limits_{{i = 1}}^{{s}_{k,t - 1}}{\omega }_{k,t,i}{e}_{k,t,i}$ , where ${\left\{ {\omega }_{k,t,i}\right\} }_{i = 1}^{{s}_{k,t - 1}}$ are the residual bootstrap weights for arm $k$ at round $t$ .
116
+
117
+ Choice of Bootstrapping Weights. The bootstrap weights considered in this work are i.i.d. with zero mean and variance ${\sigma }_{\omega }^{2}$ . They are independent of the noise process ${\left\{ {\epsilon }_{t}\right\} }_{t = 1}^{\infty }$ . In the bootstrap literature [Mammen, 1993], the choices of bootstrap weight distribution include Gaussian weights, Rademacher weights and skew-correcting weights. In LinReBoot, we adopt Gaussian bootstrap weights to enable the efficient implementation described in Section 3.3.
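For concreteness, the exploration bonus with either of the two common weight choices can be sketched as below; the function name and the variance-scaling of the Rademacher weights are our illustrative conventions, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def exploration_bonus(residuals, sigma_w=1.0, dist="gaussian"):
    """Residual-bootstrap bonus s^{-1} * sum_i w_i * e_i with i.i.d.
    zero-mean, variance-sigma_w^2 weights ('gaussian' or 'rademacher',
    both discussed in [Mammen, 1993])."""
    s = len(residuals)
    if dist == "gaussian":
        w = rng.normal(0.0, sigma_w, size=s)
    else:  # Rademacher weights, scaled so the variance is sigma_w^2
        w = sigma_w * rng.choice([-1.0, 1.0], size=s)
    return float(np.dot(w, residuals) / s)
```

The bonus is a zero-mean random variable whose variance is ${\sigma }_{\omega }^{2}{RSS}_{k,t}/{s}_{k,t-1}^{2}$, matching the variance statement after (8).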
118
+
119
+ 4) The last subroutine is action exploration based on the residual bootstrap. More specifically, for arm $k$ at round $t$ , LinReBoot adds the exploration bonus from residual bootstrapping to the estimated mean ${\widehat{\mu }}_{k,t}$ as follows,
+
+ $$
+ {\widetilde{\mu }}_{k,t} = {\widehat{\mu }}_{k,t} + \frac{1}{{s}_{k,t - 1}}\mathop{\sum }\limits_{{i = 1}}^{{s}_{k,t - 1}}{\omega }_{k,t,i}{e}_{k,t,i}, \tag{7}
+ $$
+
+ then the agent pulls the arm with the highest bootstrapped
126
+
127
+ Algorithm 1 LinReBoot
+
+ Require: $\lambda ,{s}_{1,0} = \ldots = {s}_{K,0} = 0$
+
+ for $t = 1,\ldots ,n$ do
+
+ if $t < K + 1$ then
+
+ ${I}_{t} \leftarrow t$
+
+ else
+
+ ${\mathbf{V}}_{t} \leftarrow {\mathbf{X}}_{t - 1}^{\top }{\mathbf{X}}_{t - 1} + \lambda \mathbf{I}$
+
+ ${\widehat{\mathbf{\theta }}}_{t} \leftarrow {\mathbf{V}}_{t}^{-1}{\mathbf{X}}_{t - 1}^{\top }{\mathbf{Y}}_{t - 1}$
+
+ for $k = 1,\ldots ,K$ do
+
+ ${e}_{k,t,i} \leftarrow {r}_{k,i} - {\mathbf{x}}_{k}^{\top }{\widehat{\mathbf{\theta }}}_{t},\forall i \in \left\lbrack {s}_{k,t - 1}\right\rbrack$
+
+ Generate ${\left\{ {\omega }_{k,t,i}\right\} }_{i = 1}^{{s}_{k,t - 1}}$
+
+ ${\widetilde{\mu }}_{k} \leftarrow {\mathbf{x}}_{k}^{\top }{\widehat{\mathbf{\theta }}}_{t} + {s}_{k,t - 1}^{-1}\mathop{\sum }\limits_{{i = 1}}^{{s}_{k,t - 1}}{\omega }_{k,t,i}{e}_{k,t,i}$
+
+ end for
+
+ ${I}_{t} \leftarrow \arg \mathop{\max }\limits_{{k \in \left\lbrack K\right\rbrack }}{\widetilde{\mu }}_{k}$
+
+ end if
+
+ ${s}_{{I}_{t},t} \leftarrow {s}_{{I}_{t},t - 1} + 1$ and ${s}_{k,t} \leftarrow {s}_{k,t - 1},\forall k \neq {I}_{t}$
+
+ Pull arm ${I}_{t}$ and get reward ${r}_{{I}_{t},{s}_{{I}_{t}}}$
+
+ ${\mathbf{X}}_{t} \leftarrow \left\lbrack \begin{matrix} {\mathbf{X}}_{t - 1} \\ {\mathbf{x}}_{{I}_{t}}^{\top } \end{matrix}\right\rbrack$ and ${\mathbf{Y}}_{t} \leftarrow \left\lbrack \begin{matrix} {\mathbf{Y}}_{t - 1} \\ {r}_{{I}_{t},{s}_{{I}_{t}}} \end{matrix}\right\rbrack$
+
+ end for
166
+
167
+ mean,
+
+ $$
+ {I}_{t} \equiv \arg \mathop{\max }\limits_{{k \in \left\lbrack K\right\rbrack }}{\widetilde{\mu }}_{k,t}. \tag{8}
+ $$
+
+ Note that the variance of the bootstrapped mean ${\widetilde{\mu }}_{k,t}$ is ${\sigma }_{\omega }^{2}{s}_{k,t - 1}^{-2}{RS}{S}_{k,t}$ , indicating that an adaptive amount of extra exploration is controlled by ${s}_{k,t - 1}$ and ${RS}{S}_{k,t}$ .
174
+
175
+ Short Summary. Our proposed LinReBoot performs the following steps at round $t > K$ :
+
+ 1) Ridge estimation: compute ${\mathbf{V}}_{t},{\widehat{\mathbf{\theta }}}_{t}$ .
+
+ 2) Find residuals for each arm: for arm $k$ , compute ${\widehat{\mu }}_{k,t}$ and ${\left\{ {e}_{k,t,i}\right\} }_{i = 1}^{{s}_{k,t - 1}}$ .
+
+ 3) Compute the bootstrapped mean for each arm: for arm $k$ , generate ${\left\{ {\omega }_{k,t,i}\right\} }_{i = 1}^{{s}_{k,t - 1}}$ and compute ${\widetilde{\mu }}_{k,t}$ (7).
+
+ 4) Pull the arm with the highest ${\widetilde{\mu }}_{k,t}$ , then observe the reward.
+
+ Algorithm 1 describes LinReBoot. The strength of LinReBoot is its easy generalizability across different bandit problems, including linear bandits and even more complicated structured problems (Appendix D.1).
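The four steps above can be sketched end-to-end as a small simulation. This is not the authors' implementation: the simulated environment, the Gaussian reward noise, and all parameter values are illustrative assumptions we introduce for the demo.

```python
import numpy as np

def lin_reboot(X, theta_true, n, lam=1.0, sigma_w=1.0, noise_sd=0.1, seed=0):
    """Sketch of Algorithm 1 (LinReBoot). X: (K, d) arm contexts.
    Rewards are simulated as x_k^T theta_true + Gaussian noise; in a real
    application the reward comes from the environment."""
    rng = np.random.default_rng(seed)
    K, d = X.shape
    rows, ys = [], []                     # design matrix / response history
    rewards = [[] for _ in range(K)]      # per-arm reward history
    pulls = np.zeros(K, dtype=int)
    for t in range(n):
        if t < K:                         # pull each arm once first
            I = t
        else:
            A = np.array(rows)
            V = A.T @ A + lam * np.eye(d)                       # ridge Gram matrix
            theta_hat = np.linalg.solve(V, A.T @ np.array(ys))  # ridge estimate
            mu_tilde = np.empty(K)
            for k in range(K):
                mu_hat = X[k] @ theta_hat
                e = np.array(rewards[k]) - mu_hat              # residuals (5)
                w = rng.normal(0.0, sigma_w, size=pulls[k])    # bootstrap weights
                mu_tilde[k] = mu_hat + w @ e / pulls[k]        # bootstrapped mean (7)
            I = int(np.argmax(mu_tilde))                       # policy index (8)
        r = X[I] @ theta_true + rng.normal(0.0, noise_sd)
        pulls[I] += 1
        rewards[I].append(r)
        rows.append(X[I]); ys.append(r)
    return pulls

# toy check: the arm with the largest true mean should be pulled most often
X = np.eye(3)
pulls = lin_reboot(X, theta_true=np.array([0.1, 0.2, 1.0]), n=300)
```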
186
+
187
+ Remark 2. (LinTS perturbs the system parameter estimate; LinReBoot perturbs expected reward estimates) Compare with LinTS in [Agrawal and Goyal, 2013b], which samples a perturbed parameter ${\widetilde{\mathbf{\theta }}}_{t}^{\text{ LinTS }} =$ ${\widehat{\mathbf{\theta }}}_{t} + {\beta }_{t}{\mathbf{V}}_{t}^{-1/2}{\mathbf{\eta }}_{t}$ with scaling ${\beta }_{t}$ and appropriate independent noise ${\mathbf{\eta }}_{t}$ (defined in [Agrawal and Goyal, 2013b]). Our proposed LinReBoot samples a perturbed expected reward ${\widetilde{\mu }}_{k,t}^{\text{ LinReBoot }} = \left\langle {{\widehat{\mathbf{\theta }}}_{t},{\mathbf{x}}_{k}}\right\rangle + \frac{1}{{s}_{k,t - 1}}\mathop{\sum }\limits_{{i = 1}}^{{s}_{k,t - 1}}{w}_{k,t,i}{e}_{k,t,i}.$ That is, LinReBoot perturbs the expected reward estimate via prediction-error uncertainty, which is supervised by the real reward. In contrast, LinTS perturbs the system parameter, which can be wrong if the system modeling is wrong.
188
+
189
+ § 3.3 EFFICIENT IMPLEMENTATION
190
+
191
+ By the attractive computational properties of the Gaussian distribution, the computational cost of LinReBoot can be reduced significantly when Gaussian bootstrap weights are used. Formally, assume ${\omega }_{k,t,i} \sim$ $N\left( {0,{\sigma }_{\omega }^{2}}\right) ,\forall k,t,i$ ; then, recalling (7), for $k \in \left\lbrack K\right\rbrack$ and any $t \geq 1$ , the bootstrapped mean ${\widetilde{\mu }}_{k,t}$ follows a Gaussian distribution,
192
+
193
+ $$
194
+ {\widetilde{\mu }}_{k,t} \mid {\mathcal{F}}_{t - 1} \sim N\left( {{\widehat{\mu }}_{k,t},{\sigma }_{\omega }^{2}{s}_{k,t - 1}^{-2}{RS}{S}_{k,t}}\right) . \tag{9}
195
+ $$
196
+
197
+ This Gaussian distribution of ${\widetilde{\mu }}_{k,t}$ indicates that if we can update ${\widehat{\mu }}_{k,t},{s}_{k,t - 1}$ and ${RS}{S}_{k,t}$ incrementally for arm $k$ , the bootstrapped mean ${\widetilde{\mu }}_{k,t}$ can be generated by a Gaussian generator without an inner loop for generating weights. The first two terms, ${\widehat{\mu }}_{k,t}$ and ${s}_{k,t - 1}$ , are naturally updated in an incremental manner. For ${RS}{S}_{k,t}$ , the following decomposition ensures an incremental update,
198
+
199
+ $$
200
+ {RS}{S}_{k,t} = \mathop{\sum }\limits_{{i = 1}}^{{s}_{k,t - 1}}{r}_{k,i}^{2} + {s}_{k,t - 1}{\widehat{\mu }}_{k,t}^{2} - 2{\widehat{\mu }}_{k,t}\mathop{\sum }\limits_{{i = 1}}^{{s}_{k,t - 1}}{r}_{k,i}.
201
+ $$
202
+
203
+ Then an efficient generation of ${\widetilde{\mu }}_{k,t} \mid {\mathcal{F}}_{t - 1}$ is ensured by the incremental updates of ${\widehat{\mu }}_{k,t},{s}_{k,t - 1},\mathop{\sum }\limits_{{i = 1}}^{{s}_{k,t - 1}}{r}_{k,i}^{2}$ and $\mathop{\sum }\limits_{{i = 1}}^{{s}_{k,t - 1}}{r}_{k,i}$ . Furthermore, since the residual bootstrap weights are generated independently, the ${\widetilde{\mu }}_{k,t}$ among arms are also independent given the historical randomness and can be sampled simultaneously from one multivariate Gaussian draw. Formally, ${\widetilde{\mathbf{\mu }}}^{\left( t\right) } = {\left( {\widetilde{\mu }}_{1,t},\ldots ,{\widetilde{\mu }}_{K,t}\right) }^{\top }$ is conditionally distributed as
204
+
205
+ $$
+ {\widetilde{\mathbf{\mu }}}^{\left( t\right) } \mid {\mathcal{F}}_{t - 1} \sim {N}_{K}\left( {{\widehat{\mathbf{\mu }}}^{\left( t\right) },{\mathbf{\Sigma }}_{\omega }^{\left( t\right) }}\right) , \tag{10}
+ $$
208
+
209
+ where ${\widehat{\mathbf{\mu }}}^{\left( t\right) } = {\left( {\widehat{\mu }}_{1,t},\ldots ,{\widehat{\mu }}_{K,t}\right) }^{\top }$ and ${\mathbf{\Sigma }}_{\omega }^{\left( t\right) }$ is a diagonal matrix with diagonal elements ${\sigma }_{\omega }^{2}{s}_{k,t - 1}^{-2}{RS}{S}_{k,t}$ . Detailed steps and further illustration of the efficient implementation are provided in Appendix D.7.1. Moreover, an empirical study of computational efficiency is conducted in Appendix D.7.2, and Table 3 provides the computational cost of our proposed LinReBoot as well as the other baseline algorithms.
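The efficient implementation can be sketched with three per-arm running scalars; the code below (our own sketch, with a simple sample-mean stand-in for the ridge prediction) also verifies the RSS decomposition used above:

```python
import numpy as np

rng = np.random.default_rng(1)
K = 4
s = np.zeros(K)          # pull counts s_{k,t-1}
sum_r = np.zeros(K)      # running sum of rewards
sum_r2 = np.zeros(K)     # running sum of squared rewards
history = [[] for _ in range(K)]  # kept only to verify the identity below

def update(k, r):
    """O(1) incremental update after pulling arm k and observing reward r."""
    s[k] += 1; sum_r[k] += r; sum_r2[k] += r * r; history[k].append(r)

def rss(mu_hat):
    # RSS_{k,t} = sum_i r^2 + s * mu_hat^2 - 2 * mu_hat * sum_i r
    return sum_r2 + s * mu_hat ** 2 - 2 * mu_hat * sum_r

def sample_mu_tilde(mu_hat, sigma_w=1.0):
    # One K-variate Gaussian draw of all bootstrapped means at once (eq. 10).
    var = sigma_w ** 2 * rss(mu_hat) / s ** 2   # diagonal of Sigma_w^{(t)}
    return rng.normal(mu_hat, np.sqrt(var))

for k in range(K):
    for r in rng.normal(k, 0.5, size=6):
        update(k, r)
mu_hat = sum_r / s       # stand-in for x_k^T theta_hat in this demo
mu_tilde = sample_mu_tilde(mu_hat)
```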
210
+
211
+ § 4 OPTIMISM DESIGN
212
+
213
+ Optimistic Estimated Discrepancy. This section identifies and demystifies the technical challenge of implementing the ReBoot principle in the stochastic linear bandit problem. The key is to conduct a detailed investigation to produce probabilistic control on the behavior of the 'Optimistic Estimate Discrepancy (OED)' of the LinReBoot policy (8). In principle, the OED is given by
214
+
215
+ $$
216
+ \mathbf{{OED}} = \text{ Optimism } \times \text{ Action Context Norm, } \tag{11}
217
+ $$
218
+
219
+ where the Action Context Norm is given by ${\begin{Vmatrix}{\mathbf{x}}_{k}\end{Vmatrix}}_{{\mathbf{V}}_{t}^{-1}}$ and Optimism is given by ${c}_{t,k}$ for the $k$ th action at time $t$ , defined in (14). Design of ${c}_{t,k}$ will be elaborated in Section 4.1.
220
+
221
+ Sufficiently Explored Arms. We define the concept of sufficiently explored arms to facilitate the formal regret analysis of LinReBoot. Intuitively, an arm is sufficiently explored if its index produced by the policy (8) is less than the mean reward of the optimal arm. Technically, we say arm $k$ is sufficiently explored at time $t$ if the adopted OED $\left( {{c}_{t,k}{\begin{Vmatrix}{\mathbf{x}}_{k}\end{Vmatrix}}_{{\mathbf{V}}_{t}^{-1}}}\right)$ is bounded by its optimality gap $\left( {\Delta }_{k}\right)$ .
222
+
223
+ The above notion defines the "set of sufficiently explored arms" ${\mathcal{S}}_{t}$ , formally
224
+
225
+ $$
226
+ {\mathcal{S}}_{t} \mathrel{\text{ := }} \left\{ {k \in \left\lbrack K\right\rbrack : {c}_{t,k}{\begin{Vmatrix}{\mathbf{x}}_{k}\end{Vmatrix}}_{{\mathbf{V}}_{t}^{-1}} < {\Delta }_{k}}\right\} , \tag{12}
227
+ $$
228
+
229
+ where ${c}_{t,k}$ is the collaborated optimism and ${c}_{t,k}{\begin{Vmatrix}{\mathbf{x}}_{k}\end{Vmatrix}}_{{\mathbf{V}}_{t}^{-1}}$ is an optimistic estimate of the discrepancy of the policy index (8).
230
+
231
+ The key consequence of set (12) is that, any member in ${\mathcal{S}}_{t}$ enjoys the property
232
+
233
+ $$
234
+ \forall j \in {\mathcal{S}}_{t} \cap \left\lbrack K\right\rbrack : {\widetilde{\mu }}_{j,t} < {\mu }_{1}; \tag{13}
235
+ $$
236
+
237
+ that is, the LinReBoot policy always avoids an index (8) from the sufficiently explored subset whose bootstrapped mean is less than the optimal mean reward, unless all arms are sufficiently explored (see equation (82) in the proof of Lemma A.1 in Section B.1 for technical details).
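The membership test in (12) is immediate once the optimism terms, the context norms and the gaps are available; a small sketch with made-up numbers (all values are illustrative assumptions):

```python
import numpy as np

def sufficiently_explored(c, xnorm_Vinv, gaps):
    """Set S_t of eq. (12): arms whose OED c_{t,k} * ||x_k||_{V_t^{-1}}
    falls below the optimality gap Delta_k (all arrays of length K)."""
    return {k for k in range(len(gaps)) if c[k] * xnorm_Vinv[k] < gaps[k]}

S = sufficiently_explored(np.array([1.0, 2.0, 0.5]),   # c_{t,k}
                          np.array([0.1, 0.2, 0.4]),   # ||x_k||_{V_t^{-1}}
                          np.array([0.3, 0.3, 0.1]))   # Delta_k
# arm 0: 0.1 < 0.3 (in S_t); arm 1: 0.4 >= 0.3; arm 2: 0.2 >= 0.1
```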
238
+
239
+ § 4.1 COLLABORATE OPTIMISM
240
+
241
+ Here we elaborate on the collaborated optimism adopted in the definition of sufficiently explored arms (12). Concretely, the collaborated optimism has the form
242
+
243
+ $$
244
+ {c}_{t,k} = {c}_{1}\left( {t,k}\right) + {c}_{2}\left( {t,k}\right) , \tag{14}
245
+ $$
246
+
247
+ where ${c}_{1}\left( {t,k}\right)$ is called sample optimism and ${c}_{2}\left( {t,k}\right)$ is called bootstrap optimism for arm $k$ at time $t$ .
248
+
249
+ Sample Optimism. The sample optimism ${c}_{1}\left( {t,k}\right)$ serves as a control on the event that "the realized sample estimate discrepancy (ED) is bounded by sample OED":
250
+
251
+ $$
+ {E}_{t,k} \mathrel{\text{ := }} \left\{ {\left| {{\widehat{\mu }}_{k,t} - {\mu }_{k}}\right| \leq {c}_{1}\left( {t,k}\right) {\begin{Vmatrix}{\mathbf{x}}_{k}\end{Vmatrix}}_{{\mathbf{V}}_{t}^{-1}}}\right\} , \tag{15a}
+ $$
254
+
255
+ $$
256
+ {E}_{t} \mathrel{\text{ := }} \mathop{\bigcap }\limits_{{k = 1}}^{K}{E}_{t,k} \tag{15b}
257
+ $$
258
+
259
+ where ${c}_{1}\left( {t,k}\right)$ is a constant that can be tuned by our LinReBoot algorithm, making the bad events ${\bar{E}}_{t,k}$ and ${\bar{E}}_{t}$ unlikely. In fact, ${E}_{t,k}$ is the event that the least squares estimate is "close" to the true mean reward for arm $k$ at round $t$ . In Section 5, the probability of the bad event ${\bar{E}}_{t}$ is controlled by a user-tuned parameter based on Lemma 5.1.
260
+
261
+ Bootstrap Optimism. The bootstrap optimism ${c}_{2}\left( {t,k}\right)$ serves as a control on the event that "the realized bootstrap ED is bounded by bootstrap OED":
262
+
263
+ $$
264
+ {E}_{t,k}^{\prime } \mathrel{\text{ := }} \left\{ {\left| {{\widetilde{\mu }}_{k,t} - {\widehat{\mu }}_{k,t}}\right| \leq {c}_{2}\left( {t,k}\right) {\begin{Vmatrix}{\mathbf{x}}_{k}\end{Vmatrix}}_{{\mathbf{V}}_{t}^{-1}}}\right\} , \tag{16a}
265
+ $$
266
+
267
+ $$
268
+ {E}_{t}^{\prime } \mathrel{\text{ := }} \mathop{\bigcap }\limits_{{k = 1}}^{K}{E}_{t,k}^{\prime } \tag{16b}
269
+ $$
270
+
271
+ where ${c}_{2}\left( {t,k}\right)$ is also a constant, controlling the conditional probability of the bad event ${\bar{E}}_{t}^{\prime }$ ; it can be tuned by our LinReBoot algorithm as well. Similar to ${E}_{t,k}$ , ${E}_{t,k}^{\prime }$ is the event that the residual-bootstrap-based estimate is "close" to the least squares estimate ${\widehat{\mu }}_{k,t}$ for arm $k$ at round $t$ . In Section 5, the probability of the bad event ${\bar{E}}_{t}^{\prime }$ is controlled by a user-tuned parameter based on Lemma 5.2.
272
+
273
+ § 4.2 OPTIMISM DESIGN
274
+
275
+ Choice of sample optimism $\left( \alpha \right)$ . The goal of this part is to illustrate how to pick the sample OED such that the event (15) holds with probability at least $1 - \alpha$ for a given confidence budget $\alpha \in \left( {0,1}\right)$ . Formally, the goal is to find a sample OED function ${c}_{1}\left( {t,k}\right) : \left\lbrack n\right\rbrack \times \left\lbrack K\right\rbrack \mapsto \mathbb{R}$ such that the event (15a) holds with probability at least $1 - {\alpha }_{k}$ . To meet the purpose of risk control, we specify the sample OED function as
276
+
277
+ $$
278
+ {c}_{1}\left( {t,k}\right) \mathrel{\text{ := }} {R}_{2}\sqrt{d\log \left( {\left( {1 + t{L}^{2}/\lambda }\right) /{\alpha }_{k}}\right) } + {\lambda }^{1/2}{S}_{2}. \tag{17}
279
+ $$
280
+
281
+ Lemma 5.1 gives the formal result on why such choice has confidence budget at most ${\alpha }_{k}$ . For regret analysis, define ${\alpha }_{\min } = \mathop{\min }\limits_{{k \in \left\lbrack K\right\rbrack }}{\alpha }_{k}$ and $\mathbf{\alpha } = {\left( {\alpha }_{1},\ldots ,{\alpha }_{K}\right) }^{\top }$ .
282
+
283
+ Choice of bootstrap optimism $\left( \beta \right)$ . The goal of this part is to pick the bootstrap OED such that the event (16) holds with probability at least $1 - \beta$ for a given confidence budget $\beta \in \left( {0,1}\right)$ . Formally, the goal is to find a bootstrap OED function ${c}_{2}\left( {t,k}\right) : \left\lbrack n\right\rbrack \times \left\lbrack K\right\rbrack \mapsto \mathbb{R}$ such that the event (16a) holds with probability at least $1 - {\beta }_{k}$ . To meet the purpose of risk control, we specify the bootstrap OED function as
284
+
285
+ $$
+ {c}_{2}\left( {t,k}\right) \mathrel{\text{ := }} \sqrt{\left( {2{\sigma }_{\omega }^{2}{RS}{S}_{k,t}\log \left( {2/{\beta }_{k}}\right) }\right) /{s}_{k,t - 1}^{2}{\begin{Vmatrix}{\mathbf{x}}_{k}\end{Vmatrix}}_{{\mathbf{V}}_{t}^{-1}}^{2}}. \tag{18}
+ $$
290
+
291
+ Lemma 5.2 gives the formal result on why such choice has a confidence budget at most ${\beta }_{k}$ . For regret analysis, let ${\beta }_{\min }$ be the smallest ${\beta }_{k},\forall k \in \left\lbrack K\right\rbrack$ and $\mathbf{\beta } =$ ${\left( {\beta }_{1},\ldots ,{\beta }_{K}\right) }^{\top }$ .
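The two optimism terms (17) and (18) are closed-form and cheap to evaluate; a direct transcription (argument names are ours, and the placeholder values in the usage hints are assumptions):

```python
import numpy as np

def c1(t, d, L, lam, R2, S2, alpha_k):
    """Sample optimism, eq. (17)."""
    return R2 * np.sqrt(d * np.log((1 + t * L**2 / lam) / alpha_k)) \
        + np.sqrt(lam) * S2

def c2(rss_kt, s_kt, xk_norm_Vinv, sigma_w, beta_k):
    """Bootstrap optimism, eq. (18); xk_norm_Vinv is ||x_k||_{V_t^{-1}}."""
    return np.sqrt(2 * sigma_w**2 * rss_kt * np.log(2 / beta_k)
                   / (s_kt**2 * xk_norm_Vinv**2))
```

Note that `c1` grows only logarithmically in $t$, while `c2` shrinks like $1/{s}_{k,t-1}$ as arm $k$ accumulates pulls, so the collaborated optimism ${c}_{t,k} = {c}_{1} + {c}_{2}$ tightens for frequently pulled arms.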
292
+
293
+ § 4.3 OPTIMISM FOR OPTIMAL ARM
294
+
295
+ Sample-Bootstrap OED ratio of the optimal arm (b). As indicated by the regret analysis in [Kveton et al., 2019a], instead of controlling the two sources of exploration independently, the relation between them needs to be considered, because this relation is critical for finding the optimal action. Motivated by this observation, we define a good event,
296
+
297
+ $$
298
+ {E}_{t}^{\prime \prime } \mathrel{\text{ := }} \left\{ {{\widetilde{\mu }}_{1,t} - {\widehat{\mu }}_{1,t} > {c}_{1}\left( {t,1}\right) {\begin{Vmatrix}{\mathbf{x}}_{1}\end{Vmatrix}}_{{\mathbf{V}}_{t}^{-1}}}\right\} . \tag{19}
299
+ $$
300
+
301
+ | Notation | Definition |
+ | --- | --- |
+ | ${\zeta }_{1}\left( {n,d}\right)$ | $\left( {{L}_{2}\sqrt{d\log \left( \frac{1 + n{L}^{2}/\lambda }{{\alpha }_{\min }}\right) } + {\lambda }^{1/2}{S}_{2}}\right) \times \sqrt{2\left( {n - K}\right) d\log \left( {1 + \mathop{\sum }\limits_{{i = 1}}^{r}{\sigma }_{i}^{2}/{d\lambda }}\right) }$ |
+ | ${\zeta }_{2}\left( {n,d}\right)$ | $\sqrt{2{\sigma }_{\omega }^{2}\log \left( \frac{2}{{\beta }_{\min }}\right) } \times \sqrt{2\left( {n - K}\right) d\log \left( {1 + \mathop{\sum }\limits_{{i = 1}}^{r}{\sigma }_{i}^{2}/{d\lambda }}\right) }$ |
+ | ${\zeta }_{3}\left( n\right)$ | ${2K}\sqrt{4{L}_{2}{\sigma }_{\omega }^{2}\log \left( \frac{2}{{\beta }_{\min }}\right) \left( {\log n + 1}\right) }$ |
+ | ${\zeta }_{4}\left( n\right)$ | $2{S}_{2}L\left( {\left( {n - K}\right) \left( {\alpha + \beta }\right) + K - 1}\right)$ |
+
+ Table 1: Notations in Regret Analysis
320
+
321
+ Given the good event ${E}_{t}^{\prime \prime }$ , the policy index ${\widetilde{\mu }}_{1,t}$ of the optimal arm enjoys a further positive bias; hence the agent has a better chance of taking the optimal action.
322
+
323
+ In particular, we highlight a constant $b$ used to measure the ratio of the sample optimism (17) to the bootstrap optimism (18); formally, we require $b$ satisfies
324
+
325
+ $$
326
+ {c}_{1}\left( {t,1}\right) /{c}_{2}\left( {t,1}\right) \geq b \cdot \sqrt{2\log \left( {2/{\beta }_{1}}\right) }\text{ . } \tag{20}
327
+ $$
328
+
329
+ Intuitively, the constant $b$ measures the relation between the sample OED and the bootstrap OED of the optimal arm. This $b$ plays an important role in the probability lower bound of event (19) (see Lemma 5.3). Note that if (20) holds, we have the lower bound (26); otherwise, we have the lower bound (27). In both cases, we have a lower bound on the probability of the event (19).
330
+
331
+ Good event for the optimal arm $\left( \gamma \right)$ . Here we introduce the event that over-exploration and under-exploration of the optimal arm are avoided simultaneously. Formally, the constant $\gamma$ controls the probability that the bandit index (8) neither over-explores (event ${E}_{t}^{\prime }$ ) nor under-explores (event ${E}_{t}^{\prime \prime }$ ):
332
+
333
+ $$
334
+ \left\{ {{c}_{1}\left( {t,1}\right) < \left( {{\widetilde{\mu }}_{1,t} - {\widehat{\mu }}_{1,t}}\right) /{\begin{Vmatrix}{\mathbf{x}}_{1}\end{Vmatrix}}_{{\mathbf{V}}_{t}^{-1}} < {c}_{2}\left( {t,1}\right) }\right\} . \tag{21}
335
+ $$
336
+
337
+ Technically, we can show that the probability of the event (21) is lower bounded by the term
338
+
339
+ $$
340
+ {\mathbb{P}}_{t}\left( {E}_{t}^{\prime \prime }\right) - {\mathbb{P}}_{t}\left( {\bar{E}}_{t}^{\prime }\right) , \tag{22}
341
+ $$
342
+
343
+ with probability at least $1 - \gamma$ (Lemma 5.4). Such a lower bound is translated into an upper bound in the regret analysis.
344
+
345
+ § 5 FORMAL RESULTS
346
+
347
+ § 5.1 REGRET BOUND FOR LINREBOOT
348
+
349
+ Theorem 5.1. Under Assumptions 1, 2, 3 and technical conditions (32) and (74), with probability at least $1 - \left( {\delta + \gamma }\right)$ , the expected regret of Algorithm 1 is bounded as,
350
+
351
+ $$
+ {R}_{n} \leq {C}_{1}\left( {{\alpha }_{1},\mathbf{\beta },\gamma ,b}\right) {\zeta }_{1}\left( {n,d}\right) + {C}_{2}\left( {\mathbf{\alpha },\mathbf{\beta },\gamma ,b,\delta }\right) {\zeta }_{2}\left( {n,d}\right) + {C}_{1}\left( {{\alpha }_{1},\mathbf{\beta },\gamma ,b}\right) {\zeta }_{3}\left( n\right) + {\zeta }_{4}\left( n\right) , \tag{23}
+ $$
362
+
363
+ where ${\zeta }_{1},{\zeta }_{2},{\zeta }_{3}$ and ${\zeta }_{4}$ are defined in Table 1, and ${C}_{1}$ , ${C}_{2},{M}_{1},{M}_{2}$ are described in Table 2.
364
+
365
+ Proof. See appendix A.1.
366
+
367
+ Corollary 5.2. Let $\mathbf{\alpha } = \mathbf{\beta } = \frac{1}{\sqrt{n}}\mathbf{1}$ ; then the order of the high-probability upper bound in Theorem 5.1 is $\widetilde{O}\left( {d\sqrt{n}}\right)$ .
368
+
369
+ Proof. See appendix A.2.
370
+
371
+ Corollary 5.2 shows that our regret bound scales as the regret bound of linear Thompson sampling [Agrawal and Goyal, 2013b] and Linear PHE [Kveton et al., 2019a].
372
+
373
+ § 5.2 VALIDATE SAMPLE OPTIMISM
374
+
375
+ Lemma 5.1. Under Assumptions 1, 2, 3, choosing ${c}_{1}\left( {t,k}\right)$ as in (17), the probability $\mathbb{P}\left( {\bar{E}}_{t,k}\right)$ of the bad event corresponding to the least squares estimate described in (15) is controlled. Formally, $\forall k \in \left\lbrack K\right\rbrack ,\forall {\alpha }_{k} > 0,\forall t \geq 1$ ,
376
+
377
+ $$
378
+ \mathbb{P}\left( {\left| {{\widehat{\mu }}_{k,t} - {\mu }_{k}}\right| \leq {c}_{1}\left( {t,k}\right) {\begin{Vmatrix}{\mathbf{x}}_{k}\end{Vmatrix}}_{{\mathbf{V}}_{t}^{-1}}}\right) \geq 1 - {\alpha }_{k}. \tag{24}
379
+ $$
380
+
381
+ Consequently, we have $\mathbb{P}\left( {\bar{E}}_{t}\right) \leq \alpha \mathrel{\text{ := }} \mathop{\sum }\limits_{{k = 1}}^{K}{\alpha }_{k}$ .
382
+
383
+ Proof. See appendix A.3.
384
+
385
+ Lemma 5.1 shows that the choice of ${c}_{1}\left( {t,k}\right)$ in (17) for the sample optimism event (15) is valid with confidence budget $\alpha$ .
386
+
387
+ § 5.3 VALIDATE BOOTSTRAP OPTIMISM
388
+
389
+ Lemma 5.2. Suppose the bootstrap weights are Gaussian and pick ${c}_{2}\left( {t,k}\right)$ as in (18). The conditional probability ${\mathbb{P}}_{t}\left( {\bar{E}}_{t,k}^{\prime }\right)$ of the bad event corresponding to residual bootstrap exploration described in (16) is controlled. Formally, $\forall k \in \left\lbrack K\right\rbrack ,\forall {\beta }_{k} > 0,\forall t \geq 1$ ,
390
+
391
+ $$
392
+ {\mathbb{P}}_{t}\left( {\left| {{\widetilde{\mu }}_{k,t} - {\widehat{\mu }}_{k,t}}\right| \leq {c}_{2}\left( {t,k}\right) {\begin{Vmatrix}{\mathbf{x}}_{k}\end{Vmatrix}}_{{\mathbf{V}}_{t}^{-1}}}\right) \geq 1 - {\beta }_{k}. \tag{25}
393
+ $$
394
+
395
+ Consequently, we have ${\mathbb{P}}_{t}\left( {\bar{E}}_{t}^{\prime }\right) \leq \beta \mathrel{\text{ := }} \mathop{\sum }\limits_{{k = 1}}^{K}{\beta }_{k}$ .
396
+
397
+ Proof. See appendix A.4.
398
+
399
+ Lemma 5.2 shows that the choice of ${c}_{2}\left( {t,k}\right)$ in (18) for the bootstrap optimism event (16) is valid with confidence budget $\beta$ .
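A quick Monte Carlo sanity check of the coverage claimed in (25): with Gaussian weights, ${\widetilde{\mu }}_{k,t} - {\widehat{\mu }}_{k,t}$ is conditionally $N(0, {\sigma }_{\omega }^{2}{RSS}_{k,t}/{s}_{k,t-1}^{2})$ by (9), so the event should hold with probability at least $1 - {\beta }_{k}$. All numeric values below are assumptions chosen for the demo:

```python
import numpy as np

rng = np.random.default_rng(2)

sigma_w, rss, s_k, beta_k = 1.0, 2.0, 5, 0.1
# c2(t,k) * ||x_k||_{V_t^{-1}} from eq. (18); the context norm cancels.
bound = np.sqrt(2 * sigma_w**2 * rss * np.log(2 / beta_k)) / s_k
# Draw mu_tilde - mu_hat directly from its conditional law (eq. 9).
dev = rng.normal(0.0, sigma_w * np.sqrt(rss) / s_k, size=100_000)
coverage = np.mean(np.abs(dev) <= bound)   # should be >= 1 - beta_k
```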
400
+
401
+ § 5.4 SAMPLE-BOOTSTRAP RATIO
402
+
403
+ Lemma 5.3. Under Assumptions 1, 2, 3, suppose the bootstrap weights are Gaussian. The conditional probability ${\mathbb{P}}_{t}\left( {E}_{t}^{\prime \prime }\right)$ of the anti-concentration event for the optimal arm described in (19) has a lower bound. Formally, if $b$ satisfies (20),
404
+
405
+ $$
+ {\mathbb{P}}_{t}\left( {E}_{t}^{\prime \prime }\right) \geq \frac{b}{\sqrt{2\pi }}\exp \left( {-\frac{3{c}_{1}^{2}\left( {t,1}\right) {s}_{1,t - 1}^{2}{\begin{Vmatrix}{\mathbf{x}}_{1}\end{Vmatrix}}_{{\mathbf{V}}_{t}^{-1}}^{2}}{2{\sigma }_{\omega }^{2}{RS}{S}_{1,t}}}\right) . \tag{26}
+ $$
410
+
411
+ Otherwise,
412
+
413
+ $$
414
+ {\mathbb{P}}_{t}\left( {E}_{t}^{\prime \prime }\right) \geq \Phi \left( {-b}\right) , \tag{27}
415
+ $$
416
+
417
+ where $\Phi$ is the CDF of standard normal distribution.
418
+
419
+ Proof. See appendix A.5.
420
+
421
+ Lemma 5.3 provides the lower bound for the good event ${E}_{t}^{\prime \prime }$ . The result indicates that if the bootstrap optimism is not 'too large', then the LinReBoot procedure enjoys additional regret reduction.
422
+
423
+ § 5.5 VALIDATE GOOD EVENT
424
+
425
+ Lemma 5.4. Under Assumptions 1, 2, 3, suppose the bootstrap weights are Gaussian and assume $b$ satisfies the technical condition (74). Then, with probability at least $1 - \gamma$ , ${\mathbb{P}}_{t}\left( {E}_{t}^{\prime \prime }\right) - {\mathbb{P}}_{t}\left( {\bar{E}}_{t}^{\prime }\right)$ is lower bounded by
426
+
427
+ $$
+ \frac{b}{\sqrt{2\pi }}\exp \left( {-\frac{3{s}_{1,t - 1}^{3/2}{c}_{1}^{2}\left( {t,1}\right) {\begin{Vmatrix}{\mathbf{x}}_{1}\end{Vmatrix}}_{2}^{2}}{8{\sigma }_{\omega }^{2}\left( {{\sigma }_{\min }^{2} + \lambda }\right) \sqrt{\frac{1}{{M}_{2}}\log \left( \frac{{M}_{1}}{1 - \gamma }\right) }}}\right) - \beta , \tag{28}
+ $$
432
+
433
+ where ${M}_{1}$ and ${M}_{2}$ are defined in table 2 .
434
+
435
+ Proof. See appendix A.6.
436
+
437
+ Lemma 5.4 provides a high-probability lower bound for the difference between the probability of the anti-concentration event ${E}_{t}^{\prime \prime }$ and the probability of the bad event discussed under bootstrap optimism in Section 4.1. This lower bound also applies to the probability of the 'neither under- nor over-exploration' event (21). Lemma 5.4 links the sample optimism and the bootstrap optimism, and maintains the right amount of exploration of the optimal arm.
438
+
439
+ § 6 EXPERIMENTS
440
+
441
+ In this section, we conduct empirical studies under three settings: Stochastic Linear Bandit, Contextual Linear Bandit and Linear Bandit with Covariates. Our LinReBoot is compared to several baselines including LinTS-G [Agrawal and Goyal, 2013b, Lattimore and Szepesvári, 2020], LinTS-IG [Honda and Takemura, 2014, Riquelme et al., 2018], LinPHE [Kveton et al., 2019a], LinGIRO [Kveton et al., 2019c] and LinUCB [Abbasi-Yadkori et al., 2011, Lattimore and Szepesvári, 2020]. More details about the baselines can be found in Appendix D.6.
442
+
443
+ § 6.1 STOCHASTIC LINEAR BANDIT
444
+
445
+ We compare LinReBoot to other linear bandit algorithms under the stochastic linear bandit setting described in Section 2. We experiment with several dimensions $d$ , including 5, 10 and 20; $K$ is chosen as 100. Synthetic data generation for this setting is deferred to Appendix D.2 in the supplementary material. Results. The first row of Figure 1 reports the results for the Stochastic Linear Bandit setting. Our LinReBoot rivals LinTS-G and LinTS-IG while substantially exceeding LinGIRO, LinPHE and LinUCB. As $d$ increases, LinReBoot rivals or exceeds the best of the other methods.
446
+
447
+ [Figure 1: regret curves over decision time for the three settings (rows) and $d = 5, 10, 20$ (columns); legend: Linear ReBoot, Linear TS-G, Linear TS-IG, Linear GIRO, Linear PHE, Linear UCB.]
448
+
449
+ Figure 1: Comparison of LinReBoot with Gaussian bootstrap weights to baselines under three linear bandit problems and three context dimensions $d$ . The first row refers to the setting in Section 6.1, the second row to Section 6.2 and the last row to Section 6.3. The three columns refer to $d = 5$ , $d = {10}$ and $d = {20}$ , respectively.
450
+
451
+ § 6.2 CONTEXTUAL LINEAR BANDIT
452
+
453
+ In the second experiment, we compare LinReBoot to other linear bandit algorithms under the Contextual Linear Bandit setting, where the contexts are generated from arm-specific distributions. Note that this setting matches previous work [Chu et al., 2011]. Linear bandit algorithms can also be applied in this kind of environment. In our experiment, LinReBoot is implemented as Algorithm 2 in Appendix D.1. Like the setting in Section 6.1, the dimension $d$ is chosen as 5, 10 or 20, and the synthetic data generation for this setting is described in Appendix D.2. Results. The second row of Figure 1 reports the results for Contextual Linear Bandit. Our LinReBoot rivals LinTS-G and substantially exceeds LinTS-IG, LinGIRO, LinPHE and LinUCB. As $d$ increases, LinReBoot rivals LinTS-IG and exceeds the others.
454
+
455
+ § 6.3 BANDIT WITH COVARIATES
456
+
457
+ Our last experiment is conducted under the setting of linear bandit with covariates, also called the linearly parametrized bandit by [Rusmevichientong and Tsitsiklis, 2010]. This problem differs significantly from the previous two problems in the following ways. Each arm has its own true parameter ${\mathbf{\theta }}_{k}$ ; that is, each arm has its own estimate ${\widehat{\mathbf{\theta }}}_{k}$ from the ridge regression procedure in Section 3.2. Also, unlike the setting in Section 6.2, the contexts are generated from a distribution that is independent of the arms. Thus the overall task in this setting is not only the estimation of the target parameter $\mathbf{\theta }$ , but also the detection of which arm a context belongs to. This case is also referred to as online decision-making with covariates [Bastani and Bayati, 2020]. For LinReBoot in this setting, the detailed algorithm is provided as Algorithm 3 in Appendix D.1. $d$ is chosen as 5, 10 or 20 and $K = {10}$ . Synthetic data generation for this setting is described in Appendix D.2. Results. The third row of Figure 1 reports the results for Linear Bandit with Covariates. Our LinReBoot exceeds all competing algorithms: LinTS-G, LinTS-IG, LinGIRO, LinPHE and LinUCB.
458
+
459
+ Summary. From Figure 1, the proposed LinReBoot is always among the top 3 algorithms under all settings and all choices of dimension $d$ . More specifically, LinReBoot is clearly comparable to the state-of-the-art linear Thompson sampling algorithms (LinTS-G, LinTS-IG) and even outperforms them in many cases. Regarding computational cost, Table 3 shows that our proposed LinReBoot is consistently computationally efficient compared to LinTS-G, LinTS-IG and LinUCB under all three settings.
460
+
461
+ § 7 CONCLUSION
462
+
463
+ We propose the LinReBoot algorithm for stochastic linear bandit problems. In theory, we prove that LinReBoot secures an $\widetilde{O}\left( {d\sqrt{n}}\right)$ high-probability expected regret. Empirically, we show that LinReBoot rivals LinTS-G and LinTS-IG and exceeds LinPHE, LinGIRO and LinUCB, which supports the easy generalizability of the ReBoot principle [Wang et al., 2020] under various contextual bandit settings, including Stochastic Linear Bandit, Contextual Linear Bandit and Linear Bandit with Covariates.
UAI/UAI 2022/UAI 2022 Conference/B5Lf6PUoqg5/Initial_manuscript_md/Initial_manuscript.md ADDED
The diff for this file is too large to render. See raw diff
 
UAI/UAI 2022/UAI 2022 Conference/B5Lf6PUoqg5/Initial_manuscript_tex/Initial_manuscript.tex ADDED
1
+ § REVAR: STRENGTHENING POLICY EVALUATION VIA REDUCED VARIANCE SAMPLING
2
+
3
+ § ABSTRACT
4
+
5
+ This paper studies the problem of data collection for policy evaluation in Markov decision processes (MDPs). In policy evaluation, we are given a target policy and asked to estimate the expected cumulative reward it will obtain in an environment formalized as an MDP. We develop theory for optimal data collection within the class of tree-structured MDPs by first deriving an oracle data collection strategy that uses knowledge of the variance of the reward distributions. We then introduce the Reduced Variance Sampling (ReVar) algorithm that approximates the oracle strategy when the reward variances are unknown a priori and bound its sub-optimality compared to the oracle strategy. Finally, we empirically validate that ReVar leads to policy evaluation with mean squared error comparable to the oracle strategy and significantly lower than simply running the target policy.
6
+
7
+ § 1 INTRODUCTION
8
+
9
+ In reinforcement learning (RL) applications, there is often a need for policy evaluation to determine (or estimate) the expected return (future cumulative reward) of a given policy. Policy evaluation is also required in other sequential decision-making settings outside of RL. For example, testing an autonomous vehicle stack or ad-serving system can be seen as a policy evaluation application. Accurate and data-efficient policy evaluation is critical for safe and trustworthy deployment of autonomous systems.
10
+
11
+ This paper studies data collection for low mean squared error (MSE) policy evaluation in sequential decision-making tasks formalized as Markov decision processes (MDPs). The objective of policy evaluation is to estimate the expected return that will be obtained by running a target policy which is a given probabilistic mapping from states to actions.
12
+
13
+ To evaluate the target policy, we require data from the environment in which it will be deployed. Collecting data requires running a (possibly non-stationary) behavior policy to generate state-action-reward trajectories. Our goal is to find a behavior policy that leads to a minimum MSE evaluation of the target policy.
14
+
15
+ The most natural choice is on-policy sampling in which we use the target policy as the behavior policy. However, we show that in some cases this choice is far from optimal (e.g., Figure 2 in our empirical analysis) as it fails to actively take actions from which the expected return is uncertain. Instead, an optimal behavior policy should take actions in any given state to reduce uncertainty in the current estimate of the expected return from that state.
16
+
17
+ Our paper makes the following main contributions. We first derive an optimal "oracle" behavior policy for finite tree-structured MDPs assuming oracle access to the MDP transition probabilities and variances of the reward distributions. Sampling trajectories according to the oracle behavior policy minimizes the MSE of the estimator of the target policy's expected return. As a special case (depth-1 tree MDPs), we recover the optimal behavior policy for multi-armed bandits Carpentier et al. [2015].
18
+
19
+ We then introduce a practical algorithm, Reduced Variance Sampling (ReVar), that adaptively learns the optimal behavior policy by observing rewards and adjusting the policy to select actions that reduce the MSE of the estimator. The main idea of ReVar is to plug-in upper-confidence bounds on the reward distribution variances to approximate the oracle behavior policy. We define a notion of policy evaluation regret compared to the oracle behavior policy, and bound the regret of ReVar. The regret converges rapidly to 0 as the number of sampled episodes grows, theoretically guaranteeing that ReVar quickly matches the performance of the oracle policy. Finally, we implement ReVar and show it leads to low MSE policy evaluation in both a tree-structured and a general finite-horizon MDP. Taken together, our contributions provide a theoretical foundation towards optimal data collection for policy evaluation in MDPs.
20
+
21
+ The remainder of the paper is organized as follows. In Section 3 we reformulate our problem in the bandit setting and discuss related bandit works. In Section 4 we extend the bandit formulation to the tree MDP. Finally, we introduce the more general Directed Acyclic Graph (DAG) MDP in Section 5 and discuss some limitations of our sampling strategy. We show numerical experiments in Section 6 and conclude in Section 7.
22
+
23
+ § 2 BACKGROUND
24
+
25
+ In this section, we introduce notation, define the policy evaluation problem, and discuss the prior literature.
26
+
27
+ § 2.1 NOTATION
28
+
29
+ A finite-horizon Markov Decision Process, M, is the tuple $\left( {\mathcal{S},\mathcal{A},P,R,\gamma ,{d}_{0},L}\right)$ , where $\mathcal{S}$ is a finite set of states, $\mathcal{A}$ is a finite set of actions, $P : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \rightarrow \left\lbrack {0,1}\right\rbrack$ is a state transition function, $R$ is the reward distribution (formalized below), $\gamma \in \lbrack 0,1)$ is the discount factor, ${d}_{0}$ is the starting state distribution, and $L$ is the maximum episode length. A (stationary) policy, $\pi : \mathcal{S} \times \mathcal{A} \rightarrow \left\lbrack {0,1}\right\rbrack$ , is a probability distribution over actions conditioned on a given state. We assume data can only be collected through episodic interaction: an agent begins in state ${S}_{0} \sim {d}_{0}$ and then at each step $t$ takes an action ${A}_{t} \sim \pi \left( {\cdot \mid {S}_{t}}\right)$ and proceeds to state ${S}_{t + 1} \sim P\left( {\cdot \mid {S}_{t},{A}_{t}}\right)$ . Interaction terminates in at most $L$ steps. Each time the agent takes action ${a}_{t}$ in state ${s}_{t}$ it observes a reward ${R}_{t} \sim R\left( {{s}_{t},{a}_{t}}\right)$ . We assume $R\left( {s,a}\right) = \mathcal{P}\left( {\mu \left( {s,a}\right) ,{\sigma }^{2}\left( {s,a}\right) }\right)$ , where $\mathcal{P}$ denotes a parametric distribution with mean $\mu \left( {s,a}\right)$ and variance ${\sigma }^{2}\left( {s,a}\right)$ . The entire interaction produces a trajectory $H \mathrel{\text{ := }} {\left\{ \left( {S}_{t},{A}_{t},{R}_{t}\right) \right\} }_{t = 1}^{L}$ . We assume ${d}_{0}$ is known but $P$ and the reward distributions are unknown. We define the value of a policy as: $v\left( \pi \right) \mathrel{\text{ := }} {\mathbb{E}}_{\pi }\left\lbrack {\mathop{\sum }\limits_{{t = 1}}^{L}{\gamma }^{t}{R}_{t}}\right\rbrack$ , where ${\mathbb{E}}_{\pi }$ is the expectation w.r.t. trajectories sampled by following $\pi$ .
30
+
31
+ We will make use of the fact that the value of a policy can be written as: $v\left( \pi \right) = \mathbb{E}\left\lbrack {{v}_{0}^{\pi }\left( {S}_{0}\right) \mid {S}_{0} \sim {d}_{0}}\right\rbrack$ where,
32
+
33
+ $$
34
+ {v}_{t}^{\pi }\left( s\right) \mathrel{\text{ := }} \mathop{\sum }\limits_{a}\pi \left( {a \mid s}\right) \left\lbrack {\mu \left( {s,a}\right) + \gamma \mathop{\sum }\limits_{{s}^{\prime }}P\left( {{s}^{\prime } \mid s,a}\right) {v}_{t + 1}^{\pi }\left( {s}^{\prime }\right) }\right\rbrack
35
+ $$
40
+
41
+ for $t < L$ and ${v}_{t}^{\pi }\left( s\right) = 0$ for $t \geq L$ .
42
+
43
+ § 2.2 POLICY EVALUATION
44
+
45
+ We now formally define our objective. We are given a target policy, $\pi$ , for which we want to estimate $v\left( \pi \right)$ . To estimate $v\left( \pi \right)$ we will generate a set of $K$ trajectories where each trajectory is generated by following some policy. Let ${H}^{k} \mathrel{\text{ := }} {\left\{ {s}_{t}^{k},{a}_{t}^{k},{R}_{t}^{k}\left( {s}_{t}^{k},{a}_{t}^{k}\right) \right\} }_{t = 1}^{L}$ be the trajectory collected in episode $k$ and let ${b}^{k}$ be the policy ran to produce ${H}^{k}$ . The entire set of collected trajectories is given as $\mathcal{D} \mathrel{\text{ := }} {\left\{ {H}^{k},{b}^{k}\right\} }_{k = 1}^{K}$ .
46
+
47
+ Once $\mathcal{D}$ is collected, we estimate $v\left( \pi \right)$ with a certainty-equivalence estimate Sutton [1988]. Suppose $\mathcal{D}$ consists of $n = {KL}$ state-action transitions. We define the random variable representing the estimated future reward from state $s$ at time-step $t$ as:
48
+
49
+ $$
50
+ {Y}_{n}\left( {s,t}\right) \mathrel{\text{ := }} \mathop{\sum }\limits_{a}\pi \left( {a \mid s}\right) \left\lbrack {\widehat{\mu }\left( {s,a}\right) + \gamma \mathop{\sum }\limits_{{s}^{\prime }}\widehat{P}\left( {{s}^{\prime } \mid s,a}\right) {Y}_{n}\left( {{s}^{\prime },t + 1}\right) }\right\rbrack ,
51
+ $$
52
+
53
+ where ${Y}_{n}\left( {s,t + 1}\right) \mathrel{\text{ := }} 0$ if $t \geq L$ , $\widehat{\mu }\left( {s,a}\right)$ is the mean reward observed after taking action $a$ in state $s$ , and $\widehat{P}\left( {{s}^{\prime } \mid s,a}\right)$ is an estimate of $P\left( {{s}^{\prime } \mid s,a}\right)$ . Finally, the estimate of $v\left( \pi \right)$ is computed as ${Y}_{n} \mathrel{\text{ := }} \mathop{\sum }\limits_{s}{d}_{0}\left( s\right) {Y}_{n}\left( {s,0}\right)$ . In the policy evaluation literature, the certainty-equivalence estimator is also known as the direct method Jiang and Li [2016] and can be shown to be equivalent to batch temporal-difference estimators Sutton [1988], Pavse et al. [2020]. Thus, it is representative of two types of policy evaluation estimators that often give strong empirical performance Voloshin et al. [2019].
54
+
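The backward recursion above can be implemented directly. The following is a minimal sketch of the certainty-equivalence (direct method) computation on tabular NumPy arrays; the function and argument names are our own, not from the paper:

```python
import numpy as np

def certainty_equivalence_estimate(mu_hat, P_hat, pi, d0, gamma, L):
    """Certainty-equivalence estimate of v(pi) by backward recursion.

    mu_hat[s, a]    : empirical mean reward for (s, a)
    P_hat[s, a, s2] : empirical transition frequencies
    pi[s, a]        : target-policy probabilities
    d0[s]           : known starting-state distribution
    """
    S, A = mu_hat.shape
    Y = np.zeros(S)        # Y_n(s, L) = 0 beyond the horizon
    for _t in range(L):    # sweep backwards over time steps
        # Y_n(s, t) = sum_a pi(a|s) [ mu_hat(s,a) + gamma * sum_s' P_hat(s'|s,a) Y_n(s', t+1) ]
        Y = (pi * (mu_hat + gamma * (P_hat @ Y))).sum(axis=1)
    return float(d0 @ Y)   # Y_n = sum_s d0(s) Y_n(s, 0)
```

For a one-step problem ($L = 1$) this reduces to $\mathop{\sum }\limits_{s}{d}_{0}\left( s\right) \mathop{\sum }\limits_{a}\pi \left( {a \mid s}\right) \widehat{\mu }\left( {s,a}\right)$ , i.e., the bandit estimator of Section 3.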
55
+ Our objective is to determine the sequence of behavior policies that minimize error in estimation of $v\left( \pi \right)$ . Formally, we seek to minimize mean squared error which is defined as: ${\mathbb{E}}_{\mathcal{D}}\left\lbrack {\left( {Y}_{n} - v\left( \pi \right) \right) }^{2}\right\rbrack$ where the expectation is over the collected data set $\mathcal{D}$ .
56
+
57
+ § 2.3 RELATED WORK
58
+
59
+ Our paper builds upon work in the bandit literature for optimal data collection for estimating a weighted sum of the mean reward associated with each arm. Antos et al. [2008] study estimating the mean reward of each arm equally well and show that the optimal solution is to pull each arm proportional to the variance of its reward distribution. Since the variances are unknown a priori, they introduce an algorithm that pulls arms in proportion to the empirical variance of each reward distribution. Carpentier et al. [2015] extend this work by introducing a weighting on each arm that is equivalent to the target policy action probabilities in our work. They show that the optimal solution is then to pull each arm proportional to the product of the standard deviation of the reward distribution and the arm weighting. Instead of using the empirical standard deviations, they introduce an upper confidence bound on the standard deviation and use it to select actions. Our work is different from these earlier works in that we consider more general tree-structured MDPs of which bandits are a special case.
60
+
61
+ In RL and MDPs, exploration is widely studied with the objective of finding the optimal policy. Prior work attempts to balance exploration to reduce uncertainty with exploitation to converge to the optimal policy. Common approaches are based on reducing uncertainty Osband et al. [2016], O'Donoghue et al. [2018] or incentivizing visitation of novel states Barto [2013], Pathak et al. [2017], Burda et al. [2018]. These works differ from our work in that we focus on evaluating a fixed policy rather than finding the optimal policy. In our problem, the trade-off becomes balancing taking actions to reduce uncertainty with taking actions that the target policy is likely to take.
62
+
63
+ Our work is similar in spirit to work on adaptive importance sampling (AIS) Rubinstein and Kroese [2013], which aims to lower the variance of Monte Carlo estimators by adapting the data collection distribution. Adaptive importance sampling was used by Hanna et al. [2017] to lower the variance of policy evaluation in MDPs. It has also been used to lower the variance of policy gradient RL algorithms Bouchard et al. [2016], Ciosek and Whiteson [2017]. AIS methods attempt to find a single optimal sampling distribution whereas our approach attempts to reduce uncertainty in the estimated mean rewards. Another relevant work is that of Talebi and Maillard [2019], who use a different loss function to estimate the transition model rather than minimize the MSE in the off-policy setting.
64
+
65
+ § 3 OPTIMAL DATA COLLECTION IN MULTI-ARMED BANDITS
66
+
67
+ Before we address optimal data collection for policy evaluation in MDPs, we first revisit the problem in the bandit setting as addressed by earlier work Carpentier et al. [2015]. The bandit setting provides intuition for how a good data collection strategy should select actions, though it falls short of an entire solution for MDPs.
68
+
69
+ Observe that the policy value in a bandit problem is defined as $v\left( \pi \right) \mathrel{\text{ := }} \mathop{\sum }\limits_{{a = 1}}^{A}\pi \left( a\right) \mu \left( a\right)$ where the bandit consists of a single state $s$ and $A$ actions indexed as $a = 1,2,\ldots ,A$ . In this setting, the horizon is $L = 1$ , so we return to the same state after taking an action $a$ at time $t$ . Hence, we drop the state $s$ from our standard notation.
70
+
71
+ Suppose we have a budget of $n$ samples to divide between the arms and let ${T}_{n}\left( 1\right) ,{T}_{n}\left( 2\right) ,\ldots ,{T}_{n}\left( A\right)$ be the number of samples allocated to actions $1,2,\ldots ,A$ at the end of $n$ rounds. We define the estimate:
72
+
73
+ $$
74
+ {Y}_{n} \mathrel{\text{ := }} \mathop{\sum }\limits_{{a = 1}}^{A}\frac{\pi \left( a\right) }{{T}_{n}\left( a\right) }\mathop{\sum }\limits_{{h = 1}}^{{{T}_{n}\left( a\right) }}{R}_{h}\left( a\right) = \mathop{\sum }\limits_{{a = 1}}^{A}\pi \left( a\right) \widehat{\mu }\left( a\right) . \tag{1}
75
+ $$
76
+
77
+ where ${R}_{h}\left( a\right)$ is the ${h}^{\text{ th }}$ reward received after taking action $a$ . Note that, once all actions where $\pi \left( a\right) > 0$ have been tried, ${Y}_{n}$ is an unbiased estimator of $v\left( \pi \right)$ since $\widehat{\mu }\left( a\right)$ is an unbiased estimator of $\mu \left( a\right)$ . Thus, reducing MSE requires allocating the $n$ samples to reduce variance. As shown by Carpentier et al. [2015], the minimal-variance allocation is given by pulling each arm with the proportion ${b}^{ \star }\left( a\right) \propto \pi \left( a\right) \sigma \left( a\right)$ . Though this result was previously shown, we include a proof for completeness as Proposition 1 in Appendix A. Intuitively, there is more uncertainty about the mean reward for actions with higher-variance reward distributions. Selecting these actions more often is needed to offset the higher variance. The optimal proportion also takes $\pi$ into account, as a high-variance mean reward estimate for one action can be acceptable if $\pi$ would rarely take that action.
78
+
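As a concrete sketch (our own illustration, with made-up numbers), the minimal-variance allocation and its advantage over on-policy sampling can be computed as follows:

```python
import numpy as np

def optimal_bandit_allocation(pi, sigma):
    """Minimal-variance proportions b*(a) proportional to pi(a) * sigma(a)."""
    w = pi * sigma
    return w / w.sum()

def estimator_variance(pi, sigma, b, n):
    """Var(Y_n) = sum_a pi(a)^2 sigma(a)^2 / (n b(a)) for a fixed allocation b."""
    return np.sum(pi**2 * sigma**2 / (n * b))

pi = np.array([0.5, 0.5])     # uniform target policy (hypothetical)
sigma = np.array([3.0, 1.0])  # arm 1 has a noisier reward distribution
b_star = optimal_bandit_allocation(pi, sigma)  # -> [0.75, 0.25]
```

Here the noisier arm receives three quarters of the pulls even though the target policy plays both arms equally often, and the resulting estimator variance is never larger than under on-policy sampling ( $b = \pi$ ).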
79
+ Note that sampling according to eq. (13) introduces unnecessary variance compared to deterministically selecting actions to match the optimal proportion. Since the variances are typically unknown, a number of works in the bandit community propose different approaches to estimate the variances for both basic bandits and several related extensions Antos et al. [2008], Carpentier and Munos [2011, 2012], Carpentier et al. [2015], Neufeld et al. [2014]. However, none of these works address the fundamental challenge that MDPs bring - action selection must account for both immediate variance reduction in the current state as well as variance reduction in future states visited. In the next section, we begin to address this challenge by deriving minimal-variance action proportions for tree-structured MDPs.
80
+
81
+ § 4 OPTIMAL DATA COLLECTION IN TREE MDPS
82
+
83
+ In this section, we derive the optimal action proportions for tree-structured MDPs assuming the variances of the reward distributions are known, introduce an algorithm that approximates the optimal allocation when the variances are unknown, and bound the finite-sample MSE of this algorithm. Tree MDPs are a straightforward extension of the multi-armed bandit model to capture the fact that the optimal allocation for each action in a given state must consider the future states that could arise from taking that action.
84
+
85
+ < g r a p h i c s >
86
+
87
+ Figure 1: An $L$ -depth tree with 2 actions at each state.
88
+
89
+ We first define a discrete tree MDP as follows:
90
+
91
+ Definition 1. (Tree MDP) An MDP is a discrete tree MDP $\mathbf{T} \subset \mathbf{M}$ (see Figure 1) if the following holds:
92
+
93
+ (1) There are $L$ levels indexed by $\ell$ where $\ell = 1,2,\ldots ,L$ .
94
+
95
+ (2) Every state is represented as ${s}_{i}^{\ell }$ where $\ell$ is the level of the state $s$ indexed by $i$ .
+
+ (3) The transition probabilities are such that one can only transition from a state in level $\ell$ to one in level $\ell + 1$ and each non-initial state can only be reached through one other state and only one action in that state. Formally, $\forall {s}^{\prime }, P\left( {{s}^{\prime } \mid s,a}\right) \neq 0$ for only one state-action pair $\left( {s,a}\right)$ and if ${s}^{\prime }$ is in level $\ell + 1$ then $s$ is in level $\ell$ . Finally, $P\left( {{s}_{j}^{L + 1} \mid {s}_{i}^{L},a}\right) = 0,\forall a$ .
96
+
97
+ (4) For simplicity, we assume that there is a single starting state ${s}_{1}^{1}$ (called the root). It is easy to extend our results to multiple starting states with a starting state distribution, ${d}_{0}$ , by assuming that there is only one action available in the root that leads to each possible start state, $s$ , with probability ${d}_{0}\left( s\right)$ . The leaf states are denoted as ${s}_{i}^{L}$ .
98
+
99
+ (5) The interaction stops after $L$ steps in state ${s}_{i}^{L}$ after taking an action $a$ and observing the reward ${R}_{L}\left( {{s}_{i}^{L},a}\right)$ .
100
+
101
+ Note that, because we assume a single initial state, ${s}_{1}^{1}$ , we have that estimating $v\left( \pi \right)$ is equivalent to estimating $v\left( {s}_{1}^{1}\right)$ . A similar tree MDP model has been previously used in theoretical analysis by Jiang and Li [2016]; our model is more general as we consider per-step stochastic rewards whereas Jiang and Li [2016] only consider deterministic rewards at the end of trajectories.
102
+
103
+ § 4.1 ORACLE DATA COLLECTION
104
+
105
+ We first consider an oracle data collection strategy which knows the variance of all reward distributions and knows the state transition probabilities. After observing $n$ state-action-reward tuples, the oracle computes the following estimate of ${v}^{\pi }\left( {s}_{1}^{1}\right)$ :
106
+
107
+ $$
108
+ {Y}_{n}\left( {s}_{1}^{1}\right) \mathrel{\text{ := }} \mathop{\sum }\limits_{{a = 1}}^{A}\pi \left( {a \mid {s}_{1}^{1}}\right) \left( {\frac{1}{{T}_{n}\left( {{s}_{1}^{1},a}\right) }\mathop{\sum }\limits_{{h = 1}}^{{{T}_{n}\left( {{s}_{1}^{1},a}\right) }}{R}_{h}\left( {{s}_{1}^{1},a}\right) + \gamma \mathop{\sum }\limits_{{s}_{j}^{2}}P\left( {{s}_{j}^{2} \mid {s}_{1}^{1},a}\right) {Y}_{n}\left( {s}_{j}^{2}\right) }\right)
109
+ $$
114
+
115
+ $$
116
+ = \mathop{\sum }\limits_{{a = 1}}^{A}\pi \left( {a \mid {s}_{1}^{1}}\right) \left( {\widehat{\mu }\left( {{s}_{1}^{1},a}\right) + \gamma \mathop{\sum }\limits_{{s}_{j}^{2}}P\left( {{s}_{j}^{2} \mid {s}_{1}^{1},a}\right) {Y}_{n}\left( {s}_{j}^{2}\right) }\right) \tag{2}
117
+ $$
118
+
119
+ where ${T}_{n}\left( {s,a}\right)$ denotes the number of times that the oracle took action $a$ in state $s$ . Note that (2) differs from the estimator defined in Section 2.2 as it uses the true transition probabilities, $P$ , instead of their empirical estimate, $\widehat{P}$ . The MSE of ${Y}_{n}$ is:
120
+
121
+ $$
122
+ {\mathbb{E}}_{\mathcal{D}}\left\lbrack {\left( {Y}_{n}\left( {s}_{1}^{1}\right) - {v}^{\pi }\left( {s}_{1}^{1}\right) \right) }^{2}\right\rbrack
123
+ $$
124
+
125
+ $$
126
+ = \operatorname{Var}\left( {{Y}_{n}\left( {s}_{1}^{1}\right) }\right) + {\operatorname{bias}}^{2}\left( {{Y}_{n}\left( {s}_{1}^{1}\right) }\right) . \tag{3}
127
+ $$
128
+
129
+ The bias of this estimator becomes zero once all $\left( {s,a}\right)$ -pairs with $\pi \left( {a \mid s}\right) > 0$ have been visited at least once, so we focus on reducing $\operatorname{Var}\left( {{Y}_{n}\left( {s}_{1}^{1}\right) }\right)$ . Before defining the oracle data collection strategy, we first state an assumption on $\mathcal{D}$ .
130
+
131
+ Assumption 1. The data $\mathcal{D}$ collected over $n$ state-action-reward samples has at least one observation of each state-action pair, $\left( {s,a}\right)$ , for which $\pi \left( {a \mid s}\right) > 0$ .
132
+
133
+ Assumption 1 ensures that ${Y}_{n}$ is an unbiased estimator of $v\left( \pi \right)$ so that reducing MSE is equivalent to reducing variance. Before stating our main result, we provide intuition with a lemma that gives the optimal proportion for each action in a 2-depth tree.
134
+
135
+ Lemma 1. Let $\mathbf{T}$ be a 2-depth stochastic tree MDP as defined in Definition 1 (see Figure 3 in Appendix B). Let ${Y}_{n}\left( {s}_{1}^{1}\right)$ be the estimated return of the starting state ${s}_{1}^{1}$ after observing $n$ state-action-reward samples. Note that ${v}^{\pi }\left( {s}_{1}^{1}\right)$ is the expectation of ${Y}_{n}\left( {s}_{1}^{1}\right)$ under Assumption 1. Let $\mathcal{D}$ be the observed data over $n$ state-action-reward samples. The minimum MSE, ${\mathbb{E}}_{\mathcal{D}}\left\lbrack {\left( {Y}_{n}\left( {s}_{1}^{1}\right) - {v}^{\pi }\left( {s}_{1}^{1}\right) \right) }^{2}\right\rbrack$ , is obtained by taking actions in each state in the following proportions:
136
+
137
+ $$
138
+ {b}^{ * }\left( {a \mid {s}_{j}^{2}}\right) \propto \pi \left( {a \mid {s}_{j}^{2}}\right) \sigma \left( {{s}_{j}^{2},a}\right)
139
+ $$
140
+
141
+ $$
142
+ {b}^{ * }\left( {a \mid {s}_{1}^{1}}\right) \propto \sqrt{{\pi }^{2}\left( {a \mid {s}_{1}^{1}}\right) \left\lbrack {{\sigma }^{2}\left( {{s}_{1}^{1},a}\right) + {\gamma }^{2}\mathop{\sum }\limits_{{s}_{j}^{2}}P\left( {{s}_{j}^{2} \mid {s}_{1}^{1},a}\right) {B}^{2}\left( {s}_{j}^{2}\right) }\right\rbrack },
143
+ $$
144
+
145
+ where $B\left( {s}_{j}^{2}\right) = \mathop{\sum }\limits_{a}\pi \left( {a \mid {s}_{j}^{2}}\right) \sigma \left( {{s}_{j}^{2},a}\right)$ .
146
+
147
+ Proof (Overview): We decompose the MSE into its variance and bias terms and show that ${Y}_{n}$ is unbiased under Assumption 1. Next, note that the reward in the next state is conditionally independent of the reward in the current state given the current state and action. Hence we can write the variance in terms of the variance of the estimate in the initial state and the variance of the estimate in the final layer. We then rewrite the total samples of a state-action pair, i.e., ${T}_{n}\left( {{s}_{i}^{\ell },a}\right)$ , in terms of the proportion of the number of times the action was sampled in the state, i.e., $b\left( {a \mid {s}_{i}^{\ell }}\right)$ . To do so, we take into account the tree structure to derive the expected proportion of times that action $a$ is taken in each state in layer 2 as follows:
148
+
149
+ $$
150
+ b\left( {a \mid {s}_{i}^{2}}\right) = \frac{{T}_{n}\left( {{s}_{i}^{2},a}\right) }{\mathop{\sum }\limits_{{a}^{\prime }}{T}_{n}\left( {{s}_{i}^{2},{a}^{\prime }}\right) }\overset{\left( a\right) }{ = }\frac{{T}_{n}\left( {{s}_{i}^{2},a}\right) /n}{P\left( {{s}_{i}^{2} \mid {s}_{1}^{1},a}\right) {T}_{n}\left( {{s}_{1}^{1},a}\right) /n}
151
+ $$
152
+
153
+ where in (a) the action $a$ is used to transition to state ${s}_{i}^{2}$ from ${s}_{1}^{1}$ and so $\mathop{\sum }\limits_{a}{T}_{n}\left( {{s}_{i}^{2},a}\right) = P\left( {{s}_{i}^{2} \mid {s}_{1}^{1},a}\right) {T}_{n}\left( {{s}_{1}^{1},a}\right)$ . We next substitute the $b\left( {a \mid {s}_{i}^{\ell }}\right)$ for each state-action pair into the variance expression and determine the $b$ values that minimize the expression subject to $\forall s,\mathop{\sum }\limits_{a}b\left( {a \mid s}\right) = 1$ and $\forall s,b\left( {a \mid s}\right) > 0$ . The full proof is given in Appendix B.
154
+
155
+ Note that the optimal proportion in the leaf states, ${b}^{ * }\left( {a \mid {s}_{j}^{2}}\right)$ , is the same as in Carpentier and Munos [2011] (see Proposition 1), as terminal states can be treated as bandits in which actions do not affect subsequent states. The key difference is in the root state, ${s}_{1}^{1}$ , where the optimal action proportion, ${b}^{ * }\left( {a \mid {s}_{1}^{1}}\right)$ , depends on the expected leaf state normalization factor $B\left( {s}_{j}^{2}\right)$ where ${s}_{j}^{2}$ is a state reachable following $P\left( {{s}_{j}^{2} \mid {s}_{1}^{1},a}\right)$ . The normalization factor, $B\left( {s}_{j}^{2}\right)$ , captures the total contribution of state ${s}_{j}^{2}$ to the variance of ${Y}_{n}$ and thus actions in the root state must be chosen to 1) reduce variance in the immediate reward estimate and 2) reach states that contribute more to the variance of the estimate. We explore the implications of the oracle action proportions in Lemma 1 with the following two examples.
156
+
157
+ Example 1. (Child Variance matters) Consider a 2-depth, 2-action tree MDP T with deterministic $P$ , i.e., $P\left( {{s}_{2}^{2} \mid {s}_{1}^{1},2}\right) = P\left( {{s}_{1}^{2} \mid {s}_{1}^{1},1}\right) = 1$ and $\gamma = 1$ (see Figure 4 (Left) in Appendix C). However, the rewards are stochastic in our setting. Suppose the target policy is uniform in all states so that $\forall \left( {s,a}\right) ,\pi \left( {a \mid s}\right) = \frac{1}{2}$ . The reward distribution variances are given by ${\sigma }^{2}\left( {{s}_{1}^{1},1}\right) = {400}$ , ${\sigma }^{2}\left( {{s}_{1}^{1},2}\right) = {600},{\sigma }^{2}\left( {{s}_{1}^{2},1}\right) = {400},{\sigma }^{2}\left( {{s}_{1}^{2},2}\right) = {400}$ , ${\sigma }^{2}\left( {{s}_{2}^{2},1}\right) = 4$ , and ${\sigma }^{2}\left( {{s}_{2}^{2},2}\right) = 4$ . So the sub-tree reached by action 1 (through ${s}_{1}^{2}$ ) has higher variance (larger $B$ -value) than the sub-tree reached by action 2. Following the sampling rule in Lemma 1 we can show that ${b}^{ * }\left( {1 \mid {s}_{1}^{1}}\right) > {b}^{ * }\left( {2 \mid {s}_{1}^{1}}\right)$ (the full calculation is given in Appendix C). Hence the higher-variance sub-tree receives a higher proportion of pulls, which allows the oracle to reach the high-variance state ${s}_{1}^{2}$ more often. Observe that treating ${s}_{1}^{1}$ as a bandit leads to choosing action 2 more often as ${\sigma }^{2}\left( {{s}_{1}^{1},2}\right) > {\sigma }^{2}\left( {{s}_{1}^{1},1}\right)$ . However, taking action 2 leads to state ${s}_{2}^{2}$ which contributes much less to the total variance. Thus, this example highlights the need to consider the variance of subsequent states.
158
+
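The claim in Example 1 can be checked numerically. The sketch below uses the variances from the example (the labels `s11`, `s12`, `s22` are our shorthand for ${s}_{1}^{1},{s}_{1}^{2},{s}_{2}^{2}$ ; the calculation mirrors Appendix C, which is not reproduced here):

```python
import math

# Variances from Example 1 (gamma = 1, uniform target policy pi(a|s) = 1/2).
pi = 0.5
var = {('s11', 1): 400.0, ('s11', 2): 600.0,   # root s_1^1
       ('s12', 1): 400.0, ('s12', 2): 400.0,   # child via action 1
       ('s22', 1): 4.0,   ('s22', 2): 4.0}     # child via action 2

# Leaf normalization factors B(s) = sum_a pi(a|s) * sigma(s, a).
B_s12 = pi * math.sqrt(var[('s12', 1)]) + pi * math.sqrt(var[('s12', 2)])  # = 20.0
B_s22 = pi * math.sqrt(var[('s22', 1)]) + pi * math.sqrt(var[('s22', 2)])  # = 2.0

# Unnormalized root proportions from Lemma 1 (transitions are deterministic).
b1 = math.sqrt(pi**2 * (var[('s11', 1)] + B_s12**2))  # action 1: sqrt(0.25 * 800)
b2 = math.sqrt(pi**2 * (var[('s11', 2)] + B_s22**2))  # action 2: sqrt(0.25 * 604)
```

This gives $b_1 = \sqrt{200} \approx 14.1 > b_2 = \sqrt{151} \approx 12.3$ , so the oracle favors action 1 even though ${\sigma }^{2}\left( {{s}_{1}^{1},2}\right) > {\sigma }^{2}\left( {{s}_{1}^{1},1}\right)$ .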
159
+ Example 2. (Transition Model matters) Consider a 2-depth, 2-action tree MDP T in which we have $P\left( {{s}_{1}^{2} \mid {s}_{1}^{1},1}\right) = p,P\left( {{s}_{2}^{2} \mid {s}_{1}^{1},1}\right) = 1 - p,P\left( {{s}_{3}^{2} \mid {s}_{1}^{1},2}\right) = p,$ and $P\left( {{s}_{4}^{2} \mid {s}_{1}^{1},2}\right) = 1 - p$ . This example is shown in Figure 4 (Right) in Appendix C. Following the result of Lemma 1, if $p \gg \left( {1 - p}\right)$ it can be shown that the variances of the states ${s}_{1}^{2}$ and ${s}_{3}^{2}$ have greater importance in calculating the optimal sampling proportions of ${s}_{1}^{1}$ . The calculation is shown in Appendix D. Thus, less likely future states have less importance for computing the optimal sampling proportion in a given state.
160
+
161
+ Having developed intuition for minimal-variance action selection in a 2-depth tree MDP, we now give our main result that extends Lemma 1 to an $L$ -depth tree.
162
+
163
+ Theorem 1. Let $\mathbf{T}$ be an L-depth stochastic tree MDP as defined in Definition 1. Let the estimated return of the starting state ${s}_{1}^{1}$ after $n$ state-action-reward samples be defined as ${Y}_{n}\left( {s}_{1}^{1}\right)$ . Note that ${v}^{\pi }\left( {s}_{1}^{1}\right)$ is the expectation of ${Y}_{n}\left( {s}_{1}^{1}\right)$ under Assumption 1. Let $\mathcal{D}$ be the observed data over $n$ state-action-reward samples. To minimize the MSE ${\mathbb{E}}_{\mathcal{D}}\left\lbrack {\left( {Y}_{n}\left( {s}_{1}^{1}\right) - {v}^{\pi }\left( {s}_{1}^{1}\right) \right) }^{2}\right\rbrack$ , the optimal sampling proportion for an arbitrary state is given by:
164
+
165
+ $$
166
+ {b}^{ * }\left( {a \mid {s}_{i}^{\ell }}\right) \propto \sqrt{{\pi }^{2}\left( {a \mid {s}_{i}^{\ell }}\right) \left\lbrack {{\sigma }^{2}\left( {{s}_{i}^{\ell },a}\right) + {\gamma }^{2}\mathop{\sum }\limits_{{s}_{j}^{\ell + 1}}P\left( {{s}_{j}^{\ell + 1} \mid {s}_{i}^{\ell },a}\right) {B}^{2}\left( {s}_{j}^{\ell + 1}\right) }\right\rbrack },
167
+ $$
168
+
169
+ where $B\left( {s}_{i}^{\ell }\right)$ is the normalization factor defined recursively as follows:
170
+
171
+ $$
172
+ B\left( {s}_{i}^{\ell }\right) = \mathop{\sum }\limits_{a}\sqrt{{\pi }^{2}\left( {a \mid {s}_{i}^{\ell }}\right) \left( {{\sigma }^{2}\left( {{s}_{i}^{\ell },a}\right) + {\gamma }^{2}\mathop{\sum }\limits_{{s}_{j}^{\ell + 1}}P\left( {{s}_{j}^{\ell + 1} \mid {s}_{i}^{\ell },a}\right) {B}^{2}\left( {s}_{j}^{\ell + 1}\right) }\right) } \tag{4}
173
+ $$
176
+
177
+ Proof (Overview): We prove Theorem 1 by induction. Lemma 1 proves the base case of estimating the sampling proportions for levels $L - 1$ and $L$ . For the induction step, we assume the sampling proportions from level $L$ down to level $\ell + 1$ have been built up by dynamic programming starting from level $L$ : for states in these levels we can compute ${b}^{ * }\left( {a \mid {s}_{i}^{\ell + 1}}\right)$ by repeatedly applying Lemma 1. We then show that at level $\ell$ we obtain the recursive sampling proportion stated in the theorem. The proof is given in Appendix E.
178
+
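The recursion in Theorem 1 lends itself to a simple leaves-up (dynamic-programming) computation. The sketch below is our own illustration, not the authors' code; the dictionary-based tree encoding is a hypothetical convention:

```python
import math

def optimal_proportions(tree, pi, var, P, root, gamma=1.0):
    """B(s) and the oracle proportions b*(.|s) for a tree MDP, computed
    leaves-up as in the recursion of Theorem 1.

    tree[s][a]   : list of child states reachable via action a ([] at a leaf)
    pi[s][a]     : target-policy probability of action a in state s
    var[s][a]    : reward variance sigma^2(s, a)
    P[(s, a, c)] : transition probability to child c
    """
    B, b = {}, {}

    def B_of(s):
        if s in B:
            return B[s]
        weights = {}
        for a, children in tree[s].items():
            future = sum(P[(s, a, c)] * B_of(c) ** 2 for c in children)
            weights[a] = math.sqrt(pi[s][a] ** 2 * (var[s][a] + gamma ** 2 * future))
        B[s] = sum(weights.values())
        b[s] = {a: w / B[s] for a, w in weights.items()}
        return B[s]

    B_of(root)
    return B, b
```

At a leaf the expression reduces to ${b}^{ * }\left( {a \mid s}\right) \propto \pi \left( {a \mid s}\right) \sigma \left( {s,a}\right)$ , recovering the bandit allocation of Section 3.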
179
+ § 4.2 MSE OF THE ORACLE
180
+
181
+ In this subsection, we derive the MSE that the oracle incurs when matching the action proportions given by Theorem 1. The oracle is run for $K$ episodes, where each episode consists of a length- $L$ trajectory of visited state-action pairs, so the total budget is $n = {KL}$ . The MSE of the oracle at the end of the $K$ -th episode is given in Proposition 2. Before stating the proposition we introduce additional notation which we will use throughout the remainder of the paper. Let
182
+
183
+ $$
184
+ {T}_{t}^{k}\left( {s,a}\right) = \mathop{\sum }\limits_{{i = 0}}^{{k - 1}}\mathbb{I}\left\{ {\left( {{s}_{t}^{i},{a}_{t}^{i}}\right) = \left( {s,a}\right) }\right\} ,\forall t,s,a \tag{5}
185
+ $$
186
+
187
+ denote the total number of times that $\left( {s,a}\right)$ has been observed in $\mathcal{D}$ (across all trajectories) up to time $t$ in episode $k$ , where $\mathbb{I}\{ \cdot \}$ is the indicator function. Similarly, let
188
+
189
+ $$
190
+ {T}_{t}^{k}\left( {s,a,{s}^{\prime }}\right) = \mathop{\sum }\limits_{{i = 0}}^{{k - 1}}\mathbb{I}\left\{ {\left( {{s}_{t}^{i},{a}_{t}^{i},{s}_{t + 1}^{i}}\right) = \left( {s,a,{s}^{\prime }}\right) }\right\} ,\forall t,s,a,{s}^{\prime } \tag{6}
191
+ $$
192
+
193
+ denote the number of times action $a$ is taken in $s$ to transition to ${s}^{\prime }$ . Finally, we define the state count ${T}_{t}^{k}\left( s\right) = \mathop{\sum }\limits_{a}{T}_{t}^{k}\left( {s,a}\right)$ as the total number of times state $s$ is visited and an action is taken in it.
194
+
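For illustration, these counts can be accumulated from logged trajectories as follows. This is a sketch with our own names, aggregating over time steps and episodes rather than keeping per- $t$ counts:

```python
from collections import Counter

def visit_counts(trajectories):
    """Empirical counts T(s, a) and T(s, a, s') from logged trajectories.

    Each trajectory is a list of (state, action, reward) tuples; counts are
    aggregated over all time steps and episodes.
    """
    T_sa, T_sas = Counter(), Counter()
    for traj in trajectories:
        for t, (s, a, _r) in enumerate(traj):
            T_sa[(s, a)] += 1
            if t + 1 < len(traj):
                T_sas[(s, a, traj[t + 1][0])] += 1   # next state visited
    return T_sa, T_sas
```

The empirical transition model used later is then the ratio $\widehat{P}\left( {{s}^{\prime } \mid s,a}\right) = T\left( {s,a,{s}^{\prime }}\right) /T\left( {s,a}\right)$ .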
195
+ Proposition 2. Let there be an oracle which knows the state-action variances and transition probabilities of the L-depth tree MDP T. Let the oracle take actions in the proportions given by Theorem 1. Let $\mathcal{D}$ be the observed data over $n$ state-action-reward samples such that $n = {KL}$ . Then the oracle suffers an MSE of
196
+
197
+ $$
198
+ {\mathcal{L}}_{n}^{ * } = \mathop{\sum }\limits_{{\ell = 1}}^{L}\left\lbrack \frac{{B}^{2}\left( {s}_{i}^{\ell }\right) }{{T}_{L}^{*,K}\left( {s}_{i}^{\ell }\right) }\right.
199
+ $$
200
+
201
+ $$
202
+ \left. {+{\gamma }^{2}\mathop{\sum }\limits_{a}{\pi }^{2}\left( {a \mid {s}_{i}^{\ell }}\right) \mathop{\sum }\limits_{{s}_{j}^{\ell + 1}}P\left( {{s}_{j}^{\ell + 1} \mid {s}_{i}^{\ell },a}\right) \frac{{B}^{2}\left( {s}_{j}^{\ell + 1}\right) }{{T}_{L}^{*,K}\left( {s}_{j}^{\ell + 1}\right) }}\right\rbrack \text{ . } \tag{7}
203
+ $$
204
+
205
+ where ${T}_{L}^{*,K}\left( {s}_{i}^{\ell }\right)$ denotes the optimal number of samples of state ${s}_{i}^{\ell }$ under the oracle at the end of episode $K$ .
206
+
207
+ The proof is given in Appendix F. From Proposition 2 we see that the MSE of the oracle goes to 0 as the number of episodes $K \rightarrow \infty$ , and ${T}_{L}^{*,K}\left( {s}_{i}^{\ell }\right) \rightarrow \infty$ simultaneously for all ${s}_{i}^{\ell } \in \mathcal{S}$ .
§ 4.3 REDUCED VARIANCE SAMPLING

The oracle data collection strategy provides intuition for optimal data collection for minimal-variance policy evaluation; however, it is not a practical strategy itself, as it requires $\sigma$ and $P$ to be known. We now introduce a practical data collection algorithm - Reduced Variance Sampling (ReVar) - that is agnostic to $\sigma$ and $P$. Our algorithm follows the proportions given by Theorem 1, with the true reward variances replaced by an upper confidence bound and the true transition probabilities replaced by empirical frequencies. Formally, we define the desired proportion for action $a$ in state ${s}_{i}^{\ell }$ after $t$ steps as

$$
{\widehat{b}}_{t + 1}^{k}\left( {a \mid {s}_{i}^{\ell }}\right) \propto \sqrt{{\pi }^{2}\left( {a \mid {s}_{i}^{\ell }}\right) \left\lbrack {{\widehat{{\sigma }^{u}}}_{t}^{\left( 2\right) ,k}\left( {{s}_{i}^{\ell },a}\right) + {\gamma }^{2}\mathop{\sum }\limits_{{s}_{j}^{\ell + 1}}{\widehat{P}}_{t}^{k}\left( {{s}_{j}^{\ell + 1} \mid {s}_{i}^{\ell },a}\right) {\widehat{B}}_{t}^{\left( 2\right) ,k}\left( {s}_{j}^{\ell + 1}\right) }\right\rbrack }. \tag{8}
$$
The upper confidence bound on the variance ${\sigma }^{2}\left( {{s}_{i}^{\ell },a}\right)$, denoted by ${\widehat{{\sigma }^{u}}}_{t}^{\left( 2\right) ,k}\left( {{s}_{i}^{\ell },a}\right) = {\left( {\widehat{{\sigma }^{u}}}_{t}^{k}\left( {s}_{i}^{\ell },a\right) \right) }^{2}$, is defined as:

$$
{\widehat{{\sigma }^{u}}}_{t}^{k}\left( {{s}_{i}^{\ell },a}\right) \mathrel{\text{ := }} {\widehat{\sigma }}_{t}^{k}\left( {{s}_{i}^{\ell },a}\right) + {2c}\sqrt{\frac{\log \left( {{SAn}\left( {n + 1}\right) /\delta }\right) }{{T}_{t}^{k}\left( {{s}_{i}^{\ell },a}\right) }}, \tag{9}
$$

where ${\widehat{\sigma }}_{t}^{k}\left( {{s}_{i}^{\ell },a}\right)$ is the plug-in estimate of the standard deviation $\sigma \left( {{s}_{i}^{\ell },a}\right)$, $c > 0$ is a constant depending on the boundedness of the rewards (made explicit later), and $n = {KL}$ is the total budget of samples. Using an upper confidence bound on the reward standard deviations captures our uncertainty about $\sigma \left( {{s}_{i}^{\ell },a}\right)$, which is needed to compute the true optimal proportions. The state transition model is estimated as:
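A minimal sketch of the bound in (9), with illustrative argument names (`sigma_hat`, `T_sa`) standing in for $\widehat{\sigma}_t^k(s_i^\ell,a)$ and $T_t^k(s_i^\ell,a)$:

```python
import math

def sigma_ucb(sigma_hat, T_sa, S, A, n, c=1.0, delta=0.05):
    """Upper confidence bound on the reward standard deviation, eq. (9):
    the plug-in estimate plus an exploration bonus that shrinks as the
    state-action count T(s, a) grows. Argument names are illustrative."""
    bonus = 2.0 * c * math.sqrt(math.log(S * A * n * (n + 1) / delta) / T_sa)
    return sigma_hat + bonus

# The bonus decreases with more samples of (s, a):
u_few = sigma_ucb(sigma_hat=0.5, T_sa=2, S=15, A=2, n=400)
u_many = sigma_ucb(sigma_hat=0.5, T_sa=200, S=15, A=2, n=400)
# u_many < u_few, and both exceed the plug-in estimate 0.5.
```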
$$
{\widehat{P}}_{t}^{k}\left( {{s}_{j}^{\ell + 1} \mid {s}_{i}^{\ell },a}\right) = \frac{{T}_{t}^{k}\left( {{s}_{i}^{\ell },a,{s}_{j}^{\ell + 1}}\right) }{{T}_{t}^{k}\left( {{s}_{i}^{\ell },a}\right) }, \tag{10}
$$

where ${T}_{t}^{k}\left( {{s}_{i}^{\ell },a,{s}_{j}^{\ell + 1}}\right)$ is defined in (6). Further, in (8), ${\widehat{B}}_{t}^{k}\left( {s}_{j}^{\ell + 1}\right)$ is the plug-in estimate of $B\left( {s}_{j}^{\ell + 1}\right)$. Observe that all of these plug-in estimates use the entire history up to time $t$ in episode $k$.
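The proportion computation in (8) can be sketched for a single state as follows; all names are illustrative, and normalizing the scores to a distribution is an implementation choice:

```python
import math

def revar_proportions(pi, sigma_ucb_sq, P_hat, B_hat_sq, gamma=1.0):
    """Estimated proportions b-hat(a|s) of eq. (8) for one state.
    pi: target-policy probabilities per action; sigma_ucb_sq: UCB'd
    reward variances per action; P_hat: empirical next-state distribution
    per action (dict s' -> prob); B_hat_sq: plug-in estimates of B^2(s').
    Names are illustrative, not the paper's code."""
    scores = []
    for a in range(len(pi)):
        future = sum(P_hat[a][sp] * B_hat_sq[sp] for sp in P_hat[a])
        scores.append(math.sqrt(pi[a] ** 2 * (sigma_ucb_sq[a] + gamma ** 2 * future)))
    z = sum(scores)
    return [s / z for s in scores]  # normalized so proportions sum to 1

b = revar_proportions(
    pi=[0.95, 0.05],
    sigma_ucb_sq=[0.01, 20.0],
    P_hat=[{"s2": 1.0}, {"s3": 1.0}],
    B_hat_sq={"s2": 0.5, "s3": 0.5},
)
# The low-probability, high-variance action receives a sizeable share.
```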
Eq. (8) allows us to estimate the optimal proportion for all actions in any state. To match these proportions, rather than sampling from ${\widehat{b}}_{t + 1}^{k}\left( {a \mid {s}_{i}^{\ell }}\right)$, ReVar takes action ${I}_{t + 1}^{k}$ at time $t + 1$ in episode $k$ according to:

$$
{I}_{t + 1}^{k} = \underset{a}{\arg \max }\left\{ \frac{{\widehat{b}}_{t}^{k}\left( {a \mid {s}_{i}^{\ell }}\right) }{{T}_{t}^{k}\left( {{s}_{i}^{\ell },a}\right) }\right\} . \tag{11}
$$

This action selection rule keeps the empirical counts ${T}_{t}^{k}\left( {{s}_{i}^{\ell },a}\right)$ tracking the estimated optimal proportions ${\widehat{b}}_{t}^{k}\left( {a \mid {s}_{i}^{\ell }}\right)$. It is a deterministic rule and thus avoids the variance incurred by simply sampling from the estimated optimal proportions. Note that in the terminal states ${s}_{i}^{L}$ the sampling rule becomes
$$
{I}_{t + 1}^{k} = \underset{a}{\arg \max }\left\{ \frac{\pi \left( {a \mid {s}_{i}^{L}}\right) {\widehat{{\sigma }^{u}}}_{t}^{k}\left( {{s}_{i}^{L},a}\right) }{{T}_{t}^{k}\left( {{s}_{i}^{L},a}\right) }\right\} ,
$$

which matches the bandit sampling rule of Carpentier and Munos [2011, 2012].
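The selection rule (11) reduces to a single argmax over per-action ratios; the guard against zero counts below is an implementation choice, not part of the paper:

```python
def select_action(b_hat, T_sa):
    """Deterministic action selection of eq. (11): pick the action whose
    realized count T(s, a) lags its target proportion b-hat(a|s) the most.
    b_hat and T_sa are per-action lists; counts are floored at 1 to avoid
    division by zero (an implementation choice)."""
    ratios = [b_hat[a] / max(T_sa[a], 1) for a in range(len(b_hat))]
    return max(range(len(b_hat)), key=lambda a: ratios[a])

# With target proportions (0.75, 0.25) and counts (9, 1), action 1 is
# under-sampled relative to its target, so it is chosen next.
assert select_action([0.75, 0.25], [9, 1]) == 1
```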
We give pseudocode for ReVar in Algorithm 1. The algorithm proceeds in episodes. In each episode we generate a trajectory from the starting state ${s}_{1}^{1}$ (root) to one of the terminal states ${s}_{j}^{L}$ (leaves). At episode $k$ and time-step $t$ in some state ${s}_{i}^{\ell }$, the next action ${I}_{t + 1}$ is chosen based on (11). The generated trajectory is added to the dataset $\mathcal{D}$. At the end of the episode we update the model parameters, i.e., we estimate ${\widehat{\sigma }}_{t}^{k}\left( {{s}_{i}^{\ell },a}\right)$ and ${\widehat{P}}_{t}^{k}\left( {{s}_{j}^{\ell + 1} \mid {s}_{i}^{\ell },a}\right)$ for each state-action pair. Finally, we update ${\widehat{b}}_{1}^{k + 1}\left( {a \mid {s}_{i}^{\ell }}\right)$ for the next episode using eq. (8).
Algorithm 1 Reduced Variance Sampling (ReVar)

Input: Number of trajectories to collect, $K$.
Output: Dataset $\mathcal{D}$.
Initialize $\mathcal{D} = \varnothing$ and ${\widehat{b}}_{1}^{0}\left( {a \mid {s}_{i}^{\ell }}\right)$ uniform over all actions in each state.
for $k \in 0,1,\ldots ,K$ do
  Generate trajectory ${H}^{k} \mathrel{\text{ := }} {\left\{ {S}_{t},{I}_{t},R\left( {I}_{t}\right) \right\} }_{t = 1}^{L}$ by selecting ${I}_{t}$ according to (11).
  $\mathcal{D} \leftarrow \mathcal{D} \cup \left\{ \left( {{H}^{k},{\widehat{b}}^{k}}\right) \right\}$
  Update the model parameter estimates for each $\left( {{s}_{i}^{\ell },a}\right)$.
  Update ${\widehat{b}}_{1}^{k + 1}\left( {a \mid {s}_{i}^{\ell }}\right)$ from level $L$ to 1 following (8).
Return Dataset $\mathcal{D}$ to evaluate policy $\pi$.
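Algorithm 1 can be sketched as the following episode loop, with hypothetical callables (`reset`, `step`, `update_estimates`, `proportions`) standing in for the environment interface and the estimators of Section 4.3:

```python
def revar_collect(K, L, reset, step, update_estimates, proportions, actions):
    """Schematic episode loop of Algorithm 1 (a sketch, not the paper's
    implementation). `reset`/`step` are an episodic environment interface,
    `update_estimates` refits sigma-hat and P-hat from the data, and
    `proportions` returns the current b-hat(a|s) per state."""
    D = []      # dataset of trajectories
    T = {}      # counts T(s, a)
    b_hat = {}  # current proportions per state
    for k in range(K):
        s, traj = reset(), []
        for t in range(L):
            b = b_hat.get(s, [1.0 / len(actions)] * len(actions))  # uniform init
            counts = [max(T.get((s, a), 0), 1) for a in actions]
            a = max(actions, key=lambda a_: b[a_] / counts[a_])    # eq. (11)
            s_next, r = step(s, a)
            T[(s, a)] = T.get((s, a), 0) + 1
            traj.append((s, a, r))
            s = s_next
        D.append(traj)
        update_estimates(D)
        b_hat = proportions()  # back up the tree from level L to 1, eq. (8)
    return D
```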
§ 4.4 REGRET ANALYSIS

We now theoretically analyze ReVar by bounding its regret with respect to the oracle behavior policy. We analyze ReVar under the assumption that $P$ is known, so that we are only concerned with obtaining accurate estimates of the reward means and variances. This assumption is made only for the regret analysis and is not a fundamental requirement of ReVar. Though somewhat restrictive, the case of known state transitions is still interesting, as it arises in practice when state transitions are deterministic or when $P$ can be estimated much more easily than the reward means.

We first define the regret of an algorithm compared to the oracle MSE ${\mathcal{L}}_{n}^{ * }$ in (7) as follows:

$$
{\mathcal{R}}_{n} = {\mathcal{L}}_{n} - {\mathcal{L}}_{n}^{ * },
$$

where $n$ is the total budget and ${\mathcal{L}}_{n}$ is the MSE at the end of episode $K$ when following the sampling rule in (8). We make the following assumption that rewards are bounded:
Assumption 2. The reward from any state-action pair has bounded range, i.e., ${R}_{t}\left( {s,a}\right) \in \left\lbrack {-\eta ,\eta }\right\rbrack$ almost surely at every time-step $t$ for some fixed $\eta > 0$.

The regret of ReVar over an $L$-depth deterministic tree is then given by the following theorem.

Theorem 2. Let the total budget be $n = {KL}$ and $n \geq {4SA}$. Then the total regret in a deterministic $L$-depth $\mathbf{T}$ at the end of the $K$-th episode when taking actions according to (8) is given by
$$
{\mathcal{R}}_{n} \leq \widetilde{O}\left( \frac{{B}_{{s}_{1}^{1}}^{2}\sqrt{\log \left( {{SA}{n}^{4}}\right) }}{{n}^{3/2}{b}_{\min }^{*,3/2}\left( {s}_{1}^{1}\right) } + \gamma \mathop{\sum }\limits_{{\ell = 2}}^{L}\mathop{\max }\limits_{{{s}_{j}^{\ell },a}}\pi \left( {a \mid {s}_{1}^{1}}\right) P\left( {{s}_{j}^{\ell } \mid {s}_{1}^{1},a}\right) \frac{{B}_{{s}_{j}^{\ell }}^{2}\sqrt{\log \left( {{SA}{n}^{4}}\right) }}{{n}^{3/2}{b}_{\min }^{*,3/2}\left( {s}_{j}^{\ell }\right) }\right) ,
$$

where $\widetilde{O}$ hides other lower-order terms, ${B}_{{s}_{i}^{\ell }}$ is defined in (4), and ${b}_{\min }^{ * }\left( s\right) = \mathop{\min }\limits_{a}{b}^{ * }\left( {a \mid s}\right)$.
Note that if $L = 1$ and $\left| \mathcal{S}\right| = 1$, we recover the bandit setting and our regret bound matches the bound in Carpentier and Munos [2011]. Note also that the MSE of data generated by any policy decays at a rate no faster than $O\left( {n}^{-1}\right)$, the parametric rate. The key feature of ReVar is that it converges to the oracle policy; this means that asymptotically, the MSE based on ReVar will match that of the oracle. Theorem 2 shows that the regret scales like $O\left( {n}^{-3/2}\right)$ provided ${b}_{\min }^{ * }\left( s\right)$ is a reasonable constant $O\left( 1\right)$ over all states $s \in \mathcal{S}$. In contrast, suppose we sample trajectories from a suboptimal policy, i.e., a policy that produces an MSE worse than that of the oracle for every $n$. This MSE gap never diminishes, so the regret cannot decrease at a rate faster than $O\left( {n}^{-1}\right)$. Finally, note that the regret bound in Theorem 2 is a problem-dependent bound as it involves the parameter ${b}_{\min }^{ * }\left( s\right)$.
Proof (Overview): We decompose the proof into several steps. We define the good event ${\xi }_{\delta }$, based on the state-action-reward samples $\mathcal{D}$, that holds for all episodes $k$ and times $t$ such that $\left| {{\widehat{\sigma }}_{t}^{k}\left( {s,a}\right) - \sigma \left( {s,a}\right) }\right| \leq \epsilon$ for some $\epsilon > 0$ with probability $1 - \delta$, made explicit in Corollary 1. Now observe that the MSE of ReVar is

$$
{\mathcal{L}}_{n} = {\mathbb{E}}_{\mathcal{D}}\left\lbrack {\left( {Y}_{n}\left( {s}_{1}^{1}\right) - {v}^{\pi }\left( {s}_{1}^{1}\right) \right) }^{2}\right\rbrack
$$

$$
= {\mathbb{E}}_{\mathcal{D}}\left\lbrack {\left( {Y}_{n}\left( {s}_{1}^{1}\right) - {v}^{\pi }\left( {s}_{1}^{1}\right) \right) }^{2}\mathbb{I}\left\{ {\xi }_{\delta }\right\} \right\rbrack + {\mathbb{E}}_{\mathcal{D}}\left\lbrack {\left( {Y}_{n}\left( {s}_{1}^{1}\right) - {v}^{\pi }\left( {s}_{1}^{1}\right) \right) }^{2}\mathbb{I}\left\{ {\xi }_{\delta }^{C}\right\} \right\rbrack . \tag{12}
$$

Note that here we are considering a known transition function $P$. The first term in (12) can be bounded using
$$
{\mathbb{E}}_{\mathcal{D}}\left\lbrack {\left( {Y}_{n}\left( {s}_{1}^{1}\right) - {v}^{\pi }\left( {s}_{1}^{1}\right) \right) }^{2}\mathbb{I}\left\{ {\xi }_{\delta }\right\} \right\rbrack = \operatorname{Var}\left\lbrack {{Y}_{n}\left( {s}_{1}^{1}\right) }\right\rbrack \mathbb{E}\left\lbrack {{T}_{n}^{k}\left( {s}_{1}^{1}\right) }\right\rbrack
$$

$$
\leq \mathop{\sum }\limits_{a}{\pi }^{2}\left( {a \mid {s}_{1}^{1}}\right) \left\lbrack \frac{{\sigma }^{2}\left( {{s}_{1}^{1},a}\right) }{{\underline{T}}_{n}^{\left( 2\right) ,k}\left( {{s}_{1}^{1},a}\right) }\right\rbrack \mathbb{E}\left\lbrack {{T}_{n}^{k}\left( {{s}_{1}^{1},a}\right) }\right\rbrack + {\gamma }^{2}\mathop{\sum }\limits_{a}{\pi }^{2}\left( {a \mid {s}_{1}^{1}}\right) \mathop{\sum }\limits_{{s}_{j}^{2}}{P}^{2}\left( {{s}_{j}^{2} \mid {s}_{1}^{1},a}\right) \mathop{\sum }\limits_{{a}^{\prime }}{\pi }^{2}\left( {{a}^{\prime } \mid {s}_{j}^{2}}\right) \left\lbrack \frac{{\sigma }^{2}\left( {{s}_{j}^{2},{a}^{\prime }}\right) }{{\underline{T}}_{n}^{\left( 2\right) ,k}\left( {{s}_{j}^{2},{a}^{\prime }}\right) }\right\rbrack \mathbb{E}\left\lbrack {{T}_{n}^{k}\left( {{s}_{j}^{2},{a}^{\prime }}\right) }\right\rbrack ,
$$

where ${\underline{T}}^{\left( 2\right) ,k}\left( {{s}_{1}^{1},a}\right)$ is a lower bound on ${T}^{\left( 2\right) ,k}\left( {{s}_{1}^{1},a}\right)$ made explicit in Lemma 6, and ${\underline{T}}^{\left( 2\right) ,k}\left( {{s}_{j}^{2},a}\right)$ is a lower bound on ${T}^{\left( 2\right) ,k}\left( {{s}_{j}^{2},a}\right)$ made explicit in Lemma 5. We can combine these two lower bounds to give an upper bound on the MSE in a two-depth $\mathbf{T}$, which is shown in Lemma 7. Finally, for the $L$-depth stochastic tree we repeatedly apply Lemma 7 to bound the first term. For the second term we set $\delta = {n}^{-2}$ and use the boundedness of rewards in Assumption 2 to get the final bound. The full proof is given in Appendix A.
§ 5 OPTIMAL DATA COLLECTION BEYOND TREES

The tree-MDP model considered above allows us to develop a foundation for minimal-variance data collection in decision problems where actions at one state affect subsequent states. One limitation of this model is that, for any non-initial state ${s}_{i}^{\ell }$, there is only a single state-action path that could have been taken to reach it. In a more general finite-horizon MDP, there could be many different paths to the same non-initial state. Unfortunately, the existence of multiple paths to a state introduces cyclical dependencies between states that complicate the derivation of the minimal-variance data collection strategy and the regret analysis. In this section, we elucidate this difficulty by considering the class of directed acyclic graph (DAG) MDPs.

We first define a DAG MDP $\mathcal{G} \subset \mathbf{M}$. An illustrative figure of a 3-depth, 2-action $\mathcal{G}$ is in Figure 5 of Appendix I.
Definition 2. (DAG MDP) A DAG MDP follows the same definition as the tree MDP in Definition 1, except that $P\left( {{s}^{\prime } \mid s,a}\right)$ can be non-zero for any $s$ in layer $\ell$, ${s}^{\prime }$ in layer $\ell + 1$, and any $a$; i.e., one can now reach ${s}^{\prime }$ through multiple previous state-action pairs.

Proposition 3. Let $\mathcal{G}$ be a 3-depth, $A$-action DAG as defined in Definition 2. The minimal-MSE sampling proportions ${b}^{ * }\left( {a \mid {s}_{1}^{1}}\right) ,{b}^{ * }\left( {a \mid {s}_{j}^{2}}\right)$ depend on themselves such that $b\left( {a \mid {s}_{1}^{1}}\right) \propto f\left( {1/b\left( {a \mid {s}_{1}^{1}}\right) }\right)$ and $b\left( {a \mid {s}_{j}^{2}}\right) \propto f\left( {1/b\left( {a \mid {s}_{j}^{2}}\right) }\right)$, where $f\left( \cdot \right)$ is a function that hides other dependencies on the variances of $s$ and its children.
The proof technique follows the approach of Lemma 1 but takes into account the multiple paths leading to the same state. The possibility of multiple paths results in a cyclical dependency between the sampling proportions in levels 1 and 2. Note that in $\mathbf{T}$ there is a single path to each state, so this cyclical dependency does not arise. The full proof is given in Appendix I. Because of this cyclical dependency it is difficult to estimate the optimal sampling proportions in $\mathcal{G}$. However, we can approximate the optimal sampling proportions, ignoring the multiple-path problem in $\mathcal{G}$, by using the tree formulation in the following way: at every time $t$ during a trajectory ${\tau }^{k}$, call Algorithm 2 in Appendix J to estimate ${B}_{0}\left( s\right)$, where ${B}_{{t}^{\prime }}\left( s\right) \in {\mathbb{R}}^{L \times \left| \mathcal{S}\right| }$ stores the expected standard deviation of state $s$ at iteration ${t}^{\prime }$. After $L$ such iterations we use the value ${B}_{0}\left( s\right)$ to estimate $b\left( {a \mid s}\right)$ as follows:

$$
{b}^{ * }\left( {a \mid s}\right) \propto \sqrt{{\pi }^{2}\left( {a \mid s}\right) \left\lbrack {{\sigma }^{2}\left( {s,a}\right) + {\gamma }^{2}\mathop{\sum }\limits_{{s}^{\prime }}P\left( {{s}^{\prime } \mid s,a}\right) {B}_{0}^{2}\left( {s}^{\prime }\right) }\right\rbrack }.
$$

Note that for a terminal state $s$ the transition probability $P\left( {{s}^{\prime } \mid s,a}\right) = 0$, and then $b\left( {a \mid s}\right) \propto \pi \left( {a \mid s}\right) \sigma \left( {s,a}\right)$. This iterative procedure follows from the tree formulation in Theorem 1 and is necessary in $\mathcal{G}$ to take into account the multiple paths to a particular state. Also observe that in Algorithm 2 we use the idea of value iteration for the episodic setting [Sutton and Barto, 2018] to estimate the optimal sampling proportions iteratively.
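A plausible sketch of this iterative procedure (Algorithm 2 itself is in Appendix J and not reproduced here), assuming $B(s)$ aggregates per-action standard deviations under $\pi$; the recursion below is an illustrative assumption, not the paper's exact definition:

```python
import math

def approximate_B(states, actions, pi, sigma, P, gamma=1.0, sweeps=10):
    """Value-iteration-style backward sweeps approximating B(s), the
    expected standard deviation of the return from s, on a DAG MDP.
    P[s][a] is a dict s' -> prob; terminal states have empty P[s][a].
    Sketch of the idea behind Algorithm 2, not the paper's procedure."""
    B = {s: 0.0 for s in states}
    for _ in range(sweeps):
        B_new = {}
        for s in states:
            total = 0.0
            for a in actions:
                succ = sum(P[s][a].get(sp, 0.0) * B[sp] ** 2 for sp in P[s][a])
                total += pi[s][a] * math.sqrt(sigma[s][a] ** 2 + gamma ** 2 * succ)
            B_new[s] = total
        B = B_new
    return B

# Two-level example: s2 is terminal with unit reward std; s1 transitions
# to s2 deterministically with zero immediate reward noise.
B = approximate_B(
    states=["s1", "s2"], actions=[0, 1],
    pi={"s1": {0: 0.5, 1: 0.5}, "s2": {0: 0.5, 1: 0.5}},
    sigma={"s1": {0: 0.0, 1: 0.0}, "s2": {0: 1.0, 1: 1.0}},
    P={"s1": {0: {"s2": 1.0}, 1: {"s2": 1.0}}, "s2": {0: {}, 1: {}}},
)
```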
§ 6 EMPIRICAL STUDY

We next verify our theoretical findings with simulated policy evaluation tasks in both a tree MDP and a non-tree GridWorld domain. Our experiments are designed to answer the following questions: 1) can ReVar produce policy value estimates with MSE comparable to the oracle solution? and 2) does our novel algorithm lower MSE relative to on-policy sampling of actions? Full implementation details are given in Appendix D.
[Figure 2 plots: (Left) 4-Depth, 2-Action Balanced Tree; (Right) Stochastic Gridworld; MSE versus episodes for ReVar (Ours), CB-Var, Onpolicy, and Oracle.]

Figure 2: (Left) Deterministic 4-depth Tree. (Right) Stochastic gridworld. The vertical axis gives MSE and the horizontal axis is the number of episodes collected. Axes use a log-scale and confidence bars show one standard error.
Experiment 1 (Tree): In this setting we have a 4-depth, 2-action deterministic tree MDP $\mathbf{T}$ consisting of 15 states. Each state has a low-variance arm with ${\sigma }^{2}\left( {s,1}\right) = {0.01}$ and high target probability $\pi \left( {1 \mid s}\right) = {0.95}$, and a high-variance arm with ${\sigma }^{2}\left( {s,2}\right) = {20.0}$ and low target probability $\pi \left( {2 \mid s}\right) = {0.05}$. Hence, Onpolicy sampling, which samples according to $\pi$, will sample the second (high-variance) arm less and suffer a high MSE. The CB-Var policy is a bandit policy that uses an empirical Bernstein inequality [Maurer and Pontil, 2009] to sample an action without looking ahead, and it also suffers high MSE. The Oracle has access to the model and variances and performs the best. ReVar lowers MSE compared to Onpolicy and CB-Var and eventually matches the oracle's MSE.
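As a sanity check on why on-policy sampling struggles here, the leaf-state oracle proportion $b^{*}(a \mid s) \propto \pi(a \mid s)\,\sigma(s,a)$ from Section 5 can be evaluated with Experiment 1's arm parameters:

```python
import math

# Leaf-state oracle proportions b*(a|s) ∝ π(a|s)·σ(s,a), using
# Experiment 1's arm parameters: π = (0.95, 0.05), σ² = (0.01, 20.0).
pi = [0.95, 0.05]
sigma = [math.sqrt(0.01), math.sqrt(20.0)]
scores = [p * s for p, s in zip(pi, sigma)]
b_star = [x / sum(scores) for x in scores]
# Although π picks arm 2 only 5% of the time, the oracle devotes the
# majority of its samples to it because of its much larger variance.
```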
Experiment 2 (Gridworld): In this setting we have a $4 \times 4$ stochastic gridworld consisting of 16 grid cells. Considering the current episode time-step as part of the state, this MDP is a DAG MDP in which there are multiple paths to a single state. There is a single starting location at the top-left corner and a single terminal state at the bottom-right corner. Let $\mathbf{L},\mathbf{R},\mathbf{D},\mathbf{U}$ denote the left, right, down, and up actions in every state. In each state the right and down actions are low-variance arms with ${\sigma }^{2}\left( {s,\mathbf{R}}\right) = {\sigma }^{2}\left( {s,\mathbf{D}}\right) = {0.01}$ and high target policy probability $\pi \left( {\mathbf{R} \mid s}\right) = \pi \left( {\mathbf{D} \mid s}\right) = {0.45}$. The left and up actions are high-variance arms with low target policy probability $\pi \left( {\mathbf{L} \mid s}\right) = \pi \left( {\mathbf{U} \mid s}\right) = {0.05}$. Hence, Onpolicy, which goes right and down with high probability (to reach the terminal state), will sample the low-variance arms more and suffer a high MSE. Similar to above, CB-Var fails to look ahead when selecting actions and thus suffers from high MSE. ReVar lowers MSE compared to Onpolicy and CB-Var and actually matches and then reduces MSE compared to the Oracle. We point out that the DAG structure of the Gridworld violates the tree structure under which Oracle and ReVar were derived. Nevertheless, both methods lower MSE compared to Onpolicy.
§ 7 CONCLUSION AND FUTURE WORKS

This paper has studied the question of how to optimally take actions for minimal-variance policy evaluation of a fixed target policy. We developed a theoretical foundation for data collection in policy evaluation by deriving an oracle data collection policy for the class of finite, tree-structured MDPs. We then introduced a practical algorithm, ReVar, that approximates the oracle strategy by computing an upper confidence bound on the variance of the future cumulative reward at each state and using this bound in place of the true variances in the oracle strategy. We bound the finite-sample regret (excess MSE) of our algorithm relative to the oracle strategy. We also present an empirical study showing that ReVar decreases the MSE of policy evaluation relative to several baseline data collection strategies, including on-policy sampling. In the future, we would like to extend our derivation of optimal data collection strategies and the regret analysis of ReVar to a more general class of MDPs, in particular relaxing the tree structure and also considering infinite-horizon MDPs. Finally, real-world problems often require function approximation to deal with large state and action spaces. This setting raises new theoretical and implementation challenges for ReVar, and addressing them is an interesting direction for future work.
UAI/UAI 2022/UAI 2022 Conference/BAeO6LIjcec/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,349 @@
 
# Uncertainty-Aware Pseudo-labeling for Quantum Calculations

## Abstract

Machine learning models have recently shown promise in predicting molecular quantum chemical properties. However, the path to real-life adoption requires (1) learning under low-resource constraints and (2) out-of-distribution generalization to unseen, structurally diverse molecules. We observe that both challenges can be addressed with abundant labels, which are typically unavailable in quantum chemistry. We hypothesize that pseudo-labels on a vast array of unlabeled molecules can serve as gold-label proxies to significantly expand the labeled training dataset. The challenge in pseudo-labeling is to prevent bad pseudo-labels from biasing the model. Motivated by the entropy minimization framework, we develop a simple and effective strategy, PSEUD $\sigma$, that can assign pseudo-labels, detect bad pseudo-labels through evidential uncertainty, and prevent them from biasing the model using adaptive weighting. Empirically, PSEUD $\sigma$ improves quantum calculation accuracy in full-data, low-data, and out-of-distribution settings.
## 1 INTRODUCTION

Ab initio quantum chemistry methods attempt to solve the electronic many-body Schrödinger equation to characterize biomolecular properties and interactions at different levels of theory and numerical approximation. Despite an extensive repertoire of methods, from post-Hartree-Fock methods such as CCSD(T) (coupled cluster singles-doubles-triples) and MP2 (second-order Møller-Plesset) [Watts et al., 1992] to Density Functional Theory (DFT) [Parr and Weitao, 1989], these calculations remain numerically expensive even with recent advances in hardware capabilities. Machine learning (ML) models have shown astonishing performance in approximating these calculations at a fraction of the computational cost [von Lilienfeld and Burke, 2020]. Such speedups have the potential to accelerate the discovery of new materials and therapeutics.
Most publications on this topic have relied on the QM9 dataset, a standard benchmark for training and evaluating ML models to predict QM properties of small molecules precomputed using approximate DFT calculations. Model-centric approaches have demonstrated the great capabilities of machine learning on this dataset by showing low error on a holdout test set of unseen molecules (e.g., [Schütt et al., 2017, Klicpera et al., 2020, Liu et al., 2021]). Despite this promise, realistic adoption still faces unsolved challenges. First, previous ML models rely on a large number of labeled molecular geometries (e.g., ${100}\mathrm{\;K}$ for QM9), which are often not available for higher-fidelity levels of energy calculation such as $\mathrm{{CCSD}}\left( \mathrm{T}\right)$ or MP2; the challenge is for a QM/ML model to perform well with a small number of computed geometries. Second, previous works evaluate trained ML models on a test set in a chemical space similar to the training set (i.e., in-distribution), while the goal of deployment is to predict energies for structurally distant molecules across the diverse chemical space; the challenge is for a QM/ML model to generalize to out-of-distribution molecules. We observe that known QM/ML architectures have significantly higher errors in these difficult regimes, calling for innovative ML algorithms to tackle these challenges (Section 6).
Present work. Our work focuses on addressing the fundamental cause of the above challenges: the scarcity of computed QM labels on a diverse set of chemicals. We utilize the abundance of unlabeled molecules and develop an effective pseudo-labeling strategy suitable for QM calculations. The basic idea behind pseudo-labeling is to estimate labels for the unlabeled data and thereby expand the training dataset. Pseudo-labeling methods have already been applied successfully to improve state-of-the-art models in various machine learning domains, especially computer vision [Xie et al., 2020, Lee et al., 2013, Iscen et al., 2019]. We investigate many of these approaches (e.g., data augmentation, model noise, student training, re-initialization) and find that many have a negative effect in quantum calculations. For example, adding positional noise [Xie et al., 2020] to molecular geometries can significantly change energies and thus bias the pseudo-labels. Thus, a QM-specialized pseudo-labeling strategy is needed. After extensive empirical studies, we arrived at a QM-specialized scheduling strategy using episodes with no re-initialization and no noise.
A crucial issue in pseudo-labeling is the bias introduced by low-quality pseudo-labels. Based on theoretical motivations (Section 5), we rely on a key observation: a data point with less evidence (higher model uncertainty) is more likely to have a low-quality pseudo-label (Section 6). Thus, we use model-generated evidential uncertainty to quantify the quality of each unlabeled data point and adaptively lower the weights of bad pseudo-labels in the training loss to reduce their biasing effect.
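A minimal sketch of uncertainty-based adaptive weighting; the specific weight $1/(1+u)$ below is an illustrative choice rather than the paper's exact scheme:

```python
def weighted_pseudo_loss(preds, pseudo_labels, uncertainties):
    """Adaptively down-weight pseudo-labels by model uncertainty: each
    unlabeled point's squared error is scaled by 1/(1 + u), so points
    with high evidential uncertainty contribute little to the loss.
    The 1/(1 + u) weighting is an illustrative stand-in."""
    total, weight_sum = 0.0, 0.0
    for y_hat, y_pl, u in zip(preds, pseudo_labels, uncertainties):
        w = 1.0 / (1.0 + u)
        total += w * (y_hat - y_pl) ** 2
        weight_sum += w
    return total / weight_sum

# A confident pseudo-label (u = 0) dominates an uncertain one (u = 9),
# pulling the weighted loss well below the unweighted mean error of 5.
loss = weighted_pseudo_loss([1.0, 3.0], [0.0, 0.0], [0.0, 9.0])
```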
In summary, our method provides an effective strategy for incorporating QM pseudo-labels to alleviate the fundamental label scarcity issue, along with the associated challenges of low-data and out-of-distribution generalization. We make the following contributions: (1) previous QM/ML methods focus on the in-distribution, label-abundant setting, while we investigate the more realistic low-data and out-of-distribution settings; (2) pivoting away from the status quo of improving the physics-based representation, we propose data-centric approaches that learn from the vast array of unlabeled molecules; (3) we propose a simple, effective, theoretically motivated pseudo-labeling strategy, PSEUD $\sigma$, designed specifically for QM, integrating episodic scheduling and downplaying low-quality pseudo-labels informed by uncertainty; (4) empirically, we show that PSEUD $\sigma$ can improve QM accuracy for any atomistic model across full-data, low-data, and out-of-distribution settings.
## 2 RELATED WORKS

ML-aided quantum calculations. Recently, many ML models have been proposed to improve quantum calculations. They mainly focus on improving the physics-based representation and on architectural developments tested on the full QM9 dataset [Schütt et al., 2017, Unke and Meuwly, 2019, Anderson et al., 2019, Lu et al., 2019, Klicpera et al., 2020, Liu et al., 2021, Qiao et al., 2021]. In contrast, our work shifts the focus to model-agnostic training strategies in realistic low-data and out-of-distribution settings.

Pseudo-labeling. Pseudo-labeling/self-training generates pseudo-labels for unlabeled data. Numerous works exist on how to assign pseudo-labels, notably through trained ML model predictions [Lee et al., 2013], label propagation [Shi et al., 2018, Iscen et al., 2019], and history caches [Likhomanenko et al., 2021, Higuchi et al., 2021]. PSEUD $\sigma$ is different as it focuses on detecting bad pseudo-labels and preventing them from affecting the model. Also, PSEUD $\sigma$ adopts a novel episodic pseudo-labeling strategy with a re-initialized learning rate. [Xie et al., 2020] re-initialize the network as a student when a new pseudo-label set is generated, along with noise per epoch. In contrast, PSEUD $\sigma$ has no student and no noise, as both are shown to be ineffective for QM in Section 6. In addition, small perturbational noise in a $3\mathrm{D}$ molecular geometry can easily lead to a drastic energy change, so a naive strategy of adding noise does not work for QM tasks. More related is the preceding work of [Rizve et al., 2021], which develops an uncertainty-aware pseudo-labeling strategy that introduces additional hyperparameters to remove pseudo-labels above certain uncertainty levels. In contrast, PSEUD $\sigma$ uses an effective adaptive weighting scheme, along with an episodic pseudo-labeling training schedule. Additionally, PSEUD $\sigma$ is the first method to study pseudo-labels in quantum calculations, which present unique challenges.
Uncertainty. Model uncertainty is a well-studied subject [Kendall and Gal, 2017, Lakshminarayanan et al., 2017, Blundell et al., 2015]. [Amini et al., 2020] use evidential uncertainty, adding a prior over the Gaussian parameters to search for higher-order patterns in regression tasks. PSEUD $\sigma$ leverages evidential uncertainty as the uncertainty measure; note, however, that PSEUD $\sigma$ is uncertainty-measure-agnostic and can easily switch to alternative uncertainty measures. Recently, [Soleimany et al., 2021] adapted evidential uncertainty and showed that it can successfully help guide property prediction. In contrast, we leverage evidential uncertainty as a proxy for pseudo-label quality to tackle low-data and out-of-distribution challenges in a realistic quantum calculations setup.
## 3 PROBLEM FORMULATION

Let $\mathcal{X} = \left\{ {{\mathbf{x}}_{1},\ldots ,{\mathbf{x}}_{N}}\right\}$ denote $N$ molecules, where each molecule ${\mathbf{x}}_{i}$ is uniquely defined by $3\mathrm{D}$ coordinates ${\left\{ \left( {a}_{j}^{i},{b}_{j}^{i},{c}_{j}^{i}\right) \right\} }_{j = 1}^{{N}_{i}}$ for its ${N}_{i}$ atoms with atom types ${\left\{ {t}_{j}^{i}\right\} }_{j = 1}^{{N}_{i}}$. We then denote by $\mathcal{Y} = \left\{ {{y}_{1},\ldots ,{y}_{N}}\right\}$ a set of quantum mechanical properties, one per molecule. The labeled dataset thus consists of pairs of 3D coordinates and scalar labels, $\mathcal{D} = \{ \mathcal{X},\mathcal{Y}\}$.

In addition to the labeled data, we solicit a large quantity of unlabeled data to generate pseudo-labels. We denote an unlabeled dataset $\mathcal{U} = \left\{ {{\mathbf{x}}_{1},\ldots ,{\mathbf{x}}_{P}}\right\}$, where $P$ is the size of the unlabeled dataset. Given an atomistic model $f\left( \cdot \right)$, we can generate pseudo-labels $\widehat{\mathcal{Y}} = \left\{ {{\widehat{y}}_{1},\ldots ,{\widehat{y}}_{P}}\right\}$, where ${\widehat{y}}_{i} = f\left( {\mathbf{x}}_{i}\right)$ for ${\mathbf{x}}_{i} \in \mathcal{U}$.

The problem is to train a machine-learning-based atomistic model $f : \mathbf{x} \mapsto y$ that establishes an accurate map from $3\mathrm{D}$ coordinates to the quantum mechanical properties of a molecule, with the help of the pseudo-labeled dataset $\mathcal{U}$.
34
+
35
## 4 PSEUDσ: UNCERTAINTY-AWARE PSEUDO-LABELING FOR QUANTUM CALCULATIONS

PSEUDσ (Figure 1) is an approach for quantum chemical property prediction. Building on the theoretical motivation in Section 5, PSEUDσ solicits pseudo-labels on a vast unlabeled dataset to increase the diversity of the training space via an episodic labeling strategy. It then adaptively weights the pseudo-labels using evidential uncertainty to enable positive transfer. An overview is given in Algorithm 1.
Episodic pseudo-labeling. We devise a pseudo-labeling strategy that extracts as much as possible from the pseudo-labels for QM, with two distinct modifications over existing works. The first is the pseudo-label scheduling. In standard pseudo-labeling [Lee et al., 2013], pseudo-labels are updated at every step while the model is continuously trained. In contrast, we devise an episodic training strategy in which each episode consists of $K$ epochs and pseudo-labels are regenerated only at the start of each episode, while the model is continuously trained. This matters because we observe that updating pseudo-labels too frequently prevents the model from extracting all the useful information they contain; our episodic approach gives the model more time to absorb a given set of pseudo-labels. The second modification is how we carry out model updates. In self-training [Rizve et al., 2021, Xie et al., 2020], a set of pseudo-labels is regenerated after $K$ epochs (one episode) and the model is re-initialized. Instead, we train the same model across episodes, which exposes the model to a larger number of labels and training data points within the same time frame. At the start of each episode, we also re-initialize the learning rate with a small step-wise decay schedule, giving the model a chance to escape the local optimum induced by the previous set of pseudo-labels.
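The schedule above can be sketched end-to-end. The block below is an illustrative toy, not the paper's implementation: a weighted 1-D polynomial fit stands in for the atomistic model, and a constant pseudo-label weight stands in for the adaptive weights introduced later.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the atomistic model f: a weighted degree-1 polynomial fit.
def fit(x, y, w):
    return np.polyfit(x, y, deg=1, w=w)

def predict(coef, x):
    return np.polyval(coef, x)

# Labeled set D and unlabeled set U, drawn from y = 2x + 1
x_lab = rng.uniform(-1, 1, 20)
y_lab = 2 * x_lab + 1 + 0.01 * rng.normal(size=20)
x_unl = rng.uniform(-1, 1, 200)

# Stage 1: regular training on D yields the initial model f^(1)
coef = fit(x_lab, y_lab, np.ones_like(x_lab))

M = 5  # episodes: pseudo-labels are regenerated once per episode,
       # and the SAME model keeps training (no re-initialization)
for k in range(M):
    y_pseudo = predict(coef, x_unl)            # stage 2: pseudo-label U
    x_all = np.concatenate([x_lab, x_unl])     # join gold + pseudo labels
    y_all = np.concatenate([y_lab, y_pseudo])
    w_all = np.concatenate([np.ones_like(x_lab), np.full_like(x_unl, 0.5)])
    coef = fit(x_all, y_all, w_all)            # stage 3: train one episode

print(coef.round(2))  # slope and intercept stay close to [2, 1]
```

In the real method the inner loop runs $K$ gradient epochs with a re-initialized, step-wise-decayed learning rate rather than a closed-form refit.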
Formally, PSEUDσ consists of three stages. In the first stage, regular training is conducted on the labeled data $\mathcal{D}$, and the output model is the initialized model $f^{(1)}$. In the second stage, the updated model at episode $k$ runs inference on the entire unlabeled set, $\widehat{\mathcal{Y}} = f^{(k)}(\mathcal{U})$, to generate the pseudo-label set. The per-episode pseudo-label set is then combined with the gold-labeled data to form the training data for the next episode. In the third stage, the model is further trained on the combined dataset for $K$ epochs (i.e., one more episode) to obtain a new model $f^{(k+1)}$. The second and third stages are then iterated until the loss converges.

Evidential uncertainty quantification. Pseudo-labels are noisy: many are incorrect and can lead to negative transfer. It is therefore crucial to assess the quality of pseudo-labels. However, the dataset contains no auxiliary information about them, so we must quantify quality through a proxy that can be assigned without auxiliary information. Our key observation is that low-quality pseudo-labels come with high model uncertainty, and high-quality pseudo-labels with low model uncertainty. A further advantage of model uncertainty is that it can be estimated solely from $\mathbf{x}$, provided the model is made uncertainty-aware.
Building on the theoretical motivation in Section 5 about the connection between evidential uncertainty and entropy minimization, we use evidential uncertainty as the proxy for label quality. The evidential modeling of the molecular property allows us to derive an analytical expression for the uncertainty, which can be used directly to weight the pseudo-labels. Formally, we model the labels probabilistically as drawn from $(y_1, \cdots, y_i) \sim \mathcal{N}(\mu, \sigma^2)$, where the mean $\mu$ and variance $\sigma^2$ are unknown. To estimate them, we place a prior

$$
\mu \sim \mathcal{N}\left( \gamma, \sigma^2 v^{-1} \right), \quad \sigma^2 \sim \Gamma^{-1}(\alpha, \beta), \tag{1}
$$

where the parameters $\theta = (\mu, \sigma)$ are an instantiation of the posterior $p(\mu, \sigma^2 \mid \gamma, v, \alpha, \beta)$. This choice of prior allows the factorization $p(\mu, \sigma^2) = p(\mu)\,p(\sigma^2)$ [Jordan, 2009]. The posterior then becomes a Normal-Inverse-Gamma$(\gamma, v, \alpha, \beta)$ distribution, for which the maximum likelihood estimate of $\theta$ can be found analytically as
$$
\mathbb{E}[\mu] = \gamma, \quad \mathbb{E}\left[\sigma^2\right] = \frac{\beta}{\alpha - 1}. \tag{2}
$$

Here, $\mathbb{E}\left[\sigma^2\right]$ plays the role of the aleatoric (data) uncertainty. The uncertainty of the model prediction, i.e., the epistemic uncertainty, can also be calculated:

$$
\operatorname{Var}[\mu] = \mathbb{E}\left[\sigma^2\right]/v = \frac{\beta}{v(\alpha - 1)}. \tag{3}
$$
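Concretely, Eqs. 2–3 turn the four evidential outputs into a prediction, an aleatoric uncertainty, and an epistemic uncertainty. A minimal sketch (the parameter values are made up for illustration):

```python
def nig_summaries(gamma, v, alpha, beta):
    """Prediction and uncertainties of a Normal-Inverse-Gamma posterior.

    gamma, v, alpha, beta: the four evidential parameters output by the
    network (alpha > 1 so the inverse-gamma mean exists).
    """
    prediction = gamma                      # E[mu], Eq. 2
    aleatoric = beta / (alpha - 1.0)        # E[sigma^2], Eq. 2
    epistemic = beta / (v * (alpha - 1.0))  # Var[mu], Eq. 3
    return prediction, aleatoric, epistemic

pred, alea, epis = nig_summaries(gamma=0.3, v=2.0, alpha=3.0, beta=4.0)
print(pred, alea, epis)  # 0.3 2.0 1.0
```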
Since the MLE is deterministic, the model can directly output the four prior parameters $\{\gamma, v, \alpha, \beta\}$, from which the prediction and the uncertainty are derived analytically. The prior is optimized with the evidential loss $\mathcal{L}^{\text{evi}}$ [Amini et al., 2020]:

$$
\mathcal{L}_i^{\text{evi}} = -\log \operatorname{St}\left( y_i; \gamma, \frac{\beta(1 + v)}{v\alpha}, 2\alpha \right) + \lambda\,|y_i - \gamma|\,(2v + \alpha), \tag{4}
$$

where the first term maximizes the log-likelihood of the posterior predictive, which is a Student's t-distribution. The second term is a regularizer that imposes a penalty whenever there is an error in the prediction, scaled by the total evidence $2v + \alpha$ of our inferred posterior; conversely, it encourages lower uncertainty when the model prediction is error-free. This pushes the model to generate an accurate estimate of the uncertainty, i.e., the degree of error, for the pseudo-labeled data points. The regularization strength is controlled by a hyperparameter $\lambda$.
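Eq. 4 follows directly from the Student-t log-density. The sketch below evaluates it with scipy's t-distribution; the inputs and $\lambda$ are illustrative values, not settings from the paper:

```python
import numpy as np
from scipy.stats import t as student_t

def evidential_loss(y, gamma, v, alpha, beta, lam=0.01):
    """Per-sample evidential loss of Eq. 4: Student-t NLL + evidence penalty."""
    scale = np.sqrt(beta * (1.0 + v) / (v * alpha))  # Student-t scale
    nll = -student_t.logpdf(y, df=2.0 * alpha, loc=gamma, scale=scale)
    reg = lam * np.abs(y - gamma) * (2.0 * v + alpha)
    return nll + reg

# A confident wrong prediction (large evidence v) is penalized more than
# an uncertain one with the same error.
confident = evidential_loss(y=1.0, gamma=0.0, v=10.0, alpha=5.0, beta=1.0)
uncertain = evidential_loss(y=1.0, gamma=0.0, v=0.1, alpha=5.0, beta=1.0)
print(confident > uncertain)  # True
```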
Adaptive weighting. Evidential uncertainty detects low-quality pseudo-labels; the next step is to remove their noisy effect from model training. Naive methods remove pseudo-labels based on a threshold [Rizve et al., 2021]. However, this has two disadvantages: (1) it introduces a new hyperparameter, the threshold; and (2) it discards a portion of the noisy unlabeled data, which can still contain useful information. Instead, we propose an adaptive weighting mechanism that weights the evidential loss by the inverse of the epistemic uncertainty. Intuitively, a data point with higher uncertainty should have a smaller effect on the loss function, because its pseudo-label is more likely to be of low quality and we want to reduce its effect on the model. Conversely, if a pseudo-label has low uncertainty, its quality is high enough to serve as a high-fidelity proxy for a gold label, so it should have a higher impact on the loss. The uncertainty comes from the teacher model of the previous episode and is fixed throughout the current episode. The adaptive weight for each pseudo-labeled data point $i$ is thus $\widehat{\mathcal{W}}_i = \operatorname{Var}[\mu]_i^{-1}$. The final loss becomes

$$
\mathcal{L} = \frac{1}{|\mathcal{D}|}\sum_{i \in \mathcal{D}} \mathcal{L}_i^{\text{evi}} + \sum_{i \in \mathcal{U}} \frac{\widehat{\mathcal{W}}_i}{\sum_{j \in \mathcal{U}} \widehat{\mathcal{W}}_j}\,\mathcal{L}_i^{\text{evi}}, \tag{5}
$$
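The weighting in Eq. 5 is a few lines of numpy; the per-sample losses and uncertainties below are made up to show how a high-uncertainty pseudo-label is suppressed:

```python
import numpy as np

# Made-up epistemic uncertainties Var[mu] and evidential losses for 4
# pseudo-labeled molecules; the third pseudo-label is very uncertain.
var_mu = np.array([0.1, 0.2, 5.0, 0.1])
loss_unlabeled = np.array([1.0, 1.0, 1.0, 1.0])
loss_labeled = np.array([0.5, 0.7])

w = 1.0 / var_mu      # adaptive weights W_i = Var[mu]^-1
w_norm = w / w.sum()  # normalize over the unlabeled set
total = loss_labeled.mean() + (w_norm * loss_unlabeled).sum()  # Eq. 5
print(w_norm.round(3))  # the uncertain third point gets weight ~0.008
```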
![01963962-4754-7cb8-b840-95836377f22d_3_152_208_1430_430_0.jpg](images/01963962-4754-7cb8-b840-95836377f22d_3_152_208_1430_430_0.jpg)

Figure 1: PSEUDσ illustration. In every episode $k$, PSEUDσ assigns pseudo-labels along with their evidential uncertainty using the trained neural network $f^{(k-1)}$ from the previous episode. The uncertainty is used as a weight to adaptively adjust the loss when training this episode's neural network $f^{(k)}$, reducing the effect of bad pseudo-labels in an inner loop of $K$ epochs.

where the first term, corresponding to the labeled dataset $\mathcal{D}$, is unweighted, unlike the second term, which corresponds to the unlabeled data $\mathcal{U}$. This adaptive loss resolves both disadvantages: it introduces no new hyperparameters, and it suppresses the effect of bad pseudo-labels while retaining all training examples, including the noisy ones, to maximize the diversity of the training space.
## 5 PSEUDσ MOTIVATION: CONNECTION TO ENTROPY MINIMIZATION

We motivate why evidential uncertainty and the weighting mechanism benefit pseudo-labeling through the entropy minimization framework for semi-supervised learning of Grandvalet and Bengio [2004, 2006] and Lee et al. [2013]. Notably, our Bayesian modeling enables us to analytically derive a conditional entropy for pseudo-labeled data. We find that the evidential loss is strongly related to the conditional entropy, so minimizing the evidential loss directly minimizes the entropy. Second, the conditional entropy decomposes into the inverse epistemic uncertainty and the log-likelihood, which motivates our weighting mechanism.

In the standard regression setting, one seeks to maximize the likelihood of the model $\mathrm{p}_{\theta}(\mathcal{Y} \mid \mathcal{X})$ on the labeled dataset $\mathcal{D}$. To utilize the unlabeled dataset, we need to extract useful information about how the model behaves on it and inject this information back to improve the model. To measure this utility, the entropy $\mathcal{H}(\mathcal{Y} \mid \mathcal{U})$ is introduced [Grandvalet and Bengio, 2006] as a proxy for the amount of information in the unlabeled data:
$$
\mathcal{H}(\mathcal{Y} \mid \mathcal{U}) = \sum_{\mathbf{x}_i \in \mathcal{U}} \mathbb{E}_{y \sim \mathrm{p}_{\theta}(y \mid \mathbf{x}_i)}\left[ -\log \mathrm{p}_{\theta}(y \mid \mathbf{x}_i) \right]. \tag{6}
$$

Throughout the text, entropy refers to Shannon entropy. High entropy is associated with random predictions, while low entropy is associated with non-random behavior. Hence, we hypothesize that small entropy may indicate a signal that our model can benefit from. Small entropy, as seen below, corresponds to high model confidence, and large entropy to high model uncertainty. The entropy minimization framework casts the regression as the following optimization problem:

$$
\operatorname{argmax}_{\theta}\left[ \log \mathrm{p}_{\theta}(\mathcal{Y} \mid \mathcal{X}) - c\,\mathcal{H}(\mathcal{Y} \mid \mathcal{U}) \right], \tag{7}
$$
![01963962-4754-7cb8-b840-95836377f22d_4_162_180_1427_368_0.jpg](images/01963962-4754-7cb8-b840-95836377f22d_4_162_180_1427_368_0.jpg)

Figure 2: (a) Dependence of the entropy (Eq. 9) on the epistemic uncertainty and on the virtual-observation parameter $\alpha$, for a fixed aleatoric uncertainty $\mathbb{E}\left[\sigma^2\right] = 1$. As the epistemic uncertainty increases, the entropy increases for all values of $\alpha$; panel (b) shows this trend for a fixed $\alpha = 2$. Panel (c) shows the dependence of the empirical weights (Eq. 14) on the epistemic uncertainty: the weights decrease as the epistemic uncertainty increases; panel (d) shows this trend for a fixed $\alpha = 2$.

where $c$ is a proportionality constant. Intuitively, the objective maximizes the log-likelihood on the labeled dataset while simultaneously minimizing the entropy on the unlabeled dataset, thereby transferring knowledge from the unlabeled data.
In previous works [Schütt et al., 2017, Liu et al., 2021], molecular properties are not modeled probabilistically, so the entropy cannot be calculated. In contrast, PSEUDσ uses a Bayesian modeling approach that allows us to calculate the entropy analytically. For every molecule $\mathbf{x}_i$, the machine learning model outputs four parameters $f(\mathbf{x}_i) = (\alpha_i, \beta_i, \gamma_i, \nu_i)$. Based on these parameters, the likelihood of a label $y$ given the input molecule $\mathbf{x}_i$ is, in the context of evidential regression, the Student's t-distribution

$$
\mathrm{p}_{\theta}(y \mid \mathbf{x}_i) = \operatorname{St}\left( y; \gamma_i, \sigma_{st,i}^2, 2\alpha_i \right), \tag{8}
$$

evaluated at location parameter $\gamma_i$, with scale parameter $\sigma_{st,i}^2 = \frac{\beta_i(1 + \nu_i)}{\nu_i \alpha_i}$ and $2\alpha_i$ degrees of freedom. The entropy of the Student's t-distribution in terms of the evidential parameters is readily available (Appendix):
$$
\begin{aligned}
\mathcal{H}(y \mid \mathbf{x}_i) = {} & \frac{2\alpha_i + 1}{2}\left( \Psi\left( \frac{2\alpha_i + 1}{2} \right) - \Psi(\alpha_i) \right) \\
& + \log \sqrt{2\alpha_i}\,\mathrm{B}\left( \alpha_i, \frac{1}{2} \right) + \frac{1}{2}\log \sigma_{st,i}^2, \tag{9}
\end{aligned}
$$

where $\Psi$ is the digamma function and $\mathrm{B}(\cdot, \cdot)$ is the beta function. Taking the model (epistemic) uncertainty (Eq. 3) as our measure of uncertainty, we can show that minimizing the entropy directly relates to minimizing the epistemic uncertainty; we plot the relation between the entropy and the epistemic uncertainty in Figure 2. As the next step, we aim to uncover how the entropy depends on the model uncertainty of our pseudo-labeling approach. This can be done with two simplifications in the entropy evaluation. First, to capture the iterative nature of pseudo-labeling, we replace the entropy with the cross-entropy between two probability distributions: the predictions $y$ are generated from the distribution $\mathrm{p}_{\theta(t-1)}(y \mid \mathbf{x}_i)$ at iteration $t-1$, while the log-likelihoods are evaluated under the distribution $\mathrm{p}_{\theta(t)}(y \mid \mathbf{x}_i)$ at iteration $t$:
$$
\mathcal{H}(\mathcal{Y} \mid \mathcal{U}) \approx \sum_{\mathbf{x}_i \in \mathcal{U}} \mathbb{E}_{y \sim \mathrm{p}_{\theta(t-1)}(y \mid \mathbf{x}_i)}\left[ -\log \mathrm{p}_{\theta(t)}(y \mid \mathbf{x}_i) \right]. \tag{10}
$$

Upon convergence, $t \rightarrow +\infty$, the probability distributions at consecutive iterations become approximately equal, $\mathrm{p}_{\theta(t-1)}(y \mid \mathbf{x}_i) \approx \mathrm{p}_{\theta(t)}(y \mid \mathbf{x}_i)$, and the introduced cross-entropy with respect to the time step $t$ can be viewed as an entropy. At earlier stages of training, the cross-entropy acts as a regularizer encouraging the network parameters $\theta(t)$ to match $\theta(t-1)$.

As a second approximation, to expose the model uncertainty in the formulas, we approximate the probability distribution at time step $t-1$. We resort to an empirical estimate of the entropy, as done in [Grandvalet and Bengio, 2004]: we select the label $y$ at the highest mode of the probability distribution $\mathrm{p}_{\theta(t-1)}(y \mid \mathbf{x}_i)$, which corresponds to $y = \gamma_i^{t-1}$. We obtain the following approximation of the entropy:
$$
\mathcal{H}(\mathcal{Y} \mid \mathcal{U}) \approx \mathcal{H}_{\text{emp}}(\mathcal{Y} \mid \mathcal{U}) = -\sum_{\mathbf{x}_i \in \mathcal{U}} \mathcal{E}_i^{t-1} \log \mathrm{p}_{\theta(t)}\left( \gamma_i^{t-1} \mid \mathbf{x}_i \right), \tag{11}
$$

where the log-probabilities are weighted by the empirical probabilities $\mathcal{E}_i^{t-1}$, obtained by evaluating Eq. 8 at iteration $t-1$ (see also Appendix Eq. 18 for the exact formula of the Student's t-distribution):

$$
\mathcal{E}_i^{t-1} = \operatorname{St}\left( y = \gamma_i; \gamma_i, \sigma_{st,i}^2, 2\alpha_i \right) = \frac{1}{\sqrt{2\alpha_i \sigma_{st,i}^2}\,\mathrm{B}\left( \frac{1}{2}, \alpha_i \right)}. \tag{12}
$$
---

Algorithm 1: PSEUDσ Algorithm.

**Input:** labeled data $\mathcal{D} = \{(\mathbf{x}_1, y_1), \cdots, (\mathbf{x}_N, y_N)\}$, unlabeled data $\mathcal{U} = \{\mathbf{x}_1, \cdots, \mathbf{x}_P\}$

$\widehat{\mathcal{U}} \leftarrow \{\}$, $\widehat{\mathcal{W}} \leftarrow \{\}$ // Initialize with empty pseudo-labeled data and weights

**for** $k \in \{1, \cdots, M\}$ **do** // Outer loop with $M$ episodes

- $\mathcal{T} \leftarrow \mathcal{D} \cup \widehat{\mathcal{U}}$ // Join updated pseudo-labels
- **for** $(\mathbf{x}_i, \mathbf{y}_i) \in \mathcal{T}$ **do** // Inner loop with $K$ epochs
    - $\theta_i = (\gamma_i, v_i, \alpha_i, \beta_i) = f^{(k-1)}(\mathbf{x}_i)$ // Evidential parameters
    - $\widehat{\mathbf{y}}_i = \mathbb{E}[\mu] = \gamma_i$ // Posterior prediction
    - $\mathcal{L} = \mathrm{L}(\widehat{\mathbf{y}}_i, \mathbf{y}_i, \theta_i, \widehat{\mathcal{W}}_i)$ // Adaptive evidential loss via Eq. 5
    - $f^{(k-1)} = \operatorname{Update}(f^{(k-1)}, \mathcal{L})$ // Inner-loop update
- **end**
- $f^{(k)} \leftarrow f^{(k-1)}$ // Update teacher model for pseudo-labels
- **for** $\mathbf{x}_i \in \mathcal{U}$ **do**
    - $\widehat{\theta}_i = (\widehat{\gamma}_i, \widehat{v}_i, \widehat{\alpha}_i, \widehat{\beta}_i) = f^{(k)}(\mathbf{x}_i)$
    - $\widehat{\mathbf{y}}_i = \widehat{\gamma}_i$ // Infer a new set of pseudo-labels
    - $\widehat{\mathcal{U}}_i \leftarrow (\mathbf{x}_i, \widehat{\mathbf{y}}_i)$ // Store pseudo-labels
    - $\widehat{\mathcal{W}}_i \leftarrow \operatorname{Var}[\mu]_i^{-1} = \widehat{v}_i(\widehat{\alpha}_i - 1)/\widehat{\beta}_i$ // Update adaptive weights
- **end**

**end**

---
To establish a relationship between the empirical weights $\mathcal{E}_i^{t-1}$ and the aleatoric $\mathbb{E}\left[\sigma_i^2\right]$ / epistemic $\operatorname{Var}[\mu_i]$ uncertainties, we rewrite

$$
\sigma_{st,i}^2 = \frac{\alpha_i - 1}{\alpha_i}\left( \operatorname{Var}[\mu_i] + \mathbb{E}\left[\sigma_i^2\right] \right), \tag{13}
$$

$$
\mathcal{E}_i^{t-1} = \frac{\left( \operatorname{Var}[\mu_i] + \mathbb{E}\left[\sigma_i^2\right] \right)^{-\frac{1}{2}}}{\sqrt{2}\,\mathrm{B}\left( \frac{1}{2}, \alpha_i \right)\sqrt{\alpha_i - 1}}. \tag{14}
$$

The empirical coefficients depend on the aleatoric uncertainty, the epistemic uncertainty, and the parameter $\alpha_i$, which can be interpreted as the number of virtual observations in support of the variance estimate [Jordan, 2009]. In the limiting case $\alpha_i \gg 1$, the beta function can be approximated via the Stirling formula, $\mathrm{B}\left( \frac{1}{2}, \alpha_i \right) \approx \sqrt{\pi}\,\alpha_i^{-\frac{1}{2}}$, and the empirical weights become

$$
\mathcal{E}_i^{t-1} \propto \left( \operatorname{Var}[\mu_i] + \mathbb{E}\left[\sigma_i^2\right] \right)^{-\frac{1}{2}}. \tag{15}
$$

That is, up to a constant factor, the empirical coefficients depend on the aleatoric and epistemic uncertainties in a symmetric fashion.

In our pseudo-labeling approach (Eq. 5), we selected the adaptive coefficients $\widehat{\mathcal{W}}_i$ to be the inverse epistemic (model) uncertainties. These coefficients directly relate to the empirical coefficients derived from the entropy minimization approach (Eq. 14), since the empirical coefficients also depend inversely on the model uncertainty: as the model uncertainty increases, the empirical coefficients $\mathcal{E}_i$ decrease so as to minimize the entropy.
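The closed forms above are straightforward to verify numerically: Eq. 9 should match scipy's Student-t entropy, the entropy should grow with the epistemic uncertainty (Figure 2a–b), and the empirical weight of Eq. 14 should shrink with it (Figure 2c–d). The parameter values below are arbitrary:

```python
import numpy as np
from scipy.special import beta as beta_fn, psi
from scipy.stats import t as student_t

def st_scale2(v, alpha, beta):
    """Student-t scale^2 of the posterior predictive (below Eq. 8)."""
    return beta * (1.0 + v) / (v * alpha)

def entropy_eq9(v, alpha, beta):
    """Entropy of the posterior predictive Student-t, Eq. 9."""
    h = (2 * alpha + 1) / 2 * (psi((2 * alpha + 1) / 2) - psi(alpha))
    h += np.log(np.sqrt(2 * alpha) * beta_fn(alpha, 0.5))
    return h + 0.5 * np.log(st_scale2(v, alpha, beta))

def weight_eq14(var_mu, e_sigma2, alpha):
    """Empirical weight E_i^{t-1}, Eq. 14."""
    return (var_mu + e_sigma2) ** -0.5 / (
        np.sqrt(2) * beta_fn(0.5, alpha) * np.sqrt(alpha - 1))

alpha, beta = 2.0, 1.0

# Eq. 9 agrees with scipy's entropy of St(2*alpha, scale):
frozen = student_t(df=2 * alpha, scale=np.sqrt(st_scale2(1.0, alpha, beta)))
print(np.isclose(entropy_eq9(1.0, alpha, beta), frozen.entropy()))  # True

# Smaller v -> larger Var[mu] = beta/(v(alpha-1)) -> larger entropy:
print(entropy_eq9(0.1, alpha, beta) > entropy_eq9(1.0, alpha, beta))  # True

# ... while a larger Var[mu] yields a smaller empirical weight:
print(weight_eq14(5.0, 1.0, alpha) < weight_eq14(0.5, 1.0, alpha))  # True
```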
## 6 EXPERIMENTS

### 6.1 DATASET AND EXPERIMENTAL SETUPS

We evaluate PSEUDσ on the QM9 dataset [Wu et al., 2018] under two settings. (A) Full-data: following previous works [Liu et al., 2021, Klicpera et al., 2020], we use a 110,000/10,000/10,831 training/validation/testing split. We draw unlabeled data from PC9, a dataset of 99,234 molecules composed of the same elements as QM9, curated by [Glavatskikh et al., 2019]. (B) Low-data: we keep $k\%$ of the QM9 full training set as the training set (i.e., $k\% \times 110{,}000$ molecules), remove the labels from the remaining $(1 - k\%)$, and treat those as the unlabeled set. We evaluate PSEUDσ for two values of $k$, 1 and 10 (i.e., only 1,100 or 11,000 labeled QM9 data points are retained, respectively). Dataset statistics are summarized in Table 1. Note that PC9 has wider chemical diversity than QM9, demonstrated by a wider distribution of chemical bond distances and more functional groups [Glavatskikh et al., 2019].
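The low-data splits amount to partitioning the 110,000 training indices; a sketch of the bookkeeping (the seed and helper name are ours, not the released code):

```python
import numpy as np

def low_data_split(n_train=110_000, k_percent=1, seed=0):
    """Keep k% of the training indices as labeled, the rest as unlabeled."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_train)
    n_labeled = n_train * k_percent // 100
    return idx[:n_labeled], idx[n_labeled:]

lab1, unl1 = low_data_split(k_percent=1)     # Low-data-1%
lab10, unl10 = low_data_split(k_percent=10)  # Low-data-10%
print(len(lab1), len(unl1))    # 1100 108900
print(len(lab10), len(unl10))  # 11000 99000
```

The printed sizes match the Low-data rows of Table 1.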
PSEUDσ is model-agnostic. We evaluate it with two model backbones, SchNet [Schütt et al., 2017] (PSEUDσ-S) and DimeNet++ [Klicpera et al., 2020] (PSEUDσ-D). We do not experiment with the SOTA atomistic model SphereNet [Liu et al., 2021] because it is highly computationally expensive. Results are reported on the two targets $\epsilon_{\mathrm{HOMO}}, \epsilon_{\mathrm{LUMO}}$, because the PC9 dataset provides only these two targets. We use mean absolute error (MAE) as the evaluation metric.

For baselines, we compare with six state-of-the-art models: SchNet [Schütt et al., 2017], PhysNet [Unke and Meuwly, 2019], Cormorant [Anderson et al., 2019], MGCN [Lu et al., 2019], DimeNet++ [Klicpera et al., 2020], and SphereNet [Liu et al., 2021]. We report the best results from the original authors' papers, which use the same data split as our full-data setting. For PSEUDσ, we tune hyperparameters on $\epsilon_{\mathrm{HOMO}}$ with the SchNet backbone against the validation MAE, separately for the full-data and low-data settings; the optimal hyperparameters are then used for both targets. The atomistic models themselves use the same hyperparameters as in the original papers. Code will be released after the anonymous review period.
Table 1: Dataset statistics.

<table><tr><td>Setting</td><td>Training Set</td><td>Validation Set</td><td>Testing Set</td><td>Unlabeled Set</td><td>OOD Set</td></tr><tr><td>Full-data</td><td>110,000 (QM9)</td><td>10,000 (QM9)</td><td>10,831 (QM9)</td><td>99,234 (PC9)</td><td>-</td></tr><tr><td>Low-data-1%</td><td>1,100 (QM9)</td><td>10,000 (QM9)</td><td>10,831 (QM9)</td><td>108,900 (QM9)</td><td>-</td></tr><tr><td>Low-data-10%</td><td>11,000 (QM9)</td><td>10,000 (QM9)</td><td>10,831 (QM9)</td><td>99,000 (QM9)</td><td>-</td></tr><tr><td>Out-of-distribution</td><td>110,000 (QM9)</td><td>10,000 (QM9)</td><td>10,831 (QM9)</td><td>99,234 (PC9)</td><td>99,234 (PC9)</td></tr></table>
### 6.2 RESULTS

Overview of results. We report the performance of PSEUDσ in the full-data (Table 2), low-data (Table 3), and out-of-distribution (Table 4) settings, and find that PSEUDσ achieves the best performance across all settings, suggesting the robustness of the pseudo-labeling strategy. A systematic ablation study (Table 5) shows the importance of each module of PSEUDσ.
PSEUDσ improves on fully supervised QM calculations. We compare PSEUDσ against six state-of-the-art models in Table 2. PSEUDσ-D surpasses all baselines on both targets $\epsilon_{\mathrm{HOMO}}, \epsilon_{\mathrm{LUMO}}$. Notably, PSEUDσ-D improves the SOTA by 3.2 meV, a significant margin. In particular, comparing PSEUDσ-S with SchNet and PSEUDσ-D with DimeNet++, we find that PSEUDσ consistently improves by a large margin even in the fully supervised setting (8.1 meV for SchNet and 4.2 meV for DimeNet++), highlighting the utility of PSEUDσ and the high quality of PC9 as unlabeled data. It also shows that improving the learning strategy, rather than the physics-based representation, is a very promising direction.

PSEUDσ significantly improves on low-data QM calculations. In Table 3, we investigate how PSEUDσ performs in the low-data regime with only 1% or 10% of the training data (i.e., using only 1,100 and 11,000 QM calculations). PSEUDσ improves prediction accuracy on $\epsilon_{\mathrm{HOMO}}, \epsilon_{\mathrm{LUMO}}$ in 7 of the 8 combinations of low-data setting, backbone, and target, suggesting that PSEUDσ helps prediction in realistic low-data settings (simulating the use of more expensive QM levels of theory such as CCSD(T)/MP2). Notably, on $\epsilon_{\mathrm{LUMO}}$ with 1% of QM9 data, PSEUDσ improves upon SchNet by 57.8 meV, a considerable margin. We also observe that the gain is much more significant when the training dataset is smaller.

PSEUDσ improves out-of-distribution QM calculations. Another realistic challenge is accurate inference on unseen data distributions away from QM9. We conduct inference on the PC9 dataset (which already contains calculated $\epsilon_{\mathrm{HOMO}}, \epsilon_{\mathrm{LUMO}}$ values). PSEUDσ again significantly improves OOD accuracy over DimeNet++, a SOTA method, with a 16.0 meV improvement on $\epsilon_{\mathrm{HOMO}}$ and an 8.4 meV improvement on $\epsilon_{\mathrm{LUMO}}$ (Table 4), highlighting the robustness of PSEUDσ.
![01963962-4754-7cb8-b840-95836377f22d_6_952_533_583_408_0.jpg](images/01963962-4754-7cb8-b840-95836377f22d_6_952_533_583_408_0.jpg)

Figure 3: Uncertainty highly correlates with label quality.
Evidential uncertainty highly correlates with label quality. PSEUDσ uses uncertainty as a proxy for label quality because the two are highly correlated for unseen molecules; this experiment validates that hypothesis. We train on the complete QM9 training set with the evidential loss and then run inference on the QM9 test set. The non-parametric Spearman correlation between the MAE and the epistemic uncertainty is 0.42 with p-value $< 10^{-16}$. Additionally, on the PC9 out-of-distribution set, the Spearman correlation is 0.35 with p-value $< 10^{-16}$, suggesting that our uncertainty is a robust measure of label quality.
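This check reduces to one scipy call on per-molecule absolute errors and epistemic uncertainties. The data below is synthetic, standing in for actual model outputs, with errors that grow noisily with the uncertainty:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Synthetic stand-in: per-molecule epistemic uncertainty, and absolute
# errors whose spread grows with it, mimicking the observed relation.
epistemic = rng.gamma(shape=2.0, scale=1.0, size=10_000)
abs_err = np.abs(rng.normal(scale=np.sqrt(epistemic)))

rho, pval = spearmanr(epistemic, abs_err)
print(round(float(rho), 2), pval < 1e-16)
```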
Ablations. In Table 5, we conduct a systematic ablation study using SchNet as the backbone on the fully supervised QM9 setting, showing that each component of PSEUDσ is indispensable. In Table 2, we reported the original authors' best performance following standard practice [Klicpera et al., 2020, Liu et al., 2021]. To further demonstrate the utility of pseudo-labeling, the -pseudo-label ablation keeps all hyperparameters the same but removes the pseudo-labeling part; our pseudo-labeling strategy improves over it by a large margin. Next, the -uncertainty ablation uses a vanilla per-epoch pseudo-labeling strategy with no uncertainty, which degrades performance even below the -pseudo-label ablation. To compare against standard self-training, the -student ablation retrains a model from scratch in every episode as in [Xie et al., 2020], again with decreased performance. Lastly, the -uniform ablation assigns the same weight to all pseudo-labels with no uncertainty reweighting; its decreased performance shows the importance of detecting and adaptively down-weighting bad pseudo-labels, achieved by our evidential characterization of the molecular target property.
+ Table 2: PSEUD $\sigma$ improves on full data setting. Reported metric is MAE. The lower the better.
274
+
275
+ <table><tr><td>Property</td><td>Unit</td><td>SchNet</td><td>PhysNet</td><td>Cormorant</td><td>MGCN</td><td>DimeNet++</td><td>SphereNet</td><td>Pseudo-S</td><td>Pseud $\sigma - \mathrm{D}$</td></tr><tr><td>${\epsilon }_{\text{HOMO }}$</td><td>meV</td><td>41</td><td>32.9</td><td>36</td><td>42.1</td><td>24.6</td><td>23.6</td><td>32.9</td><td>20.4</td></tr><tr><td>${\epsilon }_{\text{LUMO }}$</td><td>meV</td><td>34</td><td>24.7</td><td>36</td><td>57.4</td><td>19.5</td><td>18.9</td><td>24.7</td><td>18.2</td></tr></table>
276
+
277
+ Table 3: PSEUD $\sigma$ improves on low-data regime. Reported metric is MAE. The lower the better.
278
+
279
+ <table><tr><td colspan="2">Low-Data Setting</td><td colspan="2">1% QM9 (1,100)</td><td colspan="2">10% QM9 (11,000)</td></tr><tr><td>Property</td><td>Unit</td><td>SchNet $\rightarrow$ PSEUD $\sigma$</td><td>DimeNet++ $\rightarrow$ PSEUD $\sigma$</td><td>SchNet $\rightarrow$ Pseud</td><td>DimeNet++ $\rightarrow$ PSEUD $\sigma$</td></tr><tr><td>${\epsilon }_{\text{HOMO }}$</td><td>meV</td><td>${265.4}\xrightarrow[]{+{10.8}}{276.2}$</td><td>${248.9}\xrightarrow[]{-{18.7}}{230.2}$</td><td>${119.0}\xrightarrow[]{-{30.2}}{88.8}$</td><td>${81.1}\xrightarrow[]{-{13.7}}{67.4}$</td></tr><tr><td>${\epsilon }_{\text{LUMO }}$</td><td>meV</td><td>${290.6}\xrightarrow[]{-{57.8}}{232.8}$</td><td>${229.3}\xrightarrow[]{-{5.2}}{224.1}$</td><td>${93.3}\xrightarrow[]{-{15.0}}{78.3}$</td><td>${60.8}\xrightarrow[]{-{1.6}}{59.2}$</td></tr></table>
280
+
281
+ Table 4: Out-of-distribution best validation MAE.
282
+
283
+ <table><tr><td>Property</td><td>Unit</td><td>SchNet</td><td>DimeNet++</td><td>PSEUD $\sigma$ -D</td></tr><tr><td>${\epsilon }_{\text{HOMO }}$</td><td>meV</td><td>243.4</td><td>230.4</td><td>214.4</td></tr><tr><td>${\epsilon }_{\text{LUMO }}$</td><td>meV</td><td>225.0</td><td>184.2</td><td>175.8</td></tr></table>
284
+
285
+ Table 5: Ablation using SchNet as backbone on the fully supervised setting.
286
+
287
+ <table><tr><td>Property</td><td>Unit</td><td>PSEUD $\sigma$</td><td>-pseudo-label</td><td>-uncertainty</td><td>-student</td><td>-uniform</td></tr><tr><td>${\epsilon }_{\text{HOMO }}$</td><td>meV</td><td>32.9</td><td>38.9</td><td>47.7</td><td>41.4</td><td>37.2</td></tr><tr><td>${\epsilon }_{\text{LUMO }}$</td><td>meV</td><td>24.7</td><td>27.2</td><td>32.1</td><td>31.4</td><td>28.8</td></tr></table>
288
+
289
+ ## 7 CONCLUSION
290
+
291
+ We introduce PSEUD $\sigma$ , a simple, effective, model-agnostic pseudo-labeling strategy that improves the accuracy of quantum calculations in abundant-data, low-data, and out-of-distribution settings. PSEUD $\sigma$ learns from vast unlabeled data by assigning uncertainty-aware pseudo-labels, which are adaptively selected and absorbed into the model via an episodic schedule. Unlike earlier methods in QM that focus on physics-based representations, we show the potential of this data-centric approach to improve performance on a task crucial to materials and therapeutic discovery.
292
+
293
+ ## References
294
+
295
+ Alexander Amini, Wilko Schwarting, Ava Soleimany, and Daniela Rus. Deep evidential regression. NeurIPS, 2020.
296
+
297
+ Brandon Anderson, Truong-Son Hy, and Risi Kondor. Cormorant: Covariant molecular neural networks. NeurIPS, 2019.
298
+
299
+ Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural network. In ICML, pages 1613-1622. PMLR, 2015.
300
+
301
+ Marta Glavatskikh, Jules Leguy, Gilles Hunault, Thomas Cauchy, and Benoit Da Mota. Dataset's chemical diversity limits the generalizability of machine learning predictions. Journal of Cheminformatics, 11(1):1-15, 2019.
302
+
303
+ Yves Grandvalet and Yoshua Bengio. Semi-supervised learning by entropy minimization. NeurIPS, 2004.
304
+
305
+ Yves Grandvalet and Yoshua Bengio. Entropy regularization. Semi-Supervised Learning, pages 151-168, 2006.
306
+
307
+ Yosuke Higuchi, Niko Moritz, Jonathan Le Roux, and Takaaki Hori. Momentum pseudo-labeling for semi-supervised speech recognition. INTERSPEECH, 2021.
308
+
309
+ Ahmet Iscen, Giorgos Tolias, Yannis Avrithis, and Ondrej Chum. Label propagation for deep semi-supervised learning. In CVPR, pages 5070-5079, 2019.
310
+
311
+ Michael I Jordan. The exponential family: Conjugate priors. 2009.
312
+
313
+ Alex Kendall and Yarin Gal. What uncertainties do we need in bayesian deep learning for computer vision? NeurIPS, 2017.
314
+
315
+ Johannes Klicpera, Janek Groß, and Stephan Günnemann. Directional message passing for molecular graphs. ICLR, 2020.
316
+
317
+ Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. NeurIPS, 2017.
320
+
321
+ Dong-Hyun Lee et al. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In Workshop on challenges in representation learning, ICML, volume 3, page 896, 2013.
322
+
323
+ Tatiana Likhomanenko, Qiantong Xu, Jacob Kahn, Gabriel Synnaeve, and Ronan Collobert. slimIPL: Language-model-free iterative pseudo-labeling. INTERSPEECH, 2021.
324
+
325
+ Yi Liu, Limei Wang, Meng Liu, Xuan Zhang, Bora Oztekin, and Shuiwang Ji. Spherical message passing for 3D graph networks. arXiv:2102.05013, 2021.
326
+
327
+ Chengqiang Lu, Qi Liu, Chao Wang, Zhenya Huang, Peize Lin, and Lixin He. Molecular property prediction: A multilevel quantum interactions modeling perspective. In AAAI, volume 33, pages 1052-1060, 2019.
328
+
329
+ Robert G. Parr and Yang Weitao. Density-functional theory of atoms and molecules. 1989.
330
+
331
+ Zhuoran Qiao, Anders S Christensen, Frederick R Manby, Matthew Welborn, Anima Anandkumar, and Thomas F Miller III. UNiTE: Unitary N-body tensor equivariant network with applications to quantum chemistry. arXiv:2105.14655, 2021.
332
+
333
+ Mamshad Nayeem Rizve, Kevin Duarte, Yogesh S Rawat, and Mubarak Shah. In defense of pseudo-labeling: An uncertainty-aware pseudo-label selection framework for semi-supervised learning. ICLR, 2021.
334
+
335
+ Kristof T Schütt, Pieter-Jan Kindermans, Huziel E Sauceda, Stefan Chmiela, Alexandre Tkatchenko, and Klaus-Robert Müller. Schnet: A continuous-filter convolutional neural network for modeling quantum interactions. NeurIPS, 2017.
336
+
337
+ Weiwei Shi, Yihong Gong, Chris Ding, Zhiheng Ma, Xiaoyu Tao, and Nanning Zheng. Transductive semi-supervised deep learning using min-max features. In ECCV, pages 299-315, 2018.
338
+
339
+ Ava P Soleimany, Alexander Amini, Samuel Goldman, Daniela Rus, Sangeeta N Bhatia, and Connor W Coley. Evidential deep learning for guided molecular property prediction and discovery. ACS Central Science, 2021.
340
+
341
+ Oliver T Unke and Markus Meuwly. Physnet: a neural network for predicting energies, forces, dipole moments, and partial charges. Journal of Chemical Theory and Computation, 15(6):3678-3693, 2019.
342
+
343
+ O. Anatole von Lilienfeld and Kieron Burke. Retrospective on a decade of machine learning for chemical discovery. Nature Communications, 11(1):4895, 2020. ISSN 2041-1723. doi: 10.1038/s41467-020-18556-9. URL https://www.nature.com/articles/s41467-020-18556-9.
344
+
345
+ John D Watts, Jürgen Gauss, and Rodney J Bartlett. Open-shell analytical energy gradients for triple excitation many-body, coupled-cluster methods: MBPT(4), CCSD+T(CCSD), CCSD(T), and QCISD(T). Chemical Physics Letters, 200(1-2):1-7, 1992.
346
+
347
+ Zhenqin Wu, Bharath Ramsundar, Evan N Feinberg, Joseph Gomes, Caleb Geniesse, Aneesh S Pappu, Karl Leswing, and Vijay Pande. Moleculenet: a benchmark for molecular machine learning. Chemical Science, 9(2):513-530, 2018.
348
+
349
+ Qizhe Xie, Minh-Thang Luong, Eduard Hovy, and Quoc V Le. Self-training with noisy student improves imagenet classification. In CVPR, pages 10687-10698, 2020.
UAI/UAI 2022/UAI 2022 Conference/BAeO6LIjcec/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,346 @@
1
+ § UNCERTAINTY-AWARE PSEUDO-LABELING FOR QUANTUM CALCULATIONS
2
+
3
+ § ABSTRACT
4
+
5
+ Machine learning models have recently shown promise in predicting molecular quantum chemical properties. However, the path to real-life adoption requires (1) learning under low-resource constraints and (2) out-of-distribution generalization to unseen, structurally diverse molecules. We observe that both challenges can be addressed with abundant labels, which are often unavailable in quantum chemistry. We hypothesize that pseudo-labels assigned to a vast array of unlabeled molecules can serve as gold-label proxies that significantly expand the labeled training dataset. The challenge in pseudo-labeling is to prevent bad pseudo-labels from biasing the model. Motivated by the entropy minimization framework, we develop a simple and effective strategy, PSEUD $\sigma$ , that assigns pseudo-labels, detects bad pseudo-labels through evidential uncertainty, and prevents them from biasing the model via adaptive weighting. Empirically, PSEUD $\sigma$ improves the accuracy of quantum calculations in full-data, low-data, and out-of-distribution settings.
6
+
7
+ § 1 INTRODUCTION
8
+
9
+ Ab initio quantum chemistry methods attempt to solve the electronic many-body Schrödinger equation to characterize biomolecular properties and interactions at different levels of theory and numerical approximation. Despite an extensive repertoire of methods, from post-Hartree-Fock approaches such as CCSD(T) (coupled cluster singles, doubles, and perturbative triples) and MP2 (second-order Møller-Plesset) [Watts et al., 1992] to Density Functional Theory (DFT) [Parr and Weitao, 1989], these calculations remain numerically expensive even with recent advances in hardware capabilities. Machine learning (ML) models have shown astonishing performance in approximating these calculations at a fraction of the computational cost [von Lilienfeld and Burke, 2020]. Such speedups have the potential to accelerate the discovery of new materials and therapeutics.
10
+
11
+ Most publications on this topic have relied on the QM9 dataset, a standard benchmark for training and evaluating ML models that predict QM properties of small molecules, precomputed using approximate DFT calculations. Model-centric approaches have demonstrated the great capabilities of machine learning on this dataset by showing low error on a holdout test set of unseen molecules (e.g. [Schütt et al., 2017, Klicpera et al., 2020, Liu et al., 2021]). Despite this promise, realistic adoption still faces unsolved challenges. First, previous ML models rely on a large number of labeled molecular geometries (e.g. 100K for QM9), which are often not available for higher-fidelity levels of energy calculation such as CCSD(T) or MP2 - the challenge is for a QM/ML model to perform well with a small number of computed geometries. Second, previous works evaluate trained ML models on a test set drawn from a chemical space similar to the training set (i.e. in-distribution), while the goal of deployment is to predict energies for structurally distant molecules across the diverse chemical space - the challenge is for a QM/ML model to generalize to out-of-distribution molecules. We observe that known QM/ML architectures have significantly higher errors in these difficult regimes, calling for innovative ML algorithms to tackle these challenges (Section 6).
12
+
13
+ Present work. Our work addresses the fundamental cause of the above challenges - the scarcity of computed QM labels on a diverse set of chemicals. We utilize the abundance of unlabeled molecules and develop an effective pseudo-labeling strategy suitable for QM calculations. The basic idea behind pseudo-labeling is to estimate labels for the unlabeled data and thereby expand the training dataset. Several pseudo-labeling methods have already been applied successfully to improve state-of-the-art models in various machine learning domains, especially computer vision [Xie et al., 2020, Lee et al., 2013, Iscen et al., 2019]. We investigate many of these approaches (e.g., data augmentation, model noise, student retraining, re-initialization) and find that many have a negative effect in quantum calculations. For example, adding positional noise [Xie et al., 2020] to molecular geometries can significantly change energies and thus bias the pseudo-labels. A QM-specialized pseudo-labeling strategy is therefore needed. After extensive empirical studies, we arrived at a QM-specialized scheduling strategy that uses episodes with no re-initialization and no noise.
14
+
15
+ A crucial issue in pseudo-labeling is the bias introduced by low-quality pseudo-labels. Based on theoretical motivations (Section 5), we rely on a key observation: a data point with less evidence (higher model uncertainty) is more likely to carry a low-quality pseudo-label (Section 6). Thus, we use model-generated evidential uncertainty to quantify each unlabeled data point and adaptively lower the weights of bad pseudo-labels in the training loss to reduce their biasing effect.
16
+
17
+ In summary, our method provides an effective strategy to incorporate QM pseudo-labels to alleviate the fundamental label scarcity issue, along with the associated challenges of low-data and out-of-distribution generalization. We make the following contributions: (1) Previous QM/ML methods focus on the in-distribution, label-abundant setting, while we investigate the more realistic low-data and out-of-distribution settings; (2) Pivoting away from the status quo of improving physics-based representations, we propose data-centric approaches that learn from the vast array of unlabeled molecules; (3) We propose a simple, effective, theoretically motivated pseudo-labeling strategy, PSEUD $\sigma$ , designed specifically for QM, integrating episodic scheduling and uncertainty-informed downweighting of low-quality pseudo-labels; (4) Empirically, we show that PSEUD $\sigma$ can improve QM accuracy for any atomistic model across full-data, low-data, and out-of-distribution settings.
18
+
19
+ § 2 RELATED WORKS
20
+
21
+ ML-aided quantum calculations. Recently, many ML models have been proposed to improve quantum calculations. They mainly focus on improving the physics-based representation and architectural developments tested on the full QM9 dataset [Schütt et al., 2017, Unke and Meuwly, 2019, Anderson et al., 2019, Lu et al., 2019, Klicpera et al., 2020, Liu et al., 2021, Qiao et al., 2021]. In contrast, our work proposes to shift the focus to model-agnostic training strategies in realistic low-data and out-of-distribution settings.
22
+
23
+ Pseudo-labeling. Pseudo-labeling/self-training generates pseudo-labels for unlabeled data. Numerous works on how to assign pseudo-labels exist, notably through trained ML model predictions [Lee et al., 2013], label propagation [Shi et al., 2018, Iscen et al., 2019], and history caches [Likhomanenko et al., 2021, Higuchi et al., 2021]. PSEUD $\sigma$ is different as it focuses on detecting bad pseudo-labels and preventing them from affecting the model. PSEUD $\sigma$ also adopts a novel episodic pseudo-labeling strategy with a re-initialized learning rate. [Xie et al., 2020] re-initialize the network as a student whenever a new pseudo-label set is generated and inject noise in every epoch. In contrast, PSEUD $\sigma$ uses no student and no noise, as both are shown to be ineffective for QM in Section 6; in particular, a small perturbational noise in $3\mathrm{D}$ molecular geometry can easily lead to a drastic energy change, so a naive noising strategy does not work for QM tasks. More closely related is the preceding work of [Rizve et al., 2021], which develops an uncertainty-aware pseudo-labeling strategy but introduces additional hyperparameters to remove pseudo-labels above some uncertainty threshold. In contrast, PSEUD $\sigma$ uses an effective adaptive weighting scheme, along with an episodic pseudo-labeling training schedule. Additionally, PSEUD $\sigma$ is the first method to study pseudo-labeling in quantum calculations, which present unique challenges.
24
+
25
+ Uncertainty. Model uncertainty is a well-studied subject [Kendall and Gal, 2017, Lakshminarayanan et al., 2017, Blundell et al., 2015]. [Amini et al., 2020] use evidential uncertainty, placing a prior over the Gaussian parameters to search for higher-order patterns in regression tasks. PSEUD $\sigma$ leverages evidential uncertainty as its uncertainty measure, but it is uncertainty-measure-agnostic: we can easily switch to alternative measures. Recently, [Soleimany et al., 2021] adapted evidential uncertainty and showed that it can successfully guide property prediction. In contrast, we leverage evidential uncertainty as a proxy for pseudo-label quality to tackle low-data and out-of-distribution challenges in a realistic quantum calculations setup.
26
+
27
+ § 3 PROBLEM FORMULATION
28
+
29
+ Let $\mathcal{X} = \left\{ {{\mathbf{x}}_{1},\ldots ,{\mathbf{x}}_{N}}\right\}$ denote $N$ molecules, where each molecule ${\mathbf{x}}_{i}$ is uniquely defined by the $3\mathrm{D}$ coordinates ${\left\{ \left( {a}_{j}^{i},{b}_{j}^{i},{c}_{j}^{i}\right) \right\} }_{j = 1}^{{N}_{i}}$ of its ${N}_{i}$ atoms, with atom types ${\left\{ {t}_{j}\right\} }_{j = 1}^{{N}_{i}}$ . We denote by $\mathcal{Y} = \left\{ {{y}_{1},\ldots {y}_{N}}\right\}$ the set of quantum mechanical properties, one per molecule. The labeled dataset thus consists of pairs of 3D coordinates and scalar labels, $\mathcal{D} = \{ \mathcal{X},\mathcal{Y}\}$ .
30
+
31
+ In addition to the labeled data, we solicit a large quantity of unlabeled data to generate pseudo-labels. We denote an unlabeled dataset $\mathcal{U} = \left\{ {{\mathbf{x}}_{1},\ldots ,{\mathbf{x}}_{P}}\right\}$ , where $P$ is the size of the unlabeled dataset. Given an atomistic model $f\left( \cdot \right)$ , we can generate pseudo-labels $\widehat{\mathcal{Y}} = \left\{ {{\widehat{y}}_{1},\ldots ,{\widehat{y}}_{P}}\right\}$ , where ${\widehat{y}}_{i} = f\left( {\mathbf{x}}_{i}\right)$ for ${\mathbf{x}}_{i} \in \mathcal{U}$ .
32
+
33
+ The problem is to train a machine learning-based atomistic model $f : \mathbf{x} \mapsto y$ that can establish an accurate map from $3\mathrm{D}$ coordinates to the quantum mechanical properties of the molecules with the help of pseudo-labeled dataset $\mathcal{U}$ .
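As a concrete illustration of this setup, the sketch below encodes molecules, the labeled set $\mathcal{D}$, and pseudo-label generation over $\mathcal{U}$ in plain Python. The `Molecule` container, the dummy target value, and the toy predictor are hypothetical stand-ins for an atomistic model $f$, not part of the paper's implementation.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Molecule:
    """A molecule x_i: 3D coordinates (a, b, c) and atom types for its N_i atoms."""
    coords: List[Tuple[float, float, float]]
    atom_types: List[int]

# Labeled dataset D = {X, Y}: molecules paired with scalar QM properties y_i.
labeled: List[Tuple[Molecule, float]] = [
    (Molecule([(0.0, 0.0, 0.0), (0.74, 0.0, 0.0)], [1, 1]), 0.5),  # dummy target
]
# Unlabeled dataset U: geometries only; pseudo-labels fill the missing targets.
unlabeled: List[Molecule] = [Molecule([(0.0, 0.0, 0.0)], [2])]

def pseudo_label(f: Callable[[Molecule], float], pool: List[Molecule]) -> List[float]:
    """Generate pseudo-labels y_hat_i = f(x_i) for every x_i in the pool U."""
    return [f(x) for x in pool]
```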
34
+
35
+ § 4 PSEUD $\sigma$ : UNCERTAINTY-AWARE PSEUDO-LABELING FOR QUANTUM CALCULATIONS
36
+
37
+ PSEUD $\sigma$ (Figure 1) is an approach for quantum chemical property prediction. Building on the theoretical motivation in Section 5, PSEUD $\sigma$ solicits pseudo-labels on a vast array of unlabeled data to increase the diversity of the training space via an episodic labeling strategy. It then adaptively weights the pseudo-labels using evidential uncertainty to allow a positive transfer. An overview is given in Algorithm 1.
38
+
39
+ Episodic Pseudo-labeling. We devise a pseudo-labeling strategy that ensures learning from the pseudo-labels to the fullest extent for QM, making two distinct modifications compared to existing works. The first concerns pseudo-label scheduling. In standard pseudo-labeling [Lee et al., 2013], pseudo-labels are refreshed after every model update while the model is continuously trained. In contrast, we devise an episodic training strategy, where each episode consists of $K$ epochs and pseudo-labels are regenerated only at the start of each episode, while the model is continuously trained. This is important because we observe that updating pseudo-labels too frequently prevents the model from extracting all the useful information from them; our episodic approach gives the model more time to absorb useful information from a given set of pseudo-labels. The second modification is how we carry out model updates. In self-training [Rizve et al., 2021, Xie et al., 2020], a set of pseudo-labels is regenerated after $K$ epochs (one episode) and the model is reinitialized. Instead, we train the same model across episodes. This strategy exposes the model to a larger number of labels, or training data points, within the same time frame. For each episode, we also reinitialize the learning rate with a small step-wise decay strategy to give the model a chance to jump out of the local optimum induced by the previous set of pseudo-labels.
40
+
41
+ Formally, PSEUD $\sigma$ consists of three stages: in the first stage, regular training is conducted on the labeled data $\mathcal{D}$ , and the output model is the initialized model ${f}^{\left( 1\right) }$ . In the second stage, the updated model at episode $k$ conducts inference on the entire unlabeled data, $\widehat{\mathcal{Y}} = {f}^{\left( k\right) }\left( \mathcal{U}\right)$ , to generate the pseudo-label set. This per-episode pseudo-label set is then combined with the gold-labeled data to form the training data for the next episode. In the third stage, the model is further trained on the combined dataset for $K$ epochs (i.e., one more episode) to obtain a new model ${f}^{\left( k + 1\right) }$ . The second and third stages are then reiterated until the loss converges.
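The three-stage schedule can be sketched as a short, runnable loop. Here a single scalar "model" fit toward the mean of its targets stands in for the atomistic network, and all function names are ours, not the paper's; what the sketch shows is the scheduling itself: pseudo-labels regenerated once per episode, the same model trained across episodes, and the learning rate re-initialized with a small step-wise decay each episode.

```python
def train(model_state, data, epochs, lr0):
    # Stand-in for K epochs of gradient training: the toy "model" is a single
    # scalar pulled toward the mean of the (pseudo-)labeled targets.
    lr = lr0  # learning rate re-initialized at the start of each episode
    for _ in range(epochs):
        grad = sum(model_state["w"] - y for _, y in data) / len(data)
        model_state["w"] -= lr * grad
        lr *= 0.9  # small step-wise decay within the episode
    return model_state

def pseudosigma_schedule(labeled, unlabeled, episodes=3, K=5, lr0=0.5):
    model = {"w": 0.0}
    model = train(model, labeled, K, lr0)               # stage 1: fit on D -> f^(1)
    for _ in range(episodes):
        pseudo = [(x, model["w"]) for x in unlabeled]   # stage 2: y_hat = f^(k)(U)
        model = train(model, labeled + pseudo, K, lr0)  # stage 3: continue training
    return model                                        # same model across episodes
```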
42
+
43
+ Evidential uncertainty quantification. Pseudo-labels are noisy: many are incorrect and can lead to negative transfer. It is therefore instrumental to determine the quality of pseudo-labels. However, the dataset contains no auxiliary information about pseudo-label quality, so we must quantify it through a proxy that can be computed without such information. Our key observation is that low-quality pseudo-labels have high model uncertainty, while high-quality pseudo-labels have low model uncertainty. A further advantage of model uncertainty is that, once the model is made uncertainty-aware, it can be estimated solely from $\mathbf{x}$ .
44
+
45
+ Building on the theoretical motivation about the connection between evidential uncertainty and entropy minimization in Section 5, we use evidential uncertainty as the proxy for label quality. The evidential modeling of the molecular property allows us to derive an analytical expression for uncertainty, which can be directly used to weight the pseudo-labels. Formally, we model the labels probabilistically as being drawn from $\left( {{y}_{1},\cdots ,{y}_{N}}\right) \sim \mathcal{N}\left( {\mu ,{\sigma }^{2}}\right)$ , where the mean $\mu$ and variance ${\sigma }^{2}$ are unknown. To estimate them, we place a prior
46
+
47
+ $$
48
+ \mu \sim \mathcal{N}\left( {\gamma ,{\sigma }^{2}{v}^{-1}}\right) ,{\sigma }^{2} \sim {\Gamma }^{-1}\left( {\alpha ,\beta }\right) , \tag{1}
49
+ $$
50
+
51
+ where the parameters $\theta = \left( {\mu ,\sigma }\right)$ are an instantiation of the posterior $p\left( {\mu ,{\sigma }^{2} \mid \gamma ,v,\alpha ,\beta }\right)$ . This choice of prior allows the factorization $p\left( {\mu ,{\sigma }^{2}}\right) = p\left( \mu \right) p\left( {\sigma }^{2}\right)$ [Jordan, 2009]. The posterior then becomes a NormalInvGamma $\left( {\gamma ,v,\alpha ,\beta }\right)$ , for which the maximum likelihood estimate of $\theta$ can be found analytically as
52
+
53
+ $$
54
+ \mathbb{E}\left\lbrack \mu \right\rbrack = \gamma ,\mathbb{E}\left\lbrack {\sigma }^{2}\right\rbrack = \frac{\beta }{\alpha - 1}. \tag{2}
55
+ $$
56
+
57
+ Here, $\mathbb{E}\left\lbrack {\sigma }^{2}\right\rbrack$ plays the role of the aleatoric (data) uncertainty. The uncertainty of the model prediction can also be calculated, i.e. epistemic uncertainty:
58
+
59
+ $$
60
+ \operatorname{Var}\left\lbrack \mu \right\rbrack = \mathbb{E}\left\lbrack {\sigma }^{2}\right\rbrack /v = \frac{\beta }{v\left( {\alpha - 1}\right) }. \tag{3}
61
+ $$
62
+
63
+ As the MLE is deterministic, the model can output four prior parameters $\{ \gamma ,v,\alpha ,\beta \}$ directly where the prediction and uncertainty can be derived from them analytically. The prior is optimized by evidential loss ${\mathcal{L}}^{\text{ evi }}$ [Amini et al.,2020]:
64
+
65
+ $$
66
+ {\mathcal{L}}_{i}^{\text{ evi }} = - \log \operatorname{St}\left( {{y}_{i};\gamma ,\frac{\beta \left( {1 + v}\right) }{v\alpha },{2\alpha }}\right) + \lambda \left| {{y}_{i} - \gamma }\right| \left( {{2v} + \alpha }\right) ,
67
+ $$
68
+
69
+ (4)
70
+
71
+ where the first term maximizes the log-likelihood of the posterior predictive, which is a Student's t-distribution. The second term is a regularizer that imposes a penalty whenever there is an error in the prediction, scaled by the total evidence ${2v} + \alpha$ of the inferred posterior; conversely, it permits lower uncertainty when the model prediction is error-free. This encourages the model to generate an accurate estimate of uncertainty, or of the degree of error, for the pseudo-labeled data points. The regularization is controlled by a hyperparameter $\lambda$ .
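The quantities in Eqs. (2)-(4) are simple enough to compute directly. The sketch below (plain Python; function names are our own, and in practice these would operate on network outputs, e.g. as tensors) evaluates the NIG prediction and uncertainties, and the per-sample evidential loss via the Student-t log-density with scale $\sigma^2_{st} = \beta(1+v)/(v\alpha)$ and $2\alpha$ degrees of freedom.

```python
import math

def nig_stats(gamma, v, alpha, beta):
    # Eqs. (2)-(3): prediction E[mu], aleatoric E[sigma^2], epistemic Var[mu]
    # from the NormalInvGamma(gamma, v, alpha, beta) posterior (alpha > 1).
    pred = gamma
    aleatoric = beta / (alpha - 1.0)
    epistemic = beta / (v * (alpha - 1.0))
    return pred, aleatoric, epistemic

def evidential_loss(y, gamma, v, alpha, beta, lam=1.0):
    # Eq. (4): negative log-likelihood of the Student-t posterior predictive
    # St(y; gamma, beta*(1+v)/(v*alpha), 2*alpha) plus the evidence regularizer.
    dof = 2.0 * alpha
    scale2 = beta * (1.0 + v) / (v * alpha)
    log_st = (math.lgamma((dof + 1.0) / 2.0) - math.lgamma(dof / 2.0)
              - 0.5 * math.log(dof * math.pi * scale2)
              - (dof + 1.0) / 2.0 * math.log1p((y - gamma) ** 2 / (dof * scale2)))
    reg = abs(y - gamma) * (2.0 * v + alpha)
    return -log_st + lam * reg
```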
72
+
73
+ Adaptive weighting. Evidential uncertainty detects the low-quality pseudo-labels; the next step is to remove their noisy effect from model training. Naive methods often remove pseudo-labels based on a threshold [Rizve et al., 2021]. However, this has two disadvantages: (1) it introduces a new hyperparameter, the threshold; and (2) it discards a portion of the unlabeled noisy data, which can still contain useful information. Instead, we propose an adaptive weighting mechanism that weights the evidential loss with the inverse of the epistemic uncertainty. Intuitively, a higher-uncertainty data point should have a smaller effect on the loss function because its pseudo-label is more likely to be of low quality, and we want to reduce its effect on the model. Conversely, if a pseudo-label has low uncertainty, its quality is high enough to serve as a high-fidelity proxy for a gold label, so it should have a higher impact on the loss. The uncertainty comes from the teacher model of the previous episode and is fixed throughout the current episode. The adaptive weight for each pseudo data point $i$ is thus ${\widehat{\mathcal{W}}}_{i} = \operatorname{Var}{\left\lbrack \mu \right\rbrack }_{i}^{-1}$ , and the final loss becomes
74
+
75
+ $$
76
+ \mathcal{L} = \frac{1}{\left| \mathcal{D}\right| }\mathop{\sum }\limits_{{i \in \mathcal{D}}}{\mathcal{L}}_{i}^{\text{ evi }} + \mathop{\sum }\limits_{{i \in \mathcal{U}}}\frac{{\widehat{\mathcal{W}}}_{i}}{\mathop{\sum }\limits_{{i \in \mathcal{U}}}{\widehat{\mathcal{W}}}_{i}}{\mathcal{L}}_{i}^{\text{ evi }}, \tag{5}
77
+ $$
78
+
79
80
+
81
+ Figure 1: PSEUD $\sigma$ illustration. In every episode $k$ , PSEUD $\sigma$ assigns pseudo-labels along with their evidential uncertainty using trained neural network ${f}^{\left( k - 1\right) }$ from previous episode. The uncertainty is used as weight to adaptively adjust the loss in this episode’s neural network ${f}^{\left( k\right) }$ ’s training to reduce the effect of bad pseudo-labels in an inner-loop training with $K$ epochs.
82
+
83
+ where the first term, corresponding to the labeled dataset $\mathcal{D}$ , does not have weights unlike the second term corresponding to the unlabeled data $\mathcal{U}$ . This adaptive loss solves two disadvantages: it has zero hyperparameters, and it removes the effect of bad pseudo-labels while retaining all training examples including the noisy ones to maximize the diversity of the training space.
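Given per-sample evidential losses and the teacher's epistemic variances, Eq. (5) reduces to a mean over gold-label losses plus a normalized weighted sum over pseudo-label losses. A minimal sketch (function name ours) under these assumptions:

```python
def pseudosigma_loss(labeled_losses, pseudo_losses, epistemic_vars):
    # Eq. (5): unweighted mean evidential loss on gold labels, plus a pseudo-label
    # term whose per-sample weights W_i = Var[mu]_i^{-1} are normalized to sum to 1,
    # so high-uncertainty (likely bad) pseudo-labels contribute little.
    labeled_term = sum(labeled_losses) / len(labeled_losses)
    weights = [1.0 / var for var in epistemic_vars]  # inverse epistemic uncertainty
    z = sum(weights)
    pseudo_term = sum(w / z * l for w, l in zip(weights, pseudo_losses))
    return labeled_term + pseudo_term
```

Note that because the weights are normalized, no extra hyperparameter is needed to balance the two terms, matching the "zero hyperparameters" property claimed above.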
84
+
85
+ § 5 PSEUD $\sigma$ MOTIVATION: CONNECTION TO ENTROPY MINIMIZATION
86
+
87
+ We motivate why evidential uncertainty and the weighting mechanism are beneficial to pseudo-labeling using the entropy minimization framework for semi-supervised learning of Grandvalet and Bengio [2004, 2006] and Lee et al. [2013]. Notably, our use of Bayesian modeling enables us to analytically derive a conditional entropy for pseudo-labeled data. First, we find that the evidential loss strongly relates to conditional entropy, so minimizing the evidential loss directly minimizes entropy. Second, we find that the conditional entropy can be decomposed into the inverse epistemic uncertainty and the log-likelihood, which motivates our weighting mechanism.
88
+
89
+ In the standard regression setting, one seeks to maximize the likelihood of the model ${p}_{\theta }\left( {\mathcal{Y} \mid \mathcal{X}}\right)$ on the labeled data set $\mathcal{D}$ . To utilize the unlabeled data set, we need to extract some useful information on how the model behaves on the unlabeled dataset and inject this information to improve the model. To measure the utility, entropy $\mathcal{H}\left( {Y \mid \mathcal{U}}\right)$ is introduced [Grandvalet and Bengio, 2006] as a proxy to measure the amount of information in unlabeled data:
90
+
91
+ $$
92
+ \mathcal{H}\left( {\mathcal{Y} \mid \mathcal{U}}\right) = \mathop{\sum }\limits_{{{\mathbf{x}}_{i} \in \mathcal{U}}}{\mathrm{E}}_{y \sim {\mathrm{p}}_{\theta }\left( {y \mid {\mathbf{x}}_{i}}\right) }\left\lbrack {-\log {\mathrm{p}}_{\theta }\left( {y \mid {\mathbf{x}}_{i}}\right) }\right\rbrack . \tag{6}
93
+ $$
94
+
95
+ Throughout the text, entropy refers to Shannon entropy. High entropy is associated with random predictions, while low entropy is associated with non-random behavior. Hence, we hypothesize that small entropy may be an indication of a signal that our model can benefit from. As seen below, small entropy corresponds to high model confidence, and large entropy to high model uncertainty. The entropy minimization framework casts the regression as the following optimization problem:
96
+
97
+ $$
98
+ {\operatorname{argmax}}_{\theta }\left\lbrack {\log {\mathrm{p}}_{\theta }\left( {\mathcal{Y} \mid \mathcal{X}}\right) - c\mathcal{H}\left( {\mathcal{Y} \mid \mathcal{U}}\right) }\right\rbrack , \tag{7}
99
+ $$
100
+
101
102
+
103
+ Figure 2: (a) Dependence of the entropy (Eq. 9) on epistemic uncertainty and the virtual observation parameter $\alpha$ for a fixed aleatoric uncertainty $\mathbb{E}\left\lbrack {\sigma }^{2}\right\rbrack = 1$ . As the epistemic uncertainty increases, the entropy also increases for all values of $\alpha$ . For example, panel (b) demonstrates the trend for a fixed $\alpha = 2$ . Panel (c) shows the dependence of the empirical weights (Eq. 14) on epistemic uncertainty; the weights tend to decrease as the epistemic uncertainty increases. Panel (d) demonstrates this trend for a fixed $\alpha = 2$ .
104
+
105
+ where $c$ is the proportionality constant. Intuitively, here, the objective tends to maximize the log-likelihood on the labeled dataset while minimizing the entropy on the unlabeled data set at the same time to transfer knowledge from unlabeled data.
106
+
107
+ In previous works [Schütt et al., 2017, Liu et al., 2021], molecular properties are not modeled probabilistically, so entropy calculation is infeasible. In contrast, PSEUD $\sigma$ uses a Bayesian modeling approach that allows us to calculate the entropy analytically. For every molecule ${\mathbf{x}}_{i}$ , the machine learning model outputs four parameters $f\left( {\mathbf{x}}_{i}\right) = \left( {{\alpha }_{i},{\beta }_{i},{\gamma }_{i},{\nu }_{i}}\right)$ . Based on these parameters, the likelihood of label $y$ given the input molecule ${\mathbf{x}}_{i}$ is given by the Student’s t-distribution in the context of evidential regression
108
+
109
+ $$
110
+ {p}_{\theta }\left( {y \mid {\mathbf{x}}_{i}}\right) = \operatorname{St}\left( {y;{\gamma }_{i},{\sigma }_{{st},i}^{2},2{\alpha }_{i}}\right) \tag{8}
111
+ $$
112
+
113
+ evaluated at location parameter ${\gamma }_{i}$ , Student’s t-distribution scale parameter ${\sigma }_{{st},i}^{2} = \frac{{\beta }_{i}\left( {1 + {\nu }_{i}}\right) }{{\nu }_{i}{\alpha }_{i}}$ , and $2{\alpha }_{i}$ degrees of freedom. The entropy of the Student's t-distribution in terms of the evidential parameters is readily available (see Appendix):
114
+
115
+ $$
116
+ \mathcal{H}\left( {y \mid {\mathbf{x}}_{i}}\right) = \frac{2{\alpha }_{i} + 1}{2}\left( {\Psi \left( \frac{2{\alpha }_{i} + 1}{2}\right) - \Psi \left( {\alpha }_{i}\right) }\right) \tag{9}
117
+ $$
118
+
119
+ $$
120
+ + \log \sqrt{2{\alpha }_{i}}\mathrm{\;B}\left( {{\alpha }_{i},\frac{1}{2}}\right) + \frac{1}{2}\log {\sigma }_{{st},i}^{2},
121
+ $$
122
+
123
+ where $\Psi$ is the digamma function and $\mathrm{B}\left( {\cdot , \cdot }\right)$ is the beta function. Taking the model (epistemic) uncertainty (Eq. 3) as our measure of uncertainty, we can show that minimizing entropy directly relates to minimizing epistemic uncertainty; the relation between the two is plotted in Figure 2. As the next step, we aim to uncover how the entropy depends on the model uncertainty in our pseudo-labeling approach. This can be done with two simplifications in the entropy evaluation. First, to introduce iterations as in pseudo-labeling, we replace the entropy with the cross-entropy between two probability distributions: the predictions $y$ are generated from the distribution ${p}_{\theta \left( {t - 1}\right) }\left( {y \mid {\mathbf{x}}_{i}}\right)$ at iteration step $t - 1$ , and the log-likelihood is evaluated with respect to the distribution ${p}_{\theta \left( t\right) }\left( {y \mid {\mathbf{x}}_{i}}\right)$ at iteration step $t$ :
+
+ $$
+ \mathcal{H}\left( {\mathcal{Y} \mid \mathcal{U}}\right) \approx \mathop{\sum }\limits_{{{\mathbf{x}}_{i} \in \mathcal{U}}}{\mathbb{E}}_{y \sim {p}_{\theta \left( {t - 1}\right) }\left( {y \mid {\mathbf{x}}_{i}}\right) }\left\lbrack {-\log {p}_{\theta \left( t\right) }\left( {y \mid {\mathbf{x}}_{i}}\right) }\right\rbrack . \tag{10}
+ $$
+
+ Upon convergence, $t \rightarrow + \infty$ , the probability distributions at successive iterative steps become approximately the same, ${p}_{\theta \left( {t - 1}\right) }\left( {y \mid {\mathbf{x}}_{i}}\right) \approx {p}_{\theta \left( t\right) }\left( {y \mid {\mathbf{x}}_{i}}\right)$ , and one can view the introduced cross-entropy with respect to time step $t$ as an entropy. At the earlier stages of training, the cross-entropy acts as a regularizer encouraging the network parameters $\theta \left( t\right)$ to match $\theta \left( {t - 1}\right)$ .
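The cross-entropy in Eq. 10 can be estimated by Monte Carlo; the sketch below uses hypothetical `sample_prev` / `logpdf_curr` callables standing in for sampling from $p_{\theta(t-1)}(y \mid \mathbf{x}_i)$ and evaluating $\log p_{\theta(t)}(y \mid \mathbf{x}_i)$:

```python
import math
import random

def cross_entropy_mc(sample_prev, logpdf_curr, xs, n=1000, seed=0):
    """Monte Carlo estimate of Eq. 10: for each x, draw y from the previous
    iterate's predictive distribution and average -log p_t(y | x)."""
    rng = random.Random(seed)
    total = 0.0
    for x in xs:
        acc = 0.0
        for _ in range(n):
            y = sample_prev(x, rng)
            acc -= logpdf_curr(x, y)
        total += acc / n
    return total
```

When both distributions coincide (the converged case), the estimate reduces to the entropy of the predictive distribution.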
+
+ As a second approximation, to uncover the model uncertainty in the mathematical formulas, we approximate the probability distribution at time step $t - 1$ . We resort to an empirical estimate of the entropy, as done in [Grandvalet and Bengio, 2004]. We select labels $y$ at the highest mode of the probability distribution ${p}_{\theta \left( {t - 1}\right) }\left( {y \mid {\mathbf{x}}_{i}}\right)$ , which corresponds to $y = {\gamma }_{i}^{t - 1}$ . We obtain the following approximation for the entropy:
+
+ $$
+ \mathcal{H}\left( {\mathcal{Y} \mid \mathcal{U}}\right) \approx {\mathcal{H}}_{\text{emp}}\left( {\mathcal{Y} \mid \mathcal{U}}\right) = - \mathop{\sum }\limits_{{{\mathbf{x}}_{i} \in \mathcal{U}}}{\mathcal{E}}_{i}^{t - 1}\log {p}_{\theta \left( t\right) }\left( {{\gamma }_{i}^{t - 1} \mid {\mathbf{x}}_{i}}\right) , \tag{11}
+ $$
+
+ where the log-probabilities are weighted by empirical probabilities ${\mathcal{E}}_{i}^{t - 1}$ , obtained by evaluating Eq. 8 at iterative step $t - 1$ (see also Appendix Eq. 18 for the exact formula of the Student's t-distribution)
+
+ $$
+ {\mathcal{E}}_{i}^{t - 1} = \operatorname{St}\left( {y = {\gamma }_{i},{\sigma }_{{st},i}^{2},2{\alpha }_{i}}\right) = \frac{1}{\sqrt{2{\alpha }_{i}{\sigma }_{{st},i}^{2}}\mathrm{\;B}\left( {\frac{1}{2},{\alpha }_{i}}\right) }. \tag{12}
+ $$
+
+ Algorithm 1: PSEUD $\sigma$ Algorithm.
+
+ Input: Labeled data $\mathcal{D} = \left\{ {\left( {{\mathbf{x}}_{1},{y}_{1}}\right) ,\cdots ,\left( {{\mathbf{x}}_{N},{y}_{N}}\right) }\right\}$ , unlabeled data $\mathcal{U} = \left\{ {{\mathbf{x}}_{1},\cdots ,{\mathbf{x}}_{P}}\right\}$
+
+ $\widehat{\mathcal{U}} \leftarrow \{ \} ,\widehat{\mathcal{W}} \leftarrow \{ \}$ // Initialize empty pseudo-label and weight sets
+
+ for $k \in \{ 1,\cdots ,M\}$ do // Outer loop with $M$ episodes
+
+ $\quad \mathcal{T} \leftarrow \mathcal{D} \cup \widehat{\mathcal{U}}$ // Join updated pseudo-labels
+
+ $\quad$ for $\left( {{\mathbf{x}}_{i},{\mathbf{y}}_{i}}\right) \in \mathcal{T}$ do // Inner loop with $K$ epochs
+
+ $\quad \quad {\theta }_{i} = \left( {{\gamma }_{i},{\nu }_{i},{\alpha }_{i},{\beta }_{i}}\right) = {f}^{\left( k - 1\right) }\left( {\mathbf{x}}_{i}\right)$ // Evidential parameters
+
+ $\quad \quad {\widehat{\mathbf{y}}}_{i} = \mathbb{E}\left\lbrack \mu \right\rbrack = {\gamma }_{i}$ // Posterior prediction
+
+ $\quad \quad \mathcal{L} = \mathrm{L}\left( {{\widehat{\mathbf{y}}}_{i},{\mathbf{y}}_{i},{\theta }_{i},{\widehat{\mathcal{W}}}_{i}}\right)$ // Adaptive evidential loss via Eq. 5
+
+ $\quad \quad {f}^{\left( k - 1\right) } = \operatorname{Update}\left( {{f}^{\left( k - 1\right) },\mathcal{L}}\right)$ // Inner-loop update
+
+ $\quad$ end
+
+ $\quad {f}^{\left( k\right) } \leftarrow {f}^{\left( k - 1\right) }$ // Update teacher model for pseudo-labels
+
+ $\quad$ for ${\mathbf{x}}_{i} \in \mathcal{U}$ do
+
+ $\quad \quad {\widehat{\theta }}_{i} = \left( {{\widehat{\gamma }}_{i},{\widehat{\nu }}_{i},{\widehat{\alpha }}_{i},{\widehat{\beta }}_{i}}\right) = {f}^{\left( k\right) }\left( {\mathbf{x}}_{i}\right)$
+
+ $\quad \quad {\widehat{\mathbf{y}}}_{i} = {\widehat{\gamma }}_{i}$ // Infer a new set of pseudo-labels
+
+ $\quad \quad {\widehat{\mathcal{U}}}_{i} \leftarrow \left( {{\mathbf{x}}_{i},{\widehat{\mathbf{y}}}_{i}}\right)$ // Store pseudo-labels
+
+ $\quad \quad {\widehat{\mathcal{W}}}_{i} \leftarrow \operatorname{Var}{\left\lbrack \mu \right\rbrack }_{i}^{-1} = {\widehat{\nu }}_{i}\left( {{\widehat{\alpha }}_{i} - 1}\right) /{\widehat{\beta }}_{i}$ // Update adaptive weights
+
+ $\quad$ end
+
+ end
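The pseudo-labeling pass of Algorithm 1 can be sketched minimally as follows; the function names and the stub `model` interface (mapping an input to evidential parameters $(\gamma, \nu, \alpha, \beta)$) are ours, and the adaptive weight is the inverse epistemic variance $\operatorname{Var}[\mu]^{-1} = \nu(\alpha - 1)/\beta$:

```python
def epistemic_weight(nu, alpha, beta):
    """Inverse epistemic variance Var[mu]^-1 = nu * (alpha - 1) / beta."""
    return nu * (alpha - 1.0) / beta

def pseudo_label_episode(model, unlabeled):
    """One outer-loop episode: infer a pseudo-label (posterior mean gamma)
    and an adaptive weight for every unlabeled point."""
    pseudo, weights = [], []
    for x in unlabeled:
        gamma, nu, alpha, beta = model(x)
        pseudo.append((x, gamma))  # pseudo-label = E[mu] = gamma
        weights.append(epistemic_weight(nu, alpha, beta))
    return pseudo, weights
```

In the full method, the returned pairs and weights feed the adaptive evidential loss of Eq. 5 in the next episode.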
+
+ To establish a relationship between the empirical weights ${\mathcal{E}}_{i}^{t - 1}$ and the aleatoric $\mathbb{E}\left\lbrack {\sigma }_{i}^{2}\right\rbrack$ / epistemic $\operatorname{Var}\left\lbrack {\mu }_{i}\right\rbrack$ uncertainties, we rewrite
+
+ $$
+ {\sigma }_{{st},i}^{2} = \frac{{\alpha }_{i} - 1}{{\alpha }_{i}}\left( {\operatorname{Var}\left\lbrack {\mu }_{i}\right\rbrack + \mathbb{E}\left\lbrack {\sigma }_{i}^{2}\right\rbrack }\right) \tag{13}
+ $$
+
+ $$
+ {\mathcal{E}}_{i}^{t - 1} = \frac{{\left( \operatorname{Var}\left\lbrack {\mu }_{i}\right\rbrack + \mathbb{E}\left\lbrack {\sigma }_{i}^{2}\right\rbrack \right) }^{-\frac{1}{2}}}{\sqrt{2}\mathrm{\;B}\left( {\frac{1}{2},{\alpha }_{i}}\right) \sqrt{{\alpha }_{i} - 1}}. \tag{14}
+ $$
+
+ The empirical coefficients depend on the aleatoric and epistemic uncertainties and on the parameter ${\alpha }_{i}$ , which can be interpreted as the number of virtual observations in support of the variance estimate [Jordan, 2009]. In the limiting case ${\alpha }_{i} \gg 1$ one can approximate the beta function via Stirling's formula, $\mathrm{B}\left( {\frac{1}{2},{\alpha }_{i}}\right) \approx \sqrt{\pi }{\alpha }_{i}^{-\frac{1}{2}}$ , and the empirical weights become (up to a multiplicative constant)
+
+ $$
+ {\mathcal{E}}_{i}^{t - 1} \approx {\left( \operatorname{Var}\left\lbrack {\mu }_{i}\right\rbrack + \mathbb{E}\left\lbrack {\sigma }_{i}^{2}\right\rbrack \right) }^{-\frac{1}{2}}. \tag{15}
+ $$
+
+ Thus the empirical coefficients depend on both the aleatoric and the epistemic uncertainty in a symmetric fashion.
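Eq. 14 can be evaluated directly with `math.lgamma`; this is a sketch (function name ours). Note that in the large- $\alpha$ limit the $\alpha$ -dependent prefactor tends to the constant $1/\sqrt{2\pi}$, so the scaling ${\mathcal{E}}_{i} \propto \left( \operatorname{Var}\left\lbrack {\mu }_{i}\right\rbrack + \mathbb{E}\left\lbrack {\sigma }_{i}^{2}\right\rbrack \right)^{-1/2}$ of Eq. 15 holds up to that constant:

```python
import math

def empirical_weight(alpha, var_mu, exp_sigma_sq):
    """Empirical weight of Eq. 14: an alpha-dependent prefactor times
    (Var[mu] + E[sigma^2])^(-1/2). Requires alpha > 1."""
    # log B(1/2, alpha) = log Gamma(1/2) + log Gamma(alpha) - log Gamma(alpha + 1/2)
    log_beta = math.lgamma(0.5) + math.lgamma(alpha) - math.lgamma(alpha + 0.5)
    prefactor = 1.0 / (math.sqrt(2.0) * math.exp(log_beta) * math.sqrt(alpha - 1.0))
    return prefactor * (var_mu + exp_sigma_sq) ** -0.5
```

Quadrupling the total (aleatoric plus epistemic) uncertainty halves the weight, which is the inverse-uncertainty behavior the adaptive coefficients rely on.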
+
+ We selected the adaptive pseudo-labeling coefficients ${\mathcal{W}}_{i}$ in our pseudo-labeling approach (Eq. 5) to be the inverse epistemic/model uncertainties. These coefficients directly relate to the empirical coefficients derived from the entropy-minimization approach (Eq. 14), as the empirical coefficients also depend inversely on the model uncertainty. As the model uncertainty increases, the empirical coefficients ${\mathcal{E}}_{i}$ decrease so as to minimize the entropy.
+
+ § 6 EXPERIMENTS
+
+ § 6.1 DATASET AND EXPERIMENTAL SETUPS
+
+ We evaluate PSEUD $\sigma$ using the QM9 dataset [Wu et al., 2018] under two settings. (A) Full-data: we follow previous works [Liu et al., 2021, Klicpera et al., 2020] and use a 110,000/10,000/10,831 training/validation/testing split. We draw unlabeled data from PC9, a dataset of 99,234 molecules consisting of the same elements as QM9, curated by [Glavatskikh et al., 2019]. (B) Low-data: we use $k\%$ of the QM9 full training set as the training set (i.e., $k\% \times 110{,}000$ ), remove the labels from the remaining $\left( {1 - k\% }\right)$ of the QM9 full training set, and treat this as the unlabeled set. We evaluate PSEUD $\sigma$ for two $k$ values, 1 and 10 (i.e., only 1,100 and 11,000 labeled QM9 data points are retained, respectively). A summary of the dataset statistics is presented in Table 1. Note that PC9 has a wider chemical diversity than QM9, demonstrated by a wider distribution of chemical-bond distances and more functional groups [Glavatskikh et al., 2019].
+
+ PSEUD $\sigma$ is model-agnostic. We evaluate it with two model backbones, SchNet [Schütt et al., 2017] (PSEUD $\sigma$ -S) and DimeNet++ [Klicpera et al., 2020] (PSEUD $\sigma$ -D). We do not experiment with the SOTA atomistic model SphereNet [Liu et al., 2021] because it is highly computationally expensive. Our experiments are conducted on the two targets ${\epsilon }_{\mathrm{HOMO}}$ and ${\epsilon }_{\mathrm{LUMO}}$ , because the PC9 dataset only has these two targets. We use the mean absolute error (MAE) as the evaluation metric.
+
+ For baselines, we compare with 6 state-of-the-art models: SchNet [Schütt et al., 2017], PhysNet [Unke and Meuwly, 2019], Cormorant [Anderson et al., 2019], MGCN [Lu et al., 2019], DimeNet++ [Klicpera et al., 2020], and SphereNet [Liu et al., 2021]. We report the best results from the original authors' papers, which use the same data split in the full-data setting. For PSEUD $\sigma$ , we tune hyperparameters on ${\epsilon }_{\mathrm{HOMO}}$ with the SchNet backbone against the validation MAE, separately for the full-data and low-data settings. The optimal hyperparameters are then used for both targets. Note that the atomistic model itself uses the same hyperparameters as in the original papers. Code will be released after the anonymous review period.
+
+ Table 1: Dataset statistics.
+
+ | Setting | Training Set | Validation Set | Testing Set | Unlabeled Set | OOD Set |
+ | --- | --- | --- | --- | --- | --- |
+ | Full-data | 110,000 (QM9) | 10,000 (QM9) | 10,831 (QM9) | 99,234 (PC9) | - |
+ | Low-data-1% | 1,100 (QM9) | 10,000 (QM9) | 10,831 (QM9) | 108,900 (QM9) | - |
+ | Low-data-10% | 11,000 (QM9) | 10,000 (QM9) | 10,831 (QM9) | 99,000 (QM9) | - |
+ | Out-of-distribution | 110,000 (QM9) | 10,000 (QM9) | 10,831 (QM9) | 99,234 (PC9) | 99,234 (PC9) |
+
+ § 6.2 RESULTS
+
+ Overview of results. We report the performance of PSEUD $\sigma$ in the full-data (Table 2), low-data (Table 3), and out-of-distribution (Table 4) settings and find that PSEUD $\sigma$ achieves the best performance across all settings, suggesting the robustness of the pseudo-labeling strategy. A systematic ablation study (Table 5) shows the importance of each module in PSEUD $\sigma$ .
+
+ PSEUD $\sigma$ improves on fully supervised QM calculations. We compare PSEUD $\sigma$ against 6 state-of-the-art models in Table 2. PSEUD $\sigma$ -D surpasses all baselines on both targets ${\epsilon }_{\mathrm{HOMO}}$ and ${\epsilon }_{\mathrm{LUMO}}$ . Notably, PSEUD $\sigma$ -D improves the SOTA by 3.2 meV, a significant margin. In particular, comparing PSEUD $\sigma$ -S with SchNet and PSEUD $\sigma$ -D with DimeNet++, we find that PSEUD $\sigma$ consistently improves even in the fully supervised setting by a large margin (8.1 meV for SchNet and 4.2 meV for DimeNet++), highlighting the utility of PSEUD $\sigma$ and the high quality of PC9 as unlabeled data. It also shows that improving the learning strategy, rather than the physics-based representation, is a very promising direction.
+
+ PSEUD $\sigma$ significantly improves on low-data QM calculations. In Table 3, we investigate how PSEUD $\sigma$ performs in the low-data regime with only $1\%$ or ${10}\%$ of the training data (i.e., using only 1,100 or 11,000 QM calculations). We observe that PSEUD $\sigma$ consistently and significantly improves prediction accuracy on ${\epsilon }_{\mathrm{HOMO}}$ and ${\epsilon }_{\mathrm{LUMO}}$ across both low-data settings and both model backbones, suggesting that PSEUD $\sigma$ can help prediction in realistic low-data settings (simulating the use of more expensive QM levels of theory such as CCSD(T)/MP2). Notably, on ${\epsilon }_{\mathrm{LUMO}}$ with $1\%$ of the QM9 data, PSEUD $\sigma$ improves upon SchNet by 57.8 meV, a considerable margin. We also observe that the gain is much more significant when the training dataset is smaller.
+
+ PSEUD $\sigma$ improves out-of-distribution QM calculations. Another realistic challenge is to infer accurately on an unseen data distribution away from QM9. We conduct inference on the PC9 dataset (which already contains calculated ${\epsilon }_{\mathrm{HOMO}}$ and ${\epsilon }_{\mathrm{LUMO}}$ values). We find that PSEUD $\sigma$ again significantly improves OOD accuracy over DimeNet++, a SOTA method, with over a 16.0 meV improvement on ${\epsilon }_{\mathrm{HOMO}}$ and an 8.4 meV improvement on ${\epsilon }_{\mathrm{LUMO}}$ , highlighting the robustness of PSEUD $\sigma$ .
+
+ Figure 3: Uncertainty highly correlates to label quality.
+
+ Evidential uncertainty highly correlates with label quality. PSEUD $\sigma$ uses uncertainty as a proxy for label quality because the two are highly correlated for unseen molecules. In this experiment, we validate this hypothesis. We train on the complete QM9 training set with evidential uncertainty and then infer on the QM9 testing set. We find that the non-parametric Spearman correlation between MAE and epistemic uncertainty is 0.42 with a p-value $< 1\mathrm{e}{-16}$ . Additionally, we evaluate on the PC9 out-of-distribution set, where the Spearman correlation is 0.35 with a p-value $< 1\mathrm{e}{-16}$ , suggesting our uncertainty is a robust measure of label quality.
+
+ Ablations. In Table 5, we conduct a systematic ablation study using SchNet as the backbone architecture in the fully supervised QM9 setting. We show that each component of PSEUD $\sigma$ is indispensable. In Table 2, we reported the original authors' best performance following standard practice [Klicpera et al., 2020, Liu et al., 2021]. To further demonstrate the utility of pseudo-labeling, in -pseudo-label we keep all hyperparameters the same but remove the pseudo-labeling part; our pseudo-labeling strategy improves performance by a large margin. Next, in the -uncertainty ablation, we use a vanilla per-epoch pseudo-labeling strategy with no uncertainty, which decreases performance even compared to -pseudo-label. Then, to compare against standard self-training, the -student ablation retrains a model in every episode as in [Xie et al., 2020], and we again see decreased performance. Lastly, the -uniform ablation uses the same weight for all pseudo-labels with no uncertainty reweighting. The decreased performance shows the importance of detecting and adaptively removing bad pseudo-labels, achieved by our evidential characterization of the molecular target property.
+
+ Table 2: PSEUD $\sigma$ improves in the full-data setting. Reported metric is MAE (lower is better).
+
+ | Property | Unit | SchNet | PhysNet | Cormorant | MGCN | DimeNet++ | SphereNet | PSEUD $\sigma$ -S | PSEUD $\sigma$ -D |
+ | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+ | ${\epsilon }_{\text{HOMO}}$ | meV | 41 | 32.9 | 36 | 42.1 | 24.6 | 23.6 | 32.9 | 20.4 |
+ | ${\epsilon }_{\text{LUMO}}$ | meV | 34 | 24.7 | 36 | 57.4 | 19.5 | 18.9 | 24.7 | 18.2 |
+
+ Table 3: PSEUD $\sigma$ improves in the low-data regime. Reported metric is MAE (lower is better).
+
+ | Property | Unit | 1% QM9 (1,100): SchNet $\rightarrow$ PSEUD $\sigma$ | 1% QM9 (1,100): DimeNet++ $\rightarrow$ PSEUD $\sigma$ | 10% QM9 (11,000): SchNet $\rightarrow$ PSEUD $\sigma$ | 10% QM9 (11,000): DimeNet++ $\rightarrow$ PSEUD $\sigma$ |
+ | --- | --- | --- | --- | --- | --- |
+ | ${\epsilon }_{\text{HOMO}}$ | meV | ${265.4}\xrightarrow[]{+{10.8}}{276.2}$ | ${248.9}\xrightarrow[]{-{18.7}}{230.2}$ | ${119.0}\xrightarrow[]{-{30.2}}{88.8}$ | ${81.1}\xrightarrow[]{-{13.7}}{67.4}$ |
+ | ${\epsilon }_{\text{LUMO}}$ | meV | ${290.6}\xrightarrow[]{-{57.8}}{232.8}$ | ${229.3}\xrightarrow[]{-{5.2}}{224.1}$ | ${93.3}\xrightarrow[]{-{15.0}}{78.3}$ | ${60.8}\xrightarrow[]{-{1.6}}{59.2}$ |
+
+ Table 4: Out-of-distribution best validation MAE.
+
+ | Property | Unit | SchNet | DimeNet++ | PSEUD $\sigma$ -D |
+ | --- | --- | --- | --- | --- |
+ | ${\epsilon }_{\text{HOMO}}$ | meV | 243.4 | 230.4 | 214.4 |
+ | ${\epsilon }_{\text{LUMO}}$ | meV | 225.0 | 184.2 | 175.8 |
+
+ Table 5: Ablation using SchNet as the backbone in the fully supervised setting.
+
+ | Property | Unit | PSEUD $\sigma$ | -pseudo-label | -uncertainty | -student | -uniform |
+ | --- | --- | --- | --- | --- | --- | --- |
+ | ${\epsilon }_{\text{HOMO}}$ | meV | 32.9 | 38.9 | 47.7 | 41.4 | 37.2 |
+ | ${\epsilon }_{\text{LUMO}}$ | meV | 24.7 | 27.2 | 32.1 | 31.4 | 28.8 |
+
+ § 7 CONCLUSION
+
+ We introduce PSEUD $\sigma$ , a simple, effective, model-agnostic pseudo-labeling strategy that improves the accuracy of quantum calculations in abundant-data, low-data, and out-of-distribution settings. PSEUD $\sigma$ learns from vast unlabeled data by assigning uncertainty-aware pseudo-labels. These pseudo-labels are adaptively selected to be absorbed into the model via an episodic schedule. Unlike earlier methods in QM that focus on physics-based representations, we show the potential of this data-centric approach to improve performance on a task crucial to materials and therapeutic discovery.
UAI/UAI 2022/UAI 2022 Conference/BAlqxvUs5lq/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,594 @@
+ # NeuroBE: Escalating NN Approximations of Bucket Elimination
+
+ ## Abstract
+
+ A major limiting factor in graphical model inference is the complexity of computing the partition function. Exact message-passing algorithms such as Bucket Elimination (BE) require exponentially large amounts of memory to compute the partition function; therefore, approximations are necessary. In this paper, we build upon a recently introduced methodology called Deep Bucket Elimination (DBE) that uses classical Neural Networks (NNs) to approximate the messages generated by ${BE}$ when buckets have large memory requirements. The main feature of our new scheme, called NeuroBE, is that it customizes the architecture and training of the NNs to the message size and its distribution. We also explore a new loss function for training that takes into account the estimated message cost distribution. Our experiments demonstrate significant improvements in accuracy and time compared with ${DBE}$ .
+
+ ## 1 INTRODUCTION
+
+ Two of the critical goals of probabilistic modeling are the compact representation of probability distributions and the efficient computation of their marginals and modes. Probabilistic graphical models, such as Markov networks (Pearl, 1988; Darwiche, 2009; Dechter, 2013), provide a framework to represent distributions compactly as normalized products of factors: $P\left( X\right) = \frac{1}{Z}\mathop{\prod }\limits_{\alpha }{f}_{\alpha }\left( {X}_{\alpha }\right)$ , where $X$ is a set of variables, each potential ${f}_{\alpha }$ is a function over a subset ${X}_{\alpha }$ of the variables (its scope), and $Z = \mathop{\sum }\limits_{X}\mathop{\prod }\limits_{\alpha }{f}_{\alpha }\left( {X}_{\alpha }\right)$ is the partition function. Computing the partition function is exponential in the induced width of the model's graph, even for distributions that admit a compact representation.
+
+ The partition function $Z$ is defined by two types of operations: sums and products. It can be evaluated efficiently if $\mathop{\sum }\limits_{X}\mathop{\prod }\limits_{\alpha }{f}_{\alpha }\left( {X}_{\alpha }\right)$ can be reorganized using the distributive law along a variable ordering (Dechter, 2003). This organization can be described using buckets as data structures, one for each variable in the ordering. When a bucket is processed, its associated variable is removed by creating a bucket output function, also called a message, that is passed to a subsequent bucket. The complexity of computing this function is exponential in its number of arguments, called its scope or the bucket's width. Overall, Bucket Elimination (BE) (Dechter, 1999b) is time and memory exponential in the induced width of the model's graph along the ordering.
+
+ Providing good approximations to ${BE}$ is important not only because it generates an answer to a query, but primarily because it compiles a structure and a set of messages that can be used to answer multiple queries (e.g., the probability of evidence for various evidence variables (Darwiche, 2009)). Also, the messages can be used as building blocks for generating heuristics for search to further improve performance. We will therefore consider and evaluate NeuroBE within the class of approximate ${BE}$ schemes.
+
+ Schemes that approximate ${BE}$ include the (weighted) mini-bucket scheme (Dechter and Rish, 2003; Liu and Ihler, 2012) and generalized belief propagation schemes (Yedidia et al., 2000; Mateescu et al., 2010). The recently introduced Deep Bucket Elimination (DBE) (Razeghi et al., 2021) approximates each bucket function with a neural network (NN). While this approach is inherently time-consuming, requiring the independent training of many NNs to compute the partition function of a single problem, it has yielded more accurate approximations on several benchmarks. It is important to note that, unlike weighted bucket-elimination and belief propagation schemes, ${DBE}$ can improve with time even with bounded memory, yielding an anytime framework for reasoning. Still, DBE's original design can be improved significantly, which is the focus of this paper.
+
+ Contributions. We present NeuroBE, a re-design of ${DBE}$ that addresses its one-size-fits-all policy by customizing the NN construction and training sample size to each bucket separately, in proportion to its message size. In particular, we introduce a new loss function that is sensitive to the bucket's message distribution. We also provide an analysis relating the local errors to a bound on the global error. In an extensive empirical evaluation we show that NeuroBE outperforms ${DBE}$ across all benchmarks using far fewer training samples, yielding higher accuracy in less time.
+
+ The paper is organized as follows. We first provide background on ${BE}$ and ${DBE}$ ; then we present NeuroBE, followed by an error analysis; lastly, we demonstrate the efficiency of NeuroBE empirically.
+
+ Related work. As noted, approximating and bounding Bucket Elimination has been carried out extensively over the years for all probabilistic queries. Well known is the Mini-Bucket Elimination scheme (Dechter and Rish, 2003) and its variants, such as Weighted Mini-Bucket Elimination (WMBE), augmented with message-passing cost-shifting (Liu and Ihler, 2011).
+
+ Neural network approximation of ${BE}$ was introduced in Razeghi et al. (2021). The idea is closest in spirit to the Neuro-Dynamic Programming scheme outlined in Bertsekas and Tsitsiklis (1996), where the cost-to-go functions (similar to messages) generated by dynamic programming are approximated by NNs. This technique is also highly related to Deep Reinforcement Learning (DRL) (Mnih et al., 2015) where, in the absence of a model, the value function is approximated by neural networks learned from temporal trajectories.
+
+ Recently, Graph Neural Networks (GNNs) (Scarselli et al., 2009) have been used to learn messages following the message-passing reasoning methods in graphical models (Abboud et al., 2020; Yoon et al., 2018; Heess et al., 2013). However, Yoon et al. (2018) and Heess et al. (2013) are restricted to small instances (i.e., $\sim {40}$ variables), and Abboud et al. (2020) tackles problems with a known polynomial-time approximation. GNN-based methods derive a supervised end-to-end learning algorithm that generalizes across different problem instances. In contrast, we consider a different class of algorithms, where we confine learning to within a single problem instance.
+
+ ## 2 BACKGROUND
+
+ A graphical model can be defined by a 3-tuple $\mathcal{M} = \left( {\mathbf{X},\mathbf{D},\mathbf{F}}\right)$ , where $\mathbf{X} = \left\{ {{X}_{i} : i \in V, V = \{ 1,\ldots , n\} }\right\}$ is a set of $n$ variables indexed by $V$ and $\mathbf{D} = \left\{ {{D}_{i} : i \in V}\right\}$ is the set of finite domains for each ${X}_{i}$ (i.e., each ${X}_{i}$ can only assume values in ${D}_{i}$ , and each ${D}_{i}$ is finite). Each function ${f}_{\alpha } \in \mathbf{F}$ is defined over a subset of the variables called its scope, ${X}_{\alpha }$ , where $\alpha \subseteq V$ are the indices of the variables in its scope and ${D}_{\alpha }$ denotes the Cartesian product of their domains, so that ${f}_{\alpha } : {D}_{\alpha } \rightarrow {\mathbb{R}}_{\geq 0}$ .
+
+ ![019638e3-0451-7570-941f-a427f4563be0_1_950_170_577_757_0.jpg](images/019638e3-0451-7570-941f-a427f4563be0_1_950_170_577_757_0.jpg)
+
+ Figure 1: (a) A primal graph of a GM with 7 variables. (b) Illustration of ${BE}$ with an ordering $\mathrm{{ABCEDFG}}$ .
+
+ The primal graph of a graphical model associates each variable with a node. An edge between node $i$ and node $j$ is created if and only if there is a function containing ${X}_{i}$ and ${X}_{j}$ in its scope. Figure 1a shows the primal graph of a graphical model with variables indexed from $A$ to $G$ , with functions over pairs of variables that are connected by an edge. Graphical models can be used to represent a global function, often a probability distribution, defined by $\Pr \left( X\right) \propto \mathop{\prod }\limits_{\alpha }{f}_{\alpha }\left( {X}_{\alpha }\right)$ . An important task is to compute the normalizing constant, also known as the partition function $Z = \mathop{\sum }\limits_{X}\mathop{\prod }\limits_{\alpha }{f}_{\alpha }\left( {X}_{\alpha }\right) .$
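For intuition, the partition function over explicit factor tables can be computed by brute force; this is a sketch with our own `(scope, table)` representation, feasible only for tiny models, which is exactly why BE and its approximations matter:

```python
import itertools

def partition_function(factors, domains):
    """Brute-force Z = sum over all assignments of the product of factors.

    factors: list of (scope_tuple, table) where table maps an assignment
    tuple (ordered as scope_tuple) to a non-negative value.
    domains: dict mapping each variable to its list of values.
    """
    variables = sorted(domains)
    Z = 0.0
    for assign in itertools.product(*(domains[v] for v in variables)):
        ctx = dict(zip(variables, assign))
        prod = 1.0
        for scope, table in factors:
            prod *= table[tuple(ctx[v] for v in scope)]
        Z += prod
    return Z
```

The loop is exponential in the number of variables, whereas BE's cost is exponential only in the induced width of the ordering.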
+
+ ### 2.1 BUCKET ELIMINATION
+
+ Bucket Elimination (BE) (Dechter, 1999a) is a universal exact algorithm for probabilistic inference. It is a variable elimination algorithm that can answer a wide range of queries, ranging from constraint satisfaction to pure combinatorial optimization (e.g., Most Probable Explanation (MPE/MAP)) and weighted counting (partition function, probability of evidence).
+
+ Given a variable ordering $d$ , BE (presented in Algorithm 1, omitting steps 9-12) creates a bucket tree where each node is a bucket representing a variable in the ordering $d$ . Figure 1b shows a bucket tree for the primal graph in Figure 1a along an ordering. Each bucket in this tree contains a set of the model's functions, depending on the given order of processing. For example, Bucket G in Figure 1b has functions $\{ f\left( {A, G}\right) , f\left( {F, G}\right) \}$ , the exhaustive set of the model's functions with variable $G$ in their scope. There is an arc from a bucket, say ${B}_{c}$ , to a parent bucket, ${B}_{p}$ , if ${X}_{p}$ is the latest variable in bucket ${B}_{c}$ 's message scope along the ordering (constants are placed in ${B}_{1}$ ). In the same example, there is an arc from Bucket G to Bucket F.
+
+ ${BE}$ then performs inference along the bucket tree as a one-iteration message-passing algorithm (bottom-up). It processes each bucket from the leaves to the root, passing messages from child ( $c$ ) to parent ( $p$ ). For a child variable ${X}_{c}$ , it considers all the functions in bucket ${B}_{c}$ . This includes the original functions in the graphical model as well as the messages received by processing previous variables. It then marginalizes ${X}_{c}$ out from the product of functions in ${B}_{c}$ , generating a new, so-called, bucket function or message, denoted ${\lambda }_{c \rightarrow p}$ , or ${\lambda }_{c}$ for short:
+
+ $$
+ {\lambda }_{c} = \mathop{\sum }\limits_{{X}_{c}}\mathop{\prod }\limits_{{{f}_{\alpha } \in {B}_{c}}}{f}_{\alpha } \tag{1}
+ $$
+
+ The ${\lambda }_{c}$ function is placed in ${B}_{p}$ , the bucket of ${X}_{p}$ . Once all the variables are processed, ${BE}$ outputs all the messages and the exact value of $Z$ by taking the product of all the constants present in the bucket of the first variable. We illustrate the ${BE}$ message flow on our example problem in Figure 1b.
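The bucket-message computation of Eq. 1 can be sketched directly with factors stored as explicit tables (the function name and the `(scope, table)` representation are ours; this enumerates the message scope, so it is exponential in the bucket width, as the Complexity paragraph notes):

```python
import itertools

def bucket_message(factors, elim_var, domains):
    """Compute lambda_c: sum out elim_var from the product of bucket factors.

    factors: list of (scope_tuple, table) where table maps an assignment
    tuple (ordered as scope_tuple) to a non-negative value.
    Returns a dict over assignments of the remaining (sorted) scope.
    """
    scope = sorted({v for s, _ in factors for v in s} - {elim_var})
    msg = {}
    for assign in itertools.product(*(domains[v] for v in scope)):
        ctx = dict(zip(scope, assign))
        total = 0.0
        for xc in domains[elim_var]:
            ctx[elim_var] = xc
            prod = 1.0
            for s, table in factors:
                prod *= table[tuple(ctx[v] for v in s)]
            total += prod  # sum over the eliminated variable
        msg[assign] = total
    return msg
```

For instance, with the bucket of $G$ in Figure 1b holding $f(A, G)$ and $f(F, G)$, the resulting message ${\lambda}_{G \rightarrow F}$ is a table over $(A, F)$.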
+
+ Complexity. Both the time and space complexity of ${BE}$ are exponential in the induced width, which is the largest number of variables in the scope of any message over all buckets (Dechter, 2013). Clearly, ${BE}$ becomes impractical if the induced width is large.
+
+ ### 2.2 DEEP BUCKET ELIMINATION
+
+ Given a variable ordering $d$ , Deep Bucket Elimination (DBE) (Razeghi et al., 2021) approximates each message generated in the bucket tree by training a NN whenever the scope $S$ of a bucket message is large (its width exceeds the $i$ -bound). For example, in Figure 1b, if we use an $i$ -bound of 2, instead of sending the exact function ${\lambda }_{D \rightarrow C}\left( {A, B, C}\right)$ from the bucket of $D$ to the bucket of $C$ , ${DBE}$ sends a NN approximation ${\mu }_{\theta , D \rightarrow C}\left( {A, B, C}\right)$ with parameters $\theta$ , as we describe next.
+
+ Let $B$ be a bucket with width $w > i$ -bound and output message $\lambda \left( S\right)$ with scope $S$ . ${DBE}$ then constructs a fully connected feed-forward NN having $w$ nodes in the input layer. This is followed by $L$ hidden layers with a constant number $h$ of hidden nodes per layer and the ${ReLU}$ activation function. Finally, the output layer contains one node with a real-valued output. Subsequently, ${DBE}$ generates a training set $\left\{ \left( {{s}_{n},\lambda \left( {s}_{n}\right) }\right) \right\}$ of size $N$ , where ${s}_{n}$ denotes a configuration over $S$ sampled uniformly at random and $\lambda \left( {s}_{n}\right)$ is the message value defined in Eq. 1. The NN parameters $\theta$ are then trained to minimize the mean squared error loss:
+
+ $$
+ L\left( \theta \right) = \frac{1}{N}\mathop{\sum }\limits_{{n = 1}}^{N}{\left( \lambda \left( {s}_{n}\right) - {\mu }_{\theta }\left( {s}_{n}\right) \right) }^{2}
+ $$
+
+ where ${s}_{n}$ is the ${n}^{th}$ sample in the training set and ${\mu }_{\theta }\left( {s}_{n}\right)$ is the NN output. Once training is complete, ${DBE}$ passes the trained NN ${\mu }_{\theta }$ to its parent bucket. Typically, ${DBE}$ needs to approximate many buckets in order to compute the partition function $Z$ .
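The training-set construction described above can be sketched as follows (a hypothetical helper; `message_fn` stands in for the exact bucket message $\lambda$ evaluated on a configuration):

```python
import random

def make_training_set(scope, domains, message_fn, n_samples, seed=0):
    """Uniformly sample configurations of the message scope and label each
    one with the exact bucket-message value, as in DBE's training step."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_samples):
        s = tuple(rng.choice(domains[v]) for v in scope)  # uniform config over S
        data.append((s, message_fn(s)))
    return data
```

The resulting `(s_n, lambda(s_n))` pairs are then fed to any standard regressor trained with the MSE loss above.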
+
+ Even though ${DBE}$ has yielded more accurate approximations than the weighted bucket-elimination scheme on several benchmarks, each message approximation procedure requires a large training sample size, increasing ${DBE}$ 's total time and memory requirements substantially. We therefore re-design each message approximation procedure, as elaborated in the following section.
+
+ ## 3 NEUROBE
+
+ We rename ${DBE}$ to ${NeuroBE}$ since we use mostly shallow neural networks (2 layers). Algorithm 1 presents NeuroBE. NeuroBE first creates a bucket tree along a given ordering (line 2). While processing each bucket, if its width is $\leq i$ -bound, then the message ${\mu }_{c \rightarrow p}^{ * }$ is computed exactly (line 7). Otherwise, the message is approximated using a NN (line 10). Note that in either case, if a bucket contains a NN function, then computing ${\mu }^{ * }$ in line 7 or in the NN-train method in Algorithm 3 (line 5) requires evaluating the trained NN. Finally, line 14 calculates the partition function using the functions in bucket ${B}_{1}$ .
72
+
73
Note that we denote by ${\mu }_{c \rightarrow p}^{ * }$ the exact message computed in a bucket, while we reserve the notation ${\lambda }_{c \rightarrow p}$ for the messages computed by exact BE. We do this to distinguish the exact local computation of a message, which may be based on inexact functions in the bucket, from the globally exact messages $\lambda$ computed by BE, where every bucket message is computed exactly from functions that are themselves exact. Hence, we refer to $\lambda$ as the global exact message, to ${\mu }_{c}^{ * }$ as the local exact message, and to ${\mu }_{\theta, c \rightarrow p}$ as the NN approximation of the local exact message ${\mu }_{c \rightarrow p}^{ * }$.
The difference between NeuroBE and DBE lies solely in the individual message approximation scheme, NN-train. In contrast to DBE, NeuroBE dynamically customizes the NN architecture and its training set size to the bucket message complexity (see Vapnik (1999)).
NN Architecture selection Clearly, the NN size should depend on the dimensionality of the message function; in our case, the function's scope size is the induced width, $w$. We therefore propose to adjust the NN size by making the number of hidden units, $h$, a function of $w$ while keeping the number of layers, $L$, constant. Specifically, we select $h = b \cdot w$, where $b$ is a constant satisfying $b \geq 1$. Figure 2 illustrates an example NN model with an input layer of size $w$ and 2 hidden layers of dimension $h$, varying linearly with $b$. Through this rule, NeuroBE fits the NN's architecture to the message size. We next quantify the capacity of such NNs and use it to determine their training sample sizes.
Algorithm 1 NeuroBE

---

Input: Graphical model $\mathcal{M} = \left( {\mathbf{X},\mathbf{D},\mathbf{F}}\right)$, ordering $d = {X}_{1},\ldots ,{X}_{n}$
Parameters: $i$-bound $i$; #layers $L$; constants $b,\eta$
Output: the partition function constant and bucket messages
for $c$ in $n \ldots 1$ do
  (Initialize buckets) put all unplaced functions mentioning ${X}_{c}$ in ${B}_{c}$
end for
for $c$ in $n \ldots 1$ do
  Let ${X}_{p}$ be the parent variable of ${X}_{c}$ in the bucket-tree
  if width$\left( {B}_{c}\right) < i$ then
    compute ${\mu }_{c \rightarrow p}^{ * } \leftarrow \mathop{\sum }\limits_{{X}_{c}}\mathop{\prod }\limits_{{{f}_{\alpha } \in {B}_{c}}}{f}_{\alpha }$
  else
    ${\mu }_{\theta , c \rightarrow p} \leftarrow$ NN-train$\left( {\left\{ {{f}_{\alpha } \mid {f}_{\alpha } \in {B}_{c}}\right\} , L, b,\eta }\right)$
  end if
  Put ${\mu }_{c \rightarrow p}^{ * }$ or ${\mu }_{\theta , c \rightarrow p}$ in ${B}_{p}$
end for
$Z = \mathop{\sum }\limits_{{X}_{1}}\mathop{\prod }\limits_{{{f}_{\alpha } \in {B}_{1}}}{f}_{\alpha }$
return $Z$ and all messages generated

---
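To make the sum-product step of Algorithm 1 concrete, the following is a minimal sketch (ours, not the authors' code) of the exact message computation in a bucket: for every configuration $s$ of the output scope, it sums over the eliminated variable the product of all bucket functions. Representing factors as dictionaries is an illustrative choice.

```python
import itertools

def bucket_message(factors, scopes, domains, elim_var):
    """Exact bucket message: mu*(s) = sum_x prod_f f(s, x),
    eliminating `elim_var` (the sum-product step of Algorithm 1).

    factors: list of dicts mapping value-tuples (ordered by scope) -> float
    scopes:  list of variable-name tuples, one per factor
    domains: dict mapping variable name -> list of its values
    """
    out_vars = sorted({v for sc in scopes for v in sc} - {elim_var})
    msg = {}
    for s in itertools.product(*(domains[v] for v in out_vars)):
        assign = dict(zip(out_vars, s))
        total = 0.0
        for x in domains[elim_var]:          # sum over the bucket variable
            assign[elim_var] = x
            prod = 1.0
            for f, sc in zip(factors, scopes):
                prod *= f[tuple(assign[v] for v in sc)]
            total += prod
        msg[s] = total
    return msg
```

For example, eliminating $X$ from two factors $f(A,X)$ and $g(X,B)$ yields a message over $(A,B)$.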
![019638e3-0451-7570-941f-a427f4563be0_3_321_978_364_246_0.jpg](images/019638e3-0451-7570-941f-a427f4563be0_3_321_978_364_246_0.jpg)

Figure 2: For a bucket of width $w$, we illustrate a NN architecture with $L$ layers and $b \cdot w$ hidden units per layer, with $b \geq 1$.
NN complexity. The pseudo-dimension (Pollard, 1984; Anthony and Bartlett, 2002) is often used to measure the expressive power of a set of functions that can be learned by a statistical regression algorithm. Bounds on the pseudo-dimension of the ReLU NNs used in our work are provided by Bartlett et al. (2019). We use the lower bound as a proxy to estimate the pseudo-dimension $\rho$ of the NN ${\mu }_{\theta }$, with $L$ layers and $b \cdot w$ hidden units, approximating ${\mu }^{ * }$ of width $w$, yielding (see appendix for the derivation):

$$
\rho \left( w\right) \propto {\left( L * b * w\right) }^{2}\log \left\lbrack \left( {b * w}\right) \right\rbrack \tag{2}
$$

The above equation ties the complexity of the candidate NN to the width of the message ${\mu }^{ * }$ it approximates.
Sample Complexity As suggested in Vapnik (1999), we choose a sample size $N$ that is proportional to the pseudo-dimension (Eq. 2) of each NN as a function of its number of arguments $w$:

$$
N\left( w\right) = \eta * {\left( L * b * w\right) }^{2}\log \left( {b * w}\right) \tag{3}
$$

where $\eta$ is a constant. The sample size $N\left( w\right)$ often exceeds memory limits for high-width buckets, even with the simplest NN architecture $\left( {L = 1, b = 1}\right)$. We therefore use $\eta$ as a control over the sample size, and cap the sample size per bucket at ${10}^{6}$. In general, for high induced-width problems we keep $\eta$ small and train NNs with small sample sizes, while for problems with small induced-width we let $\eta$ take high values.

Algorithm 2 generate-samples(F, N, X)

---

Input: $F$, a set of functions over scope $S \cup \{ X\}$, where $X$ is the variable to be eliminated, and $N$, an integer. ${\mu }^{ * }$ denotes the output message over $S$.
Output: $D$, a set of $N$ samples $\left\{ \left( {s,{\mu }_{\text{nor}}^{ * }\left( s\right) }\right) \right\}$
initialize $D = \{\}$
for $i = 1..N$ do
  $s \leftarrow$ sample uniformly from domain$(S)$
  ${\mu }^{ * }\left( s\right) \leftarrow \mathop{\sum }\limits_{x}\mathop{\prod }\limits_{{f \in F}}f\left( {s, x}\right)$ \{Eq. 4\}
  ${\mu }_{\text{nor}}^{ * }\left( s\right) \leftarrow$ Normalize ${\mu }^{ * }\left( s\right)$ \{Eq. 5\}
  Add $\left( s,{\mu }_{\text{nor}}^{ * }\left( s\right) \right)$ to $D$
end for
return $D$

---
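As an illustration (our sketch; the paper leaves the log base and the exact constants open), the architecture rule $h = b \cdot w$ and the capped sample-size rule of Eq. 3 amount to:

```python
import math

def hidden_units(w, b=1):
    # NeuroBE's architecture rule: h = b*w hidden units per layer, b >= 1.
    return b * w

def sample_size(w, L=2, b=1, eta=1.0, cap=10**6):
    # Eq. 3: N(w) = eta * (L*b*w)^2 * log(b*w), capped per bucket at 10^6.
    # We assume the natural log here; the paper does not fix the base.
    n = eta * (L * b * w) ** 2 * math.log(b * w)
    return int(min(n, cap))
```

For a width-10 bucket with $L = 2$, $b = 1$, $\eta = 1$ this gives roughly 921 samples, while high-width buckets hit the ${10}^{6}$ cap.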
Sample Generation Given a generic bucket $B$ of a variable $X$ and a local bucket function ${\mu }^{ * }\left( S\right)$, where $S$ is the scope of the bucket's output function, we create a training set ${D}_{T}$ (see Algorithm 2) by generating $N$ (Eq. 3) training examples $\left( {s,{\mu }_{\text{nor}}^{ * }\left( s\right) }\right)$, sampling configurations $\{ S = s\}$ uniformly at random from the domain of scope $S$. We compute the message value for each configuration $s$ as

$$
{\mu }^{ * }\left( s\right) = \mathop{\sum }\limits_{{x \in \text{domain}\left( X\right) }}\mathop{\prod }\limits_{{f \in B}}f\left( {s, x}\right) \tag{4}
$$

Each generated configuration $s$ is then paired with its normalized message value ${\mu }_{\text{nor}}^{ * }\left( s\right) \in \left\lbrack {0,1}\right\rbrack$:

$$
{\mu }_{\text{nor}}^{ * }\left( s\right) = \frac{{\mu }^{ * }\left( s\right) - {\mu }_{\text{min}}^{ * }\left( {D}_{T}\right) }{{\mu }_{\text{max}}^{ * }\left( {D}_{T}\right) - {\mu }_{\text{min}}^{ * }\left( {D}_{T}\right) } \tag{5}
$$

where ${\mu }_{min}^{ * }$ and ${\mu }_{max}^{ * }$ are the minimum and maximum ${\mu }^{ * }$ values in ${D}_{T}$. Note that we also normalize the input sample configurations to $\left\lbrack {-1,1}\right\rbrack$ for benchmarks with domain sizes $> 2$, to accelerate training, as suggested in Cun et al. (1991). Some benchmarks in our experiments have very large message values (e.g. ${e}^{51}$). In such cases we apply a log transform, i.e. we use $\log {\mu }^{ * }$, $\log {\mu }_{\min }^{ * }\left( {D}_{T}\right)$ and $\log {\mu }_{\max }^{ * }\left( {D}_{T}\right)$ in place of the corresponding ${\mu }^{ * }$ values when computing the target NN values ${\mu }_{\text{nor}}^{ * }$ in Eq. 5. (For simplicity we will omit the normalization subscript on the output function ${\mu }^{ * }$ and will denote it explicitly only when relevant.)
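This sampling-and-normalization step (Algorithm 2 with Eq. 5) can be sketched as follows, using numpy; `bucket_value` stands in for the sum-product of Eq. 4, and the log branch is the variant used for very large message values. This is our illustration, not the authors' code.

```python
import numpy as np

def generate_samples(bucket_value, domain_sizes, n, use_log=False, seed=0):
    """Draw n uniform configurations over the message scope and pair
    each with its min-max-normalized message value (Eqs. 4-5).

    bucket_value(s): callable returning mu*(s) = sum_x prod_f f(s, x)
    domain_sizes:    list of domain sizes, one per scope variable
    """
    rng = np.random.default_rng(seed)
    S = np.stack([rng.integers(0, d, size=n) for d in domain_sizes], axis=1)
    mu = np.array([bucket_value(tuple(s)) for s in S], dtype=float)
    if use_log:                       # log transform for huge values (e.g. e^51)
        mu = np.log(mu)
    lo, hi = mu.min(), mu.max()
    mu_nor = (mu - lo) / (hi - lo)    # Eq. 5: training targets in [0, 1]
    return S, mu_nor
```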
Loss Function DBE used the mean square error loss function for training each NN. However, in the context of the partition function we want to weigh errors associated with high-cost tuples more heavily than errors associated with low-cost tuples: a squared error on a large message value is potentially much costlier than the same error on a small one, because of its larger contribution to the sum-product operation in Eq. 1. Hence, we propose an importance mean square error (abbreviated I.m.s.e) loss function, whose weights depend on the message values.

We define the importance weight of a sample as its relative weight in the function distribution over the given dataset $D$. Namely, given a dataset $D$, the importance weight $W\left( s\right)$ is normalized over the ${\mu }^{ * }$ values:

$$
W\left( s\right) = \frac{{\mu }^{ * }\left( s\right) }{\mathop{\sum }\limits_{{s \in D}}{\mu }^{ * }\left( s\right) } \tag{6}
$$

Definition 1 (Importance mean square error loss (I.m.s.e)). Let ${\mu }_{\theta }$ be the learned NN function approximating function ${\mu }^{ * }$. Then the I.m.s.e loss function for a given mini-batch $D \subseteq {D}_{T}$ of size $\# D$ is defined by

$$
{L}_{D}\left( {{\mu }^{ * },{\mu }_{\theta }}\right) = \frac{1}{\# D}\mathop{\sum }\limits_{{s \in D}}{\left( {\mu }^{ * }\left( s\right) - {\mu }_{\theta }\left( s\right) \right) }^{2} * W\left( s\right) \tag{7}
$$

log transformation. For problem instances whose function costs and message costs are very large, we apply the log transformation everywhere, replacing ${\mu }^{ * }$ by $\log {\mu }^{ * }$ in the above equations. The weight function then becomes:

$$
{W}_{D}^{log}\left( s\right) = \frac{\log {\mu }^{ * }\left( s\right) - \log {\mu }_{min}^{ * }\left( D\right) }{\mathop{\sum }\limits_{{s \in D}}\left( {\log {\mu }^{ * }\left( s\right) - \log {\mu }_{min}^{ * }\left( D\right) }\right) } \tag{8}
$$
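The two weighting schemes (Eq. 6 for raw values, Eq. 8 for log-space targets) plug into Eq. 7 as follows; this is our sketch, with positive targets assumed:

```python
import numpy as np

def imse_loss(mu_true, mu_pred, log_space=False):
    """Importance mean square error (Definition 1) over one mini-batch.
    Samples with large message values dominate the loss."""
    mu_true = np.asarray(mu_true, dtype=float)
    mu_pred = np.asarray(mu_pred, dtype=float)
    if log_space:
        # Eq. 8: weights from log mu* shifted by the batch minimum
        shifted = mu_true - mu_true.min()
        w = shifted / shifted.sum()
    else:
        w = mu_true / mu_true.sum()   # Eq. 6
    # Eq. 7: (1/#D) * sum_s (mu*(s) - mu_theta(s))^2 * W(s)
    return float(np.mean((mu_true - mu_pred) ** 2 * w))
```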
Algorithm 3 describes the procedure NN-train. Its input parameters are $L, b,\eta$, where $L$ is the number of layers, $b$ is a constant determining the number of hidden units $b \cdot w$ (line 1), and $\eta$ is a constant determining the training sample size $N$ (line 2, Eq. 3). A major step occurs next, where the algorithm generates a data set of samples (line 3, see also Algorithm 2) and splits it into a training set of size $N$, a validation set of size $N/4$ and a test set of size ${50k}$ (line 4). If the bucket $B$ contains a trained NN, this step requires evaluating that NN. Lines 8-12 then perform batch training, updating the NN parameters $\theta$ using the I.m.s.e loss function (Eq. 7) and the Adam optimizer (Kingma and Ba, 2014), with a learning rate of 0.001 and a batch size $\left( {N}_{B}\right)$ of 256 across all benchmarks. At the end of each epoch, the current model is evaluated on a holdout validation set (line 14). We then evaluate the early-stopping criterion (line 15), which is assigned True when either the maximum limit #epochs is reached or the validation error increases for two consecutive epochs. Once training is complete, we compute the maximum log relative error between the target and the NN-approximated messages over a test set (lines 18-19). In the next section, we use this to analyse error propagation in NeuroBE. The NN-train procedure then returns the approximated message ${\mu }_{\theta }$, along with its estimated error.

Algorithm 3 NN-train $\left( {F, X, L, b,\eta ,\# \text{epochs}}\right)$

---

Input: $F$: a set of functions over scope $S \cup \{ X\}$ where $X$ is to be removed; $w$: scope size.
Parameters: $L$: # layers in NN; #epochs; $\eta , b$: constants
Output: ${\mu }_{\theta }$: NN message approximation, $\widehat{\epsilon }$: an estimated bucket error bound, ${\widehat{\epsilon }}_{avg}$: estimated average bucket error
$\# h \leftarrow b * w$
$N \leftarrow \#$samples$\left( {w,\eta , L, b}\right)$ \{Eq. 3\}
Data $\leftarrow$ generate-samples$\left( {F, N + N/4 + {50k}, X}\right)$
${D}_{\text{Train}},{D}_{\text{Val}},{D}_{\text{Test}} \leftarrow$ Split(Data)
Initialize NN parameters $\theta$, $p = 1$, early-stopping $\leftarrow$ False
while $p \leq \#$epochs and $\neg$ early-stopping do
  ${D}_{1},\ldots,{D}_{k} \leftarrow$ divide ${D}_{\text{Train}}$ into minibatches
  for $i = 1..k$ do
    Let ${D}_{i} = \left\{ \left( {s,{\mu }^{ * }}\right) \right\}$
    Compute $\left\{ {{\mu }_{\theta }\left( s\right) \mid s \in {D}_{i}}\right\}$
    ${\operatorname{loss}}_{{D}_{i}} \leftarrow {L}_{{D}_{i}}\left( {{\mu }^{ * },{\mu }_{\theta }}\right)$ \{Eq. 7\}
    $\theta \leftarrow$ update $\theta$ by optimize$\left( \text{Adam},{\operatorname{loss}}_{{D}_{i}},\theta \right)$
  end for
  ${\operatorname{loss}}_{{D}_{\text{Val}}} \leftarrow {L}_{{D}_{\text{Val}}}\left( {{\mu }^{ * },{\mu }_{\theta }}\right)$ \{for stop condition\}
  early-stopping $\leftarrow$ evaluate early-stopping$\left( {\operatorname{loss}}_{{D}_{\text{Val}}}\right)$
  $p \leftarrow p + 1$
end while
Unnormalize $\left\{ {{\mu }_{\theta }\left( s\right) ,{\mu }^{ * }\left( s\right) \mid s \in {D}_{\text{Test}}}\right\}$ \{inverse of Eq. 5\}
$\widehat{\epsilon } \leftarrow \mathop{\max }\limits_{{s \in {D}_{\text{Test}}}}\left( {\log {\mu }^{ * }\left( s\right) - \log {\mu }_{\theta }\left( s\right) }\right)$
return ${\mu }_{\theta },\widehat{\epsilon }$

---
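The end-to-end procedure can be sketched as below. This is our simplification, not the paper's implementation: one hidden ReLU layer of $b \cdot w$ units and plain gradient descent stand in for the two-hidden-layer network and Adam, and `eps_hat` follows the max-log-error computed at the end of Algorithm 3.

```python
import numpy as np

def nn_train(X, y, w, b=1, lr=0.01, epochs=100, batch=32, seed=0):
    """Sketch of Algorithm 3 (simplified): fit mu_theta to positive
    targets y with the importance-weighted squared error, minibatch
    gradient descent (standing in for Adam), and early stopping when
    the validation loss rises for two consecutive epochs.
    Returns the predictor and eps_hat, the max log error on held-out data."""
    rng = np.random.default_rng(seed)
    n_tr = int(0.8 * len(X))
    Xtr, ytr, Xva, yva = X[:n_tr], y[:n_tr], X[n_tr:], y[n_tr:]
    h = b * w                                   # hidden units: b * width
    W1 = rng.normal(0, X.shape[1] ** -0.5, (X.shape[1], h)); b1 = np.zeros(h)
    W2 = rng.normal(0, h ** -0.5, (h, 1)); b2 = np.zeros(1)

    def forward(A):
        H = np.maximum(A @ W1 + b1, 0.0)        # ReLU hidden layer
        return H, (H @ W2 + b2).ravel()

    def imse(t, p):                             # Eq. 7 with Eq. 6 weights
        wgt = t / t.sum()
        return float(np.sum(wgt * (t - p) ** 2))

    prev, rises = np.inf, 0
    for _ in range(epochs):
        order = rng.permutation(n_tr)
        for s0 in range(0, n_tr, batch):
            sel = order[s0:s0 + batch]
            A, t = Xtr[sel], ytr[sel]
            H, p = forward(A)
            g = -2 * (t / t.sum()) * (t - p)    # dL/dp for the weighted loss
            gW2 = H.T @ g[:, None]; gb2 = np.array([g.sum()])
            gH = np.outer(g, W2.ravel()) * (H > 0)
            gW1 = A.T @ gH; gb1 = gH.sum(axis=0)
            W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
        val = imse(yva, forward(Xva)[1])
        rises = rises + 1 if val > prev else 0  # early-stopping criterion
        prev = val
        if rises >= 2:
            break
    mu_theta = lambda A: forward(A)[1]
    resid = np.log(yva) - np.log(np.clip(mu_theta(Xva), 1e-9, None))
    return mu_theta, float(resid.max())
```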
Complexity. The time and space complexity for learning a single message in NeuroBE is linear in the sample size. In contrast to DBE, here the sample size varies with the bucket's width.
## 4 ERROR ANALYSIS
We next analyse the relationship between the local errors contributed by each approximated message and the global partition function error.

Definition 2 (local and global bucket errors). Let $\lambda$ be the (global) exact message generated in $B$ by the exact BE algorithm, let ${\mu }^{ * }$ be the (local) exact message in $B$ computed from the functions currently in it, and let $\mu = {NN}\text{-train}\left( {\mu }^{ * }\right)$ be its NN approximation. Then the Local Bucket Error is the function

$$
E = \log {\mu }^{ * } - \log \mu
$$
<table><tr><td colspan="5" rowspan="2">Problem Description</td><td rowspan="4">ref Z</td><td rowspan="2">WMB</td><td rowspan="4">#NB</td><td colspan="4" rowspan="2">DBE (#h=100, N=320k)</td><td colspan="11">NeuroBE (#h=3w, ${\mathrm{N}}_{\min } = {49}\mathrm{k}$ )</td></tr><tr><td colspan="3" rowspan="2">Statistics on resources</td><td colspan="4">Loss : mean square error</td><td colspan="4">Loss : Imp. mean square error</td></tr><tr><td colspan="5">i-bound=20</td><td rowspan="2">error</td><td colspan="4">statistics on error ( 5 runs )</td><td colspan="4">statistics on error ( 5 runs )</td><td colspan="4">statistics on error ( 5 runs )</td></tr><tr><td>Id</td><td>name</td><td>k</td><td>#v</td><td>w</td><td>avg error</td><td>min error</td><td>stdev</td><td>time</td><td>${\mathrm{h}}_{\max }$</td><td>${\mathrm{N}}_{avg}$</td><td>${\mathrm{N}}_{\max }$</td><td>avg error</td><td>min error</td><td>std</td><td>time (h)</td><td>avg error</td><td>min error</td><td>std</td><td>time (h)</td></tr><tr><td>1</td><td>pedigree13</td><td>3</td><td>888</td><td>33</td><td>-31.18</td><td>6.4696</td><td>127</td><td>5.32</td><td>2.62</td><td>3.06</td><td>11</td><td>96</td><td>218k</td><td>706k</td><td>9.04</td><td>7.86</td><td>0.8</td><td>7.5</td><td>1.11</td><td>0.76</td><td>0.25</td><td>7.5</td></tr><tr><td>2</td><td>pedigree41</td><td>5</td><td>885</td><td>32</td><td>-76.04</td><td>4.1497</td><td>92</td><td>4.27</td><td>3.25</td><td>0.73</td><td>9.9</td><td>93</td><td>190k</td><td>658k</td><td>10.2</td><td>8.4</td><td>2.16</td><td>6.2</td><td>0.47</td><td>0.153</td><td>0.21</td><td>6.2</td></tr><tr><td>3</td><td>pedigree51</td><td>5</td><td>871</td><td>35</td><td>-77.27</td><td>9.7624</td><td>120</td><td>23.92</td><td>9.23</td><td>12.7</td><td>13</td><td>102</td><td>259k</td><td>809k</td><td>11.73</td><td>10.37</td><td>1.26</td><td>10.4</td><td>3.51</td><td>1.96</td><td>0.89</td><td>10.4</td></tr><tr><td>4</td><td>pedigree34</td><td>5</td><td>922</td><td>33</td><td>-64.23</td><td>7.0762</td><td>106</td><td>5.91</td><td>1.57</td><td>5.98</td><td>11</td><td>96</td><td>211k</td><td>706k</td><td>6.14</td><td>4.56</td><td>4.14</td><td>6.96</td><td>0.65</td><td>0.23</td><td>0.29</td><td>6.96</td></tr><tr><td>5</td><td>pedigree7</td><td>4</td><td>867</td><td>34</td><td>-64.82</td><td>6.0012</td><td>108</td><td>11.26</td><td>5.18</td><td>7.8</td><td>11</td><td>99</td><td>350k</td><td>900k</td><td>#</td><td>#</td><td>#</td><td>10.7</td><td>1.75</td><td>1.21</td><td>0.7</td><td>10.7</td></tr><tr><td>7</td><td>pedigree19</td><td>5</td><td>693</td><td>28</td><td>-59.020</td><td>2.5809</td><td>43</td><td>6.054</td><td>5.41</td><td>0.92</td><td>9.4</td><td>71</td><td>149k</td><td>482k</td><td>9.14</td><td>8.7</td><td>0.6</td><td>5</td><td>2.61</td><td>1.91</td><td>0.6</td><td>5</td></tr></table>
<table><tr><td colspan="5" rowspan="2">Problem Description</td><td rowspan="4">ref Z</td><td rowspan="2">WMB</td><td rowspan="4">#NB</td><td colspan="4" rowspan="2">DBE (#h=100, N=320k)</td><td colspan="11">NeuroBE (#h=w, Nmin = 19k)</td></tr><tr><td colspan="3" rowspan="2">Statistics on resources</td><td colspan="4">Loss : mean square error</td><td colspan="4">Loss : Imp. mean square error</td></tr><tr><td colspan="5">i-bound=20</td><td rowspan="2">error</td><td colspan="4">statistics on error ( 5 runs )</td><td colspan="4">statistics on error (5 runs)</td><td colspan="4">statistics on error ( 5 runs )</td></tr><tr><td>Id</td><td>name</td><td>k</td><td>#v</td><td>w</td><td>avg error</td><td>min error</td><td>stdev</td><td>time</td><td>${\mathrm{h}}_{\max }$</td><td>${\mathrm{N}}_{\text{avg }}$</td><td>${\mathrm{N}}_{\max }$</td><td>avg error</td><td>min error</td><td>std</td><td>time (h)</td><td>avg error</td><td>min error</td><td>std</td><td>time (h)</td></tr><tr><td>1</td><td>grid4040f10</td><td>2</td><td>1600</td><td>55</td><td>5490</td><td>215.45</td><td>308</td><td>97.1</td><td>11.8</td><td>65.2</td><td>12</td><td>55</td><td>120k</td><td>364k</td><td>14.57</td><td>0.15</td><td>8.94</td><td>7.69</td><td>9.71</td><td>2.1</td><td>0.91</td><td>6.4</td></tr><tr><td>2</td><td>grid4040f5</td><td>2</td><td>1600</td><td>55</td><td>2800</td><td>84.92</td><td>308</td><td>39.9</td><td>6.28</td><td>35</td><td>12</td><td>55</td><td>120k</td><td>364k</td><td>8.86</td><td>2.05</td><td>8.21</td><td>7.66</td><td>3.7</td><td>0.2</td><td>3.06</td><td>6.3</td></tr><tr><td>3</td><td>grid4040f2</td><td>2</td><td>1600</td><td>55</td><td>1220</td><td>25.24</td><td>308</td><td>7.34</td><td>1.2</td><td>5.4</td><td>12</td><td>55</td><td>120k</td><td>364k</td><td>3.15</td><td>1.48</td><td>1.66</td><td>7.48</td><td>2.28</td><td>1.2</td><td>0.91</td><td>6.1</td></tr><tr><td>4</td><td>grid4040f15</td><td>2</td><td>1600</td><td>55</td><td>8200</td><td>338.2</td><td>308</td><td>83.46</td><td>41.8</td><td>34.2</td><td>13</td><td>55</td><td>75k</td><td>228k</td><td>24.75</td><td>6.62</td><td>18.1</td><td>5.7</td><td>17.87</td><td>7.45</td><td>9.16</td><td>5.7</td></tr><tr><td>5</td><td>grid4040f10w</td><td>2</td><td>1600</td><td>114</td><td>5637</td><td>297.7</td><td>376</td><td>100.5</td><td>6.4</td><td>82.2</td><td>21</td><td>114</td><td>150k</td><td>670k</td><td>67.76</td><td>32.7</td><td>26</td><td>11</td><td>54.18</td><td>25.01</td><td>21.1</td><td>11.1</td></tr><tr><td>6</td><td>grid4040f5w</td><td>2</td><td>1600</td><td>114</td><td>2819</td><td>136.99</td><td>376</td><td>78.2</td><td>72.6</td><td>5.6</td><td>21</td><td>114</td><td>150k</td><td>670k</td><td>5.37</td><td>1.65</td><td>4.77</td><td>11.6</td><td>9.62</td><td>4.62</td><td>5.27</td><td>12.1</td></tr><tr><td>7</td><td>grid4040f2w</td><td>2</td><td>1600</td><td>114</td><td>1231</td><td>32</td><td>376</td><td>15.12</td><td>0.92</td><td>20.5</td><td>18</td><td>114</td><td>150k</td><td>670k</td><td>10.15</td><td>8.81</td><td>1.44</td><td>11.8</td><td>5.56</td><td>2.92</td><td>2.11</td><td>10</td></tr></table>
(a) pedigree (b) Grid-hard (c) Grid-easy (d) DBN
<table><tr><td colspan="5" rowspan="2">Problem Description</td><td rowspan="4">ref Z</td><td rowspan="2">WMB</td><td rowspan="4">#NB</td><td colspan="4" rowspan="2">DBE (#h=100, N=320k)</td><td colspan="11">NeuroBE (#h=w, Nmin = 20k)</td></tr><tr><td colspan="3" rowspan="2">Statistics on resources</td><td colspan="4">Loss : mean square error</td><td colspan="4">Loss: Imp. mean square error</td></tr><tr><td colspan="5">i-bound=10</td><td rowspan="2">error</td><td colspan="4">statistics on error ( 5 runs )</td><td colspan="4">statistics on error ( 5 runs )</td><td colspan="4">statistics on error ( 5 runs )</td></tr><tr><td>Id</td><td>name</td><td>k</td><td>#v</td><td>w</td><td>avg error</td><td>min error</td><td>stdev</td><td>time</td><td>${\mathrm{h}}_{\max }$</td><td>${\mathrm{N}}_{\text{avg }}$</td><td>${\mathrm{N}}_{\max }$</td><td>avg error</td><td>min error</td><td>std</td><td>time (h)</td><td>avg error</td><td>min error</td><td>std</td><td>time (h)</td></tr><tr><td>1</td><td>grid1010f10w</td><td>2</td><td>100</td><td>21</td><td>333.32</td><td>32</td><td>31</td><td>4.45</td><td>0.89</td><td>3.71</td><td>1.5</td><td>21</td><td>51k</td><td>87k</td><td>1.58</td><td>0.28</td><td>0.75</td><td>0.35</td><td>1.83</td><td>0.98</td><td>0.76</td><td>0.48</td></tr><tr><td>2</td><td>grid1010f10</td><td>2</td><td>100</td><td>13</td><td>303.09</td><td>1.58</td><td>8</td><td>0.7</td><td>0.05</td><td>0.54</td><td>0.3</td><td>13</td><td>23k</td><td>30k</td><td>1.28</td><td>0.59</td><td>0.41</td><td>0.06</td><td>1.18</td><td>0.86</td><td>0.32</td><td>0.05</td></tr><tr><td>3</td><td>grid2020f2</td><td>2</td><td>400</td><td>27</td><td>291.73</td><td>11.24</td><td>114</td><td>1.98</td><td>0.36</td><td>1.28</td><td>5.3</td><td>27</td><td>68k</td><td>177k</td><td>0.4</td><td>0.013</td><td>0.36</td><td>2.24</td><td>0.13</td><td>0.015</td><td>0.14</td><td>2.5</td></tr><tr><td>4</td><td>grid2020f10</td><td>2</td><td>400</td><td>27</td><td>1312</td><td>80.86</td><td>114</td><td>10.04</td><td>1.05</td><td>9.3</td><td>5.5</td><td>27</td><td>68k</td><td>177k</td><td>2.51</td><td>1.62</td><td>1.31</td><td>2.22</td><td>2.4</td><td>0.37</td><td>1.98</td><td>2.27</td></tr><tr><td>5</td><td>grid2020f5</td><td>2</td><td>400</td><td>27</td><td>665.12</td><td>39.44</td><td>114</td><td>5.75</td><td>0.24</td><td>3.1</td><td>5.5</td><td>27</td><td>68k</td><td>177k</td><td>0.84</td><td>0.37</td><td>0.53</td><td>1.95</td><td>0.8</td><td>0.081</td><td>0.78</td><td>2.3</td></tr><tr><td>6</td><td>grid2020f15</td><td>2</td><td>400</td><td>27</td><td>1963</td><td>122.91</td><td>114</td><td>17.8</td><td>3.08</td><td>9.8</td><td>3</td><td>27</td><td>68k</td><td>177k</td><td>2.39</td><td>0.41</td><td>2.16</td><td>2.2</td><td>2.68</td><td>1.08</td><td>2.54</td><td>2.2</td></tr></table>
<table><tr><td colspan="5" rowspan="2">Problem Description</td><td rowspan="4">ref Z</td><td>WMB</td><td rowspan="4">#NB</td><td colspan="4" rowspan="2">DBE (#h=100, N=320k)</td><td colspan="11">NeuroBE (#h=3w, Nmin = 147k )</td></tr><tr><td rowspan="3">error</td><td colspan="3" rowspan="2">Statistics on resources</td><td colspan="4">Loss : mean square error</td><td colspan="4">Loss: Imp. mean square error</td></tr><tr><td colspan="5">i-bound=20</td><td colspan="4">statistics on error ( 5 runs )</td><td colspan="4">statistics on error ( 5 runs )</td><td colspan="4">statistics on error ( 5 runs )</td></tr><tr><td>Id</td><td>name</td><td>k</td><td>#v</td><td>w</td><td>avg error</td><td>min error</td><td>stdev</td><td>time (h)</td><td>${\mathrm{h}}_{\max }$</td><td>${\mathrm{N}}_{\text{avg }}$</td><td>${\mathrm{N}}_{\max }$</td><td>avg error</td><td>min error</td><td>std</td><td>time (h)</td><td>avg error</td><td>min error</td><td>std</td><td>time (h)</td></tr><tr><td>1</td><td>rbm20</td><td>2</td><td>40</td><td>21</td><td>58.53</td><td>0.0007</td><td>20</td><td>0.22</td><td>0.03</td><td>0.17</td><td>1.1</td><td>60</td><td>147k</td><td>147k</td><td>0.37</td><td>0.08</td><td>0.29</td><td>0.45</td><td>0.37</td><td>0.06</td><td>0.25</td><td>1.73</td></tr><tr><td>2</td><td>rbm21</td><td>2</td><td>42</td><td>22</td><td>63.15</td><td>6.39</td><td>22</td><td>0.48</td><td>0.27</td><td>0.19</td><td>1.3</td><td>63</td><td>163k</td><td>164k</td><td>0.9</td><td>0.59</td><td>0.2</td><td>0.46</td><td>0.57</td><td>0.15</td><td>0.26</td><td>1.75</td></tr><tr><td>3</td><td>rbm22</td><td>2</td><td>40</td><td>21</td><td>66.55</td><td>8.65</td><td>24</td><td>0.47</td><td>0.14</td><td>0.35</td><td>1</td><td>66</td><td>180k</td><td>182k</td><td>0.59</td><td>0.02</td><td>0.38</td><td>0.57</td><td>0.75</td><td>0.48</td><td>0.36</td><td>1.76</td></tr><tr><td>4</td><td>rbm-ferro20</td><td>2</td><td>44</td><td>23</td><td>151.16</td><td>0.005</td><td>20</td><td>1.33</td><td>0.29</td><td>1.21</td><td>1</td><td>60</td><td>147k</td><td>147k</td><td>1.11</td><td>0.38</td><td>0.75</td><td>0.44</td><td>0.75</td><td>0.37</td><td>0.32</td><td>1.13</td></tr><tr><td>5</td><td>rbm-ferro21</td><td>2</td><td>42</td><td>22</td><td>152.62</td><td>1.98</td><td>22</td><td>3.43</td><td>0.83</td><td>1.89</td><td>1.2</td><td>63</td><td>163k</td><td>164k</td><td>2.3</td><td>1.27</td><td>1.08</td><td>0.48</td><td>1.82</td><td>1.06</td><td>0.87</td><td>2.5</td></tr><tr><td>6</td><td>rbm-ferro22</td><td>2</td><td>44</td><td>23</td><td>166.11</td><td>0.517</td><td>24</td><td>6.52</td><td>3.86</td><td>1.5</td><td>1.3</td><td>66</td><td>180k</td><td>182k</td><td>5.32</td><td>2.515</td><td>2.69</td><td>0.59</td><td>4.17</td><td>3.8</td><td>0.32</td><td>1.29</td></tr><tr><td/><td>i-bound=10</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td colspan="11">NeuroBE (#h=3w, Nmin = 82k)</td></tr><tr><td>1</td><td>rbm20</td><td>2</td><td>40</td><td>21</td><td>58.53</td><td>7.85</td><td>30</td><td>0.49</td><td>0.11</td><td>0.35</td><td>1.6</td><td>100</td><td>82k</td><td>87k</td><td>1.57</td><td>0.89</td><td>1.04</td><td>0.3</td><td>1.43</td><td>0.83</td><td>0.82</td><td>0.8</td></tr><tr><td>2</td><td>rbm21</td><td>2</td><td>42</td><td>22</td><td>63.15</td><td>15.73</td><td>32</td><td>0.73</td><td>0.14</td><td>0.47</td><td>1.7</td><td>105</td><td>90k</td><td>109k</td><td>1.62</td><td>0.68</td><td>0.98</td><td>0.4</td><td>0.89</td><td>0.51</td><td>0.42</td><td>1.02</td></tr><tr><td>3</td><td>rbm22</td><td>2</td><td>40</td><td>21</td><td>66.55</td><td>27.46</td><td>34</td><td>0.65</td><td>0.19</td><td>0.41</td><td>1.8</td><td>110</td><td>99.5k</td><td>121k</td><td>0.89</td><td>0.03</td><td>0.6</td><td>0.46</td><td>0.94</td><td>0.83</td><td>0.61</td><td>1.3</td></tr></table>
Figure 3: Results on the performance of NeuroBE against DBE and WMB. k: domain size, #v: number of variables, w: induced width, #NB: number of buckets trained with NNs, $\# h$: number of hidden units per layer (maximum $\# h$ reported for NeuroBE), $N$: number of training samples (minimum, average and maximum $N$ reported for NeuroBE), error: L1 error between the referenced and estimated $\log \left( Z\right)$ (minimum, average and standard deviation over 5 runs reported for DBE and NeuroBE), time: average time taken to obtain the estimated error. A # in a cell denotes that the estimated partition function is $- \infty$. *Note: here the referenced $Z$ is approximated as in Kask et al. (2020).
The Global Bucket Error is

$$
G = \log \lambda - \log \mu
$$

Both errors are logs of relative errors. We use the log relative error here because bounding the global error as a function of the local errors turned out to be easier.
Theorem 1. Assume a bucket-chain along an ordering $d$ and let $B$ be a bucket along the chain at index $c$. Let $E\left( s\right) = \ln {\mu }^{ * }\left( s\right) - \ln \mu \left( s\right)$ as defined above and let $\epsilon = \mathop{\max }\limits_{{s \in D\left( S\right) }}\left| {E\left( s\right) }\right|$, where $S$ is the scope of the outgoing message from $B$ and $D\left( S\right)$ is the set of all possible configurations of $S$. Then the global error of bucket $c$ obeys

$$
{G}_{c} = \ln {\lambda }_{c} - \ln {\mu }_{c} \leq \mathop{\sum }\limits_{{k = 0}}^{{n - c}}{\epsilon }_{c + k}
$$

In particular, since ${\lambda }_{1} = Z$, for the partition function

$$
{G}_{1} = \ln Z - \ln {\mu }_{1} \leq \mathop{\sum }\limits_{{k = 0}}^{{n - 1}}{\epsilon }_{1 + k} \tag{9}
$$
For the proof see the Appendix.
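While the proof is in the appendix, the mechanism is easy to check numerically: summation (the bucket operation) cannot amplify a log-space error bound, so per-bucket errors accumulate at most additively. A toy check of this fact (our illustration, not the appendix proof):

```python
import numpy as np

rng = np.random.default_rng(1)

# Exact positive message values and a perturbed approximation with
# |ln mu*(s) - ln mu(s)| <= 0.05 for every configuration s.
mu_star = rng.random(1000) + 0.1
mu = mu_star * np.exp(rng.uniform(-0.05, 0.05, size=1000))

eps = np.max(np.abs(np.log(mu_star) - np.log(mu)))   # local bound
global_err = abs(np.log(mu_star.sum()) - np.log(mu.sum()))

# Summing preserves the bound: |ln sum mu* - ln sum mu| <= eps,
# which is why the per-bucket bounds telescope into Eq. 9.
assert global_err <= eps
```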
In the next section we evaluate NeuroBE.

## 5 EXPERIMENTS

### 5.1 EXPERIMENT SETUP
We ran experiments comparing NeuroBE against the Weighted Mini-Bucket Elimination scheme (WMB) (Dechter and Rish, 2003; Liu and Ihler, 2012) and DBE (Razeghi et al., 2021).

Benchmarks Following the methodology of DBE, we evaluated NeuroBE on instances selected from three well-known benchmarks from the UAI repository used in Kask et al. (2020): grids (vision domain), pedigrees (genetic linkage analysis) and DBNs. We targeted diverse benchmarks (in structure and level of determinism) and aimed for different levels of hardness. Thus, in the grids benchmark we distinguish between problems that can be solved exactly, which we call "grid-easy", and those that cannot, called "grid-hard". We also distinguish benchmarks that possess determinism, namely a high proportion of zero probabilities, a feature which can impact training. We randomly selected 13 instances from grids, with easy ones (width 20-30) and hard ones (1600 variables, width 55 or 114), 6 from pedigrees, which possess a high level of determinism, and 6 from DBNs, totalling 25 instances.
NN architectures and sample sizes. To trigger bucket message approximations, we used $i$-bound $= {10}$ for easy problems and $i$-bound $= {20}$ for hard ones. For problems with determinism, such as pedigrees, the NN architecture has two output layers: one indicates whether the input configuration has non-zero probability, while the other specifies the message value if so. The loss is then the sum of the cross-entropy loss (from the first output layer) and the (importance) mean square loss (from the second output layer). We keep the number of layers fixed $\left( { = 2}\right)$ across all benchmarks. Through trial and error on a representative instance selected from each benchmark, we fixed the architecture and sample-size parameters as follows: $h = {3w}$ and ${N}_{\text{avg }} \in \left\lbrack {{149k},{350k}}\right\rbrack$ for pedigrees; $h = {3w}$ and ${N}_{\text{avg }} \in \left\lbrack {{80k},{180k}}\right\rbrack$ for DBN; $h = w$ and ${N}_{\text{avg }} \in \left\lbrack {{23k},{68k}}\right\rbrack$ for grid-easy; and $h = w$ and ${N}_{\text{avg }} \in \left\lbrack {{75k},{150k}}\right\rbrack$ for grid-hard.
Performance measures We evaluate the performance of NeuroBE using: $\operatorname{error} = \left| {{\log }_{e}Z - {\log }_{e}\widehat{Z}}\right|$, where $\widehat{Z}$ is the estimate of the partition function $Z$. When the exact $Z$ is not available (for the hard grid benchmark), a surrogate ${Z}^{ * }$ is used in place of $Z$, obtained by running an advanced sampling scheme for a duration of ${100} \times 1\mathrm{hr}$ (Kask et al., 2020).

### 5.2 RESULTS
+ Figure 3 compares ${NeuroBE}$ against ${WMB}$ and ${DBE}$ over the 3 benchmarks. The first few columns show the problem statistics for instances in the respective benchmarks (pedigree, DBN and grids). We then show ${WMB}$ error, followed by the average error, minimum error, standard deviation and average time (in hours) over 5 runs (due to stochasticity) for both ${DBE}$ and ${NeuroBE}$ with m.s.e and I.m.s.e loss functions. For ${NeuroBE}$ , we also report the average and maximum #training samples, $\left( {{N}_{\text{avg }},{N}_{\text{max }}}\right)$ and maximum #hidden units, $\left( {h}_{\max }\right)$ across all buckets of the instance.
336
+
337
+ Pedigrees. We observe immediately that overall NeuroBE performs far better than ${DBE}$ , especially with the I.m.s.e loss function. In particular, it is $\geq 5$ times more accurate than ${DBE}$ for almost all the instances and takes less time, since it uses far fewer training samples. NeuroBE also outperforms WMB on most of the instances, whereas ${DBE}$ yields accuracy similar to (or even worse than) ${WMB}$ .
338
+
339
+ Grids. Here too we observe that NeuroBE outperforms ${DBE}$ in accuracy, particularly with the I.m.s.e loss, as reflected by the average error and standard deviation, even though it uses far less time. In most cases, we see a reduction in time by a factor of 2 while still producing a far better estimate. Both NeuroBE and ${DBE}$ outperform ${WMB}$ across all problem instances.
340
+
341
+ <table><tr><td colspan="3">i-bound=20</td><td colspan="4">h=3*w</td><td colspan="4">h=5*w</td></tr><tr><td>k</td><td>#v</td><td>W</td><td>$\# {\mathrm{N}}_{\text{avg }}$ (103)</td><td>standard deviation</td><td>error</td><td>t(h)</td><td>#Navg (103)</td><td>standard deviation</td><td>error</td><td>t(h)</td></tr><tr><td>3</td><td>888</td><td>33</td><td>218</td><td>0.25</td><td>1.11</td><td>7.5</td><td>437</td><td>0.17</td><td>0.34</td><td>14.3</td></tr><tr><td>5</td><td>871</td><td>35</td><td>259</td><td>0.89</td><td>3.51</td><td>10.4</td><td>518</td><td>1.57</td><td>1.42</td><td>18.3</td></tr><tr><td>5</td><td>693</td><td>28</td><td>149</td><td>0.6</td><td>2.61</td><td>5</td><td>298</td><td>0.51</td><td>0.68</td><td>8.2</td></tr></table>
342
+
343
+ <table><tr><td colspan="3">i-bound=20</td><td colspan="4">h=w</td><td colspan="4">h=w</td></tr><tr><td>Id</td><td>#v</td><td>W</td><td>#Navg (103)</td><td>standard deviation</td><td>error</td><td>t(h)</td><td>#Navg (103)</td><td>standard deviation</td><td>error</td><td>t(h)</td></tr><tr><td>5</td><td>1600</td><td>114</td><td>120</td><td>10.82</td><td>55.11</td><td>12</td><td>262</td><td>16.3</td><td>17.76</td><td>17</td></tr><tr><td>7</td><td>1600</td><td>114</td><td>120</td><td>3.6</td><td>5.5</td><td>10</td><td>262</td><td>1.39</td><td>2.34</td><td>14.5</td></tr></table>
344
+
345
+ (a) pedigree (b) Grid-hard (c) Grid-easy (d) DBN
346
+
347
+ <table><tr><td colspan="3">i-bound=10</td><td colspan="4">h=w</td><td colspan="4">h=3*w</td></tr><tr><td>Id</td><td>#v</td><td>W</td><td>#Navg (103)</td><td>standard deviation</td><td>error</td><td>t(h)</td><td>#Navg (103)</td><td>standard deviation</td><td>error</td><td>t(h)</td></tr><tr><td>1</td><td>100</td><td>21</td><td>51</td><td>0.76</td><td>1.83</td><td>0.48</td><td>202</td><td>0.08</td><td>1.2</td><td>1.13</td></tr><tr><td>2</td><td>100</td><td>13</td><td>23</td><td>0.32</td><td>1.18</td><td>0.05</td><td>38</td><td>0.15</td><td>0.12</td><td>0.09</td></tr></table>
348
+
349
+ <table><tr><td colspan="3">i-bound=20</td><td colspan="4">h=3*w</td><td colspan="4">h=5*w</td></tr><tr><td>Id</td><td>#v</td><td>W</td><td>#Navg (103)</td><td>standard deviation</td><td>error</td><td>t(h)</td><td>#Navg (103)</td><td>standard deviation</td><td>error</td><td>t(h)</td></tr><tr><td>1</td><td>40</td><td>21</td><td>147</td><td>0.25</td><td>0.37</td><td>1.73</td><td>245</td><td>0.07</td><td>0.2</td><td>3.2</td></tr><tr><td>2</td><td>42</td><td>22</td><td>163</td><td>0.26</td><td>0.57</td><td>1.75</td><td>272</td><td>0.24</td><td>0.77</td><td>3.7</td></tr><tr><td>3</td><td>40</td><td>21</td><td>163</td><td>0.36</td><td>0.75</td><td>1.76</td><td>300</td><td>0.22</td><td>0.51</td><td>4.5</td></tr><tr><td>4</td><td>44</td><td>23</td><td>147</td><td>0.32</td><td>0.75</td><td>1.13</td><td>245</td><td>0.61</td><td>0.67</td><td>2.17</td></tr><tr><td>5</td><td>42</td><td>22</td><td>164</td><td>0.87</td><td>1.82</td><td>1.3</td><td>272</td><td>0.23</td><td>1.06</td><td>2.6</td></tr><tr><td>6</td><td>44</td><td>23</td><td>182</td><td>0.32</td><td>4.17</td><td>1.29</td><td>300</td><td>0.86</td><td>3.43</td><td>2.9</td></tr></table>
350
+
351
+ Figure 4: Performance of NeuroBE when increasing #samples and/or NN complexity. ${N}_{avg}$ : average #samples, $t\left( h\right)$ : average time, Error: global error (average and standard deviation reported over 5 runs)
352
+
353
+ DBN. We report results for the DBN benchmark for 2 $i$ -bounds. For $i$ -bound $= {20}$ , NeuroBE, mostly with the I.m.s.e loss, achieves higher accuracy than ${DBE}$ with far fewer training samples (but with more training time). It is superior to ${WMB}$ on instances 2, 3 and 5. However, ${WMB}$ performs better on instances 1, 4 and 6, as the induced width is closer to the $i$ -bound. For $i$ -bound $= {10}$ , NeuroBE shows better accuracy than ${WMB}$ for all three instances. It is comparable to ${DBE}$ on most instances while taking less time.
354
+
355
+ Overall, compared with DBE, NeuroBE using I.m.s.e is about ${50}\%$ faster while also far more accurate on pedigrees, twice as fast and 5 to 10 fold more accurate on hard grids. It is also faster and more accurate on easy grids and has a comparable performance on the DBN benchmark.
356
+
357
+ The impact of loss functions. We observe that NeuroBE with the I.m.s.e loss shows better performance (lower average error and standard deviation) than NeuroBE with the m.s.e loss for the pedigree instances, and for the majority of DBN, grid-easy and grid-hard instances. An F-test on the two groups of partition function estimates (each consisting of 5 approximations) showed that the means are significantly different for pedigrees (Fig. 3(a)). For the grids and DBNs, there was no statistically significant difference between the two means. However, by inspection, we see a reduction in the standard deviation for almost all instances.
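The F-test used here compares the means of two small groups of estimates; for two groups it reduces to a one-way ANOVA F statistic, which can be computed directly. This is an illustrative sketch, not the authors' exact test setup:

```python
import numpy as np

def one_way_f(a, b):
    """One-way ANOVA F statistic for two groups of estimates
    (e.g. the two sets of 5 partition-function approximations)."""
    grand = np.mean(np.concatenate([a, b]))
    # between-group and within-group sums of squares
    ss_between = len(a) * (a.mean() - grand) ** 2 + len(b) * (b.mean() - grand) ** 2
    ss_within = ((a - a.mean()) ** 2).sum() + ((b - b.mean()) ** 2).sum()
    df_between, df_within = 1, len(a) + len(b) - 2
    return (ss_between / df_between) / (ss_within / df_within)
```

A large F value relative to the F distribution's critical value indicates the group means differ significantly.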
358
+
359
+ Impact of architecture size. Figure 4 shows the impact of architecture size on time and accuracy for a few problem instances. We show the average error of the partition function estimate (what we call global error), its standard deviation (over 5 runs) and the computation time for 2 different NN architectures and their associated sample sizes. As expected, we see that increasing the NN and sample sizes increases the time and improves the accuracy for pedigrees. For grid-hard instances, we only increased the sample sizes for the same architecture having $h = w$ , and observe that the average error is reduced, as expected. Instances from grid-easy and DBN (except instance #2) show a similar improvement in performance with a larger NN and training sample size. This shows that the algorithm has an anytime characteristic, as it can improve its performance by controlling the size of the approximating NN, matched by a suitable sample size for training.
360
+
361
+ ## 6 CONCLUSION & FUTURE WORK
362
+
363
+ In this work, we advance the earlier theme of using neural networks to approximate the class of bucket-elimination algorithms that is at the heart of probabilistic reasoning. NeuroBE can be viewed as a realization of Neuro-Dynamic Programming schemes (Bertsekas and Tsitsiklis, 1996) in the context of graphical models. That being said, it requires training numerous NNs per problem instance, and thus the central aim of NeuroBE's design (customizing NN architectures, training samples and the loss function to the message) is to enhance the efficiency of such schemes. We presented NeuroBE and illustrated, on challenging instances over three benchmarks, that it can be far more accurate and requires less time compared with Deep Bucket Elimination (DBE), and that it is also superior to weighted mini-bucket (WMB), which cannot improve its accuracy once its memory is exhausted. We believe that NeuroBE has the potential to become more time efficient and may also extend to learning across a set of instances from the same benchmark domain.
364
+
365
+ Future Work. While NeuroBE adjusts the NN architectures according to a bucket's scope, it keeps the #layers fixed, a hyper-parameter we wish to explore varying dynamically. We will also explore training a single function per union of buckets, which yields a cluster in a tree-decomposition (Dechter, 2013). This can significantly reduce the number of trained functions at the cost of more time for sample generation, a trade-off we plan to study. We will also explore parameter sharing and training multiple bucket functions simultaneously.
366
+
367
+ Ralph Abboud, Ismail Ceylan, and Thomas Lukasiewicz. Learning to reason: Leveraging neural networks for approximate dnf counting. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 3097-3104, 2020.
368
+
369
+ Martin Anthony and Peter L. Bartlett. Neural Network Learning - Theoretical Foundations. Cambridge University Press, 2002. ISBN 978-0-521-57353-5.
370
+
371
+ Peter L. Bartlett, Nick Harvey, Christopher Liaw, and Abbas Mehrabian. Nearly-tight VC-dimension and pseudodimension bounds for piecewise linear neural networks. Journal of Machine Learning Research, 20(63):1-17, 2019. URL http://jmlr.org/papers/v20/17-612.html.
372
+
373
+ Dimitri P. Bertsekas and John N. Tsitsiklis. Neuro-dynamic programming, volume 3 of Optimization and neural computation series. Athena Scientific, 1996. ISBN 1886529108. URL https://www.worldcat.org/oclc/35983505.
374
+
375
+ Yann Le Cun, Ido Kanter, and Sara A. Solla. Eigenvalues of covariance matrices: Application to neural-network learning. Phys. Rev. Lett., 66:2396- 2399, May 1991. doi: 10.1103/PhysRevLett.66. 2396. URL https://link.aps.org/doi/10.1103/PhysRevLett.66.2396.
376
+
377
+ A. Darwiche. Modeling and Reasoning with Bayesian Networks. Cambridge University Press, 2009.
378
+
379
+ R. Dechter. Bucket elimination: A unifying framework for reasoning. Artificial Intelligence, 113:41-85, 1999a.
380
+
381
+ R. Dechter. Constraint Processing. Morgan Kaufmann Publishers, 2003.
382
+
383
+ Rina Dechter. Bucket elimination: A unifying framework for reasoning. Artif. Intell., 113(1-2):41-85, 1999b.
384
+
385
+ Rina Dechter. Reasoning with probabilistic and deterministic graphical models: Exact algorithms. Synthesis Lectures on Artificial Intelligence and Machine Learning, 7 (3):1-191, 2013.
386
+
387
+ Rina Dechter and Irina Rish. Mini-buckets: A general scheme for bounded inference. Journal of the ACM (JACM), 50(2):107-153, 2003.
388
+
389
+ Nicolas Heess, Daniel Tarlow, and John Winn. Learning to pass expectation propagation messages. In NIPS, volume 26, pages 3219-3227, 2013.
390
+
391
+ Kalev Kask, Bobak Pezeshki, Filjor Broka, Alexander T. Ihler, and Rina Dechter. Scaling up AND/OR abstraction sampling. In Proceedings of IJCAI 2020, pages 4266- 4274, 2020.
392
+
393
+ Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization, 2014.
394
+
395
+ Qiang Liu and Alexander Ihler. Belief propagation for structured decision making. In Proceedings of the 28th Conference on Uncertainty in Artificial Intelligence, pages 523-532, 2012.
396
+
397
+ Qiang Liu and Alexander T. Ihler. Bounding the partition function using Hölder's inequality. In Proceedings of the 28th International Conference on Machine Learning, ICML 2011, Bellevue, Washington, USA, June 28 - July 2, 2011, pages 849-856, 2011.
398
+
399
+ Robert Mateescu, Kalev Kask, Vibhav Gogate, and Rina Dechter. Join-graph propagation algorithms. J. Artif. Intell. Res. (JAIR), 37:279-328, 2010.
400
+
401
+ Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.
402
+
403
+ J. Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, 1988.
404
+
405
+ D. Pollard. Convergence of Stochastic Processes. Springer-Verlag, 1984.
406
+
407
+ Yasaman Razeghi, Kalev Kask, Yadong Lu, Pierre Baldi, Sakshi Agarwal, and Rina Dechter. Deep bucket elimination. In Zhi-Hua Zhou, editor, Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21, pages 4235-4242. International Joint Conferences on Artificial Intelligence Organization, 8 2021. Main Track.
408
+
409
+ Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. Trans. Neur. Netw., 20(1):61-80, jan 2009. ISSN 1045-9227. doi: 10.1109/TNN.2008.2005605. URL https://doi.org/10.1109/TNN.2008.2005605.
410
+
411
+ Vladimir N. Vapnik. The Nature of Statistical Learning Theory. Springer, second edition, November 1999. ISBN 0387987800.
412
+
413
+ Jonathan S. Yedidia, William T. Freeman, and Yair Weiss. Generalized belief propagation. In (NIPS) 2000, pages 689-695. MIT Press, 2000.
414
+
415
+ KiJung Yoon, Renjie Liao, Yuwen Xiong, Lisa Zhang, Ethan Fetaya, Raquel Urtasun, Richard S. Zemel, and Xaq Pitkow. Inference in probabilistic graphical models by graph neural networks. 2018.
416
+
417
+ ## A APPENDIX
418
+
419
+ ### A.1 ESTIMATING THE PSEUDO-DIMENSION OF A NN:
420
+
421
+ In our work, we use NN architectures with ReLU activation functions. To construct a NN with $L$ layers and a variable number $h$ of hidden units per layer to model a specific local bucket message ${\mu }^{ * }$ , we pick the rule $h = b * w$ where $w$ is the width and $b$ is a constant. With this choice, the number of parameters in the NN is:
422
+
423
+ $$
424
+ \left| \theta \right| = \left( {L - 1}\right) * {b}^{2} * {w}^{2} + b * {w}^{2} + \left( {L + 1}\right) * b * w + 1 \tag{10}
425
+ $$
426
+
427
+ We make use of the lower bound of pseudo-dimension for NNs with ReLU activation functions from the work in Bartlett et al. (2019) to get:
428
+
429
+ $$
430
+ \rho = \left| \theta \right|  * L\log \left( {\left| \theta \right| /L}\right) \tag{11}
431
+ $$
432
+
433
+ By substituting Eq. 10 in Eq. 11 and ignoring all linear terms in $w$ we get that $\rho \left( w\right)$ can be dominated by:
434
+
435
+ $$
436
+ {\rho }_{c}\left( {w}_{c}\right) \propto {\left( L * b * w\right) }^{2}\log \left( {b * w}\right)
437
+ $$
438
+
439
+ ### A.2 ESTIMATING ERROR IN PARTITION FUNCTION:
440
+
441
+ Theorem. 1 Let ${B}_{c}$ be a bucket in a bucket chain along an ordering $d$ ; let ${B}_{c}$ contain the original functions as ${\phi }_{c}$ and ${\mu }_{c + 1}$ as the message passed to it from the previous bucket; let ${\lambda }_{c}$ be the (global) exact message generated in ${B}_{c}$ , ${\mu }_{c}^{ * }$ be the local exact message in ${B}_{c}$ and ${\mu }_{c} = {APP}\left( {\mu }_{c}^{ * }\right)$ its approximation (e.g., by a trained neural network). Let ${E}_{c} = \ln {\mu }_{c}^{ * } - \ln {\mu }_{c}$ and ${\epsilon }_{c} = \mathop{\max }\limits_{{B}_{c}}\left| {E}_{c}\right|$ . Then,
442
+
443
+ $$
444
+ \ln {\lambda }_{c} - \ln {\mu }_{c} \leq \mathop{\sum }\limits_{{k = 0}}^{{n - c}}{\epsilon }_{c + k}
445
+ $$
446
+
447
+ In particular, since ${\lambda }_{1} = Z$ , the partition function
448
+
449
+ $$
450
+ \ln Z - \ln {\mu }_{1} \leq \mathop{\sum }\limits_{{k = 0}}^{{n - 1}}{\epsilon }_{1 + k} \tag{12}
451
+ $$
452
+
453
+ Proof. We will next derive the recursion, starting at the first processed bucket ${B}_{n}$ and going down in order. Remember throughout that $\ln {\mu }_{n - i}^{ * } = \ln \mathop{\sum }\limits_{{X}_{n - i}}{e}^{\ln {\phi }_{n - i} + \ln {\mu }_{n - i + 1}}$
454
+
455
+ For ${B}_{n}$ , ${\lambda }_{n} = {\mu }_{n}^{ * }$ , therefore
456
+
457
+ $$
458
+ \ln {\lambda }_{n} - \ln {\mu }_{n} = \ln {\mu }_{n}^{ * } - \ln {\mu }_{n} = {E}_{n}
459
+ $$
460
+
461
+ For ${B}_{n - 1}$ , by definition
462
+
463
+ $$
464
+ \ln {\lambda }_{n - 1} - \ln {\mu }_{n - 1} = \ln \mathop{\sum }\limits_{{X}_{n - 1}}{e}^{\ln {\phi }_{n - 1} + \ln {\lambda }_{n}} - \ln {\mu }_{n - 1}
465
+ $$
466
+
467
+ Substituting $\ln {\lambda }_{n}$ from ${B}_{n}$
468
+
469
+ $$
470
+ = \ln \mathop{\sum }\limits_{{X}_{n - 1}}{e}^{\left\lbrack \left( \ln {\phi }_{n - 1} + \ln {\mu }_{n}\right) + {E}_{n}\right\rbrack } - \ln {\mu }_{n - 1}
471
+ $$
472
+
473
+ $$
474
+ = \ln \left\lbrack {\mathop{\sum }\limits_{{X}_{n - 1}}{e}^{\left( \ln {\phi }_{n - 1} + \ln {\mu }_{n}\right) }{e}^{{E}_{n}}}\right\rbrack - \ln {\mu }_{n - 1}
475
+ $$
476
+
477
+ If $\mathop{\max }\limits_{{\text{scope }\left( {\mu }_{n}^{ * }\right) }}\left| {E}_{n}\right| = {\epsilon }_{n}$ , then,
478
+
479
+ $$
480
+ \leq \ln \left\lbrack {{e}^{{\epsilon }_{n}}\mathop{\sum }\limits_{{X}_{n - 1}}{e}^{\left( \ln {\phi }_{n - 1} + \ln {\mu }_{n}\right) }}\right\rbrack - \ln {\mu }_{n - 1}
481
+ $$
482
+
483
+ $$
484
+ \leq {\epsilon }_{n} + \ln \mathop{\sum }\limits_{{X}_{n - 1}}{e}^{\left( \ln {\phi }_{n - 1} + \ln {\mu }_{n}\right) } - \ln {\mu }_{n - 1}
485
+ $$
486
+
487
+ Since $\ln \mathop{\sum }\limits_{{X}_{n - 1}}{e}^{\ln {\phi }_{n - 1} + \ln {\mu }_{n}} = \ln {\mu }_{n - 1}^{ * }$ we get
488
+
489
+ $$
490
+ \ln {\lambda }_{n - 1} - \ln {\mu }_{n - 1} \leq {\epsilon }_{n} + \ln {\mu }_{n - 1}^{ * } - \ln {\mu }_{n - 1} \tag{13}
491
+ $$
492
+
493
+ or equivalently,
494
+
495
+ $$
496
+ \ln {\lambda }_{n - 1} - \ln {\mu }_{n - 1} \leq {\epsilon }_{n} + {E}_{n - 1} \tag{14}
497
+ $$
498
+
499
+ Moving to ${B}_{n - 2}$ , by definition:
500
+
501
+ $$
502
+ \ln {\lambda }_{n - 2} - \ln {\mu }_{n - 2} = \ln \mathop{\sum }\limits_{{X}_{n - 2}}{e}^{\ln {\phi }_{n - 2} + \ln {\lambda }_{n - 1}} - \ln {\mu }_{n - 2}
503
+ $$
504
+
505
+ (15)
506
+
507
+ Substituting $\ln {\lambda }_{n - 1}$ from Eq. (13) we get
508
+
509
+ $$
510
+ \ln {\lambda }_{n - 2} - \ln {\mu }_{n - 2} \leq \ln \mathop{\sum }\limits_{{X}_{n - 2}}{e}^{\ln {\phi }_{n - 2} + \left\lbrack {\ln {\mu }_{n - 1} + {\epsilon }_{n} + {E}_{n - 1}}\right\rbrack } - \ln {\mu }_{n - 2}
511
+ $$
512
+
513
+ $$
514
+ \leq \ln \mathop{\sum }\limits_{{X}_{n - 2}}{e}^{\ln {\phi }_{n - 2} + \ln {\mu }_{n - 1}}{e}^{{\epsilon }_{n} + {E}_{n - 1}} - \ln {\mu }_{n - 2}
515
+ $$
516
+
517
+ Taking $\mathop{\max }\limits_{{\text{scope }\left( {\mu }_{n - 1}^{ * }\right) }}\left| {E}_{n - 1}\right| = {\epsilon }_{n - 1}$ ,
518
+
519
+ $$
520
+ \leq \ln {e}^{{\epsilon }_{n} + {\epsilon }_{n - 1}}\mathop{\sum }\limits_{{X}_{n - 2}}{e}^{\ln {\phi }_{n - 2} + \ln {\mu }_{n - 1}} - \ln {\mu }_{n - 2}
521
+ $$
522
+
523
+ $$
524
+ \leq {\epsilon }_{n} + {\epsilon }_{n - 1} + \ln \mathop{\sum }\limits_{{X}_{n - 2}}{e}^{\ln {\phi }_{n - 2} + \ln {\mu }_{n - 1}} - \ln {\mu }_{n - 2}
525
+ $$
526
+
527
+ $$
528
+ \leq {\epsilon }_{n} + {\epsilon }_{n - 1} + \ln {\mu }_{n - 2}^{ * } - \ln {\mu }_{n - 2}
529
+ $$
530
+
531
+ yielding,
532
+
533
+ $$
534
+ \ln {\lambda }_{n - 2} - \ln {\mu }_{n - 2} \leq {E}_{n - 2} + {\epsilon }_{n - 1} + {\epsilon }_{n} \tag{16}
535
+ $$
536
+
537
+ Moving to bucket ${B}_{n - 3}$ , by definition
538
+
539
+ $$
540
+ \ln {\lambda }_{n - 3} - \ln {\mu }_{n - 3} = \ln \mathop{\sum }\limits_{{X}_{n - 3}}{e}^{\ln {\phi }_{n - 3} + \ln {\lambda }_{n - 2}} - \ln {\mu }_{n - 3}
541
+ $$
542
+
543
+ Substituting for ${\lambda }_{n - 2}$ from Eq. (16) we get with some algebra
544
+
545
+ $$
546
+ \ln {\lambda }_{n - 3} - \ln {\mu }_{n - 3}
547
+ $$
548
+
549
+ $$
550
+ \leq \ln \mathop{\sum }\limits_{{X}_{n - 3}}{e}^{\ln {\phi }_{n - 3} + \left\lbrack {\ln {\mu }_{n - 2} + {E}_{n - 2} + {\epsilon }_{n - 1} + {\epsilon }_{n}}\right\rbrack } - \ln {\mu }_{n - 3}
551
+ $$
552
+
553
+ yielding
554
+
555
+ $$
556
+ \ln {\lambda }_{n - 3} - \ln {\mu }_{n - 3} \leq {E}_{n - 3} + {\epsilon }_{n - 2} + {\epsilon }_{n - 1} + {\epsilon }_{n}
557
+ $$
558
+
559
+ and so on. Clearly the emerging expression for bucket ${B}_{c}$ is
560
+
561
+ $$
562
+ \ln {\lambda }_{c} - \ln {\mu }_{c} \leq {E}_{c} + {\epsilon }_{c + 1} + {\epsilon }_{c + 2} + \ldots \tag{17}
563
+ $$
564
+
565
+ or,
566
+
567
+ $$
568
+ \ln {\lambda }_{c} - \ln {\mu }_{c} \leq {E}_{c} + \mathop{\sum }\limits_{{k = 0}}^{{n - c - 1}}{\epsilon }_{c + 1 + k} \tag{18}
569
+ $$
570
+
571
+ The general transition from $n - i$ to $n - i - 1$ can be easily followed to complete the inductive proof.
572
+
573
+ Assuming that we control the derivation of ${\mu }_{c}$ for each ${B}_{c}$ to ensure that ${E}_{c} = \ln {\mu }_{c}^{ * } - \ln {\mu }_{c} \leq {\epsilon }_{c}$ , and substituting into Eq. (18), we get
574
+
575
+ $$
576
+ \ln {\lambda }_{c} - \ln {\mu }_{c} \leq {\epsilon }_{c} + \mathop{\sum }\limits_{{k = 0}}^{{n - c - 1}}{\epsilon }_{c + 1 + k} \leq \left( {n - c + 1}\right) * \epsilon \tag{19}
577
+ $$
578
+
579
+ ### A.3 MISCELLANEOUS EXPERIMENTS ON ANALYZING ERROR
580
+
581
+ Calculating $\epsilon$ from Theorem 1 is hard because it involves computing the local bucket error $E$ over all configurations in the scope of the bucket. Therefore, we calculate the maximum over a sampled test set (lines 18-20 of Algorithm 3) as $\widehat{\epsilon }$ . Additionally, we calculate the average local bucket error, ${\widehat{\epsilon }}^{avg}$ , over the same test set. To bound the global error of the approximated partition function from Eq. 9, we sum over all the estimated bucket error bounds $\widehat{\epsilon }$ . Clearly, this bound is very loose. We therefore use the average local bucket error ${\widehat{\epsilon }}^{avg}$ to give us some additional empirical information on the global error:
582
+
583
+ $$
584
+ {\widehat{E}}_{1} \leq \mathop{\sum }\limits_{{k = 0}}^{{n - 1}}{\widehat{\epsilon }}_{1 + k}^{avg} \tag{20}
585
+ $$
586
+
587
+ Relationship between local and global errors empirically. Figure 5a) depicts the empirical global errors against the local error bound for 4 grid instances over 2 sample configurations $= \{ {60}\mathrm{k},{120}\mathrm{k}\}$ . Specifically, the local error bound shown is the maximum over the estimated local bucket errors ( ${\widehat{\epsilon }}^{\text{avg }}$ from Eq. 20) across all buckets, and the (empirical) global error is the error in the partition function estimate. As expected, we see a somewhat linear relationship between the global error and the local error bound. We also see that larger sample sizes drive the local and global errors towards the lower-left of the plot and vice-versa.
588
+
589
+ Impact of sample size on error bounds. Figures 5b & c) depict the impact of sample size on the estimated local and global error bounds (Eq. 20). Specifically, the local error bound shown is the maximum over the estimated local bucket errors ( ${\widehat{\epsilon }}^{\text{avg }}$ from Eq. 20) across all buckets. As expected, we see that increasing the training sample size makes the two bounds tighter. For the 4 grid instances (f10, f5, f2, f15), we also observed that the empirical global error in the partition function estimate for the two sample configurations, $\{ \left( {{37.21},{7.94}}\right) ,\left( {{18.8},{5.9}}\right) ,\left( {{10.28},{3.05}}\right) ,\left( {{41.3},{27.5}}\right) \}$ , is in proportion to the global error bound from Fig. 5b).
590
+
591
+ <table><tr><td/><td colspan="8">${\mathrm{N}}_{\text{avg }} = {60}\mathrm{k},\mathrm{\;h} = \mathrm{w}$</td><td colspan="8">${\mathrm{N}}_{\mathrm{{avg}}} = {150}\mathrm{k},\mathrm{\;h} = \mathrm{w}$</td></tr><tr><td/><td colspan="4">Statistics on local bucket errors</td><td colspan="4">Statistics on global errors</td><td colspan="4">Statistics on local bucket errors</td><td colspan="4">Statistics on global errors</td></tr><tr><td/><td colspan="2">test w.m.s.e</td><td colspan="2">local bucket errors</td><td colspan="2">estimated bounds</td><td colspan="2">empirical errors</td><td colspan="2">test w.m.s.e</td><td colspan="2">local bucket errors</td><td colspan="2">estimated bounds</td><td colspan="2">empirical errors</td></tr><tr><td/><td>avg</td><td>max</td><td>avg</td><td>$\max$</td><td>avg</td><td>$\max$</td><td>avg</td><td>max</td><td>avg</td><td>$\max$</td><td>avg</td><td>$\max$</td><td>avg</td><td>max</td><td>avg</td><td>$\max$</td></tr><tr><td>1</td><td>2.19E-04</td><td>0.06</td><td>1.67</td><td>3.81</td><td>251.7</td><td>1664</td><td>24.01</td><td>37.21</td><td>1.63E-05</td><td>0.006</td><td>0.29</td><td>3.71</td><td>71.75</td><td>1380</td><td>8.11</td><td>13.62</td></tr><tr><td>2</td><td>1.74E-04</td><td>0.053</td><td>0.784</td><td>2.44</td><td>115.6</td><td>796</td><td>21.15</td><td>31.32</td><td>1.40E-05</td><td>0.0068</td><td>0.131</td><td>1.45</td><td>34.34</td><td>680</td><td>6.4</td><td>12.3</td></tr><tr><td>3</td><td>1.06E-04</td><td>0.053</td><td>0.336</td><td>0.661</td><td>34.98</td><td>264.91</td><td>5.05</td><td>7.88</td><td>7.46E-06</td><td>0.004</td><td>0.045</td><td>0.46</td><td>10.28</td><td>228.59</td><td>4.3</td><td>5.95</td></tr><tr><td>4</td><td>1.94E-04</td><td>0.057</td><td>1.988</td><td>10.97</td><td>359.38</td><td>2549.28</td><td>22.43</td><td>27.68</td><td>${1.85}\mathrm{E} - {05}$</td><td>0.009</td><td>0.382</td><td>4.665</td><td>106.75</td><td>2166</td><td>31.03</td><td>57.46</td></tr></table>
592
+
593
+ Figure 5: Statistics of local bucket errors compared with the global error over 5 runs for 4 grid-hard instances having $w = {55}$ with $i$ -bound $= {20}$ , where $h = w$ and #buckets trained $\# {NB} = {308}$ , for two different sample sizes. test w.m.s.e is the w.m.s.e of the learned NN over the test set; local bucket error is the average L1 error of the $\log \lambda$ approximations over all buckets; estimated bounds is the bound obtained in Eq. 12; empirical error is the average global error over 5 runs.
594
+
UAI/UAI 2022/UAI 2022 Conference/BAlqxvUs5lq/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,575 @@
 
1
+ § NEUROBE: ESCALATING NN APPROXIMATIONS OF BUCKET ELIMINATION
2
+
3
+ § ABSTRACT
4
+
5
+ A major limiting factor in graphical model inference is the complexity of computing the partition function. Exact message-passing algorithms such as Bucket Elimination (BE) require exponentially high levels of memory to compute the partition function, therefore approximations are necessary. In this paper, we build upon a recently introduced methodology called Deep Bucket Elimination (DBE) that uses classical Neural Networks (NNs) to approximate messages generated by ${BE}$ when buckets have large memory requirements. The main feature of our new scheme called NeuroBE is that it customizes the architecture and learning of the NNs to the message size and its distribution. We also explore a new loss function for training taking into account the estimated message cost distribution. Our experiments demonstrate significant improvements in accuracy and time compared with ${DBE}$ .
6
+
7
+ § 1 INTRODUCTION
8
+
9
+ Two of the critical goals of probabilistic modeling are the compact representation of probability distributions and the efficient computation of their marginals and modes. Probabilistic graphical models, such as Markov networks (Pearl, 1988; Darwiche, 2009; Dechter, 2013) provide a framework to represent distributions compactly as normalized products or factors : $P\left( X\right) = \frac{1}{Z}\mathop{\prod }\limits_{\alpha }{f}_{\alpha }\left( {X}_{\alpha }\right)$ , where $X$ is a set of variables, each potential ${f}_{\alpha }$ is a function over a subset ${X}_{\alpha }$ of the variables (its scope) and $Z = \mathop{\sum }\limits_{X}\mathop{\prod }\limits_{\alpha }{f}_{\alpha }\left( {X}_{\alpha }\right)$ is the partition function. Computing the partition function is still exponential in the induced width of the model's graph even for distributions that admit a compact representation.
10
+
11
+ The partition function $Z$ is defined by two types of operations: sums and products. It can be evaluated efficiently if
12
+
13
+ $\mathop{\sum }\limits_{X}\mathop{\prod }\limits_{\alpha }{f}_{\alpha }\left( {X}_{\alpha }\right)$ can be reorganized using the distributive law along a variable ordering (Dechter, 2003). This organization can be described using buckets as data structures, one for each variable in the ordering. When a bucket is processed, its associated variable is removed by creating a bucket output function, also called a message, that is passed to a subsequent bucket. The complexity of computing this function is exponential in its number of arguments, called its scope or the bucket's width. Overall, Bucket Elimination (BE) (Dechter, 1999b) is time and memory exponential in the induced width of the model's graph along the ordering.
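The bucket mechanics can be sketched on a toy chain model: each bucket sums out its variable and passes the resulting message down, and the final bucket combines the incoming messages into $Z$. This is an illustrative sketch with hypothetical factor values, cross-checked against brute-force enumeration:

```python
import itertools
import numpy as np

# Toy chain: f_A(x0, x1) and f_B(x1, x2) over binary variables (made-up values).
fA = np.array([[2.0, 1.0], [1.0, 3.0]])
fB = np.array([[1.0, 4.0], [2.0, 1.0]])

# Bucket of x0 holds f_A; eliminating x0 sends a message over x1.
m0 = fA.sum(axis=0)            # m0(x1) = sum_{x0} f_A(x0, x1)
# Bucket of x2 holds f_B; eliminating x2 sends a message over x1.
m2 = fB.sum(axis=1)            # m2(x1) = sum_{x2} f_B(x1, x2)
# Bucket of x1 combines the incoming messages and eliminates x1.
Z_be = float((m0 * m2).sum())

# Cross-check against the brute-force definition of Z.
Z_brute = sum(fA[x0, x1] * fB[x1, x2]
              for x0, x1, x2 in itertools.product((0, 1), repeat=3))
```

Here each message has a single argument, so the work is linear in the domain size; in general a message's cost is exponential in its scope, which is what NeuroBE approximates.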
14
+
15
+ Providing good approximations to ${BE}$ is important not only because it generates an answer to a query, but primarily because it compiles a structure and a set of messages that can be used to answer multiple queries (e.g., the probability of evidence for various evidence variables (Darwiche, 2009)), and because the messages can be used as building blocks for generating search heuristics that further improve performance. We will therefore consider and evaluate NeuroBE within the class of approximate ${BE}$ schemes.
16
+
17
+ Schemes that approximate ${BE}$ include the (weighted) mini-bucket scheme (Dechter and Rish, 2003; Liu and Ihler, 2012) and generalized belief propagation schemes (Yedidia et al., 2000; Mateescu et al., 2010). Recently introduced, Deep Bucket Elimination (DBE) (Razeghi et al., 2021) approximates each bucket function with a neural network (NN). While this approach is inherently time consuming requiring the independent training of many NNs to compute the partition function of a single problem, it has yielded more accurate approximations on several benchmarks. It is important to note that unlike weighted bucket-elimination and belief propagation schemes, ${DBE}$ can improve with time even with bounded memory, yielding an anytime framework for reasoning. Still, DBE's original design can be improved significantly which is the focus of this paper.
18
+
19
+ Contributions. We present NeuroBE, a re-design of ${DBE}$ that addresses its one-size-fits-all policy by customizing the NN construction and training sample size to each bucket separately, in proportion to its message size. In particular, we introduce a new loss function that is sensitive to the bucket's message distribution. We also provide an analysis relating the local errors to a bound on the global error. In an extensive empirical evaluation we show that NeuroBE outperforms ${DBE}$ across all benchmarks using far fewer training samples, yielding higher accuracy in less time.
+
+ The paper is organized as follows. We first provide background on ${BE}$ and ${DBE}$ ; then we present NeuroBE, followed by an error analysis; lastly, we demonstrate the efficiency of NeuroBE empirically.
+
+ Related work. As noted, approximating and bounding Bucket Elimination has been studied extensively over the years for all probabilistic queries. Well known is the Mini-Bucket Elimination scheme (Dechter and Rish, 2003) and its variants, such as Weighted Mini-Bucket Elimination (WMBE), augmented with message-passing cost-shifting (Liu and Ihler, 2011).
+
+ Neural network approximation of ${BE}$ was introduced in Razeghi et al. (2021). The idea is closest in spirit to the Neuro-Dynamic Programming scheme outlined in Bertsekas and Tsitsiklis (1996), where the cost-to-go functions (similar to messages) generated by dynamic programming are approximated by NNs. This technique is also highly related to Deep Reinforcement Learning (DRL) (Mnih et al., 2015) where, in the absence of a model, the value function is approximated by neural networks learned from temporal trajectories.
+
+ Recently, Graph Neural Networks (GNNs) (Scarselli et al., 2009) have been used to learn messages following the message-passing reasoning methods in graphical models (Abboud et al., 2020; Yoon et al., 2018; Heess et al., 2013). However, Yoon et al. (2018) and Heess et al. (2013) are restricted to small instances (i.e., $\sim {40}$ variables), and Abboud et al. (2020) tackles problems with a known polynomial-time approximation. GNN-based methods derive a supervised end-to-end learning algorithm which generalizes across different problem instances. In contrast, we consider a different class of algorithms, where we confine learning to within a single problem instance.
+
+ § 2 BACKGROUND
+
+ A graphical model can be defined by a 3-tuple $\mathcal{M} =$ (X, D, F), where $\mathbf{X} = \left\{ {{X}_{i} : i \in V,V = \{ 1,\ldots ,n\} }\right\}$ is a set of $n$ variables indexed by $V$ and $\mathbf{D} = \left\{ {{D}_{i} : i \in V}\right\}$ is the set of finite domains for each ${X}_{i}$ (i.e., each ${X}_{i}$ can only assume values in ${D}_{i}$ , and each ${D}_{i}$ is finite). Each function ${f}_{\alpha } \in \mathbf{F}$ is defined over a subset of the variables called its scope, ${X}_{\alpha }$ , where $\alpha \subseteq V$ is the set of indices of the variables in its scope and ${D}_{\alpha }$ denotes the Cartesian product of their domains, so that ${f}_{\alpha } : {D}_{\alpha } \rightarrow \mathbb{R}_{\geq 0}$ .
+
+ Figure 1: (a) A primal graph of a GM with 7 variables. (b) Illustration of ${BE}$ with the ordering $\mathrm{{ABCEDFG}}$ .
+
+ The primal graph of a graphical model associates each variable with a node. An edge between node $i$ and node $j$ is created if and only if there is a function containing ${X}_{i}$ and ${X}_{j}$ in its scope. Figure 1a shows a primal graph of a graphical model with variables indexed from $A$ to $G$ and functions over pairs of variables that are connected by an edge. Graphical models can be used to represent a global function, often a probability distribution, defined by $\Pr \left( X\right) \propto \mathop{\prod }\limits_{\alpha }{f}_{\alpha }\left( {X}_{\alpha }\right)$ . An important task is to compute the normalizing constant, also known as the partition function, $Z = \mathop{\sum }\limits_{X}\mathop{\prod }\limits_{\alpha }{f}_{\alpha }\left( {X}_{\alpha }\right) .$
+
+ § 2.1 BUCKET ELIMINATION
+
+ Bucket Elimination (BE) (Dechter, 1999a) is a universal exact algorithm for probabilistic inference. It is a variable elimination algorithm that can answer a wide range of queries, ranging from constraint satisfaction, to pure combinatorial optimization (e.g., Most Probable Explanation (MPE/MAP)), to weighted counting (partition function, probability of evidence).
+
+ Given a variable ordering $d$ , BE (presented in Algorithm 1, omitting steps 9-12) creates a bucket tree where each node is a bucket representing a variable in the ordering $d$ . Figure 1b shows a bucket tree for the primal graph in Figure 1a along an ordering. Each bucket in this tree contains a set of the model's functions, depending on the given order of processing. For example, Bucket G in Figure 1b has functions $\{ f\left( {A,G}\right) ,f\left( {F,G}\right) \}$ , an exhaustive set of the model’s functions with variable $G$ in their scope. There is an arc from a bucket, say ${B}_{c}$ , to a parent bucket, ${B}_{p}$ , if ${X}_{p}$ is the latest variable in bucket ${B}_{c}$ ’s message scope along the ordering (constants are placed in ${B}_{1}$ ). In the same example, there is an arc from Bucket G to Bucket F.
+
+ ${BE}$ then performs inference along the bucket tree as a 1-iteration message-passing algorithm (bottom-up). It processes each bucket from the leaves to the root, passing messages from child (c) to parent (p). For a child variable ${X}_{c}$ , it considers all the functions in bucket ${B}_{c}$ . This includes the original functions in the graphical model as well as the messages received by processing previous variables. It then marginalizes ${X}_{c}$ out from the product of functions in ${B}_{c}$ , generating a new, so-called, bucket function or message, denoted ${\lambda }_{c \rightarrow p}$ , or ${\lambda }_{c}$ for short:
+
+ $$
+ {\lambda }_{c} = \mathop{\sum }\limits_{{X}_{c}}\mathop{\prod }\limits_{{{f}_{\alpha } \in {B}_{c}}}{f}_{\alpha } \tag{1}
+ $$
+
+ The ${\lambda }_{c}$ function is placed in ${B}_{p}$ , the bucket of ${X}_{p}$ . Once all the variables are processed, ${BE}$ outputs all the messages and the exact value of $Z$ , obtained by taking the product of all the constants present in the bucket of the first variable. We illustrate the ${BE}$ message flow on our example problem in Figure 1b.
+
+ Complexity. Both the time and space complexity of ${BE}$ are exponential in the induced width, which is the largest number of variables in the scope of any message over all buckets (Dechter, 2013). Clearly, ${BE}$ becomes impractical if the induced width is large.
+
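The bucket operation of Eq. 1 is concrete enough to sketch in code. Below is a minimal brute-force Python illustration over explicit factor tables (function and variable names are ours, not the paper's); real instances have exponentially large messages, which is exactly why approximation is needed:

```python
from itertools import product

def bucket_message(factors, elim_var, domains):
    """Eq. 1: lambda(s) = sum_{x in D(elim_var)} prod_f f(s, x).
    Each factor is a (scope, table) pair, where `table` maps
    assignment tuples (ordered as in `scope`) to non-negative values."""
    # The message scope is every variable mentioned, minus the eliminated one.
    scope = sorted({v for sc, _ in factors for v in sc} - {elim_var})
    msg = {}
    for s in product(*(domains[v] for v in scope)):
        assign = dict(zip(scope, s))
        total = 0.0
        for x in domains[elim_var]:
            assign[elim_var] = x
            prod_val = 1.0
            for sc, table in factors:
                prod_val *= table[tuple(assign[v] for v in sc)]
            total += prod_val
        msg[s] = total
    return scope, msg

# Bucket G of Figure 1b holds f(A,G) and f(F,G); eliminating G
# yields the message lambda_{G->F}(A, F).
dom = {'A': [0, 1], 'F': [0, 1], 'G': [0, 1]}
fAG = (('A', 'G'), {(0, 0): .9, (0, 1): .1, (1, 0): .4, (1, 1): .6})
fFG = (('F', 'G'), {(0, 0): .7, (0, 1): .3, (1, 0): .2, (1, 1): .8})
scope, lam = bucket_message([fAG, fFG], 'G', dom)
```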
+ § 2.2 DEEP BUCKET ELIMINATION
+
+ Given a variable ordering $d$ , Deep Bucket Elimination (DBE) (Razeghi et al., 2021) approximates each message generated in the bucket tree by training a NN whenever the scope of the bucket message $\left( S\right)$ is large $\left( { > i\text{-bound}}\right)$ . For example, in Figure 1b, if we use an $i$ -bound $= 2$ , instead of sending the exact function ${\lambda }_{D \rightarrow C}\left( {A,B,C}\right)$ from the bucket of $D$ to the bucket of $C$ , ${DBE}$ sends a NN approximation ${\mu }_{\theta ,D \rightarrow C}\left( {A,B,C}\right)$ with parameters $\theta$ , as we describe next.
+
+ Let $B$ be a bucket with width $w > i$ -bound and output message $\lambda \left( S\right)$ with scope $S$ . ${DBE}$ then constructs a fully-connected feed-forward NN with $w$ nodes in the input layer. This is followed by $L$ hidden layers with a constant number $h$ of hidden nodes per layer and the ${ReLU}$ activation function. Finally, the output layer contains one node with a real-valued output. Subsequently, ${DBE}$ generates a training set $\left\{ \left( {{s}_{n},\lambda \left( {s}_{n}\right) }\right) \right\}$ of size $N$ , where ${s}_{n}$ denotes a configuration over $S$ sampled uniformly at random and $\lambda \left( {s}_{n}\right)$ is the message value defined in Eq. 1. The NN parameters, $\theta$ , are then trained to minimize the mean square error loss:
+
+ $$
+ L\left( \theta \right) = \frac{1}{N}\mathop{\sum }\limits_{{n = 1}}^{N}{\left( \lambda \left( {s}_{n}\right) - {\mu }_{\theta }\left( {s}_{n}\right) \right) }^{2}
+ $$
+
+ where ${s}_{n}$ is the ${n}^{th}$ sample in the training set and ${\mu }_{\theta }\left( {s}_{n}\right)$ is the NN output. Once training is complete, ${DBE}$ passes the trained NN, ${\mu }_{\theta }$ , to its parent bucket. Typically, ${DBE}$ needs to approximate many buckets in order to compute the partition function $Z$ .
+
+ Even though ${DBE}$ has yielded more accurate approximations than the weighted bucket-elimination scheme on several benchmarks, each message approximation procedure requires a large training sample size, increasing ${DBE}$ ’s total time and memory requirements substantially. Therefore, we re-design each message approximation procedure, as elaborated in the following section.
+
+ § 3 NEUROBE
+
+ We rename ${DBE}$ to NeuroBE since we use mostly shallow neural networks (2 layers). Algorithm 1 presents NeuroBE. NeuroBE first creates a bucket tree along a given ordering (line 2). While processing each bucket, if its width is $\leq i$ -bound, then the message, ${\mu }_{c \rightarrow p}^{ * }$ , is computed exactly (line 7). Otherwise, the message is approximated using a NN (line 10). Note that in either case, if a bucket contains a NN function, then computing ${\mu }^{ * }$ in line 7 or in the ${NN}$ -train method in Algorithm 3 (line 5) requires evaluating the trained NN. Finally, line 14 calculates the partition function using the functions in bucket ${B}_{1}$ .
+
+ Note that we denote by ${\mu }_{c \rightarrow p}^{ * }$ the exact message computed in a bucket, while we reserve the notation ${\lambda }_{c \rightarrow p}$ for the messages computed by exact ${BE}$ . We do this to distinguish the exact local computation of a message, which may be based on inexact functions in the bucket, from the globally exact messages $\lambda$ computed by ${BE}$ . In the latter, each bucket function is computed exactly and all the functions are exact. Hence, we refer to $\lambda$ as the global exact message, to ${\mu }_{c}^{ * }$ as the local exact message, and to ${\mu }_{\theta ,c \rightarrow p}$ as the NN approximation of the local exact message ${\mu }_{c \rightarrow p}^{ * }$ .
+
+ The difference between NeuroBE and DBE lies solely in the individual message approximation scheme, NN-train. In contrast to ${DBE}$ , NeuroBE dynamically customizes the NN architecture and its training set size to the bucket message complexity (see Vapnik (1999)).
+
+ NN Architecture selection. Clearly, the NN size should depend on the dimensionality of the message function. In our case, the function’s scope size is the induced width, $w$ . We therefore propose to adjust the NN size by making the number of hidden units, $h$ , a function of $w$ while keeping the number of layers, $L$ , constant. Specifically, we select $h = b \cdot w$ , where $b$ is a constant satisfying $b \geq 1$ . Figure 2 illustrates an example NN model with an input layer of size $w$ and 2 hidden layers of dimension $h$ , varying linearly with $b$ . Through such a rule, NeuroBE fits the NN's architecture to the message size. We next quantify the capacity of such NNs, and use it to determine their training sample sizes.
+
+ Algorithm 1 NeuroBE
+
+ Input: Graphical model $\mathcal{M} = \left( {\mathbf{X},\mathbf{D},\mathbf{F}}\right)$ , Ordering $d = {X}_{1},\ldots ,{X}_{n}$
+ Parameters: $i$ -bound $i$ ; #layers $L$ ; constants $b,\eta$
+ Output: the partition function constant and the bucket messages
+
+ for $c$ in $n \ldots 1$ do
+   (Initialize buckets) put all unplaced functions mentioning ${X}_{c}$ in ${B}_{c}$
+ end for
+ for $c$ in $n \ldots 1$ do
+   Let ${X}_{p}$ be the parent variable of ${X}_{c}$ in the bucket tree
+   if width $\left( {B}_{c}\right) \leq i$ then
+     compute ${\mu }_{c \rightarrow p}^{ * } \leftarrow \mathop{\sum }\limits_{{X}_{c}}\mathop{\prod }\limits_{{{f}_{\alpha } \in {B}_{c}}}{f}_{\alpha }$
+   else
+     ${\mu }_{\theta ,c \rightarrow p} \leftarrow \mathrm{{NN}}\text{-}\operatorname{train}\left( {\left\{ {{f}_{\alpha } \mid {f}_{\alpha } \in {B}_{c}}\right\} ,L,b,\eta }\right)$
+   end if
+   Put ${\mu }_{c \rightarrow p}^{ * }$ or ${\mu }_{\theta ,c \rightarrow p}$ in ${B}_{p}$
+ end for
+ $Z = \mathop{\sum }\limits_{{X}_{1}}\mathop{\prod }\limits_{{{f}_{\alpha } \in {B}_{1}}}{f}_{\alpha }$
+ return $Z$ and all messages generated
+
+ Figure 2: For a bucket of width $w$ , we illustrate a NN architecture with $L$ layers and $b \cdot w$ hidden units, with $b \geq 1$ .
+
+ NN complexity. The pseudo-dimension (Pollard, 1984; Anthony and Bartlett, 2002) is often used to measure the expressive power of a set of functions that can be learned by a statistical regression algorithm. Bounds on the pseudo-dimension of the NNs used in our work, with the ReLU activation function, are provided by Bartlett et al. (2019). We use the lower bound as a proxy to estimate the pseudo-dimension $\rho$ of the NN $\left( {\mu }_{\theta }\right)$ with $L$ layers and $b \cdot w$ hidden units approximating ${\mu }^{ * }$ of width $w$ , yielding (see the appendix for the derivation):
+
+ $$
+ \rho \left( w\right) \propto {\left( L * b * w\right) }^{2}\log \left( {b * w}\right) \tag{2}
+ $$
+
+ The above equation correlates the complexity of the candidate NN with the width of the message ${\mu }^{ * }$ it approximates.
+
+ Sample Complexity. As suggested in Vapnik (1999), we choose a sample size $N$ that is proportional to the pseudo-dimension (Eq. 2) of each NN as a function of its number of arguments $w$ :
+
+ $$
+ N\left( w\right) = \eta * {\left( L * b * w\right) }^{2}\log \left( {b * w}\right) \tag{3}
+ $$
+
+ where $\eta$ is a constant. The sample size $N\left( w\right)$ often exceeds memory limits for high-width buckets, even with the simplest NN architecture $\left( {L = 1,b = 1}\right)$ . We therefore use $\eta$ as a control over the sample size and threshold the sample size per bucket at ${10}^{6}$ . In general, for high induced-width problems, we keep $\eta$ small and train NNs with small sample sizes. For problems with small induced width, however, we let $\eta$ take high values.
+
+ Algorithm 2 generate-samples(F, N, X)
+
+ Input: $F$ , a set of functions over scope $S \cup \{ X\}$ , where $X$ is the variable to be eliminated; ${\mu }^{ * }$ denotes the output message over $S$ ; $N$ , an integer
+ Output: $D$ , a set of $N$ samples $\left\{ \left( {s,{\mu }_{\text{ nor }}^{ * }\left( s\right) }\right) \right\}$
+
+ initialize $D = \{ \}$
+ for $i = 1 \ldots N$ do
+   $s \leftarrow$ sample uniformly from domain(S)
+   ${\mu }^{ * }\left( s\right) \leftarrow \mathop{\sum }\limits_{x}\mathop{\prod }\limits_{{f \in F}}f\left( {s,x}\right)$ {Eq. 4}
+   ${\mu }_{\text{ nor }}^{ * }\left( s\right) \leftarrow$ Normalize ${\mu }^{ * }\left( s\right)$ {Eq. 5}
+   Add $\left( {s,{\mu }_{\text{ nor }}^{ * }\left( s\right) }\right)$ to $D$
+ end for
+ return $D$
+
+ Sample Generation. Given a generic bucket $B$ of a variable $X$ and a local bucket function ${\mu }^{ * }\left( S\right)$ , where $S$ is the scope of the bucket's output function, we create a training set ${D}_{T}$ (see Algorithm 2) by generating $N$ (from Eq. 3) training examples $\left( {s,{\mu }_{\text{ nor }}^{ * }\left( s\right) }\right)$ , sampling configurations $\{ S = s\}$ uniformly at random from the domain of scope $S$ . We compute the message value for each configuration $s$ as
+
+ $$
+ {\mu }^{ * }\left( s\right) = \mathop{\sum }\limits_{{x \in \text{ domain }\left( X\right) }}\mathop{\prod }\limits_{{f \in B}}f\left( {s,x}\right) \tag{4}
+ $$
+
+ Each generated configuration $s$ is then paired with its normalized message value ${\mu }_{\text{ nor }}^{ * }\left( s\right) \in \left\lbrack {0,1}\right\rbrack$ as
+
+ $$
+ {\mu }_{\text{ nor }}^{ * }\left( s\right) = \frac{{\mu }^{ * }\left( s\right) - {\mu }_{\text{ min }}^{ * }\left( {D}_{T}\right) }{{\mu }_{\text{ max }}^{ * }\left( {D}_{T}\right) - {\mu }_{\text{ min }}^{ * }\left( {D}_{T}\right) } \tag{5}
+ $$
+
+ where ${\mu }_{min}^{ * }$ and ${\mu }_{max}^{ * }$ are the minimum and maximum ${\mu }^{ * }$ values in ${D}_{T}$ . Note that we also normalize the input sample configurations to lie in $\left\lbrack {-1,1}\right\rbrack$ across benchmarks with domain sizes $> 2$ to accelerate training, as suggested in Cun et al. (1991). Some benchmarks in our experiments have very large message values (e.g., ${e}^{51}$ ). In such cases, we use a log transform, i.e., $\log {\mu }^{ * }$ , $\log {\mu }_{\min }^{ * }\left( {D}_{T}\right)$ and $\log {\mu }_{\max }^{ * }\left( {D}_{T}\right)$ , instead of the corresponding ${\mu }^{ * }$ values, to compute the target NN values ${\mu }_{\text{ nor }}^{ * }$ in Eq. 5. (For simplicity we will omit the normalization subscript on the output function ${\mu }^{ * }$ and will denote it explicitly when relevant.)
+
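The target normalization of Eq. 5, including its log-transform variant for very large message values, can be sketched as follows (function name ours):

```python
import math

def normalize_targets(mu_values, log_transform=False):
    """Eq. 5: min-max normalize message values into [0, 1].
    With log_transform=True, normalize log(mu) instead, as done
    for benchmarks with huge message values (e.g. ~e^51)."""
    vals = [math.log(v) for v in mu_values] if log_transform else list(mu_values)
    lo, hi = min(vals), max(vals)
    return [(v - lo) / (hi - lo) for v in vals]
```

For instance, raw targets `[2, 4, 6]` map to `[0, 0.5, 1]`; with the log transform, `[1, e, e**2]` map (approximately) to the same triple.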
+ Loss Function. ${DBE}$ used the mean square error loss function for training each NN. However, in the context of the partition function, we want to weigh errors associated with high-cost tuples more heavily than those associated with low-cost tuples. Namely, squared errors on large message values are potentially much costlier than those on smaller message values, because of their significant contribution to the sum-product operation in Eq. 1. Hence, we propose an importance mean square error (abbreviated I.m.s.e) as a loss function, where the weights depend on the message values.
+
+ We define the importance weight of a sample as its relative weight in the function distribution over the given dataset $D$ . Namely, given a dataset $D$ , the importance weight $W\left( s\right)$ is normalized over the ${\mu }^{ * }$ values:
+
+ $$
+ W\left( s\right) = \frac{{\mu }^{ * }\left( s\right) }{\mathop{\sum }\limits_{{s \in D}}{\mu }^{ * }\left( s\right) } \tag{6}
+ $$
+
+ Definition 1 (Importance mean square error loss (I.m.s.e)). Let ${\mu }_{\theta }$ be the learned NN function approximating the function ${\mu }^{ * }$ . Then, the I.m.s.e loss function for a given mini-batch $D \subseteq {D}_{T}$ of size $\# D$ is defined by
+
+ $$
+ {L}_{D}\left( {{\mu }^{ * },{\mu }_{\theta }}\right) = \frac{1}{\# D}\mathop{\sum }\limits_{{s \in D}}{\left( {\mu }^{ * }\left( s\right) - {\mu }_{\theta }\left( s\right) \right) }^{2} * W\left( s\right) \tag{7}
+ $$
+
+ Log transformation. For problem instances whose function costs and message costs are very large, we apply the log transformation everywhere, replacing ${\mu }^{ * }$ by $\log {\mu }^{ * }$ in the above equations. Here, the weight function becomes:
+
+ $$
+ {W}_{D}^{log}\left( s\right) = \frac{\log {\mu }^{ * }\left( s\right) - \log {\mu }_{min}^{ * }\left( D\right) }{\mathop{\sum }\limits_{{s \in D}}\left( {\log {\mu }^{ * }\left( s\right) - \log {\mu }_{min}^{ * }\left( D\right) }\right) } \tag{8}
+ $$
+
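Definition 1 amounts to a weighted mean squared error. A minimal pure-Python sketch (function name ours), taking target and predicted message values over one mini-batch:

```python
def imse_loss(mu_star, mu_theta):
    """I.m.s.e (Eq. 7): squared errors weighted by the importance
    weights W(s) = mu*(s) / sum_s' mu*(s') of Eq. 6, averaged
    over the mini-batch."""
    total = sum(mu_star)  # normalizer of Eq. 6
    n = len(mu_star)
    return sum((t - p) ** 2 * (t / total)
               for t, p in zip(mu_star, mu_theta)) / n
```

Large targets dominate: in the batch `mu_star = [1, 3]`, the second sample carries importance weight 0.75, so its squared error contributes three times as much as the first.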
+ Algorithm 3 describes the procedure ${NN}$ -train. Its input parameters are $L,b,\eta$ , where $L$ is the number of layers, $b$ is a constant determining the number of hidden units, $b \cdot w$ (line 1), and $\eta$ is a constant determining the training sample size $N$ (line 2, Eq. 3). A major step occurs next, where the algorithm generates a dataset of samples (line 3, see also Algorithm 2) and splits it to create a training set of size $N$ , a validation set of size $N/4$ and a test set of size ${50k}$ (line 4). If the bucket $B$ contains a trained NN, then this step requires evaluating that NN. Lines 8-12 then perform batch training to update the NN parameters $\theta$ using the I.m.s.e loss function (Eq. 7) and the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.001 and a batch size $\left( {N}_{B}\right)$ of 256 across all benchmarks. At the end of each epoch, the current model is evaluated on a holdout validation set (line 14). We evaluate the early-stopping criterion (line 15), which is assigned True when either the maximum limit #epochs is reached or the validation error increases for two consecutive epochs. Once training is complete, we compute the maximum log relative error between the target and the NN-approximated messages over a test set (lines 18-19). In the next section, we use this to analyse error propagation in NeuroBE. The NN-train procedure then returns the approximated message ${\mu }_{\theta }$ , along with its estimated error.
+
+ Algorithm 3 NN-train $\left( {F,X,L,b,\eta ,\# \text{ epochs }}\right)$
+
+ Input: $F$ : a set of functions over scope $S \cup \{ X\}$ , where $X$ is to be removed; $w$ : scope size
+ Parameters: $L$ : # layers in the NN; #epochs; $\eta ,b$ : constants
+ Output: ${\mu }_{\theta }$ : NN message approximation; $\widehat{\epsilon }$ : an estimated bucket error bound; ${\widehat{\epsilon }}_{avg}$ : estimated average bucket error
+
+ $\# h \leftarrow b * w$
+ $N \leftarrow \#$ samples $\left( {w,\eta ,L,b}\right)$ {Eq. 3}
+ Data $\leftarrow$ generate-samples $\left( {F,N + N/4 + {50k},X}\right)$
+ ${D}_{\text{ Train }},{D}_{\text{ Val }},{D}_{\text{ Test }} \leftarrow$ Split(Data)
+ Initialize NN parameters $\theta$ , $p = 1$ , early-stopping $\leftarrow$ False
+ while $p \leq \#$ epochs and $\neg$ early-stopping do
+   ${D}_{1},\ldots ,{D}_{k} \leftarrow$ divide ${D}_{\text{ Train }}$ into minibatches
+   for $i = 1 \ldots k$ do
+     Let ${D}_{i} = \left\{ \left( {s,{\mu }^{ * }}\right) \right\}$
+     Compute $\left\{ {{\mu }_{\theta }\left( s\right) \mid s \in {D}_{i}}\right\}$
+     ${\operatorname{loss}}_{{D}_{i}} \leftarrow {L}_{{D}_{i}}\left( {{\mu }^{ * },{\mu }_{\theta }}\right)$ {Eq. 7}
+     $\theta \leftarrow$ update $\theta$ by optimize $\left( {\text{ Adam, }{\text{ loss }}_{{D}_{i}},\theta }\right)$
+   end for
+   ${\operatorname{loss}}_{{D}_{\text{ Val }}} \leftarrow {L}_{{D}_{\text{ Val }}}\left( {{\mu }^{ * },{\mu }_{\theta }}\right)$ {For the stopping condition}
+   early-stopping $\leftarrow$ evaluate early-stopping $\left( {\operatorname{loss}}_{{D}_{Val}}\right)$
+   $p \leftarrow p + 1$
+ end while
+ Unnormalize $\left\{ {{\mu }_{\theta }\left( s\right) ,{\mu }^{ * }\left( s\right) \mid s \in {D}_{\text{ Test }}}\right\}$ {Inverse of Eq. 5}
+ $\widehat{\epsilon } \leftarrow \mathop{\max }\limits_{{s \in {D}_{\text{ Test }}}}\left( {\log {\mu }^{ * }\left( s\right) - \log {\mu }_{\theta }\left( s\right) }\right)$
+ return ${\mu }_{\theta },\widehat{\epsilon }$
+
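The stopping rule used in NN-train can be sketched as a small predicate (function name ours; we read "the validation error increases for two consecutive epochs" as two successive rises of the validation loss, an assumption on our part):

```python
def should_stop(val_losses, max_epochs):
    """Early-stopping rule sketch: stop when the epoch budget is
    exhausted, or when the validation loss has risen for two
    consecutive epochs. `val_losses` holds one entry per
    completed epoch, oldest first."""
    if len(val_losses) >= max_epochs:
        return True
    if len(val_losses) >= 3 and \
       val_losses[-1] > val_losses[-2] > val_losses[-3]:
        return True
    return False
```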
+ Complexity. The time and space complexity of learning a single message in NeuroBE is linear in the sample size. In contrast to ${DBE}$ , here the sample size varies with the bucket's width.
+
+ § 4 ERROR ANALYSIS
+
+ We next analyse the relationship between the local errors contributed by each approximated message and the global partition function error.
+
+ Definition 2 (local and global bucket errors). Let $\lambda$ be the (global) exact message generated in $B$ by the exact BE algorithm, let ${\mu }^{ * }$ be the (local) exact message in $B$ computed from the functions in it, and let $\mu = {NN}\text{-train}\left( {\mu }^{ * }\right)$ be its NN approximation. Then, the Local Bucket Error is the function
+
+ $$
+ E = \log {\mu }^{ * } - \log \mu
+ $$
+
+ (a) Pedigree ($i$-bound = 20; DBE: #h = 100, N = 320k; NeuroBE: #h = 3w, ${\mathrm{N}}_{\min } = {49}$ k). The DBE columns report error statistics over 5 runs under the mean square error loss; ${\mathrm{h}}_{\max }$ , ${\mathrm{N}}_{avg}$ , ${\mathrm{N}}_{\max }$ are NeuroBE resource statistics; the "mse" and "Imse" groups report NeuroBE error statistics over 5 runs under the mean square error and importance mean square error losses, respectively.
+
+ | Id | name | k | #v | w | ref Z | WMB err | #NB | DBE avg | DBE min | DBE std | DBE t(h) | h_max | N_avg | N_max | mse avg | mse min | mse std | mse t(h) | Imse avg | Imse min | Imse std | Imse t(h) |
+ |---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
+ | 1 | pedigree13 | 3 | 888 | 33 | -31.18 | 6.4696 | 127 | 5.32 | 2.62 | 3.06 | 11 | 96 | 218k | 706k | 9.04 | 7.86 | 0.8 | 7.5 | 1.11 | 0.76 | 0.25 | 7.5 |
+ | 2 | pedigree41 | 5 | 885 | 32 | -76.04 | 4.1497 | 92 | 4.27 | 3.25 | 0.73 | 9.9 | 93 | 190k | 658k | 10.2 | 8.4 | 2.16 | 6.2 | 0.47 | 0.153 | 0.21 | 6.2 |
+ | 3 | pedigree51 | 5 | 871 | 35 | -77.27 | 9.7624 | 120 | 23.92 | 9.23 | 12.7 | 13 | 102 | 259k | 809k | 11.73 | 10.37 | 1.26 | 10.4 | 3.51 | 1.96 | 0.89 | 10.4 |
+ | 4 | pedigree34 | 5 | 922 | 33 | -64.23 | 7.0762 | 106 | 5.91 | 1.57 | 5.98 | 11 | 96 | 211k | 706k | 6.14 | 4.56 | 4.14 | 6.96 | 0.65 | 0.23 | 0.29 | 6.96 |
+ | 5 | pedigree7 | 4 | 867 | 34 | -64.82 | 6.0012 | 108 | 11.26 | 5.18 | 7.8 | 11 | 99 | 350k | 900k | # | # | # | 10.7 | 1.75 | 1.21 | 0.7 | 10.7 |
+ | 7 | pedigree19 | 5 | 693 | 28 | -59.020 | 2.5809 | 43 | 6.054 | 5.41 | 0.92 | 9.4 | 71 | 149k | 482k | 9.14 | 8.7 | 0.6 | 5 | 2.61 | 1.91 | 0.6 | 5 |
+
+ (b) Grid-hard ($i$-bound = 20; DBE: #h = 100, N = 320k; NeuroBE: #h = w, ${\mathrm{N}}_{\min } = {19}$ k). Same column layout as (a).
+
+ | Id | name | k | #v | w | ref Z | WMB err | #NB | DBE avg | DBE min | DBE std | DBE t(h) | h_max | N_avg | N_max | mse avg | mse min | mse std | mse t(h) | Imse avg | Imse min | Imse std | Imse t(h) |
+ |---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
+ | 1 | grid4040f10 | 2 | 1600 | 55 | 5490 | 215.45 | 308 | 97.1 | 11.8 | 65.2 | 12 | 55 | 120k | 364k | 14.57 | 0.15 | 8.94 | 7.69 | 9.71 | 2.1 | 0.91 | 6.4 |
+ | 2 | grid4040f5 | 2 | 1600 | 55 | 2800 | 84.92 | 308 | 39.9 | 6.28 | 35 | 12 | 55 | 120k | 364k | 8.86 | 2.05 | 8.21 | 7.66 | 3.7 | 0.2 | 3.06 | 6.3 |
+ | 3 | grid4040f2 | 2 | 1600 | 55 | 1220 | 25.24 | 308 | 7.34 | 1.2 | 5.4 | 12 | 55 | 120k | 364k | 3.15 | 1.48 | 1.66 | 7.48 | 2.28 | 1.2 | 0.91 | 6.1 |
+ | 4 | grid4040f15 | 2 | 1600 | 55 | 8200 | 338.2 | 308 | 83.46 | 41.8 | 34.2 | 13 | 55 | 75k | 228k | 24.75 | 6.62 | 18.1 | 5.7 | 17.87 | 7.45 | 9.16 | 5.7 |
+ | 5 | grid4040f10w | 2 | 1600 | 114 | 5637 | 297.7 | 376 | 100.5 | 6.4 | 82.2 | 21 | 114 | 150k | 670k | 67.76 | 32.7 | 26 | 11 | 54.18 | 25.01 | 21.1 | 11.1 |
+ | 6 | grid4040f5w | 2 | 1600 | 114 | 2819 | 136.99 | 376 | 78.2 | 72.6 | 5.6 | 21 | 114 | 150k | 670k | 5.37 | 1.65 | 4.77 | 11.6 | 9.62 | 4.62 | 5.27 | 12.1 |
+ | 7 | grid4040f2w | 2 | 1600 | 114 | 1231 | 32 | 376 | 15.12 | 0.92 | 20.5 | 18 | 114 | 150k | 670k | 10.15 | 8.81 | 1.44 | 11.8 | 5.56 | 2.92 | 2.11 | 10 |
+
+ (c) Grid-easy ($i$-bound = 10; DBE: #h = 100, N = 320k; NeuroBE: #h = w, ${\mathrm{N}}_{\min } = {20}$ k). Same column layout as (a).
+
+ | Id | name | k | #v | w | ref Z | WMB err | #NB | DBE avg | DBE min | DBE std | DBE t(h) | h_max | N_avg | N_max | mse avg | mse min | mse std | mse t(h) | Imse avg | Imse min | Imse std | Imse t(h) |
+ |---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
+ | 1 | grid1010f10w | 2 | 100 | 21 | 333.32 | 32 | 31 | 4.45 | 0.89 | 3.71 | 1.5 | 21 | 51k | 87k | 1.58 | 0.28 | 0.75 | 0.35 | 1.83 | 0.98 | 0.76 | 0.48 |
+ | 2 | grid1010f10 | 2 | 100 | 13 | 303.09 | 1.58 | 8 | 0.7 | 0.05 | 0.54 | 0.3 | 13 | 23k | 30k | 1.28 | 0.59 | 0.41 | 0.06 | 1.18 | 0.86 | 0.32 | 0.05 |
+ | 3 | grid2020f2 | 2 | 400 | 27 | 291.73 | 11.24 | 114 | 1.98 | 0.36 | 1.28 | 5.3 | 27 | 68k | 177k | 0.4 | 0.013 | 0.36 | 2.24 | 0.13 | 0.015 | 0.14 | 2.5 |
+ | 4 | grid2020f10 | 2 | 400 | 27 | 1312 | 80.86 | 114 | 10.04 | 1.05 | 9.3 | 5.5 | 27 | 68k | 177k | 2.51 | 1.62 | 1.31 | 2.22 | 2.4 | 0.37 | 1.98 | 2.27 |
+ | 5 | grid2020f5 | 2 | 400 | 27 | 665.12 | 39.44 | 114 | 5.75 | 0.24 | 3.1 | 5.5 | 27 | 68k | 177k | 0.84 | 0.37 | 0.53 | 1.95 | 0.8 | 0.081 | 0.78 | 2.3 |
+ | 6 | grid2020f15 | 2 | 400 | 27 | 1963 | 122.91 | 114 | 17.8 | 3.08 | 9.8 | 3 | 27 | 68k | 177k | 2.39 | 0.41 | 2.16 | 2.2 | 2.68 | 1.08 | 2.54 | 2.2 |
+
+ (d) DBN ($i$-bound = 20; DBE: #h = 100, N = 320k; NeuroBE: #h = 3w, ${\mathrm{N}}_{\min } = {147}$ k). Same column layout as (a).
+
+ | Id | name | k | #v | w | ref Z | WMB err | #NB | DBE avg | DBE min | DBE std | DBE t(h) | h_max | N_avg | N_max | mse avg | mse min | mse std | mse t(h) | Imse avg | Imse min | Imse std | Imse t(h) |
+ |---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
+ | 1 | rbm20 | 2 | 40 | 21 | 58.53 | 0.0007 | 20 | 0.22 | 0.03 | 0.17 | 1.1 | 60 | 147k | 147k | 0.37 | 0.08 | 0.29 | 0.45 | 0.37 | 0.06 | 0.25 | 1.73 |
+ | 2 | rbm21 | 2 | 42 | 22 | 63.15 | 6.39 | 22 | 0.48 | 0.27 | 0.19 | 1.3 | 63 | 163k | 164k | 0.9 | 0.59 | 0.2 | 0.46 | 0.57 | 0.15 | 0.26 | 1.75 |
+ | 3 | rbm22 | 2 | 40 | 21 | 66.55 | 8.65 | 24 | 0.47 | 0.14 | 0.35 | 1 | 66 | 180k | 182k | 0.59 | 0.02 | 0.38 | 0.57 | 0.75 | 0.48 | 0.36 | 1.76 |
+ | 4 | rbm-ferro20 | 2 | 44 | 23 | 151.16 | 0.005 | 20 | 1.33 | 0.29 | 1.21 | 1 | 60 | 147k | 147k | 1.11 | 0.38 | 0.75 | 0.44 | 0.75 | 0.37 | 0.32 | 1.13 |
+ | 5 | rbm-ferro21 | 2 | 42 | 22 | 152.62 | 1.98 | 22 | 3.43 | 0.83 | 1.89 | 1.2 | 63 | 163k | 164k | 2.3 | 1.27 | 1.08 | 0.48 | 1.82 | 1.06 | 0.87 | 2.5 |
+ | 6 | rbm-ferro22 | 2 | 44 | 23 | 166.11 | 0.517 | 24 | 6.52 | 3.86 | 1.5 | 1.3 | 66 | 180k | 182k | 5.32 | 2.515 | 2.69 | 0.59 | 4.17 | 3.8 | 0.32 | 1.29 |
+
+ DBN, $i$-bound = 10 (NeuroBE: #h = 3w, ${\mathrm{N}}_{\min } = {82}$ k):
+
+ | Id | name | k | #v | w | ref Z | WMB err | #NB | DBE avg | DBE min | DBE std | DBE t(h) | h_max | N_avg | N_max | mse avg | mse min | mse std | mse t(h) | Imse avg | Imse min | Imse std | Imse t(h) |
+ |---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
+ | 1 | rbm20 | 2 | 40 | 21 | 58.53 | 7.85 | 30 | 0.49 | 0.11 | 0.35 | 1.6 | 100 | 82k | 87k | 1.57 | 0.89 | 1.04 | 0.3 | 1.43 | 0.83 | 0.82 | 0.8 |
+ | 2 | rbm21 | 2 | 42 | 22 | 63.15 | 15.73 | 32 | 0.73 | 0.14 | 0.47 | 1.7 | 105 | 90k | 109k | 1.62 | 0.68 | 0.98 | 0.4 | 0.89 | 0.51 | 0.42 | 1.02 |
+ | 3 | rbm22 | 2 | 40 | 21 | 66.55 | 27.46 | 34 | 0.65 | 0.19 | 0.41 | 1.8 | 110 | 99.5k | 121k | 0.89 | 0.03 | 0.6 | 0.46 | 0.94 | 0.83 | 0.61 | 1.3 |
+
+ Figure 3: Performance of NeuroBE against DBE and WMB on (a) pedigree, (b) Grid-hard, (c) Grid-easy and (d) DBN instances. k: domain size, #v: number of variables, w: induced width, #NB: number of buckets trained with NNs, #h: number of hidden units per layer (the maximum #h is reported for NeuroBE), N: number of training samples (the minimum, average and maximum N are reported for NeuroBE), error: L1 error between the referenced and estimated $\log \left( Z\right)$ (minimum, average, and standard deviation over 5 runs are reported for ${DBE}$ and NeuroBE), time: average time taken to obtain the estimated error. A # in a cell denotes that the estimated partition function is $- \infty$ . *Note: here the referenced $Z$ is approximated by Kask et al. (2020).
+
+ The Global Bucket Error is
+
+ $$
+ G = \log \lambda - \log \mu
+ $$
+
+ The above errors correspond to logs of relative errors. We use the log relative error here because bounding the global error as a function of the local errors turns out to be easier.
+
+ Theorem 1. Assume a bucket chain along an ordering $d$ and let $B$ be a bucket along the chain at index $c$ . Let $E\left( s\right) = \ln {\mu }^{ * }\left( s\right) - \ln \mu \left( s\right)$ as defined above and let $\epsilon = \mathop{\max }\limits_{{s \in D\left( S\right) }}\left| {E\left( s\right) }\right|$ , where $S$ is the scope of the outgoing message from $B$ and $D\left( S\right)$ is the set of all possible configurations of $S$ . Then,
+
+ $$
+ E = \ln \lambda - \ln \mu \leq \mathop{\sum }\limits_{{k = 0}}^{{n - c}}{\epsilon }_{c + k}
+ $$
+
+ In particular, since ${\lambda }_{1} = Z$ , for the partition function
+
+ $$
+ {E}_{1} = \ln Z - \ln {\mu }_{1} \leq \mathop{\sum }\limits_{{k = 0}}^{{n - 1}}{\epsilon }_{1 + k} \tag{9}
+ $$
+
+ For the proof, see the Appendix.
+
+ In the next section we evaluate NeuroBE.
+
+ § 5 EXPERIMENTS
465
+
466
+ § 5.1 EXPERIMENT SETUP
467
+
468
+ We ran experiments comparing NeuroBE against the Weighted Mini Bucket Elimination scheme (WMB) (Dechter and Rish, 2003; Liu and Ihler, 2012) and DBE (Razeghi et al., 2021).
469
+
470
+ Benchmarks Following the methodology in ${DBE}$ , we evaluated NeuroBE on instances selected from three well-known benchmarks from the UAI repository used in Kask et al. (2020), i.e. grids (vision domain), pedigree (genetic linkage analysis) and DBNs. We targeted diverse benchmarks (in structure and level of determinism) and aimed for different levels of hardness. Thus, in the grids benchmark, we distinguish between problems that can be solved exactly, which we call "grid-easy", and those that cannot, called "grid-hard". We also distinguish benchmarks that possess determinism, namely a high proportion of zero probabilities, a feature which can impact training. We randomly selected 13 instances from Grids, with easy ones (width 20-30) and hard ones (1600 variables, width 55 or 114), 6 from pedigrees, which possess a high level of determinism, and 6 from DBNs, totalling 25 instances.
471
+
472
+ NN architectures and sample sizes. To trigger bucket message approximations, we used $i$ -bound $= {10}$ for easy problems and $i$ -bound $= {20}$ for hard ones. For problems with determinism, such as pedigree, the NN architecture has two sets of output layers: one indicates whether the input configuration has non-zero probability, while the other specifies the message value if so. The loss is then the sum of the cross-entropy loss (from the first output layer) and the (importance) mean square loss (from the second output layer). We keep the number of layers fixed $\left( { = 2}\right)$ across all benchmarks. Through a process of trial and error applied to a representative instance selected from each benchmark, we fixed the architecture parameters and sample sizes as follows. We selected $h = {3w}$ and ${N}_{\text{ avg }} \in \left\lbrack {{149k},{350k}}\right\rbrack$ for pedigrees; $h = {3w}$ and ${N}_{\text{ avg }} \in \left\lbrack {{80k},{180k}}\right\rbrack$ for DBN; $h = w$ and ${N}_{\text{ avg }} \in \left\lbrack {{23k},{68k}}\right\rbrack$ for grid-easy; and $h = w$ and ${N}_{\text{ avg }} \in \left\lbrack {{75k},{150k}}\right\rbrack$ for grid-hard.
473
+
474
+ Performance measures We evaluate the performance of NeuroBE using: $\operatorname{error} = \left| {{\log }_{e}Z - {\log }_{e}\widehat{Z}}\right|$ where $\widehat{Z}$ is the estimate of the partition function, $Z$ . When the exact $Z$ is not available (for the hard Grid benchmark), ${Z}^{ * }$ is used as a surrogate for $Z$ , obtained using an advanced sampling scheme for a duration of ${100} * {1hr}$ (Kask et al., 2020).
475
+
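The error metric above is a direct computation; the following minimal Python sketch (the function name and the guard for a degenerate estimate $\widehat{Z} = 0$ are ours, not from the paper) makes it concrete:

```python
import math

def log_partition_error(z_ref, z_hat):
    """Absolute error |log Z - log Z_hat| between reference and estimated
    partition function values."""
    if z_hat == 0.0:                 # estimate collapsed to 0: its log is -inf
        return math.inf
    return abs(math.log(z_ref) - math.log(z_hat))
```

When the reference $Z$ is itself a sampling-based surrogate $Z^*$, the same formula is applied with $Z^*$ in place of $Z$.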
476
+ § 5.2 RESULTS
477
+
478
+ Figure 3 compares ${NeuroBE}$ against ${WMB}$ and ${DBE}$ over the 3 benchmarks. The first few columns show the problem statistics for instances in the respective benchmarks (pedigree, DBN and grids). We then show ${WMB}$ error, followed by the average error, minimum error, standard deviation and average time (in hours) over 5 runs (due to stochasticity) for both ${DBE}$ and ${NeuroBE}$ with m.s.e and I.m.s.e loss functions. For ${NeuroBE}$ , we also report the average and maximum #training samples, $\left( {{N}_{\text{ avg }},{N}_{\text{ max }}}\right)$ and maximum #hidden units, $\left( {h}_{\max }\right)$ across all buckets of the instance.
479
+
480
+ Pedigrees We observe immediately that overall NeuroBE has far superior performance to ${DBE}$ , especially with the I.m.s.e loss function. In particular, it is $\geq 5$ times more accurate than ${DBE}$ for almost all the instances and takes less time, since it uses far fewer training samples. Also, NeuroBE outperforms WMB on most of the instances. Here, ${DBE}$ yields accuracy similar to (or even worse than) ${WMB}$ .
481
+
482
+ Grids. Here too we observe that NeuroBE outperforms ${DBE}$ in accuracy, particularly with the I.m.s.e loss, as reflected by the average error and standard deviation, even though it uses far less time. In most cases, we see a reduction in time by a factor of 2 while still producing a far better estimate. ${NeuroBE}$ and ${DBE}$ outperform ${WMB}$ across all problem instances.
483
+
484
+ (a) pedigree ($i$-bound = 20; columns 4-7: $h = 3w$, columns 8-11: $h = 5w$)
+
+ | k | #v | W | #N_avg (10³) | std. dev. | error | t(h) | #N_avg (10³) | std. dev. | error | t(h) |
+ |---|----|---|--------------|-----------|-------|------|--------------|-----------|-------|------|
+ | 3 | 888 | 33 | 218 | 0.25 | 1.11 | 7.5 | 437 | 0.17 | 0.34 | 14.3 |
+ | 5 | 871 | 35 | 259 | 0.89 | 3.51 | 10.4 | 518 | 1.57 | 1.42 | 18.3 |
+ | 5 | 693 | 28 | 149 | 0.6 | 2.61 | 5 | 298 | 0.51 | 0.68 | 8.2 |
+
502
+ (b) Grid-hard ($i$-bound = 20; both blocks use $h = w$, with a larger sample size in columns 8-11)
+
+ | Id | #v | W | #N_avg (10³) | std. dev. | error | t(h) | #N_avg (10³) | std. dev. | error | t(h) |
+ |----|----|---|--------------|-----------|-------|------|--------------|-----------|-------|------|
+ | 5 | 1600 | 114 | 120 | 10.82 | 55.11 | 12 | 262 | 16.3 | 17.76 | 17 |
+ | 7 | 1600 | 114 | 120 | 3.6 | 5.5 | 10 | 262 | 1.39 | 2.34 | 14.5 |
+
517
+ (a) pedigree (b) Grid-hard (c) Grid-easy (d) DBN
518
+
519
+ (c) Grid-easy ($i$-bound = 10; columns 4-7: $h = w$, columns 8-11: $h = 3w$)
+
+ | Id | #v | W | #N_avg (10³) | std. dev. | error | t(h) | #N_avg (10³) | std. dev. | error | t(h) |
+ |----|----|---|--------------|-----------|-------|------|--------------|-----------|-------|------|
+ | 1 | 100 | 21 | 51 | 0.76 | 1.83 | 0.48 | 202 | 0.08 | 1.2 | 1.13 |
+ | 2 | 100 | 13 | 23 | 0.32 | 1.18 | 0.05 | 38 | 0.15 | 0.12 | 0.09 |
+
534
+ (d) DBN ($i$-bound = 20; columns 4-7: $h = 3w$, columns 8-11: $h = 5w$)
+
+ | Id | #v | W | #N_avg (10³) | std. dev. | error | t(h) | #N_avg (10³) | std. dev. | error | t(h) |
+ |----|----|---|--------------|-----------|-------|------|--------------|-----------|-------|------|
+ | 1 | 40 | 21 | 147 | 0.25 | 0.37 | 1.73 | 245 | 0.07 | 0.2 | 3.2 |
+ | 2 | 42 | 22 | 163 | 0.26 | 0.57 | 1.75 | 272 | 0.24 | 0.77 | 3.7 |
+ | 3 | 40 | 21 | 163 | 0.36 | 0.75 | 1.76 | 300 | 0.22 | 0.51 | 4.5 |
+ | 4 | 44 | 23 | 147 | 0.32 | 0.75 | 1.13 | 245 | 0.61 | 0.67 | 2.17 |
+ | 5 | 42 | 22 | 164 | 0.87 | 1.82 | 1.3 | 272 | 0.23 | 1.06 | 2.6 |
+ | 6 | 44 | 23 | 182 | 0.32 | 4.17 | 1.29 | 300 | 0.86 | 3.43 | 2.9 |
+
561
+ Figure 4: Performance of NeuroBE when increasing #samples and/or NN complexity. ${N}_{avg}$ : average #samples, $t\left( h\right)$ : average time (hours), Error: global error (reported average and standard deviation over 5 runs)
562
+
563
+ DBN We report results for the DBN benchmark for two $i$ -bounds. For $i$ -bound $= {20}$ , NeuroBE, mostly with the I.m.s.e loss, achieves a higher accuracy than ${DBE}$ with far fewer training samples (but with more training time). It is superior to ${WMB}$ on instances 2, 3 and 5. However, ${WMB}$ performs better on instances 1, 4 and 6, as the induced width is closer to the $i$ -bound. For $i$ -bound $= {10}$ , NeuroBE shows better accuracy than ${WMB}$ for all three instances. It is comparable to ${DBE}$ on most instances while taking less time.
564
+
565
+ Overall, compared with DBE, NeuroBE using I.m.s.e is about ${50}\%$ faster while also far more accurate on pedigrees, and twice as fast and 5- to 10-fold more accurate on hard grids. It is also faster and more accurate on easy grids and has comparable performance on the DBN benchmark.
566
+
567
+ The impact of loss functions. We observe that NeuroBE with the I.m.s.e loss shows better performance (lower average error and standard deviation) than NeuroBE with the m.s.e loss for the pedigree instances, and for the majority of DBN, grid-easy and grid-hard instances. An F-test on the two groups of partition function estimates (each consisting of 5 approximations) showed that the means are significantly different for pedigrees, (Fig. 3(a)). For the grids and DBN, there was no statistical difference between the two means. However, by inspection, we see a reduction in the standard deviation for almost all instances.
568
+
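The F-test mentioned above compares the means of two groups of partition function estimates (5 per group). As an illustration only (not the authors' code; in practice one would use a library routine such as `scipy.stats.f_oneway`), here is a dependency-free one-way ANOVA F statistic for two groups:

```python
def f_statistic(group_a, group_b):
    """One-way ANOVA F statistic for two groups (df_between = 1,
    df_within = n_a + n_b - 2). Large F suggests the group means differ."""
    na, nb = len(group_a), len(group_b)
    ma, mb = sum(group_a) / na, sum(group_b) / nb
    grand = (sum(group_a) + sum(group_b)) / (na + nb)
    ss_between = na * (ma - grand) ** 2 + nb * (mb - grand) ** 2
    ss_within = (sum((x - ma) ** 2 for x in group_a)
                 + sum((x - mb) ** 2 for x in group_b))
    return ss_between / (ss_within / (na + nb - 2))
```

The statistic is then compared against an $F(1, n_a + n_b - 2)$ critical value at the chosen significance level.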
569
+ Impact of architecture size. Figure 4 shows the impact of architecture size on time and accuracy for a few problem instances. We show the average error of the partition function estimate (what we call the global error), the standard deviation (over 5 runs), and the computation time for two different NN architectures and their associated sample sizes. As expected, we see that increasing the NN and sample sizes increases the time and accuracy for pedigrees. For grid-hard instances, we just increased the sample sizes for the same architecture having $h = w$ , and observe that the average error is reduced, as expected. Instances from grid-easy and DBN (except instance #2) show a similar improvement in performance with a larger NN and training sample size. This shows that the algorithm has an anytime characteristic, as it can improve its performance by controlling the size of the approximating NN, matched by a suitable sample size for training.
570
+
571
+ § 6 CONCLUSION & FUTURE WORK
572
+
573
+ In this work, we advance the earlier theme of using Neural Networks to approximate the class of bucket-elimination algorithms that is at the heart of probabilistic reasoning. NeuroBE can be viewed as a realization of Neuro-Dynamic Programming schemes (Bertsekas and Tsitsiklis, 1996) in the context of graphical models. That being said, it requires training numerous NNs per problem instance, and thus the central aim of NeuroBE's design (customizing NN architectures, training samples and the loss function to the message) is to enhance the efficiency of such schemes. We presented NeuroBE and illustrated, on challenging instances over three benchmarks, that it can be far more accurate and require less time compared with Deep Bucket Elimination (DBE). It is also superior to weighted mini-bucket (WMB), which cannot improve its accuracy once its memory is exhausted. We believe that NeuroBE has the potential to become more time efficient and may also extend to learning across a set of instances from the same benchmark domain.
574
+
575
+ Future Work. While NeuroBE adjusts the NN architectures according to a bucket's scope, it keeps the number of layers fixed, a hyper-parameter we wish to explore varying dynamically. We will also explore training a single function per union of buckets, which yields a cluster in a tree-decomposition (Dechter, 2013). This can significantly reduce the number of trained functions at the cost of more time for sample generation, a trade-off we plan to study. We will also explore parameter sharing and training multiple bucket functions simultaneously.
UAI/UAI 2022/UAI 2022 Conference/BCg4lD8ice5/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,575 @@
 
1
+ Local Calibration: Metrics and Recalibration
2
+
3
+ ## Abstract
4
+
5
+ Probabilistic classifiers output confidence scores along with their predictions, and these confidence scores should be calibrated, i.e., they should reflect the reliability of the prediction. Confidence scores that minimize standard metrics such as the expected calibration error (ECE) accurately measure the reliability on average across the entire population. However, it is in general impossible to measure the reliability of an individual prediction. In this work, we propose the local calibration error (LCE) to span the gap between average and individual reliability. For each individual prediction, the LCE measures the average reliability of a set of similar predictions, where similarity is quantified by a kernel function on a pretrained feature space and by a binning scheme over predicted model confidences. We show theoretically that the LCE can be estimated sample-efficiently from data, and empirically find that it reveals miscalibration modes that are more fine-grained than the ECE can detect. Our key result is a novel local recalibration method LoRe, to improve confidence scores for individual predictions and decrease the LCE. Experimentally, we show that our recalibration method produces more accurate confidence scores, which improves downstream fairness and decision making on classification tasks with both image and tabular data.
6
+
7
+ ## 1 INTRODUCTION
8
+
9
+ Uncertainty estimation is extremely important in high stakes decision-making tasks. For example, a patient wants to know the probability that a medical diagnosis is correct; an autonomous driving system wants to know the probability that a pedestrian is correctly identified. Uncertainty estimates are usually achieved by predicting a probability along with each classification. Ideally, we want to achieve individual calibration, i.e., we want to predict the probability that each sample is misclassified.
10
+
11
+ However, each sample is observed only once for most datasets (e.g., image classification datasets do not contain identical images), making it impossible to estimate, or even define, the probability of incorrect classification for individual samples. Because of this, commonly used metrics such as the expected calibration error (ECE) measure the gap between a classifier's confidence and accuracy averaged across the entire dataset. Consequently, ECE can be accurately estimated but does not measure the reliability of individual predictions.
12
+
13
+ In this work, we propose the local calibration error (LCE), a calibration metric that spans the gap between fully global (e.g., ECE) and fully individual calibration. Motivated by the success of kernel-based locality in other fields such as fairness (where similar individuals should be treated similarly) [Dwork et al., 2012, Pleiss et al., 2017] and causal inference (where matching techniques are used to find similar neighboring samples) [Stuart, 2010], we approximate the probability of misclassification for an individual sample by computing the average classification error over similar samples, where similarity is measured by a kernel function in a pre-trained feature space and a binning scheme over predicted confidences. Intuitively, two samples are similar if they are close in a pretrained feature space and have similar predicted confidence scores. By choosing the bandwidth of the kernel function, we can trade off estimation accuracy and individuality: when the bandwidth is very large, we recover existing global calibration metrics; when the bandwidth is small, we approximate individual calibration. We choose an intermediate bandwidth, so our metric can be accurately estimated, and provides some measurement on the reliability of individual predictions.
14
+
15
+ Theoretically, we show that the LCE can be estimated with polynomially many samples if the kernel function is bounded. Empirically, we also show that for intermediate values of the bandwidth, the LCE can be accurately estimated and reveals modes of miscalibration that global metrics (such as ECE) fail to uncover.
16
+
17
+ In addition, we introduce a non-parametric, post-hoc localized recalibration method (LoRe), for lowering the LCE. Empirically, LoRe improves fairness by achieving low calibration error on all potentially sensitive subsets of the data, such as racial groups. Notably, it can do so without any prior knowledge of those groups, and is more effective than global methods at this task. In addition, our recalibration method improves decision making when there is a "safe" action that is selected whenever the predicted confidence is low. For example, an automated system which classifies tissue samples as cancerous should request a human expert opinion whenever it is unsure about a classification. In a simulation on an image classification dataset, we show that recalibrated prediction models more accurately choose whether to use the "safe" action, which improves the overall utility.
18
+
19
+ In summary, the contributions of our paper are as follows. (1) We introduce a local calibration metric, the LCE, that is both easy to compute and can estimate the reliability of individual predictions. (2) We introduce a post-hoc localized recalibration method LoRe, that transforms a model's confidence predictions to improve the local calibration. (3) We empirically evaluate LoRe on several downstream tasks and observe that LoRe improves fairness and decision-making more than existing baselines.
20
+
21
+ ## 2 BACKGROUND AND RELATED WORK
22
+
23
+ ### 2.1 GLOBAL CALIBRATION METRICS
24
+
25
+ Consider a classification task that maps from some input domain (e.g., images) $\mathcal{X}$ to a finite set of labels $\mathcal{Y} =$ $\{ 1,\cdots , m\}$ . A classifier is a pair $\left( {f,\widehat{p}}\right)$ where $f : \mathcal{X} \rightarrow \mathcal{Y}$ maps each input $x \in \mathcal{X}$ to a label $y \in \mathcal{Y}$ and $\widehat{p} : \mathcal{X} \rightarrow \left\lbrack {0,1}\right\rbrack$ maps each input $x$ to a confidence value $c$ . Let $\Pr$ be a joint distribution on $\mathcal{X} \times \mathcal{Y}$ (e.g., from which training or test data pairs $(x, y)$ are drawn). The classifier $\left( {f,\widehat{p}}\right)$ is perfectly calibrated [Guo et al., 2017] with respect to $\Pr$ if for all $c \in \left\lbrack {0,1}\right\rbrack$
26
+
27
+ $$
28
+ \Pr \left\lbrack {f\left( X\right) = Y \mid \widehat{p}\left( X\right) = c}\right\rbrack = c. \tag{1}
29
+ $$
30
+
31
+ To numerically measure how well a classifier is calibrated, the most commonly used metric is the expected calibration error (ECE) [Naeini et al., 2015, Guo et al., 2017], which measures the average absolute deviation from Eq. 1 over the domain. In practice, given a finite dataset, the ECE is approximated by binning. The predicted confidences $\widehat{p}$ are partitioned into bins ${B}_{1},\ldots ,{B}_{k}$ , and then a weighted average is taken of the absolute difference between the average confidence $\operatorname{conf}\left( {B}_{i}\right)$ and average accuracy $\operatorname{acc}\left( {B}_{i}\right)$ for each
32
+
33
+ bin ${B}_{i}$ :
34
+
35
+ $$
36
+ \operatorname{ECE}\left( {f,\widehat{p}}\right) \mathrel{\text{:=}} \mathop{\sum }\limits_{{i = 1}}^{k}\frac{\left| {B}_{i}\right| }{N}\left| {\operatorname{conf}\left( {B}_{i}\right) - \operatorname{acc}\left( {B}_{i}\right) }\right| . \tag{2}
37
+ $$
38
+
39
+ Similarly, the maximum calibration error (MCE) [Naeini et al., 2015, Guo et al., 2017] measures the average deviation from Eq. 1 in the bin with the highest calibration error, and is defined as
40
+
41
+ $$
42
+ \operatorname{MCE}\left( {f,\widehat{p}}\right) \mathrel{\text{:=}} \mathop{\max }\limits_{i}\left| {\operatorname{conf}\left( {B}_{i}\right) - \operatorname{acc}\left( {B}_{i}\right) }\right| . \tag{3}
43
+ $$
44
+
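The binned estimators in Eqs. 2 and 3 can be sketched in a few lines of Python (our illustration; 0/1 correctness indicators stand in for $\mathbb{1}[f(x)=y]$, and equal-width bins are assumed):

```python
def ece_mce(confidences, correct, n_bins=15):
    """Binned ECE (Eq. 2) and MCE (Eq. 3) from predicted confidences in
    [0, 1] and 0/1 correctness indicators."""
    bins = [[] for _ in range(n_bins)]
    for c, y in zip(confidences, correct):
        i = min(int(c * n_bins), n_bins - 1)   # confidence 1.0 -> last bin
        bins[i].append((c, y))
    n = len(confidences)
    ece, mce = 0.0, 0.0
    for b in bins:
        if not b:
            continue                            # empty bins contribute nothing
        conf = sum(c for c, _ in b) / len(b)    # average confidence in bin
        acc = sum(y for _, y in b) / len(b)     # average accuracy in bin
        gap = abs(conf - acc)
        ece += len(b) / n * gap                 # weighted average (Eq. 2)
        mce = max(mce, gap)                     # worst bin (Eq. 3)
    return ece, mce
```

For a perfectly calibrated set of predictions, both quantities are zero.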
45
+ ### 2.2 EXISTING GLOBAL RECALIBRATION METHODS
46
+
47
+ Many existing methods apply a post-hoc adjustment that changes a model's confidence predictions to improve global calibration, including Platt scaling [Platt, 1999], temperature scaling [Guo et al., 2017], isotonic regression [Zadrozny and Elkan, 2002], and histogram binning [Zadrozny and Elkan, 2001]. These methods all learn a simple transformation from the original confidence predictions to new confidence predictions, and aim to decrease the expected calibration error (ECE). Platt scaling fits a logistic regression model; temperature scaling learns a single temperature parameter to rescale confidence scores for all samples simultaneously; isotonic regression learns a piece-wise constant monotonic function; histogram binning partitions confidence scores into bins $\{ [0,\epsilon ), [\epsilon ,{2\epsilon }), \cdots , [1 - \epsilon ,1] \}$ and sorts each validation sample into a bin based on its confidence $\widehat{p}\left( x\right)$ ; it then resets the confidence level of all samples in the bin to match the classification accuracy of that bin.
48
+
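The histogram-binning step just described can be sketched as follows (our minimal version with equal-width bins; the fallback to the bin centre for empty bins is our choice, not specified in the original method):

```python
def fit_histogram_binning(confs, correct, n_bins=15):
    """Learn, from a validation set, the empirical accuracy of each
    confidence bin; that accuracy becomes the bin's new confidence."""
    hits = [0] * n_bins
    counts = [0] * n_bins
    for c, y in zip(confs, correct):
        i = min(int(c * n_bins), n_bins - 1)   # confidence 1.0 -> last bin
        hits[i] += y
        counts[i] += 1
    # empty bins fall back to the bin centre (our assumption)
    return [hits[i] / counts[i] if counts[i] else (i + 0.5) / n_bins
            for i in range(n_bins)]

def recalibrate(conf, bin_accs, n_bins=15):
    """Replace a raw confidence by the accuracy of its bin."""
    return bin_accs[min(int(conf * n_bins), n_bins - 1)]
```

At inference time, each new prediction's confidence is simply looked up in its bin.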
49
+ ### 2.3 LOCAL CALIBRATION
50
+
51
+ Two notions of calibration that address some of the deficits of global calibration are class-wise calibration and groupwise calibration. Class-wise calibration groups samples by their true class label [Kull et al., 2019, Nixon et al., 2019] and measures the average class ECE, while group-wise calibration uses pre-specified groupings (e.g., race or gender) [Kleinberg et al., 2016, Pleiss et al., 2017] and measures the average group-wise ECE or maximum group-wise MCE.
52
+
53
+ A few recalibration methods have been proposed for these notions of calibration as well. Dirichlet calibration [Kull et al., 2019] achieves calibration for groups defined by class labels, but does not generalize well to settings with many classes [Zhao et al., 2021]. Multicalibration [Hébert-Johnson et al., 2017] achieves calibration for any group that can be represented by a polynomial sized circuit, but lacks a tractable algorithm. If the groups are known a priori, one can also apply global calibration methods within each group; however, this is impractical in many situations where the groups are not known for new examples at inference time. At an even more local level, Zhao et al. [2020] looks at individual calibration in the regression setting and concludes that individual calibration is impossible to verify with a deterministic forecaster, and thus there is no general method to achieve individual calibration.
54
+
55
+ ### 2.4 KERNEL-BASED CALIBRATION METRICS
56
+
57
+ Kumar et al. [2018] introduces the maximum mean calibration error (MMCE), a kernel-based quantity that replaces the hard binning of the standard ECE estimator with a kernel similarity $k\left( {\widehat{p}\left( x\right) ,\widehat{p}\left( {x}^{\prime }\right) }\right)$ between the confidence of two examples. They further propose to optimize the MMCE directly in order to achieve better model calibration globally. Widmann et al. [2019] extends their work and proposes the more general kernel calibration error. Zhang et al. [2020] and Gupta et al. [2020] also consider kernel-based calibration. However, these methods only consider the similarity between model confidences $\widehat{p}\left( x\right) ,\widehat{p}\left( {x}^{\prime }\right)$ , rather than the inputs $x,{x}^{\prime }$ themselves.
58
+
59
+ ## 3 THE LOCAL CALIBRATION ERROR
60
+
61
+ Recall that commonly used metrics for calibration, such as the ECE or the MCE, are global in nature and thus only measure an aggregate reliability over the entire dataset, making them insufficient for many applications. An ideal calibration metric would instead measure calibration at an individual level; however, doing so is impossible without making assumptions about the ground truth distribution [Zhao et al., 2020]. A localized calibration metric represents an adjustable balance between these two extremes. Ideally, such a metric should measure calibration at a local level (where the extent of the local neighborhood can be chosen by the user) and group similar data points together.
62
+
63
+ In this section, we introduce the local calibration error (LCE), a kernel-based metric that allows us to measure the calibration locally around a prediction. Our metric leverages learned features to automatically group similar samples into a soft neighborhood, and allows the neighborhood size to be set with a hyperparameter $\gamma$ . We also consider only points with a similar model confidence as the prediction, so that similarity is defined in terms of distance both in the feature space and in model confidence. Thus, the LCE effectively creates soft groupings that depend on the feature space; with a semantically meaningful feature space, these groupings correspond to useful subsets of the data. We then mention a few design choices and visualize LCE maps over a 2D feature space to show that we can use our metric to diagnose regions of local miscalibration.
64
+
65
+ ### 3.1 LOCAL CALIBRATION ERROR METRIC
66
+
67
+ We propose a metric to measure calibration locally around a given prediction. The calibration of similar samples should be similar, so we use a kernel similarity function ${k}_{\gamma } : \mathcal{X} \times$ $\mathcal{X} \rightarrow {\mathbb{R}}_{ + }$ , which provides similarity scores, to define soft local neighborhoods. ${k}_{\gamma }\left( {x,{x}^{\prime }}\right)$ has bandwidth $\gamma > 0$ , which determines the extent of the local neighborhood - as $\gamma$ increases, the neighborhood grows. Less similar (i.e., further away) samples ${x}^{\prime }$ have less influence on the local calibration metric at $x$ . Also, as with the ECE and MCE (Eqs. 2 and 3), we use binning and consider only the points in the same confidence bin as $x$ . Thus, the samples that influence the local calibration metric at $x$ are similar to $x$ in both features and model confidences.
68
+
69
+ More formally, let $\phi : \mathcal{X} \rightarrow {\mathbb{R}}^{d}$ be a feature map that transforms an input to a feature vector, and let ${k}_{\gamma }$ be parameterized as ${k}_{\gamma }\left( {x,{x}^{\prime }}\right) = g\left( {\left( {\phi \left( x\right) - \phi \left( {x}^{\prime }\right) }\right) /\gamma }\right)$ for some Lipschitz function $g : {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}_{ + }$ . Then given a data point $x \in \mathcal{X}$ and a classifier $\left( {f,\widehat{p}}\right)$ , the local calibration error (LCE) of the model at $x$ is the expected difference between the model's confidence and accuracy on a randomly sampled data point ${x}^{\prime } \sim \Pr$ , weighted by the kernel similarity ${k}_{\gamma }\left( {x,{x}^{\prime }}\right)$ .
70
+
71
+ We say a probabilistic classifier $\left( {\widehat{p}, f}\right)$ is perfectly locally calibrated with respect to ${k}_{\gamma }$ if
72
+
73
+ $$
74
+ \mathop{\sup }\limits_{{x \in \operatorname{supp}\left( \Pr \right) }}\underset{\mathrel{\text{:=}} {\mathrm{LCE}}_{\gamma }^{ * }\left( {x;f,\widehat{p}}\right) }{\underbrace{{\mathbb{E}}_{\left( {{x}^{\prime },{y}^{\prime }}\right) \sim \Pr }\left\lbrack {\left( {\widehat{p}\left( {x}^{\prime }\right) - \mathbb{1}\left\lbrack {f\left( {x}^{\prime }\right) = {y}^{\prime }}\right\rbrack }\right) {k}_{\gamma }\left( {x,{x}^{\prime }}\right) }\;\middle|\; {\widehat{p}\left( {x}^{\prime }\right) = \widehat{p}\left( x\right) }\right\rbrack }} = 0.
75
+ $$
76
+
77
+ Similar to perfect calibration, perfect local calibration is achieved by the Bayes-optimal classifier. In general, perfect local calibration is a much stricter notion than perfect calibration due to localizing to each individual data point $x$ , and reduces to perfect calibration if ${k}_{\gamma }\left( {x,{x}^{\prime }}\right) \equiv 1$ is a trivial kernel.
78
+
79
+ To define LCE on a finite dataset, we perform an additional binning on the confidence to deal with the conditioning.
80
+
81
+ Let $\mathcal{D} = \left( {\left( {{x}_{1},{y}_{1}}\right) ,\ldots ,\left( {{x}_{N},{y}_{N}}\right) }\right)$ be a dataset, and let $\beta \left( x\right) = \left\{ {i : \widehat{p}\left( {x}_{i}\right) \in B\left( {\widehat{p}\left( x\right) }\right) }\right\}$ be the set of indices of the points in $\mathcal{D}$ occupying the same confidence bin as $x$ . Then we can compute the LCE by
82
+
83
+ $$
+ {\operatorname{LCE}}_{\gamma }\left( {x;f,\widehat{p}}\right) = \left| \frac{\mathop{\sum }\limits_{{i \in \beta \left( x\right) }}\left( {\widehat{p}\left( {x}_{i}\right) - \mathbb{1}\left\lbrack {f\left( {x}_{i}\right) = {y}_{i}}\right\rbrack }\right) {k}_{\gamma }\left( {x,{x}_{i}}\right) }{\mathop{\sum }\limits_{{i \in \beta \left( x\right) }}{k}_{\gamma }\left( {x,{x}_{i}}\right) }\right| . \tag{4}
+ $$
90
+
91
+ Note that the quantity $\left( {\widehat{p}\left( {x}_{i}\right) - \mathbb{1}\left\lbrack {f\left( {x}_{i}\right) = {y}_{i}}\right\rbrack }\right)$ is simply the difference between the confidence and the accuracy for sample ${x}_{i}$ , and the denominator is a normalization term.
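A direct finite-sample implementation of Eq. 4 can be sketched as follows (our illustration, using the paper's Laplacian kernel on feature vectors; all function and variable names are ours):

```python
import math

def lce(x_feat, x_conf, feats, confs, correct, gamma, n_bins=15):
    """Finite-sample LCE at one point (Eq. 4): kernel-weighted average of
    (confidence - correctness) over samples in the same confidence bin."""
    d = len(x_feat)
    def bin_of(c):
        return min(int(c * n_bins), n_bins - 1)
    num = den = 0.0
    for f, c, y in zip(feats, confs, correct):
        if bin_of(c) != bin_of(x_conf):          # beta(x): same bin only
            continue
        # Laplacian kernel on features: exp(-||phi(x) - phi(x')||_1 / (d*gamma))
        k = math.exp(-sum(abs(a - b) for a, b in zip(x_feat, f)) / (d * gamma))
        num += (c - y) * k
        den += k
    return abs(num / den) if den else 0.0        # empty neighborhood -> 0 (ours)
```

The MLCE of Eq. 5 is then the maximum of this quantity over the evaluation points.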
92
+
93
+ We then define the maximum local calibration error (MLCE) as
94
+
95
+ $$
96
+ {\operatorname{MLCE}}_{\gamma }\left( {f,\widehat{p}}\right) \mathrel{\text{:=}} \mathop{\max }\limits_{x}{\operatorname{LCE}}_{\gamma }\left( {x;f,\widehat{p}}\right) . \tag{5}
97
+ $$
98
+
99
+ Intuitively, the LCE considers a neighborhood about a sample $x$ (as defined by the kernel ${k}_{\gamma }$ and the confidence bin $B$ ), and computes the kernel-weighted average of the difference between the confidence and accuracy for each sample in that neighborhood. Note that by changing the bandwidth $\gamma$ , we can interpolate the LCE between an individualized calibration metric (as $\gamma \rightarrow 0$ ) and a global one (as $\gamma \rightarrow \infty$ ). Lemma 1 makes this more concrete under the assumption that $\mathop{\lim }\limits_{{\gamma \rightarrow \infty }}{k}_{\gamma }\left( {x,{x}^{\prime }}\right) = 1$ (proof in Appendix C). For example, the Laplacian and Gaussian kernels satisfy this condition.
+
+ Lemma 1. As $\gamma \rightarrow \infty$ , the MLCE converges to the MCE.
100
+
101
+ Theorem 1 shows that under certain regularity conditions, the finite-sample estimator $\operatorname{LCE}\left( x\right)$ converges uniformly and sample-efficiently to its true expected value ${\mathrm{{LCE}}}^{ * }\left( x\right)$ :
102
+
103
+ Theorem 1. (Informal) Let $\alpha \leq$ $\mathop{\inf }\limits_{{x \in \mathcal{X}}}\mathbb{E}\left\lbrack {{k}_{\gamma }\left( {X, x}\right) \mathbb{1}\left\lbrack {\widehat{p}\left( X\right) \in B\left( {\widehat{p}\left( x\right) }\right) }\right\rbrack }\right\rbrack \;$ be a lower bound on the expectation of the kernel, and $d$ be the dimension of the kernel's feature space. If the sample size is at least $\widetilde{O}\left( {d/{\alpha }^{4}{\epsilon }^{2}}\right)$ where $\epsilon > 0$ is a target accuracy level, then with probability at least $1 - \delta$ we have
104
+
105
+ $$
106
+ \mathop{\sup }\limits_{{x \in \mathcal{X}}}\left| {{\operatorname{LCE}}_{\gamma }\left( {x;f,\widehat{p}}\right) - {\operatorname{LCE}}_{\gamma }^{ * }\left( {x;f,\widehat{p}}\right) }\right| \leq \epsilon .
107
+ $$
108
+
109
+ Here, $\widetilde{O}$ hides $\log$ factors of the form $\log \left( {1/{\alpha \gamma \delta \epsilon }}\right)$ . In practice, $\alpha$ depends inversely on $\gamma$ .
110
+
111
+ To summarize, the MLCE measures a worst-case individual calibration error as $\gamma \rightarrow 0$ (i.e., the effective neighborhood is very small) and converges to the global MCE metric as $\gamma \rightarrow \infty$ (i.e., the effective neighborhood is very large). In practice, one must pick intermediate values of $\gamma$ to balance a more local notion of calibration error with the sample efficiency of its estimation. A more formal statement and full proof of Theorem 1 can be found in Appendix D.
112
+
113
+ ### 3.2 CHOICE OF KERNEL AND FEATURE MAP
114
+
115
+ In this work, we compute the LCE using 15 equal-width bins and use the Laplacian kernel
+
+ $$
+ {k}_{\gamma }\left( {x,{x}^{\prime }}\right) = \exp \left( {-\frac{{\begin{Vmatrix}\phi \left( x\right) - \phi \left( {x}^{\prime }\right) \end{Vmatrix}}_{1}}{d\gamma }}\right) .
+ $$
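As a sanity check, this kernel can be written directly; note the normalization by the feature dimension $d$ inside the exponent (a sketch, with `phi_x` denoting a precomputed feature vector $\phi \left( x\right)$ ):

```python
import numpy as np

def laplacian_kernel(phi_x, phi_xp, gamma):
    """k_gamma(x, x') = exp(-||phi(x) - phi(x')||_1 / (d * gamma))."""
    d = phi_x.shape[-1]  # dimension of the kernel's feature space
    return np.exp(-np.abs(phi_x - phi_xp).sum(-1) / (d * gamma))
```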
+
+ Because distances in a high-dimensional input space (e.g., image data) may not be meaningful on their own, we evaluate the kernel on a feature representation of $x$ rather than on $x$ itself. Features learned from neural networks have proven useful for a wide range of tasks, and they have been shown to capture useful semantic features of their inputs [Huh et al., 2016, Chen et al., 2020, Li et al., 2020]. The kernel similarity term ${k}_{\gamma }\left( {x,{x}^{\prime }}\right)$ in the LCE thus leverages learned features to automatically capture rich subgroups of the data. For image data, we chose an Inception-v3 model as our feature map, since Inception features are widely accepted as useful and representative in many areas (e.g., for generative models [Salimans et al., 2016]), though other neural features can also be used (Appendix B). For tabular data, we used the final hidden layer of the neural network trained for classification.
+
+ ![0196392e-c44f-7e24-962b-80131a96a6d9_3_969_184_550_338_0.jpg](images/0196392e-c44f-7e24-962b-80131a96a6d9_3_969_184_550_338_0.jpg)
+
+ Figure 1: MLCE of a Resnet-50 classifier on the ImageNet test split, as a function of the kernel bandwidth $\gamma$ . We use a Laplacian kernel with feature map ${\phi }_{2} \circ {\phi }_{1}$ , where ${\phi }_{1} : \mathcal{X} \rightarrow {\mathbb{R}}^{2048}$ is the Inception-v3 model’s hidden layer. Blue: ${\phi }_{2} : {\mathbb{R}}^{2048} \rightarrow {\mathbb{R}}^{3}$ is t-SNE; orange: ${\phi }_{2} : {\mathbb{R}}^{2048} \rightarrow {\mathbb{R}}^{50}$ is PCA; green: ${\phi }_{2}\left( z\right) = z$ is the identity.
+
+ In general, we also use t-SNE or PCA to reduce the dimension of the feature space, since even the 2048-D Inception-v3 embedding is very high-dimensional. We report results using t-SNE to reduce the dimension to 2 or 3, as well as PCA to reduce the dimension to 50 for image data and 20 for tabular data. Thus the overall representation function is $\phi \left( x\right) = {\phi }_{2}\left( {{\phi }_{1}\left( x\right) }\right)$ , where ${\phi }_{1}$ maps from the inputs to the neural features, and ${\phi }_{2}$ reduces the feature space dimension.
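The reduction step ${\phi }_{2}$ can be sketched with a NumPy-only PCA via an SVD of the centered features (the experiments use standard t-SNE/PCA implementations; this is only an illustrative stand-in):

```python
import numpy as np

def pca_reduce(feats, k):
    """phi_2: project centered features onto their top-k principal directions."""
    centered = feats - feats.mean(axis=0)
    # rows of vt are principal directions, ordered by decreasing singular value
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T
```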
+
+ Figure 1 plots the MLCE as a function of the kernel bandwidth for an ImageNet classification task. Note that when $\gamma$ is small, the MLCE is 1 (a worst-case individual calibration error), and when $\gamma$ is large, the MLCE approaches the global MCE. To obtain a single summary statistic describing the local calibration error, we can view this plot and pick a value of $\gamma$ between the limiting behaviors. We find that $\gamma = {0.2}$ and $\gamma = {0.4}$ are good intermediate points for the 3-D t-SNE and 50-D PCA features, respectively (Figure 1).
+
+ ### 3.3 LOCAL CALIBRATION ERROR VISUALIZATIONS
+
+ To provide more intuition for the LCE, we will now visualize some examples of the LCE metric over a 2-D feature embedding. We consider a ResNet-50 model pre-trained on ImageNet as our classifier $\left( {f,\widehat{p}}\right)$ , and pre-trained Inception-v3 features as a feature map ${\phi }_{1} : \mathcal{X} \rightarrow {\mathbb{R}}^{2048}$ . ${\phi }_{2} : {\mathbb{R}}^{2048} \rightarrow {\mathbb{R}}^{2}$ then reduces the 2048-D feature vectors with t-SNE to two dimensions for ease of visualization in the LCE landscapes, so our overall representation function is $\phi \left( x\right) = {\phi }_{2}\left( {{\phi }_{1}\left( x\right) }\right)$ . Figure 2 visualizes the landscape of ${\operatorname{LCE}}_{0.2}\left( {x;f,\widehat{p}}\right)$ as a function of $\phi \left( x\right)$ for the entire ImageNet validation set, as well as the marginal CDF of ${\operatorname{LCE}}_{0.2}\left( {x;f,\widehat{p}}\right)$ . We show these visualizations for the two confidence bins with the best and worst global calibration.
+
+ ![0196392e-c44f-7e24-962b-80131a96a6d9_4_361_187_1061_677_0.jpg](images/0196392e-c44f-7e24-962b-80131a96a6d9_4_361_187_1061_677_0.jpg)
+
+ Figure 2: We visualize ${\operatorname{LCE}}_{0.2}\left( {x;f,\widehat{p}}\right)$ for a ResNet-50 classifier $\left( {f,\widehat{p}}\right)$ pre-trained on ImageNet, for every image $x$ in the ImageNet validation set. We focus on the bins with the best and worst global calibration errors.
+
+ In the bin with the best global calibration, Figure 2 (top) shows that the landscape of the LCE is highly non-uniform, and the CDF of the LCE lies almost entirely to the right of the bin's average calibration error. Numerically, the bin's average calibration error is 0.0022, while its average LCE is 0.0259. This implies that the regions where the model is underconfident and overconfident are spatially clustered within the bin. Because global calibration metrics solely consider the average accuracy and average confidence within a bin, confidence predictions that are too high and too low are averaged out to obtain a low overall error value; they fail to capture this localized miscalibration.
+
+ In the bin with the worst global calibration, Figure 2 (bottom) clearly shows that the LCE still has high variance, even though the average calibration error of the bin (0.0455) is much closer to its average LCE (0.0515). The CDF plot provides more evidence that the landscape is not flat - there is no sharp rise at the bin calibration error. However, in this case the regions that are underconfident and overconfident are not clustered spatially.
+
+ ## 4 LCE RECALIBRATION
+
+ In this section, we introduce local recalibration (LoRe), a non-parametric recalibration method that adjusts a model's output confidences to achieve better local calibration. Our method improves the LCE more than existing recalibration methods, and using our method improves performance on both downstream fairness tasks and downstream decision-making tasks. Specifically, we can leverage the kernel similarity to achieve strong calibration for all sensitive subgroups of a population, without knowing those groups a priori. As long as the feature space is semantically meaningful, LoRe provides utility for downstream tasks without needing subgroup labels for the samples. If the subgroups are known, one can recover standard group-wise recalibration methods (and metrics) by using the improper kernel $k\left( {x,{x}^{\prime }}\right) = \mathbb{1}\left\lbrack {x,{x}^{\prime }\text{in same group}}\right\rbrack$ .
+
+ The idea behind our method is simple: we can compute the kernel-weighted accuracy for each point $x$ of all the points that are in the same confidence bin as $x$ , and then reset the confidence of $x$ to this kernel-weighted accuracy value. Note that using the kernel function to compute this value is intuitively like taking a weighted average of the accuracy of the points in the local neighborhood of $x$ . Thus, LoRe can be considered a local analogue to histogram binning.
+
+ More formally, given a trained classifier $\left( {f,\widehat{p}}\right)$ , a recalibration dataset $\mathcal{D} = \left( {\left( {{x}_{1},{y}_{1}}\right) ,\ldots ,\left( {{x}_{N},{y}_{N}}\right) }\right)$ , and a fixed point $x \in \mathcal{X}$ , let $\beta \left( x\right) = \left\{ {i : \widehat{p}\left( {x}_{i}\right) \in B\left( {\widehat{p}\left( x\right) }\right) }\right\}$ be the set of indices of the points in $\mathcal{D}$ occupying the same confidence bin as $x$ . Then, we compute the recalibrated confidence as
+
+ $$
+ {\widehat{p}}^{\prime }\left( x\right) = \frac{\mathop{\sum }\limits_{{i \in \beta \left( x\right) }}{k}_{\gamma }\left( {x,{x}_{i}}\right) \mathbb{1}\left\lbrack {f\left( {x}_{i}\right) = {y}_{i}}\right\rbrack }{\mathop{\sum }\limits_{{i \in \beta \left( x\right) }}{k}_{\gamma }\left( {x,{x}_{i}}\right) }. \tag{6}
+ $$
+
+ Equation 6 represents the kernel-weighted average accuracy of all points in the same confidence bin as $x$ . In the limit as the kernel bandwidth $\gamma \rightarrow \infty$ , ${\widehat{p}}^{\prime }\left( x\right) \rightarrow \mathop{\sum }\limits_{{i \in \beta \left( x\right) }}\mathbb{1}\left\lbrack {f\left( {x}_{i}\right) = {y}_{i}}\right\rbrack /\left| {\beta \left( x\right) }\right|$ , recovering histogram binning. As $\gamma \rightarrow 0$ , ${\widehat{p}}^{\prime }\left( x\right) \rightarrow \mathbb{1}\left\lbrack {f\left( {x}_{{i}^{ * }}\right) = {y}_{{i}^{ * }}}\right\rbrack$ , where ${i}^{ * } = \arg \mathop{\max }\limits_{{i \in \beta \left( x\right) }}{k}_{\gamma }\left( {x,{x}_{i}}\right)$ , thus recovering a nearest-neighbor method. For intermediate $\gamma$ , our method interpolates between the two extremes. Throughout this work, we used $\gamma = {0.2}$ for LoRe with t-SNE and $\gamma = {0.4}$ for LoRe with PCA, since these represent intermediate points between the limiting behaviors of the LCE (e.g., see Fig. 1).
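A sketch of Equation 6 (hypothetical, assuming 15 equal-width bins and precomputed kernel similarities `k_x` between the test point and the recalibration set):

```python
import numpy as np

def lore_recalibrate(conf_x, k_x, conf_cal, correct_cal, n_bins=15):
    """Eq. 6: reset x's confidence to the kernel-weighted accuracy of
    recalibration points sharing x's confidence bin.

    conf_x      : scalar confidence p_hat(x)
    k_x         : (N,) kernel similarities k_gamma(x, x_i)
    conf_cal    : (N,) confidences on the recalibration set
    correct_cal : (N,) 1 if f(x_i) == y_i else 0
    """
    bins = np.minimum((conf_cal * n_bins).astype(int), n_bins - 1)
    bin_x = min(int(conf_x * n_bins), n_bins - 1)
    mask = bins == bin_x
    return np.sum(k_x[mask] * correct_cal[mask]) / np.sum(k_x[mask])
```

Setting `k_x` to all ones recovers histogram binning, matching the limiting behavior described above.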
+
+ ## 5 EXPERIMENTS
+
+ In this section, we show empirically that LoRe substantially improves LCE values, and that these lower LCE values lead to better performance on downstream fairness and decision-making tasks. In particular, we evaluate the local calibration through the MLCE, because we are interested in understanding a model's worst-case local miscalibration. On each task, we compare the performance of LoRe to no recalibration ('Original'), temperature scaling ('TS') [Guo et al., 2017], histogram binning ('HB') [Zadrozny and Elkan, 2001], isotonic regression ('IR') [Zadrozny and Elkan, 2002], and direct MMCE optimization ('MMCE') [Kumar et al., 2018], all strong global recalibration methods.
+
+ We first run extensive experiments on three datasets to demonstrate that LoRe outperforms all baselines and achieves the lowest MLCE over a wide range of $\gamma$ values. We then evaluate the performance of our method on a fairness task, where it is important that a model is well-calibrated for all sensitive subgroups of a given population, and we demonstrate that it achieves the lowest group-wise MCE. Notably, we find that the MLCE is well-correlated with the group-wise MCE across all experimental settings, and thus achieving low MLCE is a good indicator that a model has good group-wise calibration. Finally, we compare the performance of our method against the baselines on a cost-sensitive decision-making task, where there is a low cost for a prediction of "unsure" but a high cost for an incorrect prediction, and show that our method achieves the lowest cost.
+
+ ### 5.1 DATASETS
+
+ ImageNet dataset [Deng et al., 2009]: A large-scale dataset of natural scene images with 1000 classes; over 1.3 million images total. The training/validation/test split is 1.3 million / 25,000 / 25,000.
+
+ ![0196392e-c44f-7e24-962b-80131a96a6d9_5_968_186_549_340_0.jpg](images/0196392e-c44f-7e24-962b-80131a96a6d9_5_968_186_549_340_0.jpg)
+
+ Figure 3: MLCE vs. kernel bandwidth $\gamma$ for ImageNet. LoRe (with t-SNE and $\gamma = {0.2}$ ) achieves the lowest MLCE for a wide range of $\gamma$ . This suggests that LoRe leads to lower LCE values across the whole dataset.
+
+ UCI Communities and Crime dataset [Dua and Graff, 2017]: This tabular dataset contains a number of attributes about American neighborhoods (e.g., race, age, employment, housing, etc.). The task is to predict the neighborhood's violent crime rate. The training/validation/test split is 1494/500/500. We randomize the training/validation/test split over multiple trials.
+
+ CelebA dataset [Liu et al., 2015]: A large-scale dataset of face images with 40 attribute annotations (e.g., glasses, hair color, etc.); 202,599 images total. The training/validation/test split is 162,770/19,867/19,962.
+
+ ### 5.2 RECALIBRATION PERFORMANCE
+
+ LoRe substantially improves the LCE values. In Figure 3, we plot the MLCE as a function of $\gamma$ . We can see that our method outperforms all baselines (strong global calibration methods) across a wide range of $\gamma$ values on ImageNet. This is true despite the fact that we only implement LoRe for a single $\gamma$ . Appendix B provides similar results on the Communities & Crime, CelebA, CIFAR-10, and CIFAR-100 datasets. Note that LoRe works well regardless of the feature map and the dimensionality reduction method (see Section 5.3 for results with both t-SNE and PCA). Although the results shown in this section use Inception-v3 features, we show similar results in Appendix B with AlexNet [Krizhevsky, 2014], DenseNet121 [Huang et al., 2018], and ResNet101 [He et al., 2015] features.
+
+ Recall that as $\gamma$ gets large, the MLCE recovers the MCE; because LoRe does well even at large $\gamma$ , our method also works well at minimizing global calibration errors. The fact that LoRe lowers the worst-case LCE suggests that it leads to lower LCE values across the entire dataset.
+
+ ### 5.3 DOWNSTREAM FAIRNESS PERFORMANCE
+
+ Experimental Setup In many fairness-related applications, it is important to show that a model is well-calibrated for all sensitive subgroups of a given population. For example, when predicting the crime rate of a neighborhood, a model should not be considered well-calibrated if it consistently underestimates the crime rate for neighborhoods of one demographic, while overestimating the crime rate for neighborhoods of a different demographic. Therefore, in this section, we examine the worst-case group-wise miscalibration of a classifier, as measured by the maximum group-wise MCE when evaluated only on sensitive subgroups. We consider the following experimental settings:
+
+ <table><tr><td>Recalibration method</td><td>Setting 1</td><td>Setting 2</td><td>Setting 3</td></tr><tr><td>No recalibration</td><td>0.588 ± 0.107</td><td>0.407 ± 0.087</td><td>0.446 ± 0.083</td></tr><tr><td>Temperature scaling</td><td>0.521 ± 0.092</td><td>0.532 ± 0.089</td><td>0.441 ± 0.079</td></tr><tr><td>Histogram binning</td><td>0.515 ± 0.081</td><td>0.218 ± 0.056</td><td>0.268 ± 0.067</td></tr><tr><td>Isotonic regression</td><td>0.596 ± 0.063</td><td>0.615 ± 0.100</td><td>0.716 ± 0.082</td></tr><tr><td>MMCE optimization</td><td>0.526 ± 0.172</td><td>0.429 ± 0.079</td><td>0.475 ± 0.079</td></tr><tr><td>Group temp. scaling</td><td>0.423 ± 0.066</td><td>0.673 ± 0.075</td><td>0.329 ± 0.108</td></tr><tr><td>Group hist. binning</td><td>0.542 ± 0.083</td><td>0.260 ± 0.053</td><td>0.352 ± 0.068</td></tr><tr><td>LoRe (tSNE) (ours)</td><td>$\mathbf{0.351 \pm 0.084}$</td><td>$\mathbf{0.165 \pm 0.055}$</td><td>0.235 ± 0.063</td></tr><tr><td>LoRe (PCA) (ours)</td><td>0.392 ± 0.071</td><td>0.167 ± 0.013</td><td>$\mathbf{0.154 \pm 0.082}$</td></tr></table>
+
+ Table 1: Performance on downstream fairness, as measured by maximum group-wise MCE (lower is better). Experimental settings as described in Section 5.3. Mean and standard deviations are computed over 60 random seeds for setting 1, and 20 for settings 2 and 3. Best results are bold.
+
+ 1. UCI Communities and Crime: Predict whether a neighborhood's crime rate is higher than the median; group neighborhoods by their plurality race (White, Black, Asian, Indian, Hispanic). 60 random seeds for model training.
+
+ 2. CelebA: Predict a person's hair color (bald, black, blond, brown, gray, other); group people by hair type (bald, receding hairline, bangs, straight, wavy, other). 20 random seeds for model training.
+
+ 3. CelebA: Predict a person's hair type; group people by their hair color; the inverse of Setting 2. 20 random seeds for model training.
+
+ For each task, we train a classifier (see Appendix A for full details) and recalibrate its output confidences using each of the recalibration methods.
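The evaluation metric can be sketched as follows (a hypothetical helper, assuming 15 equal-width bins; `groups` holds each sample's sensitive-attribute label):

```python
import numpy as np

def mce(conf, correct, n_bins=15):
    """Maximum Calibration Error: worst per-bin |avg confidence - avg accuracy|."""
    bins = np.minimum((conf * n_bins).astype(int), n_bins - 1)
    return max(abs(conf[bins == b].mean() - correct[bins == b].mean())
               for b in range(n_bins) if np.any(bins == b))

def max_groupwise_mce(conf, correct, groups, n_bins=15):
    """Worst-case MCE over sensitive subgroups."""
    return max(mce(conf[groups == g], correct[groups == g], n_bins)
               for g in np.unique(groups))
```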
+
+ Results Table 1 reports the maximum group-wise MCE for each of the recalibration methods on each of the three tasks. LoRe outperforms the other baselines, achieving an average ${49}\%$ reduction over no recalibration and an average ${23}\%$ improvement over the next best global recalibration method. (Note in Figures 5, 6, and 7 in Appendix B that LoRe is the most effective method of lowering the MLCE over a wide range of $\gamma$ ). Notably, LoRe is robust to the feature map used (tSNE vs. PCA). It even outperforms global methods applied to each individual group, implying that correcting local calibration errors is a robust way to improve group calibration that generalizes better than naive alternatives.
+
+ <table><tr><td/><td>Setting 1</td><td>Setting 2</td><td>Setting 3</td></tr><tr><td>ECE</td><td>0.102</td><td>-0.061</td><td>-0.195</td></tr><tr><td>MCE</td><td>0.233</td><td>0.439</td><td>0.281</td></tr><tr><td>NLL</td><td>0.542</td><td>0.045</td><td>-0.287</td></tr><tr><td>Brier</td><td>0.101</td><td>0.144</td><td>-0.280</td></tr><tr><td>${\mathrm{{MLCE}}}_{0.2}$ (tSNE)</td><td>0.642</td><td>0.801</td><td>0.591</td></tr><tr><td>${\mathrm{{MLCE}}}_{0.4}$ (PCA)</td><td>0.639</td><td>0.659</td><td>0.778</td></tr></table>
+
+ Table 2: Pearson correlation between max group-wise MCE and other calibration metrics (higher is better). Experimental settings as described in Section 5.3. Best results in bold. MLCE is better-correlated with the max group-wise MCE than any of the global metrics.
+
+ Moreover, Table 2 shows that the maximum group-wise MCE is well-correlated with the MLCE, and it is in fact much better correlated with MLCE than global calibration metrics are. Taken together, our results indicate that lowering the LCE has positive implications in fairness settings that cannot be achieved by simply lowering global metrics like the ECE. For reference, we also include the performance of all recalibration methods on various global calibration metrics in Table 3, which shows that LoRe is able to improve worst-case group-wise calibration without meaningfully sacrificing (and in some cases improving) average-case global calibration.
+
+ ### 5.4 DOWNSTREAM DECISION-MAKING
+
+ Experimental Setup Machine learning predictions are often used to make decisions, and in many situations an agent must select a best action in expectation. As an example, suppose there is a low cost $u$ associated with returning "unsure" and a high cost $w$ associated with returning an incorrect classification (e.g., in situations such as autonomous driving, being unsure incurs only the small cost of calling a human operator, but making an incorrect classification incurs a high cost). An agent with good uncertainty quantification can make a more optimal decision about whether to return a classification or return "unsure"; for a calibrated model, it would be optimal for the agent to return "unsure" below the confidence threshold of $1 - u/w$ , and return a prediction above this threshold.
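The threshold follows from comparing expected costs: for a calibrated confidence $p$ , predicting incurs expected cost $w\left( {1 - p}\right)$ while abstaining costs $u$ for certain, so predicting is preferable exactly when $p \geq 1 - u/w$ . A minimal sketch of this decision rule:

```python
def decide(confidence, u, w):
    """Predict when the expected error cost w * (1 - confidence) is at most
    the abstention cost u, i.e., when confidence >= 1 - u/w."""
    return "predict" if confidence >= 1.0 - u / w else "unsure"
```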
+
+ <table><tr><td rowspan="2">Recalibration method</td><td colspan="3">Setting 1</td><td colspan="3">Setting 2</td><td colspan="3">Setting 3</td></tr><tr><td>ECE(%)</td><td>NLL</td><td>Brier</td><td>ECE(%)</td><td>NLL</td><td>Brier</td><td>ECE(%)</td><td>NLL</td><td>Brier</td></tr><tr><td>No recalibration</td><td>${15.1}_{2.7}$</td><td>${.96}_{.25}$</td><td>${.17}_{.02}$</td><td>${1.8}_{0.3}$</td><td>${.617}_{.004}$</td><td>${.641}_{.004}$</td><td>${1.1}_{0.3}$</td><td>${.782}_{.006}$</td><td>${.571}_{.006}$</td></tr><tr><td>Temperature scaling</td><td>${4.9}_{1.7}$</td><td>${.43}_{.03}$</td><td>${.14}_{.01}$</td><td>${2.0}_{0.3}$</td><td>${.619}_{.004}$</td><td>${.622}_{.003}$</td><td>${1.0}_{0.2}$</td><td>${.781}_{.006}$</td><td>${.569}_{.002}$</td></tr><tr><td>Histogram binning</td><td>${\mathbf{3.3}}_{1.1}$</td><td>${.48}_{.03}$</td><td>${.15}_{.01}$</td><td>${2.5}_{0.2}$</td><td>${.619}_{.004}$</td><td>${.614}_{.003}$</td><td>${2.5}_{0.4}$</td><td>${.788}_{.006}$</td><td>${.552}_{.002}$</td></tr><tr><td>Isotonic regression</td><td>${30.6}_{2.3}$</td><td>${.79}_{.05}$</td><td>${.30}_{.02}$</td><td>${2.6}_{0.2}$</td><td>${.618}_{.004}$</td><td>${.615}_{.003}$</td><td>${2.4}_{0.2}$</td><td>${.785}_{.006}$</td><td>${.553}_{.002}$</td></tr><tr><td>MMCE optimization</td><td>${4.4}_{1.3}$</td><td>${.43}_{.03}$</td><td>${.14}_{.01}$</td><td>${3.8}_{0.7}$</td><td>${.646}_{.014}$</td><td>${.679}_{.012}$</td><td>${5.4}_{0.8}$</td><td>${.808}_{.009}$</td><td>${.619}_{.009}$</td></tr><tr><td>LoRe (tSNE) (ours)</td><td>${3.5}_{1.1}$</td><td>${.42}_{.02}$</td><td>${.13}_{.01}$</td><td>${2.8}_{0.2}$</td><td>${.623}_{.004}$</td><td>${.613}_{.003}$</td><td>2.6</td><td>${.792}_{.006}$</td><td>${.551}_{.002}$</td></tr><tr><td>LoRe (PCA) (ours)</td><td>${4.5}_{1.4}$</td><td>${.44}_{.02}$</td><td>${.14}_{.01}$</td><td>${3.1}_{0.2}$</td><td>${.628}_{.004}$</td><td>${.606}_{.003}$</td><td>${2.8}_{0.4}$</td><td>${.792}_{.007}$</td><td>${.538}_{.002}$</td></tr></table>
+
+ Table 3: Performance on global calibration metrics, formatted as ${\mathrm{{mean}}}_{\mathrm{{sd}}}$ . Lower is better. Experimental settings as described in Section 5.3. Best results are bold. Across all settings, LoRe generally achieves a global calibration error that is comparable to the baselines.
+
+ ![0196392e-c44f-7e24-962b-80131a96a6d9_7_228_645_552_344_0.jpg](images/0196392e-c44f-7e24-962b-80131a96a6d9_7_228_645_552_344_0.jpg)
+
+ Figure 4: Reward attained vs. reward ratio for the ImageNet dataset (higher is better). LoRe achieves the highest rewards across a wide range of reward ratios.
+
+ Following this policy (i.e., returning "unsure" when the confidence is below this threshold and returning a prediction when it is above), we used a ResNet-50 model to make predictions on ImageNet, and recalibrated the predictions with each of the recalibration methods. For each method, we then calculated the total reward attained under various reward ratios $w/u$ , as well as various global calibration metrics.
+
+ Results In Figure 4, we show the improvement in the total reward over the original classifier (i.e., no recalibration) as a function of the reward ratio $w/u$ (the ratio of the cost of an incorrect classification to the cost of being unsure). Across a wide range of reward ratios, LoRe achieves the highest reward. The MLCE curves for this task are shown in Figure 3; note that LoRe also achieves lower LCE values than the global recalibration methods. Table 4 reports several global calibration metrics; LoRe achieves strong global calibration. These results indicate that our recalibration method most effectively lowers LCE values without sacrificing (and indeed often improving) average-case global calibration, and that these lower LCE values correspond to better performance on this decision-making task.
+
+ <table><tr><td>Recalibration method</td><td>ECE</td><td>NLL</td><td>Brier</td></tr><tr><td>No recalibration</td><td>0.037</td><td>0.959</td><td>40.64</td></tr><tr><td>Temperature scaling</td><td>0.022</td><td>0.948</td><td>40.60</td></tr><tr><td>Histogram binning</td><td>0.012</td><td>0.952</td><td>40.59</td></tr><tr><td>Isotonic regression</td><td>0.011</td><td>$\mathbf{0.945}$</td><td>40.59</td></tr><tr><td>MMCE optimization</td><td>0.061</td><td>0.965</td><td>40.67</td></tr><tr><td>LoRe (ours)</td><td>$\mathbf{0.007}$</td><td>0.955</td><td>$\mathbf{40.58}$</td></tr></table>
+
+ Table 4: Performance on global calibration metrics on ImageNet. Lower is better. Best results are bold. LoRe achieves strong global calibration according to all metrics.
+
+ ## 6 CONCLUSION
+
+ In this paper, we introduce the local calibration error (LCE), a metric that measures calibration in a localized neighborhood around a prediction. The LCE spans the gap between fully global and fully individualized calibration error, with an effective neighborhood size that can be set with a bandwidth parameter $\gamma$ . We also introduce LoRe, a recalibration method that greatly improves the local calibration. Finally, we demonstrate that achieving lower LCE values leads to better performance on downstream fairness and decision-making tasks. In future work, we hope to further explore alternative feature spaces to define similarity, since the quality of our metric depends on the quality of the feature space underpinning the notion of locality.
+
+ ## References
+
+ Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In Hal Daumé III and Aarti Singh, editors, Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 1597-1607. PMLR, 13–18 Jul 2020. URL http://proceedings.mlr.press/v119/chen20j.html.
+
+ Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248-255. IEEE, 2009.
+
+ Dheeru Dua and Casey Graff. UCI machine learning repository, 2017. URL http://archive.ics.uci.edu/ml.
+
+ Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. Fairness through awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, ITCS '12, pages 214-226, New York, NY, USA, 2012. Association for Computing Machinery. ISBN 9781450311151. doi: 10.1145/2090236.2090255.
+
+ Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. On calibration of modern neural networks. In Doina Precup and Yee Whye Teh, editors, Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1321-1330, International Convention Centre, Sydney, Australia, 06-11 Aug 2017. PMLR. URL http://proceedings.mlr.press/v70/guo17a.html.
+
+ Kartik Gupta, Amir Rahimi, Thalaiyasingam Ajanthan, Thomas Mensink, Cristian Sminchisescu, and Richard Hartley. Calibration of neural networks using splines. 2020.
+
+ Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. 2015.
+
+ Úrsula Hébert-Johnson, Michael P. Kim, Omer Reingold, and Guy N. Rothblum. Calibration for the (computationally-identifiable) masses, 2017. URL http://arxiv.org/abs/1711.08513.
+
+ Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q. Weinberger. Densely connected convolutional networks. 2018.
+
+ Minyoung Huh, Pulkit Agrawal, and Alexei A. Efros. What makes imagenet good for transfer learning?, 2016. URL http://arxiv.org/abs/1608.08614.
+
+ Jon M. Kleinberg, Sendhil Mullainathan, and Manish Raghavan. Inherent trade-offs in the fair determination of risk scores, 2016. URL http://arxiv.org/abs/1609.05807.
+
+ Alex Krizhevsky. One weird trick for parallelizing convolutional neural networks. CoRR, abs/1404.5997, 2014. URL http://arxiv.org/abs/1404.5997.
+
+ Meelis Kull, Miquel Perello Nieto, Markus Kängsepp, Telmo Silva Filho, Hao Song, and Peter Flach. Beyond temperature scaling: Obtaining well-calibrated multi-class probabilities with dirichlet calibration. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32, pages 12316-12326. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/file/8ca01ea920679a0fe3728441494041b9-Paper.pdf.
+
+ Aviral Kumar, Sunita Sarawagi, and Ujjwal Jain. Trainable calibration measures for neural networks from kernel mean embeddings. In Jennifer Dy and Andreas Krause, editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 2805- 2814, Stockholmsmässan, Stockholm Sweden, 10-15 Jul 2018. PMLR. URL http://proceedings.mlr.press/v80/kumar18a.html.
+
+ Bohan Li, Hao Zhou, Junxian He, Mingxuan Wang, Yiming Yang, and Lei Li. On the sentence embeddings from pre-trained language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9119-9130. Association for Computational Linguistics, November 2020. doi: 10.18653/v1/2020.emnlp-main.733. URL https://www.aclweb.org/anthology/2020.emnlp-main.733.
+
+ Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), December 2015.
+
+ Mahdi Pakdaman Naeini, Gregory F Cooper, and Milos Hauskrecht. Obtaining well calibrated probabilities using bayesian binning. In AAAI, pages 2901-2907, 2015.
+
+ Jeremy Nixon, Michael W. Dusenberry, Linchuan Zhang, Ghassen Jerfel, and Dustin Tran. Measuring calibration in deep learning. ArXiv, abs/1904.01685, 2019.
+
+ John C. Platt. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. In Advances in Large Margin Classifiers, pages 61-74. MIT Press, 1999.
+
+ Geoff Pleiss, Manish Raghavan, Felix Wu, Jon Kleinberg, and Kilian Q Weinberger. On fairness and calibration. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 30, pages 5680-5689. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper/2017/file/b8b9c74ac526fffbeb2d39ab038d1cd7-Paper.pdf.
+
+ Tim Salimans, Ian J. Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. In NIPS, 2016.
+
+ Elizabeth A. Stuart. Matching methods for causal inference: A review and a look forward. Statistical science : a review journal of the Institute of Mathematical Statistics, 25(1): 1-21, 2010. doi: 10.1214/09-STS313.
+
+ David Widmann, Fredrik Lindsten, and Dave Zachariah. Calibration tests in multi-class classification: A unifying framework. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/file/1c336b8080f82bcc2cd2499b4c57261d-Paper.pdf.
+
+ Bianca Zadrozny and Charles Elkan. Obtaining calibrated probability estimates from decision trees and naive bayesian classifiers. In Proceedings of the Eighteenth International Conference on Machine Learning, pages 609-616. Morgan Kaufmann, 2001.
+
+ Bianca Zadrozny and Charles Elkan. Transforming classifier scores into accurate multiclass probability estimates. In SIGKDD, 2002.
+
+ Jize Zhang, Bhavya Kailkhura, and T. Yong-Jin Han. Mix-n-match: Ensemble and compositional methods for uncertainty calibration in deep learning. 2020.
+
+ Shengjia Zhao, Tengyu Ma, and S. Ermon. Individual calibration with randomized forecasting. In ICML, 2020.
+
+ Shengjia Zhao, Michael P. Kim, Roshni Sahoo, Tengyu Ma, and Stefano Ermon. Calibrating predictions to decisions: A novel approach to multi-class calibration, 2021.
+
+ ## A MODEL ARCHITECTURE, TRAINING, AND OTHER HYPERPARAMETERS
+
+ For ImageNet and CelebA, we compute the ECE, MCE, and LCE using 15 equal-width confidence bins. For the UCI communities and crime dataset, we use 5 equal-width bins because the dataset is much smaller (500 datapoints for recalibration). These numbers of bins represent a good tradeoff between bias and variance in estimating the relevant calibration errors. We also ran some initial experiments with equal-mass binning, but found that the results were very similar to those obtained with equal-width binning.
+
+ ### A.1 IMAGENET
+
+ For all experiments with the ImageNet dataset, we used the pre-trained ResNet-50 model from the PyTorch torchvision package as our classifier. To calculate the LCE and apply LoRe, we used pre-trained Inception-v3 features, applying either t-SNE to reduce their dimension to 3 or PCA to reduce their dimension to 50, as a feature representation for the kernel.
300
+
301
+ ### A.2 UCI COMMUNITIES AND CRIME
302
+
303
+ For all experiments with the UCI communities and crime dataset, we used a 3-hidden-layer dense neural network as our base classifier. Each hidden layer had a width of 100 and was followed by a Leaky ReLU activation. We applied dropout with probability 0.4 after the final hidden layer. We trained the model using the Adam optimizer with a batch size of 64 and a learning rate of $3 \times {10}^{-4}$ until the validation accuracy stopped improving. All other hyperparameters were PyTorch defaults. Training was done locally on a laptop CPU. We trained 60 different models with different random seeds to perform the experiments described in Section 5.3 and Figure 5. To calculate the LCE and apply LoRe, we used the final hidden-layer representation learned by our model, applying t-SNE to reduce its dimension to 2 or PCA to reduce it to 20, as a feature representation for the kernel.
304
+
305
+ ### A.3 CELEBA
306
+
307
+ For all experiments with the CelebA dataset, we trained a ResNet50 model and used it as our base classifier. We applied standard data augmentation to our training data (random crops & random horizontal flips), and trained all models for 10 epochs using the Adam optimizer with a learning rate of $1 \times {10}^{-3}$ and a batch size of 256. All other hyperparameters were PyTorch defaults. Training was distributed over 4 GPUs, and training a single model took about 30 minutes. For both Setting 2 and Setting 3 (described in Section 5.3), we trained 20 models with different random seeds to perform the experiments shown in Figures 6 and 7. To calculate the LCE and apply LoRe, we used pre-trained Inception-v3 features, applying t-SNE to reduce their dimension to 2 or PCA to reduce their dimension to 50, as a feature representation for the kernel.
308
+
309
+ ## B ADDITIONAL EXPERIMENTAL RESULTS
310
+
311
+ In Figures 5, 6, and 7, we visualize the MLCE achieved by all recalibration methods for the three experimental settings evaluated in Section 5.3. Figure 3 in the main paper shows the same visualization for all methods on ImageNet. In Figure 8, we plot the MLCE achieved by all recalibration methods for CIFAR-100, and in Figure 9, we do the same for CIFAR-10. Across all settings and datasets, our method LoRe is the most effective at minimizing MLCE across a wide range of $\gamma$ , even accounting for variations between runs.
312
+
313
+ ![0196392e-c44f-7e24-962b-80131a96a6d9_11_156_184_1433_426_0.jpg](images/0196392e-c44f-7e24-962b-80131a96a6d9_11_156_184_1433_426_0.jpg)
314
+
315
+ Figure 5: MLCE vs. kernel bandwidth $\gamma$ for all methods on task 1 of Section 5.3, predicting whether a neighborhood’s crime rate is higher than the median. LoRe achieves the best (or competitive) MLCE for most $\gamma$ . Left: 2D t-SNE features. Right: 20D PCA features.
316
+
317
+ ![0196392e-c44f-7e24-962b-80131a96a6d9_11_156_782_1433_423_0.jpg](images/0196392e-c44f-7e24-962b-80131a96a6d9_11_156_782_1433_423_0.jpg)
318
+
319
+ Figure 6: MLCE vs. kernel bandwidth $\gamma$ for all methods on task 2 of Section 5.3, predicting hair color on CelebA. LoRe achieves the best MLCE for virtually all values of $\gamma$ . Left: 2D t-SNE features. Right: 50D PCA features.
320
+
321
+ ![0196392e-c44f-7e24-962b-80131a96a6d9_11_156_1376_1433_426_0.jpg](images/0196392e-c44f-7e24-962b-80131a96a6d9_11_156_1376_1433_426_0.jpg)
322
+
323
+ Figure 7: MLCE vs. kernel bandwidth for all methods on task 3 of Section 5.3, predicting hair type on CelebA. LoRe achieves the best MLCE for all $\gamma < 1$ and is tied with histogram binning for $\gamma > 1$ . Left: 2D t-SNE features. Right: 50D PCA features.
324
+
325
+ In these figures, "Original" represents no recalibration, "TS" represents temperature scaling, "HB" represents histogram binning, "IR" represents isotonic regression, "MMCE" represents direct MMCE optimization, and "LoRe" is our method.
326
+
327
+ Next, we examine the influence of the specific feature map used. In Figures 10, 11, 12, and 13, we plot the MLCE achieved by all recalibration methods for ImageNet using Inception-v3, AlexNet, DenseNet121, and ResNet101 features. In Figures 14 and 15, we plot the MLCE achieved by all recalibration methods for ImageNet when the features used to calculate the
328
+
329
+ ![0196392e-c44f-7e24-962b-80131a96a6d9_12_156_186_691_424_0.jpg](images/0196392e-c44f-7e24-962b-80131a96a6d9_12_156_186_691_424_0.jpg)
330
+
331
+ Figure 8: MLCE vs. kernel bandwidth $\gamma$ for all recalibration methods for CIFAR-100 (3D t-SNE features). LoRe achieves lower MLCE for most $\gamma$ .
332
+
333
+ ![0196392e-c44f-7e24-962b-80131a96a6d9_12_901_186_682_422_0.jpg](images/0196392e-c44f-7e24-962b-80131a96a6d9_12_901_186_682_422_0.jpg)
334
+
335
+ Figure 9: MLCE vs. kernel bandwidth $\gamma$ for all recalibration methods for CIFAR-10 (3D t-SNE features). LoRe achieves lower MLCE for most $\gamma$ .
336
+
337
+ ![0196392e-c44f-7e24-962b-80131a96a6d9_12_159_776_681_419_0.jpg](images/0196392e-c44f-7e24-962b-80131a96a6d9_12_159_776_681_419_0.jpg)
338
+
339
+ Figure 10: MLCE vs. kernel bandwidth $\gamma$ for all recalibration methods on ImageNet using Inception-v3 features. LoRe achieves the best MLCE for most $\gamma$ .
340
+
341
+ ![0196392e-c44f-7e24-962b-80131a96a6d9_12_902_776_684_422_0.jpg](images/0196392e-c44f-7e24-962b-80131a96a6d9_12_902_776_684_422_0.jpg)
342
+
343
+ Figure 11: MLCE vs. kernel bandwidth $\gamma$ for all recalibration methods on ImageNet using AlexNet features. LoRe achieves the best MLCE for most $\gamma$ .
344
+
345
+ MLCE are different from the features used by LoRe. For completeness, in Figures 16, 17, 18, and 19, we also visualize the average LCE for all experimental settings. All plots show similar results: LoRe performs best over a wide range of $\gamma$ .
346
+
347
+ ## C PROOF OF LEMMA 1
348
+
349
+ We restate Lemma 1 below, and provide the proof:
350
+
351
+ Lemma 2. Assume that $\mathop{\lim }\limits_{{\gamma \rightarrow \infty }}{k}_{\gamma }\left( {x,{x}^{\prime }}\right) = 1$ for all $x,{x}^{\prime } \in \mathcal{X}$ . Then, as $\gamma \rightarrow \infty$ , the MLCE converges to the MCE.
352
+
353
+ Proof. Since $\mathop{\lim }\limits_{{\gamma \rightarrow \infty }}{k}_{\gamma }\left( {x,{x}^{\prime }}\right) = 1$ identically,
354
+
355
+ $$
356
+ \mathop{\lim }\limits_{{\gamma \rightarrow \infty }}\mathop{\max }\limits_{x}{\widehat{\operatorname{LCE}}}_{\gamma }\left( {x;f,\widehat{p}}\right) = \mathop{\max }\limits_{x}\frac{1}{\left| \beta \left( x\right) \right| }\left| {\mathop{\sum }\limits_{{i \in \beta \left( x\right) }}\widehat{p}\left( {x}_{i}\right) - \mathbb{1}\left\lbrack {f\left( {x}_{i}\right) = {y}_{i}}\right\rbrack }\right|
357
+ $$
358
+
359
+ $$
360
+ = \mathop{\max }\limits_{k}\frac{1}{\left| {B}_{k}\right| }\left| {\mathop{\sum }\limits_{{i \in {B}_{k}}}\widehat{p}\left( {x}_{i}\right) - \mathbb{1}\left\lbrack {f\left( {x}_{i}\right) = {y}_{i}}\right\rbrack }\right|
361
+ $$
362
+
363
+ $$
364
+ = \mathop{\max }\limits_{k}\left| {\operatorname{conf}\left( {B}_{k}\right) - \operatorname{acc}\left( {B}_{k}\right) }\right|
365
+ $$
366
+
367
+ $$
368
+ = \operatorname{MCE}\left( {f,\widehat{p}}\right)
369
+ $$
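The collapse in the proof can be checked numerically: with a constant kernel (the $\gamma \rightarrow \infty$ limit), the LCE at every point reduces to the within-bin gap, so the maximum over points equals the binned MCE. A small self-contained sketch on toy data (ours, not from the paper):

```python
# Numeric sketch (toy data) of Lemma 2: when every kernel weight is 1
# (the gamma -> infinity limit), the LCE at any point x collapses to the
# within-bin average |conf - acc|, so max_x LCE equals the binned MCE.
def lce_constant_kernel(i, confs, correct, bin_of):
    """LCE estimate at sample i when k_gamma is identically 1."""
    idx = [j for j in range(len(confs)) if bin_of[j] == bin_of[i]]
    return abs(sum(confs[j] - correct[j] for j in idx)) / len(idx)

confs   = [0.2, 0.3, 0.7, 0.9]
correct = [0,   1,   1,   1]
bin_of  = [0,   0,   1,   1]          # two confidence bins
mlce = max(lce_constant_kernel(i, confs, correct, bin_of)
           for i in range(len(confs)))
mce = max(abs(sum(confs[j] - correct[j] for j in range(len(confs))
               if bin_of[j] == b)) / bin_of.count(b)
          for b in set(bin_of))
assert abs(mlce - mce) < 1e-12        # MLCE and MCE agree in this limit
```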
370
+
371
+ ![0196392e-c44f-7e24-962b-80131a96a6d9_13_159_229_681_420_0.jpg](images/0196392e-c44f-7e24-962b-80131a96a6d9_13_159_229_681_420_0.jpg)
372
+
373
+ Figure 12: MLCE vs. kernel bandwidth $\gamma$ for all recalibration methods on ImageNet using DenseNet121 features. LoRe achieves the best MLCE for most $\gamma$ .
374
+
375
+ ![0196392e-c44f-7e24-962b-80131a96a6d9_13_902_230_683_421_0.jpg](images/0196392e-c44f-7e24-962b-80131a96a6d9_13_902_230_683_421_0.jpg)
376
+
377
+ Figure 13: MLCE vs. kernel bandwidth $\gamma$ for all recalibration methods on ImageNet using ResNet101 features. LoRe achieves the best MLCE for most $\gamma$ .
378
+
379
+ ![0196392e-c44f-7e24-962b-80131a96a6d9_13_159_889_683_424_0.jpg](images/0196392e-c44f-7e24-962b-80131a96a6d9_13_159_889_683_424_0.jpg)
380
+
381
+ Figure 14: MLCE vs. kernel bandwidth $\gamma$ for all recalibration methods on ImageNet using Inception-v3 features to calculate the MLCE and AlexNet features for applying LoRe.
382
+
383
+ ![0196392e-c44f-7e24-962b-80131a96a6d9_13_905_886_678_426_0.jpg](images/0196392e-c44f-7e24-962b-80131a96a6d9_13_905_886_678_426_0.jpg)
384
+
385
+ Figure 15: MLCE vs. kernel bandwidth $\gamma$ for all recalibration methods on ImageNet using DenseNet121 features to calculate the MLCE and AlexNet features for applying LoRe.
386
+
387
+ ![0196392e-c44f-7e24-962b-80131a96a6d9_13_159_1548_684_425_0.jpg](images/0196392e-c44f-7e24-962b-80131a96a6d9_13_159_1548_684_425_0.jpg)
388
+
389
+ Figure 16: Average LCE vs. kernel bandwidth $\gamma$ for all recalibration methods on ImageNet (3D t-SNE features). LoRe gets lower average LCE for most $\gamma$ .
390
+
391
+ ![0196392e-c44f-7e24-962b-80131a96a6d9_13_902_1548_680_425_0.jpg](images/0196392e-c44f-7e24-962b-80131a96a6d9_13_902_1548_680_425_0.jpg)
392
+
393
+ Figure 17: Average LCE vs. kernel bandwidth $\gamma$ for all recalibration methods in task 1 (crime data, 2D t-SNE features). LoRe gets lower average LCE for most $\gamma$ .
394
+
395
+ ![0196392e-c44f-7e24-962b-80131a96a6d9_14_159_187_684_421_0.jpg](images/0196392e-c44f-7e24-962b-80131a96a6d9_14_159_187_684_421_0.jpg)
396
+
397
+ Figure 18: Average LCE vs. kernel bandwidth $\gamma$ for all recalibration methods in task 2 (CelebA, 2D t-SNE features). LoRe gets lower average LCE for most $\gamma$ .
398
+
399
+ ![0196392e-c44f-7e24-962b-80131a96a6d9_14_906_187_677_421_0.jpg](images/0196392e-c44f-7e24-962b-80131a96a6d9_14_906_187_677_421_0.jpg)
400
+
401
+ Figure 19: Average LCE vs. kernel bandwidth $\gamma$ for all recalibration methods in task 3 (CelebA, 2D t-SNE features). LoRe gets lower average LCE for most $\gamma$ .
402
+
403
+ ## D FORMAL STATEMENT AND PROOF OF THEOREM 1
404
+
405
+ Let ${B}_{1},\ldots ,{B}_{N}$ denote a set of bins that partition $\left\lbrack {0,1}\right\rbrack$ , and $B\left( p\right)$ denote the bin that a particular $p \in \left\lbrack {0,1}\right\rbrack$ belongs to. Let ${a}_{f}\left( {x, y}\right) = \mathbb{1}\left\lbrack {f\left( x\right) = y}\right\rbrack$ indicate the accuracy of the classifier $\left( {f,\widehat{p}}\right)$ on an input $x$ . We consider the signed local calibration error (SLCE):
406
+
407
+ $$
408
+ {\operatorname{SLCE}}_{\gamma }\left( {x;f,\widehat{p}}\right) \mathrel{\text{:=}} \frac{\mathbb{E}\left\lbrack {\left( {\widehat{p}\left( X\right) - {a}_{f}\left( {X, Y}\right) }\right) {k}_{\gamma }\left( {X, x}\right) \mid \widehat{p}\left( X\right) \in B\left( {\widehat{p}\left( x\right) }\right) }\right\rbrack }{\mathbb{E}\left\lbrack {{k}_{\gamma }\left( {X, x}\right) \mid \widehat{p}\left( X\right) \in B\left( {\widehat{p}\left( x\right) }\right) }\right\rbrack }
409
+ $$
410
+
411
+ $$
412
+ = \frac{\mathbb{E}\left\lbrack {\left( {\widehat{p}\left( X\right) - {a}_{f}\left( {X, Y}\right) }\right) {k}_{\gamma }\left( {X, x}\right) \mathbb{1}\left\lbrack {\widehat{p}\left( X\right) \in B\left( {\widehat{p}\left( x\right) }\right) }\right\rbrack }\right\rbrack }{\mathbb{E}\left\lbrack {{k}_{\gamma }\left( {X, x}\right) \mathbb{1}\left\lbrack {\widehat{p}\left( X\right) \in B\left( {\widehat{p}\left( x\right) }\right) }\right\rbrack }\right\rbrack }.
413
+ $$
414
+
415
+ ### D.1 ASSUMPTIONS AND FORMAL STATEMENT OF THEOREM
416
+
417
+ We make the following assumptions:
418
+
419
+ Assumption A (Lipschitz kernel). The kernel ${k}_{\gamma }$ takes the form
420
+
421
+ $$
422
+ {k}_{\gamma }\left( {x,{x}^{\prime }}\right) = g\left( \frac{\phi \left( x\right) - \phi \left( {x}^{\prime }\right) }{\gamma }\right) ,
423
+ $$
424
+
425
+ where $\phi : \mathcal{X} \rightarrow {\mathbb{R}}^{d}$ is a representation function, and $g : {\mathbb{R}}^{d} \rightarrow \left\lbrack {0,1}\right\rbrack$ is L-Lipschitz with respect to some norm $\parallel \cdot \parallel$ .
426
+
427
+ Note this definition may require an implicit rescaling (for example, we can take $\phi \left( x\right) \leftarrow {\phi }^{\text{feature }}\left( x\right) /d$ for a $d$ -dimensional feature map ${\phi }^{\text{feature }}$ and take $g\left( z\right) = \exp \left( {-\parallel z{\parallel }_{1}}\right)$ , which corresponds to the Laplacian kernel we used in Section 3.2).
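A minimal sketch of this rescaled Laplacian kernel (an illustration of Assumption A, not the authors' code), where `feat_x` plays the role of the $d$-dimensional feature map ${\phi }^{\text{feature}}\left( x\right)$:

```python
import math

# Sketch of the kernel from Assumption A with g(z) = exp(-||z||_1): a
# Laplacian kernel on features rescaled by their dimension d, so that g
# is 1-Lipschitz and bounded in (0, 1].
def laplacian_kernel(feat_x, feat_xp, gamma):
    d = len(feat_x)
    l1 = sum(abs(a - b) for a, b in zip(feat_x, feat_xp))
    return math.exp(-l1 / (d * gamma))

k_same = laplacian_kernel([1.0, 2.0], [1.0, 2.0], gamma=0.5)  # -> 1.0
k_far = laplacian_kernel([1.0, 2.0], [5.0, 9.0], gamma=0.5)   # close to 0
```

Note that the kernel equals 1 for identical inputs and tends to 1 everywhere as $\gamma \rightarrow \infty$, consistent with the assumption of Lemma 2.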
428
+
429
+ Assumption B (Binning-aware covering number). For any $\epsilon > 0$ , the range of the representation function $\phi \left( \mathcal{X}\right) \mathrel{\text{:=}}$ $\{ \phi \left( x\right) : x \in \mathcal{X}\}$ has an $\epsilon$ -cover in the $\parallel \cdot \parallel$ -norm of size ${\left( C/\epsilon \right) }^{d}$ for some absolute constant $C > 0$ : There exists a set ${\mathcal{N}}_{\epsilon } \subseteq \mathcal{X}$ with $\left| {\mathcal{N}}_{\epsilon }\right| \leq {\left( C/\epsilon \right) }^{d}$ such that for any $x \in \mathcal{X}$ , there exists some ${x}^{\prime } \in {\mathcal{N}}_{\epsilon }$ such that $\begin{Vmatrix}{\phi \left( x\right) - \phi \left( {x}^{\prime }\right) }\end{Vmatrix} \leq \epsilon$ and $B\left( {\widehat{p}\left( x\right) }\right) = B\left( {\widehat{p}\left( {x}^{\prime }\right) }\right)$ .
430
+
431
+ Assumption $\mathbf{C}$ (Lower bound on expectation of kernel within bin). We have
432
+
433
+ $$
434
+ \mathop{\inf }\limits_{{x \in \mathcal{X}}}\mathbb{E}\left\lbrack {{k}_{\gamma }\left( {X, x}\right) \mathbb{1}\left\lbrack {\widehat{p}\left( X\right) \in B\left( {\widehat{p}\left( x\right) }\right) }\right\rbrack }\right\rbrack \geq \alpha
435
+ $$
436
+
437
+ for some constant $\alpha \in \left( {0,1}\right)$ .
438
+
439
+ The constant $\alpha$ characterizes the hardness of estimating the SLCE from samples. Intuitively, with a smaller $\alpha$ , the denominator in SLCE gets smaller and we desire a higher accuracy in estimating both the numerator and the denominator. Also note that in practice the value of $\alpha$ typically depends on $\gamma$ . We analyze the following estimator of the SLCE using $n$ samples:
440
+
441
+ $$
442
+ {\widehat{\operatorname{SLCE}}}_{\gamma }\left( {x;f,\widehat{p}}\right) = \frac{\frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}\left( {\widehat{p}\left( {x}_{i}\right) - {a}_{f}\left( {{x}_{i},{y}_{i}}\right) }\right) {k}_{\gamma }\left( {{x}_{i}, x}\right) \mathbf{1}\left\lbrack {\widehat{p}\left( {x}_{i}\right) \in B\left( {\widehat{p}\left( x\right) }\right) }\right\rbrack }{\frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}{k}_{\gamma }\left( {{x}_{i}, x}\right) \mathbf{1}\left\lbrack {\widehat{p}\left( {x}_{i}\right) \in B\left( {\widehat{p}\left( x\right) }\right) }\right\rbrack }. \tag{7}
443
+ $$
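The estimator in Eq. (7) can be sketched as follows (illustrative only; `kernel` and `bin_of` stand in for ${k}_{\gamma }$ and the binning map $B\left( \cdot \right)$, which are assumptions of this sketch rather than the authors' implementation):

```python
# Sketch of the plug-in SLCE estimator in Eq. (7): a kernel-weighted,
# bin-restricted average of (confidence - accuracy) over the n samples.
def slce_hat(x_idx, confs, acc, feats, kernel, bin_of):
    num = den = 0.0
    for i in range(len(confs)):
        if bin_of[i] != bin_of[x_idx]:      # indicator 1[p(x_i) in B(p(x))]
            continue
        w = kernel(feats[i], feats[x_idx])  # k_gamma(x_i, x)
        num += (confs[i] - acc[i]) * w
        den += w
    return num / den
```

With a constant kernel and a single bin this reduces to the ordinary signed bin gap between average confidence and average accuracy.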
444
+
445
+ Theorem 2. Under Assumptions $A, B$ , and $C$ , suppose the sample size satisfies $n \geq \widetilde{O}\left( {d/{\alpha }^{4}{\epsilon }^{2}}\right)$ , where $\epsilon > 0$ is a target accuracy level. Then with probability at least $1 - \delta$ we have
446
+
447
+ $$
448
+ \mathop{\sup }\limits_{{x \in \mathcal{X}}}\left| {{\widehat{\operatorname{SLCE}}}_{\gamma }\left( {x;f,\widehat{p}}\right) - {\operatorname{SLCE}}_{\gamma }\left( {x;f,\widehat{p}}\right) }\right| \leq \epsilon ,
449
+ $$
450
+
451
+ where $\widetilde{O}$ hides $\log$ factors of the form $\log \left( {L/{\gamma \epsilon \delta \alpha }}\right)$ .
452
+
453
+ Theorem 2 shows that $\widetilde{O}\left( {d/{\epsilon }^{2}{\alpha }^{4}}\right)$ samples suffice to estimate the SLCE simultaneously for all $x \in \mathcal{X}$ . When $\alpha = \Omega \left( 1\right)$ , this sample complexity depends only polynomially on the representation dimension $d$ and logarithmically on other constants (such as $L,\gamma$ , and the failure probability $\delta$ ).
454
+
455
+ ### D.2 PROOF OF THEOREM 2
456
+
457
+ Step 1. We first study the estimation at finitely many $x$ ’s. Let $\mathcal{N} \subseteq \mathcal{X}$ be a finite set of $x$ ’s with $\left| \mathcal{N}\right| = N$ . Since ${k}_{\gamma } \in \left\lbrack {0,1}\right\rbrack$ and $\left| {\widehat{p}\left( x\right) - {a}_{f}\left( {x, y}\right) }\right| \leq 1$ are bounded variables, by the Hoeffding inequality and a union bound, we have
458
+
459
+ $$
460
+ \mathbb{P}\left( {\mathop{\sup }\limits_{{x \in \mathcal{N}}}\left| {\;\frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}\left( {\widehat{p}\left( {x}_{i}\right) - {a}_{f}\left( {{x}_{i},{y}_{i}}\right) }\right) {k}_{\gamma }\left( {{x}_{i}, x}\right) \mathbb{1}\left\lbrack {\widehat{p}\left( {x}_{i}\right) \in B\left( {\widehat{p}\left( x\right) }\right) }\right\rbrack }\right. }\right.
461
+ $$
462
+
463
+ $$
464
+ \left. {-\mathbb{E}\left\lbrack {\left( {\widehat{p}\left( X\right) - {a}_{f}\left( {X, Y}\right) }\right) {k}_{\gamma }\left( {X, x}\right) \mathbb{1}\left\lbrack {\widehat{p}\left( X\right) \in B\left( {\widehat{p}\left( x\right) }\right) }\right\rbrack }\right\rbrack > {\alpha \epsilon }/{10}}\right)
465
+ $$
466
+
467
+ $$
468
+ \leq \exp \left( {-{cn}{\alpha }^{2}{\epsilon }^{2} + \log N}\right) .
469
+ $$
470
+
471
+ Therefore, as long as $n \geq O\left( {\log \left( {N/\delta }\right) /{\epsilon }^{2}{\alpha }^{2}}\right)$ samples, the above probability is bounded by $\delta$ . In other words, with probability at least $1 - \delta$ , we have simultaneously
472
+
473
+ $$
474
+ \underset{ \mathrel{\text{:=}} \widehat{A}\left( x\right) }{\underbrace{\frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}\left( {\widehat{p}\left( {x}_{i}\right) - {a}_{f}\left( {{x}_{i},{y}_{i}}\right) }\right) {k}_{\gamma }\left( {{x}_{i}, x}\right) \mathbb{1}\left\lbrack {\widehat{p}\left( {x}_{i}\right) \in B\left( {\widehat{p}\left( x\right) }\right) }\right\rbrack }}
475
+ $$
476
+
477
+ $$
478
+ - \underset{ \mathrel{\text{:=}} A\left( x\right) }{\underbrace{\mathbb{E}\left\lbrack {\left( {\widehat{p}\left( X\right) - {a}_{f}\left( {X, Y}\right) }\right) {k}_{\gamma }\left( {X, x}\right) \mathbb{1}\left\lbrack {\widehat{p}\left( X\right) \in B\left( {\widehat{p}\left( x\right) }\right) }\right\rbrack }\right\rbrack }}
479
+ $$
480
+
481
+ $$
482
+ \leq {\alpha \epsilon }/{10}\text{.}
483
+ $$
484
+
485
+ for all $x \in \mathcal{N}$ . Similarly, when $n \geq O\left( {\log \left( {N/\delta }\right) /{\epsilon }^{2}{\alpha }^{4}}\right)$ , we also have (with probability at least $1 - \delta$ )
486
+
487
+ $$
488
+ \left| {\underset{ \mathrel{\text{:=}} \widehat{B}\left( x\right) }{\underbrace{\frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}{k}_{\gamma }\left( {{x}_{i}, x}\right) \mathbb{1}\left\lbrack {\widehat{p}\left( {x}_{i}\right) \in B\left( {\widehat{p}\left( x\right) }\right) }\right\rbrack }} - \underset{ \mathrel{\text{:=}} B\left( x\right) }{\underbrace{\mathbb{E}\left\lbrack {{k}_{\gamma }\left( {X, x}\right) \mathbb{1}\left\lbrack {\widehat{p}\left( X\right) \in B\left( {\widehat{p}\left( x\right) }\right) }\right\rbrack }\right\rbrack }}}\right| \leq {\alpha }^{2}\epsilon /{10}
489
+ $$
490
+
491
+ On these concentration events, we have for any $x \in \mathcal{N}$ that
492
+
493
+ $$
494
+ \left| {{\widehat{\operatorname{SLCE}}}_{\gamma }\left( {x;f,\widehat{p}}\right) - {\operatorname{SLCE}}_{\gamma }\left( {x;f,\widehat{p}}\right) }\right| = \left| {\frac{\widehat{A}\left( x\right) }{\widehat{B}\left( x\right) } - \frac{A\left( x\right) }{B\left( x\right) }}\right|
495
+ $$
496
+
497
+ $$
498
+ \leq \left| {\widehat{A}\left( x\right) }\right| \left| {\frac{1}{\widehat{B}\left( x\right) } - \frac{1}{B\left( x\right) }}\right| + \frac{1}{\left| B\left( x\right) \right| }\left| {\widehat{A}\left( x\right) - A\left( x\right) }\right|
499
+ $$
500
+
501
+ $$
502
+ \leq 1 \cdot \frac{{\alpha }^{2}\epsilon /{10}}{\alpha \left( {\alpha - {\alpha }^{2}\epsilon /{10}}\right) } + \frac{1}{\alpha } \cdot {\alpha \epsilon }/{10}
503
+ $$
504
+
505
+ $$
506
+ \leq \epsilon \text{.}
507
+ $$
508
+
509
+ Step 2. We now extend the bound to all $x \in \mathcal{X}$ using the covering argument. By Assumption B, we can take an ${\alpha }^{2}{\epsilon \gamma }/\left( {10L}\right)$ - covering of $\phi \left( \mathcal{X}\right)$ with cardinality $N \leq {\left( {10}CL/{\alpha }^{2}\epsilon \gamma \right) }^{d}$ . Let $\mathcal{N} \subset \mathcal{X}$ denote the covering set (in the $\mathcal{X}$ space). This means that for any $x \in \mathcal{X}$ , there exists ${x}^{\prime } \in \mathcal{N}$ such that $\begin{Vmatrix}{\phi \left( x\right) - \phi \left( {x}^{\prime }\right) }\end{Vmatrix} \leq {\alpha }^{2}{\epsilon \gamma }/\left( {10L}\right)$ and $B\left( {\widehat{p}\left( x\right) }\right) = B\left( {\widehat{p}\left( {x}^{\prime }\right) }\right)$ , which implies that for any $\widetilde{x} \in \mathcal{X}$ we have
510
+
511
+ $$
512
+ \left| {{k}_{\gamma }\left( {\widetilde{x}, x}\right) - {k}_{\gamma }\left( {\widetilde{x},{x}^{\prime }}\right) }\right| = \left| {g\left( \frac{\phi \left( \widetilde{x}\right) - \phi \left( x\right) }{\gamma }\right) - g\left( \frac{\phi \left( \widetilde{x}\right) - \phi \left( {x}^{\prime }\right) }{\gamma }\right) }\right|
513
+ $$
514
+
515
+ $$
516
+ \leq \frac{L}{\gamma }\begin{Vmatrix}{\phi \left( x\right) - \phi \left( {x}^{\prime }\right) }\end{Vmatrix}
517
+ $$
518
+
519
+ $$
520
+ \leq {\alpha }^{2}\epsilon /{10},
521
+ $$
522
+
523
+ where we have used the Lipschitzness assumption of $g$ (Assumption A). This further implies
524
+
525
+ $$
526
+ \left| {\widehat{A}\left( x\right) - \widehat{A}\left( {x}^{\prime }\right) }\right| = \left| {\;\frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}\left( {\widehat{p}\left( {x}_{i}\right) - {a}_{f}\left( {{x}_{i},{y}_{i}}\right) }\right) {k}_{\gamma }\left( {{x}_{i}, x}\right) \mathbb{1}\left\lbrack {\widehat{p}\left( {x}_{i}\right) \in B\left( {\widehat{p}\left( x\right) }\right) }\right\rbrack }\right.
527
+ $$
528
+
529
+ $$
530
+ - \frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}\left( {\widehat{p}\left( {x}_{i}\right) - {a}_{f}\left( {{x}_{i},{y}_{i}}\right) }\right) {k}_{\gamma }\left( {{x}_{i},{x}^{\prime }}\right) \mathbb{1}\left\lbrack {\widehat{p}\left( {x}_{i}\right) \in B\left( {\widehat{p}\left( {x}^{\prime }\right) }\right) }\right\rbrack
531
+ $$
532
+
533
+ $$
534
+ = \left| {\frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}\left( {\widehat{p}\left( {x}_{i}\right) - {a}_{f}\left( {{x}_{i},{y}_{i}}\right) }\right) \left\lbrack {{k}_{\gamma }\left( {{x}_{i}, x}\right) - {k}_{\gamma }\left( {{x}_{i},{x}^{\prime }}\right) }\right\rbrack \mathbb{1}\left\lbrack {\widehat{p}\left( {x}_{i}\right) \in B\left( {\widehat{p}\left( x\right) }\right) }\right\rbrack }\right|
535
+ $$
536
+
537
+ $$
538
+ \leq \frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}\left| {\widehat{p}\left( {x}_{i}\right) - {a}_{f}\left( {{x}_{i},{y}_{i}}\right) }\right| \cdot \left| {{k}_{\gamma }\left( {{x}_{i}, x}\right) - {k}_{\gamma }\left( {{x}_{i},{x}^{\prime }}\right) }\right| \cdot \mathbb{1}\left\lbrack {\widehat{p}\left( {x}_{i}\right) \in B\left( {\widehat{p}\left( x\right) }\right) }\right\rbrack
539
+ $$
540
+
541
+ $$
542
+ \leq {\alpha }^{2}\epsilon /{10}\text{.}
543
+ $$
544
+
545
+ Similarly, we have $\left| {A\left( x\right) - A\left( {x}^{\prime }\right) }\right| \leq {\alpha }^{2}\epsilon /{10},\left| {\widehat{B}\left( x\right) - \widehat{B}\left( {x}^{\prime }\right) }\right| \leq {\alpha }^{2}\epsilon /{10}$ , and $\left| {B\left( x\right) - B\left( {x}^{\prime }\right) }\right| \leq {\alpha }^{2}\epsilon /{10}$ . This means that the estimation error at $x$ is close to that at ${x}^{\prime } \in \mathcal{N}$ and consequently also bounded by $\epsilon$ :
546
+
547
+ $$
548
+ \left| {{\widehat{\operatorname{SLCE}}}_{\gamma }\left( {x;f,\widehat{p}}\right) - {\operatorname{SLCE}}_{\gamma }\left( {x;f,\widehat{p}}\right) }\right| = \left| {\frac{\widehat{A}\left( x\right) }{\widehat{B}\left( x\right) } - \frac{A\left( x\right) }{B\left( x\right) }}\right|
549
+ $$
550
+
551
+ $$
552
+ \leq \left| {\frac{\widehat{A}\left( x\right) }{\widehat{B}\left( x\right) } - \frac{\widehat{A}\left( {x}^{\prime }\right) }{\widehat{B}\left( {x}^{\prime }\right) }}\right| + \left| {\frac{\widehat{A}\left( {x}^{\prime }\right) }{\widehat{B}\left( {x}^{\prime }\right) } - \frac{A\left( {x}^{\prime }\right) }{B\left( {x}^{\prime }\right) }}\right| + \left| {\frac{A\left( {x}^{\prime }\right) }{B\left( {x}^{\prime }\right) } - \frac{A\left( x\right) }{B\left( x\right) }}\right|
553
+ $$
554
+
555
+ $$
556
+ \leq 3\left\lbrack {1 \cdot \frac{{\alpha }^{2}\epsilon /{10}}{\alpha \left( {\alpha - {\alpha }^{2}\epsilon /{10}}\right) } + \frac{1}{\alpha } \cdot {\alpha }^{2}\epsilon /{10}}\right\rbrack
557
+ $$
558
+
559
+ $$
560
+ \leq \epsilon \text{.}
561
+ $$
562
+
563
+ Therefore, taking this $\mathcal{N}$ in step 1, we know that as long as the sample size
564
+
565
+ $$
566
+ n \geq O\left( \frac{\log \left( {\left| \mathcal{N}\right| /\delta }\right) }{{\epsilon }^{2}{\alpha }^{4}}\right) = O\left( \frac{d\left\lbrack {\log \left( {{10CL}/{\alpha }^{2}{\epsilon \gamma }}\right) + \log \left( {1/\delta }\right) }\right\rbrack }{{\alpha }^{4}{\epsilon }^{2}}\right) = \widetilde{O}\left( {d/{\alpha }^{4}{\epsilon }^{2}}\right) ,
567
+ $$
568
+
569
+ we have with probability at least $1 - \delta$ that
570
+
571
+ $$
572
+ \mathop{\sup }\limits_{{x \in \mathcal{X}}}\left| {{\widehat{\operatorname{SLCE}}}_{\gamma }\left( {x;f,\widehat{p}}\right) - {\operatorname{SLCE}}_{\gamma }\left( {x;f,\widehat{p}}\right) }\right| \leq \epsilon .
573
+ $$
574
+
575
+ This is the desired result.
UAI/UAI 2022/UAI 2022 Conference/BCg4lD8ice5/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,330 @@
1
+ Local Calibration: Metrics and Recalibration
2
+
3
+ § ABSTRACT
4
+
5
+ Probabilistic classifiers output confidence scores along with their predictions, and these confidence scores should be calibrated, i.e., they should reflect the reliability of the prediction. Confidence scores that minimize standard metrics such as the expected calibration error (ECE) accurately measure the reliability on average across the entire population. However, it is in general impossible to measure the reliability of an individual prediction. In this work, we propose the local calibration error (LCE) to span the gap between average and individual reliability. For each individual prediction, the LCE measures the average reliability of a set of similar predictions, where similarity is quantified by a kernel function on a pretrained feature space and by a binning scheme over predicted model confidences. We show theoretically that the LCE can be estimated sample-efficiently from data, and empirically find that it reveals miscalibration modes that are more fine-grained than the ECE can detect. Our key result is a novel local recalibration method, LoRe, which improves confidence scores for individual predictions and decreases the LCE. Experimentally, we show that our recalibration method produces more accurate confidence scores, which improves downstream fairness and decision making on classification tasks with both image and tabular data.
6
+
7
+ § 1 INTRODUCTION
8
+
9
+ Uncertainty estimation is extremely important in high stakes decision-making tasks. For example, a patient wants to know the probability that a medical diagnosis is correct; an autonomous driving system wants to know the probability that a pedestrian is correctly identified. Uncertainty estimates are usually achieved by predicting a probability along with each classification. Ideally, we want to achieve individual calibration, i.e., we want to predict the probability that each sample is misclassified.
10
+
11
+ However, each sample is observed only once for most datasets (e.g., image classification datasets do not contain identical images), making it impossible to estimate, or even define, the probability of incorrect classification for individual samples. Because of this, commonly used metrics such as the expected calibration error (ECE) measure the gap between a classifier's confidence and accuracy averaged across the entire dataset. Consequently, ECE can be accurately estimated but does not measure the reliability of individual predictions.
12
+
13
+ In this work, we propose the local calibration error (LCE), a calibration metric that spans the gap between fully global (e.g., ECE) and fully individual calibration. Motivated by the success of kernel-based locality in other fields such as fairness (where similar individuals should be treated similarly) [Dwork et al., 2012, Pleiss et al., 2017] and causal inference (where matching techniques are used to find similar neighboring samples) [Stuart, 2010], we approximate the probability of misclassification for an individual sample by computing the average classification error over similar samples, where similarity is measured by a kernel function in a pre-trained feature space and a binning scheme over predicted confidences. Intuitively, two samples are similar if they are close in a pretrained feature space and have similar predicted confidence scores. By choosing the bandwidth of the kernel function, we can trade off estimation accuracy and individuality: when the bandwidth is very large, we recover existing global calibration metrics; when the bandwidth is small, we approximate individual calibration. We choose an intermediate bandwidth, so our metric can be accurately estimated, and provides some measurement on the reliability of individual predictions.
14
+
15
+ Theoretically, we show that the LCE can be estimated with polynomially many samples if the kernel function is bounded. Empirically, we also show that for intermediate values of the bandwidth, the LCE can be accurately estimated and reveals modes of miscalibration that global metrics (such as ECE) fail to uncover.
16
+
17
+ In addition, we introduce a non-parametric, post-hoc localized recalibration method (LoRe), for lowering the LCE. Empirically, LoRe improves fairness by achieving low calibration error on all potentially sensitive subsets of the data, such as racial groups. Notably, it can do so without any prior knowledge of those groups, and is more effective than global methods at this task. In addition, our recalibration method improves decision making when there is a "safe" action that is selected whenever the predicted confidence is low. For example, an automated system which classifies tissue samples as cancerous should request a human expert opinion whenever it is unsure about a classification. In a simulation on an image classification dataset, we show that recalibrated prediction models more accurately choose whether to use the "safe" action, which improves the overall utility.
18
+
19
+ In summary, the contributions of our paper are as follows. (1) We introduce a local calibration metric, the LCE, that is both easy to compute and can estimate the reliability of individual predictions. (2) We introduce a post-hoc localized recalibration method LoRe, that transforms a model's confidence predictions to improve the local calibration. (3) We empirically evaluate LoRe on several downstream tasks and observe that LoRe improves fairness and decision-making more than existing baselines.
20
+
21
+ § 2 BACKGROUND AND RELATED WORK
22
+
23
+ § 2.1 GLOBAL CALIBRATION METRICS
24
+
25
+ Consider a classification task that maps from some input domain (e.g., images) $\mathcal{X}$ to a finite set of labels $\mathcal{Y} =$ $\{ 1,\cdots ,m\}$ . A classifier is a pair $\left( {f,\widehat{p}}\right)$ where $f : \mathcal{X} \rightarrow \mathcal{Y}$ maps each input $x \in \mathcal{X}$ to a label $y \in \mathcal{Y}$ and $\widehat{p} : \mathcal{X} \rightarrow \left\lbrack {0,1}\right\rbrack$ maps each input $x$ to a confidence value $c$ . Let $\Pr$ be a joint distribution on $\mathcal{X} \times \mathcal{Y}$ (e.g., from which training or test data pairs $(x, y)$ are drawn). The classifier $\left( {f,\widehat{p}}\right)$ is perfectly calibrated [Guo et al., 2017] with respect to $\Pr$ if for all $c \in \left\lbrack {0,1}\right\rbrack$
26
+
27
+ $$
28
+ \Pr \left\lbrack {f\left( X\right) = Y \mid \widehat{p}\left( X\right) = c}\right\rbrack = c. \tag{1}
29
+ $$
30
+
31
+ To numerically measure how well a classifier is calibrated, the most commonly used metric is the expected calibration error (ECE) [Naeini et al., 2015, Guo et al., 2017], which measures the average absolute deviation from Eq. 1 over the domain. In practice, given a finite dataset, the ECE is approximated by binning. The predicted confidences $\widehat{p}$ are partitioned into bins ${B}_{1},\ldots ,{B}_{k}$ , and then a weighted average is taken of the absolute difference between the average confidence $\operatorname{conf}\left( {B}_{i}\right)$ and average accuracy $\operatorname{acc}\left( {B}_{i}\right)$ for each bin ${B}_{i}$ :
34
+
35
+ $$
36
+ \operatorname{ECE}\left( {f,\widehat{p}}\right) \mathrel{\text{ := }} \mathop{\sum }\limits_{{i = 1}}^{k}\frac{\left| {B}_{i}\right| }{N}\left| {\operatorname{conf}\left( {B}_{i}\right) - \operatorname{acc}\left( {B}_{i}\right) }\right| . \tag{2}
37
+ $$
38
+
39
+ Similarly, the maximum calibration error (MCE) [Naeini et al., 2015, Guo et al., 2017] measures the average deviation from Eq. 1 in the bin with the highest calibration error, and is defined as
40
+
41
+ $$
42
+ \operatorname{MCE}\left( {f,\widehat{p}}\right) \mathrel{\text{ := }} \mathop{\max }\limits_{i}\left| {\operatorname{conf}\left( {B}_{i}\right) - \operatorname{acc}\left( {B}_{i}\right) }\right| . \tag{3}
43
+ $$
44
+
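For concreteness, the binned ECE and MCE estimators of Eqs. 2 and 3 can be sketched in a few lines of NumPy. This is a minimal illustration with equal-width bins; the function and variable names are our own:

```python
import numpy as np

def ece_mce(conf, correct, n_bins=15):
    """Binned estimators of ECE (Eq. 2) and MCE (Eq. 3).

    conf:    predicted confidences in [0, 1]
    correct: indicators of f(x) == y
    """
    conf = np.asarray(conf, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    # Assign each sample to an equal-width bin [0, 1/k), ..., [1 - 1/k, 1].
    idx = np.clip(np.digitize(conf, edges[1:-1]), 0, n_bins - 1)
    ece, mce, n = 0.0, 0.0, len(conf)
    for b in range(n_bins):
        mask = idx == b
        if not mask.any():
            continue
        # |conf(B_i) - acc(B_i)| for this bin.
        gap = abs(conf[mask].mean() - correct[mask].mean())
        ece += mask.sum() / n * gap   # weighted average over bins (Eq. 2)
        mce = max(mce, gap)           # worst bin (Eq. 3)
    return ece, mce
```

For instance, on two predictions with confidence 0.8 of which one is correct, the single occupied bin has gap $|0.8 - 0.5| = 0.3$, so both ECE and MCE are 0.3.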
45
+ § 2.2 EXISTING GLOBAL RECALIBRATION METHODS
46
+
47
+ Many existing methods apply a post-hoc adjustment that changes a model's confidence predictions to improve global calibration, including Platt scaling [Platt, 1999], temperature scaling [Guo et al., 2017], isotonic regression [Zadrozny and Elkan, 2002], and histogram binning [Zadrozny and Elkan, 2001]. These methods all learn a simple transformation from the original confidence predictions to new confidence predictions, and aim to decrease the expected calibration error (ECE). Platt scaling fits a logistic regression model; temperature scaling learns a single temperature parameter to rescale confidence scores for all samples simultaneously; isotonic regression learns a piece-wise constant monotonic function; histogram binning partitions confidence scores into bins $\{ [0, \epsilon), [\epsilon, 2\epsilon), \cdots, [1 - \epsilon, 1] \}$ and sorts each validation sample into a bin based on its confidence $\widehat{p}\left( x\right)$ ; it then resets the confidence level of all samples in the bin to match the classification accuracy of that bin.
48
+
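As an illustration of the last of these, histogram binning can be sketched as follows. This is a simplified sketch with equal-width bins; in practice the map is fit on a held-out validation set, and the fallback value for empty bins is our own choice:

```python
import numpy as np

def fit_histogram_binning(conf_val, correct_val, n_bins=15):
    """Learn a bin-wise recalibration map from a validation set."""
    conf_val = np.asarray(conf_val, dtype=float)
    correct_val = np.asarray(correct_val, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(conf_val, edges[1:-1]), 0, n_bins - 1)
    # Each bin's new confidence is that bin's empirical accuracy;
    # empty bins fall back to their midpoint (an arbitrary choice).
    bin_acc = np.array([
        correct_val[idx == b].mean() if np.any(idx == b) else (b + 0.5) / n_bins
        for b in range(n_bins)
    ])

    def recalibrate(conf):
        i = np.clip(np.digitize(np.asarray(conf, dtype=float), edges[1:-1]),
                    0, n_bins - 1)
        return bin_acc[i]

    return recalibrate
```

The returned function maps any new confidence score to the accuracy of the validation samples that landed in the same bin.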
49
+ § 2.3 LOCAL CALIBRATION
50
+
51
+ Two notions of calibration that address some of the deficits of global calibration are class-wise calibration and groupwise calibration. Class-wise calibration groups samples by their true class label [Kull et al., 2019, Nixon et al., 2019] and measures the average class ECE, while group-wise calibration uses pre-specified groupings (e.g., race or gender) [Kleinberg et al., 2016, Pleiss et al., 2017] and measures the average group-wise ECE or maximum group-wise MCE.
52
+
53
+ A few recalibration methods have been proposed for these notions of calibration as well. Dirichlet calibration [Kull et al., 2019] achieves calibration for groups defined by class labels, but does not generalize well to settings with many classes [Zhao et al., 2021]. Multicalibration [Hébert-Johnson et al., 2017] achieves calibration for any group that can be represented by a polynomial sized circuit, but lacks a tractable algorithm. If the groups are known a priori, one can also apply global calibration methods within each group; however, this is impractical in many situations where the groups are not known for new examples at inference time. At an even more local level, Zhao et al. [2020] looks at individual calibration in the regression setting and concludes that individual calibration is impossible to verify with a deterministic forecaster, and thus there is no general method to achieve individual calibration.
54
+
55
+ § 2.4 KERNEL-BASED CALIBRATION METRICS
56
+
57
+ Kumar et al. [2018] introduces the maximum mean calibration error (MMCE), a kernel-based quantity that replaces the hard binning of the standard ECE estimator with a kernel similarity $k\left( {\widehat{p}\left( x\right) ,\widehat{p}\left( {x}^{\prime }\right) }\right)$ between the confidence of two examples. They further propose to optimize the MMCE directly in order to achieve better model calibration globally. Widmann et al. [2019] extends their work and proposes the more general kernel calibration error. Zhang et al. [2020] and Gupta et al. [2020] also consider kernel-based calibration. However, these methods only consider the similarity between model confidences $\widehat{p}\left( x\right) ,\widehat{p}\left( {x}^{\prime }\right)$ , rather than the inputs $x,{x}^{\prime }$ themselves.
58
+
59
+ § 3 THE LOCAL CALIBRATION ERROR
60
+
61
+ Recall that commonly used metrics for calibration, such as the ECE or the MCE, are global in nature and thus only measure an aggregate reliability over the entire dataset, making them insufficient for many applications. An ideal calibration metric would instead measure calibration at an individual level; however, doing so is impossible without making assumptions about the ground truth distribution [Zhao et al., 2020]. A localized calibration metric represents an adjustable balance between these two extremes. Ideally, such a metric should measure calibration at a local level (where the extent of the local neighborhood can be chosen by the user) and group similar data points together.
62
+
63
+ In this section, we introduce the local calibration error (LCE), a kernel-based metric that allows us to measure the calibration locally around a prediction. Our metric leverages learned features to automatically group similar samples into a soft neighborhood, and allows the neighborhood size to be set with a hyperparameter $\gamma$ . We also consider only points with a similar model confidence as the prediction, so that similarity is defined in terms of distance both in the feature space and in model confidence. Thus, the LCE effectively creates soft groupings that depend on the feature space; with a semantically meaningful feature space, these groupings correspond to useful subsets of the data. We then mention a few design choices and visualize LCE maps over a 2D feature space to show that we can use our metric to diagnose regions of local miscalibration.
64
+
65
+ § 3.1 LOCAL CALIBRATION ERROR METRIC
66
+
67
+ We propose a metric to measure calibration locally around a given prediction. The calibration of similar samples should be similar, so we use a kernel similarity function ${k}_{\gamma } : \mathcal{X} \times$ $\mathcal{X} \rightarrow {\mathbb{R}}_{ + }$ , which provides similarity scores, to define soft local neighborhoods. ${k}_{\gamma }\left( {x,{x}^{\prime }}\right)$ has bandwidth $\gamma > 0$ , which determines the extent of the local neighborhood - as $\gamma$ increases, the neighborhood grows. Less similar (i.e., further away) samples ${x}^{\prime }$ have less influence on the local calibration metric at $x$ . Also, as with the ECE and MCE (Eqs. 2 and 3), we use binning and consider only the points in the same confidence bin as $x$ . Thus, the samples that influence the local calibration metric at $x$ are similar to $x$ in both features and model confidences.
68
+
69
+ More formally, let $\phi : \mathcal{X} \rightarrow {\mathbb{R}}^{d}$ be a feature map that transforms an input to a feature vector, and let ${k}_{\gamma }$ be parameterized as ${k}_{\gamma }\left( {x,{x}^{\prime }}\right) = g\left( {\left( {\phi \left( x\right) - \phi \left( {x}^{\prime }\right) }\right) /\gamma }\right)$ for some Lipschitz function $g : {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}_{ + }$ . Then given a data point $x \in \mathcal{X}$ and a classifier $\left( {f,\widehat{p}}\right)$ , the local calibration error (LCE) of the model at $x$ is the expected difference between the model's confidence and accuracy on a randomly sampled data point ${x}^{\prime } \sim \Pr$ , weighted by the kernel similarity ${k}_{\gamma }\left( {x,{x}^{\prime }}\right)$ .
70
+
71
+ We say a probabilistic classifier $\left( {f,\widehat{p}}\right)$ is perfectly locally calibrated with respect to ${k}_{\gamma }$ if
+
+ $$
+ \mathop{\sup }\limits_{x \in \operatorname{supp}\left( \Pr \right) }\underbrace{\left| \mathbb{E}_{\left( x^{\prime },y^{\prime }\right) \sim \Pr }\left\lbrack \left( \widehat{p}\left( x^{\prime }\right) - \mathbb{1}\left\lbrack f\left( x^{\prime }\right) = y^{\prime }\right\rbrack \right) {k}_{\gamma }\left( x,x^{\prime }\right) \,\middle|\, \widehat{p}\left( x^{\prime }\right) = \widehat{p}\left( x\right) \right\rbrack \right| }_{ =: \,{\operatorname{LCE}}_{\gamma }^{ * }\left( x;f,\widehat{p}\right) } = 0.
+ $$
76
+
77
+ Similar to perfect calibration, perfect local calibration is achieved by the Bayes-optimal classifier. In general, perfect local calibration is a much stricter notion than perfect calibration due to localizing to each individual data point $x$ , and reduces to perfect calibration if ${k}_{\gamma }\left( {x,{x}^{\prime }}\right) \equiv 1$ is a trivial kernel.
78
+
79
+ To define LCE on a finite dataset, we perform an additional binning on the confidence to deal with the conditioning.
80
+
81
+ Let $\mathcal{D} = \left( {\left( {{x}_{1},{y}_{1}}\right) ,\ldots ,\left( {{x}_{N},{y}_{N}}\right) }\right)$ be a dataset, and let $\beta \left( x\right) = \left\{ {i : \widehat{p}\left( {x}_{i}\right) \in B\left( {\widehat{p}\left( x\right) }\right) }\right\}$ be the set of indices of the points in $\mathcal{D}$ occupying the same confidence bin as $x$ . Then we can compute the LCE by
82
+
83
+ $$
+ {\operatorname{LCE}}_{\gamma }\left( {x;f,\widehat{p}}\right) = \left| \frac{\mathop{\sum }\limits_{{i \in \beta \left( x\right) }}\left( {\widehat{p}\left( {x}_{i}\right) - \mathbb{1}\left\lbrack {f\left( {x}_{i}\right) = {y}_{i}}\right\rbrack }\right) {k}_{\gamma }\left( {x,{x}_{i}}\right) }{\mathop{\sum }\limits_{{i \in \beta \left( x\right) }}{k}_{\gamma }\left( {x,{x}_{i}}\right) }\right| . \tag{4}
+ $$
90
+
91
+ Note that the quantity $\left( {\widehat{p}\left( {x}_{i}\right) - \mathbb{1}\left\lbrack {f\left( {x}_{i}\right) = {y}_{i}}\right\rbrack }\right)$ is simply the difference between the confidence and the accuracy for sample ${x}_{i}$ , and the denominator is a normalization term.
92
+
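Eq. 4 can be computed directly once a kernel and feature map are fixed. The following is a minimal NumPy sketch (function and argument names are ours) of the LCE at a single point, using a Laplacian kernel on the feature vectors, which is one possible choice (see Section 3.2):

```python
import numpy as np

def lce_at_point(x_feat, x_conf, feats, confs, correct, gamma, n_bins=15):
    """Finite-sample LCE (Eq. 4) at one point x.

    x_feat: feature vector phi(x);  x_conf: confidence p_hat(x)
    feats, confs, correct: features, confidences, and correctness
    indicators 1[f(x_i) = y_i] for the evaluation set.
    """
    d = len(x_feat)
    edges = np.linspace(0.0, 1.0, n_bins + 1)

    def bin_idx(c):
        return np.clip(np.digitize(c, edges[1:-1]), 0, n_bins - 1)

    # beta(x): points whose confidence falls in the same bin as x.
    in_bin = bin_idx(confs) == bin_idx(np.array([x_conf]))[0]
    # Laplacian kernel k_gamma(x, x_i) = exp(-||phi(x) - phi(x_i)||_1 / (d * gamma)).
    k = np.exp(-np.abs(feats[in_bin] - x_feat).sum(axis=1) / (d * gamma))
    gaps = confs[in_bin] - np.asarray(correct, dtype=float)[in_bin]
    # Kernel-weighted average confidence-accuracy gap, then absolute value.
    return abs((gaps * k).sum() / k.sum())
```

With two same-bin points at the same feature location, one correct and one incorrect, both with confidence 0.8, the kernel weights are equal and the LCE is $|(-0.2 + 0.8)/2| = 0.3$.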
93
+ We then define the maximum local calibration error (MLCE) as
94
+
95
+ $$
96
+ {\operatorname{MLCE}}_{\gamma }\left( {f,\widehat{p}}\right) \mathrel{\text{ := }} \mathop{\max }\limits_{x}{\operatorname{LCE}}_{\gamma }\left( {x;f,\widehat{p}}\right) . \tag{5}
97
+ $$
98
+
99
+ Intuitively, the LCE considers a neighborhood about a sample $x$ (as defined by the kernel ${k}_{\gamma }$ and the confidence bin $B$ ), and computes the kernel-weighted average of the difference between the confidence and accuracy for each sample in that neighborhood. Note that by changing the bandwidth $\gamma$ , we can interpolate the LCE between an individualized calibration metric (as $\gamma \rightarrow 0$ ) and a global one (as $\gamma \rightarrow \infty$ ). Lemma 1 makes this more concrete under the assumption that $\mathop{\lim }\limits_{{\gamma \rightarrow \infty }}{k}_{\gamma }\left( {x,{x}^{\prime }}\right) = 1$ (proof in Appendix C). For example, the Laplacian and Gaussian kernels satisfy this condition.
+
+ Lemma 1. As $\gamma \rightarrow \infty$ , the MLCE converges to the MCE.
100
+
101
+ Theorem 1 shows that under certain regularity conditions, the finite-sample estimator $\operatorname{LCE}\left( x\right)$ converges uniformly and sample-efficiently to its true expected value ${\mathrm{{LCE}}}^{ * }\left( x\right)$ :
102
+
103
+ Theorem 1. (Informal) Let $\alpha \leq$ $\mathop{\inf }\limits_{{x \in \mathcal{X}}}\mathbb{E}\left\lbrack {{k}_{\gamma }\left( {X,x}\right) \mathbb{1}\left\lbrack {\widehat{p}\left( X\right) \in B\left( {\widehat{p}\left( x\right) }\right) }\right\rbrack }\right\rbrack \;$ be a lower bound on the expectation of the kernel, and $d$ be the dimension of the kernel's feature space. If the sample size is at least $\widetilde{O}\left( {d/{\alpha }^{4}{\epsilon }^{2}}\right)$ where $\epsilon > 0$ is a target accuracy level, then with probability at least $1 - \delta$ we have
104
+
105
+ $$
106
+ \mathop{\sup }\limits_{{x \in \mathcal{X}}}\left| {{\operatorname{LCE}}_{\gamma }\left( {x;f,\widehat{p}}\right) - {\operatorname{LCE}}_{\gamma }^{ * }\left( {x;f,\widehat{p}}\right) }\right| \leq \epsilon .
107
+ $$
108
+
109
+ Here, $\widetilde{O}$ hides $\log$ factors of the form $\log \left( {1/{\alpha \gamma \delta \epsilon }}\right)$ . In practice, $\alpha$ depends inversely on $\gamma$ .
110
+
111
+ To summarize, the MLCE measures a worst-case individual calibration error as $\gamma \rightarrow 0$ (i.e., the effective neighborhood is very small) and converges to the global MCE metric as $\gamma \rightarrow \infty$ (i.e., the effective neighborhood is very large). In practice, one must pick intermediate values of $\gamma$ to balance a more local notion of calibration error with the sample efficiency of its estimation. A more formal statement and full proof of Theorem 1 can be found in Appendix D.
112
+
113
+ § 3.2 CHOICE OF KERNEL AND FEATURE MAP
114
+
115
+ In this work, we compute the LCE using 15 equal-width bins and use the Laplacian kernel
116
+
117
+ $$
118
+ {k}_{\gamma }\left( {x,{x}^{\prime }}\right) = \exp \left( {-\frac{{\begin{Vmatrix}\phi \left( x\right) - \phi \left( {x}^{\prime }\right) \end{Vmatrix}}_{1}}{d\gamma }}\right) .
119
+ $$
120
+
121
+ Because distances in a high-dimensional input space (e.g., image data) may not be meaningful on their own, we evaluate the kernel on a feature representation of $x$ rather than on $x$ itself. Features learned from neural networks have proven useful for a wide range of tasks, and they have been shown to capture useful semantic features of their inputs [Huh et al., 2016, Chen et al., 2020, Li et al., 2020]. The kernel similarity term ${k}_{\gamma }\left( {x,{x}^{\prime }}\right)$ in the LCE thus leverages learned features to automatically capture rich subgroups of the data. For image data, we chose an Inception-v3 model as our feature map, since Inception features are widely accepted as useful and representative in many areas (e.g., for generative models [Salimans et al., 2016]), though other neural features can also be used (Appendix B). For tabular data, we used the final hidden layer of the neural network trained for classification.
122
+
123
124
+
125
+ Figure 1: MLCE of a Resnet-50 classifier on the ImageNet test split, as a function of the kernel bandwidth $\gamma$ . We use a Laplacian kernel with feature map ${\phi }_{2} \circ {\phi }_{1}$ , where ${\phi }_{1} : \mathcal{X} \rightarrow {\mathbb{R}}^{2048}$ is the Inception-v3 model’s hidden layer. Blue: ${\phi }_{2} : {\mathbb{R}}^{2048} \rightarrow {\mathbb{R}}^{3}$ is t-SNE; orange: ${\phi }_{2} : {\mathbb{R}}^{2048} \rightarrow {\mathbb{R}}^{50}$ is PCA; green: ${\phi }_{2}\left( z\right) = z$ is the identity.
126
+
127
+ In general, we also use t-SNE or PCA to reduce the dimension of the feature space. For example, the 2048-D Inception-v3 embedding is still very high-dimensional. We report results using t-SNE to reduce the dimension to 2 or 3, as well as PCA to reduce the dimension to 50 for image data and 20 for tabular data. Thus the overall representation function is $\phi \left( x\right) = {\phi }_{2}\left( {{\phi }_{1}\left( x\right) }\right)$ , where ${\phi }_{1}$ maps from the inputs to the neural features, and ${\phi }_{2}$ reduces the feature space dimension.
128
+
129
+ Figure 1 plots the MLCE as a function of the kernel bandwidth for an ImageNet classification task. Note that when $\gamma$ is small, the MLCE is 1 (a worst-case individual calibration error), and when $\gamma$ is large, the MLCE approaches the global MCE. To obtain a single summary statistic describing the local calibration error, we can view this plot and pick a value of $\gamma$ between the limiting behaviors. We find that $\gamma = {0.2}$ and $\gamma = {0.4}$ are good intermediate points for the 3-D t-SNE and 50-D PCA features, respectively (Figure 1).
130
+
131
+ § 3.3 LOCAL CALIBRATION ERROR VISUALIZATIONS
132
+
133
+ To provide more intuition for the LCE, we will now visualize some examples of the LCE metric over a 2-D feature embedding. We consider a ResNet-50 model pre-trained on ImageNet as our classifier $\left( {f,\widehat{p}}\right)$ , and pre-trained Inception-v3 features as a feature map ${\phi }_{1} : \mathcal{X} \rightarrow {\mathbb{R}}^{2048}$ . ${\phi }_{2} : {\mathbb{R}}^{2048} \rightarrow {\mathbb{R}}^{2}$ then reduces the 2048-D feature vectors with t-SNE to two dimensions for ease of visualization in the LCE landscapes, so our overall representation function is $\phi \left( x\right) = {\phi }_{2}\left( {{\phi }_{1}\left( x\right) }\right)$ . Figure 2 visualizes the landscape of ${\operatorname{LCE}}_{0.2}\left( {x;f,\widehat{p}}\right)$ as a function of $\phi \left( x\right)$ for the entire ImageNet validation set, as well as the marginal CDF of ${\operatorname{LCE}}_{0.2}\left( {x;f,\widehat{p}}\right)$ . We show these visualizations for the two confidence bins with the best and worst global calibration.
134
+
135
136
+
137
+ Figure 2: We visualize ${\operatorname{LCE}}_{0.2}\left( {x;f,\widehat{p}}\right)$ for a ResNet-50 classifier $\left( {f,\widehat{p}}\right)$ pre-trained on ImageNet, for every image $x$ in the ImageNet validation set. We focus on the bins with the best and worst global calibration errors.
138
+
139
+ In the bin with the best global calibration, Figure 2 (top) shows that the landscape of the LCE is highly non-uniform, and the CDF of the LCE lies almost entirely to the right of the bin's average calibration error. Numerically, the bin's average calibration error is 0.0022, while its average LCE is 0.0259. This implies that the regions where the model is underconfident and overconfident are spatially clustered within the bin. Because global calibration metrics solely consider the average accuracy and average confidence within a bin, confidence predictions that are too high and too low are averaged out to obtain a low overall error value; they fail to capture this localized miscalibration.
140
+
141
+ In the bin with the worst global calibration, Figure 2 (bottom) clearly shows that the LCE still has high variance, even though the average calibration error of the bin (0.0455) is much closer to its average LCE (0.0515). The CDF plot provides more evidence that the landscape is not flat - there is no sharp rise at the bin calibration error. However, in this case the regions that are underconfident and overconfident are not clustered spatially.
142
+
143
+ § 4 LCE RECALIBRATION
144
+
145
+ In this section, we introduce local recalibration (LoRe), a non-parametric recalibration method that adjusts a model's output confidences to achieve better local calibration. Our method improves the LCE more than existing recalibration methods, and using our method improves performance on both downstream fairness tasks and downstream decision-making tasks. Specifically, we can leverage the kernel similarity to achieve strong calibration for all sensitive subgroups of a population, without knowing those groups a priori. As long as the feature space is semantically meaningful, LoRe provides utility for downstream tasks without needing subgroup labels for the samples. If the subgroups are known, one can recover standard group-wise recalibration methods (and metrics) by using the improper kernel $k\left( {x,{x}^{\prime }}\right) = \mathbb{1}\left\lbrack {x,{x}^{\prime }\text{ in same group }}\right\rbrack$ .
146
+
147
+ The idea behind our method is simple: we can compute the kernel-weighted accuracy for each point $x$ of all the points that are in the same confidence bin as $x$ , and then reset the confidence of $x$ to this kernel-weighted accuracy value. Note that using the kernel function to compute this value is intuitively like taking a weighted average of the accuracy of the points in the local neighborhood of $x$ . Thus, LoRe can be considered a local analogue to histogram binning.
148
+
149
+ More formally, given a trained classifier $\left( {f,\widehat{p}}\right)$ , a recalibration dataset $\mathcal{D} = \left( {\left( {{x}_{1},{y}_{1}}\right) ,\ldots ,\left( {{x}_{N},{y}_{N}}\right) }\right)$ , and a fixed point $x \in \mathcal{X}$ , let $\beta \left( x\right) = \left\{ {i : \widehat{p}\left( {x}_{i}\right) \in B\left( {\widehat{p}\left( x\right) }\right) }\right\}$ be the set of indices of the points in $\mathcal{D}$ occupying the same confidence bin as $x$ . Then, we compute the recalibrated confidence as
150
+
151
+ $$
152
+ {\widehat{p}}^{\prime }\left( x\right) = \frac{\mathop{\sum }\limits_{{i \in \beta \left( x\right) }}{k}_{\gamma }\left( {x,{x}_{i}}\right) \mathbb{1}\left\lbrack {f\left( {x}_{i}\right) = {y}_{i}}\right\rbrack }{\mathop{\sum }\limits_{{i \in \beta \left( x\right) }}{k}_{\gamma }\left( {x,{x}_{i}}\right) }. \tag{6}
153
+ $$
154
+
155
+ Equation 6 represents the kernel-weighted average accuracy of all points in the same confidence bin as $x$ . In the limit as the kernel bandwidth $\gamma \rightarrow \infty$ , ${\widehat{p}}^{\prime }\left( x\right) \rightarrow \mathop{\sum }\limits_{{i \in \beta \left( x\right) }}\mathbb{1}\left\lbrack {f\left( {x}_{i}\right) = {y}_{i}}\right\rbrack /\left| {\beta \left( x\right) }\right|$ , which recovers histogram binning. As $\gamma \rightarrow 0$ , ${\widehat{p}}^{\prime }\left( x\right) \rightarrow \mathbb{1}\left\lbrack {f\left( {x}_{{i}^{ * }}\right) = {y}_{{i}^{ * }}}\right\rbrack$ , where ${i}^{ * } = \arg \mathop{\max }\limits_{{i \in \beta \left( x\right) }}{k}_{\gamma }\left( {x,{x}_{i}}\right)$ , thus recovering a nearest-neighbor method. For intermediate $\gamma$ , our method interpolates between the two extremes. Throughout this work, we used $\gamma = {0.2}$ for LoRe with tSNE and $\gamma = {0.4}$ for LoRe with PCA, since these represent intermediate points between the limiting behaviors of the LCE (e.g., see Fig. 1).
156
+
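A minimal NumPy sketch of Eq. 6 follows (names are ours; `feats`, `confs`, and `correct` come from the recalibration dataset $\mathcal{D}$, and the kernel is the Laplacian of Section 3.2):

```python
import numpy as np

def lore_confidence(x_feat, x_conf, feats, confs, correct, gamma, n_bins=15):
    """Recalibrated confidence p_hat'(x) from Eq. 6: the kernel-weighted
    accuracy of the recalibration points sharing x's confidence bin."""
    d = len(x_feat)
    edges = np.linspace(0.0, 1.0, n_bins + 1)

    def bin_idx(c):
        return np.clip(np.digitize(c, edges[1:-1]), 0, n_bins - 1)

    # beta(x): recalibration points in the same confidence bin as x.
    in_bin = bin_idx(confs) == bin_idx(np.array([x_conf]))[0]
    # Laplacian kernel with bandwidth gamma (Sec. 3.2).
    k = np.exp(-np.abs(feats[in_bin] - x_feat).sum(axis=1) / (d * gamma))
    acc = np.asarray(correct, dtype=float)[in_bin]
    return (k * acc).sum() / k.sum()
```

With a tiny bandwidth the nearest neighbor dominates the weighted average; with a huge bandwidth all same-bin points are weighted equally, recovering histogram binning, consistent with the limits described above.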
157
+ § 5 EXPERIMENTS
158
+
159
+ In this section, we show empirically that LoRe substantially improves LCE values, and that these lower LCE values lead to better performance on downstream fairness and decision-making tasks. In particular, we evaluate the local calibration through the MLCE, because we are interested in understanding a model's worst-case local miscalibration. On each task, we compare the performance of LoRe to no recalibration ('Original'), temperature scaling ('TS') [Guo et al., 2017], histogram binning ('HB') [Zadrozny and Elkan, 2001], isotonic regression ('IR') [Zadrozny and Elkan, 2002], and direct MMCE optimization ('MMCE') [Kumar et al., 2018], all strong global recalibration methods.
160
+
161
+ We first run extensive experiments on three datasets to demonstrate that LoRe outperforms all baselines and achieves the lowest MLCE over a wide range of $\gamma$ values. We then evaluate the performance of our method on a fairness task, where it is important that a model is well-calibrated for all sensitive subgroups of a given population, and we demonstrate that it achieves the lowest group-wise MCE. Notably, we find that the MLCE is well-correlated with the group-wise MCE across all experimental settings, and thus achieving low MLCE is a good indicator that a model has good group-wise calibration. Finally, we compare the performance of our method against the baselines on a cost-sensitive decision-making task, where there is a low cost for a prediction of "unsure" but a high cost for an incorrect prediction, and show that our method achieves the lowest cost.
162
+
163
+ § 5.1 DATASETS
164
+
165
+ ImageNet dataset [Deng et al., 2009]: A large-scale dataset of natural scene images with 1000 classes; over 1.3 million images total. The training/validation/test split is 1.3 million / 25,000 / 25,000.
166
+
167
168
+
169
+ Figure 3: MLCE vs. kernel bandwidth $\gamma$ for ImageNet. LoRe (with t-SNE and $\gamma = {0.2}$ ) achieves the lowest MLCE for a wide range of $\gamma$ . This suggests that LoRe leads to lower LCE values across the whole dataset.
170
+
171
+ UCI Communities and Crime dataset [Dua and Graff, 2017]: This tabular dataset contains a number of attributes about American neighborhoods (e.g., race, age, employment, housing, etc.). The task is to predict the neighborhood's violent crime rate. The training/validation/test split is 1494/500/500. We randomize the training/validation/test split over multiple trials.
172
+
173
+ CelebA dataset [Liu et al., 2015]: A large-scale dataset of face images with 40 attribute annotations (e.g., glasses, hair color, etc.); 202,599 images total. The training/validation/test split is 162,770/19,867/19,962.
174
+
175
+ § 5.2 RECALIBRATION PERFORMANCE
176
+
177
+ LoRe substantially improves the LCE values. In Figure 3, we plot the MLCE as a function of $\gamma$ . We can see that our method outperforms all baselines (strong global calibration methods) across a wide range of $\gamma$ values on ImageNet. This is true despite the fact that we only implement LoRe for a single $\gamma$ . Appendix B provides similar results on the Communities & Crime, CelebA, CIFAR-10, and CIFAR-100 datasets. Note that LoRe works well regardless of the feature map and the dimensionality reduction method (see Section 5.3 for results with both t-SNE and PCA). Although the results shown in this section use Inception-v3 features, we show similar results in Appendix B with AlexNet [Krizhevsky, 2014], DenseNet121 [Huang et al., 2018], and ResNet101 [He et al., 2015] features.
178
+
179
+ Recall that as $\gamma$ gets large, the MLCE recovers the MCE; because LoRe does well even at large $\gamma$ , our method also works well at minimizing global calibration errors. The fact that LoRe lowers the worst-case LCE suggests that it leads to lower LCE values across the entire dataset.
180
+
181
+ § 5.3 DOWNSTREAM FAIRNESS PERFORMANCE
182
+
183
+ Experimental Setup In many fairness-related applications, it is important to show that a model is well-calibrated for all sensitive subgroups of a given population. For example, when predicting the crime rate of a neighborhood, a model should not be considered well-calibrated if it consistently underestimates the crime rate for neighborhoods of one demographic, while overestimating the crime rate for neighborhoods of a different demographic. Therefore, in this section, we examine the worst-case group-wise miscalibration of a classifier, as measured by the maximum group-wise MCE when evaluated only on sensitive subgroups. We consider the following experimental settings:
184
+
185
+ | Recalibration method | Setting 1 | Setting 2 | Setting 3 |
+ | --- | --- | --- | --- |
+ | No recalibration | ${0.588} \pm {0.107}$ | ${0.407} \pm {0.087}$ | ${0.446} \pm {0.083}$ |
+ | Temperature scaling | ${0.521} \pm {0.092}$ | ${0.532} \pm {0.089}$ | ${0.441} \pm {0.079}$ |
+ | Histogram binning | ${0.515} \pm {0.081}$ | ${0.218} \pm {0.056}$ | ${0.268} \pm {0.067}$ |
+ | Isotonic regression | ${0.596} \pm {0.063}$ | ${0.615} \pm {0.100}$ | ${0.716} \pm {0.082}$ |
+ | MMCE optimization | ${0.526} \pm {0.172}$ | ${0.429} \pm {0.079}$ | ${0.475} \pm {0.079}$ |
+ | Group temp. scaling | ${0.423} \pm {0.066}$ | ${0.673} \pm {0.075}$ | ${0.329} \pm {0.108}$ |
+ | Group hist. binning | ${0.542} \pm {0.083}$ | ${0.260} \pm {0.053}$ | ${0.352} \pm {0.068}$ |
+ | LoRe (tSNE) (ours) | $\mathbf{{0.351} \pm {0.084}}$ | $\mathbf{{0.165} \pm {0.055}}$ | ${0.235} \pm {0.063}$ |
+ | LoRe (PCA) (ours) | ${0.392} \pm {0.071}$ | ${0.167} \pm {0.013}$ | $\mathbf{{0.154} \pm {0.082}}$ |
217
+
218
+ Table 1: Performance on downstream fairness, as measured by maximum group-wise MCE (lower is better). Experimental settings as described in Section 5.3. Mean and standard deviations are computed over 60 random seeds for setting 1, and 20 for settings 2 and 3. Best results are bold.
219
+
220
+ 1. UCI Communities and Crime: Predict whether a neighborhood's crime rate is higher than the median; group neighborhoods by their plurality race (White, Black, Asian, Indian, Hispanic). 60 random seeds for model training.
221
+
222
+ 2. CelebA: Predict a person's hair color (bald, black, blond, brown, gray, other); group people by hair type (bald, receding hairline, bangs, straight, wavy, other). 20 random seeds for model training.
223
+
224
+ 3. CelebA: Predict a person's hair type; group people by their hair color (the inverse of Setting 2). 20 random seeds for model training.
225
+
226
+ For each task, we train a classifier (see Appendix A for full details) and recalibrate its output confidences using each of the recalibration methods.
227
+
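The evaluation metric here, the maximum group-wise MCE, can be sketched as follows (a simple NumPy sketch; function and argument names are ours):

```python
import numpy as np

def max_groupwise_mce(conf, correct, groups, n_bins=15):
    """Worst-case binned MCE (Eq. 3), computed within each sensitive group."""
    conf = np.asarray(conf, dtype=float)
    correct = np.asarray(correct, dtype=float)
    groups = np.asarray(groups)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(conf, edges[1:-1]), 0, n_bins - 1)
    worst = 0.0
    for g in np.unique(groups):
        for b in range(n_bins):
            mask = (groups == g) & (idx == b)
            if mask.any():
                # Bin calibration gap restricted to group g.
                worst = max(worst, abs(conf[mask].mean() - correct[mask].mean()))
    return worst
```

Note that, unlike the MLCE, this metric requires the group label of every sample; LoRe itself never uses these labels.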
228
+ Results Table 1 reports the maximum group-wise MCE for each of the recalibration methods on each of the three tasks. LoRe outperforms the other baselines, achieving an average 49% reduction relative to no recalibration and an average 23% improvement over the next best global recalibration method. (Note in Figures 5, 6, and 7 in Appendix B that LoRe is the most effective method for lowering the MLCE over a wide range of $\gamma$ .) Notably, LoRe is robust to the feature map used (tSNE vs. PCA). It even outperforms global methods applied to each individual group, implying that correcting local calibration errors is a robust way to improve group calibration that generalizes better than naive alternatives.
229
+
230
+ | Metric | Setting 1 | Setting 2 | Setting 3 |
+ | --- | --- | --- | --- |
+ | ECE | 0.102 | -0.061 | -0.195 |
+ | MCE | 0.233 | 0.439 | 0.281 |
+ | NLL | 0.542 | 0.045 | -0.287 |
+ | Brier | 0.101 | 0.144 | -0.280 |
+ | ${\mathrm{{MLCE}}}_{0.2}$ (tSNE) | **0.642** | **0.801** | 0.591 |
+ | ${\mathrm{{MLCE}}}_{0.4}$ (PCA) | 0.639 | 0.659 | **0.778** |
253
+
254
+ Table 2: Pearson correlation between max group-wise MCE and other calibration metrics (higher is better). Experimental settings as described in Section 5.3. Best results in bold. MLCE is better-correlated with the max group-wise MCE than any of the global metrics.
255
+
256
+ Moreover, Table 2 shows that the maximum group-wise MCE is well-correlated with the MLCE; in fact, it is much better correlated with the MLCE than with any of the global calibration metrics. Taken together, our results indicate that lowering the LCE has positive implications in fairness settings that cannot be achieved by simply lowering global metrics like the ECE. For reference, we also include the performance of all recalibration methods on various global calibration metrics in Table 3, which shows that LoRe is able to improve worst-case group-wise calibration without meaningfully sacrificing (and in some cases improving) average-case global calibration.
257
+
258
+ ### 5.4 DOWNSTREAM DECISION-MAKING
259
+
260
+ Experimental Setup Machine learning predictions are often used to make decisions, and in many situations an agent must select the best action in expectation. As an example, suppose there is a low cost $u$ associated with returning "unsure" and a high cost $w$ associated with returning an incorrect classification (e.g., in situations such as autonomous driving, being unsure incurs only the small cost of calling a human operator, but making an incorrect classification incurs a high cost). An agent with good uncertainty quantification can make a better decision about whether to return a classification or return "unsure": for a calibrated model, it is optimal for the agent to return "unsure" below the confidence threshold of $1 - u/w$ , and return a prediction above this threshold.
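As a sanity check on this threshold rule, the following sketch (with hypothetical costs, not values from the paper's experiments) compares the expected cost of abstaining versus predicting at a calibrated confidence level, and confirms that switching at $1 - u/w$ is never worse than the pointwise-optimal choice:

```python
import numpy as np

def expected_cost(confidence, u, w, abstain):
    # For a calibrated model, `confidence` is the probability that the
    # prediction is correct: abstaining always costs u, while predicting
    # costs w with probability (1 - confidence).
    return u if abstain else (1.0 - confidence) * w

u, w = 1.0, 5.0              # hypothetical costs
threshold = 1.0 - u / w      # optimal switching point: 0.8

for c in np.linspace(0.0, 1.0, 101):
    best = min(expected_cost(c, u, w, True), expected_cost(c, u, w, False))
    policy = expected_cost(c, u, w, abstain=(c < threshold))
    assert np.isclose(policy, best)  # thresholding matches the best action
```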
261
+
262
+ | Recalibration method | Setting 1 ECE(%) | Setting 1 NLL | Setting 1 Brier | Setting 2 ECE(%) | Setting 2 NLL | Setting 2 Brier | Setting 3 ECE(%) | Setting 3 NLL | Setting 3 Brier |
+ | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+ | No recalibration | ${15.1}_{2.7}$ | ${.96}_{.25}$ | ${.17}_{.02}$ | ${\mathbf{1.8}}_{0.3}$ | ${\mathbf{.617}}_{.004}$ | ${.641}_{.004}$ | ${1.1}_{0.3}$ | ${.782}_{.006}$ | ${.571}_{.006}$ |
+ | Temperature scaling | ${4.9}_{1.7}$ | ${.43}_{.03}$ | ${.14}_{.01}$ | ${2.0}_{0.3}$ | ${.619}_{.004}$ | ${.622}_{.003}$ | ${\mathbf{1.0}}_{0.2}$ | ${\mathbf{.781}}_{.006}$ | ${.569}_{.002}$ |
+ | Histogram binning | ${\mathbf{3.3}}_{1.1}$ | ${.48}_{.03}$ | ${.15}_{.01}$ | ${2.5}_{0.2}$ | ${.619}_{.004}$ | ${.614}_{.003}$ | ${2.5}_{0.4}$ | ${.788}_{.006}$ | ${.552}_{.002}$ |
+ | Isotonic regression | ${30.6}_{2.3}$ | ${.79}_{.05}$ | ${.30}_{.02}$ | ${2.6}_{0.2}$ | ${.618}_{.004}$ | ${.615}_{.003}$ | ${2.4}_{0.2}$ | ${.785}_{.006}$ | ${.553}_{.002}$ |
+ | MMCE optimization | ${4.4}_{1.3}$ | ${.43}_{.03}$ | ${.14}_{.01}$ | ${3.8}_{0.7}$ | ${.646}_{.014}$ | ${.679}_{.012}$ | ${5.4}_{0.8}$ | ${.808}_{.009}$ | ${.619}_{.009}$ |
+ | LoRe (tSNE) (ours) | ${3.5}_{1.1}$ | ${\mathbf{.42}}_{.02}$ | ${\mathbf{.13}}_{.01}$ | ${2.8}_{0.2}$ | ${.623}_{.004}$ | ${.613}_{.003}$ | 2.6 | ${.792}_{.006}$ | ${.551}_{.002}$ |
+ | LoRe (PCA) (ours) | ${4.5}_{1.4}$ | ${.44}_{.02}$ | ${.14}_{.01}$ | ${3.1}_{0.2}$ | ${.628}_{.004}$ | ${\mathbf{.606}}_{.003}$ | ${2.8}_{0.4}$ | ${.792}_{.007}$ | ${\mathbf{.538}}_{.002}$ |
+
+ Table 3: Performance on global calibration metrics, formatted as ${\mathrm{{mean}}}_{\mathrm{{sd}}}$ . Lower is better. Experimental settings as described in Section 5.3. Best results are bold. Across all settings, LoRe generally achieves a global calibration error that is comparable to the baselines.
293
+
294
+ [Figure 4 plot: x-axis "Reward Ratio", y-axis "Improvement in Reward"; curves for Original, TS, and LoRe]
295
+
296
+ Figure 4: Reward attained vs. reward ratio for the ImageNet dataset (higher is better). LoRe achieves the highest rewards across a wide range of reward ratios.
297
+
298
+ Following this policy (i.e., returning "unsure" when the confidence is below this threshold and returning a prediction when it is above), we used a ResNet-50 model to make predictions on ImageNet, and recalibrated the predictions with each of the recalibration methods. For each method, we then calculated the total reward attained under various reward ratios $w/u$ , as well as various global calibration metrics.
299
+
300
+ Results In Figure 4, we show the improvement in the total reward over the original classifier (i.e., no recalibration) as a function of the reward ratio $w/u$ (the ratio of the cost of an incorrect classification to the cost of being unsure). Across a wide range of reward ratios, LoRe achieves the highest reward. The MLCE curves for this task are shown in Figure 3; note that LoRe also achieves lower LCE values than the global recalibration methods. Table 4 reports several global calibration metrics; LoRe achieves strong global calibration. These results indicate that our recalibration method most effectively lowers LCE values without sacrificing (and indeed often improving) average-case global calibration, and that these lower LCE values correspond to better performance on this decision-making task.
301
+
302
+ | Recalibration method | ECE | NLL | Brier |
+ | --- | --- | --- | --- |
+ | No recalibration | 0.037 | 0.959 | 40.64 |
+ | Temperature scaling | 0.022 | 0.948 | 40.60 |
+ | Histogram binning | 0.012 | 0.952 | 40.59 |
+ | Isotonic regression | 0.011 | $\mathbf{0.945}$ | 40.59 |
+ | MMCE optimization | 0.061 | 0.965 | 40.67 |
+ | LoRe (ours) | $\mathbf{0.007}$ | 0.955 | $\mathbf{40.58}$ |
+
+ Table 4: Performance on global calibration metrics on ImageNet. Lower is better. Best results are bold. LoRe achieves strong global calibration according to all metrics.
327
+
328
+ ## 6 CONCLUSION
329
+
330
+ In this paper, we introduce the local calibration error (LCE), a metric that measures calibration in a localized neighborhood around a prediction. The LCE spans the gap between fully global and fully individualized calibration error, with an effective neighborhood size that can be set with a bandwidth parameter $\gamma$ . We also introduce LoRe, a recalibration method that greatly improves the local calibration. Finally, we demonstrate that achieving lower LCE values leads to better performance on downstream fairness and decision-making tasks. In future work, we hope to further explore alternative feature spaces to define similarity, since the quality of our metric depends on the quality of the feature space underpinning the notion of locality.
UAI/UAI 2022/UAI 2022 Conference/BElGwDLoqlc/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,702 @@
1
+ ## Abstract
2
+
3
+ Processing sets or other unordered, potentially variable-sized inputs in neural networks is usually handled by aggregating a number of input tensors into a single representation. While a number of aggregation methods already exist, from simple sum pooling to multi-head attention, they are limited in their representational power, both theoretically and empirically. In search of a fundamentally more powerful aggregation strategy, we propose an optimization-based method called Equilibrium Aggregation. We show that many existing aggregation methods can be recovered as special cases of Equilibrium Aggregation and that it is provably more efficient in some important cases. Equilibrium Aggregation can be used as a drop-in replacement in many existing architectures and applications. We validate its efficiency on three different tasks: median estimation, class counting, and molecular property prediction. In all experiments, Equilibrium Aggregation achieves higher performance than the other aggregation techniques we test.
4
+
5
+ ## 1 INTRODUCTION
6
+
7
+ Early neural networks research focused on processing fixed-dimensional vector inputs. Since then, advanced architectures have been developed for processing fixed-dimensional data efficiently and effectively. This format, however, is not natural for applications where inputs do not have a fixed dimensionality, are unordered, or have both of these properties. A strikingly successful strategy for tackling this issue has been to process such inputs with a series of aggregation $\rightarrow$ transformation operations.
8
+
9
+ An aggregation operation compresses a set of input tensors into a single representation of a known, predefined dimensionality that can be then further sent to the downstream transformation block. Since the latter deals with fixed-dimensional inputs with a defined ordering, it can profit from the variety of techniques available for vector-to-vector computations.
10
+
11
+ ![01963937-e38a-7962-9292-28b6a2e0e1c6_0_935_787_606_778_0.jpg](images/01963937-e38a-7962-9292-28b6a2e0e1c6_0_935_787_606_778_0.jpg)
12
+
13
+ Figure 1: Global aggregation layers in typical neural networks for sets (top) and graphs (bottom). Top: each input set element ${\mathbf{x}}_{i}$ is first processed individually before being pooled into a global representation $\mathbf{y}$ . This is followed by a final transformation block. Bottom: for graph data, the first part of the network is replaced by a graph or message passing neural network, but the global aggregation step is similar. In both cases, the global aggregation step drastically reduces the number of embeddings from many to one, rendering the right choice of aggregation technique critical for good model performance. The aggregation layer is typically implemented using sum-, max-, or attention-pooling. We propose a new aggregation mechanism, called Equilibrium Aggregation.
14
+
15
+ This pattern can be seen in many architectures. For instance, Deep Sets [Zaheer et al., 2017] builds a representation of a set of objects by first transforming each object and then summing their embeddings. Similarly, Graph Neural Networks [Kipf and Welling, 2016, Battaglia et al., 2018] use a message-passing mechanism, which amounts to aggregating the set of input messages received by each node from its neighbours and then transforming the aggregate into a new message on the next layer (local aggregation). In many cases, several message passing layers are then followed by a global aggregation layer, where all node embeddings are aggregated into one global embedding vector describing the entire graph. Finally, Transformers [Vaswani et al., 2017] use self-attention, a mechanism that allows each object in the input set to interact with every other object and update its embedding by aggregating value embeddings from the rest of the set.
16
+
17
+ Mathematically, the aggregation $\phi \left( X\right) = \mathbf{y}$ compresses the input set $X = \left\{ {{\mathbf{x}}_{1},{\mathbf{x}}_{2},\ldots ,{\mathbf{x}}_{N}}\right\} \in {2}^{\mathcal{X}}$ into a $D$ -dimensional vector $\mathbf{y} \in {\mathbb{R}}^{D}$ . In the case of Deep Sets [Zaheer et al., 2017] with sum aggregation, this reads
18
+
19
+ $$
20
+ \phi \left( X\right) = \rho \left( {\mathop{\sum }\limits_{{i = 1}}^{N}f\left( {\mathbf{x}}_{i}\right) }\right) , \tag{1}
21
+ $$
22
+
23
+ where $f$ and $\rho$ are the optional input and output transformations, respectively.
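As a concrete illustration of (1), here is a minimal numpy sketch (with toy stand-ins for the learned networks $f$ and $\rho$ ) showing that sum pooling makes the output invariant to the ordering of the set elements:

```python
import numpy as np

rng = np.random.default_rng(0)

def deep_sets(X, f, rho):
    # phi(X) = rho(sum_i f(x_i)): permutation invariant by construction,
    # since the sum ignores the ordering of the set elements.
    return rho(np.sum([f(x) for x in X], axis=0))

f = lambda x: np.tanh(x)   # toy per-element transformation
rho = lambda z: z ** 2     # toy output transformation

X = rng.normal(size=(5, 3))    # a set of 5 elements in R^3
perm = rng.permutation(5)
assert np.allclose(deep_sets(X, f, rho), deep_sets(X[perm], f, rho))
```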
24
+
25
+ Besides yielding a fixed-dimensional output embedding, (1) enforces an important inductive bias: permutation invariance. Global properties of sets or graphs (such as the free energy of a molecule) are independent of the ordering of the set elements. Taking advantage of such task symmetries [Mallat, 2016] can add robustness guarantees with respect to important classes of input transformations, and is known to help generalisation performance [Worrall et al., 2017, Weiler et al., 2018, Winkels and Cohen, 2018]. Other ways of incorporating permutation invariance are max-pooling, mean-pooling or attention aggregators [Kipf and Welling, 2016, Battaglia et al., 2018, Vaswani et al., 2017, Velickovic et al., 2018].${}^{1}$
26
+
27
+ However, it is exactly these aggregation functions which often introduce a bottleneck in the information flow [Zaheer et al., 2017, Wagstaff et al., 2019, Cai and Wang, 2020, Chen et al., 2020, Wagstaff et al., 2021]. It is easy to see that sum aggregation may struggle to selectively extract relevant information from individual inputs or subsets, and while methods like multi-head attention (effectively a weighted mean per head) partially address this issue, we believe there is a fundamental need for more expressive aggregation mechanisms.
28
+
29
+ ![01963937-e38a-7962-9292-28b6a2e0e1c6_1_913_183_659_536_0.jpg](images/01963937-e38a-7962-9292-28b6a2e0e1c6_1_913_183_659_536_0.jpg)
30
+
31
+ Figure 2: Schematic illustration of Equilibrium Aggregation. Each input $\mathbf{x} \in X$ contributes a potential value $F\left( {\mathbf{x},\mathbf{y}}\right)$ which are summed over the set $X$ and, together with the regularizer $R\left( \mathbf{y}\right)$ , form the total energy. Equilibrium Aggregation seeks to minimize this energy and the found minimum serves as the aggregation result.
32
+
33
+ Motivated by this need, we develop a method called Equilibrium Aggregation, which generalizes existing pooling-based aggregation methods and is obtained as the implicit solution to an optimization-based formulation of aggregation. We further investigate its theoretical properties and show not only that it is a universal approximator of set functions, but also that it is provably more expressive than sum or max aggregation in some cases. Finally, we validate our insights empirically in a series of experiments where Equilibrium Aggregation demonstrates its practical effectiveness.
34
+
35
+ ## 2 EQUILIBRIUM AGGREGATION
36
+
37
+ Our insight for developing better aggregation functions is grounded in the fact that the standard, pooling-based aggregation methods can be recovered as solutions to a certain optimization problem:
38
+
39
+ $$
40
+ \phi \left( X\right) = \arg \mathop{\min }\limits_{\mathbf{y}}\mathop{\sum }\limits_{{i = 1}}^{N}F\left( {{\mathbf{x}}_{i},\mathbf{y}}\right) , \tag{2}
41
+ $$
42
+
43
+ where $F\left( {\mathbf{x},\mathbf{y}}\right)$ is a potential function.
44
+
45
+ For example, with $F\left( {\mathbf{x},\mathbf{y}}\right) = {\left( \mathbf{x} - \mathbf{y}\right) }^{2}$ (and assuming $\mathcal{X} = \mathbb{R}$ ), one obtains the mean aggregation $\phi \left( X\right) = \frac{1}{N}\mathop{\sum }\limits_{{i = 1}}^{N}{\mathbf{x}}_{i}$ ; more examples can be found in Table 1. A natural question arises from this observation: can a more interesting aggregation strategy be induced by other choices of the potential function $F\left( {\mathbf{x},\mathbf{y}}\right)$ ?
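This correspondence is easy to check numerically; the sketch below (plain gradient descent on toy scalar data, not the paper's implementation) recovers the mean as the minimizer of the squared-difference potential:

```python
import numpy as np

def aggregate(X, dF, steps=500, lr=0.01):
    # Minimize sum_i F(x_i, y) over a scalar y by gradient descent,
    # given dF(x, y) = dF/dy of the chosen potential.
    y = 0.0
    for _ in range(steps):
        y -= lr * sum(dF(x, y) for x in X)
    return y

X = np.array([0.1, 0.4, 0.7, 1.0])

# F(x, y) = (x - y)^2  =>  dF/dy = 2*(y - x); the minimizer is the mean.
mean_y = aggregate(X, lambda x, y: 2 * (y - x))
assert np.isclose(mean_y, X.mean(), atol=1e-3)
```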
46
+
47
+ ---
48
+
49
+ ${}^{1}$ Interestingly, even though in the case of Transformers for natural language processing the input is an ordered sequence, it appears beneficial to model the data as an order-independent set (or fully connected graph) with the sequential structure added via positional encodings.
50
+
51
+ ---
52
+
53
+ We propose a method called Equilibrium Aggregation that addresses this question by letting the potential be a learnable neural network ${F}_{\theta }\left( {\mathbf{x},\mathbf{y}}\right)$ parameterized by $\theta$ which takes a set element $\mathbf{x}$ and the aggregation result $\mathbf{y} \in \mathcal{Y} = {\mathbb{R}}^{M}$ as an input and outputs a non-negative real scalar expressing the degree of "disagreement" between the two inputs. By also adding a regularization term, we obtain the energy-minimization equation for Equilibrium Aggregation:
54
+
55
+ $$
56
+ {\phi }_{\theta }\left( X\right) = \arg \mathop{\min }\limits_{\mathbf{y}}{E}_{\theta }\left( {X,\mathbf{y}}\right) ,
57
+ $$
58
+
59
+ $$
60
+ {E}_{\theta }\left( {X,\mathbf{y}}\right) = {R}_{\theta }\left( \mathbf{y}\right) + \mathop{\sum }\limits_{{i = 1}}^{N}{F}_{\theta }\left( {{\mathbf{x}}_{i},\mathbf{y}}\right) , \tag{3}
61
+ $$
62
+
63
+ where, for the scope of this paper, the regularizer is simply ${R}_{\theta }\left( \mathbf{y}\right) = \operatorname{softplus}\left( \lambda \right) \cdot \parallel \mathbf{y}{\parallel }_{2}^{2}$ . A graphical illustration of this construction can be found in Figure 2.
64
+
65
+ Interestingly, this means that the aggregation result $\mathbf{y}$ is defined implicitly and is generally not available in closed form. Instead, one can find $\mathbf{y}$ by numerically solving the optimization problem (3), e.g., by gradient descent:
66
+
67
+ $$
68
+ {\mathbf{y}}^{\left( t + 1\right) } = {\mathbf{y}}^{t} - \alpha {\nabla }_{\mathbf{y}}{E}_{\theta }\left( {X,{\mathbf{y}}^{\left( t\right) }}\right) ,\;{\phi }_{\theta }\left( X\right) = {\mathbf{y}}^{\left( T\right) }. \tag{4}
69
+ $$
70
+
71
+ Under certain conditions and with a large enough number of steps $T$ , this procedure provides a sufficiently accurate solution that is itself well-defined and differentiable: either explicitly, through the unrolled gradient descent [Andrychowicz et al., 2016, Finn et al., 2017], or via the implicit function theorem applied to the optimality condition of (3) [Bai et al., 2019, Blondel et al., 2021]. This makes it possible to learn the parameters $\theta$ of the potential and to train the whole model involving the aggregation end-to-end.
72
+
73
+ In general, it is not guaranteed that gradient-based optimization will converge to the global minimum of (3) when the potential is an arbitrarily structured neural network. However, with a large enough regularization weight $\lambda$ , it is possible to enforce convexity at least in a subspace of $\mathcal{Y}$ [Rajeswaran et al., 2019b]. When the gradient descent is initialized from a learnable starting point or, as in our implementation, from the zero vector, it becomes sufficient to find just a stationary point, as long as the next layer in the network makes use of the aggregation result. Relaxing the need for convergence to the global minimum, together with the use of flexible neural networks, makes it possible to implement a potentially complex and expressive aggregation mechanism. In our implementation, we employ explicit differentiation through gradient descent and find that the network generally learns convergent dynamics (4) automatically, even with a fairly small number of iterations such as $T = {10}$ .
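A minimal sketch of the resulting layer, using a toy quadratic potential with a known closed-form minimizer in place of a learned network ${F}_{\theta }$ , so that the unrolled gradient descent of (4) can be checked against the exact solution (all names and values here are illustrative):

```python
import numpy as np

def equilibrium_aggregation(X, F_grad_y, R_grad_y, steps=100, lr=0.1, dim=2):
    # Unrolled gradient descent on E(X, y) = R(y) + sum_i F(x_i, y),
    # following eqs. (3)-(4), initialized from the zero vector.
    y = np.zeros(dim)
    for _ in range(steps):
        g = R_grad_y(y) + sum(F_grad_y(x, y) for x in X)
        y = y - lr * g
    return y

# Toy potential F(x, y) = ||x - y||^2 with regularizer lam * ||y||^2.
lam = 0.5
F_grad = lambda x, y: 2 * (y - x)
R_grad = lambda y: 2 * lam * y

X = np.array([[1.0, 2.0], [3.0, 0.0], [2.0, 4.0]])
y_star = equilibrium_aggregation(X, F_grad, R_grad)

# Closed form for this potential: y* = sum(x) / (N + lam).
assert np.allclose(y_star, X.sum(axis=0) / (len(X) + lam), atol=1e-4)
```

With a learned potential there is no such closed form; the minimizer is defined only implicitly, which is exactly the point of the method.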
74
+
75
+ <table><tr><td>Aggregation</td><td>$\phi \left( X\right)$</td><td>$F\left( {\mathbf{x},\mathbf{y}}\right)$</td></tr><tr><td>Mean</td><td>$\frac{1}{N}\mathop{\sum }\limits_{{i = 1}}^{N}{\mathbf{x}}_{i}$</td><td>${\left( \mathbf{x} - \mathbf{y}\right) }^{2}$</td></tr><tr><td>Median</td><td>${\mathbf{x}}_{\left\lbrack N/2\right\rbrack }$</td><td>$\left| {\mathbf{x} - \mathbf{y}}\right|$</td></tr><tr><td>Max</td><td>$\max \{ {\mathbf{x}}_{1},\ldots ,{\mathbf{x}}_{N}\}$</td><td>$\max \left( {0,\mathbf{x} - \mathbf{y}}\right)$</td></tr><tr><td>Sum</td><td>$\mathop{\sum }\limits_{{i = 1}}^{N}{\mathbf{x}}_{i}$ or $\arg \mathop{\min }\limits_{\mathbf{y}}\left\lbrack {\frac{{\mathbf{y}}^{2}}{2} + \mathop{\sum }\limits_{i}F\left( {{\mathbf{x}}_{i},\mathbf{y}}\right) }\right\rbrack$</td><td>$- \mathbf{x} \cdot \mathbf{y}$</td></tr><tr><td>Equilibrium Aggregation</td><td>$\arg \mathop{\min }\limits_{\mathbf{y}}{E}_{\theta }\left( {X,\mathbf{y}}\right)$</td><td>Neural network ${\mathrm{F}}_{\theta }\left( {\mathbf{x},\mathbf{y}}\right)$</td></tr></table>
76
+
77
+ Table 1: A comparison between Equilibrium Aggregation and pooling-based aggregation methods. Equations are given for the scalar case or can be applied coordinate-wise in higher dimensions.
78
+
79
+ To additionally encourage convergence, we consider the following auxiliary loss that penalizes the norm of the energy gradient at each step of optimization:
80
+
81
+ $$
82
+ {L}_{\text{aux }}\left( {X,\mathbf{y},\theta }\right) = \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{\begin{Vmatrix}{\nabla }_{\mathbf{y}}{E}_{\theta }\left( X,{\mathbf{y}}^{\left( t\right) }\right) \end{Vmatrix}}_{2}^{2}. \tag{5}
83
+ $$
84
+
85
+ We simply add the auxiliary loss to the main loss incurred by the task of interest and optimize the sum during training. We further assess the convergence of the inner-loop optimization empirically in Section 5.3.
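Concretely, the auxiliary loss can be accumulated alongside the unrolled inner loop; the sketch below (scalar case, mean potential, gradients taken at each visited iterate) illustrates the bookkeeping:

```python
import numpy as np

def unrolled_descent_with_aux(X, dE, T=50, lr=0.1):
    # T steps of (4) on a scalar y, accumulating the auxiliary loss (5):
    # the mean squared norm of the energy gradient along the trajectory.
    y, aux = 0.0, 0.0
    for _ in range(T):
        g = dE(X, y)
        y = y - lr * g
        aux += g ** 2
    return y, aux / T

# Energy gradient for the mean potential F(x, y) = (x - y)^2, no regularizer.
dE = lambda X, y: sum(2 * (y - x) for x in X)

X = [0.2, 0.8]
y, aux = unrolled_descent_with_aux(X, dE)
assert abs(y - 0.5) < 1e-3   # inner loop converged to the mean
assert aux > 0.0             # penalized whenever the iterates still move
```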
86
+
87
+ ## 3 UNIVERSAL FUNCTION APPROXIMATION ON SETS
88
+
89
+ According to the universal function approximation theorem for neural networks [Hornik et al., 1989, Cybenko, 1989, Funahashi, 1989], an infinitely large multi-layer perceptron can approximate any continuous function on compact domains in $\mathbb{R}$ with arbitrary accuracy. In machine learning, we typically do not know the function we aim to approximate. Hence, knowing that neural networks can in theory approximate anything is comforting. Equally, we seek to build inductive biases into the networks in order to facilitate learning, using more sophisticated architectures than multi-layer perceptrons. It is imperative to be aware of whether, and to what extent, those modifications restrict the space of learnable functions.
90
+
91
+ Similar constructions to Equilibrium Aggregation, i.e. models defined via an optimization problem $\mathbf{y} = \arg \mathop{\min }\limits_{\mathbf{y}}G\left( {X,\mathbf{y}}\right)$ , have previously been studied in the literature, especially in the context of permutation-sensitive (i.e. not permutation-invariant) functions [Pineda, 1987, Finn and Levine, 2017, Bai et al., 2019], and various results with respect to universal function approximation were obtained. It is not obvious, however, how these results translate to the important permutation-invariant case we consider in this paper. Introducing permutation invariance self-evidently restricts the space of functions that can be approximated. In the next section, we directly address the question of what set functions can be learned by Equilibrium Aggregation and establish a universality guarantee.
92
+
93
+ ### 3.1 UNIVERSALITY OF EQUILIBRIUM AGGREGATION
94
+
95
+ In this section, we will see that Equilibrium Aggregation is indeed able to approximate all continuous permutation-invariant functions $\psi$ . We start by stating a few assumptions: we assume a fixed input set size $N$ of scalar inputs ${}^{2}$ ${x}_{i}$ (note the dropped boldface, indicating that these are no longer vectors) and a scalar output. We further assume that the input space $\mathcal{X}$ is a compact subset of ${\mathbb{R}}^{N}$ . For simplicity, and without loss of generality (as we can always rescale the inputs), we choose this to be ${\left\lbrack 0,1\right\rbrack }^{N}$ :
96
+
97
+ $$
98
+ \psi : {\left\lbrack 0,1\right\rbrack }^{N} \rightarrow \mathbb{R}. \tag{6}
99
+ $$
100
+
101
+ As $\psi$ is permutation invariant, the vector valued inputs can be seen as (multi)sets. For a discussion on why considering uncountable domains (i.e. the real numbers) is important for continuity, see Section 3 of Wagstaff et al. [2019].
102
+
103
+ We consider a neural network architecture with Equilibrium Aggregation as a global pooling operation of the following form:
104
+
105
+ $$
106
+ \phi \left( X\right) = \rho \left( {\arg \mathop{\min }\limits_{\mathbf{y}}\mathop{\sum }\limits_{i}{F}_{\theta }\left( {{x}_{i},\mathbf{y}}\right) }\right) , \tag{7}
107
+ $$
108
+
109
+ where ${F}_{\theta }$ (the potential function) and $\rho$ are modeled by neural networks, which are assumed to be universal function approximators. Note that, for simplicity of the proof, we implicitly set the regulariser to 0 . We refer to the output of $\operatorname{argmin}\mathop{\sum }\limits_{i}{F}_{\theta }\left( {{x}_{i},\mathbf{y}}\right)$ as the latent space, analogous to the terminology used in Wagstaff et al. [2019] with respect to the Deep Sets architecture [Zaheer et al., 2017]. We prove the following:
110
+
111
+ Theorem 1 Let the latent space be of size $M = N$ , i.e. $\mathbf{y} \in$ ${\mathbb{R}}^{N}$ . Then all permutation invariant continuous functions $\psi$ can be approximated with Equilibrium Aggregation as defined in (7).
112
+
113
+ Proof For the purpose of this proof, we assume ${F}_{\theta }$ takes the form:
114
+
115
+ $$
116
+ {F}_{\theta }\left( {{x}_{i},\mathbf{y}}\right) = \mathop{\sum }\limits_{{k = 1}}^{M}{\left( \frac{{y}_{k}}{N} - {x}_{i}^{k}\right) }^{2}, \tag{8}
117
+ $$
118
+
119
+ where $k$ serves both as an index for the vector $\mathbf{y}$ and as an exponent for ${x}_{i}$ . There are two sums now, an inner one in the definition of ${F}_{\theta }$ and an outer one over the set elements in (7). Note that ${F}_{\theta }$ is continuous and can therefore be approximated by a neural network. Importantly, ${F}_{\theta }$ is also convex and can therefore be assumed to be optimised with gradient descent to find the $\arg \min$ over $\mathbf{y}$ . Note that all $M$ terms can be optimised independently as $X$ is fixed. It is a well-known fact that minimising a sum of squares yields the mean:
120
+
121
+ $$
122
+ \arg \mathop{\min }\limits_{z}\mathop{\sum }\limits_{{i = 1}}^{N}{\left( z - {x}_{i}\right) }^{2} = \frac{1}{N}\mathop{\sum }\limits_{{i = 1}}^{N}{x}_{i}. \tag{9}
123
+ $$
124
+
125
+ It follows that minimising the sum of energies defined in (8) yields
126
+
127
+ $$
128
+ {y}_{k}^{\min } = \mathop{\sum }\limits_{i}{x}_{i}^{k}\;\text{ for }k \in \{ 1,\ldots , N\} . \tag{10}
129
+ $$
130
+
131
+ For inputs $\left( {{x}_{1},\ldots ,{x}_{N}}\right) \in {\left\lbrack 0,1\right\rbrack }^{N}$ , this mapping to $\mathbf{y}$ is evidently continuous and surjective with respect to its range ${\left\lbrack 0, N\right\rbrack }^{N}$ . We also know from Lemma 4 in Zaheer et al. [2017] that this mapping is injective and from Lemma 6 that it has a continuous inverse. ${}^{3}$ $\psi$ is continuous by definition and, therefore,
132
+
133
+ $$
134
+ \rho = \psi \circ {\left( \arg \mathop{\min }\limits_{\mathbf{y}}\mathop{\sum }\limits_{i}{F}_{\theta }\left( {x}_{i},\mathbf{y}\right) \right) }^{-1} \tag{11}
135
+ $$
136
+
137
+ is continuous ${}^{4}$ as long as the inputs ${x}_{i}$ are constrained to $\left\lbrack {0,1}\right\rbrack$ and can therefore be approximated by a neural network. However, via a global re-scaling of the inputs, this proof can be used for any bounded input domain. Hence, any permutation-invariant, continuous $\psi$ on a bounded domain can be approximated via Equilibrium Aggregation for a latent space of size $M = N$ .
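The power-sum embedding of (10) that underlies this proof can be checked numerically: it is invariant to permutations of the inputs and separates distinct multisets (a small sketch, not part of the formal argument):

```python
import numpy as np

def power_sum_embedding(xs):
    # y_k = sum_i x_i^k for k = 1..N, as in eq. (10): a permutation-invariant
    # embedding that uniquely identifies the multiset {x_1, ..., x_N}.
    N = len(xs)
    return np.array([np.sum(np.asarray(xs) ** k) for k in range(1, N + 1)])

a = [0.2, 0.5, 0.9]
b = [0.9, 0.2, 0.5]   # same multiset, different order
c = [0.2, 0.5, 0.8]   # a different multiset

assert np.allclose(power_sum_embedding(a), power_sum_embedding(b))
assert not np.allclose(power_sum_embedding(a), power_sum_embedding(c))
```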
138
+
139
+ ### 3.2 COMPARISON TO DEEP SETS
140
+
141
+ So far, we have only been able to prove that Equilibrium Aggregation scales at least as well as Deep Sets. By that, we mean that universal function approximation can be achieved with $N = M$ , i.e. having as many latents as inputs is sufficient. (For Deep Sets, we also know that $N = M$ is necessary [Wagstaff et al., 2019].) Even though we currently do not know whether it is possible to achieve universal function approximation with a smaller latent space, there is some indication that Equilibrium Aggregation may have more representational power, as we will lay out in the following:
142
+
143
+ Using one latent dimension, Deep Sets with max-pooling can obviously represent $\psi \left( X\right) = \max \left( X\right)$ , but it cannot represent (or even approximate) the sum for set sizes larger than 1. Vice versa, sum-pooling can represent $\psi \left( X\right) =$ $\operatorname{sum}\left( X\right)$ , but it cannot represent $\max \left( X\right)$ [Wagstaff et al., 2019]. Equilibrium Aggregation can represent both sum and max pooling, each with just one latent dimension (i.e. $\mathbf{y} \in {\mathbb{R}}^{1}$ ) as shown in Table 1.
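Both claims can be verified numerically with the potentials from Table 1; the sketch below (toy scalar data, with a small regularizer added in the max case to break ties among the minimizers) recovers both max and sum from a single scalar latent:

```python
import numpy as np

def aggregate(X, dF, dR, steps=2000, lr=0.01):
    # Scalar Equilibrium Aggregation: gradient descent on
    # R(y) + sum_i F(x_i, y), given the derivatives w.r.t. y.
    y = 0.0
    for _ in range(steps):
        y -= lr * (dR(y) + sum(dF(x, y) for x in X))
    return y

X = [0.3, 0.9, 0.6]
eps = 1e-3  # small regularizer weight; selects the smallest valid y

# F(x, y) = max(0, x - y): every y >= max(X) has zero total potential,
# and the regularizer eps * y^2 picks out y = max(X) (up to O(lr)).
max_y = aggregate(X, lambda x, y: -1.0 if y < x else 0.0,
                  lambda y: 2 * eps * y)
assert abs(max_y - max(X)) < 0.02

# F(x, y) = -x * y with R(y) = y^2 / 2 recovers the sum, as in Table 1.
sum_y = aggregate(X, lambda x, y: -x, lambda y: y)
assert abs(sum_y - sum(X)) < 1e-3
```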
144
+
145
+ ---
146
+
147
+ ${}^{2}$ This is a common simplification in the literature on universal function approximation on sets. For a discussion on how to generalise from the scalar to the vector case, see Hutter [2020].
148
+
149
+ ${}^{3}$ We refer to Appendix B. 4 in Wagstaff et al. [2019] as to why the term $k = 0$ in (10) can be dropped for fixed set sizes.
150
+
151
+ ${}^{4}$ The superscript -1 indicates the functional inverse w.r.t. $X$ .
152
+
153
+ ---
154
+
155
+ ## 4 RELATED WORK
156
+
157
+ Equilibrium Aggregations sits at the intersection of two machine learning research areas: aggregation functions and implicit layers. In the following, we give an overview over the work closest related in each of the fields, respectively.
158
+
159
+ ### 4.1 AGGREGATION FUNCTIONS
160
+
161
+ Perhaps the most popular approach for obtaining a permutation invariant encoding of sets is Sum pooling. A particular instance of this is Deep Sets [Zaheer et al., 2017], as described in (1). A central finding of Wagstaff et al. [2019] is that the latent space, i.e. the dimensionality of the result of $\mathop{\sum }\limits_{i}f\left( {x}_{i}\right) \in {\mathbb{R}}^{M}$ needs to be at least as large as the number of inputs $N$ , i.e. $M \geq N$ in order to guarantee universal function approximation. This applies to many other aggregation methods as well and, to the best of our knowledge, there is currently no known pooling operation which does not introduce this scaling issue.
162
+
163
+ Principal Neighbourhood Aggregation (PNA) [Corso et al., 2020] addresses the limitations of individual pooling operators such as Sum or Max by combining four different pooling operators and three different scaling strategies, resulting in a simultaneous 12-way aggregation. Despite the more sophisticated aggregation procedure, Corso et al. [2020] come to very similar conclusions as Zaheer et al. [2017] and Wagstaff et al. [2019], namely that $N = M$ is both necessary and sufficient. They prove the necessity for any set of aggregators as well as the sufficiency for a specific set. In our work, we further expand this line of thinking by allowing the model to learn the desired aggregation operator, which may include PNA or something drastically different.
164
+
165
+ Learnable Aggregation Functions (LAF) [Pellegrini et al., 2020] provide a similar framework for learning an aggregation operator by expressing it as a combination of several weighted ${L}_{p}$ norms, where the weights and the $p$ parameters are trained jointly with the model. Even though LAFs are capable of expressing operators used in PNA and beyond, it is not clear how they can reproduce other aggregation methods such as attention. In contrast, our method can learn attention (see Appendix B) as well as even more expressive aggregation functions.
166
+
167
+ Finally, Janossy Pooling [Murphy et al., 2019] generalizes the idea of standard, coordinate-wise pooling to make use of higher-order interactions between set elements. Despite the potential for practical effectiveness, it is unclear whether these developments guarantee better approximation results in the general case [Wagstaff et al., 2021]. While Equilibrium Aggregation is fully compatible with Janossy Pooling and may profit from even more expressive energy functions with pairwise or triplet interactions, these may not be necessary, as such interactions can be emulated within the optimization process, and they ultimately come at a significant computational cost for larger set sizes.
168
+
169
+ In addition to formulating more expressive pooling operators, there is also a body of work concerned with multi-step parametric models for set encoding [Vinyals et al., 2015, Lee et al., 2019]. Inevitably, to achieve permutation invariance these models rely on some kind of a pooling as a building block, such as the ones outlined above. Equilibrium Aggregation being a drop-in replacement for sum- or attention-pooling can be used in those models, too.
170
+
171
+ ### 4.2 IMPLICIT AND OPTIMIZATION-BASED MODELS
172
+
173
+ Gradient-based optimization has been utilized in a large number of applications [Amos, 2019]: image denoising [Putzky and Welling, 2017], molecule generation [Duvenaud et al., 2015, AlQuraishi, 2019], planning [Amos et al., 2018] and combinatorial search [Hottung et al., 2020, Bartunov et al., 2020], to name a few. While there is a large body of work where the gradient descent dynamics is decoupled from learning (e.g., Du and Mordatch [2019], Song and Ermon [2019]), our work is particularly closely related to methods that seek to learn the underlying objective function end-to-end, such as Putzky and Welling [2017] and Rubanova et al. [2021].
174
+
175
+ A closely-related family of methods involves the idea of defining computations inside a model implicitly, i.e., via a set of conditions that a particular variable must obey, instead of prescribing directly how the variable's value should be computed. Deep Equilibrium Models (DEQs) formulate this via a fixed point of an update rule specified by the model [Pineda, 1987, Liao et al., 2018, Bai et al., 2019] and Implicit Graph Neural Networks explore this idea in the context of graphs [Gu et al., 2020]. Neural ODEs [Chen et al., 2018] allow parametrizing a derivative of a continuous-time function specifying the computation of interest. iMAML [Rajeswaran et al., 2019a] considers an implicit optimization procedure for the purpose of finding model parameters suitable for gradient-based meta-learning [Finn et al., 2017].
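As a toy illustration of the implicit-function idea behind DEQs (a plain-Python sketch of ours, not any of the cited implementations), a layer's output can be defined as the fixed point of an update rule and, when the update is contractive, found by naive forward iteration:

```python
import math

def fixed_point(f, z0, iters=100):
    # Naive forward iteration z <- f(z); converges when f is a contraction.
    z = z0
    for _ in range(iters):
        z = f(z)
    return z

# A toy implicit "layer": its output is the z satisfying z = tanh(0.5 * z + x).
x = 0.8
z_star = fixed_point(lambda z: math.tanh(0.5 * z + x), z0=0.0)
```

In practice, DEQs find the fixed point with more sophisticated root solvers and differentiate through it implicitly rather than by unrolling the iteration.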
176
+
177
+ Our work is similar in spirit but focuses specifically on the aggregation block for encoding sets, which can be seen as a small but generic building block that can be combined with arbitrary model architectures. Similarly to OptNet [Amos and Kolter, 2017], we propose a layer architecture that can be used inside another implicit or traditional multi-layer neural network.
178
+
179
+ ![01963937-e38a-7962-9292-28b6a2e0e1c6_5_180_183_645_466_0.jpg](images/01963937-e38a-7962-9292-28b6a2e0e1c6_5_180_183_645_466_0.jpg)
180
+
181
+ Figure 3: Median estimation of a 100-number set with three different aggregation methods. The bold lines correspond to the average performance over 5 seeds, the faded lines show the best performing seed of the respective model. Mean square error is computed for varied set embedding sizes on $8 \times {10}^{5}$ sets.
182
+
183
+ ## 5 EXPERIMENTS
184
+
185
+ In this section, we describe three experiments with the goal of analyzing the performance of Equilibrium Aggregation in different tasks and comparing it to existing aggregation methods. Our intention is not to achieve state-of-the-art results on any particular task. Instead, we strive to consider archetypal scenarios and applications in which performance significantly depends on the choice of aggregation method, so it can be studied in isolation from other issues.
186
+
187
+ In all experiments we let the models train for ${10}^{7}$ steps of the Adam optimizer [Kingma and Ba, 2014]. Since maximizing performance is not the goal of our experiments, we do not perform an extensive hyperparameter search, only limiting it to a sweep over the learning rate (chosen from $\left\{ {{10}^{-4},3 \times {10}^{-4},{10}^{-3}}\right\}$ ) and the auxiliary loss weight (on MOLPCBA only). To that end, we use a small subset of the training set reserved for validation (Omniglot and MOLPCBA benchmarks only). We rely on a single GPU training regime using Nvidia P100s and V100s. All experimental code is written in Jax primitives [Bradbury et al., 2018] using Haiku [Hennigan et al., 2020]. Source code for the most crucial parts of our implementation can be found in the Appendix.
188
+
189
+ ### 5.1 MEDIAN ESTIMATION
190
+
191
+ In this experiment, the neural network is tasked with predicting the median value of a set of 100 randomly sampled numbers. Each set is sampled from either a Uniform, Gamma or Normal distribution with fixed parameters, similarly to Wagstaff et al. [2019]. The basic architecture for pooling-based aggregation baselines consists of first embedding each number in the set with a fully connected ResNet [He et al., 2016] with layer sizes $\left\lbrack {{256},{256}, D}\right\rbrack$ , where $D$ is the set embedding size. Then, the embeddings are pooled with the corresponding method into a $D$ -dimensional vector and the median is predicted from it using another fully connected network with layer sizes $\left\lbrack {D,{128},1}\right\rbrack$ . A simple square loss is used to regress the median.
192
+
193
+ Equilibrium aggregation, in contrast, performs the input encoding and aggregation simultaneously by doing a 5-step gradient optimization of (3) with the potential function implemented as a ResNet with layer sizes $\left\lbrack {{256},{256},1}\right\rbrack$ taking a $D + 1$ -dimensional input ( $D$ for the implicit aggregation result and 1 for the input number). The result is then also transformed into the prediction using the same output network as in the baseline methods.
194
+
195
+ We compare three models: Sum aggregation inspired by Zaheer et al. [2017]; Multi-head attention with 4 heads, each operating with $D/4$-dimensional keys, values and learned query vectors; and Equilibrium Aggregation as described above. For each of the models we vary the embedding size and assess the mean square error after ${10}^{7}$ training steps. Empirical results are shown on Figure 3.
196
+
197
+ Equilibrium aggregation achieves one (for the average across 5 seeds) or two (for the best out of 5 seeds) orders of magnitude better estimation error than the baseline pooling methods, which confirms its higher representational power in this simple setting. Importantly, in this experiment, there is no distinction between training and test distributions as the samples are continuously drawn and never repeated. Hence, we are primarily testing the representational power of the approaches as opposed to data efficiency in this particular example. However, it is worth noting that all architectures have roughly the same amount of trainable parameters. Presumably, the low error achieved by Equilibrium Aggregation suggests that it managed to discover, or reasonably well approximate, the analytical solution $F\left( {\mathbf{x},\mathbf{y}}\right) = \left| {\mathbf{x} - \mathbf{y}}\right|$ .
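This analytical solution is easy to verify outside the learned setting: with the hand-written potential $F(x, y) = |x - y|$, a few thousand (sub)gradient steps on the resulting energy recover the sample median (a plain-Python sketch of ours, not the paper's implementation):

```python
import random
import statistics

def equilibrium_median(xs, steps=3000, lr=0.01):
    # (Sub)gradient descent on E(y) = sum_i |x_i - y|;
    # the minimizer of this energy is the sample median.
    y = sum(xs) / len(xs)  # initialize at the mean
    n = len(xs)
    for _ in range(steps):
        grad = sum(1.0 if y > x else -1.0 for x in xs) / n  # averaged dE/dy
        y -= lr * grad
    return y

random.seed(0)
xs = [random.random() for _ in range(101)]
y_hat = equilibrium_median(xs)
```

The learned model has to discover an equivalent potential from data, while here the potential is fixed by hand.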
198
+
199
+ ### 5.2 OMNIGLOT CLASS COUNTING
200
+
201
+ We proceed to the more challenging task of counting the number of unique character classes in a set of 16 Omniglot images, which is inspired by Lee et al. [2019]. Omniglot [Lake et al., 2015] is a dataset of handwritten characters that are organized into alphabets and then into character classes, for each of which only 20 instances are available.
202
+
203
+ ![01963937-e38a-7962-9292-28b6a2e0e1c6_6_149_177_704_767_0.jpg](images/01963937-e38a-7962-9292-28b6a2e0e1c6_6_149_177_704_767_0.jpg)
204
+
205
+ Figure 4: Omniglot class counting task.
206
+
207
+ We randomly choose between 1 and 10 character classes and sample their images to form the input set. The model then needs to aggregate those images and infer the number of unique character classes by outputting a vector of probabilities for each of the $1,2,\ldots ,{10}$ possible number of classes (see Figure 4a for a visual illustration).
208
+
209
+ Original images are downsized to ${32} \times {32}$ and encoded using a convolutional ResNet with $\left\lbrack {{16},{32},{64}}\right\rbrack$ hidden channels in each of the three blocks, respectively. Each block operates with $3 \times 3$ filters and a stride of 2 and hence halves the spatial size of the input tensor. The ResNet output is then flattened and linearly projected into a 256-dimensional input embedding. After the encoding step, as in the previous experiment, Sum, Multi-Head Attention with 4 heads and Equilibrium Aggregation aggregate the set into a 256-dimensional set embedding, from which the number of classes is predicted with a simple softmax distribution using a fully-connected ResNet with layer sizes of $\left\lbrack {{128},{10}}\right\rbrack$. Equilibrium Aggregation also uses a ResNet potential with $\left\lbrack {{512},{512},{32}}\right\rbrack$ structure where the output of the last layer is squared and then summed to form a scalar potential value. We used 10 iterations of inner-loop optimization in this experiment.
210
+
211
+ Each model is trained on the characters from the Omniglot train set for ${10}^{7}$ steps and with a batch size of 8. Train and test accuracies are reported in Figure 4b. One can see that, again, Equilibrium Aggregation outperforms both of the baselines, in terms of both train and test set accuracy. This shows that, on the one hand, Equilibrium Aggregation has a significantly larger capacity and thus better fits the training data. On the other hand, this capacity results in better generalization and, presumably, a more robust aggregation strategy.
212
+
213
+ ![01963937-e38a-7962-9292-28b6a2e0e1c6_6_902_184_677_470_0.jpg](images/01963937-e38a-7962-9292-28b6a2e0e1c6_6_902_184_677_470_0.jpg)
214
+
215
+ Figure 5: Inner-loop optimization statistics on MOLPCBA with the GIN architecture. The pink curve shows the maximum value of the ${L}^{1}$ norm along any dimension of the gradient on the last (15th) iteration of the inner loop. A value of ${10}^{-2}$ indicates a small gradient update and therefore good convergence of the optimizer. The dark purple curve tracks the auxiliary loss, i.e. the ${L}^{2}$ norm of the gradient update averaged across all 15 optimization steps (see (5)). Overall, these curves indicate stable, convergent behaviour despite a modest number of inner-loop optimization steps.
216
+
217
+ ### 5.3 GLOBAL AGGREGATION IN GRAPH NEURAL NETWORKS
218
+
219
+ Finally, we study the effect of different aggregation methods in the global readout layer of a graph neural network (GNN) on a well-established MOLPCBA benchmark [Hu et al., 2020]. In this task, the model is required to predict 128 global binary properties of an input molecule. This is traditionally implemented within the GNN framework by first applying several layers of message-passing on a graph and then aggregating the resulting 300-dimensional node embeddings into a single 300-dimensional graph representation from which the predictions are made. Since there is more than one prediction task per molecule, mean average precision (MAP) is used as an evaluation metric. The test MAP is reported for the best MAP attained on the validation set as the model is training. The validation and test metrics are periodically evaluated from model snapshots taken approximately every ${10}^{4}$ training steps.
220
+
221
+ For this experiment, we choose two popular GNN architectures, namely a Graph Convolutional Network (GCN) [Kipf and Welling, 2016] and a Graph Isomorphism Network (GIN) [Xu et al., 2018] that both use a simple Sum readout in their canonical implementations by Hu et al. [2020]. We leave the architectures unchanged and only vary the global readout operation. Our implementation uses the Jraph library [Godwin* et al., 2020] and dynamic batch training with up to 8 graphs and 1024 nodes in a batch.
222
+
223
+ Table 2: Comparison between different aggregation methods on MOLPCBA.
224
+
225
+ <table><tr><td>Local Aggregation</td><td>Global Aggregation</td><td>Validation MAP</td><td>Test MAP</td></tr><tr><td rowspan="4">Graph Convolutional Network [Kipf and Welling, 2016]</td><td>Sum</td><td>0.223</td><td>0.203</td></tr><tr><td>Multi-Head Attention</td><td>0.248</td><td>0.229</td></tr><tr><td>Principal Neighbourhood Aggregation</td><td>0.226</td><td>0.209</td></tr><tr><td>Equilibrium Aggregation</td><td>0.269</td><td>0.252</td></tr><tr><td rowspan="4">Graph Isomorphism Network [Xu et al., 2018]</td><td>Sum</td><td>0.255</td><td>0.232</td></tr><tr><td>Multi-Head Attention</td><td>0.254</td><td>0.234</td></tr><tr><td>Principal Neighbourhood Aggregation</td><td>0.262</td><td>0.244</td></tr><tr><td>Equilibrium Aggregation</td><td>0.263</td><td>0.246</td></tr><tr><td>Equilibrium Aggregation</td><td>Equilibrium Aggregation</td><td>0.269</td><td>0.258</td></tr></table>
226
+
227
+ For the potential network we use an architecture similar to the previous experiment with layer sizes $\left\lbrack {{600},{300},{32}}\right\rbrack$ , sum-of-the-squares output and 15 iterations for energy minimization.
228
+
229
+ The results are provided in Table 2. Overall, the empirical findings on MOLPCBA are consistent with the previous experiments, with Multi-Head Attention providing a noticeable performance improvement over the basic Sum aggregation and Equilibrium Aggregation performing even better. In addition, we also evaluate Principal Neighbourhood Aggregation (PNA) [Corso et al., 2020], which has been proposed to address the limitations of each individual pooling method in the context of GNNs and combines 12 combinations of aggregators and scalers. When combining PNA with the GCN model, our experiments only show minor performance improvements over Sum pooling, in part because of increased overfitting. However, when applied to the GIN architecture, it achieves performance levels almost on par with Equilibrium Aggregation.
230
+
231
+ These results confirm one of the central hypotheses of this research: namely, that the global aggregation of node embeddings is a critical step in graph neural networks. Perhaps surprisingly, the GCN generally benefited more from the more advanced aggregation methods, which is probably due to its smaller number of parameters and thus decreased risk of overfitting. It is also worth noting that top-performing GNN architectures achieve significantly higher test MAP on this task (see, e.g., Yuan et al. [2020], Brossard et al. [2020]).
232
+
233
+ In addition, we test an architecture where both the local (i.e., node-level) and the global aggregations are performed using Equilibrium Aggregation. This model yields even better performance, albeit only marginally. While more careful architecture design that takes into account the specifics of Equilibrium Aggregation could potentially lead to larger performance improvements, it should be noted that the molecular graphs in this task are relatively small and aggregation on the local level may not be the most critical step for a typical GNN.
234
+
235
+ Besides the task performance we also investigate the behaviour of the inner-loop optimization. Figure 5 plots two major statistics that quantify this: the max-norm of the final iterate of the optimization $\mathop{\max }\limits_{d}\left| {{\nabla }_{{y}_{d}}E\left( {X,{\mathbf{y}}^{\left( T\right) }}\right) }\right|$ and ${L}_{\text{aux }}$ (5). One can see that both rapidly decrease during the training and that a good degree of convergence is achieved. We observe similar behaviour with GCN and on other tasks we considered earlier.
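The two statistics above can be sketched as a small helper (a hypothetical function of ours, assuming the auxiliary loss of (5) averages squared gradient norms over the inner-loop steps):

```python
def inner_loop_stats(step_grads):
    # step_grads: one gradient vector (a list of floats) per inner-loop iteration.
    final_max = max(abs(g) for g in step_grads[-1])  # max-norm at the last iterate
    # Auxiliary loss: squared L2 norm of the gradient, averaged over all steps.
    aux = sum(sum(g * g for g in grads) for grads in step_grads) / len(step_grads)
    return final_max, aux

# Two inner-loop steps whose gradients shrink by 10x.
final_max, aux = inner_loop_stats([[3.0, -4.0], [0.3, -0.4]])
```

A small `final_max` signals convergence of the inner optimizer; the auxiliary statistic additionally penalizes large updates at every step.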
236
+
237
+ ## 6 DISCUSSION AND CONCLUSION
238
+
239
+ This work provides a novel optimization-based perspective on the widely encountered problem of aggregating sets that is provably universal. Our proposed algorithm, Equilibrium Aggregation, allows learning a problem-specific aggregation mechanism which, as we show, is beneficial across different applications and neural network architectures. The consistent empirical improvement brought by the use of Equilibrium Aggregation not only shows that many existing models suffer from aggressive compression and inefficient representation of sets, but also suggests a whole new class of set- or graph-oriented architectures that employ a composition of Equilibrium Aggregation operations. Beyond GNNs, other classes of models, such as Transformers, may also profit from more expressive aggregation operations, specifically in modelling long-term memory - a topic strongly connected to compression of sets [Rae et al., 2019, Bartunov et al., 2019] - as well as potentially reduce the number of layers needed.
240
+
241
+ While there is a strong indication that using Equilibrium Aggregation as a building block is effective, the incurred computational cost may require more developments in differentiable optimization [Ernoult et al., 2020], architecture [Amos et al., 2017] and hardware design [Kendall et al., 2020], especially in order to compete with modern extra large models.
242
+
243
+ M. AlQuraishi. Alphafold at casp13. Bioinformatics, 35 (22):4862-4865, 2019.
244
+
245
+ B. Amos. Differentiable optimization-based modeling for machine learning. PhD thesis, PhD thesis, Carnegie Mellon University, 2019.
246
+
247
+ B. Amos and J. Z. Kolter. Optnet: Differentiable optimization as a layer in neural networks. In ICML, pages 136-145. PMLR, 2017.
248
+
249
+ B. Amos, L. Xu, and J. Z. Kolter. Input convex neural networks. In ICML, pages 146-155. PMLR, 2017.
250
+
251
+ B. Amos, I. Jimenez, J. Sacks, B. Boots, and J. Z. Kolter. Differentiable mpc for end-to-end planning and control. NeurIPS, 31, 2018.
252
+
253
+ M. Andrychowicz, M. Denil, S. Gomez, M. W. Hoffman, D. Pfau, T. Schaul, B. Shillingford, and N. De Freitas. Learning to learn by gradient descent by gradient descent. NeurIPS, 29, 2016.
254
+
255
+ J. L. Ba, J. R. Kiros, and G. E. Hinton. Layer normalization. arXiv:1607.06450, 2016.
256
+
257
+ S. Bai, J. Z. Kolter, and V. Koltun. Deep equilibrium models. NeurIPS, 2019.
258
+
259
+ S. Bartunov, J. Rae, S. Osindero, and T. Lillicrap. Meta-learning deep energy-based memory models. In ICLR, 2019.
260
+
261
+ S. Bartunov, V. Nair, P. Battaglia, and T. Lillicrap. Continuous latent search for combinatorial optimization. In Learning Meets Combinatorial Algorithms at NeurIPS2020, 2020.
262
+
263
+ P. W. Battaglia, J. B. Hamrick, V. Bapst, A. Sanchez-Gonzalez, V. Zambaldi, M. Malinowski, A. Tacchetti, D. Raposo, A. Santoro, R. Faulkner, et al. Relational inductive biases, deep learning, and graph networks. arXiv:1806.01261, 2018.
264
+
265
+ M. Blondel, Q. Berthet, M. Cuturi, R. Frostig, S. Hoyer, F. Llinares-López, F. Pedregosa, and J.-P. Vert. Efficient and modular implicit differentiation. arXiv:2105.15183, 2021.
266
+
267
+ J. Bradbury, R. Frostig, P. Hawkins, M. J. Johnson, C. Leary, D. Maclaurin, G. Necula, A. Paszke, J. VanderPlas, S. Wanderman-Milne, and Q. Zhang. JAX: composable transformations of Python+NumPy programs, 2018. URL http://github.com/google/jax.
268
+
269
+ R. Brossard, O. Frigo, and D. Dehaene. Graph convolutions that can finally model local structure. arXiv:2011.15069, 2020.
270
+
271
+ C. Cai and Y. Wang. A note on over-smoothing for graph neural networks. arXiv:2006.13318, 2020.
272
+
273
+ D. Chen, Y. Lin, W. Li, P. Li, J. Zhou, and X. Sun. Measuring and relieving the over-smoothing problem for graph neural networks from the topological view. In AAAI, volume 34, pages 3438-3445, 2020.
274
+
275
+ R. T. Chen, Y. Rubanova, J. Bettencourt, and D. K. Duve-naud. Neural ordinary differential equations. NeurIPS, 31, 2018.
276
+
277
+ G. Corso, L. Cavalleri, D. Beaini, P. Liò, and P. Velick-ovic. Principal neighbourhood aggregation for graph nets. NeurIPS, 2020.
278
+
279
+ G. Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals, and Systems (MCSS), 2(4):303-314, 1989.
280
+
281
+ P. Diaconis and D. Freedman. A dozen de finetti-style results in search of a theory. In Annales de l'IHP Probabilités et statistiques, volume 23, pages 397-423, 1987.
282
+
283
+ Y. Du and I. Mordatch. Implicit generation and generalization in energy-based models. arXiv:1903.08689, 2019.
284
+
285
+ D. K. Duvenaud, D. Maclaurin, J. Iparraguirre, R. Bom-barell, T. Hirzel, A. Aspuru-Guzik, and R. P. Adams. Convolutional networks on graphs for learning molecular fingerprints. NeurIPS, 28, 2015.
286
+
287
+ M. Ernoult, J. Grollier, D. Querlioz, Y. Bengio, and B. Scel-lier. Equilibrium propagation with continual weight updates. arXiv:2005.04168, 2020.
288
+
289
+ C. Finn and S. Levine. Meta-learning and universality: Deep representations and gradient descent can approximate any learning algorithm. ICLR, 2017.
290
+
291
+ C. Finn, P. Abbeel, and S. Levine. Model-agnostic meta-learning for fast adaptation of deep networks. ICML, 2017.
292
+
293
+ K.-I. Funahashi. On the approximate realization of continuous mappings by neural networks. Neural networks, 1989.
294
+
295
+ J. Godwin*, T. Keck*, P. Battaglia, V. Bapst, T. Kipf, Y. Li, K. Stachenfeld, P. Veličković, and A. Sanchez-Gonzalez. Jraph: A library for graph neural networks in jax., 2020. URL http://github.com/deepmind/jraph
296
+
297
+ F. Gu, H. Chang, W. Zhu, S. Sojoudi, and L. El Ghaoui. Implicit graph neural networks. NeurIPS, 2020.
298
+
299
+ K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. CVPR, 2016.
300
+
301
+ T. Hennigan, T. Cai, T. Norman, and I. Babuschkin. Haiku: Sonnet for JAX, 2020. URL http://github.com/deepmind/dm-haiku.
302
+
303
+ K. Hornik, M. Stinchcombe, and H. White. Multilayer feedforward networks are universal approximators. Neural networks, 1989.
306
+
307
+ A. Hottung, B. Bhandari, and K. Tierney. Learning a latent search space for routing problems using variational autoencoders. In ICLR, 2020.
308
+
309
+ W. Hu, M. Fey, M. Zitnik, Y. Dong, H. Ren, B. Liu, M. Catasta, and J. Leskovec. Open graph benchmark: Datasets for machine learning on graphs. NeurIPS, 33: 22118-22133, 2020.
310
+
311
+ M. Hutter. On representing (anti)symmetric functions. arXiv:2007.15298, 2020.
312
+
313
+ J. Kendall, R. Pantone, K. Manickavasagam, Y. Bengio, and B. Scellier. Training end-to-end analog neural networks with equilibrium propagation. arXiv:2006.01981, 2020.
314
+
315
+ D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv:1412.6980, 2014.
316
+
317
+ T. N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. arXiv:1609.02907, 2016.
318
+
319
+ B. M. Lake, R. Salakhutdinov, and J. B. Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332-1338, 2015.
320
+
321
+ J. Lee, Y. Lee, J. Kim, A. Kosiorek, S. Choi, and Y. W. Teh. Set transformer: A framework for attention-based permutation-invariant neural networks. In ICML, pages 3744-3753. PMLR, 2019.
322
+
323
+ R. Liao, Y. Xiong, E. Fetaya, L. Zhang, K. Yoon, X. Pitkow, R. Urtasun, and R. Zemel. Reviving and improving recurrent back-propagation. In ICML, pages 3082-3091. PMLR, 2018.
324
+
325
+ S. Mallat. Understanding deep convolutional networks. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374, 2016.
326
+
327
+ R. L. Murphy, B. Srinivasan, V. Rao, and B. Ribeiro. Janossy pooling: Learning deep permutation-invariant functions for variable-size inputs. ICLR, 2019.
328
+
329
+ Y. E. Nesterov. A method for solving the convex programming problem with convergence rate $O(1/k^2)$. In Dokl. Akad. Nauk SSSR, volume 269, pages 543-547, 1983.
330
+
331
+ G. Pellegrini, A. Tibo, P. Frasconi, A. Passerini, and M. Jaeger. Learning aggregation functions, 2020.
332
+
333
+ F. J. Pineda. Generalization of back-propagation to recurrent neural networks. Physical review letters, 59(19):2229, 1987.
334
+
335
+ P. Putzky and M. Welling. Recurrent inference machines for solving inverse problems. arXiv:1706.04008, 2017.
336
+
337
+ J. W. Rae, A. Potapenko, S. M. Jayakumar, and T. P. Lilli-crap. Compressive transformers for long-range sequence modelling. arXiv:1911.05507, 2019.
338
+
339
+ A. Rajeswaran, C. Finn, S. M. Kakade, and S. Levine. Meta-learning with implicit gradients. NeurIPS, 2019a.
340
342
+
343
+ Y. Rubanova, A. Sanchez-Gonzalez, T. Pfaff, and P. Battaglia. Constraint-based graph network simulator. arXiv:2112.09161, 2021.
344
+
345
+ Y. Song and S. Ermon. Generative modeling by estimating gradients of the data distribution. NeurIPS, 32, 2019.
346
+
347
+ A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin. Attention is all you need. In NeurIPS, pages 5998-6008, 2017.
348
+
349
+ P. Velickovic, G. Cucurull, A. Casanova, A. Romero, P. Liò, and Y. Bengio. Graph attention networks, 2018.
350
+
351
+ O. Vinyals, S. Bengio, and M. Kudlur. Order matters: Sequence to sequence for sets. arXiv:1511.06391, 2015.
352
+
353
+ E. Wagstaff, F. B. Fuchs, M. Engelcke, I. Posner, and M. A. Osborne. On the limitations of representing functions on sets. ICML, 2019.
354
+
355
+ E. Wagstaff, F. B. Fuchs, M. Engelcke, M. A. Osborne, and I. Posner. Universal approximation of functions on sets. ArXiv, 2021.
356
+
357
+ M. Weiler, M. Geiger, M. Welling, W. Boomsma, and T. Cohen. 3d steerable cnns: Learning rotationally equivariant features in volumetric data. In NeurIPS, 2018.
358
+
359
+ M. Winkels and T. S. Cohen. 3d g-cnns for pulmonary nodule detection. NeurIPS, 2018.
360
+
361
+ D. E. Worrall, S. J. Garbin, D. Turmukhambetov, and G. J. Brostow. Harmonic networks: Deep translation and rotation equivariance. CVPR, 2017.
362
+
363
+ K. Xu, W. Hu, J. Leskovec, and S. Jegelka. How powerful are graph neural networks? arXiv:1810.00826, 2018.
364
+
365
+ Z. Yuan, Y. Yan, M. Sonka, and T. Yang. Large-scale robust deep auc maximization: A new surrogate loss and empirical studies on medical image classification. arXiv:2012.03173, 2020.
366
+
367
+ M. Zaheer, S. Kottur, S. Ravanbakhsh, B. Póczos, R. Salakhutdinov, and A. J. Smola. Deep sets. NeurIPS, 2017.
368
+
369
+ ## A EQUILIBRIUM AGGREGATION AS MAP INFERENCE
370
+
371
+ Here we provide another useful perspective on Equilibrium Aggregation which is connecting the method to prior work in Bayesian inference and continuing one of the arguments made by Zaheer et al. [2017].
372
+
373
+ Consider a joint distribution over a sequence of random variables ${\mathbf{x}}_{1},{\mathbf{x}}_{2},\ldots$ . The sequence is called infinitely exchangeable if, for any $N$ the joint probability $p\left( {{\mathbf{x}}_{1},{\mathbf{x}}_{2},\ldots ,{\mathbf{x}}_{N}}\right)$ is invariant to permutation of the indices. Formally speaking, for any permutation over indices $\pi$ we have
374
+
375
+ $$
376
+ p\left( {{\mathbf{x}}_{1},{\mathbf{x}}_{2},\ldots ,{\mathbf{x}}_{N}}\right) = p\left( {{\mathbf{x}}_{\pi \left( 1\right) },{\mathbf{x}}_{\pi \left( 2\right) },\ldots ,{\mathbf{x}}_{\pi \left( N\right) }}\right) .
377
+ $$
378
+
379
+ According to De Finetti's theorem (see, for example, Diaconis and Freedman [1987]), the sequence ${\mathbf{x}}_{1},{\mathbf{x}}_{2},\ldots$ is infinitely exchangeable iff, for all $N$, it admits the following mixture-style decomposition:
380
+
381
+ $$
382
+ p\left( {{\mathbf{x}}_{1},{\mathbf{x}}_{2},\ldots ,{\mathbf{x}}_{N}}\right) = \int \mathop{\prod }\limits_{{i = 1}}^{N}p\left( {{\mathbf{x}}_{i} \mid \mathbf{y}}\right) p\left( \mathbf{y}\right) d\mathbf{y}.
383
+ $$
384
+
385
+ Since the existence of this model for exchangeable sequences is guaranteed, one can consider the posterior distribution $p\left( {\mathbf{y} \mid {\mathbf{x}}_{1},{\mathbf{x}}_{2},\ldots ,{\mathbf{x}}_{N}}\right)$ which effectively encodes all global information about the observed inputs.
386
+
387
+ Since full and exact posterior inference is often infeasible (and the theorem does not guarantee at all that the prior $p\left( \mathbf{y}\right)$ and the likelihood $p\left( {\mathbf{x} \mid \mathbf{y}}\right)$ are conjugate or otherwise admit closed-form inference), in practice maximum a posteriori (MAP) estimates are used when a point estimate is sufficient:
388
+
389
+ $$
390
+ \widehat{\mathbf{y}} = \arg \mathop{\max }\limits_{\mathbf{y}}\log p\left( {\mathbf{y} \mid {\mathbf{x}}_{1},\ldots ,{\mathbf{x}}_{N}}\right)
391
+ $$
392
+
393
+ $$
394
+ = \arg \mathop{\max }\limits_{\mathbf{y}}\left\lbrack {\mathop{\sum }\limits_{{i = 1}}^{N}\underset{ = - F\left( {{\mathbf{x}}_{i},\mathbf{y}}\right) }{\underbrace{\log p\left( {{\mathbf{x}}_{i} \mid \mathbf{y}}\right) }} + \underset{ = - R\left( \mathbf{y}\right) }{\underbrace{\log p\left( \mathbf{y}\right) }}}\right\rbrack . \tag{12}
395
+ $$
396
+
397
+ Informally speaking, this means that MAP encoding of sets under a probabilistic model with a global hidden variable (which must exist, albeit potentially in a complicated form) amounts to the optimization problem (12), which is almost the same as the Equilibrium Aggregation formulation (2). Allowing the potential $F\left( {\mathbf{x},\mathbf{y}}\right)$ to be a flexible neural network, it is possible to recover the desired negative log-likelihood $-\log p\left( {\mathbf{x} \mid \mathbf{y}}\right)$ (up to an additive constant).
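As a concrete special case (a standard derivation, not taken from the paper), a unit-variance Gaussian likelihood with a flat prior recovers mean pooling:

$$
-\log p\left( {{\mathbf{x}}_{i} \mid \mathbf{y}}\right) = \frac{1}{2}{\begin{Vmatrix}{\mathbf{x}}_{i} - \mathbf{y}\end{Vmatrix}}_{2}^{2} + \text{const}, \quad \widehat{\mathbf{y}} = \arg \mathop{\min }\limits_{\mathbf{y}}\mathop{\sum }\limits_{{i = 1}}^{N}\frac{1}{2}{\begin{Vmatrix}{\mathbf{x}}_{i} - \mathbf{y}\end{Vmatrix}}_{2}^{2} = \frac{1}{N}\mathop{\sum }\limits_{{i = 1}}^{N}{\mathbf{x}}_{i},
$$

i.e., the MAP estimate under this model is exactly the mean of the set.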
398
+
399
+ This observation provides an additional theoretical argument in support of Equilibrium Aggregation and also suggests a number of interesting extensions one can imagine by further exploring the vast toolset of probabilistic inference.
400
+
401
+ ## B ATTENTION AS EQUILIBRIUM AGGREGATION
402
+
403
+ We have already outlined how simple pooling methods can be recovered as special cases of Equilibrium Aggregation. Here, we demonstrate how Equilibrium Aggregation can learn to model the popular attention mechanism.
404
+
405
+ We denote the interaction or query vector as $\mathbf{h}$ . Note that we consider many-to-one aggregation and therefore only have one query vector. Here, the query vector is learned and independent of the input set. For brevity, we will ignore the commonly used distinction between keys and values over which the attention is computed and will simply consider a set of vectors $X = {\left\{ {\mathbf{x}}_{i}\right\} }_{i = 1}^{N}$ serving as both. Now, we split the aggregation result as $\mathbf{y} = \left\lbrack {{\mathbf{y}}_{r},{y}_{s}}\right\rbrack$ and define the potential function as follows:
406
+
407
+ $$
408
+ F\left( {\mathbf{x},\mathbf{y}}\right) = \exp \left( {{\mathbf{h}}^{T}\mathbf{x}}\right) {\begin{Vmatrix}\mathbf{x} - {\mathbf{y}}_{r}\end{Vmatrix}}_{2}^{2} + {\left( {y}_{s} - \exp \left( {\mathbf{h}}^{T}\mathbf{x}\right) \right) }^{2}.
409
+ $$
410
+
411
+ Assuming no prior, the optimization problem (3) would then lead to the following solution:
412
+
413
+ $$
414
+ {\mathbf{y}}_{r} = \frac{1}{N}\mathop{\sum }\limits_{{i = 1}}^{N}\exp \left( {{\mathbf{h}}^{T}{\mathbf{x}}_{i}}\right) {\mathbf{x}}_{i},\;{y}_{s} = \frac{1}{N}\mathop{\sum }\limits_{{i = 1}}^{N}\exp \left( {{\mathbf{h}}^{T}{\mathbf{x}}_{i}}\right) ,
415
+ $$
416
+
417
+ from which the normalized result can be recovered trivially
418
+
419
+ as
420
+
421
+ $$
422
+ \frac{{\mathbf{y}}_{r}}{{y}_{s}} = \mathop{\sum }\limits_{{i = 1}}^{N}\frac{\exp \left( {{\mathbf{h}}^{T}{\mathbf{x}}_{i}}\right) }{\mathop{\sum }\limits_{{j = 1}}^{N}\exp \left( {{\mathbf{h}}^{T}{\mathbf{x}}_{j}}\right) }{\mathbf{x}}_{i}.
423
+ $$
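A small numeric check of this connection (a plain-Python simplification of ours, not the paper's construction): the first, exponentially weighted quadratic term alone already has the softmax-weighted average as its unique minimizer, so gradient descent on that energy reproduces the attention readout directly:

```python
import math

def attention_via_energy(xs, h, steps=500, lr=0.5):
    # Minimize E(y) = sum_i exp(h.x_i) * ||x_i - y||^2 by gradient descent;
    # the unique minimizer is sum_i softmax(h.x)_i * x_i, the attention readout.
    d = len(xs[0])
    w = [math.exp(sum(hk * xk for hk, xk in zip(h, x))) for x in xs]
    total = sum(w)
    y = [0.0] * d
    for _ in range(steps):
        grad = [sum(wi * (y[k] - x[k]) for wi, x in zip(w, xs)) for k in range(d)]
        y = [y[k] - lr * grad[k] / total for k in range(d)]
    return y

xs = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
h = [0.5, -0.25]
y = attention_via_energy(xs, h)
```

The split into $\mathbf{y}_r$ and $y_s$ above serves the same purpose while keeping the potential a simple per-element function.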
424
+
425
+ ## C PRACTICAL IMPLEMENTATION OF EQUILIBRIUM AGGREGATION
426
+
427
+ While we generally found Equilibrium Aggregation to be robust to various aspects of implementation, in this appendix we share the best practices discovered in our experiments.
428
+
429
+ ### C.1 POTENTIAL FUNCTION
430
+
431
+ The potential function $F\left( {\mathbf{x},\mathbf{y}}\right)$ has been implemented in our experiments as a two-layer ResNet with tanh activations, layer normalization [Ba et al., 2016] and, importantly, a sum-of-the-squares output. The Jax implementation can be found in Listing 1.
432
+
433
+ The tanh activations and layer normalization ensure numerically stable gradients with respect to $\mathbf{y}$. At the same time, the sum-of-squares output allows the potential to exhibit richer behaviour, especially when all of the potentials are summed in the total energy.
434
+
435
+ ### C.2 SCALED ENERGY
436
+
437
+ The number of elements in the set, $N$, may vary significantly across different data points in a dataset, which ultimately makes it difficult to set a single optimization schedule (learning rate and momentum) that would work equally well for all values of $N$. This is because the energy (2) is a sum over all elements in the set, so the gradient ${\nabla }_{\mathbf{y}}E\left( {X,\mathbf{y}}\right)$ scales linearly with $N$.
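This linear growth is easy to see with a toy quadratic potential (an illustration of ours, not the learned potential from the paper): duplicating the set doubles the gradient magnitude.

```python
def energy_grad(xs, y):
    # d/dy of E(X, y) = sum_i (x_i - y)^2 is sum_i 2 * (y - x_i):
    # a sum over the set, so its magnitude grows linearly with N.
    return sum(2.0 * (y - x) for x in xs)

g1 = energy_grad([0.1, 0.9, 0.4], y=0.0)
g2 = energy_grad([0.1, 0.9, 0.4] * 2, y=0.0)  # same elements, N doubled
```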
438
+
439
+ ---
440
+
441
```python
from typing import Callable, Sequence

import haiku as hk
import jax
import jax.numpy as jnp


class SuperMLP(hk.Module):

  def __init__(self,
               hidden: Sequence[int],
               activation: Callable[[jnp.ndarray], jnp.ndarray],
               activate_final: bool = False,
               normalize: bool = False,
               spectral_norm: bool = False,  # accepted but unused in this snippet
               residual: bool = False,
               name=None):
    super().__init__(name=name)
    self._hidden = hidden
    self._activation = activation
    self._activate_final = activate_final
    self._normalize = normalize
    self._residual = residual

  def __call__(self, x, conditional=None, is_training=True):
    for i, size in enumerate(self._hidden):
      if conditional is not None:
        x = jnp.concatenate([x, conditional], axis=-1)
      h = hk.Linear(size)(x)
      if i < len(self._hidden) - 1 or self._activate_final:
        if self._normalize:
          h = hk.LayerNorm(-1, True, True)(h)
        h = self._activation(h)
      if self._residual:
        if size != x.shape[1]:
          x = hk.Linear(size)(x)  # project the skip branch to the new width
        x += h
      else:
        x = h
    return x


def potential_net(x, y, hidden_size):
  z = jnp.concatenate([x, y], axis=-1)
  h = SuperMLP([hidden_size * 2, hidden_size, 32],
               activation=jax.nn.tanh,
               activate_final=False, residual=True,
               normalize=True)(z)
  h = jnp.square(h)
  return jnp.mean(h, axis=1)
```
524
+
525
+ Listing 1: Potential function implementation in Jax.
526
+
527
+ ---
528
+
529
+ ![01963937-e38a-7962-9292-28b6a2e0e1c6_12_159_183_683_466_0.jpg](images/01963937-e38a-7962-9292-28b6a2e0e1c6_12_159_183_683_466_0.jpg)
530
+
531
+ Figure 6: Evolution of various trainable parameters of the inner-loop optimizer.
532
+
533
A potential solution to this problem would be to simply average the potentials instead of summing them, but this would make it very difficult, if not impossible, to reason about the number of elements in the set from $\mathbf{y}$. Thus, we use a different solution: we still scale the energy so that it increases in magnitude as $N$ grows, but at a sublinear rate:
534
+
535
+ $$
536
+ E\left( {X,\mathbf{y}}\right) = \frac{R\left( \mathbf{y}\right) + \mathop{\sum }\limits_{{i = 1}}^{N}F\left( {{\mathbf{x}}_{i},\mathbf{y}}\right) }{\left( N + \epsilon \right) }{\log }_{2}\left( {N + 1}\right) , \tag{13}
537
+ $$
538
+
539
+ where $\epsilon = {10}^{-8}$ is a small constant to prevent division by zero in the case of an empty set.
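As a quick numerical sketch (ours, not the authors' code), the factor ${\log }_{2}\left( {N + 1}\right) /\left( {N + \epsilon }\right)$ in (13) turns a linearly growing sum of potentials into an energy that grows only logarithmically in $N$:

```python
import math

EPS = 1e-8  # prevents division by zero for empty sets, as in Eq. (13)

def scaled_energy(potential_sum: float, n: int) -> float:
    # Eq. (13) with the regularizer folded into potential_sum for brevity.
    return potential_sum / (n + EPS) * math.log2(n + 1)

# For per-element potentials of constant magnitude, the unscaled energy
# grows linearly in n while the scaled energy grows like log2(n + 1).
scaled = [scaled_energy(1.0 * n, n) for n in (1, 10, 100, 1000)]
assert all(b > a for a, b in zip(scaled, scaled[1:]))   # still increasing
assert scaled[-1] < 10.0                                # but sublinearly
```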
540
+
541
+ ### C.3 INITIALIZATION
542
+
543
In all experiments, ${\mathbf{y}}^{\left( 0\right) }$ was set to the zero vector, which we found facilitated faster training.
544
+
545
+ ### C.4 INNER-LOOP OPTIMIZATION ALGORITHM
546
+
547
+ We used gradient descent with Nesterov-accelerated momentum [Nesterov, 1983] as an algorithm for optimizing (3). We provide the full code in Listing 2.
548
+
549
Figure 6 shows the evolution of the trainable learning rate and momentum parameters of the optimizer in the MOLPCBA-GIN experiment, as well as the regularization weight. One can see that all three parameters largely stabilize after the first ${10}^{6}$ training steps.
550
+
551
+ ### C.5 IMPLICIT DIFFERENTIATION
552
+
553
In the course of this work we briefly explored the possibility of employing implicit differentiation. However, in this regime it is not trivial to allow, e.g., the learning rate to be trained together with the model end-to-end, and we found it difficult to devise an optimization schedule that would work well in all phases of training. Larger step sizes led to unstable training, while smaller step sizes required too many iterations to converge, making implicit differentiation computationally less efficient than the straightforward explicit differentiation that we ended up using for all the experiments.
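To illustrate what implicit differentiation would involve, consider a scalar sketch (ours, with an arbitrary quadratic energy): at the minimizer, the implicit function theorem gives $\mathrm{d}{y}^{ * }/\mathrm{d}\theta = - {\left( {\partial }^{2}E/\partial {y}^{2}\right) }^{-1}{\partial }^{2}E/\partial y\partial \theta$, which we can check against finite differences:

```python
# Toy energy E(y, theta) = (y - theta)**2 + 0.1 * y**2.
def argmin_y(theta: float) -> float:
    # Setting dE/dy = 2(y - theta) + 0.2 y = 0 gives y* = theta / 1.1.
    return theta / 1.1

def ift_gradient() -> float:
    # Implicit function theorem: dy*/dtheta = -E_ytheta / E_yy,
    # with E_yy = 2.2 and E_ytheta = -2 for the energy above.
    return 2.0 / 2.2

# Finite-difference check of the implicit gradient.
eps = 1e-5
fd = (argmin_y(1.0 + eps) - argmin_y(1.0 - eps)) / (2 * eps)
assert abs(fd - ift_gradient()) < 1e-6
```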
554
+
555
+ ## D FURTHER EXPERIMENTAL DETAILS
556
+
557
+ ### D.1 MEDIAN ESTIMATION
558
+
559
+ Data Creation The data is created indefinitely on the fly. For each sample, first, one of three probability distributions is selected by chance: uniform (between 0 and 1), gamma (scale 0.2, shape 0.5), or normal (mean 0.5, standard deviation 0.4). Then, 100 values are randomly drawn from the selected distribution. The label is the median value of the set of these 100 values.
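This procedure can be sketched as follows (a minimal re-implementation of the description above, not the authors' data pipeline; the function name is ours):

```python
import random

def sample_median_task(set_size: int = 100):
    """Draws one (values, label) pair as described above."""
    dist = random.choice(["uniform", "gamma", "normal"])
    if dist == "uniform":
        values = [random.uniform(0.0, 1.0) for _ in range(set_size)]
    elif dist == "gamma":
        # random.gammavariate(shape, scale): shape 0.5, scale 0.2.
        values = [random.gammavariate(0.5, 0.2) for _ in range(set_size)]
    else:
        values = [random.gauss(0.5, 0.4) for _ in range(set_size)]
    # Median label; for the even set size we take the upper middle element
    # for simplicity.
    label = sorted(values)[set_size // 2]
    return values, label

values, label = sample_median_task()
assert len(values) == 100
assert min(values) <= label <= max(values)
```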
560
+
561
Evaluation For average performance (bold lines in Fig. 3), we average across seeds, apply exponential smoothing, and report the performance after 10 million training steps. Equilibrium Aggregation is roughly one order of magnitude better. For the best performing seed (faded lines in Fig. 3), we report the best performing evaluation step (each evaluation step uses 80000 samples) across all seeds.
562
+
563
+ ### D.2 MOLPCBA
564
+
565
As mentioned in the main text, we performed a brief hyperparameter search for the weight of ${L}_{\text{aux}}$ (5). Based on these results, we proceeded with a weight of 1 for both architectures. We did not optimize this hyperparameter for local aggregation and simply used the value of ${10}^{-4}$ as in the rest of the experiments. Both local and global aggregation used 15 iterations of energy minimization.
566
+
567
---

```python
from typing import Any, Callable, Optional

import haiku as hk
import jax
import jax.numpy as jnp
import jax.scipy as jsp
import numpy as np


def inverse_softplus(x):
  return np.log(np.exp(x) - 1.)


class MomentumOptimizer(hk.Module):

  def __init__(self, learning_rate: float = 0.125,
               momentum: float = 0.9,
               name: Optional[str] = None):
    super().__init__(name=name)
    self._mu = hk.get_parameter(
        "momentum", [], jnp.float32,
        hk.initializers.Constant(jsp.special.logit(momentum)))
    self._lr = hk.get_parameter(
        "lr", [], jnp.float32,
        hk.initializers.Constant(inverse_softplus(learning_rate)))

  @property
  def learning_rate(self):
    return jax.nn.softplus(self._lr)

  @property
  def momentum(self):
    return jax.nn.sigmoid(self._mu)

  def __call__(self, f: Callable[[Any, jnp.ndarray, Any], jnp.ndarray],
               y_init: jnp.ndarray, x: Any, theta: Any, max_iters: int = 5,
               gtol: float = 1e-3, clip_value: Optional[float] = None):
    """Runs Nesterov-accelerated gradient descent on f.

    Args:
      f: objective that takes y (optimization argument) of shape
        [batch_size, ...], x (conditioning input) of shape [batch_size, ...],
        and theta (shared params) and outputs a vector of objective values of
        shape [batch_size].
      y_init: the initial value for y of shape [batch_size, ...].
      x: conditioning parameters.
      theta: shared parameters for the objective.
      max_iters: maximum number of optimization iterations.
      gtol: tolerance level for stopping optimization (in terms of gradient
        max norm).
      clip_value: if specified, defines an interval [-clip_value, clip_value]
        to project each dimension of the state variable on.

    Returns:
      (y_optimal, optimizer_results).
    """
    def combined_objective(y, x, theta):
      fval = f(y, x, theta)
      return jnp.sum(fval), fval

    grad_fn = jax.grad(combined_objective, argnums=0, has_aux=True)
    y = y_init
    grad_norm = jnp.zeros([y.shape[0]], dtype=y.dtype)
    fval = jnp.zeros([y.shape[0]], dtype=y.dtype)
    max_norm = jnp.zeros([y.shape[0]], dtype=y.dtype)
    momentum = jnp.zeros_like(y)

    def loop_body(_, args):
      y, grad_norm, momentum, max_norm, f_val = args
      # Nesterov lookahead: evaluate the gradient at the extrapolated point.
      grad, f_val = grad_fn(y + self.momentum * momentum, x, theta)
      max_norm = jnp.max(jnp.abs(grad), axis=1)
      # Freeze batch elements whose gradient max norm is already below gtol.
      grad_mask = jnp.greater_equal(max_norm, gtol)
      grad_mask = grad_mask.astype(y.dtype)
      momentum = self.momentum * momentum - self.learning_rate * grad
      y += grad_mask[:, None] * momentum
      if clip_value is not None:
        y = jnp.clip(y, 0. - clip_value, clip_value)
      grad_norm += jnp.square(grad).mean(axis=1)
      return y, grad_norm, momentum, max_norm, f_val

    return jax.lax.fori_loop(0, max_iters, loop_body,
                             (y, grad_norm, momentum, max_norm, fval))
```

Listing 2: Optimizer code in Jax.

---
702
+
UAI/UAI 2022/UAI 2022 Conference/BElGwDLoqlc/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,283 @@
1
+ § ABSTRACT
2
+
3
Processing sets or other unordered, potentially variable-sized inputs in neural networks is usually handled by aggregating a number of input tensors into a single representation. While a number of aggregation methods already exist, from simple sum pooling to multi-head attention, they are limited in their representational power from both theoretical and empirical perspectives. In search of a fundamentally more powerful aggregation strategy, we propose an optimization-based method called Equilibrium Aggregation. We show that many existing aggregation methods can be recovered as special cases of Equilibrium Aggregation and that it is provably more efficient in some important cases. Equilibrium Aggregation can be used as a drop-in replacement in many existing architectures and applications. We validate its efficiency on three different tasks: median estimation, class counting, and molecular property prediction. In all experiments, Equilibrium Aggregation achieves higher performance than the other aggregation techniques we test.
4
+
5
+ § 1 INTRODUCTION
6
+
7
+ Early neural networks research focused on processing fixed-dimensional vector inputs. Since then, advanced architectures have been developed for processing fixed-dimensional data efficiently and effectively. This format, however, is not natural for applications where inputs do not have a fixed dimensionality, are unordered, or have both of these properties. A strikingly successful strategy for tackling this issue has been to process such inputs with a series of aggregation $\rightarrow$ transformation operations.
8
+
9
+ An aggregation operation compresses a set of input tensors into a single representation of a known, predefined dimensionality that can be then further sent to the downstream transformation block. Since the latter deals with fixed-dimensional inputs with a defined ordering, it can profit from the variety of techniques available for vector-to-vector computations.
10
+
11
12
+
13
+ Figure 1: Global aggregation layers in typical neural networks for sets (top) and graphs (bottom). Top: each input set element ${\mathbf{x}}_{i}$ is first processed individually before being pooled into a global representation $\mathbf{y}$ . This is followed by a final transformation block. Bottom: for graph data, the first part of the network is replaced by a graph or message passing neural network, but the global aggregation step is similar. In both cases, the global aggregation step drastically reduces the number of embeddings from many to one, rendering the right choice of aggregation technique critical for good model performance. The aggregation layer is typically implemented using sum-, max-, or attention-pooling. We propose a new aggregation mechanism, called Equilibrium Aggregation.
14
+
15
+ This pattern can be seen in many architectures. For instance, Deep Sets [Zaheer et al., 2017] builds a representation of a set of objects by first transforming each object and then summing their embeddings. Similarly, Graph Neural Networks [Kipf and Welling, 2016, Battaglia et al., 2018] use a message-passing mechanism, which amounts to aggregating the set of input messages received by each node from its neighbours and then transforming the aggregate into a new message on the next layer (local aggregation). In many cases, several message passing layers are then followed by a global aggregation layer, where all node embeddings are aggregated into one global embedding vector describing the entire graph. Finally, Transformers [Vaswani et al., 2017] use self-attention, a mechanism that allows each object in the input set to interact with every other object and update its embedding by aggregating value embeddings from the rest of the set.
16
+
17
Mathematically, the aggregation $\phi \left( X\right) = \mathbf{y}$ compresses the input set $X = \left\{ {{\mathbf{x}}_{1},{\mathbf{x}}_{2},\ldots ,{\mathbf{x}}_{N}}\right\} \in {2}^{\mathcal{X}}$ into a $D$-dimensional vector $\mathbf{y} \in {\mathbb{R}}^{D}$. In the case of Deep Sets [Zaheer et al., 2017] with sum aggregation, this reads
18
+
19
+ $$
20
+ \phi \left( X\right) = \rho \left( {\mathop{\sum }\limits_{{i = 1}}^{N}f\left( {\mathbf{x}}_{i}\right) }\right) , \tag{1}
21
+ $$
22
+
23
+ where $f$ and $\rho$ are the optional input and output transformations, respectively.
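As a toy illustration of (1) (our sketch; the particular $f$ and $\rho$ are arbitrary), sum pooling makes the composite permutation invariant by construction:

```python
def f(x):
    # Elementwise input transformation: a fixed quadratic feature map.
    return (x, x * x)

def rho(z):
    # Output transformation applied to the pooled embedding.
    return z[0] + 0.5 * z[1]

def phi(xs):
    # Deep Sets with sum pooling: rho(sum_i f(x_i)), as in Eq. (1).
    pooled = [sum(feats) for feats in zip(*(f(x) for x in xs))]
    return rho(pooled)

# Permutation invariance: any reordering of the set gives the same output.
assert phi([1.0, 2.0, 3.0]) == phi([3.0, 1.0, 2.0])
```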
24
+
25
Besides yielding a fixed-dimensional output embedding, (1) enforces an important inductive bias: permutation invariance. Global properties of sets or graphs (such as the free energy of a molecule) are independent of the ordering of the set elements. Taking advantage of such task symmetries [Mallat, 2016] can add robustness guarantees with respect to important classes of input transformations, and is known to help generalisation performance [Worrall et al., 2017, Weiler et al., 2018, Winkels and Cohen, 2018]. Other ways of incorporating permutation invariance are max-pooling, mean-pooling or attention aggregators [Kipf and Welling, 2016, Battaglia et al., 2018, Vaswani et al., 2017, Velickovic et al., 2018].
26
+
27
However, it is exactly these aggregation functions that often introduce a bottleneck in the information flow [Zaheer et al., 2017, Wagstaff et al., 2019, Cai and Wang, 2020, Chen et al., 2020, Wagstaff et al., 2021]. It is easy to see that sum aggregation may struggle to selectively extract relevant information from individual inputs or subsets, and while methods like multi-head attention (effectively amounting to a weighted mean per head) partially address this issue, we believe there is a fundamental need for more expressive aggregation mechanisms.
28
+
29
30
+
31
Figure 2: Schematic illustration of Equilibrium Aggregation. Each input $\mathbf{x} \in X$ contributes a potential value $F\left( {\mathbf{x},\mathbf{y}}\right)$; these values are summed over the set $X$ and, together with the regularizer $R\left( \mathbf{y}\right)$, form the total energy. Equilibrium Aggregation seeks to minimize this energy, and the minimum found serves as the aggregation result.
32
+
33
Motivated by this need, we develop a method called Equilibrium Aggregation, which generalizes existing pooling-based aggregation methods and is obtained as the implicit solution to an optimization-based formulation of aggregation. We further investigate its theoretical properties and show that not only is it a universal approximator of set functions, but it is also provably more expressive than sum or max aggregation in some cases. Finally, we validate our insights empirically in a series of experiments where Equilibrium Aggregation demonstrates its practical effectiveness.
34
+
35
+ § 2 EQUILIBRIUM AGGREGATION
36
+
37
+ Our insight for developing better aggregation functions is grounded in the fact that the standard, pooling-based aggregation methods can be recovered as solutions to a certain optimization problem:
38
+
39
+ $$
40
+ \phi \left( X\right) = \arg \mathop{\min }\limits_{\mathbf{y}}\mathop{\sum }\limits_{{i = 1}}^{N}F\left( {{\mathbf{x}}_{i},\mathbf{y}}\right) , \tag{2}
41
+ $$
42
+
43
+ where $F\left( {\mathbf{x},\mathbf{y}}\right)$ is a potential function.
44
+
45
For example, with $F\left( {\mathbf{x},\mathbf{y}}\right) = {\left( \mathbf{x} - \mathbf{y}\right) }^{2}$ (and assuming $\mathcal{X} = \mathbb{R}$ ), one obtains the mean aggregation $\phi \left( X\right) = \frac{1}{N}\mathop{\sum }\limits_{{i = 1}}^{N}{\mathbf{x}}_{i}$ ; more examples can be found in Table 1. A natural question arises from this observation: can a more interesting aggregation strategy be induced by other choices of the potential function $F\left( {\mathbf{x},\mathbf{y}}\right)$ ?
46
+
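These correspondences can be verified numerically; the sketch below (ours) recovers the mean and median cases by brute-force minimization of the summed potentials over a fine grid:

```python
def argmin_on_grid(energy, lo=0.0, hi=1.0, steps=100001):
    """Brute-force minimizer of a 1-d energy over a uniform grid."""
    ys = [lo + (hi - lo) * i / (steps - 1) for i in range(steps)]
    return min(ys, key=energy)

xs = [0.1, 0.2, 0.6, 0.7, 0.9]

# F(x, y) = (x - y)^2 induces the mean ...
mean = argmin_on_grid(lambda y: sum((x - y) ** 2 for x in xs))
assert abs(mean - sum(xs) / len(xs)) < 1e-4

# ... and F(x, y) = |x - y| induces the median.
median = argmin_on_grid(lambda y: sum(abs(x - y) for x in xs))
assert abs(median - 0.6) < 1e-4
```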
47
+ ${}^{1}$ Interestingly, even though in the case of Transformers for natural language processing the input is an ordered sequence, it appears beneficial to model the data as an order-independent set (or fully connected graph) with the sequential structure added via positional encodings.
48
+
49
We propose a method called Equilibrium Aggregation that addresses this question by letting the potential be a learnable neural network ${F}_{\theta }\left( {\mathbf{x},\mathbf{y}}\right)$ parameterized by $\theta$, which takes a set element $\mathbf{x}$ and the aggregation result $\mathbf{y} \in \mathcal{Y} = {\mathbb{R}}^{M}$ as inputs and outputs a non-negative real scalar expressing the degree of "disagreement" between the two. By also adding a regularization term, we obtain the energy-minimization equation for Equilibrium Aggregation:
50
+
51
+ $$
52
+ {\phi }_{\theta }\left( X\right) = \arg \mathop{\min }\limits_{\mathbf{y}}{E}_{\theta }\left( {X,\mathbf{y}}\right) ,
53
+ $$
54
+
55
+ $$
56
+ {E}_{\theta }\left( {X,\mathbf{y}}\right) = {R}_{\theta }\left( \mathbf{y}\right) + \mathop{\sum }\limits_{{i = 1}}^{N}{F}_{\theta }\left( {{\mathbf{x}}_{i},\mathbf{y}}\right) , \tag{3}
57
+ $$
58
+
59
where for the scope of the paper the regularizer is simply ${R}_{\theta }\left( \mathbf{y}\right) = \operatorname{softplus}\left( \lambda \right) \cdot \parallel \mathbf{y}{\parallel }_{2}^{2}$. A graphical illustration of this construction can be found in Figure 2.
60
+
61
Interestingly, this means that the result of the aggregation $\mathbf{y}$ is defined implicitly and is generally not available in closed form. Instead, one can find $\mathbf{y}$ by numerically solving the optimization problem (3), e.g., by gradient descent:
62
+
63
+ $$
64
{\mathbf{y}}^{\left( t + 1\right) } = {\mathbf{y}}^{\left( t\right) } - \alpha {\nabla }_{\mathbf{y}}{E}_{\theta }\left( {X,{\mathbf{y}}^{\left( t\right) }}\right) ,\;{\phi }_{\theta }\left( X\right) = {\mathbf{y}}^{\left( T\right) }. \tag{4}
65
+ $$
66
+
67
Under certain conditions and with a large enough number of steps $T$, this procedure provides a sufficiently accurate solution that is itself well-defined and differentiable: either explicitly, through the unrolled gradient descent [Andrychowicz et al., 2016, Finn et al., 2017], or via the implicit function theorem applied to the optimality condition of (3) [Bai et al., 2019, Blondel et al., 2021]. This makes it possible to learn the parameters $\theta$ of the potential and to train the whole model involving the aggregation end-to-end.
68
+
69
In general, it is not guaranteed that gradient-based optimization will converge to the global minimum of (3) when the potential is an arbitrarily structured neural network. However, with a large enough regularization weight $\lambda$, it is possible to enforce convexity at least in a subspace of $\mathcal{Y}$ [Rajeswaran et al., 2019b]. When gradient descent is initialized from a learnable starting point or, as in our implementation, from the zero vector, it becomes sufficient to find just a stationary point, as long as the next layer in the network makes use of the aggregation result. Relaxing the need for convergence to the global minimum, together with the use of flexible neural networks, makes it possible to implement a potentially complex and expressive aggregation mechanism. In our implementation we employ explicit differentiation through gradient descent and find that the network generally learns convergent dynamics (4) automatically, even with a fairly small number of iterations such as $T = {10}$.
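For intuition, with the quadratic potential $F\left( {x,y}\right) = {\left( x - y\right) }^{2}$ and regularizer $\lambda {y}^{2}$ the energy is convex with the closed-form minimizer ${y}^{ * } = \mathop{\sum }\limits_{i}{x}_{i}/\left( {N + \lambda }\right)$, and the unrolled iteration (4) reaches it within a few dozen steps (our sketch; the step size and $\lambda$ are arbitrary choices):

```python
def energy_grad(y, xs, lam):
    # d/dy [ lam * y^2 + sum_i (x_i - y)^2 ]
    return 2.0 * lam * y + sum(2.0 * (y - x) for x in xs)

def aggregate(xs, lam=0.5, lr=0.05, steps=50):
    # Unrolled gradient descent of Eq. (4), started from y = 0.
    y = 0.0
    for _ in range(steps):
        y -= lr * energy_grad(y, xs, lam)
    return y

xs = [1.0, 2.0, 3.0]
y_star = sum(xs) / (len(xs) + 0.5)  # closed-form minimizer
assert abs(aggregate(xs) - y_star) < 1e-6
```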
70
+
71
| Aggregation | $\phi \left( X\right)$ | $F\left( {\mathbf{x},\mathbf{y}}\right)$ |
| --- | --- | --- |
| Mean | $\frac{1}{N}\mathop{\sum }\limits_{{i = 1}}^{N}{\mathbf{x}}_{i}$ | ${\left( \mathbf{x} - \mathbf{y}\right) }^{2}$ |
| Median | ${\mathbf{x}}_{\left\lbrack N/2\right\rbrack }$ | $\left\vert {\mathbf{x} - \mathbf{y}}\right\vert$ |
| Max | $\max \{ {\mathbf{x}}_{1},\ldots ,{\mathbf{x}}_{N}\}$ | $\max \left( {0,\mathbf{x} - \mathbf{y}}\right)$ |
| Sum | $\mathop{\sum }\limits_{{i = 1}}^{N}{\mathbf{x}}_{i}$ or $\arg \mathop{\min }\limits_{\mathbf{y}}\left\lbrack {\frac{{\mathbf{y}}^{2}}{2} + \mathop{\sum }\limits_{i}F\left( {{\mathbf{x}}_{i},\mathbf{y}}\right) }\right\rbrack$ | $- \mathbf{x} \cdot \mathbf{y}$ |
| Equilibrium Aggregation | $\arg \mathop{\min }\limits_{\mathbf{y}}{E}_{\theta }\left( {X,\mathbf{y}}\right)$ | Neural network ${F}_{\theta }\left( {\mathbf{x},\mathbf{y}}\right)$ |
92
+ Table 1: A comparison between Equilibrium Aggregation and pooling-based aggregation methods. Equations are given for the scalar case or can be applied coordinate-wise in higher dimensions.
93
+
94
+ To additionally encourage convergence, we consider the following auxiliary loss that penalizes the norm of the energy gradient at each step of optimization:
95
+
96
+ $$
97
+ {L}_{\text{ aux }}\left( {X,\mathbf{y},\theta }\right) = \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{\begin{Vmatrix}{\nabla }_{\mathbf{y}}{E}_{\theta }\left( X,{\mathbf{y}}^{\left( t\right) }\right) \end{Vmatrix}}_{2}^{2}. \tag{5}
98
+ $$
99
+
100
We simply add the auxiliary loss to the main loss incurred by the task of interest and optimize their sum during training. We further empirically assess the convergence of the inner-loop optimization in Section 5.3.
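Concretely, (5) is accumulated along the unrolled trajectory; in this toy sketch (ours, with a hand-picked quadratic energy and step size) the dynamics are convergent, so the final iterate is close to the minimizer and the gradient-norm penalty stays finite:

```python
def grad_E(y):
    # Toy convex energy E(y) = (y - 2)^2, so grad E = 2 * (y - 2).
    return 2.0 * (y - 2.0)

def unroll_with_aux(y0=0.0, lr=0.25, T=10):
    """Runs T gradient steps and accumulates the penalty of Eq. (5)."""
    y, sq_norms = y0, []
    for _ in range(T):
        g = grad_E(y)
        y -= lr * g
        sq_norms.append(g * g)
    return y, sum(sq_norms) / T

y_final, l_aux = unroll_with_aux()
assert abs(y_final - 2.0) < 1e-2  # inner loop converged
assert l_aux >= 0.0               # penalty is finite and non-negative
```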
101
+
102
+ § 3 UNIVERSAL FUNCTION APPROXIMATION ON SETS
103
+
104
According to the universal function approximation theorem for neural networks [Hornik et al., 1989, Cybenko, 1989, Funahashi, 1989], an infinitely large multi-layer perceptron can approximate any continuous function on compact domains in $\mathbb{R}$ with arbitrary accuracy. In machine learning, we typically do not know the function we aim to approximate. Hence, knowing that neural networks can in theory approximate anything is comforting. Equally, we seek to build inductive biases into our networks in order to facilitate learning, using architectures more sophisticated than multi-layer perceptrons. It is imperative to be aware of whether, and to what extent, those modifications restrict the space of learnable functions.
105
+
106
Constructions similar to Equilibrium Aggregation, i.e. optimization-based models of the form $\mathbf{y} = \arg \mathop{\min }\limits_{\mathbf{y}}G\left( {X,\mathbf{y}}\right)$, have previously been studied in the literature, especially in the context of permutation-sensitive (i.e. not permutation invariant) functions [Pineda, 1987, Finn and Levine, 2017, Bai et al., 2019], and various universal function approximation results have been obtained. It is not obvious, however, how these results translate to the important permutation-invariant case we consider in this paper. Introducing permutation invariance self-evidently restricts the space of functions that can be approximated. In the next section we directly address the question of which set functions can be learned by Equilibrium Aggregation and establish a universality guarantee.
107
+
108
+ § 3.1 UNIVERSALITY OF EQUILIBRIUM AGGREGATION
109
+
110
In this section, we show that Equilibrium Aggregation is indeed able to approximate all continuous permutation invariant functions $\psi$. We start by stating a few assumptions: we assume a fixed input set size $N$ of scalar inputs ${x}_{i}$ (note the dropped boldface, indicating that these are no longer vectors)${}^{2}$ and a scalar output. We further assume that the input space $\mathcal{X}$ is a compact subset of ${\mathbb{R}}^{N}$. For simplicity, and without loss of generality (as we can always rescale the inputs), we choose this to be ${\left\lbrack 0,1\right\rbrack }^{N}$ :
111
+
112
+ $$
113
+ \psi : {\left\lbrack 0,1\right\rbrack }^{N} \rightarrow \mathbb{R}. \tag{6}
114
+ $$
115
+
116
As $\psi$ is permutation invariant, the vector-valued inputs can be seen as (multi)sets. For a discussion of why considering uncountable domains (i.e. the real numbers) is important for continuity, see Section 3 of Wagstaff et al. [2019].
117
+
118
+ We consider a neural network architecture with Equilibrium Aggregation as a global pooling operation of the following form:
119
+
120
+ $$
121
+ \phi \left( X\right) = \rho \left( {\arg \mathop{\min }\limits_{\mathbf{y}}\mathop{\sum }\limits_{i}{F}_{\theta }\left( {{x}_{i},\mathbf{y}}\right) }\right) , \tag{7}
122
+ $$
123
+
124
where ${F}_{\theta }$ (the potential function) and $\rho$ are modeled by neural networks, which are assumed to be universal function approximators. Note that, for simplicity of the proof, we implicitly set the regulariser to 0. We refer to the output of $\arg \mathop{\min }\limits_{\mathbf{y}}\mathop{\sum }\limits_{i}{F}_{\theta }\left( {{x}_{i},\mathbf{y}}\right)$ as the latent space, analogous to the terminology used in Wagstaff et al. [2019] with respect to the Deep Sets architecture [Zaheer et al., 2017]. We prove the following:
125
+
126
Theorem 1 Let the latent space be of size $M = N$, i.e. $\mathbf{y} \in {\mathbb{R}}^{N}$. Then all permutation invariant continuous functions $\psi$ can be approximated with Equilibrium Aggregation as defined in (7).
127
+
128
+ Proof For the purpose of this proof, we assume ${F}_{\theta }$ takes the form:
129
+
130
+ $$
131
+ {F}_{\theta }\left( {{x}_{i},\mathbf{y}}\right) = \mathop{\sum }\limits_{{k = 1}}^{M}{\left( \frac{{y}_{k}}{N} - {x}_{i}^{k}\right) }^{2}, \tag{8}
132
+ $$
133
+
134
where $k$ serves both as an index for the vector $\mathbf{y}$ and as an exponent for ${x}_{i}$. There are two sums now: an inner one in the definition of ${F}_{\theta }$ and an outer one over the set elements in (7). Note that ${F}_{\theta }$ is continuous and can therefore be approximated by a neural network. Importantly, ${F}_{\theta }$ is also convex and can therefore be assumed to be optimised with gradient descent to find the minimizing $\mathbf{y}$. Note that all $M$ terms can be optimised independently as $X$ is fixed. It is a well-known fact that minimising the sum of squares yields the mean:
135
+
136
+ $$
137
+ \arg \mathop{\min }\limits_{z}\mathop{\sum }\limits_{{i = 1}}^{N}{\left( z - {x}_{i}\right) }^{2} = \frac{1}{N}\mathop{\sum }\limits_{{i = 1}}^{N}{x}_{i}. \tag{9}
138
+ $$
139
+
140
+ It follows that minimising the sum of energies defined in (8) yields
141
+
142
+ $$
143
+ {y}_{k}^{\min } = \mathop{\sum }\limits_{i}{x}_{i}^{k}\;\text{ for }k \in \{ 1,\ldots ,N\} . \tag{10}
144
+ $$
145
+
146
For inputs $\left( {{x}_{1},\ldots ,{x}_{N}}\right) \in {\left\lbrack 0,1\right\rbrack }^{N}$, this mapping to $\mathbf{y}$ is evidently continuous and surjective with respect to its range ${\left\lbrack 0,N\right\rbrack }^{N}$. We also know from Lemma 4 in Zaheer et al. [2017] that this mapping is injective and from Lemma 6 that it has a continuous inverse. ${}^{3}$ $\psi$ is continuous by definition and, therefore,
147
+
148
+ $$
149
+ \rho = \psi \circ {\left( \arg \mathop{\min }\limits_{\mathbf{y}}\mathop{\sum }\limits_{i}{F}_{\theta }\left( {x}_{i},\mathbf{y}\right) \right) }^{-1} \tag{11}
150
+ $$
151
+
152
is continuous ${}^{4}$ as long as the inputs ${x}_{i}$ are constrained to $\left\lbrack {0,1}\right\rbrack$ and can therefore be approximated by a neural network. Moreover, via a global re-scaling of the inputs, this proof can be used for any bounded input domain. Hence, any permutation invariant, continuous $\psi$ on a bounded domain can be approximated via Equilibrium Aggregation for a latent space of size $M = N$.
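The backbone of the proof is that the power-sum latent ${y}_{k} = \mathop{\sum }\limits_{i}{x}_{i}^{k}$ is permutation invariant yet separates multisets; a quick numeric illustration (ours, not part of the proof):

```python
def power_sums(xs):
    # y_k = sum_i x_i^k for k = 1..N, the latent used in the proof.
    n = len(xs)
    return [sum(x ** k for x in xs) for k in range(1, n + 1)]

a = [0.2, 0.5, 0.9]
# Permutation invariance: any reordering maps to the same latent.
assert all(abs(p - q) < 1e-12
           for p, q in zip(power_sums(a), power_sums([0.9, 0.2, 0.5])))

# Separation: these two multisets share the same sum (k = 1) but are
# told apart by the second power sum.
b, c = [0.0, 1.0], [0.5, 0.5]
assert power_sums(b)[0] == power_sums(c)[0]
assert power_sums(b)[1] != power_sums(c)[1]
```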
153
+
154
+ § 3.2 COMPARISON TO DEEP SETS
155
+
156
So far, we have only been able to prove that Equilibrium Aggregation scales at least as well as Deep Sets. By that, we mean that universal function approximation can be achieved with $M = N$, i.e. having as many latent dimensions as inputs is sufficient. (For Deep Sets, we also know that $M = N$ is necessary [Wagstaff et al., 2019].) Even though we currently do not know whether it is possible to achieve universal function approximation with a smaller latent space, there is some indication that Equilibrium Aggregation may have more representational power, as we lay out in the following:
157
+
158
Using one latent dimension, Deep Sets with max-pooling can obviously represent $\psi \left( X\right) = \max \left( X\right)$, but it cannot represent (or even approximate) the sum for set sizes larger than 1. Vice versa, sum-pooling can represent $\psi \left( X\right) = \operatorname{sum}\left( X\right)$, but it cannot represent $\max \left( X\right)$ [Wagstaff et al., 2019]. Equilibrium Aggregation can represent both sum and max pooling, each with just one latent dimension (i.e. $\mathbf{y} \in {\mathbb{R}}^{1}$ ), as shown in Table 1.
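The sum row of Table 1 can be checked directly: with $F\left( {x,y}\right) = - x \cdot y$ and regularizer ${y}^{2}/2$, setting the energy gradient $y - \mathop{\sum }\limits_{i}{x}_{i}$ to zero yields exactly the sum, and gradient descent finds it with a single latent dimension (our numeric sketch):

```python
def sum_via_energy(xs, lr=0.1, steps=200):
    # Minimize y^2 / 2 + sum_i (-x_i * y) by gradient descent;
    # the gradient is y - sum(xs), so the minimizer is y* = sum(xs).
    y = 0.0
    for _ in range(steps):
        y -= lr * (y - sum(xs))
    return y

assert abs(sum_via_energy([0.3, 1.2, 2.5]) - 4.0) < 1e-6
```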
159
+
160
+ ${}^{2}$ This is a common simplification in the literature on universal function approximation on sets. For a discussion on how to generalise from the scalar to the vector case, see Hutter [2020].
161
+
162
+ ${}^{3}$ We refer to Appendix B. 4 in Wagstaff et al. [2019] as to why the term $k = 0$ in (10) can be dropped for fixed set sizes.
163
+
164
+ ${}^{4}$ The superscript -1 indicates the functional inverse w.r.t. $X$ .
165
+
166
+ § 4 RELATED WORK
167
+
168
Equilibrium Aggregation sits at the intersection of two machine learning research areas: aggregation functions and implicit layers. In the following, we give an overview of the most closely related work in each field.
169
+
170
+ § 4.1 AGGREGATION FUNCTIONS
171
+
172
Perhaps the most popular approach for obtaining a permutation invariant encoding of sets is sum pooling. A particular instance of this is Deep Sets [Zaheer et al., 2017], as described in (1). A central finding of Wagstaff et al. [2019] is that the latent space, i.e. the dimensionality of the result of $\mathop{\sum }\limits_{i}f\left( {x}_{i}\right) \in {\mathbb{R}}^{M}$, needs to be at least as large as the number of inputs $N$, i.e. $M \geq N$, in order to guarantee universal function approximation. This applies to many other aggregation methods as well and, to the best of our knowledge, there is currently no known pooling operation which avoids this scaling issue.
173
+
174
Principal Neighbourhood Aggregation (PNA) [Corso et al., 2020] addresses the limitations of each individual pooling operator such as sum or max by combining four different pooling operators and three different scaling strategies, resulting in a simultaneous 12-way aggregation. Despite the more sophisticated aggregation procedure, Corso et al. [2020] come to very similar conclusions as Zaheer et al. [2017] and Wagstaff et al. [2019], namely that $N = M$ is both necessary and sufficient. They prove the necessity for any set of aggregators as well as the sufficiency for a specific set. In our work, we further expand this line of thinking by allowing the model to learn the desired aggregation operator, which may include PNA or something drastically different.
175
+
176
+ Learnable Aggregation Functions (LAF) [Pellegrini et al., 2020] provide a similar framework for learning an aggregation operator by expressing it as a combination of several weighted ${L}_{p}$ norms, where the weights and the $p$ parameters are trained jointly with the model. Even though LAFs are capable of expressing operators used in PNA and beyond, it is not clear how they can reproduce other aggregation methods such as attention. In contrast, our method can learn attention (see Appendix B) as well as even more expressive aggregation functions.
177
+
178
+ Finally, Janossy Pooling [Murphy et al., 2019] generalizes the idea of standard, coordinate-wise pooling to make use of higher-order interactions between set elements. Despite the potential for practical effectiveness, it is unclear whether these developments guarantee better approximation results in the general case [Wagstaff et al., 2021]. While Equilibrium Aggregation is also fully compatible with Janossy Pooling and may profit from even more expressive energy functions with pairwise or triplet interactions, this may not be necessary as such interactions can be emulated within the optimization process and ultimately come at a significant computational cost for larger set sizes.
179
+
180
+ In addition to formulating more expressive pooling operators, there is also a body of work concerned with multi-step parametric models for set encoding [Vinyals et al., 2015, Lee et al., 2019]. Inevitably, to achieve permutation invariance these models rely on some kind of pooling as a building block, such as the ones outlined above. Being a drop-in replacement for sum- or attention-pooling, Equilibrium Aggregation can be used in those models, too.
181
+
182
+ § 4.2 IMPLICIT AND OPTIMIZATION-BASED MODELS
183
+
184
+ Gradient-based optimization has been utilized in a large number of applications [Amos, 2019]: image denoising [Putzky and Welling, 2017], molecule generation [Duvenaud et al., 2015, AlQuraishi, 2019], planning [Amos et al., 2018] and combinatorial search [Hottung et al., 2020, Bartunov et al., 2020], to name a few. While there is a large body of work where gradient descent dynamics is decoupled from learning (e.g., Du and Mordatch [2019], Song and Ermon [2019]), our work is particularly closely related to methods that seek to learn the underlying objective function end-to-end, such as Putzky and Welling [2017], Rubanova et al. [2021].
185
+
186
+ A closely-related family of methods involves the idea of defining computations inside a model implicitly, i.e. via a set of conditions that a particular variable must obey instead of prescribing directly how the variable's value should be computed. Deep Equilibrium Models (DEQs) formulate this via a fixed point of an update rule specified by the model [Pineda, 1987, Liao et al., 2018, Bai et al., 2019], and Implicit Graph Neural Networks explore this idea in the context of graphs [Gu et al., 2020]. Neural ODEs [Chen et al., 2018] allow parametrizing the derivative of a continuous-time function that specifies the computation of interest. iMAML [Rajeswaran et al., 2019a] considers an implicit optimization procedure for the purpose of finding model parameters suitable for gradient-based meta-learning [Finn et al., 2017].
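The implicit, fixed-point idea behind DEQs can be illustrated in a toy scalar case (a sketch, not a real DEQ; the weight value and iteration count are ours): the layer output $z$ is defined by the condition $z = \tanh(wz + x)$ rather than by an explicit formula, and is found by iteration.

```python
import math

def implicit_layer(x, w=0.5, iters=100):
    # Fixed-point iteration for z = tanh(w * z + x); since |w| < 1 the map
    # is a contraction and the iteration converges to the implicit output.
    z = 0.0
    for _ in range(iters):
        z = math.tanh(w * z + x)
    return z

z_star = implicit_layer(0.3)
residual = abs(z_star - math.tanh(0.5 * z_star + 0.3))  # ~0 at the fixed point
```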
187
+
188
+ Our work is similar in spirit but focuses specifically on the aggregation block for encoding sets, which can be seen as a small but generic building block that can be combined with arbitrary model architectures. Similarly to OptNet [Amos and Kolter, 2017], we propose a layer architecture that can be used inside another implicit or traditional multi-layer neural network.
189
+
190
191
+
192
+ Figure 3: Median estimation of a 100-number set with three different aggregation methods. The bold lines correspond to the average performance over 5 seeds; the faded lines show the best-performing seed of the respective model. Mean square error is computed for varied set embedding sizes on $8 \times {10}^{5}$ sets.
193
+
194
+ § 5 EXPERIMENTS
195
+
196
+ In this section, we describe three experiments with the goal of analyzing the performance of Equilibrium Aggregation on different tasks and comparing it to existing aggregation methods. Our intention is not to achieve state-of-the-art results on any particular task. Instead, we strive to consider archetypal scenarios and applications in which performance significantly depends on the choice of aggregation method, so that it can be studied in isolation from other issues.
197
+
198
+ In all experiments we train the models for ${10}^{7}$ steps of the Adam optimizer [Kingma and Ba, 2014]. Since maximizing performance is not the goal of our experiments, we do not perform an extensive hyperparameter search, limiting it to a sweep over the learning rate (chosen from $\left\{ {{10}^{-4},3 \times {10}^{-4},{10}^{-3}}\right\}$ ) and the auxiliary loss weight (on MOLPCBA only). To that end, we use a small subset of the training set reserved for validation (Omniglot and MOLPCBA benchmarks only). We rely on a single-GPU training regime using Nvidia P100s and V100s. All experimental code is written in Jax primitives [Bradbury et al., 2018] using Haiku [Hennigan et al., 2020]. Source code for the most crucial parts of our implementation can be found in the Appendix.
199
+
200
+ § 5.1 MEDIAN ESTIMATION
201
+
202
+ In this experiment, the neural network is tasked with predicting the median value of a set of 100 randomly sampled numbers. Each set is sampled from either a Uniform, Gamma or Normal distribution with fixed parameters, similarly to Wagstaff et al. [2019]. The basic architecture for pooling-based aggregation baselines consists of first embedding each number in the set with a fully connected ResNet [He et al., 2016] with layer sizes $\left\lbrack {{256},{256},D}\right\rbrack$ , where $D$ is the set embedding size. Then, the embeddings are pooled with the corresponding method into a $D$ -dimensional vector and the median is predicted from it using another fully connected network with layer sizes $\left\lbrack {D,{128},1}\right\rbrack$ . A simple square loss is used to regress the median.
203
+
204
+ Equilibrium aggregation, in contrast, performs the input encoding and aggregation simultaneously by doing a 5-step gradient optimization of (3) with the potential function implemented as a ResNet with layer sizes $\left\lbrack {{256},{256},1}\right\rbrack$ taking a $D + 1$ -dimensional input ( $D$ for the implicit aggregation result and 1 for the input number). The result is then also transformed into the prediction using the same output network as in the baseline methods.
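A minimal sketch of this inner loop (illustrative only: a hand-picked quadratic energy in place of the learned ResNet potential, a scalar aggregate in place of the $D$-dimensional one, and plain gradient descent in place of the exact optimizer settings). Aggregation is performed by minimizing $y \mapsto \sum_i E(x_i, y)$:

```python
def equilibrium_aggregate(xs, steps=50, lr=0.05):
    # Inner-loop optimization sketch. With E(x, y) = (y - x)^2, the
    # minimizer of the summed energy is the mean of the set, so gradient
    # descent on the energy "computes" mean pooling; a learned potential
    # can instead express problem-specific aggregators.
    y = 0.0
    for _ in range(steps):
        grad = sum(2.0 * (y - x) for x in xs)  # d/dy of sum_i (y - x_i)^2
        y -= lr * grad / len(xs)               # normalized gradient step
    return y

agg = equilibrium_aggregate([1.0, 2.0, 6.0])   # approaches the mean, 3.0
```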
205
+
206
+ We compare three models: Sum aggregation inspired by [?]; Multi-head attention with 4 heads, each operating with $D/4$-dimensional keys, values and learned query vectors; and Equilibrium Aggregation as described above. For each of the models, we vary the embedding size and assess the mean square error after ${10}^{7}$ training steps. Empirical results are shown in Figure 3.
207
+
208
+ Equilibrium aggregation achieves one (for average across 5 seeds) or two (for the best out of 5 seeds) orders of magnitude better estimation error than the baseline pooling methods which confirms its higher representational power in this simple setting. Importantly, in this experiment, there is no distinction between training and test distributions as the samples are continuously drawn and never repeated. Hence, we are primarily testing the representation power of the approaches as opposed to data efficiency in this particular example. However, it is worth noting that all architectures have roughly the same amount of trainable parameters. Presumably, the low error achieved by Equilibrium Aggregation suggests that it managed to discover or reasonably well approximate the analytical solution $F\left( {\mathbf{x},\mathbf{y}}\right) = \left| {\mathbf{x} - \mathbf{y}}\right|$ .
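This claim can be checked numerically with a small sketch (ours, not the paper's code): subgradient descent on $\sum_i |x_i - y|$, i.e. on the energy $E(x, y) = |x - y|$, indeed converges to the median of the set.

```python
def soft_median(xs, steps=2000, lr=0.01):
    # Minimize y -> sum_i |x_i - y| by subgradient descent; the minimizer
    # is the median, matching the analytical energy mentioned above.
    y = sum(xs) / len(xs)  # start from the mean
    for _ in range(steps):
        g = sum(-1.0 if x > y else 1.0 for x in xs)  # subgradient in y
        y -= lr * g
    return y

est = soft_median([0.0, 1.0, 2.0, 10.0, 100.0])  # median is 2.0
```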
209
+
210
+ § 5.2 OMNIGLOT CLASS COUNTING
211
+
212
+ We proceed to the more challenging task of counting the number of unique character classes in a set of 16 Omniglot images, which is inspired by Lee et al. [2019]. Omniglot [Lake et al., 2015] is a dataset of handwritten characters that are organized into alphabets and then into character classes, for each of which only 20 instances are available.
213
+
214
215
+
216
+ Figure 4: Omniglot class counting task.
217
+
218
+ We randomly choose between 1 and 10 character classes and sample their images to form the input set. The model then needs to aggregate those images and infer the number of unique character classes by outputting a vector of probabilities for each of the $1,2,\ldots ,{10}$ possible number of classes (see Figure 4a for a visual illustration).
219
+
220
+ Original images are downsized to ${32} \times {32}$ and encoded using a convolutional ResNet with $\left\lbrack {{16},{32},{64}}\right\rbrack$ hidden channels in each of the three blocks correspondingly. Each block operates with $3 \times 3$ filters and a stride of 2 and hence halves the spatial size of the input tensor. The ResNet output is then flattened and linearly projected into a 256-dimensional input embedding. After the encoding step, as in the previous experiment, Sum, Multi-Head Attention with 4 heads and Equilibrium Aggregation perform set aggregation into a 256-dimensional set embedding, and the number of classes is predicted via a simple softmax distribution using a fully-connected ResNet with layer sizes of $\left\lbrack {{128},{10}}\right\rbrack$ . Equilibrium Aggregation also uses a ResNet potential with $\left\lbrack {{512},{512},{32}}\right\rbrack$ structure, where the output of the last layer is squared and then summed to form a scalar potential value. We used 10 iterations of inner-loop optimization in this experiment.
221
+
222
+ Each model is trained on the characters from the Omniglot train set for ${10}^{7}$ steps with a batch size of 8. Train and test accuracies are reported in Figure 4b. One can see that, again, Equilibrium Aggregation outperforms both of the baselines in terms of both train and test set accuracy. This shows that, on the one hand, Equilibrium Aggregation has a significantly larger capacity and thus better fits the training data. On the other hand, this capacity results in better generalization and, presumably, a more robust aggregation strategy.
223
+
224
225
+
226
+ Figure 5: Inner-loop optimization statistics on MOLPCBA with the GIN architecture. The pink curve shows the maximum value of the ${L}^{1}$ norm along any dimension of the gradient on the last (15th) iteration of the inner loop. A value of ${10}^{-2}$ indicates a small gradient update and therefore good convergence of the optimizer. The dark purple curve tracks the auxiliary loss, i.e. the ${L}^{2}$ norm of the gradient update averaged across all 15 optimization steps (see (5)). Overall, these curves indicate stable, convergent behaviour despite a modest number of inner-loop optimization steps.
227
+
228
+ § 5.3 GLOBAL AGGREGATION IN GRAPH NEURAL NETWORKS
229
+
230
+ Finally, we study the effect of different aggregation methods in the global readout layer of a graph neural network (GNN) on the well-established MOLPCBA benchmark [Hu et al., 2020]. In this task, the model is required to predict 128 global binary properties of an input molecule. This is traditionally implemented within the GNN framework by first applying several layers of message-passing on a graph and then aggregating the resulting 300-dimensional node embeddings into a single 300-dimensional graph representation from which the predictions are made. Since there is more than one prediction task per molecule, mean average precision (MAP) is used as an evaluation metric. The test MAP is reported for the best MAP attained on the validation set as the model is training. The validation and test metrics are periodically evaluated from model snapshots taken approximately every ${10}^{4}$ training steps.
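The readout step being varied here can be sketched in isolation (an illustrative pure-Python stand-in for the batched Sum readout; function and variable names are ours, not the Jraph API): node embeddings are pooled per graph according to their graph membership, producing one embedding per molecule.

```python
def global_readout(node_embs, graph_ids, num_graphs):
    # Global Sum readout for a batch of graphs: accumulate each node's
    # embedding into the slot of the graph it belongs to.
    dim = len(node_embs[0])
    out = [[0.0] * dim for _ in range(num_graphs)]
    for emb, g in zip(node_embs, graph_ids):
        for d in range(dim):
            out[g][d] += emb[d]
    return out

# Two graphs batched together: nodes 0-1 belong to graph 0, node 2 to graph 1.
pooled = global_readout([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], [0, 0, 1], 2)
```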
231
+
232
+ For this experiment, we choose two popular GNN architectures, namely a Graph Convolutional Network (GCN) [Kipf and Welling, 2016] and a Graph Isomorphism Network (GIN) [Xu et al., 2018] that both use a simple Sum readout in their canonical implementations by Hu et al. [2020]. We leave the architectures unchanged and only vary the global readout operation. Our implementation uses the Jraph library [Godwin* et al., 2020] and dynamic batch training with up to 8 graphs and 1024 nodes in a batch.
233
+
234
+ Table 2: Comparison between different aggregation methods on MOLPCBA.
235
+
236
+ | Local Aggregation | Global Aggregation | Validation MAP | Test MAP |
+ | --- | --- | --- | --- |
+ | Graph Convolutional Network [Kipf and Welling, 2016] | Sum | 0.223 | 0.203 |
+ | | Multi-Head Attention | 0.248 | 0.229 |
+ | | Principal Neighbourhood Aggregation | 0.226 | 0.209 |
+ | | Equilibrium Aggregation | 0.269 | 0.252 |
+ | Graph Isomorphism Network [Xu et al., 2018] | Sum | 0.255 | 0.232 |
+ | | Multi-Head Attention | 0.254 | 0.234 |
+ | | Principal Neighbourhood Aggregation | 0.262 | 0.244 |
+ | | Equilibrium Aggregation | 0.263 | 0.246 |
+ | Equilibrium Aggregation | Equilibrium Aggregation | 0.269 | 0.258 |
268
+
269
+ For the potential network we use an architecture similar to the previous experiment with layer sizes $\left\lbrack {{600},{300},{32}}\right\rbrack$ , sum-of-the-squares output and 15 iterations for energy minimization.
270
+
271
+ The results are provided in Table 2. Overall, the empirical findings on MOLPCBA are consistent with the previous experiments, with Multi-Head Attention providing a noticeable performance improvement over the basic Sum aggregation and Equilibrium Aggregation performing even better. In addition, we also evaluate Principal Neighbourhood Aggregation (PNA) [Corso et al., 2020], which has been proposed to address the limitations of each individual pooling method in the context of GNNs and combines 12 combinations of pooling operators and scaling strategies. When combining PNA with the GCN model, our experiments only show minor performance improvements over Sum pooling, in part because of increased overfitting. However, when applied to the GIN architecture, it achieves performance levels almost on par with Equilibrium Aggregation.
272
+
273
+ These results confirm one of the central hypotheses of this research: namely, that the global aggregation of node embeddings is a critical step in graph neural networks. Perhaps surprisingly, the GCN generally benefited more from the more advanced aggregation methods, which is probably due to its smaller number of parameters and thus decreased risk of overfitting. It is also worth noting that top-performing GNN architectures achieve significantly higher test MAP on this task (see, e.g., Yuan et al. [2020], Brossard et al. [2020]).
274
+
275
+ In addition, we test an architecture where both the local (i.e., node-level) and global aggregations are performed using Equilibrium Aggregation. This model yields even better performance, albeit only marginally. While a more careful architecture design that takes into account the specifics of Equilibrium Aggregation could potentially lead to larger performance improvements, it should be noted that the molecular graphs in this task are relatively small, and aggregation at the local level may not be the most critical step for a typical GNN.
276
+
277
+ Besides the task performance, we also investigate the behaviour of the inner-loop optimization. Figure 5 plots two major statistics that quantify this: the max-norm of the energy gradient at the final iterate, $\mathop{\max }\limits_{d}\left| {{\nabla }_{{y}_{d}}E\left( {X,{\mathbf{y}}^{\left( T\right) }}\right) }\right|$ , and ${L}_{\text{aux}}$ (5). One can see that both rapidly decrease during training and that a good degree of convergence is achieved. We observe similar behaviour with the GCN and on the other tasks we considered earlier.
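For concreteness, these two diagnostics can be computed from the per-step inner-loop gradients as follows (a sketch with illustrative numbers; function and variable names are ours):

```python
import math

def inner_loop_stats(grads):
    # Given the gradient vector from each of the T inner-loop steps, report
    # (i) the max-norm of the last step's gradient and (ii) the auxiliary
    # loss, i.e. the L2 norm of the gradient update averaged over all steps.
    max_norm_last = max(abs(g) for g in grads[-1])
    l_aux = sum(math.sqrt(sum(g * g for g in step)) for step in grads) / len(grads)
    return max_norm_last, l_aux

# Gradients shrinking over 3 inner-loop steps indicate good convergence.
grads = [[0.4, -0.8], [0.1, -0.2], [0.01, -0.02]]
m, aux = inner_loop_stats(grads)
```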
278
+
279
+ § 6 DISCUSSION AND CONCLUSION
280
+
281
+ This work provides a novel optimization-based perspective on the widely encountered problem of aggregating sets that is provably universal. Our proposed algorithm, Equilibrium Aggregation, allows learning a problem-specific aggregation mechanism which, as we show, is beneficial across different applications and neural network architectures. The consistent empirical improvement brought by the use of Equilibrium Aggregation not only shows that many existing models suffer from aggressive compression and inefficient representation of sets, but also suggests a whole new class of set- or graph-oriented architectures that employ a composition of Equilibrium Aggregation operations. Beyond GNNs, other classes of models, such as Transformers, may also profit from more expressive aggregation operations, specifically in modelling long-term memory, a topic strongly connected to compression of sets [Rae et al., 2019, Bartunov et al., 2019], as well as potentially reduce the number of layers needed.
282
+
283
+ While there is a strong indication that using Equilibrium Aggregation as a building block is effective, the incurred computational cost may require further developments in differentiable optimization [Ernoult et al., 2020], architecture [Amos et al., 2017] and hardware design [Kendall et al., 2020], especially in order to compete with modern extra-large models.
UAI/UAI 2022/UAI 2022 Conference/BElx3S8s5e9/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,480 @@
1
+ ## Ordinal Causal Discovery
2
+
3
+ ## Abstract
4
+
5
+ Causal discovery for purely observational, categorical data is a long-standing challenging problem. Unlike continuous data, the vast majority of existing methods for categorical data focus on inferring the Markov equivalence class only, which leaves the direction of some causal relationships undetermined. This paper proposes an identifiable ordinal causal discovery method that exploits the ordinal information contained in many real-world applications to uniquely identify the causal structure. The proposed method is applicable beyond ordinal data via data discretization. Through real-world and synthetic experiments, we demonstrate that the proposed ordinal causal discovery method, combined with simple score-and-search algorithms, has favorable and robust performance compared to state-of-the-art alternative methods on both ordinal categorical and non-categorical data. An accompanying R package, OCD, is freely available.
6
+
7
+ ## 1 INTRODUCTION
8
+
9
+ Causal discovery [Spirtes et al., 2000, Pearl, 2009] is becoming increasingly popular in machine learning and finds numerous applications, e.g., biology [Sachs et al., 2005], psychology [Steyvers et al., 2003], and neuroscience [Shen et al., 2020], in which the prevailing goal is to discover causal relationships among variables of interest. The discovered causal relationships are useful for predicting a system's response to external interventions [Pearl, 2009], a key step towards understanding and engineering that system. While the gold standard for causal discovery remains controlled experimentation, it can be too expensive, unethical, or even impossible in many cases, particularly on human beings. Therefore, inferring the unknown causal structures of complex systems from purely observational data is often desirable and, sometimes, the only option.
10
+
11
+ This paper considers causal discovery for ordinal categorical data. Categorical data are common across multiple disciplines. For example, psychologists often use questionnaires to measure latent traits such as personality and depression. The responses to those questionnaires are often categorical, say, with five levels (5-point Likert scale): "strongly disagree", "disagree", "neutral", "agree", and "strongly agree". In genetics, single-nucleotide polymorphisms are categorical variables with three levels (mutation on neither, one, or both alleles). Categorical data also arise as a result of discretization of non-categorical (e.g., continuous and count) data. For instance, in biology, gene expression data are often trichotomized to "underexpression", "normal expression", and "overexpression" [Parmigiani et al., 2002, Pe'er, 2005, Sachs et al., 2005] in order to reduce sequencing technical noise while retaining biological interpretability.
12
+
13
+ While causal discovery for purely observational categorical data has been extensively studied, the vast majority of existing methods [Heckerman et al., 1995, Chickering, 2002] have exclusively focused on Bayesian networks (BNs) with nominal (unordered) categorical variables. It has been well established that a nominal/multinomial BN is generally only identifiable up to a Markov equivalence class in which all BNs encode the same Markov properties. For example, $X \rightarrow Y$ and $Y \rightarrow X$ are Markov equivalent and also distribution equivalent [Spirtes and Zhang, 2016] with a multinomial likelihood; therefore, they are non-identifiable with purely observational data.
14
+
15
+ In many real-world applications, categorical data (including the aforementioned Likert scale, single-nucleotide polymorphisms, and discretized gene expression data) contain ordinal information. In this paper, we show that this often-overlooked ordinal information is crucial in causal discovery for categorical data. We propose an ordinal causal discovery (OCD) method via an ordinal BN. Assuming causal Markov and causal sufficiency, we prove OCD to be identifiable in general for ordinal categorical data. Score-and-search BN structure learning algorithms are developed: exhaustive search for small networks (e.g., bivariate data) and greedy search for moderate-sized networks. Through extensive experiments with real-world and synthetic datasets, we demonstrate that the proposed OCD is identifiable, robust, applicable to both categorical and non-categorical data, and competitive against a range of state-of-the-art causal discovery methods. To the best of our knowledge, we are the first to exploit the ordinal information for causal discovery in categorical data. Our major contributions are four-fold.
16
+
17
+ 1. We advocate the usefulness of ordinal information of categorical data in causal discovery, which has been overlooked in the literature.
18
+
19
+ 2. We propose the first causal discovery method, OCD, for ordinal categorical data.
20
+
21
+ 3. We prove that OCD is generally identifiable for bivariate data, in contrast to the non-identifiability of multinomial BNs.
22
+
23
+ 4. We demonstrate the strong utility of OCD by comparison with state-of-the-art alternatives using real-world and synthetic datasets.
24
+
25
+ ### 1.1 RELATED WORK
26
+
27
+ For brevity, we review causal discovery methods that are fully identifiable with observational data.
28
+
29
+ Non-Categorical Data. Model-based BNs for continuous data are often represented as additive noise models. Under such representation, BNs are generally identifiable if the noises are non-Gaussian [Shimizu et al., 2006], if the functional form of the additive noise model is nonlinear [Hoyer et al., 2009, Zhang and Hyvärinen, 2009], or if the noise variances are equal [Peters and Bühlmann, 2014]. Also see much of the recent literature that focuses on bivariate causal discovery [Mooij et al., 2010, Janzing et al., 2012, Chen et al., 2014, Sgouritsa et al., 2015, Hernandez-Lobato et al., 2016, Marx and Vreeken, 2017, Blöbaum et al., 2018, Marx and Vreeken, 2019, Tagasovska et al., 2020]. For count data, Park and Raskutti [2015] proposed a Poisson BN and showed that it is identifiable based on the overdispersion property of Poisson BNs. By replacing the overdispersion property with a constant moments-ratio property, Park and Park [2019] extended Poisson BNs to the generalized hypergeometric family, which contains many count distributions such as binomial, Poisson, and negative binomial. Recently, Choi et al. [2020] developed a zero-inflated Poisson BN for zero-inflated count data.
30
+
31
+ Categorical Data. For nominal categorical data, causal identification, primarily focused on bivariate data, is possible under certain assumptions [Peters et al., 2010, Suzuki et al., 2014, Liu and Chan, 2016, Cai et al., 2018, Compton et al., 2020], e.g., when the categories admit hidden compact representations or when data follow a discrete additive noise model. However, to the best of our knowledge, causal discovery for ordinal data, which are very common in practice, has not been studied. Whether a categorical variable is ordinal or not is, in our opinion, easier to comprehend than the aforementioned assumptions of categorical data (e.g., discrete additive noise).
32
+
33
+ Mixed Data. There are recent developments for mixed data causal discovery [Cui et al., 2018, Tsagris et al., 2018, Sedgewick et al., 2019], some of which include categorical data. However, the ordinal nature of the categorical data is not exploited for causal identification; therefore, these algorithms output Markov equivalent BNs instead of individual BNs. The latent variable approach by Wei et al. [2018] could in principle be extended to ordinal data. However, the causal Markov assumption of latent variables cannot translate to the observed variables and the inferred causality does not have direct causal interpretation on the observed variables.
34
+
35
+ ## 2 BIVARIATE ORDINAL CAUSAL DISCOVERY
36
+
37
+ We first introduce the proposed OCD method for bivariate data, which will be extended to multivariate data in Section 4. Let $\left( {X, Y}\right) \in \{ 1,\ldots , S\} \times \{ 1,\ldots , L\}$ denote a pair of ordinal variables with $S$ and $L$ levels, of which the possible causal relationships, $X \rightarrow Y$ or $Y \rightarrow X$ , are under investigation. Throughout the paper, we make the causal Markov and causal sufficiency assumptions, which are frequently adopted in the causal discovery literature [Pearl, 2009]. The former allows us to interpret the proposed model causally (beyond conditional independence) whereas the latter asserts that there are no unmeasured confounders.
38
+
39
+ The bivariate OCD considers the following probability distribution for causal model $X \rightarrow Y$ ,
40
+
41
+ $$
42
+ {p}_{X \rightarrow Y}\left( {X, Y}\right) = p\left( X\right) p\left( {Y \mid X}\right) , \tag{1}
43
+ $$
44
+
45
+ where $p\left( X\right)$ is a multinomial/categorical distribution with probabilities $\mathbf{\pi } = \left( {{\pi }_{1},\ldots ,{\pi }_{S}}\right)$ with $\mathop{\sum }\limits_{{s = 1}}^{S}{\pi }_{s} = 1$ , and $p\left( {Y \mid X}\right)$ is defined by an ordinal regression model [Agresti, 2003],
46
+
47
+ $$
48
+ \Pr \left( {Y \leq \ell \mid X}\right) = F\left( {{\gamma }_{\ell } - {\beta }_{X}}\right) ,\ell = 1,\ldots , L, \tag{2}
49
+ $$
50
+
51
+ where ${\beta }_{X}$ is a generic notation of ${\beta }_{1},\ldots ,{\beta }_{S}$ for $X = 1,\ldots , S$ . Typical choices of the link function $F$ are the probit and inverse logit, which are empirically quite similar; hereafter we always use the probit link except for the identifiability theory, which is valid for both link functions. We fix ${\gamma }_{1} = 0$ for ordinal regression parameter identifiability [Agresti, 2003]. Equation (2) implies the conditional probability distribution $\Pr \left( {Y = \ell \mid X = s}\right) = F\left( {{\gamma }_{\ell } - {\beta }_{s}}\right) - F\left( {{\gamma }_{\ell - 1} - {\beta }_{s}}\right)$ for $\ell = 1,\ldots , L$ and $s = 1,\ldots , S$ , where ${\gamma }_{0} = - \infty$ and ${\gamma }_{L} = \infty$ . Let $\mathbf{\beta } = \left( {{\beta }_{1},\ldots ,{\beta }_{S}}\right)$ and $\mathbf{\gamma } = \left( {{\gamma }_{2},\ldots ,{\gamma }_{L - 1}}\right)$ . We denote the model ${p}_{X \rightarrow Y}$ by ${p}_{X \rightarrow Y}\left( {X, Y \mid \mathbf{\pi },\mathbf{\beta },\mathbf{\gamma }}\right)$ . Similarly, we define the probability model ${p}_{Y \rightarrow X}$ as ${p}_{Y \rightarrow X}\left( {Y, X \mid \mathbf{\rho },\mathbf{\alpha },\mathbf{\eta }}\right)$ . If the maximum likelihood estimate ${\widehat{p}}_{X \rightarrow Y}$ given observations of $(X, Y)$ is strictly larger than ${\widehat{p}}_{Y \rightarrow X}$ , then $X \rightarrow Y$ is deemed the more likely data-generating causal model.
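For concreteness, the conditional distribution implied by (2) under the probit link can be computed as follows (a sketch; the parameter values are illustrative, with $\gamma_1 = 0$ fixed as above):

```python
from math import erf, sqrt

def Phi(z):
    # Standard normal CDF, i.e. the probit link F.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def ordinal_cond_probs(beta_s, gammas):
    # Pr(Y = l | X = s) = F(gamma_l - beta_s) - F(gamma_{l-1} - beta_s),
    # with gamma_0 = -inf and gamma_L = +inf; `gammas` holds the finite
    # cutpoints (gamma_1, ..., gamma_{L-1}).
    cdf = [Phi(g - beta_s) for g in gammas] + [1.0]
    probs, prev = [], 0.0
    for c in cdf:
        probs.append(c - prev)
        prev = c
    return probs

p = ordinal_cond_probs(beta_s=1.0, gammas=[0.0, 1.0])  # L = 3 levels
```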
52
+
53
+ ## 3 IDENTIFIABILITY
54
+
55
+ We will show that the proposed OCD is generally identifiable if at least one of the variables has at least three levels.
56
+
57
+ ## Definition 1 (Distribution Equivalence)
58
+
59
+ ${p}_{X \rightarrow Y}\left( {X, Y \mid \mathbf{\pi },\mathbf{\beta },\mathbf{\gamma }}\right) \;$ and $\;{p}_{Y \rightarrow X}\left( {Y, X \mid \mathbf{\rho },\mathbf{\alpha },\mathbf{\eta }}\right)$ are distribution equivalent if for any values of $\left( {\mathbf{\pi },\mathbf{\beta },\mathbf{\gamma }}\right)$ there exist values of $\left( {\mathbf{\rho },\mathbf{\alpha },\mathbf{\eta }}\right)$ such that ${p}_{X \rightarrow Y}\left( {X, Y \mid \mathbf{\pi },\mathbf{\beta },\mathbf{\gamma }}\right) = {p}_{Y \rightarrow X}\left( {Y, X \mid \mathbf{\rho },\mathbf{\alpha },\mathbf{\eta }}\right)$ for any $X, Y$ , and vice versa.
60
+
61
+ Distribution equivalent causal models are clearly not distinguishable from each other by examining their observational distributions. The well-known multinomial BNs are distribution equivalent as illustrated in the following example.
62
+
63
+ Example 1 (Multinomial BN) Consider a bivariate multinomial ${BN}$ of $X \rightarrow Y$ whose conditional $p\left( {Y \mid X}\right)$ and marginal $p\left( X\right)$ probability distributions are given in Figure 1(a), and the joint distribution $p\left( {X, Y}\right)$ is given in Figure 1(b). Because of the multinomial assumption, we can find a set of parameters, i.e., the conditional $p\left( {X \mid Y}\right)$ and marginal $p\left( Y\right)$ probabilities (Figure 1(c)) of the reverse causal model $Y \rightarrow X$ , which leads to the same joint distribution. Therefore, the probability distribution does not provide information for causal identification.
64
+
65
+ Incorporating the underappreciated ordinal information, we will show that ${p}_{X \rightarrow Y}\left( {X, Y \mid \mathbf{\pi },\mathbf{\beta },\mathbf{\gamma }}\right)$ and ${p}_{Y \rightarrow X}\left( {Y, X \mid \mathbf{\rho },\mathbf{\alpha },\mathbf{\eta }}\right)$ are generally not distribution equivalent and are, therefore, identifiable.
66
+
67
+ Theorem 1 (Identifiability of OCD) Let $X \in \{ 1,\ldots , S\}$ and $Y \in \{ 1,\ldots , L\}$ where $S, L \geq 2$ and $\max \{ S, L\} \geq 3$ . Suppose $X \rightarrow Y$ is the data generating causal model and the observational probability distribution of $(X, Y)$ is given by
68
+
69
+ $$
70
+ p\left( {X, Y}\right) = {p}_{X \rightarrow Y}\left( {X, Y \mid \mathbf{\pi },\mathbf{\beta },\mathbf{\gamma }}\right) .
71
+ $$
72
+
73
+ For almost all $\left( {\mathbf{\pi },\mathbf{\beta },\mathbf{\gamma }}\right)$ with respect to the Lebesgue measure, the distribution cannot be equivalently represented by the reverse causal model, i.e., there does not exist $\left( {\mathbf{\rho },\mathbf{\alpha },\mathbf{\eta }}\right)$ such that,
74
+
75
+ $$
76
+ p\left( {X, Y}\right) = {p}_{Y \rightarrow X}\left( {Y, X \mid \mathbf{\rho },\mathbf{\alpha },\mathbf{\eta }}\right) ,\forall X, Y.
77
+ $$
78
+
79
+ The proof, based on properties of real analytic functions, is provided in the Supplementary Materials. We demonstrate Theorem 1 by revisiting Example 1.
+
+ Example 2 (Ordinal BN) The conditional $p\left( {Y \mid X}\right)$ and marginal $p\left( X\right)$ probability distributions in Figure 1(a) coincide with those under the ordinal BN ${p}_{X \rightarrow Y}\left( {X, Y \mid \mathbf{\pi },\mathbf{\beta },\mathbf{\gamma }}\right)$ with $\mathbf{\pi } = \left( {{0.25},{0.25},{0.5}}\right) ,\gamma = 1$ , and $\mathbf{\beta } = \left( {1, - 1,1}\right)$ . Given a large enough dataset, the MLE of $p\left( {X, Y}\right)$ can be arbitrarily close to that in Figure 1(b). However, there does not exist any set of parameter values in the reverse causal model ${p}_{Y \rightarrow X}\left( {Y, X \mid \mathbf{\rho },\mathbf{\alpha },\mathbf{\eta }}\right)$ that produces the conditional $p\left( {X \mid Y}\right)$ and marginal $p\left( Y\right)$ probability distributions in Figure 1(c). Therefore, the reverse causal model ${p}_{Y \rightarrow X}\left( {Y, X \mid \mathbf{\rho },\mathbf{\alpha },\mathbf{\eta }}\right)$ cannot adequately fit the data generated from ${p}_{X \rightarrow Y}\left( {X, Y \mid \mathbf{\pi },\mathbf{\beta },\mathbf{\gamma }}\right)$ . For example, even with 100,000 observations, the MLE of $p\left( {X, Y}\right)$ under ${p}_{Y \rightarrow X}\left( {Y, X \mid \mathbf{\rho },\mathbf{\alpha },\mathbf{\eta }}\right)$ still has a large bias (Figure 1(d)), which will never approach 0. Therefore, ${p}_{X \rightarrow Y}\left( {X, Y \mid \mathbf{\pi },\mathbf{\beta },\mathbf{\gamma }}\right)$ can be distinguished from ${p}_{Y \rightarrow X}\left( {Y, X \mid \mathbf{\rho },\mathbf{\alpha },\mathbf{\eta }}\right)$ .
+
+ Note that Theorem 1 excludes the case where both $X$ and $Y$ are binary (i.e., $L = S = 2$ ), under which OCD is not identifiable. This is expected because there is no difference between ordinal and nominal categorical variables in this case; the latter is known to be non-identifiable.
+
+ ## 4 EXTENSION TO MULTIVARIATE ORDINAL CAUSAL DISCOVERY
+
+ While the vast majority of the existing identifiable causal discovery methods for categorical data [Peters et al., 2010, Suzuki et al., 2014, Liu and Chan, 2016, Cai et al., 2018, Compton et al., 2020] have primarily focused on bivariate cases, we extend the proposed bivariate OCD to multivariate data. Let $\mathbf{X} = \left( {{X}_{1},\ldots ,{X}_{p}}\right) \in \left\{ {1,\ldots ,{L}_{1}}\right\} \times \cdots \times \left\{ {1,\ldots ,{L}_{p}}\right\}$ denote $p$ ordinal variables. Let $G = \left( {V, E}\right)$ denote a causal BN with a set of nodes $V = \{ 1,\ldots , p\}$ representing $\mathbf{X}$ and directed edges $E \subset V \times V$ representing direct causal relationships (with respect to $\mathbf{X}$ ). Let ${pa}\left( j\right) = \{ k \mid k \rightarrow j\} \subseteq V$ denote the set of direct causes (parents) of node $j$ in $G$ and let ${\mathbf{X}}_{{pa}\left( j\right) } = \left\{ {{X}_{k} \mid k \in {pa}\left( j\right) }\right\}$ . Given $G$ , the joint distribution of $\mathbf{X}$ factorizes,
+
+ $$
+ p\left( {\mathbf{X} \mid G}\right) = \mathop{\prod }\limits_{{j = 1}}^{p}p\left( {{X}_{j} \mid {\mathbf{X}}_{{pa}\left( j\right) }}\right) , \tag{3}
+ $$
+
+ where each conditional distribution $p\left( {{X}_{j} \mid {\mathbf{X}}_{{pa}\left( j\right) }}\right)$ is an ordinal regression model whose cumulative distribution function is given by, for $\ell = 1,\ldots ,{L}_{j}$ ,
+
+ $$
+ \Pr \left( {{X}_{j} \leq \ell \mid {\mathbf{X}}_{{pa}\left( j\right) }}\right) = F\left( {{\gamma }_{j\ell } - \mathop{\sum }\limits_{{k \in {pa}\left( j\right) }}{\beta }_{{jk}{X}_{k}} - {\alpha }_{j}}\right) ,
+ $$
+
+ where ${\alpha }_{j}$ is the intercept and ${\beta }_{{jk}{X}_{k}}$ is a generic notation of ${\beta }_{jk1},\ldots ,{\beta }_{{jk}{L}_{k}}$ for ${X}_{k} = 1,\ldots ,{L}_{k}$ . We set ${\gamma }_{j1} = {\beta }_{{jk}{L}_{k}} = 0$ for ordinal regression parameter identifiability [Agresti, 2003]. The implied conditional probability distribution is $\Pr \left( {{X}_{j} = \ell \mid {\mathbf{X}}_{{pa}\left( j\right) } = \mathbf{s}}\right) = F\left( {{\gamma }_{j\ell } - \mathop{\sum }\limits_{{k \in {pa}\left( j\right) }}{\beta }_{{jk}{s}_{k}} - {\alpha }_{j}}\right) - F\left( {{\gamma }_{j,\ell - 1} - \mathop{\sum }\limits_{{k \in {pa}\left( j\right) }}{\beta }_{{jk}{s}_{k}} - {\alpha }_{j}}\right)$ for $\ell = 1,\ldots ,{L}_{j}$ and $\mathbf{s} \in \mathop{\prod }\limits_{{k \in {pa}\left( j\right) }}\left\{ {1,\ldots ,{L}_{k}}\right\}$ . In summary, the multivariate OCD model is parameterized by ${\mathbf{\gamma }}_{j} = \left( {{\gamma }_{j2},\ldots ,{\gamma }_{j,{L}_{j} - 1}}\right)$ , ${\mathbf{\beta }}_{jk} = \left( {{\beta }_{jk1},\ldots ,{\beta }_{{jk},{L}_{k} - 1}}\right)$ , and ${\alpha }_{j}$ , for $j = 1,\ldots , p$ and $k \in {pa}\left( j\right)$ .
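As a concrete illustration of this parameterization, the sketch below computes the implied conditional probabilities $\Pr(X_j = \ell \mid \mathbf{X}_{pa(j)} = \mathbf{s})$ from the cutpoints and the linear predictor. The choice of the standard logistic CDF for the link $F$ and all numeric values are illustrative assumptions, not taken from the paper.

```python
import math

def logistic(t):
    # Standard logistic CDF, one common choice for the link F.
    return 1.0 / (1.0 + math.exp(-t))

def ordinal_cond_probs(gammas, eta, F=logistic):
    """P(X_j = l | parents) for l = 1..L_j, where eta = sum_k beta_{jk s_k} + alpha_j
    and gammas = (gamma_{j1}, ..., gamma_{j,L_j - 1}) are the ordered cutpoints."""
    # Pr(X_j <= l) = F(gamma_{jl} - eta), with Pr(X_j <= L_j) = 1 by convention.
    cdf = [F(g - eta) for g in gammas] + [1.0]
    # Successive differences of the CDF give the category probabilities.
    return [cdf[0]] + [cdf[l] - cdf[l - 1] for l in range(1, len(cdf))]

# Hypothetical cutpoints and linear predictor for a node with L_j = 4 levels.
probs = ordinal_cond_probs(gammas=[-1.0, 0.0, 1.0], eta=0.5)
```

Because $F$ is a CDF and the cutpoints are ordered, the resulting probabilities are positive and sum to one for any value of the linear predictor.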
+
+ ![019639ca-3f6e-7a2b-b969-fe00d8433099_3_222_174_1311_496_0.jpg](images/019639ca-3f6e-7a2b-b969-fe00d8433099_3_222_174_1311_496_0.jpg)
+
+ Figure 1: Illustration. (a) Conditional $p\left( {Y \mid X}\right)$ and marginal $p\left( X\right)$ probability distributions. They coincide with those under ${p}_{X \rightarrow Y}\left( {X, Y \mid \mathbf{\pi },\mathbf{\beta },\mathbf{\gamma }}\right)$ with $\mathbf{\pi } = \left( {{0.25},{0.25},{0.5}}\right) ,\gamma = 1$ , and $\mathbf{\beta } = \left( {1, - 1,1}\right)$ . (b) The joint distribution $p\left( {X, Y}\right) = p\left( X\right) p\left( {Y \mid X}\right)$ . (c) Conditional $p\left( {X \mid Y}\right)$ and marginal $p\left( Y\right)$ probability distributions from the same joint distribution $p\left( {X, Y}\right)$ . (d) Maximum likelihood estimate of $p\left( {X, Y}\right)$ under ${p}_{Y \rightarrow X}\left( {Y, X \mid \mathbf{\rho },\mathbf{\alpha },\mathbf{\eta }}\right)$ using data generated from $p\left( {X, Y}\right)$ in (b) with sample size 100,000.
+
+ ## 5 CAUSAL GRAPH STRUCTURE LEARNING
+
+ We develop simple score-and-search learning algorithms to estimate the structure of causal graphs, which already show strong empirical performance (see Section 6), although more sophisticated learning methods such as Bayesian inference could be adopted to further improve performance.
+
+ Score. We score causal graphs by the Bayesian information criterion (BIC). We choose BIC over AIC because its heavier penalty on model complexity favors a more parsimonious causal graph and generally yields better empirical performance. Let $\mathbf{x} = \left( {{\mathbf{x}}_{1},\ldots ,{\mathbf{x}}_{n}}\right)$ denote $n$ realizations of $\mathbf{X}$ . The score of $G$ (smaller is better) is given by
+
+ $$
+ \operatorname{BIC}\left( {G \mid \mathbf{x}}\right) = - 2\mathop{\sum }\limits_{{i = 1}}^{n}\log \widehat{p}\left( {{\mathbf{x}}_{i} \mid G}\right) + K\log \left( n\right) ,
+ $$
+
+ where $K$ is the number of model parameters and $\widehat{p}\left( {{\mathbf{x}}_{i} \mid G}\right)$ is the joint distribution (3) evaluated at ${\mathbf{x}}_{i}$ given the MLE of model parameters.
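The score and its parameter count can be sketched in a few lines. Here $K$ follows the parameterization in Section 4: each node contributes ${L}_{j} - 2$ free cutpoints (since ${\gamma }_{j1} = 0$ ), one intercept, and ${L}_{k} - 1$ coefficients per parent; the example graph is hypothetical.

```python
import math

def num_params(levels, parents):
    """K for the multivariate OCD model: per node j, L_j - 2 free cutpoints,
    one intercept alpha_j, and L_k - 1 coefficients per parent k."""
    K = 0
    for j, L_j in enumerate(levels):
        K += (L_j - 2) + 1 + sum(levels[k] - 1 for k in parents[j])
    return K

def bic_score(loglik, K, n):
    # BIC(G | x) = -2 * log-likelihood at the MLE + K * log(n); smaller is better.
    return -2.0 * loglik + K * math.log(n)

# Chain graph 1 -> 2 -> 3 with L = 5 levels per node:
K = num_params([5, 5, 5], parents=[[], [0], [1]])  # 4 + 8 + 8 = 20 parameters
```

In practice the log-likelihood term comes from fitting one ordinal regression per node; the sketch abstracts that away.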
+
+ Exhaustive Search. For small networks (say $p = 2$ or 3), we compute the scores of all networks in the space $\mathcal{G}$ and identify $\widehat{G} = \arg \mathop{\min }\limits_{{G \in \mathcal{G}}}\operatorname{BIC}\left( {G \mid \mathbf{x}}\right)$ . While this approach is exact and useful for bivariate OCD, it becomes computationally infeasible for moderate-sized networks as the number of networks $\left| \mathcal{G}\right|$ grows super-exponentially in $p$ .
+
+ Greedy Search. We use a simple iterative greedy search algorithm [Chickering, 2002, Scutari et al., 2019] for moderate-sized networks. At each iteration, we score all the graphs that can be reached from the current graph by an edge addition, removal, or reversal. We replace the current graph by the graph with the largest improvement (largest decrease in BIC) and stop the algorithm when the score can no longer be improved. The greedy search is summarized in Algorithm 1, which is guaranteed to find a locally optimal graph. The algorithm can be improved by tabu search and random non-local moves [Scutari et al., 2019], but we do not pursue this direction as the simple greedy algorithm already yields favorable results against state-of-the-art alternative methods. The worst-case per-iteration cost is $O\left( {{pf}\left( {n, m, L}\right) }\right)$ for $p$ nodes, $n$ observations, $m$ maximum number of parents, and $L = \mathop{\max }\limits_{j}{L}_{j}$ maximum levels, where $f\left( {n, m, L}\right)$ is the computational complexity of an ordinal regression with $m$ regressors. This is because at most ${2p}$ score evaluations are required at each iteration [Scutari et al., 2019]. We use the polr function in the R package MASS for ordinal regression, which empirically appears to scale linearly in $n, m$ , and $L$ .
+
+ ## 6 EXPERIMENTS
+
+ We evaluate the proposed and state-of-the-art alternative causal discovery methods with synthetic data as well as three sets of real data. The real data are not categorical and therefore allow us to extend our comparison to causal models designed for continuous data.
+
+ Algorithm 1 Greedy Search
+
+ ---
+
+ Input: data $\mathbf{x}$ , initial graph $G$
+ Compute $\operatorname{BIC}\left( {G \mid \mathbf{x}}\right)$ and set ${\mathrm{{BIC}}}_{ \star } = \operatorname{BIC}\left( {G \mid \mathbf{x}}\right)$ .
+ repeat
+   Initialize Improvement $=$ false.
+   for all graphs ${G}^{\prime }$ reachable from $G$ do
+     Compute $\operatorname{BIC}\left( {{G}^{\prime } \mid \mathbf{x}}\right)$ .
+     if $\operatorname{BIC}\left( {{G}^{\prime } \mid \mathbf{x}}\right) < {\mathrm{{BIC}}}_{ \star }$ then
+       Set $G = {G}^{\prime }$ and ${\mathrm{{BIC}}}_{ \star } = \operatorname{BIC}\left( {{G}^{\prime } \mid \mathbf{x}}\right)$ .
+       Set Improvement $=$ true.
+     end if
+   end for
+ until Improvement is false
+ Output: graph $G$
+
+ ---
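The score-and-search loop can be sketched as follows for graphs represented as sets of directed edges. The BIC computation is abstracted into a generic `score` callable, and the toy score at the end is purely illustrative; this variant takes the best-improving move per iteration, matching the "largest decrease in BIC" rule described in Section 5.

```python
import itertools

def is_dag(p, edges):
    # Kahn's algorithm: the graph is acyclic iff every node can be processed.
    indeg = {v: 0 for v in range(p)}
    for (_, b) in edges:
        indeg[b] += 1
    queue = [v for v in range(p) if indeg[v] == 0]
    seen = 0
    while queue:
        v = queue.pop()
        seen += 1
        for (a, b) in edges:
            if a == v:
                indeg[b] -= 1
                if indeg[b] == 0:
                    queue.append(b)
    return seen == p

def neighbors(p, edges):
    # All DAGs reachable by one edge addition, removal, or reversal.
    out = []
    for (a, b) in itertools.permutations(range(p), 2):
        if (a, b) in edges:
            out.append(edges - {(a, b)})                 # removal
            out.append((edges - {(a, b)}) | {(b, a)})    # reversal
        elif (b, a) not in edges:
            out.append(edges | {(a, b)})                 # addition
    return [g for g in out if is_dag(p, g)]

def greedy_search(p, score, G=frozenset()):
    # Hill-climb: move to the best-scoring neighbor until no move lowers the score.
    best = score(G)
    while True:
        cands = [(score(frozenset(g)), frozenset(g)) for g in neighbors(p, set(G))]
        s, Gp = min(cands, key=lambda t: t[0])
        if s >= best:
            return G, best
        G, best = Gp, s

# Illustrative score: distance to a hypothetical "true" edge set (stands in for BIC).
target = frozenset({(0, 1), (1, 2)})
G_hat, s_hat = greedy_search(3, lambda g: len(g ^ target))
```

With a decomposable score such as BIC, a real implementation would cache per-node scores so that each move requires refitting only the ordinal regressions whose parent sets changed.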
+
+ ### 6.1 SYNTHETIC ORDINAL DATA
+
+ We simulate low-dimensional, higher-dimensional, and bivariate (with confounders) synthetic ordinal data.
+
+ #### 6.1.1 Low-Dimensional Multivariate Ordinal Data
+
+ We consider synthetic ordinal data $\left( {n = {500}, p = {10}}\right)$ . To mimic survey data with 5-point Likert-scale questionnaires, we simulate data from the proposed OCD model with ${L}_{j} = L = 5,\forall j$ . The true BN is generated randomly (Figure 2(a)) and has one v-structure (i.e., a subgraph $j \rightarrow k \leftarrow i$ ). Its Markov equivalence class, represented by a completed partially directed acyclic graph (CPDAG), can be obtained by removing the directionality of the red dashed edges in Figure 2(a). We consider 6 scenarios with different levels of signal strength by generating the simulation-truth ${\beta }_{{jk}\ell }$ ’s and ${\alpha }_{j}$ ’s independently from $N\left( {0,{\sigma }^{2}}\right)$ with $\sigma = {0.25},{0.5},{0.75},1,{1.25},{1.5}$ . Parameters ${\gamma }_{j\ell }$ ’s are chosen to have balanced class sizes for each variable.
+
+ Implementations. Standard causal discovery methods for categorical data are multinomial BNs with the BIC or BDe score, which discard the ordinal information and therefore only estimate Markov equivalence classes. They are implemented using model averaging with 500 bootstrapped samples (page 145, Scutari and Denis 2014). We compare them with the proposed OCD, all implemented using greedy search. In addition, we also consider a two-step procedure [Friedman and Koller, 2003] which first learns a causal ordering and then estimates the causal multinomial BN given the ordering based on BIC (called "BIC+" hereafter). This procedure outputs an estimated BN.
+
+ Metrics. We compute the structural Hamming distance (SHD) and the structural intervention distance (SID) with the R package SID. The SHD between two graphs is the number of edge additions, deletions, or reversals required to transform one graph into the other. The SID measures "closeness" between two causal graphs in terms of their implied intervention distributions (see Peters and Bühlmann 2015 for the formal definition). Note that since multinomial BNs with BIC and BDe can only identify the CPDAG, the smallest SHD that they can achieve is 5 (the number of undirected edges in the true CPDAG).
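For intuition, the SHD can be sketched for graphs given as sets of directed edges; a reversed edge counts as one operation. This is a simplified stand-in for the R package implementation.

```python
def shd(g1, g2):
    """Structural Hamming distance: number of edge additions, deletions,
    or reversals needed to turn g1 into g2 (graphs as sets of directed edges)."""
    d, handled = 0, set()
    for (a, b) in g1 | g2:
        if (a, b) in handled or (b, a) in handled:
            continue  # each node pair is counted at most once
        handled.add((a, b))
        in1 = (a, b) in g1 or (b, a) in g1   # adjacency present in g1?
        in2 = (a, b) in g2 or (b, a) in g2   # adjacency present in g2?
        if in1 != in2:
            d += 1                           # edge must be added or deleted
        elif ((a, b) in g1) != ((a, b) in g2):
            d += 1                           # same adjacency, flipped orientation
    return d

print(shd({(0, 1), (1, 2)}, {(0, 1), (2, 1)}))  # prints 1 (one reversal)
```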
+
+ Results. The SHD and SID averaged over 5 repeated simulations are shown in Figure 2(b)-(c) as functions of the signal strength $\sigma$ . Since multinomial BNs with BDe and BIC only estimate CPDAGs, we report the lower bounds of their SID. Several conclusions can be drawn. First, OCD is empirically identifiable because both SHD and SID quickly approach 0 as the signal becomes stronger. Second, OCD uniformly outperforms the alternative methods in both SHD and SID across all signal levels, which suggests that exploiting the ordinal nature of ordinal categorical data is crucial for causal discovery. Third, BIC+ is better than BIC and BDe in SHD but not necessarily in SID, suggesting the estimated causal ordering from BIC+ is biased.
+
+ Different Number of Categories. In the Supplementary Materials, we present additional simulation scenarios with a different number $L = 3$ of categories. As in the scenarios with $L = 5$ , OCD significantly outperforms the competing methods.
+
+ #### 6.1.2 Higher-Dimensional Multivariate Ordinal Data
+
+ We fix the sample size $n = {500}$ and the number of categories $L = 5$ but vary the number of nodes $p = {10},{20},\ldots ,{100}$ and the signal strength $\sigma = {0.25},{0.5},{0.75},1$ . The graphs are kept at the same sparsity as in Section 6.1.1 across $p$ (denser graphs will be considered later). The SHD is shown in Figure 3 whereas the SID is provided in the Supplementary Materials due to the space limit. The proposed OCD uniformly outperforms the competing methods BDe, BIC, and BIC+ across $p$ and $\sigma$ . In general, OCD is quite stable as $p$ increases when the signal strength is moderate to large ($\sigma \geq {0.5}$) whereas the competing methods quickly deteriorate with $p$ regardless of the signal strength.
+
+ Scalability. We investigate the scalability of the proposed OCD with respect to $n, L$ , and $p$ . We vary $n = {500},{750},\ldots ,{2750}$ (keeping $p = {10}$ and $L = 5$ ), $L = 5,\ldots ,{14}$ (keeping $n = {500}$ and $p = {10}$ ), and $p = {10},{20},\ldots ,{100}$ (keeping $n = {500}$ and $L = 5$ ). The total CPU times in seconds on a 2.9 GHz 6-core Intel Core i9 laptop are provided in the Supplementary Materials. The greedy search appears to scale linearly in $n$ and $L$ , and quadratically in $p$ , which agrees with the complexity analysis in Section 5. It is moderately scalable: e.g., for $p = {100}$ , the search completes in about 3 hours.
+
+ Denser Graphs. In the Supplementary Materials, we present additional simulation scenarios with denser graphs for $p = {50}$ nodes and more v-structures, which lead to similar conclusions, i.e., OCD significantly outperforms the competing methods in SHD and SID.
+
+ ![019639ca-3f6e-7a2b-b969-fe00d8433099_5_389_177_967_442_0.jpg](images/019639ca-3f6e-7a2b-b969-fe00d8433099_5_389_177_967_442_0.jpg)
+
+ Figure 2: Synthetic ordinal data. The dashed lines in (c) are the lower bounds of SID of BDe and BIC, which output CPDAGs instead of BNs.
+
+ ![019639ca-3f6e-7a2b-b969-fe00d8433099_5_166_739_1414_408_0.jpg](images/019639ca-3f6e-7a2b-b969-fe00d8433099_5_166_739_1414_408_0.jpg)
+
+ Figure 3: SHD for OCD, BDe, BIC, and BIC+ as functions of $p$ in the synthetic ordinal data with the sample size fixed at $n = {500}$ and different signal strengths $\sigma \in \{ {0.25},{0.5},{0.75},1\}$ .
+
+ #### 6.1.3 Bivariate Ordinal Data with Unmeasured Confounders
+
+ While our identifiability theory assumes no unmeasured confounders, we now empirically test the sensitivity of OCD to unmeasured confounders for bivariate ordinal data. We generate trivariate ordinal data $\left( {{X}_{1},{X}_{2},{X}_{3}}\right)$ with $L = 5$ from the following true causal graph,
+
+ ![019639ca-3f6e-7a2b-b969-fe00d8433099_5_391_1706_222_118_0.jpg](images/019639ca-3f6e-7a2b-b969-fe00d8433099_5_391_1706_222_118_0.jpg)
+
+ We hide ${X}_{3}$ as a confounder and apply OCD to $\left( {{X}_{1},{X}_{2}}\right)$ . In the simulation truth, we assume ${\beta }_{{jk}\ell }$ , for each $\ell = 1,\ldots , L$ , to be the same for all $j \neq k$ , i.e., the confounding effect equals the causal effect, and simulate it from $N\left( {0,{\sigma }^{2}}\right)$ . We consider different levels of signal strength $\sigma = {0.25},{0.5},{0.75},1,{1.25},{1.5}$ and different sample sizes $n = {100},{200},\ldots ,{1000}$ . Under each combination of $\left( {\sigma , n}\right)$ , we repeat the experiment 100 times and report the average accuracy (ACC) for forced decisions. A forced decision requires the method to choose between ${X}_{1} \rightarrow {X}_{2}$ and ${X}_{2} \rightarrow {X}_{1}$ . The same metric has been used in similar bivariate causal discovery problems [Mooij et al., 2016, Tagasovska et al., 2020]. OCD is relatively robust to confounders (Figure 4(a)): it is able to correctly identify the causal direction given a large enough sample size or when the signal is sufficiently strong. For comparison, we apply a recent causal discovery method for bivariate nominal categorical data, HCR [Cai et al., 2018]. Its average ACC is shown in Figure 4(b). We find the ACC of HCR is uniformly lower than that of OCD, although we note that HCR is not specifically designed for this task.
+
+ ### 6.2 SACHS'S SINGLE-CELL FLOW CYTOMETRY DATA
+
+ We evaluate the proposed OCD on the well-known single-cell flow cytometry dataset [Sachs et al., 2005], which contains measurements of 11 phosphorylated proteins under different experimental conditions. Sachs et al. 2005 provided a consensus causal network of these proteins, which can be used to gauge the performance of causal discovery algorithms. As in Tagasovska et al. 2020, we consider the cd3cd28 dataset with 853 cells subject to the same experimental condition.
+
+ ![019639ca-3f6e-7a2b-b969-fe00d8433099_6_154_177_1449_485_0.jpg](images/019639ca-3f6e-7a2b-b969-fe00d8433099_6_154_177_1449_485_0.jpg)
+
+ Figure 4: Synthetic ordinal data with confounders. Average ACC of (a) OCD and (b) HCR under different sample sizes and levels of signal strength.
+
+ Implementations. Since the raw measurements are highly skewed and heavy-tailed, Sachs et al. 2005 discretized the data into $L = 3$ levels ("low","average", and "high") and fit a multinomial BN based on the Bayesian Dirichlet equivalent uniform (BDe) score [Heckerman et al., 1995]. As we will see, this approach throws away the ordinal information inherent in the raw measurements and hence significantly underperforms OCD (with greedy search). For comparison, we also apply ANM [Hoyer et al., 2009], LiNGAM [Shimizu et al., 2006], RESIT with the Gaussian process implementation [Peters et al., 2014], bivariate causal discovery methods (HCR, bQCD [Tagasovska et al., 2020], GR-AN [Hernandez-Lobato et al., 2016], IGCI with uniform measure [Janzing et al., 2012], SLOPE [Marx and Vreeken, 2017]), and methods inferring Markov equivalence classes (PC [Spirtes et al., 2000], CPC [Ramsey et al., 2012], GES [Chickering, 2002], IAMB [Tsamardinos et al., 2003], multinomial BNs with BIC and BDe), and the mixed data approach MXM [Tsagris et al., 2018] to the raw continuous data. For bivariate causal discovery methods, we follow a similar ad hoc procedure in Tagasovska et al. 2020: first run CAM [Bühlmann et al., 2014] and then orient the estimated edges by the bivariate methods. HCR is the closest competitor as it is also designed for categorical data although with a very different scope (only applicable to bivariate nominal categorical data and assuming the existence of hidden compact representations).
+
+ Metrics. We use the same SHD and SID metrics as in Section 6.1. For methods that output CPDAGs instead of BNs, we report the lower and upper bounds of SID.
+
+ Results. In Table 1, we summarize the SHD and SID. OCD shows very strong performance compared with state-of-the-art alternatives. It has the lowest SHD and the second lowest SID, which shows the benefit of discretization for highly noisy data. The substantial improvement of OCD over the multinomial BN with BDe (SHD 14 vs 21) highlights the importance of exploiting the ordinal information of discrete data for causal discovery. While there is strong motivation (e.g., biological interpretation) to use $L = 3$ for this dataset, we test OCD with $L$ up to 10. OCD stays very competitive within this range: the SID remains 62 for all $L$ whereas the SHD slightly increases with $L$ , possibly due to the relatively small sample size, e.g., $\mathrm{{SHD}} = {16}$ for $L = {10}$ , which is still quite competitive (second to $\mathrm{{SHD}} = {15}$ for bQCD and IGCI).
+
+ Table 1: Sachs's data. Methods (marked by *) that are only applicable to bivariate data are combined with CAM. PC, CPC, GES, IAMB, BIC, BDe, and MXM only learn CPDAGs; we provide the lower and upper bounds of SID.
+
+ <table><tr><td/><td>OCD</td><td>bQCD*</td><td>IGCI*</td><td>GR-AN*</td></tr><tr><td>SHD</td><td>14</td><td>15</td><td>15</td><td>16</td></tr><tr><td>SID</td><td>62</td><td>69</td><td>82</td><td>80</td></tr><tr><td/><td>HCR*</td><td>SLOPE*</td><td>ANM</td><td>LiNGAM</td></tr><tr><td>SHD</td><td>16</td><td>17</td><td>17</td><td>17</td></tr><tr><td>SID</td><td>76</td><td>86</td><td>78</td><td>86</td></tr><tr><td/><td>PC</td><td>CPC</td><td>GES</td><td>IAMB</td></tr><tr><td>SHD</td><td>18</td><td>18</td><td>18</td><td>20</td></tr><tr><td>SID</td><td>50-83</td><td>50-80</td><td>50-80</td><td>70-79</td></tr><tr><td/><td>BIC</td><td>BDe</td><td>MXM</td><td>RESIT</td></tr><tr><td>SHD</td><td>20</td><td>21</td><td>21</td><td>40</td></tr><tr><td>SID</td><td>53-77</td><td>49-104</td><td>49-104</td><td>45</td></tr></table>
+
+ ### 6.3 CAUSEEFFECTPAIRS (CEP) BENCHMARK DATA
+
+ We consider the CauseEffectPairs (CEP) benchmark data [Mooij et al., 2016] (version: 12/20/2017), which contain 108 datasets from 37 domains (e.g., biology, economy, engineering, and meteorology). Each dataset contains a pair of variables $(X, Y)$ for which the causal relationship is clear from the context, e.g., older "age" causes higher "glucose". We retain the same 99 pairs as in Tagasovska et al. 2020 that have univariate non-binary cause and effect variables.
+
+ Implementations. We compare OCD with HCR, bQCD, IGCI, CAM, SLOPE, LiNGAM, and RESIT. To apply OCD and HCR, we discretize each variable at $L - 1$ quantiles for $L \in \{ {10},\ldots ,{20}\}$ . All other methods are applied to the (standardized) continuous data without discretization.
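The discretization step can be sketched as follows. This rank-based variant cuts at the $L - 1$ empirical quantiles with near-equal class sizes; how ties are broken is an implementation assumption the paper does not spell out.

```python
def discretize(values, L):
    """Discretize a continuous variable into L ordinal levels by cutting at
    the L - 1 empirical quantiles (ties broken by rank order)."""
    n = len(values)
    # Sort observation indices by value, then map ranks to levels 1..L.
    order = sorted(range(n), key=lambda i: values[i])
    levels = [0] * n
    for rank, i in enumerate(order):
        levels[i] = rank * L // n + 1   # near-equal class sizes
    return levels

x = [0.1, 2.5, 1.3, 0.7, 3.9, 2.2]
print(discretize(x, 3))  # -> [1, 3, 2, 1, 3, 2]
```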
+
+ Metrics. We compute the ACC for forced decisions as in Section 6.1.3 and, additionally, the area under the receiver operating curve (AUC) for ranked decision. The ranked decision ranks the confidence of the causal direction [Mooij et al., 2016, Tagasovska et al., 2020]. The simple heuristic confidence [Mooij et al., 2016] is adopted here. For instance, for the proposed OCD, we define the confidence of $X \rightarrow Y$ to be ${C}_{X \rightarrow Y} = \operatorname{BIC}\left( {Y \rightarrow X \mid \mathbf{x}}\right) - \operatorname{BIC}\left( {X \rightarrow Y \mid \mathbf{x}}\right)$ .
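In code, the forced and ranked decisions for a single pair reduce to a BIC comparison; the numeric BIC values below are hypothetical.

```python
def forced_and_ranked(bic_xy, bic_yx):
    """Forced decision: pick the direction with the smaller BIC.
    Confidence of X -> Y: C = BIC(Y -> X) - BIC(X -> Y); larger C means
    stronger evidence for X -> Y, and ranking pairs by C gives the AUC."""
    confidence = bic_yx - bic_xy
    direction = "X->Y" if confidence > 0 else "Y->X"
    return direction, confidence

# Hypothetical scores for one cause-effect pair.
d, c = forced_and_ranked(bic_xy=1020.3, bic_yx=1051.7)
```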
+
+ Results. In Table 2, we summarize the ACC, AUC, and CPU times. For OCD and HCR, the average metrics over $L = {10},\ldots ,{20}$ as well as their standard errors are reported. The proposed OCD is highly competitive in all metrics. OCD has the second highest ACC and AUC, and is fast; it completes the analysis of 99 datasets in 36 seconds. Only IGCI, CAM, and LiNGAM are faster, but they have worse ACC and AUC than OCD. SLOPE has slightly higher ACC and AUC than OCD. However, SLOPE is about 1 or 2 orders of magnitude slower than OCD and relatively sensitive to small added noise (see the additional experiments that investigate the "Sensitivity to Small Added Noise" in the Supplementary Materials). Finally, the small standard errors of the performance metrics of OCD indicate its relative robustness with respect to the number $L$ of levels of discretization for the considered datasets and range.
+
+ Table 2: CEP data. Metrics of OCD and HCR are averaged over different values of $L = {10},\ldots ,{20}$ with standard errors given within the parentheses.
+
+ <table><tr><td/><td>OCD</td><td>HCR</td><td>bQCD</td><td>CAM</td></tr><tr><td>ACC</td><td>0.73 (0.01)</td><td>0.44 (0.02)</td><td>0.70</td><td>0.58</td></tr><tr><td>AUC</td><td>0.76 (0.00)</td><td>0.56 (0.02)</td><td>0.72</td><td>0.58</td></tr><tr><td>CPU</td><td>36s (1.7s)</td><td>12m (2.2m)</td><td>7m</td><td>11s</td></tr><tr><td/><td>IGCI</td><td>SLOPE</td><td>LiNGAM</td><td>RESIT</td></tr><tr><td>ACC</td><td>0.66</td><td>0.76</td><td>0.42</td><td>0.53</td></tr><tr><td>AUC</td><td>0.51</td><td>0.84</td><td>0.59</td><td>0.56</td></tr><tr><td>CPU</td><td>1s</td><td>24m</td><td>3s</td><td>12h</td></tr></table>
+
+ ### 6.4 SINGLE-CELL RNA-SEQUENCING DATA
+
+ We further validate the proposed OCD with a publicly available single-cell RNA-sequencing (scRNA-seq) dataset of 2,717 murine embryonic stem cells [Klein et al., 2015]. We obtain a list of literature-curated pairs of a transcription factor $(X)$ and its target $(Y)$ from the TRRUST database [Han et al., 2018], which provides biological ground truth of the causal relationships, namely $X \rightarrow Y$ . We then extract the corresponding genes from the scRNA-seq dataset. Removing genes with more than ${90}\%$ zeros (these genes have very low statistical variability), we retain 6,701 pairs for causal validation, which still have ${62}\%$ zeros. The zeros in scRNA-seq data are either (a) true biological zero counts or (b) small counts that are too low to detect. In either case, they can be regarded as "low expression". We compare OCD with the best performing methods in Section 6.3, bQCD and SLOPE, as well as the closest competitor HCR. We are not able to generate results (runtime errors) from CAM, LiNGAM, and RESIT, possibly because of the large percentage of zeros. To apply OCD and HCR, we trichotomize the data at 0 and the median of the non-zero expression (i.e., "low", "average", and "high" expression). ACC and CPU time are reported in Table 3. OCD is the best and is the only method that is better than a random guess (p-value $= {10}^{-{75}}$ , binomial test with ${H}_{0} : p = {0.5}$ vs ${H}_{a} : p > {0.5}$ ) for this dataset, possibly because of its highly non-standard distribution due to zero-inflation. Therefore, although discretizing continuous or count data may lose information, it often improves robustness by not imposing a particular distributional assumption on the raw data.
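The significance claim can be checked with a one-sided binomial test. The sketch below uses a normal approximation with continuity correction rather than the exact test, so it reproduces the order of magnitude of the p-value, not its exact value.

```python
import math

def binom_test_upper(k, n, p0=0.5):
    """One-sided test of H0: p = p0 vs Ha: p > p0, via a normal approximation
    with continuity correction (adequate here since n is large)."""
    mu, sd = n * p0, math.sqrt(n * p0 * (1 - p0))
    z = (k - 0.5 - mu) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))   # upper-tail P(Z >= z)

# Roughly 61% correct directions out of 6,701 pairs gives a vanishing p-value.
pval = binom_test_upper(k=int(0.61 * 6701), n=6701)
```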
+
+ Table 3: Single-cell RNA-seq data.
+
+ <table><tr><td/><td>OCD</td><td>HCR</td><td>bQCD</td><td>SLOPE</td></tr><tr><td>ACC</td><td>0.61</td><td>0.36</td><td>0.45</td><td>0.50</td></tr><tr><td>CPU</td><td>19m</td><td>22m</td><td>3.4h</td><td>2h</td></tr></table>
+
+ ## 7 CONCLUSION
+
+ There are several limitations of the current work, which we plan to address in future work. First, the current score-and-search algorithm outputs a point estimate of the causal graph with no uncertainty quantification. We plan to develop a fully Bayesian approach by assigning sparsity-inducing priors (i.e., spike-and-slab priors on the $\beta$ ’s) and carrying out posterior inference via Markov chain Monte Carlo. Second, we have only empirically assessed the identifiability of the proposed OCD for multivariate data and for bivariate data with unmeasured confounders; identifiability theory for multivariate categorical data or bivariate categorical data with unmeasured confounders is in general lacking in the causal discovery literature. Third, we have not explicitly addressed the problem of choosing the number $L$ of categories in data discretization. We picked $L = 3$ for genomic data by convention and assessed its robustness up to $L = {10}$ . For non-genomic data, there is no obvious or universal choice of $L$ . Instead of picking a specific $L$ , we have tested the proposed OCD over a range of values. In the future, we plan to develop data-driven ways (e.g., via BIC) to objectively choose $L$ .
+
+ ## References
+
+ Alan Agresti. Categorical Data Analysis, volume 482. John Wiley & Sons, 2003.
+
+ Patrick Blöbaum, Dominik Janzing, Takashi Washio, Shohei Shimizu, and Bernhard Schölkopf. Cause-effect inference by comparing regression errors. In International Conference on Artificial Intelligence and Statistics, pages 900-909, 2018.
+
+ Peter Bühlmann, Jonas Peters, and Jan Ernest. CAM: Causal additive models, high-dimensional order search and penalized regression. The Annals of Statistics, 42(6):2526-2556, 2014.
+
+ Ruichu Cai, Jie Qiao, Kun Zhang, Zhenjie Zhang, and Zhifeng Hao. Causal discovery from discrete data using hidden compact representation. Advances in Neural Information Processing Systems, 2018:2666, 2018.
+
+ Zhitang Chen, Kun Zhang, Laiwan Chan, and Bernhard Schölkopf. Causal discovery via reproducing kernel Hilbert space embeddings. Neural Computation, 26(7):1484-1517, 2014.
+
+ David Maxwell Chickering. Optimal structure identification with greedy search. Journal of Machine Learning Research, 3(Nov):507-554, 2002.
+
+ Junsouk Choi, Robert Chapkin, and Yang Ni. Bayesian causal structural learning with zero-inflated Poisson Bayesian networks. In Advances in Neural Information Processing Systems 33, 2020.
+
+ Spencer Compton, Murat Kocaoglu, Kristjan Greenewald, and Dmitriy Katz. Entropic causal inference: Identifiability and finite sample results. In Advances in Neural Information Processing Systems, volume 33, pages 14772-14782. Curran Associates, Inc., 2020.
+
+ Ruifei Cui, Perry Groot, Moritz Schauer, and Tom Heskes. Learning the causal structure of copula models with latent variables. In Proceedings of the Thirty-Fourth Conference on Uncertainty in Artificial Intelligence, pages 188-197, 2018.
+
+ Nir Friedman and Daphne Koller. Being Bayesian about network structure. A Bayesian approach to structure discovery in Bayesian networks. Machine Learning, 50(1-2):95-125, 2003.
+
+ Heonjong Han et al. TRRUST v2: an expanded reference database of human and mouse transcriptional regulatory interactions. Nucleic Acids Research, 46(D1):D380-D386, 2018.
+
+ David Heckerman, Dan Geiger, and David M Chickering. Learning Bayesian networks: The combination of knowledge and statistical data. Machine Learning, 20(3):197-243, 1995.
+
+ Daniel Hernandez-Lobato, Pablo Morales-Mombiela, David Lopez-Paz, and Alberto Suarez. Non-linear causal inference using Gaussianity measures. The Journal of Machine Learning Research, 17(1):939-977, 2016.
+
+ Patrik O Hoyer, Dominik Janzing, Joris M Mooij, Jonas Peters, and Bernhard Schölkopf. Nonlinear causal discovery with additive noise models. In Advances in Neural Information Processing Systems, pages 689-696, 2009.
+
+ Dominik Janzing, Joris Mooij, Kun Zhang, Jan Lemeire, Jakob Zscheischler, Povilas Daniušis, Bastian Steudel, and Bernhard Schölkopf. Information-geometric approach to inferring causal directions. Artificial Intelligence, 182:1-31, 2012.
+
+ Allon M Klein et al. Droplet barcoding for single-cell transcriptomics applied to embryonic stem cells. Cell, 161(5):1187-1201, 2015.
+
+ Furui Liu and Laiwan Chan. Causal inference on discrete data via estimating distance correlations. Neural Computation, 28(5):801-814, 2016.
+
+ Alexander Marx and Jilles Vreeken. Telling cause from effect using MDL-based local and global regression. In 2017 IEEE International Conference on Data Mining (ICDM), pages 307-316. IEEE, 2017.
+
+ Alexander Marx and Jilles Vreeken. Identifiability of cause and effect using regularized regression. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 852-861, 2019.
+
+ Joris M Mooij, Oliver Stegle, Dominik Janzing, Kun Zhang, and Bernhard Schölkopf. Probabilistic latent variable models for distinguishing between cause and effect. In Advances in Neural Information Processing Systems, pages 1687-1695, 2010.
+
+ Joris M Mooij, Jonas Peters, Dominik Janzing, Jakob Zscheischler, and Bernhard Schölkopf. Distinguishing cause from effect using observational data: methods and benchmarks. The Journal of Machine Learning Research, 17(1):1103-1204, 2016.
+
+ Gunwoong Park and Hyewon Park. Identifiability of generalized hypergeometric distribution (GHD) directed acyclic graphical models. In The 22nd International Conference on Artificial Intelligence and Statistics, pages 158-166, 2019.
+
+ Gunwoong Park and Garvesh Raskutti. Learning large-scale Poisson DAG models based on overdispersion scoring. In Advances in Neural Information Processing Systems, pages 631-639, 2015.
+
+ Giovanni Parmigiani et al. A statistical framework for expression-based molecular classification in cancer. Journal of the Royal Statistical Society: Series B, 64(4):717-736, 2002.
+
+ Judea Pearl. Causality: Models, Reasoning and Inference. Cambridge University Press, USA, 2nd edition, 2009. ISBN 052189560X.
+
+ Dana Pe'er. Bayesian network analysis of signaling networks: a primer. Science's STKE, 2005(281):pl4, 2005.
+
297
+ Jonas Peters and Peter Bühlmann. Identifiability of Gaussian structural equation models with equal error variances. Biometrika, 101(1):219-228, 2014.
298
+
299
+ Jonas Peters and Peter Bühlmann. Structural intervention distance for evaluating causal graphs. Neural Computation, 27(3):771-799, 2015.
300
+
301
+ Jonas Peters, Dominik Janzing, and Bernhard Schölkopf. Identifying cause and effect on discrete data using additive noise models. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pages 597-604, 2010.
302
+
303
+ Jonas Peters, Joris M Mooij, Dominik Janzing, and Bernhard Schölkopf. Causal discovery with continuous additive noise models. The Journal of Machine Learning Research, 15(1):2009-2053, 2014.
304
+
305
+ Joseph Ramsey, Jiji Zhang, and Peter L Spirtes. Adjacency-faithfulness and conservative causal inference. arXiv preprint arXiv:1206.6843, 2012.
306
+
307
+ Karen Sachs, Omar Perez, Dana Pe'er, Douglas A Lauf-fenburger, and Garry P Nolan. Causal protein-signaling networks derived from multiparameter single-cell data. Science, 308(5721):523-529, 2005.
308
+
309
+ Marco Scutari and Jean-Baptiste Denis. Bayesian Networks: with Examples in R. CRC press, 2014.
310
+
311
+ Marco Scutari, Claudia Vitolo, and Allan Tucker. Learning Bayesian networks from big data with greedy search: Computational complexity and efficient implementation. Statistics and Computing, 29(5):1095-1108, 2019.
312
+
313
+ Andrew J Sedgewick et al. Mixed graphical models for integrative causal analysis with application to chronic lung disease diagnosis and prognosis. Bioinformatics, 35 (7):1204-1212, 2019.
314
+
315
+ Eleni Sgouritsa, Dominik Janzing, Philipp Hennig, and Bernhard Schölkopf. Inference of cause and effect with unsupervised inverse regression. In Artificial Intelligence and Statistics, pages 847-855, 2015.
316
+
317
+ Xinpeng Shen, Sisi Ma, Prashanthi Vemuri, and Gyorgy Simon. Challenges and opportunities with causal discovery algorithms: Application to Alzheimer's pathophysiology. Scientific Reports, 10(1):1-12, 2020.
318
+
319
+ Shohei Shimizu, Patrik O Hoyer, Aapo Hyvärinen, and Antti Kerminen. A linear non-Gaussian acyclic model for causal discovery. Journal of Machine Learning Research, 7(Oct):2003-2030, 2006.
320
+
321
+ Peter Spirtes and Kun Zhang. Causal discovery and inference: Concepts and recent methodological advances. Applied Informatics, 3(1):3, 2016.
322
+
323
+ Peter Spirtes, Clark N Glymour, Richard Scheines, and David Heckerman. Causation, Prediction, and Search. MIT press, 2000.
324
+
325
+ Mark Steyvers, Joshua B Tenenbaum, Eric-Jan Wagenmak-ers, and Ben Blum. Inferring causal networks from observations and interventions. Cognitive Science, 27(3): 453-489, 2003.
326
+
327
+ Joe Suzuki, Takanori Inazumi, Takashi Washio, and Shohei Shimizu. Identifiability of an integer modular acyclic additive noise model and its causal structure discovery. arXiv preprint arXiv:1401.5625, 2014.
328
+
329
+ Natasa Tagasovska, Valérie Chavez-Demoulin, and Thibault Vatter. Distinguishing cause from effect using quantiles: Bivariate quantile causal discovery. In International Conference on Machine Learning, pages 9311-9323. PMLR, 2020.
330
+
331
+ Michail Tsagris, Giorgos Borboudakis, Vincenzo Lagani, and Ioannis Tsamardinos. Constraint-based causal discovery with mixed data. International Journal of Data Science and Analytics, 6(1):19-30, 2018.
332
+
333
+ Ioannis Tsamardinos, Constantin F Aliferis, Alexander R Statnikov, and Er Statnikov. Algorithms for large scale Markov blanket discovery. In FLAIRS Conference, volume 2, pages 376-380, 2003.
334
+
335
+ Wenjuan Wei, Lu Feng, and Chunchen Liu. Mixed causal structure discovery with application to prescriptive pricing. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, pages 5126-5134, 2018.
336
+
337
+ Kun Zhang and Aapo Hyvärinen. On the identifiability of the post-nonlinear causal model. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, UAI '09, page 647-655, Arlington, Virginia, USA, 2009. AUAI Press. ISBN 9780974903958.
338
+
339
+ ## Ordinal Causal Discovery: Supplementary Materials
+
+ ## 1 Proof of Theorem 1
+
+ We need the notion of a real analytic function.
+
+ **Definition (Real Analytic Function)** A real function is said to be analytic if it is infinitely differentiable and matches its Taylor series in a neighborhood of every point.
+
+ Suppose $X \in \{1, \ldots, S\}$ and $Y \in \{1, \ldots, L\}$. Consider two competing causal models $p_{X \rightarrow Y}(X, Y \mid \boldsymbol{\pi}, \boldsymbol{\beta}, \boldsymbol{\gamma})$ and $p_{Y \rightarrow X}(Y, X \mid \boldsymbol{\rho}, \boldsymbol{\alpha}, \boldsymbol{\eta})$. We will show that these two causal models are in general not equivalent, i.e., $P_{X \rightarrow Y}(X = s, Y = \ell \mid \boldsymbol{\pi}, \boldsymbol{\beta}, \boldsymbol{\gamma}) \neq P_{Y \rightarrow X}(X = s, Y = \ell \mid \boldsymbol{\rho}, \boldsymbol{\alpha}, \boldsymbol{\eta})$ for some $s \in \{1, \ldots, S\}$ and $\ell \in \{1, \ldots, L\}$, where $S, L \geq 2$ and $\max\{S, L\} \geq 3$. Without loss of generality, assume $S \geq 3$. We prove this by contradiction. Suppose that for every $s \in \{1, \ldots, S\}$ and $\ell \in \{1, \ldots, L\}$,
+
+ $$
+ P_{X \rightarrow Y}(X = s, Y = \ell \mid \boldsymbol{\pi}, \boldsymbol{\beta}, \boldsymbol{\gamma}) = P_{Y \rightarrow X}(X = s, Y = \ell \mid \boldsymbol{\rho}, \boldsymbol{\alpha}, \boldsymbol{\eta}). \tag{1}
+ $$
+
+ The left-hand side of (1) is given by
+
+ $$
+ \begin{aligned}
+ P_{X \rightarrow Y}(X = s, Y = \ell \mid \boldsymbol{\pi}, \boldsymbol{\beta}, \boldsymbol{\gamma}) &= P_{X \rightarrow Y}(Y = \ell \mid X = s, \boldsymbol{\beta}, \boldsymbol{\gamma}) \, P_{X \rightarrow Y}(X = s \mid \boldsymbol{\pi}) \\
+ &= \left[ P_{X \rightarrow Y}(Y \leq \ell \mid X = s, \boldsymbol{\beta}, \boldsymbol{\gamma}) - P_{X \rightarrow Y}(Y \leq \ell - 1 \mid X = s, \boldsymbol{\beta}, \boldsymbol{\gamma}) \right] P_{X \rightarrow Y}(X = s \mid \boldsymbol{\pi}) \\
+ &= \left[ F(\gamma_{\ell} - \beta_{s}) - F(\gamma_{\ell - 1} - \beta_{s}) \right] \pi_{s},
+ \end{aligned}
+ $$
+
+ where $F(x)$ is either the logistic link $F(x) = \frac{e^{x}}{1 + e^{x}}$ or the probit link $F(x) = \Phi(x)$, with $\Phi(x)$ the standard normal cumulative distribution function. Similarly, the right-hand side of (1) is given by
+
+ $$
+ P_{Y \rightarrow X}(X = s, Y = \ell \mid \boldsymbol{\rho}, \boldsymbol{\alpha}, \boldsymbol{\eta}) = P_{Y \rightarrow X}(X = s \mid Y = \ell, \boldsymbol{\alpha}, \boldsymbol{\eta}) \, P_{Y \rightarrow X}(Y = \ell \mid \boldsymbol{\rho}) = \left[ F(\eta_{s} - \alpha_{\ell}) - F(\eta_{s - 1} - \alpha_{\ell}) \right] \rho_{\ell}.
+ $$
+
+ Therefore, (1) leads to
+
+ $$
+ \left[ F(\gamma_{\ell} - \beta_{s}) - F(\gamma_{\ell - 1} - \beta_{s}) \right] \pi_{s} = \left[ F(\eta_{s} - \alpha_{\ell}) - F(\eta_{s - 1} - \alpha_{\ell}) \right] \rho_{\ell}. \tag{2}
+ $$
+
+ Note that the right-hand side of (2) is a telescoping series in $s$. Hence, summing both sides of (2) over $s$ from $1$ to $S$, we have
+
+ $$
+ \sum_{s = 1}^{S} \left[ F(\gamma_{\ell} - \beta_{s}) - F(\gamma_{\ell - 1} - \beta_{s}) \right] \pi_{s} = \left[ F(\eta_{S} - \alpha_{\ell}) - F(\eta_{0} - \alpha_{\ell}) \right] \rho_{\ell} = \rho_{\ell}. \tag{3}
+ $$
+
+ The last equality holds because $\eta_{S} = \infty$ and $\eta_{0} = -\infty$, and hence $F(\eta_{S} - \alpha_{\ell}) = 1$ and $F(\eta_{0} - \alpha_{\ell}) = 0$. Plugging (3) into (2),
+
+ $$
+ \left[ F(\gamma_{\ell} - \beta_{s}) - F(\gamma_{\ell - 1} - \beta_{s}) \right] \pi_{s} = \left[ F(\eta_{s} - \alpha_{\ell}) - F(\eta_{s - 1} - \alpha_{\ell}) \right] \sum_{s' = 1}^{S} \left[ F(\gamma_{\ell} - \beta_{s'}) - F(\gamma_{\ell - 1} - \beta_{s'}) \right] \pi_{s'},
+ $$
+
+ and hence
+
+ $$
+ \frac{\left[ F(\gamma_{\ell} - \beta_{s}) - F(\gamma_{\ell - 1} - \beta_{s}) \right] \pi_{s}}{\sum_{s' = 1}^{S} \left[ F(\gamma_{\ell} - \beta_{s'}) - F(\gamma_{\ell - 1} - \beta_{s'}) \right] \pi_{s'}} = F(\eta_{s} - \alpha_{\ell}) - F(\eta_{s - 1} - \alpha_{\ell}). \tag{4}
+ $$
+
+ Now, consider $s = 1$ in (4) and note that $\eta_{0} = -\infty$ and $\eta_{1} = 0$:
+
+ $$
+ \frac{\left[ F(\gamma_{\ell} - \beta_{1}) - F(\gamma_{\ell - 1} - \beta_{1}) \right] \pi_{1}}{\sum_{s' = 1}^{S} \left[ F(\gamma_{\ell} - \beta_{s'}) - F(\gamma_{\ell - 1} - \beta_{s'}) \right] \pi_{s'}} = F(\eta_{1} - \alpha_{\ell}) - F(\eta_{0} - \alpha_{\ell}) = F(-\alpha_{\ell}).
+ $$
+
+ Therefore,
+
+ $$
+ \alpha_{\ell} = -F^{-1}\left\{ \frac{\left[ F(\gamma_{\ell} - \beta_{1}) - F(\gamma_{\ell - 1} - \beta_{1}) \right] \pi_{1}}{\sum_{s' = 1}^{S} \left[ F(\gamma_{\ell} - \beta_{s'}) - F(\gamma_{\ell - 1} - \beta_{s'}) \right] \pi_{s'}} \right\}. \tag{5}
+ $$
+
+ Sequentially plugging (5) into (4) for $s^{*} = 2, \ldots, S - 1$ (note that one can plug in at least once, namely for $s^{*} = 2$, because $S \geq 3$), we obtain
+
+ $$
+ \eta_{s^{*}} = F^{-1}\left\{ \frac{\sum_{s = 1}^{s^{*}} \left[ F(\gamma_{\ell} - \beta_{s}) - F(\gamma_{\ell - 1} - \beta_{s}) \right] \pi_{s}}{\sum_{s = 1}^{S} \left[ F(\gamma_{\ell} - \beta_{s}) - F(\gamma_{\ell - 1} - \beta_{s}) \right] \pi_{s}} \right\} - F^{-1}\left\{ \frac{\left[ F(\gamma_{\ell} - \beta_{1}) - F(\gamma_{\ell - 1} - \beta_{1}) \right] \pi_{1}}{\sum_{s = 1}^{S} \left[ F(\gamma_{\ell} - \beta_{s}) - F(\gamma_{\ell - 1} - \beta_{s}) \right] \pi_{s}} \right\}. \tag{6}
+ $$
+
+ Because the left-hand side of (6) does not depend on $\ell$ whereas the right-hand side of (6) does, equating the right-hand side of (6) at two different levels gives
+
+ $$
+ \begin{aligned}
+ & F^{-1}\left\{ \frac{\sum_{s = 1}^{s^{*}} \left[ F(\gamma_{\ell} - \beta_{s}) - F(\gamma_{\ell - 1} - \beta_{s}) \right] \pi_{s}}{\sum_{s = 1}^{S} \left[ F(\gamma_{\ell} - \beta_{s}) - F(\gamma_{\ell - 1} - \beta_{s}) \right] \pi_{s}} \right\} - F^{-1}\left\{ \frac{\left[ F(\gamma_{\ell} - \beta_{1}) - F(\gamma_{\ell - 1} - \beta_{1}) \right] \pi_{1}}{\sum_{s = 1}^{S} \left[ F(\gamma_{\ell} - \beta_{s}) - F(\gamma_{\ell - 1} - \beta_{s}) \right] \pi_{s}} \right\} \\
+ & - F^{-1}\left\{ \frac{\sum_{s = 1}^{s^{*}} \left[ F(\gamma_{\ell^{*}} - \beta_{s}) - F(\gamma_{\ell^{*} - 1} - \beta_{s}) \right] \pi_{s}}{\sum_{s = 1}^{S} \left[ F(\gamma_{\ell^{*}} - \beta_{s}) - F(\gamma_{\ell^{*} - 1} - \beta_{s}) \right] \pi_{s}} \right\} + F^{-1}\left\{ \frac{\left[ F(\gamma_{\ell^{*}} - \beta_{1}) - F(\gamma_{\ell^{*} - 1} - \beta_{1}) \right] \pi_{1}}{\sum_{s = 1}^{S} \left[ F(\gamma_{\ell^{*}} - \beta_{s}) - F(\gamma_{\ell^{*} - 1} - \beta_{s}) \right] \pi_{s}} \right\} = 0,
+ \end{aligned} \tag{7}
+ $$
+
+ for any $\ell, \ell^{*} \in \{1, \ldots, L\}$ with $\ell \neq \ell^{*}$ (note that one can always find $\ell \neq \ell^{*}$ because $L \geq 2$). The link function $F$ is analytic: (i) the logistic link $F(x) = \frac{e^{x}}{1 + e^{x}}$ is a composition of elementary analytic functions and hence is analytic; and (ii) the probit link $F(x) = \Phi(x)$ is analytic because the error function $\operatorname{erf}(\cdot)$ is analytic. Since $F'(x)$ is nowhere zero in either case, $F^{-1}(x)$ is analytic. Since the left-hand side of (7) is a composition of $F$, $F^{-1}$, sums, products, and reciprocals of $\gamma_{\ell}, \gamma_{\ell^{*}}, \gamma_{\ell - 1}, \gamma_{\ell^{*} - 1}, \beta_{1}, \ldots, \beta_{S}, \pi_{1}, \ldots, \pi_{S}$, it is an analytic function (Krantz and Parks, 2002) and therefore its zero set must have Lebesgue measure zero (Mityagin, 2015). In summary, we have proven that the two causal models are not equivalent for almost all $(\boldsymbol{\pi}, \boldsymbol{\beta}, \boldsymbol{\gamma})$ with respect to the Lebesgue measure. Note that although our proof is for the logistic or probit link, it generalizes to other link functions as long as they are analytic and their derivatives are nowhere zero.
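The contradiction driving this proof can also be checked numerically. The sketch below is our illustration, not code from the paper; the parameter values are arbitrary. It evaluates the right-hand side of (6) under the logistic link for $S = L = 3$ and $s^* = 2$, and shows that it takes different values at different levels $\ell$, so no constant $\eta_{s^*}$ can satisfy (6):

```python
import math

def F(x):
    """Logistic link; the boundary cases handle gamma_0 = -inf, gamma_L = +inf."""
    if x == -math.inf:
        return 0.0
    if x == math.inf:
        return 1.0
    return 1.0 / (1.0 + math.exp(-x))

def Finv(p):
    """Inverse logistic link (logit)."""
    return math.log(p / (1.0 - p))

# arbitrary (hypothetical) parameters of the X -> Y model with S = L = 3
beta = [0.3, -0.5, 1.2]                   # beta_1, ..., beta_S
gamma = [-math.inf, 0.0, 1.0, math.inf]   # gamma_0, ..., gamma_L
pi = [0.2, 0.5, 0.3]                      # pi_1, ..., pi_S
S, L = len(beta), len(gamma) - 1

def q(s, ell):
    """Normalized cell probability: the left-hand side of equation (4)."""
    num = (F(gamma[ell] - beta[s - 1]) - F(gamma[ell - 1] - beta[s - 1])) * pi[s - 1]
    den = sum((F(gamma[ell] - beta[t - 1]) - F(gamma[ell - 1] - beta[t - 1])) * pi[t - 1]
              for t in range(1, S + 1))
    return num / den

def eta_required(s_star, ell):
    """Right-hand side of equation (6): the value eta_{s*} would have to equal."""
    return Finv(sum(q(s, ell) for s in range(1, s_star + 1))) - Finv(q(1, ell))

vals = [eta_required(2, ell) for ell in range(1, L + 1)]
print(vals)  # three distinct values, so no constant eta_2 can satisfy (6)
```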
+ ## 2 Additional Experiment Results
+
+ ### 2.1 Synthetic Data
+
+ **Number of Categories $L = 3$.** We investigate scenarios where the number of categories is $L = 3$. The data are generated as in the main text ($n = 500$, $p = 10$) except that the number of categories is set to $L = 3$. Six levels of signal strength are considered, $\sigma \in \{0.25, 0.5, 0.75, 1, 1.25, 1.5\}$. We report the SHD of OCD, BIC+, BIC, and BDe in Table 1, which shows that OCD significantly outperforms the competing methods.
+
+ **Higher-Dimensional Synthetic Data.** As shown in Figure 1, for all tested signal strengths $\sigma \in \{0.25, 0.5, 0.75, 1\}$ and numbers of nodes $p = 10, \ldots, 100$, the SHD and SID of OCD are uniformly better than those of the competing methods. In general, OCD is quite stable as $p$ increases when the signal strength is at least moderate ($\sigma \geq 0.5$), whereas the competing methods quickly deteriorate with $p$ regardless of the signal strength.
+
+ The CPU times of OCD on the synthetic data are shown in Figure 2; they appear to scale linearly in $n$ and $L$, and quadratically in $p$.
+
+ **Synthetic Data with Denser Graphs.** We consider a scenario with $n = 500$ observations, $p = 50$ nodes, and $L = 5$ categories. We randomly generate graphs with 25, 50, and 100 edges (Figure 3). We report the structural Hamming distance (SHD) between the true graph and the estimated graphs from OCD, BIC+, BIC, and BDe in Table 2. We find only a very minor decrease in the performance of OCD, whereas the competing methods perform substantially worse and deteriorate much faster.
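As a reference point for these comparisons, the SHD metric can be sketched in a few lines. This is our illustration, not the paper's code; it assumes 0/1 adjacency-matrix inputs, and this variant counts a missing, extra, or reversed edge as one error each:

```python
def shd(A, B):
    """Structural Hamming distance between two DAGs given as 0/1 adjacency
    matrices (A[i][j] = 1 means an edge i -> j). For every unordered node
    pair, a mismatch in edge status (absent, i -> j, or j -> i) counts once."""
    p = len(A)
    return sum(
        (A[i][j], A[j][i]) != (B[i][j], B[j][i])
        for i in range(p) for j in range(i + 1, p)
    )

true_g = [[0, 1, 0],
          [0, 0, 1],
          [0, 0, 0]]          # chain 1 -> 2 -> 3
est_g  = [[0, 0, 0],
          [1, 0, 1],
          [0, 0, 0]]          # edge 1 -> 2 reversed
print(shd(true_g, est_g))    # -> 1
```

Some SHD variants instead count a reversed edge as two errors; either convention gives zero exactly when the two graphs coincide.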
+ | | $\sigma = 0.25$ | $0.5$ | $0.75$ | $1$ | $1.25$ | $1.5$ |
+ |---|---|---|---|---|---|---|
+ | OCD | 5.2 | 1.6 | 1 | 0.8 | 0.2 | 0.2 |
+ | BIC+ | 6.4 | 5 | 3.6 | 3.8 | 3.8 | 3.6 |
+ | BIC | 7 | 5.8 | 4.6 | 4 | 3.2 | 3.8 |
+ | BDe | 7 | 6.8 | 6.2 | 5.2 | 5 | 4.6 |
+
+ Table 1: Structural Hamming distance between the true graph and the estimated graphs from OCD, BIC+, BIC, and BDe. The data are generated as in the main text with different levels of signal strength $\sigma$, except that the number of categories is set to $L = 3$.
+ ![019639ca-3f6e-7a2b-b969-fe00d8433099_12_186_580_1422_415_0.jpg](images/019639ca-3f6e-7a2b-b969-fe00d8433099_12_186_580_1422_415_0.jpg)
+
+ Figure 1: SID for OCD, BDe, BIC, and BIC+ as functions of $p$ in the synthetic ordinal data with the sample size fixed at $n = 500$ and different signal strengths $\sigma \in \{0.25, 0.5, 0.75, 1\}$.
+
+ ### 2.2 Real Data
+
+ **Sensitivity to Small Added Noise in the CEP Data.** Following the idea in Mooij et al. (2016), we test the sensitivity of the best performing causal discovery methods (OCD with $L = 15$, SLOPE, and bQCD) to small added noise. Specifically, we add independent centered Gaussian noise to $X$ and $Y$ with standard deviation $\tau \in \{10^{-8}, 10^{-7}, \ldots, 10^{-1}\}$. We repeat the noise simulation 5 times at each noise level; the average ACC and AUC are shown in Figure 4. All methods have stable performance for $\tau = 10^{-8}$ to $10^{-5}$. The performance of SLOPE starts deteriorating at $\tau = 10^{-4}$, whereas OCD and bQCD are much more robust (significant drop in ACC only at $\tau = 10^{-1}$). The robustness of OCD is expected because small added noise does not significantly affect the data discretization.
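The perturbation scheme just described can be sketched as follows. This is our illustration: the function name `perturb` and the toy cause-effect pair are ours, and the per-method scoring that produces ACC/AUC is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb(x, y, tau):
    """Add independent centered Gaussian noise with standard deviation tau
    to both variables of a cause-effect pair."""
    return (x + rng.normal(0.0, tau, size=x.shape),
            y + rng.normal(0.0, tau, size=y.shape))

# toy pair: y is a noisy function of x
x = rng.uniform(size=1000)
y = x ** 2 + 0.1 * rng.normal(size=1000)

for tau in [10.0 ** k for k in range(-8, 0)]:   # 1e-8, ..., 1e-1
    for rep in range(5):                         # 5 noise replicates per level
        x_noisy, y_noisy = perturb(x, y, tau)
        # ... run each causal discovery method on (x_noisy, y_noisy)
        # and record ACC / AUC here
```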
+ ## References
+
+ Krantz, S. G. and Parks, H. R. (2002). A Primer of Real Analytic Functions. Springer Science & Business Media.
+
+ Mityagin, B. (2015). The zero set of a real analytic function. arXiv preprint arXiv:1512.07276.
+
+ Mooij, J. M., Peters, J., Janzing, D., Zscheischler, J., and Schölkopf, B. (2016). Distinguishing cause from effect using observational data: methods and benchmarks. The Journal of Machine Learning Research, 17(1):1103-1204.
+
+ | | 25 edges | 50 edges | 100 edges |
+ |---|---|---|---|
+ | OCD | 0 | 0 | 1 |
+ | BIC+ | 13 | - | - |
+ | BIC | 25 | 48 | 92 |
+ | BDe | 23 | 40 | 75 |
+
+ Table 2: Structural Hamming distance between the true graph and the estimated graphs from OCD, BIC+, BIC, and BDe. The true graphs are generated randomly with 25, 50, and 100 edges. BIC+ is not applicable for 50 and 100 edges because it requires 150 GB of memory.
+
+ ![019639ca-3f6e-7a2b-b969-fe00d8433099_13_337_950_1121_423_0.jpg](images/019639ca-3f6e-7a2b-b969-fe00d8433099_13_337_950_1121_423_0.jpg)
+
+ Figure 2: CPU times of OCD as functions of $n$, $L$, and $p$ in the synthetic ordinal data.
+
+ ![019639ca-3f6e-7a2b-b969-fe00d8433099_14_542_271_717_1803_0.jpg](images/019639ca-3f6e-7a2b-b969-fe00d8433099_14_542_271_717_1803_0.jpg)
+
+ Figure 3: Simulation truth of the denser graphs.
+
+ ![019639ca-3f6e-7a2b-b969-fe00d8433099_15_522_945_751_427_0.jpg](images/019639ca-3f6e-7a2b-b969-fe00d8433099_15_522_945_751_427_0.jpg)
+
+ Figure 4: Sensitivity to small added noise for the CEP data.
+
UAI/UAI 2022/UAI 2022 Conference/BElx3S8s5e9/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,307 @@
+ § ORDINAL CAUSAL DISCOVERY
+
+ § ABSTRACT
+
+ Causal discovery for purely observational, categorical data is a long-standing challenging problem. Unlike for continuous data, the vast majority of existing methods for categorical data focus on inferring the Markov equivalence class only, which leaves the direction of some causal relationships undetermined. This paper proposes an identifiable ordinal causal discovery method that exploits the ordinal information contained in many real-world applications to uniquely identify the causal structure. The proposed method is applicable beyond ordinal data via data discretization. Through real-world and synthetic experiments, we demonstrate that the proposed ordinal causal discovery method, combined with simple score-and-search algorithms, has favorable and robust performance compared to state-of-the-art alternative methods on both ordinal categorical and non-categorical data. An accompanying R package, OCD, is freely available.
+
+ § 1 INTRODUCTION
+
+ Causal discovery [Spirtes et al., 2000, Pearl, 2009] is increasingly popular in machine learning and finds numerous applications, e.g., in biology [Sachs et al., 2005], psychology [Steyvers et al., 2003], and neuroscience [Shen et al., 2020], where the prevailing goal is to discover causal relationships among variables of interest. The discovered causal relationships are useful for predicting a system's response to external interventions [Pearl, 2009], a key step towards understanding and engineering that system. While the gold standard for causal discovery remains controlled experimentation, it can be too expensive, unethical, or even impossible in many cases, particularly on human subjects. Therefore, inferring the unknown causal structures of complex systems from purely observational data is often desirable and, sometimes, the only option.
+
+ This paper considers causal discovery for ordinal categorical data. Categorical data are common across multiple disciplines. For example, psychologists often use questionnaires to measure latent traits such as personality and depression. The responses to those questionnaires are often categorical, say, with five levels (a 5-point Likert scale): "strongly disagree", "disagree", "neutral", "agree", and "strongly agree". In genetics, single-nucleotide polymorphisms are categorical variables with three levels (mutation on neither, one, or both alleles). Categorical data also arise as a result of discretization of non-categorical (e.g., continuous and count) data. For instance, in biology, gene expression data are often trichotomized into "underexpression", "normal expression", and "overexpression" [Parmigiani et al., 2002, Pe'er, 2005, Sachs et al., 2005] in order to reduce sequencing technical noise while retaining biological interpretability.
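As a concrete sketch of such discretization (ours, not code from the paper), equal-frequency binning maps a continuous sample into $L$ ordered levels; $L = 3$ trichotomizes, e.g., expression values into under-, normal, and over-expression:

```python
import numpy as np

def discretize(x, L=3):
    """Map a continuous sample to ordinal levels 1..L by equal-frequency
    (quantile) binning."""
    inner = np.linspace(0, 1, L + 1)[1:-1]   # interior quantile levels
    edges = np.quantile(x, inner)            # L - 1 bin boundaries
    return np.digitize(x, edges) + 1         # levels 1, ..., L

print(discretize(np.arange(9.0), L=3))  # -> [1 1 1 2 2 2 3 3 3]
```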
+ While causal discovery for purely observational categorical data has been extensively studied, the vast majority of existing methods [Heckerman et al., 1995, Chickering, 2002] have exclusively focused on Bayesian networks (BNs) with nominal (unordered) categorical variables. It has been well established that a nominal/multinomial BN is generally identifiable only up to its Markov equivalence class, in which all BNs encode the same Markov properties. For example, $X \rightarrow Y$ and $Y \rightarrow X$ are Markov equivalent and also distribution equivalent [Spirtes and Zhang, 2016] under a multinomial likelihood; therefore, they are non-identifiable from purely observational data.
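This non-identifiability is easy to verify numerically: any joint multinomial table factorizes exactly in both directions, so both causal orderings fit observational data equally well. A minimal demonstration of ours, with an arbitrary joint table:

```python
import numpy as np

P = np.array([[0.1, 0.3],
              [0.4, 0.2]])   # an arbitrary joint table P(X, Y)

# Factorize as X -> Y: p(X) p(Y | X)
px = P.sum(axis=1)
py_given_x = P / px[:, None]

# Factorize as Y -> X: p(Y) p(X | Y)
py = P.sum(axis=0)
px_given_y = P / py[None, :]

# Both multinomial factorizations reproduce the joint exactly,
# so the likelihood cannot distinguish the two directions.
assert np.allclose(px[:, None] * py_given_x, P)
assert np.allclose(py[None, :] * px_given_y, P)
```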
+ In many real-world applications, categorical data (including the aforementioned Likert scales, single-nucleotide polymorphisms, and discretized gene expression data) contain ordinal information. In this paper, we show that this often-overlooked ordinal information is crucial for causal discovery in categorical data. We propose an ordinal causal discovery (OCD) method via an ordinal BN. Assuming causal Markov and causal sufficiency, we prove OCD to be identifiable in general for ordinal categorical data. Score-and-search BN structure learning algorithms are developed: exhaustive search for small networks (e.g., bivariate data) and greedy search for moderate-sized networks. Through extensive experiments with real-world and synthetic datasets, we demonstrate that the proposed OCD is identifiable, robust, applicable to both categorical and non-categorical data, and competitive against a range of state-of-the-art causal discovery methods. To the best of our knowledge, we are the first to exploit ordinal information for causal discovery in categorical data. Our major contributions are four-fold.
+
+ 1. We advocate the usefulness of the ordinal information in categorical data for causal discovery, which has been overlooked in the literature.
+
+ 2. We propose the first causal discovery method, OCD, for ordinal categorical data.
+
+ 3. We prove that OCD is generally identifiable for bivariate data, in contrast to the non-identifiability of multinomial BNs.
+
+ 4. We demonstrate the strong utility of OCD by comparison with state-of-the-art alternatives using real-world and synthetic datasets.
+
+ § 1.1 RELATED WORK
+
+ For brevity, we review only causal discovery methods that are fully identifiable with observational data.
+
+ Non-Categorical Data. Model-based BNs for continuous data are often represented as additive noise models. Under such a representation, BNs are generally identifiable if the noises are non-Gaussian [Shimizu et al., 2006], if the functional form of the additive noise model is nonlinear [Hoyer et al., 2009, Zhang and Hyvärinen, 2009], or if the noise variances are equal [Peters and Bühlmann, 2014]. See also the substantial recent literature focusing on bivariate causal discovery [Mooij et al., 2010, Janzing et al., 2012, Chen et al., 2014, Sgouritsa et al., 2015, Hernandez-Lobato et al., 2016, Marx and Vreeken, 2017, Blöbaum et al., 2018, Marx and Vreeken, 2019, Tagasovska et al., 2020]. For count data, Park and Raskutti [2015] proposed a Poisson BN and showed that it is identifiable based on the overdispersion property of Poisson BNs. By replacing the overdispersion property with a constant moments-ratio property, Park and Park [2019] extended Poisson BNs to the generalized hypergeometric family, which contains many count distributions such as the binomial, Poisson, and negative binomial. Recently, Choi et al. [2020] developed a zero-inflated Poisson BN for zero-inflated count data.
+
+ Categorical Data. For nominal categorical data, causal identification, primarily for bivariate data, is possible under certain assumptions [Peters et al., 2010, Suzuki et al., 2014, Liu and Chan, 2016, Cai et al., 2018, Compton et al., 2020], e.g., when the categories admit hidden compact representations or when the data follow a discrete additive noise model. However, to the best of our knowledge, causal discovery for ordinal data, which are very common in practice, has not been studied. Whether a categorical variable is ordinal is, in our opinion, easier to assess than the aforementioned assumptions about categorical data (e.g., a discrete additive noise model).
+
+ Mixed Data. There are recent developments in causal discovery for mixed data [Cui et al., 2018, Tsagris et al., 2018, Sedgewick et al., 2019], some of which include categorical data. However, the ordinal nature of the categorical data is not exploited for causal identification; therefore, these algorithms output Markov equivalence classes instead of individual BNs. The latent variable approach of Wei et al. [2018] could in principle be extended to ordinal data. However, the causal Markov assumption on the latent variables does not translate to the observed variables, and the inferred causality has no direct causal interpretation on the observed variables.
+
+ § 2 BIVARIATE ORDINAL CAUSAL DISCOVERY
+
+ We first introduce the proposed OCD method for bivariate data; it will be extended to multivariate data in Section 4. Let $(X, Y) \in \{1, \ldots, S\} \times \{1, \ldots, L\}$ denote a pair of ordinal variables with $S$ and $L$ levels, respectively, whose possible causal relationships, $X \rightarrow Y$ or $Y \rightarrow X$, are under investigation. Throughout the paper, we make the causal Markov and causal sufficiency assumptions, which are frequently adopted in the causal discovery literature [Pearl, 2009]. The former allows us to interpret the proposed model causally (beyond conditional independence), whereas the latter asserts that there are no unmeasured confounders.
38
+
39
+ The bivariate OCD considers the following probability distribution for causal model $X \rightarrow Y$ ,
40
+
41
+ $$
42
+ {p}_{X \rightarrow Y}\left( {X,Y}\right) = p\left( X\right) p\left( {Y \mid X}\right) , \tag{1}
43
+ $$
44
+
45
+ where $p\left( X\right)$ is a multinomial/categorical distribution with probabilities $\mathbf{\pi } = \left( {{\pi }_{1},\ldots ,{\pi }_{S}}\right)$ with $\mathop{\sum }\limits_{{s = 1}}^{S}{\pi }_{s} = 1$ , and $p\left( {Y \mid X}\right)$ is defined by an ordinal regression model [Agresti, 2003],
46
+
47
+ $$
48
+ \Pr \left( {Y \leq \ell \mid X}\right) = F\left( {{\gamma }_{\ell } - {\beta }_{X}}\right) ,\ell = 1,\ldots ,L, \tag{2}
49
+ $$
50
+
51
+ where ${\beta }_{X}$ is a generic notation of ${\beta }_{1},\ldots ,{\beta }_{S}$ for $X = 1,\ldots ,S$ . Typical choices of the link function $F$ are the probit and inverse-logit (logistic) links, which are empirically quite similar; hereafter we always use the probit link except in the identifiability theory, which is valid for both links. We fix ${\gamma }_{1} = 0$ for ordinal regression parameter identifiability [Agresti, 2003]. Equation (2) implies the conditional probability distribution $\Pr \left( {Y = \ell \mid X = s}\right) = F\left( {{\gamma }_{\ell } - {\beta }_{s}}\right) - F\left( {{\gamma }_{\ell - 1} - {\beta }_{s}}\right)$ for $\ell = 1,\ldots ,L$ and $s = 1,\ldots ,S$ , where ${\gamma }_{0} = - \infty$ and ${\gamma }_{L} = \infty$ . Let $\mathbf{\beta } = \left( {{\beta }_{1},\ldots ,{\beta }_{S}}\right)$ and $\mathbf{\gamma } = \left( {{\gamma }_{2},\ldots ,{\gamma }_{L - 1}}\right)$ . We denote the model ${p}_{X \rightarrow Y}$ by ${p}_{X \rightarrow Y}\left( {X,Y \mid \mathbf{\pi },\mathbf{\beta },\mathbf{\gamma }}\right)$ . Similarly, we define the probability model ${p}_{Y \rightarrow X}$ as ${p}_{Y \rightarrow X}\left( {Y,X \mid \mathbf{\rho },\mathbf{\alpha },\mathbf{\eta }}\right)$ . If the maximized likelihood ${\widehat{p}}_{X \rightarrow Y}$ given observations of $(X, Y)$ is strictly larger than ${\widehat{p}}_{Y \rightarrow X}$ , then $X \rightarrow Y$ is deemed the more likely data-generating causal model.
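A minimal numerical sketch of this decision rule (not the authors' implementation, which uses R's `polr`): fit the ordinal-probit model by maximum likelihood in both directions and prefer the direction with the larger maximized likelihood. Parametrizing the cutpoints through exponentiated increments is our implementation choice to enforce $0 < \gamma_2 < \cdots < \gamma_{L-1}$.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_loglik(cause, effect, S, L):
    """Maximized log-likelihood of p(cause) * p(effect | cause),
    with a multinomial marginal and an ordinal-probit conditional."""
    n = len(cause)
    counts = np.bincount(cause, minlength=S + 1)[1:]
    ll_marginal = np.sum(counts[counts > 0] * np.log(counts[counts > 0] / n))

    def nll(theta):
        beta = theta[:S]  # one coefficient per level of the cause
        # gamma_0 = -inf, gamma_1 = 0; later cutpoints increase via
        # cumulative sums of exponentiated increments
        cuts = np.concatenate(([-np.inf, 0.0],
                               np.cumsum(np.exp(theta[S:])), [np.inf]))
        p = (norm.cdf(cuts[effect] - beta[cause - 1])
             - norm.cdf(cuts[effect - 1] - beta[cause - 1]))
        return -np.sum(np.log(np.clip(p, 1e-12, None)))

    res = minimize(nll, np.zeros(S + L - 2), method="BFGS")
    return ll_marginal - res.fun

def ocd_bivariate(x, y, S, L):
    """Forced decision: compare the two maximized likelihoods."""
    return "X->Y" if fit_loglik(x, y, S, L) > fit_loglik(y, x, L, S) else "Y->X"
```

With data simulated from the forward model (e.g., the parameter values of Example 2) and a moderate sample size, the forward direction attains the larger maximized likelihood.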
52
+
53
+ § 3 IDENTIFIABILITY
54
+
55
+ We will show that the proposed OCD is generally identifiable if at least one of the variables has at least three levels.
56
+
57
+ § DEFINITION 1 (DISTRIBUTION EQUIVALENCE)
58
+
59
+ ${p}_{X \rightarrow Y}\left( {X,Y \mid \mathbf{\pi },\mathbf{\beta },\mathbf{\gamma }}\right) \;$ and $\;{p}_{Y \rightarrow X}\left( {Y,X \mid \mathbf{\rho },\mathbf{\alpha },\mathbf{\eta }}\right)$ are distribution equivalent if for any values of $\left( {\mathbf{\pi },\mathbf{\beta },\mathbf{\gamma }}\right)$ there exist values of $\left( {\mathbf{\rho },\mathbf{\alpha },\mathbf{\eta }}\right)$ such that ${p}_{X \rightarrow Y}\left( {X,Y \mid \mathbf{\pi },\mathbf{\beta },\mathbf{\gamma }}\right) = {p}_{Y \rightarrow X}\left( {Y,X \mid \mathbf{\rho },\mathbf{\alpha },\mathbf{\eta }}\right)$ for any $X,Y$ , and vice versa.
60
+
61
+ Distribution equivalent causal models are clearly not distinguishable from each other by examining their observational distributions. The well-known multinomial BNs are distribution equivalent as illustrated in the following example.
62
+
63
+ Example 1 (Multinomial BN) Consider a bivariate multinomial BN of $X \rightarrow Y$ whose conditional $p\left( {Y \mid X}\right)$ and marginal $p\left( X\right)$ probability distributions are given in Figure 1(a), and whose joint distribution $p\left( {X,Y}\right)$ is given in Figure 1(b). Because of the multinomial assumption, we can find a set of parameters, i.e., the conditional $p\left( {X \mid Y}\right)$ and marginal $p\left( Y\right)$ probabilities (Figure 1(c)) of the reverse causal model $Y \rightarrow X$ , which leads to the same joint distribution. Therefore, the probability distribution does not provide information for causal identification.
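The reversal in Example 1 is mechanical: for any bivariate multinomial BN, Bayes' rule yields reverse-direction parameters that reproduce the joint exactly. A sketch with illustrative (made-up) numbers, not the exact tables of Figure 1:

```python
import numpy as np

# Any bivariate multinomial BN X -> Y is reversible by Bayes' rule.
p_x = np.array([0.25, 0.25, 0.5])            # marginal p(X)
p_y_given_x = np.array([[0.2, 0.3, 0.5],     # row s is p(Y | X = s)
                        [0.8, 0.1, 0.1],
                        [0.2, 0.3, 0.5]])

joint = p_x[:, None] * p_y_given_x           # p(X, Y)
p_y = joint.sum(axis=0)                      # marginal p(Y)
p_x_given_y = joint / p_y                    # p(X | Y), columns indexed by Y

# The reverse-direction parameters reproduce the same joint exactly,
# so the two multinomial causal models are distribution equivalent.
joint_reversed = p_x_given_y * p_y
```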
64
+
65
+ Incorporating the underappreciated ordinal information, we will show that ${p}_{X \rightarrow Y}\left( {X,Y \mid \mathbf{\pi },\mathbf{\beta },\mathbf{\gamma }}\right)$ and ${p}_{Y \rightarrow X}\left( {Y,X \mid \mathbf{\rho },\mathbf{\alpha },\mathbf{\eta }}\right)$ are generally not distribution equivalent and are, therefore, identifiable.
66
+
67
+ Theorem 1 (Identifiability of OCD) Let $X \in \{ 1,\ldots ,S\}$ and $Y \in \{ 1,\ldots ,L\}$ where $S,L \geq 2$ and $\max \{ S,L\} \geq 3$ . Suppose $X \rightarrow Y$ is the data generating causal model and the observational probability distribution of $(X, Y)$ is given by
68
+
69
+ $$
70
+ p\left( {X,Y}\right) = {p}_{X \rightarrow Y}\left( {X,Y \mid \mathbf{\pi },\mathbf{\beta },\mathbf{\gamma }}\right) .
71
+ $$
72
+
73
+ For almost all $\left( {\mathbf{\pi },\mathbf{\beta },\mathbf{\gamma }}\right)$ with respect to the Lebesgue measure, the distribution cannot be equivalently represented by the reverse causal model, i.e., there does not exist $\left( {\mathbf{\rho },\mathbf{\alpha },\mathbf{\eta }}\right)$ such that,
74
+
75
+ $$
76
+ p\left( {X,Y}\right) = {p}_{Y \rightarrow X}\left( {Y,X \mid \mathbf{\rho },\mathbf{\alpha },\mathbf{\eta }}\right) ,\forall X,Y.
77
+ $$
78
+
79
+ The proof based on properties of real analytic functions is provided in the Supplementary Materials. We demonstrate Theorem 1 by revisiting Example 1.
80
+
81
+ Example 2 (Ordinal BN) The conditional $p\left( {Y \mid X}\right)$ and marginal $p\left( X\right)$ probability distributions in Figure 1(a) coincide with those under the ordinal BN ${p}_{X \rightarrow Y}\left( {X,Y \mid \mathbf{\pi },\mathbf{\beta },\mathbf{\gamma }}\right)$ with $\mathbf{\pi } = \left( {{0.25},{0.25},{0.5}}\right) ,\gamma = 1$ , and $\mathbf{\beta } = \left( {1, - 1,1}\right)$ . Given a large enough dataset, the MLE of $p\left( {X,Y}\right)$ can be arbitrarily close to that in Figure 1(b). However, there does not exist any set of parameter values in the reverse causal model ${p}_{Y \rightarrow X}\left( {Y,X \mid \mathbf{\rho },\mathbf{\alpha },\mathbf{\eta }}\right)$ that produces the conditional $p\left( {X \mid Y}\right)$ and marginal $p\left( Y\right)$ probability distributions in Figure 1(c). Therefore, the reverse causal model ${p}_{Y \rightarrow X}\left( {Y,X \mid \mathbf{\rho },\mathbf{\alpha },\mathbf{\eta }}\right)$ cannot adequately fit the data generated from ${p}_{X \rightarrow Y}\left( {X,Y \mid \mathbf{\pi },\mathbf{\beta },\mathbf{\gamma }}\right)$ . For example, even with 100,000 observations, the MLE of $p\left( {X,Y}\right)$ under ${p}_{Y \rightarrow X}\left( {Y,X \mid \mathbf{\rho },\mathbf{\alpha },\mathbf{\eta }}\right)$ still has a large bias (Figure 1(d)), which will never approach 0. Therefore, ${p}_{X \rightarrow Y}\left( {X,Y \mid \mathbf{\pi },\mathbf{\beta },\mathbf{\gamma }}\right)$ can be distinguished from ${p}_{Y \rightarrow X}\left( {Y,X \mid \mathbf{\rho },\mathbf{\alpha },\mathbf{\eta }}\right)$ .
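As a numerical sanity check (a sketch; the parameter values are those stated in Example 2), the forward conditional probabilities follow directly from Equation (2) with the probit link:

```python
import numpy as np
from scipy.stats import norm

pi = np.array([0.25, 0.25, 0.5])               # p(X), Example 2
beta = np.array([1.0, -1.0, 1.0])              # beta_1, beta_2, beta_3
cuts = np.array([-np.inf, 0.0, 1.0, np.inf])   # gamma_0, gamma_1 = 0, gamma_2 = 1, gamma_3

# Pr(Y = l | X = s) = F(gamma_l - beta_s) - F(gamma_{l-1} - beta_s)
cdf = norm.cdf(cuts[None, :] - beta[:, None])  # rows: s; columns: l = 0..3
p_y_given_x = np.diff(cdf, axis=1)             # rows: s; columns: l = 1..3
joint = pi[:, None] * p_y_given_x              # p(X, Y) = p(X) p(Y | X)
```

Each row of `p_y_given_x` is a valid conditional distribution, and the joint sums to one.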
82
+
83
+ Note that Theorem 1 excludes the case where both $X$ and $Y$ are binary (i.e., $L = S = 2$ ), under which OCD is not identifiable. This is expected because there is no difference between ordinal and nominal categorical variables in this case; the latter is known to be non-identifiable.
84
+
85
+ § 4 EXTENSION TO MULTIVARIATE ORDINAL CAUSAL DISCOVERY
86
+
87
+ While the vast majority of the existing identifiable causal discovery methods for categorical data [Peters et al., 2010, Suzuki et al., 2014, Liu and Chan, 2016, Cai et al., 2018, Compton et al., 2020] have primarily focused on bivariate cases, we extend the proposed bivariate OCD to multivariate data. Let $\mathbf{X} = \left( {{X}_{1},\ldots ,{X}_{p}}\right) \in \left\{ {1,\ldots ,{L}_{1}}\right\} \times \cdots \times$ $\left\{ {1,\ldots ,{L}_{p}}\right\}$ denote $p$ ordinal variables. Let $G = \left( {V,E}\right)$ denote a causal BN with a set of nodes $V = \{ 1,\ldots ,p\}$ representing $\mathbf{X}$ and directed edges $E \subset V \times V$ representing direct causal relationships (with respect to $\mathbf{X}$ ). Let ${pa}\left( j\right) = \{ k \mid k \rightarrow j\} \subseteq V$ denote the set of direct causes (parents) of node $j$ in $G$ and let ${\mathbf{X}}_{{pa}\left( j\right) } = \left\{ {{X}_{k} \mid k \in {pa}\left( j\right) }\right\}$ . Given $G$ , the joint distribution of $\mathbf{X}$ factorizes,
88
+
89
+ $$
90
+ p\left( {\mathbf{X} \mid G}\right) = \mathop{\prod }\limits_{{j = 1}}^{p}p\left( {{X}_{j} \mid {\mathbf{X}}_{{pa}\left( j\right) }}\right) , \tag{3}
91
+ $$
92
+
93
+ where each conditional distribution $p\left( {{X}_{j} \mid {\mathbf{X}}_{{pa}\left( j\right) }}\right)$ is an ordinal regression model whose cumulative distribution is given, for $\ell = 1,\ldots ,{L}_{j}$ , by
94
+
95
+ $$
96
+ \Pr \left( {{X}_{j} \leq \ell \mid {\mathbf{X}}_{{pa}\left( j\right) }}\right) = F\left( {{\gamma }_{j\ell } - \mathop{\sum }\limits_{{k \in {pa}\left( j\right) }}{\beta }_{{jk}{X}_{k}} - {\alpha }_{j}}\right) ,
97
+ $$
98
+
99
+ where ${\alpha }_{j}$ is the intercept and ${\beta }_{{jk}{X}_{k}}$ is a generic notation of ${\beta }_{jk1},\ldots ,{\beta }_{{jk}{L}_{k}}$ for ${X}_{k} = 1,\ldots ,{L}_{k}$ . We set ${\gamma }_{j1} = {\beta }_{{jk}{L}_{k}} = 0$ for ordinal regression parameter identifiability [Agresti, 2003]. The implied conditional probability distribution is $\Pr \left( {{X}_{j} = \ell \mid {\mathbf{X}}_{{pa}\left( j\right) } = \mathbf{s}}\right) = F\left( {{\gamma }_{j\ell } - \sum_{k \in {pa}\left( j\right) }{\beta }_{{jk}{s}_{k}} - {\alpha }_{j}}\right) - F\left( {{\gamma }_{j,\ell - 1} - \sum_{k \in {pa}\left( j\right) }{\beta }_{{jk}{s}_{k}} - {\alpha }_{j}}\right)$ for $\ell = 1,\ldots ,{L}_{j}$ and $\mathbf{s} = \left( {s}_{k}\right)_{k \in {pa}\left( j\right)} \in \prod_{k \in {pa}\left( j\right) }\left\{ {1,\ldots ,{L}_{k}}\right\}$ . In summary, the multivariate OCD model is parameterized by ${\mathbf{\gamma }}_{j} = \left( {{\gamma }_{j2},\ldots ,{\gamma }_{j,{L}_{j} - 1}}\right) ,{\mathbf{\beta }}_{jk} = \left( {{\beta }_{jk1},\ldots ,{\beta }_{{jk},{L}_{k} - 1}}\right)$ , and ${\alpha }_{j}$ , for $j = 1,\ldots ,p$ and $k \in {pa}\left( j\right)$ .
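A sketch of this conditional distribution for a single node (all parameter values below are hypothetical, chosen only for illustration):

```python
import numpy as np
from scipy.stats import norm

def cond_probs(gamma_j, alpha_j, beta_j, parent_values):
    """Pr(X_j = l | parents) for l = 1..L_j under the ordinal-probit model.

    gamma_j: free cutpoints (gamma_{j2}, ..., gamma_{j,L_j-1}); gamma_{j1} = 0.
    beta_j:  dict parent k -> (beta_{jk1}, ..., beta_{jkL_k}),
             with the last entry fixed at 0 for identifiability.
    """
    cuts = np.concatenate(([-np.inf, 0.0], gamma_j, [np.inf]))
    shift = alpha_j + sum(beta_j[k][s - 1] for k, s in parent_values.items())
    # Pr(X_j = l) = F(gamma_l - shift) - F(gamma_{l-1} - shift)
    return np.diff(norm.cdf(cuts - shift))

# Hypothetical node with L_j = 4 levels and two 3-level parents
gamma_j = np.array([0.8, 1.5])
beta_j = {1: np.array([0.9, -0.4, 0.0]), 2: np.array([-1.1, 0.3, 0.0])}
probs = cond_probs(gamma_j, alpha_j=0.2, beta_j=beta_j,
                   parent_values={1: 2, 2: 1})
```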
100
+
101
102
+
103
+ Figure 1: Illustration. (a) Conditional $p\left( {Y \mid X}\right)$ and marginal $p\left( X\right)$ probability distributions. They coincide with those under ${p}_{X \rightarrow Y}\left( {X,Y \mid \mathbf{\pi },\mathbf{\beta },\mathbf{\gamma }}\right)$ with $\mathbf{\pi } = \left( {{0.25},{0.25},{0.5}}\right) ,\gamma = 1$ , and $\mathbf{\beta } = \left( {1, - 1,1}\right)$ . (b) The joint distribution $p\left( {X,Y}\right) =$ $p\left( X\right) p\left( {Y \mid X}\right)$ . (c) Conditional $p\left( {X \mid Y}\right)$ and marginal $p\left( Y\right)$ probability distributions from the same joint distribution $p\left( {X,Y}\right)$ . (d) Maximum likelihood estimate of $p\left( {X,Y}\right)$ under ${p}_{Y \rightarrow X}\left( {Y,X \mid \mathbf{\rho },\mathbf{\alpha },\mathbf{\eta }}\right)$ using data generated from $p\left( {X,Y}\right)$ in (b) with sample size 100,000 .
104
+
105
+ § 5 CAUSAL GRAPH STRUCTURE LEARNING
106
+
107
+ We develop simple score-and-search learning algorithms to estimate the structure of causal graphs, which already show strong empirical performance (see Section 6), although more sophisticated learning methods such as Bayesian inference could be adopted to further improve the performance.
108
+
109
+ Score. We score causal graphs by the Bayesian information criterion (BIC). We choose BIC over AIC because it favors a more parsimonious causal graph due to its heavier penalty on model complexity and generally has better empirical performance. Let $\mathbf{x} = \left( {{\mathbf{x}}_{1},\ldots ,{\mathbf{x}}_{n}}\right)$ denote $n$ realizations of $\mathbf{X}$ . The score of $G$ (smaller is better) is given by
110
+
111
+ $$
112
+ \operatorname{BIC}\left( {G \mid \mathbf{x}}\right) = - 2\mathop{\sum }\limits_{{i = 1}}^{n}\log \widehat{p}\left( {{\mathbf{x}}_{i} \mid G}\right) + K\log \left( n\right) ,
113
+ $$
114
+
115
+ where $K$ is the number of model parameters and $\widehat{p}\left( {{\mathbf{x}}_{i} \mid G}\right)$ is the joint distribution (3) evaluated at ${\mathbf{x}}_{i}$ given the MLE of model parameters.
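Given the identifiability constraints of Section 4, the parameter count is $K = \sum_{j}\left[ \left( {L}_{j} - 2\right) + 1 + \sum_{k \in {pa}(j)}\left( {L}_{k} - 1\right) \right]$ : per node, ${L}_{j} - 2$ free cutpoints, one intercept, and ${L}_{k} - 1$ coefficients per parent. A sketch (the three-node chain below is a hypothetical example, with the graph given as a parent dictionary):

```python
import numpy as np

def num_params(parents, levels):
    """K for the multivariate OCD model: per node j, (L_j - 2) free
    cutpoints + 1 intercept + (L_k - 1) coefficients per parent k."""
    return sum((levels[j] - 2) + 1 + sum(levels[k] - 1 for k in parents[j])
               for j in parents)

def bic(loglik, parents, levels, n):
    """BIC(G | x) = -2 log-likelihood + K log(n); smaller is better."""
    return -2.0 * loglik + num_params(parents, levels) * np.log(n)

# Hypothetical chain 1 -> 2 -> 3 with 5 levels per node
parents = {1: [], 2: [1], 3: [2]}
levels = {1: 5, 2: 5, 3: 5}
```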
116
+
117
+ Exhaustive Search. For small networks (say $p = 2$ or 3), we compute the scores for all networks in $\mathcal{G}$ and identify $\widehat{G} = \arg \mathop{\min }\limits_{{G \in \mathcal{G}}}\operatorname{BIC}\left( {G \mid \mathbf{x}}\right)$ . While this approach is exact and useful for bivariate OCD, it becomes computationally infeasible for moderate-sized networks as the number of networks $\left| \mathcal{G}\right|$ grows super-exponentially in $p$ .
118
+
119
+ Greedy Search. We use a simple iterative greedy search algorithm [Chickering, 2002, Scutari et al., 2019] for moderate-sized networks. At each iteration, we score all the graphs that can be reached from the current graph by an edge addition, removal, or reversal. We replace the current graph by the graph with the largest improvement (largest decrease in BIC) and stop the algorithm when the score can no longer be improved. The greedy search algorithm is summarized in Algorithm 1, which is guaranteed to find a locally optimal graph. The algorithm can be improved by tabu search and random non-local moves [Scutari et al., 2019] but we do not pursue this direction as the simple greedy algorithm already yields favorable results against state-of-the-art alternative methods. The worst-case per-iteration cost is $O\left( {pf\left( {n,m,L}\right) }\right)$ for $p$ nodes, $n$ observations, maximum number of parents $m$ , and maximum number of levels $L = \mathop{\max }\limits_{j}{L}_{j}$ , where $f\left( {n,m,L}\right)$ is the computational complexity of an ordinal regression with $m$ regressors. This is because at most ${2p}$ score evaluations are required at each iteration [Scutari et al., 2019]. We use the polr function in the R package MASS for ordinal regression, which empirically appears to scale linearly in $n$ , $m$ , and $L$ .
120
+
121
+ § 6 EXPERIMENTS
122
+
123
+ We evaluate the proposed and state-of-the-art alternative causal discovery methods with synthetic as well as three sets of real data. The real data are not categorical and therefore allow us to extend our comparison to causal models designed for continuous data.
124
+
125
+ Algorithm 1 Greedy Search
126
+
127
+ Input: data $\mathbf{x}$ , initial graph $G$
128
+
129
+ Compute $\operatorname{BIC}\left( {G \mid \mathbf{x}}\right)$ and set ${\mathrm{{BIC}}}_{ \star } = \mathrm{{BIC}}\left( {G \mid \mathbf{x}}\right)$ .
130
+
131
+ repeat
132
+
133
+ Initialize Improvement $=$ false.
134
+
135
+ for all graphs ${G}^{\prime }$ reachable from $G$ do
136
+
137
+ Compute $\operatorname{BIC}\left( {{G}^{\prime } \mid \mathbf{x}}\right)$ .
138
+
139
+ if $\operatorname{BIC}\left( {{G}^{\prime } \mid \mathbf{x}}\right) < {\mathrm{{BIC}}}_{ \star }$ then
140
+
141
+ Set $G = {G}^{\prime }$ and ${\mathrm{{BIC}}}_{ \star } = \mathrm{{BIC}}\left( {{G}^{\prime } \mid \mathbf{x}}\right)$
142
+
143
+ Set Improvement $=$ true.
144
+
145
+ end if
146
+
147
+ end for
148
+
149
+ until Improvement is false
150
+
151
+ Output: graph $G$
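A compact sketch of Algorithm 1 with a pluggable score function (a toy score is used below in place of the BIC of Section 5, so the search logic can be read in isolation; graphs are sets of directed edges on nodes $0,\ldots,p-1$):

```python
import itertools
import numpy as np

def is_dag(edges, p):
    """Kahn's algorithm: True iff the directed graph has no cycle."""
    indeg = {v: 0 for v in range(p)}
    for _, j in edges:
        indeg[j] += 1
    queue = [v for v in range(p) if indeg[v] == 0]
    seen = 0
    while queue:
        v = queue.pop()
        seen += 1
        for a, b in edges:
            if a == v:
                indeg[b] -= 1
                if indeg[b] == 0:
                    queue.append(b)
    return seen == p

def neighbors(edges, p):
    """All DAGs one edge addition, removal, or reversal away."""
    out = []
    for i, j in itertools.permutations(range(p), 2):
        if (i, j) in edges:
            out.append(edges - {(i, j)})               # removal
            out.append((edges - {(i, j)}) | {(j, i)})  # reversal
        elif (j, i) not in edges:
            out.append(edges | {(i, j)})               # addition
    return [e for e in out if is_dag(e, p)]

def greedy_search(score, p, init=frozenset()):
    """Repeatedly move to the best-scoring neighbor until no improvement."""
    g, best = set(init), score(set(init))
    while True:
        cands = neighbors(g, p)
        if not cands:
            return g
        scores = [score(c) for c in cands]
        k = int(np.argmin(scores))
        if scores[k] < best:        # accept the largest improvement
            g, best = set(cands[k]), scores[k]
        else:
            return g
```

With the OCD score plugged in, `score` would fit one ordinal regression per node and return the BIC of Section 5.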
152
+
153
+ § 6.1 SYNTHETIC ORDINAL DATA
154
+
155
+ We simulate low-dimensional, higher-dimensional, and bivariate (with confounders) synthetic ordinal data.
156
+
157
+ § 6.1.1 LOW-DIMENSIONAL MULTIVARIATE ORDINAL DATA
158
+
159
+ We consider synthetic ordinal data $\left( {n = {500},p = {10}}\right)$ . To mimic survey data with 5-point Likert-scale questionnaires, we simulate data from the proposed OCD model with ${L}_{j} = L = 5,\forall j$ . The true BN is generated randomly (Figure 2(a)) which has one v-structure (i.e., subgraph $j \rightarrow k \leftarrow i$ ). Its Markov equivalence class, represented by a completed partially directed acyclic graph (CPDAG), can be obtained by removing the directionality of the red dashed edges in Figure 2(a). We consider 6 scenarios with different levels of signal strength by generating simulation true ${\beta }_{{jk}\ell }$ ’s and ${\alpha }_{j}$ ’s independently from $N\left( {0,{\sigma }^{2}}\right)$ with $\sigma = {0.25},{0.5},{0.75},1,{1.25},{1.5}$ . Parameters ${\gamma }_{j\ell }$ ’s are chosen to have balanced class size for each variable.
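A sketch of this data-generating mechanism for one node, using the latent-variable view of the ordinal probit model; cutpoints are placed at empirical quantiles of the latent score so that class sizes are balanced (this quantile trick is our reading of "balanced class size", not necessarily the authors' exact procedure):

```python
import numpy as np

def simulate_node(rng, n, parent_data, beta, alpha, L):
    """Draw one ordinal column: X_j <= l iff alpha + sum_k beta_{jk X_k} + eps <= gamma_l,
    with eps ~ N(0, 1) and cutpoints at empirical quantiles of the latent score."""
    latent = alpha + rng.standard_normal(n)
    for k, col in parent_data.items():
        latent = latent + beta[k][col - 1]
    cuts = np.quantile(latent, np.arange(1, L) / L)  # balanced class sizes
    return np.searchsorted(cuts, latent) + 1         # levels 1..L

rng = np.random.default_rng(0)
x1 = simulate_node(rng, 500, {}, {}, alpha=0.0, L=5)           # root node
beta21 = {1: rng.normal(0.0, 1.0, size=5)}                     # beta ~ N(0, sigma^2)
x2 = simulate_node(rng, 500, {1: x1}, beta21, alpha=0.0, L=5)  # child of x1
```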
160
+
161
+ Implementations. Standard causal discovery methods for categorical data are multinomial BNs with the BIC or BDe score, which discard the ordinal information and therefore only estimate Markov equivalence classes. They are implemented using model averaging with 500 bootstrapped samples (page 145, Scutari and Denis 2014). We compare them with the proposed OCD, all implemented using greedy search. In addition, we also consider a two-step procedure [Friedman and Koller, 2003] which first learns a causal ordering and then estimates the causal multinomial BN given the ordering based on BIC (called "BIC+" hereafter). This procedure outputs an estimated BN.
162
+
163
+ Metrics. We compute the structural Hamming distance (SHD) and the structural intervention distance (SID) with the R package SID. The SHD between two graphs is the number of edge additions, deletions, or reversals required to transform one graph into the other. The SID measures "closeness" between two causal graphs in terms of their implied intervention distributions (see Peters and Bühlmann 2015 for the formal definition). Note that since multinomial BNs with BIC and BDe can only identify the CPDAG, the smallest SHD that they can achieve is 5 (the number of undirected edges in the true CPDAG).
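SHD can be computed per unordered node pair, charging one unit whenever the pair's edge status (absent, $i \rightarrow j$ , or $j \rightarrow i$ ) differs between the two graphs. A minimal sketch for graphs given as sets of directed edges:

```python
def shd(g1, g2, p):
    """Structural Hamming distance between two graphs on nodes 0..p-1,
    each given as a set of directed edges (i, j)."""
    d = 0
    for i in range(p):
        for j in range(i + 1, p):
            status1 = ((i, j) in g1, (j, i) in g1)
            status2 = ((i, j) in g2, (j, i) in g2)
            if status1 != status2:
                d += 1  # one addition, deletion, or reversal for this pair
    return d
```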
164
+
165
+ Results. The SHD and SID averaged over 5 repeated simulations are shown in Figure 2(b)-(c) as functions of the signal strength $\sigma$ . Since multinomial BNs with BDe and BIC only estimate CPDAGs, we report the lower bounds of their SID. Several conclusions can be drawn. First, OCD is empirically identifiable because both SHD and SID quickly approach 0 as the signal becomes stronger. Second, OCD uniformly outperforms the alternative methods in both SHD and SID across all signal levels, which suggests that exploiting the ordinal nature of ordinal categorical data is crucial for causal discovery. Third, BIC+ is better than BIC and BDe in SHD but not necessarily in SID, suggesting that the estimated causal ordering from BIC+ is biased.
166
+
167
+ Different Number of Categories. In the Supplementary Materials, we present additional simulation scenarios with a different number $L = 3$ of categories. Similarly to the scenarios with $L = 5$ , OCD significantly outperforms the competing methods.
168
+
169
+ § 6.1.2 HIGHER-DIMENSIONAL MULTIVARIATE ORDINAL DATA
170
+
171
+ We fix the sample size $n = {500}$ and the number of categories $L = 5$ but vary the number of nodes $p = {10},{20},\ldots ,{100}$ and the signal strength $\sigma = {0.25},{0.5},{0.75},1$ . The graphs are kept at the same sparsity as in Section 6.1.1 across $p$ (denser graphs will be considered later). The SHD is shown in Figure 3 whereas the SID is provided in the Supplementary Materials due to the space limit. The proposed OCD uniformly outperforms the competing methods BDe, BIC, and BIC+ across $p$ and $\sigma$ . In general, OCD is quite stable as $p$ increases when the signal strength is moderate to moderately large $\sigma \geq {0.5}$ whereas the competing methods quickly deteriorate with $p$ regardless of the signal strength.
172
+
173
+ Scalability. We investigate the scalability of the proposed OCD with respect to $n$ , $L$ , and $p$ . We vary $n = {500},{750},\ldots ,{2750}$ (keeping $p = {10}$ and $L = 5$ ), $L = 5,\ldots ,{14}$ (keeping $n = {500}$ and $p = {10}$ ), and $p = {10},{20},\ldots ,{100}$ (keeping $n = {500}$ and $L = 5$ ). The total CPU times in seconds on a 2.9 GHz 6-core Intel Core i9 laptop are provided in the Supplementary Materials. The greedy search appears to scale linearly in $n$ and $L$ , and quadratically in $p$ , which agrees with the complexity analysis in Section 5. It is moderately scalable: e.g., for $p = {100}$ , the search completes in about 3 hours.
174
+
175
+ Denser Graphs. In the Supplementary Materials, we present additional simulation scenarios with denser graphs for $p = {50}$ nodes and more v-structures, which lead to similar conclusions, i.e., OCD significantly outperforms the competing methods in SHD and SID.
176
+
177
178
+
179
+ Figure 2: Synthetic ordinal data. The dashed lines in (c) are the lower bounds of SID of BDe and BIC which output CPDAGs instead of BNs.
180
+
181
182
+
183
+ Figure 3: SHD for OCD, BDe, BIC, and BIC+ as functions of $p$ in the synthetic ordinal data with the sample size fixed at $n = {500}$ and different signal strength $\sigma \in \{ {0.25},{0.5},{0.75},1\}$ .
184
+
185
+ § 6.1.3 BIVARIATE ORDINAL DATA WITH UNMEASURED CONFOUNDERS
186
+
187
+ While our identifiability theory assumes no unmeasured confounders, we now empirically test the sensitivity of OCD to unmeasured confounders for bivariate ordinal data. We generate trivariate ordinal data $\left( {{X}_{1},{X}_{2},{X}_{3}}\right)$ with $L = 5$ from the following true causal graph,
188
+
189
+ [Causal graph: ${X}_{3} \rightarrow {X}_{1}$ , ${X}_{3} \rightarrow {X}_{2}$ , and ${X}_{1} \rightarrow {X}_{2}$ , where ${X}_{3}$ is subsequently hidden as the unmeasured confounder]
190
+
191
+ We hide ${X}_{3}$ as a confounder and apply OCD to $\left( {{X}_{1},{X}_{2}}\right)$ . In the simulation truth, we assume ${\beta }_{{jk}\ell }$ , for each $\ell = 1,\ldots ,L$ , to be the same for all $j \neq k$ , i.e., the confounding effect is the same as the causal effect, which is simulated from $N\left( {0,{\sigma }^{2}}\right)$ . We consider different levels of signal strength $\sigma = {0.25},{0.5},{0.75},1,{1.25},{1.5}$ and different sample sizes $n = {100},{200},\ldots ,{1000}$ . Under each combination of $\left( {\sigma ,n}\right)$ , we repeat the experiment 100 times and report the average accuracy (ACC) for forced decisions. The forced decision forces methods to choose between ${X}_{1} \rightarrow {X}_{2}$ and ${X}_{2} \rightarrow {X}_{1}$ . The same metric has been used in similar bivariate causal discovery problems [Mooij et al., 2016, Tagasovska et al., 2020]. OCD is relatively robust to confounders (Figure 4(a)): it is able to correctly identify the causal direction given a large enough sample size or when the signal is sufficiently strong. For comparison, we apply a recent causal discovery method for bivariate nominal categorical data, HCR [Cai et al., 2018]. Its average ACC is shown in Figure 4(b). We find the ACC of HCR is uniformly lower than that of OCD, although we note that HCR is not specifically designed for this task.
192
+
193
+ § 6.2 SACHS'S SINGLE-CELL FLOW CYTOMETRY DATA
194
+
195
+ We evaluate the proposed OCD on the well-known single-cell flow cytometry dataset [Sachs et al., 2005], which contains measurements of 11 phosphorylated proteins under different experimental conditions. Sachs et al. 2005 provided a consensus causal network of these proteins, which can be used to gauge the performance of causal discovery algorithms. As in Tagasovska et al. 2020, we consider the cd3cd28 dataset with 853 cells subject to the same experimental condition.
196
+
197
198
+
199
+ Figure 4: Synthetic ordinal data with confounders. Average ACC of (a) OCD and (b) HCR under different sample sizes and levels of signal strength.
200
+
201
+ Implementations. Since the raw measurements are highly skewed and heavy-tailed, Sachs et al. 2005 discretized the data into $L = 3$ levels ("low", "average", and "high") and fit a multinomial BN based on the Bayesian Dirichlet equivalent uniform (BDe) score [Heckerman et al., 1995]. As we will see, this approach throws away the ordinal information inherent in the raw measurements and hence significantly underperforms OCD (with greedy search). For comparison, we also apply ANM [Hoyer et al., 2009], LiNGAM [Shimizu et al., 2006], RESIT with the Gaussian process implementation [Peters et al., 2014], several bivariate causal discovery methods (HCR, bQCD [Tagasovska et al., 2020], GR-AN [Hernandez-Lobato et al., 2016], IGCI with uniform measure [Janzing et al., 2012], and SLOPE [Marx and Vreeken, 2017]), several methods that infer Markov equivalence classes (PC [Spirtes et al., 2000], CPC [Ramsey et al., 2012], GES [Chickering, 2002], IAMB [Tsamardinos et al., 2003], and multinomial BNs with BIC and BDe), and the mixed data approach MXM [Tsagris et al., 2018] applied to the raw continuous data. For bivariate causal discovery methods, we follow an ad hoc procedure similar to that in Tagasovska et al. 2020: first run CAM [Bühlmann et al., 2014] and then orient the estimated edges by the bivariate methods. HCR is the closest competitor as it is also designed for categorical data, although with a very different scope (only applicable to bivariate nominal categorical data and assuming the existence of hidden compact representations).
202
+
203
+ Metrics. We use the same SHD and SID metrics as in Section 6.1. For methods that output CPDAGs instead of BNs, we report the lower and upper bounds of SID.
204
+
205
+ Results. In Table 1, we summarize the SHD and SID. OCD shows very strong performance compared to state-of-the-art alternatives. It has the lowest SHD and the second lowest SID, which shows the benefit of discretization for highly noisy data. The substantial improvement of OCD over the multinomial BN with BDe (SHD 14 vs 21) highlights the importance of exploiting the ordinal information of discrete data for causal discovery. While there is strong motivation (e.g., biological interpretation) to use $L = 3$ for this dataset, we test OCD with $L$ up to 10. OCD stays very competitive within this range: the SID remains 62 for all $L$ whereas the SHD slightly increases as $L$ increases, possibly due to the relatively small sample size, e.g., $\mathrm{{SHD}} = {16}$ for $L = {10}$ , which is still quite competitive (second only to $\mathrm{{SHD}} = {15}$ for bQCD and IGCI).
206
+
207
+ Table 1: Sachs's data. Methods (marked by *) that are only applicable to bivariate data are combined with CAM. PC, CPC, GES, IAMB, BIC, BDe, and MXM only learn CPDAGs; we provide the lower and upper bounds of SID.
208
+
209
+ | Method | SHD | SID |
+ | --- | --- | --- |
+ | OCD | 14 | 62 |
+ | bQCD* | 15 | 69 |
+ | IGCI* | 15 | 82 |
+ | GR-AN* | 16 | 80 |
+ | HCR* | 16 | 76 |
+ | SLOPE* | 17 | 86 |
+ | ANM | 17 | 78 |
+ | LiNGAM | 17 | 86 |
+ | PC | 18 | 50-83 |
+ | CPC | 18 | 50-80 |
+ | GES | 18 | 50-80 |
+ | IAMB | 20 | 79-70 |
+ | BIC | 20 | 53-77 |
+ | BDe | 21 | 49-104 |
+ | MXM | 21 | 49-104 |
+ | RESIT | 40 | 45 |
247
+
248
+ § 6.3 CAUSEEFFECTPAIRS (CEP) BENCHMARK DATA
249
+
250
+ We consider the CauseEffectPairs (CEP) benchmark data [Mooij et al., 2016] (version: 12/20/2017), which contain 108 datasets from 37 domains (e.g., biology, economy, engineering, and meteorology). Each dataset contains a pair of variables $(X, Y)$ for which the causal relationship is clear from the context, e.g., older "age" causes higher "glucose". We retain the same 99 pairs as in Tagasovska et al. 2020 that have univariate non-binary cause and effect variables.
251
+
252
+ Implementations. We compare OCD with HCR, bQCD, IGCI, CAM, SLOPE, LiNGAM, and RESIT. To apply OCD and HCR, we discretize each variable at $L - 1$ quantiles for $L \in \{ {10},\ldots ,{20}\}$ . All other methods are applied to the (standardized) continuous data without discretization.
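Discretizing at $L - 1$ empirical quantiles can be sketched as follows (a minimal version; ties in heavily discrete data would need extra care):

```python
import numpy as np

def discretize(v, L):
    """Discretize a continuous variable at L - 1 empirical quantiles,
    yielding an ordinal variable with levels 1..L."""
    cuts = np.quantile(v, np.arange(1, L) / L)
    return np.searchsorted(cuts, v) + 1
```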
253
+
254
+ Metrics. We compute the ACC for forced decisions as in Section 6.1.3 and, additionally, the area under the receiver operating curve (AUC) for ranked decision. The ranked decision ranks the confidence of the causal direction [Mooij et al., 2016, Tagasovska et al., 2020]. The simple heuristic confidence [Mooij et al., 2016] is adopted here. For instance, for the proposed OCD, we define the confidence of $X \rightarrow Y$ to be ${C}_{X \rightarrow Y} = \operatorname{BIC}\left( {Y \rightarrow X \mid \mathbf{x}}\right) - \operatorname{BIC}\left( {X \rightarrow Y \mid \mathbf{x}}\right)$ .
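The confidence score is just the signed BIC gap. For the AUC we sketch one common formulation, scoring each forced decision by $\left| C\right|$ and computing the probability that a correct decision outranks an incorrect one; this formulation is an assumption on our part, not necessarily the exact variant used for Table 2:

```python
import numpy as np

def confidence(bic_xy, bic_yx):
    """C_{X->Y} = BIC(Y->X) - BIC(X->Y): positive values favor X -> Y;
    the magnitude ranks the confidence of the decision."""
    return bic_yx - bic_xy

def ranked_auc(conf, correct):
    """Probability that a correct forced decision has a larger |C|
    than an incorrect one (ties count one half)."""
    conf = np.abs(np.asarray(conf, float))
    correct = np.asarray(correct, bool)
    pos, neg = conf[correct], conf[~correct]
    wins = ((pos[:, None] > neg[None, :]).sum()
            + 0.5 * (pos[:, None] == neg[None, :]).sum())
    return wins / (len(pos) * len(neg))
```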
255
+
256
+ Results. In Table 2, we summarize the ACC, AUC, and CPU times. For OCD and HCR, the average metrics over $L = {10},\ldots ,{20}$ as well as their standard errors are reported. The proposed OCD is highly competitive in all metrics. OCD has the second highest ACC and AUC, and is fast; it completes the analysis of the 99 datasets in 36 seconds. Only IGCI, CAM, and LiNGAM are faster, but they have worse ACC and AUC than OCD. SLOPE has slightly higher ACC and AUC than OCD. However, SLOPE is about 1 or 2 orders of magnitude slower than OCD and relatively sensitive to small added noise (see the additional experiments that investigate the "Sensitivity to Small Added Noise" in the Supplementary Materials). Finally, the small standard errors of the performance metrics of OCD indicate its relative robustness with respect to the number $L$ of levels of discretization for the considered datasets and range.
257
+
258
+ Table 2: CEP data. Metrics of OCD and HCR are averaged over different values of $L = {10},\ldots ,{20}$ with standard errors given within the parentheses.
259
+
260
+ | Method | ACC | AUC | CPU |
+ | --- | --- | --- | --- |
+ | OCD | 0.73 (0.01) | 0.76 (0.00) | 36s (1.7s) |
+ | HCR | 0.44 (0.02) | 0.56 (0.02) | 12m (2.2m) |
+ | bQCD | 0.70 | 0.72 | 7m |
+ | CAM | 0.58 | 0.58 | 11s |
+ | IGCI | 0.66 | 0.51 | 1s |
+ | SLOPE | 0.76 | 0.84 | 24m |
+ | LiNGAM | 0.42 | 0.59 | 3s |
+ | RESIT | 0.53 | 0.56 | 12h |
286
+
287
+ § 6.4 SINGLE-CELL RNA-SEQUENCING DATA
288
+
289
+ We further validate the proposed OCD with a publicly available single-cell RNA-sequencing (scRNA-seq) dataset of 2,717 murine embryonic stem cells [Klein et al., 2015]. We obtain a list of literature-curated pairs of a transcription factor ($X$) and its target ($Y$) from the TRRUST database [Han et al., 2018], which provides a biological ground truth for the causal relationships, namely $X \rightarrow Y$ . We then extract the corresponding genes from the scRNA-seq dataset. Removing genes with more than ${90}\%$ zeros (these genes have very low statistical variability), we retained 6701 pairs for causal validation, which still have ${62}\%$ zeros. The zeros in scRNA-seq data are either (a) true biological zero counts or (b) small counts that are too low to detect. In either case, they can be regarded as "low expression". We compare OCD with the best performing methods in Section 6.3, bQCD and SLOPE, as well as the closest competitor HCR. We are not able to generate results (runtime errors) from CAM, LiNGAM, and RESIT, possibly because of the large percentages of zeros. To apply OCD and HCR, we trichotomize the data at 0 and the median of the non-zero expression (i.e., "low", "average", and "high" expression). ACC and CPU time are reported in Table 3. OCD is the best and is the only method that is better than random guessing (p-value $= {10}^{-{75}}$ , binomial test with ${H}_{0} : p = {0.5}$ vs ${H}_{a} : p > {0.5}$ ) for this dataset, possibly because of its highly non-standard distribution due to zero-inflation. Therefore, although discretizing continuous or count data may lose information, it often improves robustness by not imposing a particular distributional assumption on the raw data.
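The trichotomization can be sketched as follows (a minimal version for a single gene's expression vector):

```python
import numpy as np

def trichotomize(expr):
    """Map expression values to 1 ("low", the zeros), 2 ("average"),
    or 3 ("high"): cut at 0 and at the median of the non-zero values."""
    med = np.median(expr[expr > 0])
    out = np.ones(len(expr), dtype=int)   # 1 = low (zeros)
    out[(expr > 0) & (expr <= med)] = 2   # 2 = average
    out[expr > med] = 3                   # 3 = high
    return out
```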
290
+
291
+ Table 3: Single-cell RNA-seq data.
292
+
293
+ | Method | ACC | CPU |
+ | --- | --- | --- |
+ | OCD | 0.61 | 19m |
+ | HCR | 0.36 | 22m |
+ | bQCD | 0.45 | 3.4h |
+ | SLOPE | 0.50 | 2h |
304
+
305
+ § 7 CONCLUSION
306
+
307
+ There are several limitations of the current work, which we plan to address in future work. First, the current score-and-search algorithm outputs a point estimate of the causal graph with no uncertainty quantification. We plan to develop a fully Bayesian approach by assigning sparse priors (i.e., spike-and-slab priors on $\beta$ 's) and carrying out posterior inference via Markov chain Monte Carlo. Second, we have empirically assessed the identifiability of the proposed OCD for multivariate data and for bivariate data with unmeasured confounders. Identifiability theory for multivariate categorical data or bivariate categorical data with unmeasured confounders is in general lacking in the causal discovery literature. Third, we have not explicitly addressed the problem of choosing the number $L$ of categories in data discretization. We picked $L = 3$ for the genomic data by convention and assessed its robustness up to $L = {10}$ . For non-genomic data, there is no obvious/universal choice of $L$ . Instead of picking a specific $L$ , we have tested the proposed OCD on a range of values. In the future, we plan to propose data-driven ways (e.g., via BIC) to objectively choose $L$ .
UAI/UAI 2022/UAI 2022 Conference/BFULBwUocxq/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,707 @@
1
+ # Semiparametric Causal Sufficient Dimension Reduction of Multidimensional Treatments
2
+
3
+ ## Abstract
4
+
5
+ Cause-effect relationships are typically evaluated by comparing outcome responses to binary treatment values, representing two arms of a hypothetical randomized controlled trial. However, in certain applications, treatments of interest are continuous and multidimensional. For example, understanding the causal relationship between the severity of radiation therapy, summarized by a multidimensional vector of radiation exposure values, and post-treatment side effects is a problem of clinical interest in radiation oncology. An appropriate strategy for making interpretable causal conclusions is to reduce the dimension of the treatment. If individual elements of a multidimensional treatment vector weakly affect the outcome, but the overall relationship between treatment and outcome is strong, careless approaches to dimension reduction may not preserve this relationship. Further, methods developed for regression problems do not directly transfer to causal inference due to confounding complications. In this paper, we use semiparametric inference theory for structural models to give a general approach to causal sufficient dimension reduction of a multidimensional treatment such that the cause-effect relationship between treatment and outcome is preserved. We illustrate the utility of our proposals through simulations and a real data application in radiation oncology.
6
+
7
+ ## 1 INTRODUCTION
8
+
9
+ In causal inference, the exposure of interest is commonly assumed to be either binary (e.g., comparing treatment vs placebo) or continuous (e.g., the effect of treatment dosages on viral load). In the latter case, in addition to contrasts of responses to two specific doses, we may be interested in the entire dose-response relationship, and choose to model it via a simple functional form, for example a logarithmic or sigmoidal function. In other applications, we might be interested in assessing causal relationships between outcomes and treatments with values that lie in a multidimensional space. For instance, in natural language processing, interest lies in causal analyses that involve high-dimensional text data [Gentzkow et al., 2019]. Another example is neuroimaging data used to relate neuronal network activity to cognitive processing and behavior [Ramsey et al., 2010, Mather et al., 2013].
10
+
11
+ As our motivational example, we focus on an application in radiation oncology. In head and neck cancers, minor variations in the dose and direction of radiation may result in similar tumor reduction but vastly improve secondary outcomes, such as weight loss, or dysfunction induced by radiation therapy, such as dysphagia or xerostomia [Robertson et al., 2015]. Thus, understanding the causal relationship between a multidimensional radiation exposure and downstream side effects in cancer patients undergoing radiation therapy is of clinical interest. Unlike standard treatments, radiation therapy is complex and is represented by three-dimensional voxel maps of radiation doses in different parts of the body. Since this representation is very high dimensional, the exact dose localization information in the voxel map is sometimes represented by cumulative dose-volume histograms and summarized by a multidimensional vector of exposure dosages. Even such summaries complicate establishing clinically relevant causal relationships.
12
+
13
+ Since we are interested in dimension reduction for the sake of explicating a particular relationship between treatments and outcomes, approaches that do not take outcomes into account in the right way run the risk of distorting the estimate of this relationship, or even falsely concluding the relationship is absent. Therefore, seemingly natural approaches to dimension reduction, such as principal component analysis (PCA), are not appropriate in our setting. On the other hand, there is a line of research in statistics on sufficient dimension reduction (SDR) [Li, 1991] with the objective of reducing the dimension of covariates by preserving the associational relations between covariates and outcome. However, due to spurious associations introduced by confounding which is ubiquitous in observational data sources, naive use of SDR approaches to discern causal relationships between treatments and outcomes leads to bias.
14
+
15
+ We are interested in applying SDR core ideas to reduce dimension of a treatment in a way that preserves a causal rather than associational relationship with the outcome. In addition, we are interested in doing so under the weakest possible assumptions, which entails generalizing the semi-parametric approaches in the SDR literature [Ma and Zhu, 2012]. In this paper, we provide a framework for structural (causal) models based on semiparametric inference theory developed for marginal structural models [Robins, 1999] to give what we believe is the first approach to causal SDR of a multidimensional treatment.
16
+
17
+ ## 2 PRELIMINARIES
18
+
19
+ Sufficient dimension reduction. Given an outcome variable $Y$ and a $p$ -dimensional covariate vector $X$ , the goal of SDR is to find a known function ${g}_{X}\left( {.;\beta }\right)$ parameterized by $\beta$ with a much smaller range than domain such that $Y$ depends on $X$ only through ${g}_{X}\left( {X;\beta }\right)$ . Often this function is assumed to be linear, in which case the goal is to find $\beta \in {\mathbb{R}}^{p \times d}$ , where $d < p$ , such that $Y$ depends on $X$ only through ${X}^{T}\beta$ , i.e., $\mathbb{E}\left\lbrack {Y \mid X}\right\rbrack = \mathbb{E}\left\lbrack {Y \mid {X}^{T}\beta }\right\rbrack$ . Often, proposed solutions to SDR rely on strong parametric assumptions that are unlikely to hold in practical applications, such as the linearity condition, where $\mathbb{E}\left\lbrack {X \mid {X}^{T}\beta }\right\rbrack$ is assumed to be a linear function of $X$ , or the assumption that $\operatorname{cov}\left( {X \mid {X}^{T}\beta }\right)$ is constant rather than a function of $X$ [Li, 1991, Cook and Weisberg, 1991, Hardle and Stoker, 1989, Ichimura, 1993, Cook and Li, 2002].
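As a concrete instance of this classical line of work, sliced inverse regression (SIR) [Li, 1991] can be sketched in a few lines. This simulation is illustrative only and deliberately bakes in the linearity condition by drawing $X$ from a standard Gaussian with identity covariance; `b_hat` then recovers the direction $\beta$ up to sign.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 100_000, 5
beta = np.zeros(p)
beta[:2] = [0.6, 0.8]                       # unit-norm true direction
X = rng.standard_normal((n, p))             # identity covariance: no standardization needed
Y = (X @ beta) ** 3 + rng.standard_normal(n)

# SIR: slice the range of Y, average X within each slice, and take the
# top eigenvector of the (centered) slice-mean covariance.
H = 20
order = np.argsort(Y)
slice_means = np.array([X[idx].mean(axis=0) for idx in np.array_split(order, H)])
M = slice_means.T @ slice_means / H
_, vecs = np.linalg.eigh(M)                 # eigenvalues ascending
b_hat = vecs[:, -1]
b_hat *= np.sign(b_hat[1])                  # fix the sign for comparison
print(b_hat)                                # close to beta = (0.6, 0.8, 0, 0, 0)
```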
20
+
21
+ Ma and Zhu [2012] introduced a new approach to SDR by recasting the problem in terms of estimation in a semi-parametric model. Crucially, this approach relies on far weaker assumptions than is typical in SDR, and is thus much more generally applicable. To obtain the relevant semiparametric model, we rewrite the above condition as $Y = \ell \left( {{X}^{T}\beta }\right) + \epsilon$ , where $\ell \left( {{X}^{T}\beta }\right) \mathrel{\text{:=}} \mathbb{E}\left\lbrack {Y \mid {X}^{T}\beta }\right\rbrack$ is an unspecified smooth function and $\mathbb{E}\left\lbrack {\epsilon \mid X}\right\rbrack = 0$ , while the distribution $p\left( {\epsilon \mid X}\right)$ remains otherwise unrestricted. Ma and Zhu [2012] derived the class of all influence functions for $\beta$ , a.k.a. the orthogonal nuisance tangent space denoted by ${\Lambda }_{\eta }^{ \bot }$ , as: ${\Lambda }_{\eta }^{ \bot } = \left\{ {\left( {Y - \mathbb{E}\left\lbrack {Y \mid {X}^{T}\beta }\right\rbrack }\right) \times \left( {\alpha \left( X\right) - \mathbb{E}\left\lbrack {\alpha \left( X\right) \mid {X}^{T}\beta }\right\rbrack }\right) }\right\}$ , where $\alpha \left( X\right)$ is any function of $X$ ; see Appendix for a brief overview of influence functions, and [Van der Vaart, 2000, Bang and Robins, 2005, Tsiatis, 2007] for more details.
22
+
23
+ A well-known property of semiparametric models is that all elements of ${\Lambda }_{\eta }^{ \bot }$ are mean 0 under the true distribution. Hence, a general class of estimating equations can be obtained using the sample version of
24
+
25
+ $$
26
+ \mathbb{E}\left\lbrack {U\left( \beta \right) }\right\rbrack \tag{1}
27
+ $$
28
+
29
+ $$
30
+ = \mathbb{E}\left\lbrack {\left( {Y - \mathbb{E}\left\lbrack {Y \mid {X}^{T}\beta }\right\rbrack }\right) \times \left( {\alpha \left( X\right) - \mathbb{E}\left\lbrack {\alpha \left( X\right) \mid {X}^{T}\beta }\right\rbrack }\right) }\right\rbrack = 0,
31
+ $$
32
+
33
+ where $U\left( \beta \right)$ is an arbitrary element in ${\Lambda }_{\eta }^{ \bot }$ . The estimator obtained from (1) is doubly robust under any choice of models for $\mathbb{E}\left\lbrack {Y \mid {X}^{T}\beta }\right\rbrack$ and $\mathbb{E}\left\lbrack {\alpha \left( X\right) \mid {X}^{T}\beta }\right\rbrack$ , meaning that the estimator remains consistent if either of these two models is correctly specified [Ma and Zhu, 2012].
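A small Monte Carlo check makes the mean-zero property behind (1) concrete. This is an illustrative sketch, not the authors' implementation: it takes $\alpha(X) = X$, uses the Gaussian closed form $\mathbb{E}[X \mid X^{T}\beta] = \beta (X^{T}\beta)$ for unit-norm $\beta$, and estimates $\mathbb{E}[Y \mid X^{T}\beta]$ with a one-dimensional polynomial regression.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100_000, 3
beta = np.array([0.6, 0.8, 0.0])                          # unit-norm true direction
X = rng.standard_normal((n, p))
Y = np.sin(X @ beta) + X @ beta + rng.standard_normal(n)  # Y = l(X^T beta) + eps

def U_mean(b):
    """Sample mean of U(b) = (Y - E[Y | X^T b]) * (X - E[X | X^T b])."""
    s = X @ b
    ell = np.poly1d(np.polyfit(s, Y, 7))(s)   # estimate of E[Y | X^T b]
    alpha_resid = X - np.outer(s, b)          # E[X | X^T b] = b * s for Gaussian X, unit-norm b
    return ((Y - ell)[:, None] * alpha_resid).mean(axis=0)

print(U_mean(beta))                           # roughly [0, 0, 0] at the true direction
print(U_mean(np.array([1.0, 0.0, 0.0])))      # clearly nonzero at a wrong direction
```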
34
+
35
+ Causal inference. In causal inference, we seek to make inferences about the causal relationship between a treatment variable $A$ and an outcome variable $Y$ via counterfactual contrasts of the form $Y\left( a\right)$ , representing a hypothetical experiment where treatment $A$ is set to $a$ , possibly contrary to fact. A common setting considers, in addition to $A$ and $Y$ , a vector of baseline variables $C$ , yielding an observed data distribution of the form $p\left( {Y, A, C}\right)$ . Under the standard assumptions of consistency (the counterfactual outcome is the same as the observed outcome if treatment is set to its observed value), conditional ignorability ( $\{ Y\left( a\right) \}$ is independent of $A$ conditional on $C$ ), and positivity of $p\left( {A \mid C}\right)$ , the counterfactual distribution $p\left( {Y\left( a\right) }\right)$ is identified as the following function of the observed data
36
+
37
+ $$
38
+ p\left( {Y\left( a\right) }\right) = \mathop{\sum }\limits_{c}p\left( {Y \mid A = a, C = c}\right) \times p\left( {C = c}\right) . \tag{2}
39
+ $$
40
+
41
+ The average causal effect (ACE) of a binary treatment on an outcome is defined as $\mathrm{{ACE}} = \mathbb{E}\left\lbrack {Y\left( 1\right) }\right\rbrack - \mathbb{E}\left\lbrack {Y\left( 0\right) }\right\rbrack$ . Under the above assumptions, the counterfactual mean $\mathbb{E}\left\lbrack {Y\left( a\right) }\right\rbrack$ is given as the following function of the observed data, called the adjustment formula or g-formula,
42
+
43
+ $$
44
+ \mathbb{E}\left\lbrack {Y\left( a\right) }\right\rbrack = \mathbb{E}\left\lbrack {\mathbb{E}\left\lbrack {Y \mid A = a, C}\right\rbrack }\right\rbrack , \tag{3}
45
+ $$
46
+
47
+ where the outer expectation is taken with respect to $p\left( C\right)$ ; see Tian and Pearl [2002], Shpitser and Pearl [2006], Bhattacharya et al. [2020] for general identification algorithms in the presence of unmeasured confounders.
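The gap between the adjustment formula (3) and the naive conditional mean is easy to see numerically. The following minimal synthetic sketch (invented for illustration, not from the paper) uses a binary confounder $C$ that drives both treatment and outcome, so the naive contrast is biased while the g-formula recovers the true effect of 1.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
C = rng.binomial(1, 0.5, n)                       # binary confounder
A = rng.binomial(1, 0.2 + 0.6 * C)                # treatment depends on C
Y = 1.0 * A + 2.0 * C + rng.standard_normal(n)    # true causal effect of A is 1

naive = Y[A == 1].mean() - Y[A == 0].mean()       # confounded contrast (~2.2 here)

def gformula(a):
    # E[Y(a)] = sum_c E[Y | A=a, C=c] * p(C=c)
    return sum(Y[(A == a) & (C == c)].mean() * (C == c).mean() for c in (0, 1))

ace = gformula(1) - gformula(0)
print(naive, ace)                                 # naive is inflated; ace is close to 1.0
```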
48
+
49
+ There are several different approaches to estimating the adjustment formula, such as plug-in, inverse probability weighting (IPW), and semiparametric estimators such as augmented IPW (AIPW) [Van der Vaart, 2000, Bang and Robins, 2005, Van der Laan et al., 2011]. An alternative class of IPW estimators models the relationship between $A$ and $Y$ via a marginal structural model (MSM), or a causal regression. A simple version of such a model takes the form $\mathbb{E}\left\lbrack {Y\left( a\right) }\right\rbrack = f\left( {a;\beta }\right)$ , for a finite set of parameters $\beta$ . Given such a model, inferences about $\mathbb{E}\left\lbrack {Y\left( a\right) }\right\rbrack$ reduce to inferences about $\beta$ . For binary treatments, $f\left( {a;\beta }\right)$ can be written as ${\beta }_{0} + {\beta }_{a} \times a$ without loss of generality, with $\mathrm{{ACE}} = {\beta }_{a}$ . An MSM is different from an ordinary regression model, since in general $\mathbb{E}\left\lbrack {Y\left( a\right) }\right\rbrack \neq \mathbb{E}\left\lbrack {Y \mid A = a}\right\rbrack$ under our causal assumptions. Thus, one approach to estimating $\beta$ is via the following estimating equation, appropriately reweighted by the treatment propensity score model ${W}_{a}\left( {C;{\eta }_{a}}\right) \mathrel{\text{:=}} p\left( {A \mid C;{\eta }_{a}}\right)$ ,
50
+
51
+ $$
52
+ {\mathbb{P}}_{n}\left\lbrack {\frac{{p}^{ * }\left( a\right) }{{W}_{a}\left( {C;{\widehat{\eta }}_{a}}\right) } \times \{ Y - f\left( {a;\beta }\right) \} }\right\rbrack = 0, \tag{4}
53
+ $$
54
+
55
+ where ${\mathbb{P}}_{n}\left\lbrack \cdot \right\rbrack \mathrel{\text{:=}} \frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}\left( \cdot \right)$ , ${p}^{ * }\left( a\right)$ is an arbitrary function of $a$ with the same dimension as $\beta$ , and ${\widehat{\eta }}_{a}$ is the maximum likelihood estimate of ${\eta }_{a}$ . This IPW procedure is known to be inefficient. A more efficient (in fact optimal in a wide class of reasonable estimators) approach is to use influence functions, described in detail in [Robins, 1999]. Our approach to causal SDR stands in the same relation to the semiparametric approach to SDR for regression problems in [Ma and Zhu, 2012] as fitting marginal structural models does to fitting regression models.
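For a binary treatment, solving the sample version of (4) with ${p}^{*}(a) = (1, a)^{T}$ reduces to inverse-probability-weighted least squares. The sketch below is illustrative (the data-generating process is invented): the weighted fit recovers the causal slope $\beta_a = 1$ despite confounding, which an unweighted regression of $Y$ on $A$ would not.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
C = rng.binomial(1, 0.5, n)                       # binary confounder
A = rng.binomial(1, 0.2 + 0.6 * C)                # confounded treatment
Y = 1.0 * A + 2.0 * C + rng.standard_normal(n)    # true beta_a = 1

# Propensity p(A = 1 | C), estimated by stratified frequencies
p1 = np.array([A[C == c].mean() for c in (0, 1)])[C]
w = 1.0 / np.where(A == 1, p1, 1.0 - p1)          # IPW weights; p*(a) taken constant

# Solve Pn[ w * (1, A)^T * (Y - b0 - ba * A) ] = 0: weighted least squares
Xd = np.column_stack([np.ones(n), A])
b = np.linalg.solve(Xd.T @ (w[:, None] * Xd), Xd.T @ (w * Y))
print(b)                                          # b[1] close to 1.0 despite confounding
```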
56
+
57
+ ## 3 CAUSAL SUFFICIENT DIMENSION REDUCTION
58
+
59
+ We are interested in the causal effect of a multidimensional treatment $A \in {\mathbb{R}}^{p}$ on outcome $Y$ , assuming all relevant covariates that need to be controlled for are observed and denoted by $C$ . We would like to reduce the dimension of treatment $A$ such that the causal relationship between $A$ and $Y$ is preserved. Let $g\left( {.;\beta }\right)$ be a function parameterized by $\beta$ that takes values in ${\mathbb{R}}^{p}$ and maps them to values in ${\mathbb{R}}^{d}, d < p$ , i.e., $g : A \in {\mathbb{R}}^{p} \mapsto g\left( {A;\beta }\right) \in {\mathbb{R}}^{d}$ . We want to reduce the dimension of $A$ in such a way that the counterfactual response $\mathbb{E}\left\lbrack {Y\left( a\right) }\right\rbrack$ only depends on $A$ via $g\left( a\right)$ . Specifically, we assume that if $\mathbb{E}\left\lbrack {Y\left( a\right) }\right\rbrack$ is identified, that is if $\mathbb{E}\left\lbrack {Y\left( a\right) }\right\rbrack$ is a mapping $f$ from values $a$ of $A$ to functionals ${h}_{a}\left( {p\left( V\right) }\right)$ of the observed data distribution, where $p\left( V\right)$ denotes the joint distribution over the set of observed variables $V$ , then $f\left( a\right) = f\left( {g\left( {a;\beta }\right) }\right)$ . The methodology proposed in this paper does not depend on the choice of $g\left( {.;\beta }\right)$ , although we fix a particular $g\left( {.;\beta }\right)$ in our experiments. We assume the three identification assumptions that were discussed in the previous section, namely consistency, conditional ignorability, and positivity, hold in our analysis. Therefore, we fix ${h}_{a}\left( {p\left( {C, A, Y}\right) }\right) = \mathbb{E}\left\lbrack {\mathbb{E}\left\lbrack {Y \mid A = a, C}\right\rbrack }\right\rbrack$ , as shown in (3).
60
+
61
+ The estimation procedure for MSMs shown in (4) can be viewed as a standard estimating equation for a regression model relating treatment and outcome, but applied to observed data readjusted via inverse weighting in such a way that treatment appears randomly assigned. In other words, MSMs are regressions applied to a version of the observed data such that regression parameters can be interpreted causally. Unlike other estimating equations that solve for $\beta$ by maximizing the feature-outcome relationship, the equation in (1) fits $\beta$ to maintain the identity $\mathbb{E}\left\lbrack {Y \mid X}\right\rbrack = \mathbb{E}\left\lbrack {Y \mid {X}^{T}\beta }\right\rbrack$ . As a consequence, semiparametric causal SDR can be viewed as an MSM version of this regression problem, which seeks to find $\beta$ that maintains $\mathbb{E}\left\lbrack {Y\left( a\right) }\right\rbrack = \mathbb{E}\left\lbrack {Y\left( {g\left( {a;\beta }\right) }\right) }\right\rbrack$ . In other words, our aim is to estimate $\beta$ by maintaining the following identity
62
+
63
+ $$
64
+ \mathbb{E}\left\lbrack {\mathbb{E}\left\lbrack {Y \mid a, C}\right\rbrack }\right\rbrack = \mathbb{E}\left\lbrack {\mathbb{E}\left\lbrack {Y \mid g\left( {a;\beta }\right) , C}\right\rbrack }\right\rbrack , \tag{5}
65
+ $$
66
+
67
+ where the outer expectation is wrt the density $p\left( C\right)$ .
68
+
69
+ We note here the different roles that variables play in regression SDR and causal SDR. The goal of regression SDR is to preserve the associative relationship between high dimensional features $X$ and outcome $Y$ . The goal of causal SDR, as we view it here, is to preserve the causal relationship between a multidimensional treatment $A$ and outcome $Y$ , which is made complicated by the presence of spurious associations induced by covariates $C$ . Thus, the goal in causal SDR is not to maintain the regression relationship between covariates and outcome by assuming $\mathbb{E}\left\lbrack {Y\mid \{ A, C\} }\right\rbrack = \mathbb{E}\left\lbrack {Y \mid g\left( {\{ A, C\} ;\beta }\right) }\right\rbrack$ , but to preserve the relationship as in (5) where $C$ is marginalized (adjusted for). The set of confounders $C$ could still be high dimensional, but they are not of primary interest in our problem. Incorporating baseline covariates into the dimension reduction strategy along with the treatment, as is done in some MSMs, is left as an interesting avenue for future work. Examples of work focusing on dimension reduction of common confounders include Imai and Ratkovic [2014], Hu et al. [2014], Shortreed and Ertefaie [2017], Banijamali et al. [2018], Ma et al. [2019], Luo and Zhu [2020], Cheng et al. [2020].
70
+
71
+ As stated earlier, our objective is to preserve the causal effect of $A$ on $Y$ , which is of the form shown in (3). However, it suffices to say that if the counterfactual response curve, i.e., $\mathbb{E}\left\lbrack {Y\left( a\right) }\right\rbrack$ , is preserved under our dimensionality reduction scheme, then the causal effect is preserved. Hence, we stated our constraint in (5) in terms of the counterfactual mean rather than the counterfactual contrast that would define the effect. Moreover, even though treatment is multidimensional, we emphasize that each unit still receives one treatment session, e.g., a single session of radiation therapy with no follow-ups. Records of radiation treatment are usually stored as one-dimensional cumulative dose-volume histograms, and are summarized as the amount of radiation received by $k\%$ of the organ’s volume, where $k$ ranges from 1 to 100.
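For concreteness, such a dose-volume summary can be computed from a voxel-level dose vector in one line per $k$. The dose distribution below is synthetic, and the percentile convention used ($D_k$ = the minimum dose received by the hottest $k\%$ of the volume) is one common choice, not necessarily the exact convention of the paper's data.

```python
import numpy as np

rng = np.random.default_rng(4)
voxel_doses = rng.gamma(shape=2.0, scale=10.0, size=5000)   # synthetic voxel doses (Gy)

def D(k):
    # D_k: minimum dose received by the hottest k% of the organ's volume
    return float(np.percentile(voxel_doses, 100 - k))

a = np.array([D(k) for k in range(1, 101)])   # 100-dimensional treatment summary
print(a[0], a[49], a[99])                     # nonincreasing in k by construction
```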
72
+
73
+ In a conditionally ignorable causal model, intervention on $A$ corresponds to dropping the term $p\left( {A \mid C}\right)$ from the observed density $p\left( {Y, A, C}\right)$ yielding (2). Define $q\left( {Y, A, C}\right)$ as the following modified version of (2):
74
+
75
+ $$
76
+ q\left( {Y, A, C}\right) \mathrel{\text{:=}} p\left( {Y \mid A, C}\right) \times {p}^{ * }\left( A\right) \times p\left( C\right) ,
77
+ $$
78
+
79
+ where ${p}^{ * }\left( A\right)$ is any density with the same support as $p\left( A\right)$ . Then (5) can be rewritten as
80
+
81
+ $$
82
+ {\mathbb{E}}_{q}\left\lbrack {Y \mid A = a}\right\rbrack = {\mathbb{E}}_{q}\left\lbrack {Y \mid g\left( {a;\beta }\right) }\right\rbrack , \tag{6}
83
+ $$
84
+
85
+ where ${\mathbb{E}}_{q}$ is the expectation taken with respect to the density $q\left( {Y, A, C}\right)$ defined above, and $q\left( {Y \mid A}\right) = \mathop{\sum }\limits_{C}q\left( {Y, C \mid A}\right) = \mathop{\sum }\limits_{C}p\left( {Y \mid A, C}\right) \times p\left( C\right)$ by definition. Equations (5) and (6) are equivalent forms of our constraint in the causal SDR problem, where the MSM model for $\mathbb{E}\left\lbrack {Y\left( a\right) }\right\rbrack = {\mathbb{E}}_{q}\left\lbrack {Y \mid a}\right\rbrack$ is assumed to be a function of the multidimensional treatment intervention $a$ only through its lower dimensional representation $g\left( {a;\beta }\right)$ . We now describe two approaches to estimating $\beta$ .
86
+
87
+ ### 3.1 INVERSE PROBABILITY WEIGHTED SDR
88
+
89
+ Let $\ell \left( {g\left( {A;\beta }\right) }\right) \mathrel{\text{:=}} {\mathbb{E}}_{q}\left\lbrack {Y \mid g\left( {A;\beta }\right) }\right\rbrack$ and $\nu \left( {g\left( {A;\beta }\right) }\right) \mathrel{\text{:=}}$ ${\mathbb{E}}_{q}\left\lbrack {\alpha \left( A\right) \mid g\left( {A;\beta }\right) }\right\rbrack$ be two unspecified smooth functions of $g\left( {A;\beta }\right)$ . A simple estimation strategy for $\beta$ based on generalizing (4), entails solving
90
+
91
+ $$
92
+ \mathbb{E}\left\lbrack {\frac{{p}^{ * }\left( a\right) }{p\left( {A = a \mid C}\right) } \times \widetilde{U}\left( \beta \right) }\right\rbrack = 0, \tag{7}
93
+ $$
94
+
95
+ where $\widetilde{U}\left( \beta \right) = \{ Y - \ell \left( {g\left( {A;\beta }\right) }\right) \} \times \{ \alpha \left( A\right) - \nu \left( {g\left( {A;\beta }\right) }\right) \}$ , ${p}^{ * }\left( a\right)$ is an arbitrary function of $a$ , and $p\left( {A \mid C}\right)$ is a correctly specified statistical model which governs how the treatment $A$ is assigned based on baseline characteristics $C$ . The above equation may be solved using observed data by evaluating the expectation empirically.
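The reweighted moment condition in (7) can be checked by simulation. This sketch is illustrative, not the authors' implementation: $A \mid C$ is Gaussian, ${p}^{*}(a)$ is a standard Gaussian density (so the Gaussian normalizing constants cancel in the weight), $\alpha(A) = A$, and $\ell$ and $\nu$ are available in closed form for a unit-norm $\beta$.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000
C = rng.standard_normal(n)
A = 0.3 * C[:, None] + rng.standard_normal((n, 2))   # p(A | C) = N(0.3*C, I)
beta = np.array([0.6, 0.8])                          # unit-norm true direction
Y = np.sin(A @ beta) + C + rng.standard_normal(n)    # confounded outcome

# IPW weight p*(A) / p(A | C) with p*(a) = N(0, I); normalizing constants cancel
w = np.exp(-0.5 * (A ** 2).sum(1) + 0.5 * ((A - 0.3 * C[:, None]) ** 2).sum(1))

def weighted_U(b):
    g = A @ b
    ell = np.sin(g)                  # E_q[Y | g(A;b)], known in this simulation
    nu = np.outer(g, b)              # E_q[A | g(A;b)] = b*g under p* = N(0, I)
    return (w[:, None] * (Y - ell)[:, None] * (A - nu)).mean(axis=0)

print(weighted_U(beta))                     # roughly [0, 0]: moment holds at the true beta
print(weighted_U(np.array([1.0, 0.0])))     # nonzero for a wrong direction
```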
96
+
97
+ Lemma 1. An estimator for $\beta$ based on solving (7) is unbiased under correct specification of $p\left( {A \mid C}\right)$ , and either one of $\ell \left( {g\left( {A;\beta }\right) }\right) \mathrel{\text{:=}} {\mathbb{E}}_{q}\left\lbrack {Y \mid g\left( {A;\beta }\right) }\right\rbrack$ or $\nu \left( {g\left( {A;\beta }\right) }\right) \mathrel{\text{:=}}$ ${\mathbb{E}}_{q}\left\lbrack {\alpha \left( A\right) \mid g\left( {A;\beta }\right) }\right\rbrack$ .
98
+
99
+ ### 3.2 SEMIPARAMETRIC CAUSAL SDR
100
+
101
+ A general approach for deriving regular and asymptotically linear (RAL) estimators of $\beta$ is based on deriving ${\widetilde{\Lambda }}_{\eta }^{ \bot }$ , the orthogonal complement of the nuisance tangent space of a semiparametric model that enforces the constraint (5), but places no other restrictions on the observed data distribution; ${\widetilde{\Lambda }}_{\eta }^{ \bot }$ is the class of all influence functions. One approach is to derive this space explicitly, as was done in [Ma and Zhu, 2012]. An alternative is to take advantage of general theory relating orthogonal complements of regression problems, and orthogonal complements of "causal regression problems," or MSMs, developed by Robins [1999]. Given the semiparametric model $\mathcal{M}$ induced by the restriction (5), we take advantage of this theory in the following result.
102
+
103
+ Theorem 1. The orthogonal complement of the nuisance tangent space ${\widetilde{\Lambda }}_{\eta }^{ \bot }$ for $\beta$ that satisfies (6) is:
104
+
105
+ $$
106
+ {\widetilde{\Lambda }}_{\eta }^{ \bot } = \left\{ {\frac{\widetilde{U}\left( \beta \right) }{{W}_{a}\left( C\right) } - \phi \left( {A, C}\right) + \mathbb{E}\left\lbrack {\phi \left( {A, C}\right) \mid C}\right\rbrack }\right\} ,
107
+ $$
108
+
109
+ where $\phi \left( {A, C}\right)$ is an arbitrary function of $A$ and $C,{W}_{a}\left( C\right)$ is the IPW weight $p\left( {A = a \mid C}\right) /{p}^{ * }\left( a\right)$ for a fixed ${p}^{ * }\left( a\right)$ , and $\widetilde{U}\left( \beta \right)$ is of the form
110
+
111
+ $$
112
+ \widetilde{U}\left( \beta \right) = \{ Y - \ell \left( {g\left( {a;\beta }\right) }\right) \} \times \{ \alpha \left( A\right) - \nu \left( {g\left( {a;\beta }\right) }\right) \} ,
113
+ $$
114
+
115
+ where $\ell \left( {g\left( {a;\beta }\right) }\right) \mathrel{\text{:=}} {\mathbb{E}}_{q}\left\lbrack {Y \mid g\left( {a;\beta }\right) }\right\rbrack$ and $\nu \left( {g\left( {a;\beta }\right) }\right) \mathrel{\text{:=}}$ ${\mathbb{E}}_{q}\left\lbrack {\alpha \left( A\right) \mid g\left( {A;\beta }\right) }\right\rbrack$ . Moreover, the most efficient estimator in this class, for any fixed $\alpha \left( A\right)$ , is recovered by setting ${\phi }^{\text{opt }}\left( {A, C}\right) = \mathbb{E}\left\lbrack {\left. \frac{\widetilde{U}\left( \beta \right) }{{W}_{a}\left( C\right) }\right| \;A, C}\right\rbrack .$
116
+
117
+ Lemma 2. For a fixed choice of $\alpha \left( A\right)$ and normalized function ${p}^{ * }\left( A\right)$ , the element $\widetilde{U}\left( {\beta }^{ * }\right) \in {\widetilde{\Lambda }}_{\eta }^{ \bot }$ corresponding to the optimal choice of $\phi \left( {A, C}\right)$ has the form.
118
+
119
+ $$
120
+ \frac{{p}^{ * }\left( A\right) }{p\left( {A \mid C}\right) } \times \widetilde{U}\left( \beta \right) - \frac{{p}^{ * }\left( A\right) }{p\left( {A \mid C}\right) } \times \mathbb{E}\left\lbrack {\widetilde{U}\left( \beta \right) \mid A, C}\right\rbrack
121
+ $$
122
+
123
+ $$
124
+ + {\mathbb{E}}_{q}\left\lbrack {\mathbb{E}\left\lbrack {\widetilde{U}\left( \beta \right) \mid A, C}\right\rbrack \mid C}\right\rbrack , \tag{8}
125
+ $$
126
+
127
+ where ${\mathbb{E}}_{q}\left\lbrack \cdot \right\rbrack$ is the expectation taken with respect to the density $q\left( {Y, A, C}\right) \mathrel{\text{:=}} p\left( {Y \mid A, C}\right) \times {p}^{ * }\left( A\right) \times p\left( C\right)$ .
128
+
129
+ ### 3.3 ROBUSTNESS PROPERTIES
130
+
131
+ Just as ${\Lambda }_{\eta }^{ \bot }$ in Section 2 entailed double robustness of $U\left( \beta \right)$ for semiparametric regression SDR, we now show that the structure of ${\widetilde{\Lambda }}_{\eta }^{ \bot }$ yields additional robustness properties.
132
+
133
+ Lemma 3. If one of $\{ p\left( {A \mid C}\right) ,\mathbb{E}\left\lbrack {\widetilde{U}\left( \beta \right) \mid A, C}\right\rbrack \}$ and one of $\left\{ {\ell \left( {g\left( {A;\beta }\right) }\right) \mathrel{\text{:=}} {\mathbb{E}}_{q}\left\lbrack {Y \mid g\left( {A;\beta }\right) }\right\rbrack ,\nu \left( {g\left( {A;\beta }\right) }\right) \mathrel{\text{:=}} }\right.$ $\left. {{\mathbb{E}}_{q}\left\lbrack {\alpha \left( A\right) \mid g\left( {A;\beta }\right) }\right\rbrack }\right\}$ is correctly specified, then the estimator for $\beta$ based on (8) is consistent and asymptotically normal with mean zero and variance ${\tau }^{-1} \times \operatorname{Var}\left( {\widetilde{U}\left( {\beta }^{ * }\right) }\right) \times {\tau }^{-{1}^{\prime }}$ , where $\widetilde{U}\left( {\beta }^{ * }\right)$ is given in (8) and $\tau = \mathbb{E}\left\lbrack {\partial \widetilde{U}\left( {\beta }^{ * }\right) /\partial \beta }\right\rbrack$ .
134
+
135
+ This result implies that the estimating equation in (8) yields a " $2 \times 2$ " robustness property. In practice, since we will be dealing with multidimensional problems, correct specification of models is difficult to ensure. However, the robustness properties of semiparametric estimators also imply that in regions where a sufficient subset of models is approximately correct, the overall bias remains small. If $p\left( {A \mid C}\right)$ and one of the models in $\widetilde{U}\left( \beta \right)$ are correctly specified, the AIPW estimator using (8) remains consistent for any choice of $\mathbb{E}\left\lbrack {\widetilde{U}\left( \beta \right) \mid A, C}\right\rbrack$ . One promising direction of future work is to consider cases where $p\left( {A \mid C}\right)$ and $\widetilde{U}\left( \beta \right)$ are known and search for $\mathbb{E}\left\lbrack {\widetilde{U}\left( \beta \right) \mid A, C}\right\rbrack$ which yields good properties of the overall estimator.
136
+
137
+ ## 4 ESTIMATION AND IMPLEMENTATION
138
+
139
+ In order to estimate the parameters $\beta$ in (6), we need to solve the estimating equation ${\mathbb{P}}_{n}\left\lbrack {\widetilde{U}\left( {\beta }^{ * }\right) }\right\rbrack = 0$ , where $\widetilde{U}\left( {\beta }^{ * }\right)$ is given in (8). For any $\widetilde{U}\left( \beta \right)$ of the form given in Section 3.1, Theorem 1 provides the class of all RAL estimators for ${\beta }^{ * }$ along with the most efficient estimator in this class. Under the general form of $\widetilde{U}\left( \beta \right) = \{ Y - \ell \left( {g\left( {A;\beta }\right) }\right) \} \times$ $\{ \alpha \left( A\right) - \nu \left( {g\left( {A;\beta }\right) }\right) \}$ , the term $\mathbb{E}\left\lbrack {\widetilde{U}\left( \beta \right) \mid A, C}\right\rbrack$ in $\widetilde{U}\left( {\beta }^{ * }\right)$ equals $\{ \mathbb{E}\left\lbrack {Y \mid A, C}\right\rbrack - \ell \left( {g\left( {A;\beta }\right) }\right) \} \times \{ \alpha \left( A\right) - \nu \left( {g\left( {A;\beta }\right) }\right) \}$ .
140
+
141
+ Hence, in the expression in (8), four different models are involved in estimating $\widetilde{U}\left( {\beta }^{ * }\right)$ , namely (i) $\ell \left( {g\left( {A;\beta }\right) }\right) \mathrel{\text{:=}}$ ${\mathbb{E}}_{q}\left\lbrack {Y \mid g\left( {A;\beta }\right) }\right\rbrack$ , (ii) $\nu \left( {g\left( {A;\beta }\right) }\right) \mathrel{\text{:=}} {\mathbb{E}}_{q}\left\lbrack {\alpha \left( A\right) \mid g\left( {A;\beta }\right) }\right\rbrack$ , (iii) $p\left( {A \mid C}\right)$ , and (iv) $\mathbb{E}\left\lbrack {Y \mid A, C}\right\rbrack = {\mathbb{E}}_{q}\left\lbrack {Y \mid A, C}\right\rbrack$ . The last term in (8) is equal to ${\mathbb{E}}_{a}\left\lbrack {\mathbb{E}\left\lbrack {\widetilde{U}\left( \beta \right) \mid A, C}\right\rbrack }\right\rbrack$ , where ${\mathbb{E}}_{a}\left\lbrack \cdot \right\rbrack$ is the expectation wrt the marginal distribution of $A$ , which can be evaluated empirically without additional modeling.
142
+
143
+ For a pre-specified functional form of $\ell \left( {g\left( {A;\beta }\right) }\right)$ , we need to fit three different nuisance models. Given models $\nu \left( {g\left( {A;\beta }\right) ;{\eta }_{\nu }}\right) , p\left( {A \mid C;{\eta }_{a}}\right)$ , and $\mathbb{E}\left\lbrack {Y \mid A, C;{\eta }_{y}}\right\rbrack$ for $\nu \left( {g\left( {A;\beta }\right) }\right) , p\left( {A \mid C}\right)$ , and $\mathbb{E}\left\lbrack {Y \mid A, C}\right\rbrack$ , respectively, it can be shown that if ${n}^{\frac{1}{4} + \epsilon }\left( {\widehat{\eta } - {\eta }_{0}}\right)$ is bounded in probability for some $\epsilon > 0$ , then the estimating equation ${\mathbb{P}}_{n}\left\lbrack {\widetilde{U}\left( {\beta }^{ * }\right) ;\widehat{\eta }}\right\rbrack = 0$ yields an estimate of $\beta$ with the same asymptotic properties as if the nuisance models were known. Here $\eta =$ $\left\{ {{\eta }_{\nu },{\eta }_{a},{\eta }_{y}}\right\}$ , and $\widehat{\eta },{\eta }_{0}$ denote the estimated and the true parameters of the nuisance models, respectively.
144
+
145
+ Theorem 2. Let ${\phi }_{0}$ denote the influence function of the estimator $\beta$ obtained from the estimating equation ${\mathbb{P}}_{n}\left\lbrack {\widetilde{U}\left( {{\beta }^{ * },{\eta }_{0}}\right) }\right\rbrack = 0$ . If ${n}^{\frac{1}{4} + \epsilon }\left( {\widehat{\eta } - {\eta }_{0}}\right)$ is bounded in probability for some $\epsilon > 0$ , then the influence function corresponding to the estimator $\widehat{\beta }$ obtained from the estimating equation ${\mathbb{P}}_{n}\left\lbrack {\widetilde{U}\left( {{\beta }^{ * },\widehat{\eta }}\right) }\right\rbrack = 0$ is the same as ${\phi }_{0}$ . In other words, $\widehat{\beta }$ follows the same asymptotic properties as if we knew the true nuisance models.
+
+ The condition on the rate of convergence of the nuisance models in Theorem 2 is a sufficient condition and is potentially too conservative. In practice, we might be able to use models with slower convergence rates; see [Fisher and Kennedy, 2018] for more details. [Stone, 1982] provides a detailed analysis of the convergence rates of nonparametric models.
+
+ Implementation. We now describe in detail our procedure for estimating $\beta$ by solving the empirical version of $\mathbb{E}\left\lbrack {\widetilde{U}\left( {\beta }^{ * }\right) }\right\rbrack = 0$ , where $\widetilde{U}\left( {\beta }^{ * }\right)$ is given in (8). In what follows, we assume the structural dimension $d$ , i.e., the cardinality of the range of $g\left( {\cdot ;\beta }\right)$ , is known; we provide a discussion on choosing the structural dimension at the end of this section.
+
+ For a given choice of ${p}^{ * }\left( A\right)$ and $\alpha \left( A\right)$ ,
+
+ 1. First estimate ${\widehat{\eta }}_{a}$ and ${\widehat{\eta }}_{y}$ in $p\left( {A \mid C;{\eta }_{a}}\right)$ and $\mathbb{E}\left\lbrack {Y \mid A, C;{\eta }_{y}}\right\rbrack$ by maximum likelihood or nonparametric methods. These two models do not depend on $\beta$ and are not updated within the iterations below.
+
+ 2. Pick starting values ${\beta }^{\left( 1\right) }$ .
+
+ 3. At the ${j}^{\text{th }}$ iteration, given a fixed ${\beta }^{\left( j\right) }$ , estimate $\widehat{\ell }\left( {g\left( {A;{\beta }^{\left( j\right) }}\right) }\right)$ and $\widehat{\nu }\left( {g\left( {A;{\beta }^{\left( j\right) }}\right) }\right)$ , and compute:
+
+ $$
+ {U}^{q}\left( {\beta }^{\left( j\right) }\right) = \left\{ {Y - \widehat{\ell }\left( {g\left( {A;{\beta }^{\left( j\right) }}\right) }\right) }\right\} \times \left\{ {\alpha \left( A\right) - \widehat{\nu }\left( {g\left( {A;{\beta }^{\left( j\right) }}\right) }\right) }\right\} ,
+ $$
+
+ $$
+ \mathbb{E}\left\lbrack {{U}^{q}\left( {\beta }^{\left( j\right) }\right) \mid A, C}\right\rbrack = \left\{ {\mathbb{E}\left\lbrack {Y \mid A, C;{\widehat{\eta }}_{y}}\right\rbrack - \widehat{\ell }\left( {g\left( {A;{\beta }^{\left( j\right) }}\right) }\right) }\right\} \times \left\{ {\alpha \left( A\right) - \widehat{\nu }\left( {g\left( {A;{\beta }^{\left( j\right) }}\right) }\right) }\right\} .
+ $$
+
+ 4. Form the sample version of $\mathbb{E}\left\lbrack {\widetilde{U}\left( {\beta }^{ * }\right) }\right\rbrack$ as follows.
+
+ $$
+ \zeta \left( {\beta }^{\left( j\right) }\right) = {\mathbb{P}}_{n}\left\lbrack {\frac{{p}^{ * }\left( A\right) }{p\left( {A \mid C;{\widehat{\eta }}_{a}}\right) } \times \left\{ {{U}^{q}\left( {\beta }^{\left( j\right) }\right) - \mathbb{E}\left\lbrack {{U}^{q}\left( {\beta }^{\left( j\right) }\right) \mid A, C}\right\rbrack }\right\} + {\mathbb{E}}_{q}\left\lbrack {\mathbb{E}\left\lbrack {{U}^{q}\left( {\beta }^{\left( j\right) }\right) \mid A, C}\right\rbrack \mid C}\right\rbrack }\right\rbrack ,
+ $$
+
+ where ${\mathbb{P}}_{n}\left\lbrack \text{.}\right\rbrack \mathrel{\text{:=}} \frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}{\left\lbrack \text{.}\right\rbrack }_{i}.$
+
+ 5. Numerically compute the first and second derivatives of $\parallel \zeta \left( \beta \right) {\parallel }^{2}$ with respect to $\beta$ , evaluate them at ${\beta }^{\left( j\right) }$ , and use the Newton-Raphson update rule to update ${\beta }^{\left( j\right) }$ .
+
+ 6. Repeat steps (3) through (5) until convergence.
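
The Newton-Raphson step above can be made concrete with a small numerical sketch. The helper below (the name `newton_step` and the central finite-difference scheme are our own illustrative choices, not taken from the paper's code) performs one update for minimizing $\parallel \zeta \left( \beta \right) {\parallel }^{2}$ given any vector-valued estimating function `zeta`:

```python
import numpy as np

def newton_step(zeta, beta, h=1e-5):
    """One Newton-Raphson update for minimizing f(beta) = ||zeta(beta)||^2,
    with gradient and Hessian approximated by central finite differences."""
    p = beta.size
    f = lambda b: float(np.sum(zeta(b) ** 2))
    e = np.eye(p)
    grad = np.array([(f(beta + h * e[i]) - f(beta - h * e[i])) / (2 * h)
                     for i in range(p)])
    hess = np.empty((p, p))
    for i in range(p):
        for j in range(p):
            hess[i, j] = (f(beta + h * e[i] + h * e[j])
                          - f(beta + h * e[i] - h * e[j])
                          - f(beta - h * e[i] + h * e[j])
                          + f(beta - h * e[i] - h * e[j])) / (4 * h * h)
    # A small ridge keeps the numerical Hessian invertible near flat regions.
    return beta - np.linalg.solve(hess + 1e-8 * np.eye(p), grad)
```

In the full procedure, `zeta` would be the plug-in empirical estimating function formed in step 4, with the nuisance estimates re-fit at each iterate.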
+
+ The implementation of an empirical evaluation of (7) follows a similar set of steps, except that all steps pertaining to the second and third terms of (8) are skipped. Moreover, in step 3 of the above implementation, we need to specify individual models for $\ell \left( {g\left( {A;\beta }\right) }\right) \mathrel{\text{:=}} {\mathbb{E}}_{q}\left\lbrack {Y \mid g\left( {A;\beta }\right) }\right\rbrack$ and $\mathbb{E}\left\lbrack {Y \mid A, C}\right\rbrack = {\mathbb{E}}_{q}\left\lbrack {Y \mid A, C}\right\rbrack$ . However, due to the variation dependence of these models, it may be difficult to fit them in a congenial way in general. We provide an alternative approach in the following paragraph.
+
+ In order to deal with the issue of congeniality, we may opt to specify ${\mathbb{E}}_{q}\left\lbrack {Y \mid g\left( {A;\beta }\right) }\right\rbrack$ and $\widetilde{f}\left( {A, C,\beta }\right) =$ ${\mathbb{E}}_{q}\left\lbrack {Y \mid A, C}\right\rbrack - {\mathbb{E}}_{q}\left\lbrack {Y \mid g\left( {A;\beta }\right) }\right\rbrack$ , which yield a variationally independent specification of ${\mathbb{E}}_{q}\left\lbrack {Y \mid g\left( {A;\beta }\right) }\right\rbrack$ and ${\mathbb{E}}_{q}\left\lbrack {Y \mid A, C}\right\rbrack = {\mathbb{E}}_{q}\left\lbrack {Y \mid g\left( {A;\beta }\right) }\right\rbrack + \widetilde{f}\left( {A, C,\beta }\right)$ . Consequently, the four variationally independent models we need to specify are as follows: $\ell \left( {g\left( {A;\beta }\right) }\right) ,\nu \left( {g\left( {A;\beta }\right) }\right) , p\left( {A \mid C}\right)$ , and $\widetilde{f}\left( {A, C,\beta }\right)$ ; the last term in (8) can be evaluated empirically without additional modeling. Thus, we need to specify the additional nuisance model $\widetilde{f}$ . We propose to fit $\widetilde{f}$ by borrowing ideas from the theory of structural nested mean models (SNMMs) in [Vansteelandt and Joffe, 2014, Robins, 1999]. We defer the descriptions to the appendix and refer to $\widetilde{f}$ as an "inverted" structural nested mean model.
+
+ Choosing the structural dimension. Up until now, we assumed the structural dimension was known a priori. Finding the correct dimension is not a straightforward task, and incorrect choices may greatly affect performance. We adapt the technique in [Ma and Zhu, 2012], used to select the structural dimension in regression SDR, to causal SDR. Specifically, we utilize a resampling procedure to select the structural dimension. This procedure was originally described by [Dong and Li, 2010] and adapts the idea of [Ye and Weiss, 2003]. We consider a family of functions ${g}^{1}\left( {.;{\beta }^{1}}\right) ,\ldots ,{g}^{m}\left( {.;{\beta }^{m}}\right)$ with different structural dimensions, and use the cross-validation procedure described below to pick the best dimension.
+
+ Let ${\widehat{\beta }}_{\rho }$ be the estimate of $\beta$ from the original sample for the ${\rho }^{\text{th }}$ working dimension, where $\rho = 1,\ldots , p - 1$ , and let ${\widehat{\beta }}_{\rho , b}$ be the estimate of $\beta$ from the ${b}^{\text{th }}$ bootstrap sample, for $b = 1,\ldots , B$ . The structural dimension is then estimated as the cardinality of the range of the function
+
+ $$
+ {g}^{ * } = \arg \mathop{\max }\limits_{{g}^{i}}\frac{1}{B}\mathop{\sum }\limits_{{b = 1}}^{B}{r}^{2}\left( {{g}^{i}\left( {A;{\widehat{\beta }}_{\rho }}\right) ,{g}^{i}\left( {A;{\widehat{\beta }}_{\rho , b}}\right) }\right) ,
+ $$
+
+ where ${r}^{2}\left( {u, v}\right) = {k}^{-1}\mathop{\sum }\limits_{{i = 1}}^{k}{\lambda }_{i}$ and ${\lambda }_{i}$ ’s are the non-zero eigenvalues of
+
+ $$
+ {\left\{ \operatorname{var}\left( u\right) \right\} }^{-1/2}\operatorname{cov}\left( {u, v}\right) {\left\{ \operatorname{var}\left( v\right) \right\} }^{-1}\operatorname{cov}\left( {v, u}\right) {\left\{ \operatorname{var}\left( u\right) \right\} }^{-1/2}.
+ $$
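
The matrix above has the squared canonical correlations between $u$ and $v$ as its eigenvalues, so ${r}^{2}\left( {u, v}\right)$ is their average. A minimal numpy sketch (the helper name `r2` is ours; it assumes centered-after-the-fact, full-rank $n \times d$ inputs):

```python
import numpy as np

def r2(u, v):
    """Average of the nonzero eigenvalues of
    var(u)^{-1/2} cov(u,v) var(v)^{-1} cov(v,u) var(u)^{-1/2},
    i.e. the mean squared canonical correlation between u and v."""
    u = u - u.mean(0)
    v = v - v.mean(0)
    n = u.shape[0]
    Suu, Svv, Suv = u.T @ u / n, v.T @ v / n, u.T @ v / n
    # Symmetric inverse square root of var(u) via its eigendecomposition.
    w, P = np.linalg.eigh(Suu)
    Suu_isqrt = P @ np.diag(w ** -0.5) @ P.T
    M = Suu_isqrt @ Suv @ np.linalg.inv(Svv) @ Suv.T @ Suu_isqrt
    lam = np.linalg.eigvalsh(M)
    lam = lam[lam > 1e-10]          # keep the nonzero eigenvalues
    return float(lam.mean()) if lam.size else 0.0
```

For identical subspaces `r2` equals 1, which is what the bootstrap criterion rewards.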
+
+ This procedure uses resampling to select the dimension for which the reduced features ${g}^{i}\left( {.;{\beta }^{i}}\right)$ are most stable, i.e., most reproducible, across bootstrap samples, where ${g}^{i}\left( {.;{\beta }^{i}}\right)$ is chosen in a way that aims to preserve the causal regression relationship between $A$ and the mean of $Y$ . Exploring other alternatives for choosing the structural dimension is an interesting area for future work.
+
+ ## 5 SIMULATION STUDY
+
+ Causal SDR is not well-solved by standard methods for dimension reduction such as PCA, as they do not take the feature-outcome relationship into account, nor by standard SDR methods, as they do not take confounding into account. We illustrate the utility of our proposal for causal SDR via simulation studies, and compare it with regression SDR and PCA. We also illustrate the consistency of our estimators and the procedure for selecting the structural dimension. To provide continuity with previous work, our simulation study is similar to that described in [Ma and Zhu, 2012]. All code necessary to reproduce the simulations is included with this submission and will be made publicly available on publication.
+
+ We perform 50 replications with fixed sample sizes, where the true response $\mathbb{E}\left\lbrack {Y\left( {g\left( a\right) }\right) }\right\rbrack$ is an object of dimension $d = 2$ , and the observed data distribution $p\left( {Y, A, C}\right)$ is set as follows. The dimension of the baseline factors $C$ is fixed at 4 and the observed treatment dimension $p$ is set to 6 and 12. The baseline factors $C$ are generated from a standard multivariate normal distribution. We consider two cases for the treatment vector: one where the linearity and the constant covariance conditions in regular SDR are violated, and one where these assumptions are satisfied.
+
+ Case 1. We generated ${\left( {A}_{1},{A}_{2}\right) }^{T}$ (when $p = 6$ ) and ${\left( {A}_{1},{A}_{2},{A}_{7 : {12}}\right) }^{T}$ (when $p = {12}$ ) from a multivariate normal distribution where the mean of each component is given as: ${\mu }_{1} = \mathop{\sum }\limits_{i}{C}_{i},{\mu }_{2} = \mathop{\sum }\limits_{i}{\left( -1\right) }^{i}{C}_{i},{\mu }_{7} = {C}_{1},{\mu }_{8} = {C}_{2}$ , ${\mu }_{9} = {C}_{3},{\mu }_{10} = - {C}_{1} + {C}_{2},{\mu }_{11} = - {C}_{2} + {C}_{3},{\mu }_{12} = - {C}_{3} + {C}_{4}$ , and the covariance matrix is ${\left( {\sigma }_{ij}\right) }_{\left( {p - 4}\right) \times \left( {p - 4}\right) }$ where ${\sigma }_{ij} = {0.5}^{\left| i - j\right| }$ . We generated ${A}_{3}$ from a normal distribution with mean $\left| {{A}_{1} + {A}_{2}}\right|$ and variance $\left| {A}_{1}\right|$ . ${A}_{4}$ has a normal distribution with mean ${\left| {A}_{1} + {A}_{2}\right| }^{1/2}$ and variance $\left| {A}_{2}\right|$ . ${A}_{5}$ and ${A}_{6}$ were generated from Bernoulli distributions with success probabilities $\exp \left( {A}_{2}\right) /\left\{ {1 + \exp \left( {A}_{2}\right) }\right\}$ and $\Phi \left( {A}_{2}\right)$ , respectively, where $\Phi \left( \cdot \right)$ denotes the standard normal cumulative distribution function.
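
A sketch of this Case 1 design for $p = 6$ (the function name `gen_case1` is ours, and the indexing convention for the signs in ${\mu }_{2}$ is an assumption):

```python
import numpy as np
from math import erf

def gen_case1(n, rng):
    """Illustrative Case 1 generator (p = 6): C ~ N(0, I_4); (A1, A2)
    jointly normal with C-dependent means and AR(1)-type covariance;
    A3, A4 normal with (A1, A2)-dependent means/variances; A5, A6 Bernoulli."""
    C = rng.standard_normal((n, 4))
    mu1 = C.sum(axis=1)
    mu2 = (C * (-1.0) ** np.arange(1, 5)).sum(axis=1)   # sign convention assumed
    cov = 0.5 ** np.abs(np.subtract.outer(np.arange(2), np.arange(2)))
    Z = rng.multivariate_normal(np.zeros(2), cov, size=n)
    A1, A2 = mu1 + Z[:, 0], mu2 + Z[:, 1]
    A3 = rng.normal(np.abs(A1 + A2), np.sqrt(np.abs(A1)))
    A4 = rng.normal(np.abs(A1 + A2) ** 0.5, np.sqrt(np.abs(A2)))
    A5 = rng.binomial(1, 1.0 / (1.0 + np.exp(-A2)))     # logistic link
    Phi = np.vectorize(lambda x: 0.5 * (1.0 + erf(x / np.sqrt(2.0))))
    A6 = rng.binomial(1, Phi(A2))                       # probit link
    return np.column_stack([A1, A2, A3, A4, A5, A6]), C
```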
+
+ Case 2. The treatment vector is generated from a multivariate normal distribution where the mean of each component is given as follows. ${\mu }_{1} = \mathop{\sum }\limits_{i}{C}_{i},{\mu }_{2} = \mathop{\sum }\limits_{i}{\left( -1\right) }^{i}{C}_{i},{\mu }_{3} =$ ${C}_{1} - {C}_{2} - {C}_{3} + {C}_{4},{\mu }_{4} = - {C}_{1} + {C}_{2} + {C}_{3} - {C}_{4},{\mu }_{5} =$ $\mathop{\sum }\limits_{i}{C}_{i} - 2{C}_{3},{\mu }_{6} = \mathop{\sum }\limits_{i}{C}_{i} - 2{C}_{1}$ , and ${\mu }_{6 + i} = {C}_{i},{\mu }_{9 + i} =$ $- {C}_{i}$ for $i = 1,2,3$ , and the covariance matrix is ${\left( {\sigma }_{ij}\right) }_{p \times p}$ where ${\sigma }_{ij} = {0.5}^{\left| i - j\right| }$ .
+
+ The response variable is generated using
+
+ $$
+ Y = {A}^{T}{\beta }_{1} + {\left( {A}^{T}{\beta }_{2}\right) }^{2} + \mathop{\sum }\limits_{{i = 1}}^{4}{C}_{i} + \left\{ {\mathop{\sum }\limits_{{j = 1}}^{p}{A}_{j}}\right\} \times \left\{ {\mathop{\sum }\limits_{{i = 1}}^{4}{C}_{i}}\right\} + \epsilon ,
+ $$
+
+ where the error term $\epsilon$ is generated from a standard normal distribution. For $p = 6$ , we set ${\beta }_{1} = {\left( 1,1,1,1,1,1\right) }^{T}/\sqrt{6}$ , and ${\beta }_{2} = {\left( 1, - 1,1, - 1,1, - 1\right) }^{T}/\sqrt{6}$ . For $p = {12}$ , the last 6 components of ${\beta }_{1}$ and ${\beta }_{2}$ are identically zero.
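
The outcome model can be sketched as follows for $p = 6$ (the helper name `gen_outcome` is ours; we read the quadratic term as ${\left( {A}^{T}{\beta }_{2}\right) }^{2}$):

```python
import numpy as np

def gen_outcome(A, C, rng):
    """Illustrative response model (p = 6):
    Y = A'b1 + (A'b2)^2 + sum(C) + (sum(A)) * (sum(C)) + eps."""
    p = A.shape[1]
    b1 = np.ones(p) / np.sqrt(p)
    b2 = (-1.0) ** np.arange(p) / np.sqrt(p)   # alternating signs (1, -1, ...)
    eps = rng.standard_normal(A.shape[0])
    return A @ b1 + (A @ b2) ** 2 + C.sum(1) + A.sum(1) * C.sum(1) + eps
```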
+
+ As mentioned in Section 3.2, Theorem 1 provides the whole class of estimating equations for a given $\widetilde{U}\left( \beta \right)$ . For simplicity, we assume $\mathbb{E}\left\lbrack {\alpha \left( A\right) \mid g\left( {A;\beta }\right) }\right\rbrack = 0$ , and therefore $\widetilde{U}\left( \beta \right) = \{ Y - \ell \left( {g\left( {A;\beta }\right) }\right) \} \times \alpha \left( A\right)$ in the following simulations. The performance of the estimates was computed using the distance between true $\beta$ and $\widehat{\beta }$ , defined as the Frobenius norm of the matrix $\widehat{\beta }{\left( {\widehat{\beta }}^{T}\widehat{\beta }\right) }^{-1}{\widehat{\beta }}^{T} - \beta {\left( {\beta }^{T}\beta \right) }^{-1}{\beta }^{T}$ .
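
This distance compares the projection matrices onto the two column spaces, so it is invariant to the choice of basis for the estimated subspace. A minimal numpy sketch (the helper name `proj_dist` is ours):

```python
import numpy as np

def proj_dist(b_hat, b):
    """Frobenius norm between the projection matrices onto the column
    spaces of b_hat and b (each p x d); invariant to basis changes."""
    P = lambda B: B @ np.linalg.inv(B.T @ B) @ B.T
    return float(np.linalg.norm(P(b_hat) - P(b)))
```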
+
+ Simulation 1. In this set of simulations, we aim to evaluate the performance of different estimation strategies for $\beta$ and fix the sample size to 200. The results for both Case 1 and Case 2 when $p = 6$ are presented in Fig. 1, and the results when $p = {12}$ are deferred to the appendix. In each case, there are 4 different boxplots. The first one, from the left, labeled Reg, corresponds to the semiparametric SDR estimating equation (1). Since regular SDR ignores the influence of the confounding variables $C$ , the estimates do not capture the true causal relationship between $A$ and $Y$ . In the second boxplot, labeled IPW, we use the IPW estimator in (7) with the correct model for $p\left( {A \mid C}\right)$ , properly adjusting for all the confounders. This recovers a more reasonable estimate of ${\beta }^{ * }$ than the first one. However, while IPW generally performs better than PCA or regression SDR, the improvement is relatively modest. This might be due to the inefficiency of naive IPW estimators at the reported sample size. The third plot, labeled AIPW, uses the augmented IPW (AIPW) estimator corresponding to (8), which greatly outperforms the other estimators. The last plot corresponds to the classical PCA dimension reduction technique, where the treatment-outcome relation is ignored. In this case, the first two principal directions are reported as estimates of the basis of the lower dimensional space. As illustrated in the plots, this naive approach does not seek to preserve a causal, nor indeed any, relationship to the outcome.
+
+ Our main objective was to reduce the dimension of the treatment such that the cause-effect relation between treatment and outcome is preserved. In order to show that our estimating procedures actually preserve this relation, we compute the contrast between $\mathbb{E}\left\lbrack {Y\left( {g\left( {{a}_{i};\beta }\right) }\right) }\right\rbrack$ and $\mathbb{E}\left\lbrack {Y\left( {g\left( {{a}_{j};\beta }\right) }\right) }\right\rbrack$ for $i, j = 1,\ldots , n$ , given the true parameters and the estimated ones. The $n \times n$ heatmaps of effects are provided in Fig. 2 for the true effects and the ones estimated by regular SDR and AIPW. We used 500 sample points generated from Case 2 with $p = 6$ to plot these heatmaps. The plots in Fig. 2(a) and (c) demonstrate the significant similarity between the true surface and the one estimated by AIPW, whereas the surface estimated by regression SDR is very different. The root-mean-squared errors between the true causal surface and the ones estimated by AIPW and regular regression SDR are 0.48 and 14.29, respectively.
+
+ ![0196394a-4400-7a4b-aee8-263744e909ca_6_187_184_642_806_0.jpg](images/0196394a-4400-7a4b-aee8-263744e909ca_6_187_184_642_806_0.jpg)
+
+ Figure 1: Boxplots of Frobenius norms between true and estimated parameters in simulations $\left( {p = 6}\right)$ .
+
+ Simulation 2. We also evaluate the performance of our bootstrap procedure for estimating the structural dimension $d$ , discussed in Section 4. We use the same data generating process as in Simulation 1, with $p = 6$ and $n = {200}$ . We set the bootstrap size to $B = {50}$ . The relative frequencies of the selected dimensions are reported in the appendix; they reveal that the bootstrap procedure reliably recovers the true structural dimension, namely 2 in both cases ( ${98}\%$ of the time in Case 1 and ${90}\%$ of the time in Case 2).
+
+ Simulation 3. We also demonstrate the effect of sample size on the IPW and AIPW estimators of $\beta$ in the causal SDR model. Results are presented in the appendix.
+
+ ## 6 DATA APPLICATION
+
+ We now illustrate our methods using a cohort of patients treated with radiation therapy for head and neck cancer. The cohort consists of 613 patients who received radiation therapy at the $\mathrm{X}$ hospital prior to 2016. Radiation therapy is one of the most effective modalities for the treatment of head and neck cancers. However, because of the complex shape of target volumes in close proximity to sensitive organs, it may be associated with acute and late radiation morbidities such as xerostomia, mucositis, and dysphagia that affect the patient's quality of life. Such morbidities can lead to severe reduction in food intake and undesirable and possibly dangerous weight loss. There are prospective studies that evaluated risk factors for weight loss in patients who undergo radiation therapy [Johnston et al., 1982, Cacicedo et al., 2014]. However, a proper analysis of whether radiation causes weight loss has not yet been reported, likely due to the methodological challenges involved in using a high dimensional variable such as radiation therapy as a treatment in causal analysis.
+
+ Here, we focus on the parotid glands which are incidentally irradiated by radiation and examine the summary measures of radiation therapy given by the cumulative dose-volume histograms extracted from the raw voxel maps of radiation doses. In particular, we looked at 5 equally spaced percentages of volume to construct a vector of treatment doses. We used weight loss as the outcome of interest, which was defined as the difference between weight measured within 100 to 160 days after the completion of treatment and the weight measured during consultation before the start of treatment. The data has records on demographics such as age, gender, race, and baseline clinical factors such as whether the patient had used feeding tubes and/or received chemotherapy before the initiation of treatment. We assumed these variables are sufficient to control for confounding and thus would ensure the conditional ignorability assumption was met.
+
+ There exists a rich literature relating parotid dose-volume characteristics to radiotherapy-induced salivary toxicity. It has been shown that the mean dose to the parotid glands correlates strongly with xerostomia and salivary dysfunction which are risk factors of weight loss [Deasy et al., 2010]. In light of such studies, we assume there exists a single dimension in the radiation exposure that captures the relationships between exposure and side effects including weight loss. Therefore, we set the structural dimension $d$ to be one. We set the mapping function $g\left( {.;\beta }\right)$ to be linear in its parameters $\beta$ , and use Bayesian additive regression trees to fit all nuisance models. The code is provided as part of the supplementary materials. The oncology data were excluded for reasons of patient confidentiality.
+
+ We generated $n \times n$ heatmaps in Fig. 3 to illustrate the cause-effect relationship between radiation treatment and weight loss. We use the AIPW estimator obtained from Theorem 1.
+
+ ![0196394a-4400-7a4b-aee8-263744e909ca_7_151_176_1359_388_0.jpg](images/0196394a-4400-7a4b-aee8-263744e909ca_7_151_176_1359_388_0.jpg)
+
+ Figure 2: Heatmaps of true causal effects and effects computed by estimating $\beta$ via the regular SDR and the AIPW estimators. Heatmaps are antidiagonally symmetric.
+
+ ![0196394a-4400-7a4b-aee8-263744e909ca_7_288_709_433_391_0.jpg](images/0196394a-4400-7a4b-aee8-263744e909ca_7_288_709_433_391_0.jpg)
+
+ Figure 3: Heatmap to illustrate the causal effect of radiation on weight loss, where effects are computed by estimating $\beta$ via the AIPW estimator. Heatmap is antidiagonally symmetric with opposite color tones.
+
+ The absolute values on the plots are antidiagonally symmetric. Radiation doses were sorted in increasing order along both axes. We interpret the heatmaps as follows. Consider the ${\left( i, i\right) }^{\text{th }}$ point on the plot and draw a horizontal line through it. Since radiation doses were sorted in increasing order, the radiation value at any point on the line to the right of $\left( {i, i}\right)$ is higher than the radiation value at the ${\left( i, i\right) }^{\text{th }}$ point; for any point to the left of $\left( {i, i}\right)$ , the radiation value is lower. The value at the ${\left( k, i\right) }^{\text{th }}$ coordinate corresponds to the contrast $\mathbb{E}\left\lbrack {Y\left( {g\left( {{a}_{k};\beta }\right) }\right) - Y\left( {g\left( {{a}_{i};\beta }\right) }\right) }\right\rbrack$ . Consequently, if $k > i$ , then a red dot at the $\left( {k, i}\right)$ coordinate implies that an increase in radiation doses leads to an increase in weight loss, while a blue dot would imply that an increase in radiation doses does not lead to an increase in weight loss. Similarly, a blue dot at $\left( {k, i}\right)$ , for $k < i$ , would imply that a decrease in radiation leads to a decrease in weight loss; the reverse is implied when the dot is red. Focusing on the bottom right triangle, we note that most of the area is red, implying that as we increase the amount of radiation, the severity of weight loss increases. Thus, radiation therapy is potentially a cause of weight loss among patients who undergo the treatment.
+
+ We investigated the relationship between the treatment and outcome as the treatment size increases by selecting larger numbers of equally spaced percentages of volume in the dose-volume histograms. The plots are provided in the supplement. Throughout the experiment, we used a crude summary of the treatment that itself had dimension greater than one. A more fine-grained approach is to look at the raw voxel maps. A voxel-based approach would identify the relations between radiation-induced morbidity and local dose release, thus providing a potentially better insight into the spatial signature of radiation sensitivity in composite regions like the head and neck district [Monti et al., 2017]. Given the small cohort of patients that we have access to, a voxel-based approach would fall into the $p \gg n$ paradigm and would require strong sparsity assumptions [Li, 2007]. This is an interesting and challenging direction for future work.
+
+ ## 7 CONCLUSIONS
+
+ In this paper, we have described a generalization of the semiparametric sufficient dimension reduction (SDR) approach for regression problems described in [Ma and Zhu, 2012] to causal SDR. Specifically, we developed a method that reduces the dimension of a multidimensional treatment while preserving the causal relationship between the treatment and the outcome, quantified as a counterfactual mean. Using ideas from structural models [Robins, 1999], we provided semiparametric estimators for the parameters of the function that maps the multidimensional treatment to a lower dimensional subspace. We have shown that our estimator exhibits "2x2 robustness": it remains consistent if, within each of two pairs of nuisance models, at least one model is specified correctly. In order to scale our methods to high dimensional applied settings, such as fMRI scans, text data, or radiation oncology voxel data, we need to incorporate ideas from parametric modeling and sparsity within a semiparametric framework. Another natural extension for future work is to apply these methods to classical causal inference in longitudinal studies, where multiple time points render a collection of binary treatments a multidimensional object. Our causal SDR approach would provide an alternative to the parametric marginal structural models typically employed in such settings.
+
+ H. Bang and J. M. Robins. Doubly robust estimation in missing data and causal inference models. Biometrics, 61:962-972, 2005.
+
+ Ershad Banijamali, Amir-Hossein Karimi, and Ali Ghodsi. Deep variational sufficient dimensionality reduction. arXiv preprint arXiv:1812.07641, 2018.
+
+ Rohit Bhattacharya, Razieh Nabi, and Ilya Shpitser. Semiparametric inference for causal effects in graphical models with hidden variables. arXiv preprint arXiv:2003.12659, 2020.
+
+ Jon Cacicedo, Francisco Casquero, Lorea Martinez-Indart, Olga Del Hoyo, Alfonso Gomez de Iturriaga, Arturo Navarro, and Pedro Bilbao. A prospective analysis of factors that influence weight loss in patients undergoing radiotherapy. Chinese journal of cancer, 33(4):204, 2014.
+
+ Debo Cheng, Jiuyong Li, Lin Liu, and Jixue Liu. Sufficient dimension reduction for average causal effect estimation. arXiv preprint arXiv:2009.06444, 2020.
+
+ RD. Cook and B. Li. Dimension reduction for conditional mean in regression. The Annals of Statistics, 30:455-474, 2002.
+
+ RD. Cook and S. Weisberg. Discussion of sliced inverse regression for dimension reduction. Journal of the American Statistical Association, 86:28-33, 1991.
+
+ Joseph O Deasy, Vitali Moiseenko, Lawrence Marks, KS Clifford Chao, Jiho Nam, and Avraham Eisbruch. Radiotherapy dose-volume effects on salivary gland function. International Journal of Radiation Oncology* Biology* Physics, 76(3):S58-S63, 2010.
+
+ Y. Dong and B. Li. Dimension reduction for nonelliptically distributed predictors: Second-order moments. Biometrika, 97:279-294, 2010.
+
+ A. Fisher and E. H. Kennedy. Visually communicating and teaching intuition for influence functions. arxiv preprint: 1810.03260, 2018.
+
+ M. Gentzkow, B. Kelly, and M. Taddy. Text as data. Journal of Economic Literature, 57:535-574, 2019.
+
+ W. Hardle and TM. Stoker. Investigating smooth multiple regression by the method of average derivatives. Journal of the American Statistical Association, 84:986-995, 1989.
+
+ Zonghui Hu, Dean A Follmann, and Naisyin Wang. Estimation of mean response via the effective balancing score. Biometrika, 101(3):613-624, 2014.
+
+ H. Ichimura. Semiparametric least squares (SLS) and weighted SLS estimation of single-index models. Journal of Econometrics, 58:71-120, 1993.
+
+ Kosuke Imai and Marc Ratkovic. Covariate balancing propensity score. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 76(1):243-263, 2014.
+
+ Catherine A Johnston, Thomas J Keane, and Susan M Prudo. Weight loss in patients receiving radical radiation therapy for head and neck cancer: a prospective study. Journal of Parenteral and Enteral Nutrition, 6(5):399-402, 1982.
+
+ KC. Li. Sliced inverse regression for dimension reduction (with discussion). Journal of the American Statistical Association, 86:316-342, 1991.
+
+ L. Li. Sparse sufficient dimension reduction. Biometrika, 94:603-613, 2007.
+
+ Wei Luo and Yeying Zhu. Matching using sufficient dimension reduction for causal inference. Journal of Business & Economic Statistics, 38(4):888-900, 2020.
+
+ Shujie Ma, Liping Zhu, Zhiwei Zhang, Chih-Ling Tsai, and Raymond J Carroll. A robust and efficient approach to causal inference based on sparse sufficient dimension reduction. Annals of statistics, 47(3):1505, 2019.
+
+ Y. Ma and L. Zhu. A semiparametric approach to dimension reduction. Journal of American Statistical Association, 107:168-179, 2012.
+
+ Mara Mather, John T Cacioppo, and Nancy Kanwisher. How fMRI can inform cognitive theories. Perspectives on Psychological Science, 8(1):108-113, 2013.
+
+ Serena Monti, Giuseppe Palma, Vittoria D'Avino, Marianna Gerardi, Giulia Marvaso, Delia Ciardo, Roberto Pacelli, Barbara A Jereczek-Fossa, Daniela Alterio, and Laura Cella. Voxel-based analysis unveils regional dose differences associated with radiation-induced morbidity in head and neck cancer patients. Scientific Reports, 7(1):1-8, 2017.
+
+ Joseph D Ramsey, Stephen José Hanson, Catherine Hanson, Yaroslav O Halchenko, Russell A Poldrack, and Clark Glymour. Six problems for causal inference from fMRI. NeuroImage, 49(2):1545-1558, 2010.
+
+ S.P. Robertson, H. Quon, A. P. Kiess, J. A. Moore, W. Yang, Z. Cheng, S. Afonso, M. Allen, M. Richardson, A. Choflet, A. Sharabi, and T. R. McNutt. A data-mining framework for large scale analysis of dose-outcome relationships in a database of irradiated head and neck cancer patients. Med Phys, pages 4329-4337, 2015.
+
+ J. M. Robins. Marginal structural models versus structural nested models as tools for causal inference. In Statistical Models in Epidemiology: The Environment and Clinical Trials. NY: Springer-Verlag, 1999.
+
+ James M. Robins, Miguel Hernan, and Babette Brumback. Marginal structural models and causal inference in epidemiology. Epidemiology, 11(5):550-560, 2000.
+
+ Susan M Shortreed and Ashkan Ertefaie. Outcome-adaptive lasso: variable selection for causal inference. Biometrics, 73(4):1111-1122, 2017.
+
+ Ilya Shpitser and Judea Pearl. Identification of joint interventional distributions in recursive semi-Markovian causal models. In Proceedings of the Twenty-First National Conference on Artificial Intelligence (AAAI-06). AAAI Press, Palo Alto, 2006.
+
+ C. J. Stone. Optimal global rates of convergence for nonparametric regression. The Annals of Statistics, 10:1040-1053, 1982.
+
+ Jin Tian and Judea Pearl. A general identification condition for causal effects. In Eighteenth National Conference on Artificial Intelligence, pages 567-573, 2002. ISBN 0-262-51129-0.
+
+ Anastasios Tsiatis. Semiparametric Theory and Missing Data. Springer Science & Business Media, 2007.
+
+ Mark J Van der Laan, Sherri Rose, et al. Targeted learning: causal inference for observational and experimental data, volume 4. Springer, 2011.
+
+ Aad W Van der Vaart. Asymptotic statistics, volume 3. Cambridge university press, 2000.
+
+ S. Vansteelandt and M. Joffe. Structural nested models and g-estimation: The partially realized promise. Statistical Science, 29(4):707-731, 2014.
+
+ Z. Ye and R. E. Weiss. Using the bootstrap to select one of a new class of dimension reduction methods. Journal of the American Statistical Association, 98:968-979, 2003.
+
+ ## APPENDIX
+
+ For clearer presentation of materials and equations in this supplement, we switch to a single-column format. Appendix A contains additional discussion on how to ensure the nuisance models are trained in a congenial manner. Appendix B contains additional results with simulated data and real data application. Appendix C contains all the proofs.
+
+ ## A ADDITIONAL DISCUSSIONS
+
+ "Inverted" Structural Nested Mean Model. In order to deal with the issue of congeniality, we may opt to specify ${\mathbb{E}}_{q}\left\lbrack {Y \mid g\left( {A;\beta }\right) }\right\rbrack$ and $\widetilde{f}\left( {A, C,\beta }\right) = {\mathbb{E}}_{q}\left\lbrack {Y \mid A, C}\right\rbrack - {\mathbb{E}}_{q}\left\lbrack {Y \mid g\left( {A;\beta }\right) }\right\rbrack$ , which yield a variationally independent specification of ${\mathbb{E}}_{q}\left\lbrack {Y \mid g\left( {A;\beta }\right) }\right\rbrack$ and ${\mathbb{E}}_{q}\left\lbrack {Y \mid A, C}\right\rbrack = {\mathbb{E}}_{q}\left\lbrack {Y \mid g\left( {A;\beta }\right) }\right\rbrack + \widetilde{f}\left( {A, C,\beta }\right)$ . Consequently, the four variationally independent models we need to specify are as follows: $\ell \left( {g\left( {A;\beta }\right) }\right) ,\nu \left( {g\left( {A;\beta }\right) }\right) , p\left( {A \mid C}\right)$ , and $\widetilde{f}\left( {A, C,\beta }\right)$ . The last term in (8) can be evaluated empirically without additional modeling. Thus, we need to specify the additional nuisance model $\widetilde{f}$ . We propose to fit $\widetilde{f}$ by borrowing ideas from the theory of structural nested mean models (SNMMs) in [Vansteelandt and Joffe, 2014, Robins, 1999]. Unlike MSMs, which are regression models for causal relationships, SNMMs directly model the so-called "blip effects," namely counterfactual differences between the response to a particular treatment and the response to a reference treatment, given a particular observed trajectory. For a single treatment, this difference simplifies to $\gamma \left( {A, C;\psi }\right) = \mathbb{E}\left\lbrack {Y\left( A\right) \mid A, C}\right\rbrack - \mathbb{E}\left\lbrack {Y\left( 0\right) \mid A, C}\right\rbrack$ . Let ${U}_{sn}\left( \psi \right) \mathrel{\text{:=}} Y - \gamma \left( {A, C;\psi }\right)$ . Consequently, $\mathbb{E}\left\lbrack {{U}_{sn}\left( \psi \right) \mid A, C}\right\rbrack = \mathbb{E}\left\lbrack {Y\left( 0\right) \mid A, C}\right\rbrack = \mathbb{E}\left\lbrack {Y\left( 0\right) \mid C}\right\rbrack = \mathbb{E}\left\lbrack {{U}_{sn}\left( \psi \right) \mid C}\right\rbrack$ (by conditional ignorability). This leads to the following estimating equation, which yields a consistent estimate of the parameters $\psi$ :
344
+
345
+ $$
346
+ {\mathbb{P}}_{n}\left\lbrack {\{ d\left( {A, C}\right) - \mathbb{E}\left\lbrack {d\left( {A, C}\right) \mid C}\right\rbrack \} \times \left\{ {{U}_{sn}\left( \psi \right) - \mathbb{E}\left\lbrack {{U}_{sn}\left( \psi \right) \mid C}\right\rbrack }\right\} }\right\rbrack = 0,
347
+ $$
348
+
349
+ where $d\left( {A, C}\right)$ is a function of $A$ and $C$ with the same dimension as $\psi$ [Vansteelandt and Joffe, 2014]. Assuming $\widetilde{f}$ is parameterized by $\psi$ , we now show that estimating $\psi$ can be viewed as an estimation problem for a kind of "inverted SNMM."
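To make this concrete, the following is a minimal numerical sketch of solving such an estimating equation (our own toy illustration, not the authors' code). It assumes a scalar treatment, a linear blip $\gamma(A, C; \psi) = \psi A$, the choice $d(A, C) = A$, and linear working models for the conditional expectations given $C$; under these choices the estimating equation is linear in $\psi$ and has a closed-form solution.

```python
import numpy as np

rng = np.random.default_rng(0)
n, psi_true = 20_000, 1.5

# Simulated data: C confounds both A and Y; the blip is gamma(A, C; psi) = psi * A.
C = rng.normal(size=n)
A = 0.5 * C + rng.normal(size=n)
Y = psi_true * A + 2.0 * C + rng.normal(size=n)

def ols_fit_predict(x, y):
    """Fitted values of a linear-in-x working model for E[y | x] (a modeling assumption)."""
    Z = np.column_stack([np.ones_like(y), x])
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return Z @ coef

# With d(A, C) = A and U(psi) = Y - psi * A, the estimating equation
#   P_n[{d - E[d | C]} * {U(psi) - E[U(psi) | C]}] = 0
# is linear in psi, so it solves in closed form via residualized moments.
A_res = A - ols_fit_predict(C, A)   # d(A, C) - E-hat[d | C]
Y_res = Y - ols_fit_predict(C, Y)   # Y - E-hat[Y | C]
psi_hat = np.sum(A_res * Y_res) / np.sum(A_res * A_res)
print(f"psi_hat = {psi_hat:.3f}")
```

With linear working models this reduces to the familiar residual-on-residual (partialling-out) form of g-estimation, and the estimate concentrates around the true blip parameter as $n$ grows.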
350
+
351
+ Lemma 4. Let ${U}_{dim}\left( \psi \right) = Y - \widetilde{f}\left( {A, C,\beta ;\psi }\right)$ , and fix any $d\left( {A, C}\right)$ . If either $\mathbb{E}\left\lbrack {d\left( {A, C}\right) \mid g\left( {A;\beta }\right) }\right\rbrack$ or $\mathbb{E}\left\lbrack {{U}_{dim}\left( \psi \right) \mid g\left( {A;\beta }\right) }\right\rbrack$ is correctly specified, the following estimating equation yields a consistent estimator of $\psi$ :
352
+
353
+ $$
354
+ {\mathbb{P}}_{n}\left\lbrack {\{ d\left( {A, C}\right) - \mathbb{E}\left\lbrack {d\left( {A, C}\right) \mid g\left( {A;\beta }\right) }\right\rbrack \} \times \left\{ {{U}_{dim}\left( \psi \right) - \mathbb{E}\left\lbrack {{U}_{dim}\left( \psi \right) \mid g\left( {A;\beta }\right) }\right\rbrack }\right\} }\right\rbrack = 0.
355
+ $$
356
+
357
+ For the purposes of robustness, specifying both $\mathbb{E}\left\lbrack {\widetilde{f} \mid g\left( {A;\beta }\right) }\right\rbrack$ and $\mathbb{E}\left\lbrack {{U}_{dim}\left( \psi \right) \mid g\left( {A;\beta }\right) }\right\rbrack$ correctly is part of the correct specification of $\mathbb{E}\left\lbrack {U\left( \beta \right) \mid A, C}\right\rbrack$ , given the type of estimation strategy we use.
358
+
359
+ The implementation provided in Section 4 can be modified to take advantage of congenial nuisance models. Right before step (c), we need to estimate $\widehat{{f}^{\left( j\right) }}\left( {A, C,{\beta }^{\left( j\right) };\widehat{\psi }}\right)$ using Lemma 4, and modify step (c) by letting $\mathbb{E}\left\lbrack {{U}^{q}\left( {\beta }^{\left( j\right) }\right) \mid A, C}\right\rbrack = \widehat{{f}^{\left( j\right) }} \times \left\{ {\alpha \left( A\right) - \widehat{\nu }\left( {g\left( {A;{\beta }^{\left( j\right) }}\right) }\right) }\right\}$ . A downside of estimating congenial models is that the overall procedure becomes quite computationally intensive.
360
+
361
+ ## B ADDITIONAL EXPERIMENTS
362
+
363
+ ### B.1 SIMULATIONS
364
+
365
+ Simulation 1. The boxplots in Fig. 4(a) illustrate the performance of different estimation strategies for $\beta$ in both Case 1 and Case 2, when the sample size is set to 200 and $p = {12}$ .
366
+
367
+ Simulation 2. The relative frequencies of the selected dimension are reported in the table below, which reveals that the bootstrap procedure reliably recovers the true structural dimension, namely 2.
368
+
369
+ | Model $\left( {p = 6}\right)$ | $\widehat{d} = 1$ | $\widehat{d} = 2$ | $\widehat{d} = 3$ | $\widehat{d} = 4$ | $\widehat{d} = 5$ |
+ | --- | --- | --- | --- | --- | --- |
+ | Case 1 | 0% | 98% | 2% | 0% | 0% |
+ | Case 2 | 0% | 90% | 10% | 0% | 0% |
370
+
371
+ ![0196394a-4400-7a4b-aee8-263744e909ca_11_187_187_1346_927_0.jpg](images/0196394a-4400-7a4b-aee8-263744e909ca_11_187_187_1346_927_0.jpg)
372
+
373
+ Figure 4: (a) Boxplots of Frobenius norms between true and estimated parameters in simulations $\left( {p = {12}}\right)$ ; (b) Illustration of the effect of sample size on the Frobenius norms between true and estimated parameters using data generated from Case 2 with $p = 6$ .
374
+
375
+ Simulation 3. In the third set of simulations, we demonstrate the effect of sample size on the IPW and AIPW estimators of $\beta$ in the causal SDR model. Results are shown in Fig. 4(b). While both estimators are consistent under our model specification, AIPW exhibits favorable convergence rates compared to IPW, as expected.
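The qualitative comparison is easy to reproduce in a stripped-down setting. The toy simulation below (our own construction; the paper's actual simulation design differs) estimates the mean outcome under a known intervention density $p^*(A) = N(0,1)$ with both a pure IPW estimator and an augmented (AIPW) estimator that additionally uses the true outcome regression, and compares their sampling variances over repeated draws.

```python
import numpy as np

rng = np.random.default_rng(2)

def gauss_pdf(x, mean, sd=1.0):
    return np.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

def one_replication(n=1000):
    # Confounded draws: C -> A and C -> Y; the target is E_q[Y] under p*(A) = N(0, 1).
    C = rng.normal(size=n)
    A = 0.5 * C + rng.normal(size=n)
    Y = A + C + rng.normal(size=n)
    w = gauss_pdf(A, 0.0) / gauss_pdf(A, 0.5 * C)   # p*(A) / p(A | C), both known here
    ipw = np.mean(w * Y)
    m = A + C            # true outcome regression E[Y | A, C]
    m_star = C           # integral of m(a, C) over p*(a) = N(0, 1)
    aipw = np.mean(w * Y - w * m + m_star)
    return ipw, aipw

reps = np.array([one_replication() for _ in range(300)])
var_ipw, var_aipw = reps.var(axis=0)
print(f"IPW variance:  {var_ipw:.5f}")
print(f"AIPW variance: {var_aipw:.5f}")
```

Both estimators are unbiased for the target (which equals 0 here), but augmentation removes the variability contributed by $w \times m(A, C)$, so the AIPW sampling variance is visibly smaller, in line with the pattern in Fig. 4(b).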
376
+
377
+ ### B.2 REAL DATA
378
+
379
+ Assume treatment is collected using $p$ equally spaced percentages of volume. In other words, treatment is assumed to be a vector in ${\mathbb{R}}^{p}$ whose $i$ -th element corresponds to the radiation dose at the $i$ -th of the $p$ volume percentages of the parotid glands. The effect of radiation on weight loss is illustrated in Fig. 5 by allowing $p$ to be 10 and 20, and reducing the treatment to one dimension. We use IPW estimators to calculate the effects. Both plots agree with our stated conclusion in the main body of the manuscript, i.e., radiation has a negative effect on weight loss.
380
+
381
+ ## C PROOFS
382
+
383
+ ## Lemma 1
384
+
385
+ Proof. Choosing $\phi \left( {A, C}\right) = 0$ in Theorem 1 yields (7). All elements of the orthocomplement of the nuisance tangent space are mean zero under the true distribution (we give an argument for elements of ${\widetilde{\Lambda }}_{\eta }^{ \bot }$ in Proposition 1). Since $\widetilde{U}\left( \beta \right)$ exhibits double robustness, i.e., remaining consistent if either $\ell \left( {g\left( {A;\beta }\right) }\right)$ or $\nu \left( {g\left( {A;\beta }\right) }\right)$ is correctly specified [Ma and Zhu, 2012], the correct specification of $p\left( {A \mid C}\right)$ yields our conclusion.
386
+
387
+ Proposition 1. For all $\widetilde{U}\left( {\beta }^{ * }\right) \in {\widetilde{\Lambda }}_{\eta }^{ \bot },\mathbb{E}\left\lbrack {\widetilde{U}\left( {\beta }^{ * }\right) }\right\rbrack = 0$ .
388
+
389
+ ![0196394a-4400-7a4b-aee8-263744e909ca_12_277_175_1239_485_0.jpg](images/0196394a-4400-7a4b-aee8-263744e909ca_12_277_175_1239_485_0.jpg)
390
+
391
+ Figure 5: Heatmaps to illustrate the causal effect of radiation on weight loss. Treatment is collected using (right) 10 and (left) 20 equally spaced percentages of volume in parotid glands.
392
+
393
+ Proof. The second and third terms of $\widetilde{U}\left( {\beta }^{ * }\right)$ are mean zero by construction. The first term, under the true distribution, which satisfies ${\mathbb{E}}_{q}\left\lbrack {Y \mid A}\right\rbrack = {\mathbb{E}}_{q}\left\lbrack {Y \mid g\left( {A;\beta }\right) }\right\rbrack$ , is
394
+
395
+ $$
396
+ \mathbb{E}\left\lbrack {\frac{{p}^{ * }\left( A\right) }{p\left( {A \mid C}\right) } \times \widetilde{U}\left( \beta \right) }\right\rbrack = \int \widetilde{U}\left( \beta \right) \times p\left( {Y \mid A, C}\right) \times {p}^{ * }\left( A\right) \times p\left( C\right) d{\mu }_{Y, A, C}
397
+ $$
398
+
399
+ $$
400
+ = \int \{ Y - \ell \left( {g\left( {A;\beta }\right) }\right) \} \times \{ \alpha \left( A\right) - \nu \left( {g\left( {A;\beta }\right) }\right) \} \times q\left( {Y, A, C}\right) d{\mu }_{Y, A, C}
401
+ $$
402
+
403
+ $$
404
+ = {\mathbb{E}}_{q}\left\lbrack {\{ Y - \ell \left( {g\left( {A;\beta }\right) }\right) \} \times \{ \alpha \left( A\right) - \nu \left( {g\left( {A;\beta }\right) }\right) \} }\right\rbrack
405
+ $$
406
+
407
+ $$
408
+ = {\mathbb{E}}_{q}\left\lbrack {\{ \alpha \left( A\right) - \nu \left( {g\left( {A;\beta }\right) }\right) \} \times {\mathbb{E}}_{q}\left\lbrack {\{ Y - \ell \left( {g\left( {A;\beta }\right) }\right) \} \mid A}\right\rbrack }\right\rbrack
409
+ $$
410
+
411
+ $$
412
+ = {\mathbb{E}}_{q}\left\lbrack {\{ \alpha \left( A\right) - \nu \left( {g\left( {A;\beta }\right) }\right) \} \times \left\{ {{\mathbb{E}}_{q}\left\lbrack {Y \mid A}\right\rbrack - \ell \left( {g\left( {A;\beta }\right) }\right) }\right\} }\right\rbrack
413
+ $$
414
+
415
+ $$
416
+ = 0\text{.}
417
+ $$
418
+
419
+ since $\ell \left( {g\left( {A;\beta }\right) }\right) \mathrel{\text{:=}} {\mathbb{E}}_{q}\left\lbrack {Y \mid A}\right\rbrack$ . Note that even if $\ell \left( {g\left( {A;\beta }\right) }\right)$ is misspecified, the expectation will still be zero if $\nu \left( {g\left( {A;\beta }\right) }\right)$ is correctly specified, as shown by iterated expectations.
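The zero-mean property established above is easy to verify numerically. In the following self-contained check (a toy construction of ours, with $A$ sampled directly from the intervention density so that plain sample averages play the role of $\mathbb{E}_q$), the moment vanishes when $\ell$ is correct even though $\nu$ is deliberately misspecified, and vice versa:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
beta = np.array([1.0, -0.5])   # hypothetical true SDR direction (illustrative choice)

# A is drawn directly from the intervention density; Y depends on A only through g = beta'A.
A = rng.normal(size=(n, 2))
g = A @ beta
Y = np.sin(g) + rng.normal(size=n)

alpha = A[:, 0]                # an arbitrary choice of alpha(A)

# Case 1: ell correct (E[Y | A] = sin(g)), nu deliberately misspecified.
moment1 = np.mean((Y - np.sin(g)) * (alpha - 0.3 * g))

# Case 2: ell misspecified, nu correct; here E[alpha(A) | g] = 0.8 * g
# because Cov(A_1, g) / Var(g) = 1 / 1.25 = 0.8 under this design.
moment2 = np.mean((Y - 0.5 * g) * (alpha - 0.8 * g))

print(f"moment (ell correct): {moment1:.4f}")
print(f"moment (nu correct):  {moment2:.4f}")
```

Both sample moments come out close to zero, illustrating the double robustness in $(\ell, \nu)$ discussed above.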
420
+
421
+ ## Theorem 1
422
+
423
+ Proof. This is a direct consequence of Theorems 3.1 and 3.2 in Robins [1999], and results in Appendix 3 of Ma and Zhu [2012].
424
+
425
+ ## Lemma 2
426
+
427
+ Proof. Plugging in the optimal $\phi \left( {A, C}\right)$ shows that $\widetilde{U}\left( {\beta }^{ * }\right)$ equals
428
+
429
+ $$
430
+ \frac{{p}^{ * }\left( A\right) }{p\left( {A \mid C}\right) } \times \widetilde{U}\left( \beta \right) - \mathbb{E}\left\lbrack {\left. {\frac{{p}^{ * }\left( A\right) }{p\left( {A \mid C}\right) }\widetilde{U}\left( \beta \right) }\right| \;A, C}\right\rbrack + \mathbb{E}\left\lbrack {\left. {\mathbb{E}\left\lbrack {\left. {\frac{{p}^{ * }\left( A\right) }{p\left( {A \mid C}\right) }\widetilde{U}\left( \beta \right) }\right| \;A, C}\right. }\right| \;C}\right\rbrack .
431
+ $$
432
+
433
+ The conclusion follows, since
434
+
435
+ $$
436
+ \mathbb{E}\left\lbrack {\mathbb{E}\left\lbrack {\left. {\frac{{p}^{ * }\left( A\right) }{p\left( {A \mid C}\right) }\widetilde{U}\left( \beta \right) }\right| \;A, C}\right\rbrack \mid C}\right\rbrack
437
+ $$
438
+
439
+ $$
440
+ = \mathbb{E}\left\lbrack {\left. {\frac{{p}^{ * }\left( A\right) }{p\left( {A \mid C}\right) }\mathbb{E}\left\lbrack {\widetilde{U}\left( \beta \right) \mid A, C}\right\rbrack }\right| \;C}\right\rbrack
441
+ $$
442
+
443
+ $$
444
+ = \int \frac{{p}^{ * }\left( A\right) }{p\left( {A \mid C}\right) }\mathbb{E}\left\lbrack {\widetilde{U}\left( \beta \right) \mid A, C}\right\rbrack p\left( {Y, A \mid C}\right) d{\mu }_{Y, A}
445
+ $$
446
+
447
+ $$
448
+ = \int \mathbb{E}\left\lbrack {\widetilde{U}\left( \beta \right) \mid A, C}\right\rbrack p\left( {Y \mid A, C}\right) {p}^{ * }\left( A\right) d{\mu }_{Y, A}
449
+ $$
450
+
451
+ $$
452
+ = \int \mathbb{E}\left\lbrack {\widetilde{U}\left( \beta \right) \mid A, C}\right\rbrack q\left( {Y, A \mid C}\right) d{\mu }_{Y, A}
453
+ $$
454
+
455
+ $$
456
+ = {\mathbb{E}}_{q}\left\lbrack {\mathbb{E}\left\lbrack {\widetilde{U}\left( \beta \right) \mid A, C}\right\rbrack \mid C}\right\rbrack .
457
+ $$
458
+
459
+ ## Lemma 3
460
+
461
+ Proof. Assume either $\ell \left( {g\left( {A;\beta }\right) }\right)$ or $\nu \left( {g\left( {A;\beta }\right) }\right)$ , and $p\left( {A \mid C}\right)$ are correctly specified. Consequently, the second and third terms in the expression of $\widetilde{U}\left( {\beta }^{ * }\right)$ are both mean zero, even under an incorrect specification of $\mathbb{E}\left\lbrack {\widetilde{U}\left( \beta \right) \mid A, C}\right\rbrack$ . Following the same argument as in Proposition 1, the first term is zero if either $\ell \left( {g\left( {A;\beta }\right) }\right)$ or $\nu \left( {g\left( {A;\beta }\right) }\right)$ is correctly specified.
462
+
463
+ Assume either $\ell \left( {g\left( {A;\beta }\right) }\right)$ or $\nu \left( {g\left( {A;\beta }\right) }\right)$ , and $\mathbb{E}\left\lbrack {\widetilde{U}\left( \beta \right) \mid A, C}\right\rbrack$ are correctly specified. Consequently, the first two terms in the expression of $\widetilde{U}\left( {\beta }^{ * }\right)$ are both mean zero, even under an incorrect specification of $p\left( {A \mid C}\right)$ . For the last term, we have:
464
+
465
+ $$
466
+ \mathbb{E}\left\lbrack {{\mathbb{E}}_{q}\left\lbrack {\mathbb{E}\left\lbrack {\widetilde{U}\left( \beta \right) \mid A, C}\right\rbrack \mid C}\right\rbrack }\right\rbrack
467
+ $$
468
+
469
+ $$
470
+ = \mathbb{E}\left\lbrack {{\mathbb{E}}_{q}\left\lbrack {\int \widetilde{U}\left( \beta \right) \times p\left( {Y \mid A, C}\right) d{\mu }_{Y} \mid C}\right\rbrack }\right\rbrack
471
+ $$
472
+
473
+ $$
474
+ = \mathbb{E}\left\lbrack {\int \left( {\int \widetilde{U}\left( \beta \right) \times p\left( {Y \mid A, C}\right) d{\mu }_{Y}}\right) \times {p}^{ * }\left( A\right) d{\mu }_{A}}\right\rbrack
475
+ $$
476
+
477
+ $$
478
+ = \int \left( {\iint \widetilde{U}\left( \beta \right) \times p\left( {Y \mid A, C}\right) \times {p}^{ * }\left( A\right) d{\mu }_{Y}d{\mu }_{A}}\right) \times p\left( C\right) d{\mu }_{C}
479
+ $$
480
+
481
+ $$
482
+ = \int \widetilde{U}\left( \beta \right) \times p\left( {Y \mid A, C}\right) \times {p}^{ * }\left( A\right) \times p\left( C\right) d{\mu }_{Y, A, C}
483
+ $$
484
+
485
+ $$
486
+ = \int \widetilde{U}\left( \beta \right) \times q\left( {Y, A, C}\right) d{\mu }_{Y, A, C}
487
+ $$
488
+
489
+ $$
490
+ = {\mathbb{E}}_{q}\left\lbrack {\widetilde{U}\left( \beta \right) }\right\rbrack \text{.}
491
+ $$
492
+
493
+ We conclude the proof by noting that ${\mathbb{E}}_{q}\left\lbrack {\widetilde{U}\left( \beta \right) }\right\rbrack$ is zero if either $\ell \left( {g\left( {A;\beta }\right) }\right)$ or $\nu \left( {g\left( {A;\beta }\right) }\right)$ is correctly specified. Note that the normalized version of $\widetilde{U}\left( {\beta }^{ * }\right)$ , that is $\mathbb{E}{\left\lbrack \frac{\partial \widetilde{U}\left( {\beta }^{ * }\right) }{\partial \beta }\right\rbrack }^{-1} \times \widetilde{U}\left( {\beta }^{ * }\right)$ , is an influence function that lives in the orthogonal complement of the nuisance tangent space, ${\widetilde{\Lambda }}_{\eta }^{ \bot }$ . Therefore, the estimator obtained by solving ${\mathbb{P}}_{n}\left\lbrack {\widetilde{U}\left( {\beta }^{ * }\right) }\right\rbrack = 0$ is RAL: it is consistent and asymptotically normal, with asymptotic variance equal to the variance of the influence function [Van der Vaart, 2000, Tsiatis, 2007].
494
+
495
+ ## Theorem 2
496
+
497
+ Proof. Let $\beta \in {\mathbb{R}}^{q}$ and let $\eta$ be infinite dimensional. We prove this theorem for a parametric submodel of the semiparametric model $\{ p\left( {Z;\beta ,\eta }\right) \}$ . With a slight abuse of notation, we denote by $\eta \in {\mathbb{R}}^{r}$ the nuisance parameters within the parametric submodel. The Taylor series expansion of $\widetilde{U}\left( {Z;\widehat{\beta }\left( \widehat{\eta }\right) ,\widehat{\eta }}\right)$ around ${\beta }_{0}$ is
498
+
499
+ $$
500
+ 0 = \frac{1}{\sqrt{n}}\mathop{\sum }\limits_{{i = 1}}^{n}\widetilde{U}\left( {{z}_{i};\widehat{\beta }\left( \widehat{\eta }\right) ,\widehat{\eta }}\right) \tag{9}
501
+ $$
502
+
503
+ $$
504
+ = \underset{\left( a\right) }{\underbrace{\frac{1}{\sqrt{n}}\mathop{\sum }\limits_{{i = 1}}^{n}\widetilde{U}\left( {{z}_{i};{\beta }_{0},\widehat{\eta }}\right) }} + \underset{\left( b\right) }{\underbrace{\frac{\partial }{\partial \beta }\left\{ {\frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}\widetilde{U}\left( {{z}_{i};{\beta }_{0},\widehat{\eta }}\right) }\right\} }}\sqrt{n}\left( {\widehat{\beta } - {\beta }_{0}}\right) + {o}_{p}\left( 1\right)
505
+ $$
506
+
507
+ $$
508
+ \left( a\right) = \frac{1}{\sqrt{n}}\mathop{\sum }\limits_{{i = 1}}^{n}\widetilde{U}\left( {{z}_{i};{\beta }_{0},{\eta }_{0}}\right)
509
+ $$
510
+
511
+ $$
512
+ + \frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}{\left( \frac{\partial \widetilde{U}\left( {{z}_{i};{\beta }_{0},{\eta }_{0}}\right) }{\partial \eta }\right) }_{q \times r} \times \sqrt{n}\left( {\widehat{\eta } - {\eta }_{0}}\right)
513
+ $$
514
+
515
+ $$
516
+ + \frac{1}{2}\underset{1 \times 1 \times r}{\underbrace{{n}^{1/4}{\left( \widehat{\eta } - {\eta }_{0}\right) }^{\prime }}}\underset{r \times q \times r\text{ (tensor) }}{\underbrace{\left( \frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}\frac{{\partial }^{2}\widetilde{U}\left( {{z}_{i};{\beta }_{0},{\eta }_{0}}\right) }{{\partial }^{2}\eta }\right) }}\underset{r \times 1 \times 1}{\underbrace{{n}^{1/4}\left( {\widehat{\eta } - {\eta }_{0}}\right) }} + {o}_{p}\left( 1\right)
517
+ $$
518
+
519
+ $$
520
+ \left( b\right) = \underset{\left( {b}_{1}\right) }{\underbrace{\frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}{\left( \frac{\partial \widetilde{U}\left( {{z}_{i};{\beta }_{0},{\eta }_{0}}\right) }{\partial \beta }\right) }_{q \times q}}} + \frac{\partial }{\partial \beta }\left\{ {\underset{\left( {b}_{2}\right) }{\underbrace{\frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}{\left( \frac{\partial \widetilde{U}\left( {{z}_{i};{\beta }_{0},{\eta }_{0}}\right) }{\partial \eta }\right) }_{q \times r}}} \times {\left( \widehat{\eta } - {\eta }_{0}\right) }_{r \times 1}}\right\}
521
+ $$
522
+
523
+ $\left( {b}_{1}\right) : \frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}{\left( \frac{\partial \widetilde{U}}{\partial \beta }\right) }_{q \times q} \rightarrow {\mathbb{E}}_{{\theta }_{0}}{\left\lbrack \frac{\partial \widetilde{U}}{\partial \beta }\right\rbrack }_{q \times q} = - {\mathbb{E}}_{{\theta }_{0}}\left\lbrack {\widetilde{U}\left( {Z;{\theta }_{0}}\right) {S}_{\beta }^{\prime }\left( {Z;{\theta }_{0}}\right) }\right\rbrack$
524
+
525
+ $\left( {b}_{2}\right) : \frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}{\left( \frac{\partial \widetilde{U}}{\partial \eta }\right) }_{q \times r} \rightarrow {\mathbb{E}}_{{\theta }_{0}}{\left\lbrack \frac{\partial \widetilde{U}}{\partial \eta }\right\rbrack }_{q \times r} = - {\mathbb{E}}_{{\theta }_{0}}\left\lbrack {\widetilde{U}\left( {Z;{\theta }_{0}}\right) {S}_{\eta }^{\prime }\left( {Z;{\theta }_{0}}\right) }\right\rbrack = {\mathbf{0}}_{q \times r}$
526
+
527
+ Since ${n}^{1/4}\left( {\widehat{\eta } - {\eta }_{0}}\right)$ and $\frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}{\left( \frac{\partial \widetilde{U}\left( {{z}_{i};{\beta }_{0},{\eta }_{0}}\right) }{\partial \eta }\right) }_{q \times r}$ both converge in probability to zero, it follows that
528
+
529
+ $$
530
+ \frac{1}{\sqrt{n}}\mathop{\sum }\limits_{{i = 1}}^{n}\widetilde{U}\left( {{z}_{i};{\beta }_{0},\widehat{\eta }}\right) = \frac{1}{\sqrt{n}}\mathop{\sum }\limits_{{i = 1}}^{n}\widetilde{U}\left( {{z}_{i};{\beta }_{0},{\eta }_{0}}\right) + {o}_{p}\left( 1\right) .
531
+ $$
532
+
533
+ Therefore, from equation (9),
534
+
535
+ $$
536
+ \sqrt{n}\left( {\widehat{\beta } - {\beta }_{0}}\right) = \frac{1}{\sqrt{n}}\mathop{\sum }\limits_{{i = 1}}^{n}\left\{ {-{\mathbb{E}}_{{\theta }_{0}}^{-1}\left\lbrack \frac{\partial \widetilde{U}\left( {{z}_{i};{\beta }_{0},{\eta }_{0}}\right) }{\partial \beta }\right\rbrack \widetilde{U}\left( {{z}_{i};{\beta }_{0},{\eta }_{0}}\right) }\right\} + {o}_{p}\left( 1\right)
537
+ $$
538
+
539
+ which concludes the proof. This argument carries over to the case where the nuisance parameter is infinite dimensional; see Tsiatis [2007].
540
+
541
+ ## Lemma 4
542
+
543
+ Proof. Define ${U}_{\text{dim }}\left( \psi \right) = Y - \widetilde{f}\left( {A, C,\beta ;\psi }\right)$ . Therefore,
544
+
545
+ $$
546
+ \mathbb{E}\left\lbrack {{U}_{dim}\left( \psi \right) \mid A, C}\right\rbrack = \ell \left( {g\left( {A;\beta }\right) }\right) = \mathbb{E}\left\lbrack {{U}_{dim}\left( \psi \right) \mid g\left( {A;\beta }\right) }\right\rbrack .
547
+ $$
548
+
549
+ This is a situation precisely isomorphic to single treatment SNMMs above, except with the roles of $A$ and $C$ reversed (hence this is an "inverted SNMM"). Our conclusion then follows by results in [Robins et al., 2000, Vansteelandt and Joffe, 2014]. We provide a more detailed proof as follows. We have that $\widetilde{f}\left( {A, C,\beta ;\psi }\right) = \mathbb{E}\left\lbrack {Y \mid A = a, C}\right\rbrack - \ell \left( {g\left( {a;\beta }\right) }\right)$ . Therefore,
550
+
551
+ $$
552
+ \mathbb{E}\left\lbrack {Y \mid A = a, C}\right\rbrack = \ell \left( {g\left( {a;\beta }\right) }\right) + \widetilde{f}\left( {a, C,\beta ;\psi }\right) ,
553
+ $$
554
+
555
+ which we can rewrite as follows,
556
+
557
+ $$
558
+ Y = \ell \left( {g\left( {a;\beta }\right) }\right) + \widetilde{f}\left( {a, C,\beta ;\psi }\right) + \epsilon ,\text{ s.t. }\mathbb{E}\left\lbrack {\epsilon \mid C, a}\right\rbrack = 0.
559
+ $$
560
+
561
+ Observed data are instances of the form $\mathbf{Z} = \left( {C, A, Y}\right)$ . The goal is to find semiparametric estimators for $\psi$ in the semiparametric model $\mathcal{P} = \{ p\left( {\mathbf{z};\psi ,\eta \left( \cdot \right) }\right) ,\mathbf{z} = \left( {C, a, y}\right) \}$ , where the truth is ${p}_{0}\left( \mathbf{z}\right) = p\left( {\mathbf{z};{\psi }_{0},{\eta }_{0}\left( \cdot \right) }\right)$ . The observed data likelihood can be written as follows,
562
+
563
+ $$
564
+ p\left( {C, a, y}\right) = p\left( {C, a}\right) \times p\left( {y \mid a, C}\right) \mathrel{\text{:=}} p\left( {C, a}\right) \times p\left( {\epsilon \mid a, C}\right) = {\eta }_{1}\left( {C, a}\right) \times {\eta }_{2}\left( {\epsilon , a, C}\right)
565
+ $$
566
+
567
+ $$
568
+ = {\eta }_{1}\left( {C, a}\right) \times {\eta }_{2}\left( {y - \ell \left( {g\left( {a;\beta }\right) }\right) - \widetilde{f}\left( {a, C,\beta ;\psi }\right) , a, C}\right) ,
569
+ $$
570
+
571
+ where $\epsilon = Y - \ell \left( {g\left( {a;\beta }\right) }\right) - \widetilde{f}\left( {a, C,\beta ;\psi }\right)$ , ${\eta }_{1}\left( {C, a}\right)$ denotes the nuisance model for $p\left( {C, a}\right)$ , and ${\eta }_{2}\left( {\epsilon , a, C}\right)$ denotes the nuisance model for $p\left( {\epsilon \mid a, C}\right)$ , which is any density such that $\mathbb{E}\left\lbrack {\epsilon \mid a, C}\right\rbrack = 0$ . Here $\psi$ is the parameter of interest, and the nuisance parameters are $\left\{ {{\eta }_{1},{\eta }_{2},\ell \left( {g\left( {a;\beta }\right) }\right) }\right\}$ .
572
+
573
+ The nuisance tangent space of this semiparametric model, $\Lambda$ , is defined as the mean-square closure of parametric submodel nuisance tangent spaces:
574
+
575
+ $$
576
+ {\mathcal{P}}_{\psi ,\zeta } = \left\{ {p\left( {z;\psi ,\zeta }\right) = p\left( {c, a;{\zeta }_{1}}\right) \times p\left( {\epsilon \mid a, C;{\zeta }_{2}}\right) }\right\}
577
+ $$
578
+
579
+ $$
580
+ = \left\{ {p\left( {C, a;{\zeta }_{1}}\right) \times p\left( {y - \ell \left( {g\left( {a;\beta }\right) }\right) - \widetilde{f}\left( {a, C,\beta ;\psi }\right) \mid a, C;{\zeta }_{2}}\right) }\right\} ,
581
+ $$
582
+
583
+ where ${\zeta }_{1},{\zeta }_{2}$ are ${r}_{1}$ - and ${r}_{2}$ -dimensional vectors. Thus, the nuisance parameters in a parametric submodel are finite dimensional, $\zeta = \left\{ {{\zeta }_{1},{\zeta }_{2},\ell \left( {g\left( {a;\beta }\right) }\right) }\right\} .$
584
+
585
+ $$
586
+ {\Lambda }_{\zeta } = \left\{ {B \times {S}_{\zeta },\forall B}\right\} ,
587
+ $$
588
+
589
+ $$
590
+ {S}_{\zeta } = \frac{\partial \{ \log \text{ likelihood of the submodel evaluated at truth }\} }{\partial \zeta }
591
+ $$
592
+
593
+ $$
594
+ = {\left. \left\{ \left( \frac{\partial \log p\left( {z;\psi ,\zeta }\right) }{\partial {\zeta }_{1}}\right) ,\left( \frac{\partial \log p\left( {z;\psi ,\zeta }\right) }{\partial {\zeta }_{2}}\right) ,\left( \frac{\partial \log p\left( {z;\psi ,\zeta }\right) }{\partial \ell \left( {g\left( {a;\beta }\right) }\right) }\right) \right\} \right| }_{{\psi }_{0},{\zeta }_{0}}
595
+ $$
596
+
597
+ $$
598
+ = \left\{ {{S}_{{\zeta }_{1}}\left( {z;{\psi }_{0},{\zeta }_{0}}\right) ,{S}_{{\zeta }_{2}}\left( {z;{\psi }_{0},{\zeta }_{0}}\right) ,{S}_{\ell \left( {g\left( {a;\beta }\right) }\right) }\left( {z;{\psi }_{0},{\zeta }_{0}}\right) }\right\} .
599
+ $$
600
+
601
+ Hence, ${\Lambda }_{\zeta } = {\Lambda }_{{\zeta }_{1}} + {\Lambda }_{{\zeta }_{2}} + {\Lambda }_{\ell \left( {g\left( {a;\beta }\right) }\right) }$ . Here, ${S}_{{\zeta }_{1}}$ should satisfy the density conditions. In addition, ${S}_{{\zeta }_{2}}$ should satisfy the condition that $\mathbb{E}\left\lbrack {\epsilon \mid a, C}\right\rbrack = 0$ . We derive each of these subspaces using theorems in [Tsiatis, 2007] as a guideline.
602
+
603
+ (Theorem 4.6)
604
+
605
+ $$
606
+ {\Lambda }_{{\zeta }_{1}} = \{ f\left( {C, a}\right) ;\mathbb{E}\left\lbrack f\right\rbrack = 0\}
607
+ $$
608
+
609
+ (Theorem 4.7)
610
+
611
+ $$
612
+ {\Lambda }_{{\zeta }_{2}} = \{ f\left( {\epsilon , a, C}\right) ;\mathbb{E}\left\lbrack {f \mid a, C}\right\rbrack = 0,\mathbb{E}\left\lbrack {{\epsilon f} \mid a, C}\right\rbrack = 0\}
613
+ $$
614
+
615
+ (Lemma 4.3)
616
+
617
+ $$
618
+ {\Lambda }_{{\zeta }_{1}}^{ \bot } = \{ g\left( {\epsilon , a, C}\right) ;\mathbb{E}\left\lbrack {g \mid a, C}\right\rbrack = 0\}
619
+ $$
620
+
621
+ (Theorem 4.8)
622
+
623
+ $$
624
+ {\left( {\Lambda }_{{\zeta }_{1}} + {\Lambda }_{{\zeta }_{2}}\right) }^{ \bot } = \{ g\left( {C, a}\right) \epsilon \}
625
+ $$
626
+
627
+ (Equation 10)
628
+
629
+ $$
630
+ {\Lambda }_{\ell \left( {g\left( {a;\beta }\right) }\right) } = \left\{ {\frac{{\eta }_{2\epsilon }^{\prime }\left( {\epsilon , C, a}\right) }{{\eta }_{2}\left( {\epsilon , C, a}\right) }f\left( {g\left( {a;\beta }\right) }\right) }\right\}
631
+ $$
632
+
633
+ In order to derive ${\Lambda }_{\ell \left( {g\left( {a;\beta }\right) }\right) }$ , we write down the corresponding score function as follows.
634
+
635
+ $$
636
+ {S}_{\ell \left( {g\left( {a;\beta }\right) }\right) } = {\left. \frac{\partial \log p\left( {z;\psi ,\zeta }\right) }{\partial \ell \left( {g\left( {a;\beta }\right) }\right) }\right| }_{{\psi }_{0},{\zeta }_{0}}
637
+ $$
638
+
639
+ $$
640
+ = \frac{\partial \log \left( {{\eta }_{1}\left( {C, a;{\zeta }_{10}}\right) \times {\eta }_{2}\left( {y - \ell \left( {g\left( {a;\beta }\right) }\right) - \gamma \left( {C, a;\psi }\right) , C, a;{\zeta }_{20}}\right) }\right) }{\partial \ell \left( {g\left( {a;\beta }\right) }\right) }
641
+ $$
642
+
643
+ $$
644
+ = \frac{\partial \log {\eta }_{2}\left( {y - \ell \left( {g\left( {a;\beta }\right) }\right) - \gamma \left( {C, a;\psi }\right) , C, a;{\zeta }_{20}}\right) }{\partial \ell \left( {g\left( {a;\beta }\right) }\right) }
645
+ $$
646
+
647
+ $$
648
+ = \frac{\partial \log {\eta }_{2}\left( {\epsilon , C, a;{\zeta }_{20}}\right) }{\partial \epsilon } \times \frac{\partial \epsilon }{\partial \ell \left( {g\left( {a;\beta }\right) }\right) }\;\left( {\epsilon \text{ is a function of }\ell \left( {g\left( {a;\beta }\right) }\right) }\right)
649
+ $$
650
+
651
+ $$
652
+ = \frac{{\eta }_{2\epsilon }^{\prime }\left( {\epsilon , C, a}\right) }{{\eta }_{2}\left( {\epsilon , C, a}\right) }f\left( {g\left( {a;\beta }\right) }\right) . \tag{10}
653
+ $$
654
+
655
+ In order to derive ${\Lambda }_{\zeta }^{ \bot }$ , we proceed as follows. Since ${\Lambda }_{\zeta } = {\Lambda }_{{\zeta }_{1}} + {\Lambda }_{{\zeta }_{2}} + {\Lambda }_{\ell \left( {g\left( {a;\beta }\right) }\right) }$ and ${\Lambda }_{{\zeta }_{1}} + {\Lambda }_{{\zeta }_{2}} \subset {\Lambda }_{\zeta }$ , we have ${\Lambda }_{\zeta }^{ \bot } \subset {\left( {\Lambda }_{{\zeta }_{1}} + {\Lambda }_{{\zeta }_{2}}\right) }^{ \bot } = \{ g\left( {C, a}\right) \epsilon \}$ . Similarly, ${\Lambda }_{\zeta }^{ \bot } \subset {\Lambda }_{\ell \left( {g\left( {a;\beta }\right) }\right) }^{ \bot }$ , therefore ${\Lambda }_{\zeta }^{ \bot } = \left\{ {{\left( {\Lambda }_{{\zeta }_{1}} + {\Lambda }_{{\zeta }_{2}}\right) }^{ \bot } \cap {\Lambda }_{\ell \left( {g\left( {a;\beta }\right) }\right) }^{ \bot }}\right\}$ .
656
+
657
+ Pick an arbitrary element in ${\left( {\Lambda }_{{\zeta }_{1}} + {\Lambda }_{{\zeta }_{2}}\right) }^{ \bot }$ , and denote it by $d\left( {C, a}\right) \epsilon$ . For $d\left( {C, a}\right) \epsilon$ to be an element in ${\Lambda }_{\zeta }^{ \bot }$ , it needs to be orthogonal to every element in ${\Lambda }_{\ell \left( {g\left( {a;\beta }\right) }\right) }$ . Pick an arbitrary element in ${\Lambda }_{\ell \left( {g\left( {a;\beta }\right) }\right) }$ and denote it by $\frac{{\eta }_{2\epsilon }^{\prime }}{{\eta }_{2}}h\left( {g\left( {a;\beta }\right) }\right)$ . We have,
658
+
659
+ $$
660
+ \forall h\left( {g\left( {a;\beta }\right) }\right) : \;0 = \left\langle {d\left( {C, a}\right) \epsilon ,\frac{{\eta }_{2\epsilon }^{\prime }}{{\eta }_{2}}h\left( {g\left( {a;\beta }\right) }\right) }\right\rangle
661
+ $$
662
+
663
+ $$
664
+ = \mathbb{E}\left\lbrack {d\left( {C, a}\right) \epsilon \frac{{\eta }_{2\epsilon }^{\prime }}{{\eta }_{2}}h\left( {g\left( {a;\beta }\right) }\right) }\right\rbrack
665
+ $$
666
+
667
+ $$
668
+ = \mathbb{E}\left\lbrack {d\left( {C, a}\right) h\left( {g\left( {a;\beta }\right) }\right) }\right\rbrack .
669
+ $$
670
+
671
+ Consequently, $\forall h\left( {g\left( {a;\beta }\right) }\right)$ :
672
+
673
+ $$
674
+ 0 = \mathbb{E}\left\lbrack {d\left( {C, a}\right) \times h\left( {g\left( {a;\beta }\right) }\right) }\right\rbrack
675
+ $$
676
+
677
+ $$
678
+ = \mathbb{E}\left\lbrack {\mathbb{E}\left\lbrack {d\left( {C, a}\right) \times h\left( {g\left( {a;\beta }\right) }\right) \mid g\left( {a;\beta }\right) }\right\rbrack }\right\rbrack
679
+ $$
680
+
681
+ $$
682
+ = \mathbb{E}\left\lbrack {h\left( {g\left( {a;\beta }\right) }\right) \times \mathbb{E}\left\lbrack {d\left( {C, a}\right) \mid g\left( {a;\beta }\right) }\right\rbrack }\right\rbrack
683
+ $$
688
+
689
+ Since this holds for every $h\left( {g\left( {a;\beta }\right) }\right)$ , we must have $\mathbb{E}\left\lbrack {d\left( {C, a}\right) \mid g\left( {a;\beta }\right) }\right\rbrack = 0$ and
690
+
691
+ $$
692
+ {\Lambda }_{\zeta }^{ \bot } = \{ \left( {d\left( {C, a}\right) - \mathbb{E}\left\lbrack {d\left( {C, a}\right) \mid g\left( {a;\beta }\right) }\right\rbrack }\right) \times \epsilon \}
693
+ $$
694
+
695
+ $$
696
+ = \{ \left( {d\left( {C, a}\right) - \mathbb{E}\left\lbrack {d\left( {C, a}\right) \mid g\left( {a;\beta }\right) }\right\rbrack }\right) \times \left( {Y - \gamma \left( {C, a;\psi }\right) - \ell \left( {g\left( {a;\beta }\right) }\right) }\right) \}
697
+ $$
698
+
699
+ $$
700
+ = \{ \left( {d\left( {C, a}\right) - \mathbb{E}\left\lbrack {d\left( {C, a}\right) \mid g\left( {a;\beta }\right) }\right\rbrack }\right) \times \left( {U\left( \psi \right) - \mathbb{E}\left\lbrack {U\left( \psi \right) \mid C, a}\right\rbrack }\right) \} .
701
+ $$
702
+
703
+ Note that $\mathbb{E}\left\lbrack {U\left( \psi \right) \mid C, a}\right\rbrack = {\mathbb{E}}_{q}\left\lbrack {Y \mid g\left( {a;\beta }\right) }\right\rbrack = \mathbb{E}\left\lbrack {U\left( \psi \right) \mid g\left( {a;\beta }\right) }\right\rbrack$ . Hence,
704
+
705
+ $$
706
+ {\Lambda }_{\zeta }^{ \bot } = \left\{ {\{ d\left( {C, a}\right) - \mathbb{E}\left\lbrack {d\left( {C, a}\right) \mid g\left( {a;\beta }\right) }\right\rbrack \} \times \{ U\left( \psi \right) - \mathbb{E}\left\lbrack {U\left( \psi \right) \mid g\left( {a;\beta }\right) }\right\rbrack \} }\right\} .
707
+ $$
UAI/UAI 2022/UAI 2022 Conference/BFULBwUocxq/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,263 @@
1
+ § SEMIPARAMETRIC CAUSAL SUFFICIENT DIMENSION REDUCTION OF MULTIDIMENSIONAL TREATMENTS
2
+
3
+ § ABSTRACT
4
+
5
+ Cause-effect relationships are typically evaluated by comparing outcome responses to binary treatment values, representing two arms of a hypothetical randomized controlled trial. However, in certain applications, treatments of interest are continuous and multidimensional. For example, understanding the causal relationship between severity of radiation therapy, summarized by a multidimensional vector of radiation exposure values, and posttreatment side effects is a problem of clinical interest in radiation oncology. An appropriate strategy for making interpretable causal conclusions is to reduce the dimension of treatment. If individual elements of a multidimensional treatment vector weakly affect the outcome, but the overall relationship between treatment and outcome is strong, careless approaches to dimension reduction may not preserve this relationship. Further, methods developed for regression problems do not directly transfer to causal inference due to confounding complications. In this paper, we use semiparametric inference theory for structural models to give a general approach to causal sufficient dimension reduction of a multidimensional treatment such that the cause-effect relationship between treatment and outcome is preserved. We illustrate the utility of our proposals through simulations and a real data application in radiation oncology.
§ 1 INTRODUCTION

In causal inference, the exposure of interest is commonly assumed to be either binary (e.g., comparing treatment vs placebo) or continuous (e.g., the effect of treatment dosage on viral load). In the latter case, in addition to contrasts of responses to two specific doses, we may be interested in the entire dose-response relationship, and choose to model it via a simple functional form, for example a logarithmic or sigmoidal function. In other applications, we might be interested in assessing causal relationships between outcomes and treatments whose values lie in a multidimensional space. For instance, in natural language processing, interest lies in causal analyses that involve high dimensional text data [Gentzkow et al., 2019]. Another example is the neuroimaging data used to relate neuronal network activity to cognitive processing and behavior [Ramsey et al., 2010, Mather et al., 2013].

As our motivating example, we focus on an application in radiation oncology. In head and neck cancers, minor variations in the dose and direction of radiation may result in similar tumor reduction but vastly improved secondary outcomes, such as weight loss, or dysfunction induced by radiation therapy, such as dysphagia or xerostomia [Robertson et al., 2015]. Thus, understanding the causal relationship between a multidimensional radiation exposure and downstream side effects in cancer patients undergoing radiation therapy is of clinical interest. Unlike standard treatments, radiation therapy is complex and is represented by three-dimensional voxel maps of radiation doses in different parts of the body. Since this representation is very high dimensional, the exact dose localization information in the voxel map is sometimes represented by cumulative dose-volume histograms and summarized by a multidimensional vector of exposure dosages. Even such summaries complicate the task of establishing clinically relevant causal relationships.

Since we are interested in dimension reduction for the sake of explicating a particular relationship between treatments and outcomes, approaches that do not take outcomes into account in the right way run the risk of distorting the estimate of this relationship, or even falsely concluding that the relationship is absent. Therefore, seemingly natural approaches to dimension reduction, such as principal component analysis (PCA), are not appropriate in our setting. On the other hand, there is a line of research in statistics on sufficient dimension reduction (SDR) [Li, 1991] with the objective of reducing the dimension of covariates while preserving the associational relationship between covariates and outcome. However, due to spurious associations introduced by confounding, which is ubiquitous in observational data sources, naive use of SDR approaches to discern causal relationships between treatments and outcomes leads to bias.

We are interested in applying the core ideas of SDR to reduce the dimension of a treatment in a way that preserves a causal rather than associational relationship with the outcome. In addition, we are interested in doing so under the weakest possible assumptions, which entails generalizing the semiparametric approaches in the SDR literature [Ma and Zhu, 2012]. In this paper, we provide a framework for structural (causal) models based on semiparametric inference theory developed for marginal structural models [Robins, 1999] to give what we believe is the first approach to causal SDR of a multidimensional treatment.

§ 2 PRELIMINARIES

Sufficient dimension reduction. Given an outcome variable $Y$ and a $p$-dimensional covariate vector $X$, the goal of SDR is to find a known function ${g}_{X}\left( {.;\beta }\right)$, parameterized by $\beta$, with a much smaller range than domain such that $Y$ depends on $X$ only through ${g}_{X}\left( {X;\beta }\right)$. Often this function is assumed to be linear, in which case the goal is to find $\beta \in {\mathbb{R}}^{p \times d}$, where $d < p$, such that $Y$ depends on $X$ only through ${X}^{T}\beta$, i.e., $\mathbb{E}\left\lbrack {Y \mid X}\right\rbrack = \mathbb{E}\left\lbrack {Y \mid {X}^{T}\beta }\right\rbrack$. Often, proposed solutions to SDR rely on strong parametric assumptions that are unlikely to hold in practical applications, such as the linearity condition, where $\mathbb{E}\left\lbrack {X \mid {X}^{T}\beta }\right\rbrack$ is assumed to be a linear function of $X$, or the assumption that $\operatorname{cov}\left( {X \mid {X}^{T}\beta }\right)$ is constant rather than a function of $X$ [Li, 1991, Cook and Weisberg, 1991, Hardle and Stoker, 1989, Ichimura, 1993, Cook and Li, 2002].

Ma and Zhu [2012] introduced a new approach to SDR by recasting the problem in terms of estimation in a semiparametric model. Crucially, this approach relies on far weaker assumptions than is typical in SDR, and is thus much more generally applicable. To obtain the relevant semiparametric model, we rewrite the above condition as $Y = \ell \left( {{X}^{T}\beta }\right) + \epsilon$, where $\ell \left( {{X}^{T}\beta }\right) \mathrel{\text{ := }} \mathbb{E}\left\lbrack {Y \mid {X}^{T}\beta }\right\rbrack$ is an unspecified smooth function and $\mathbb{E}\left\lbrack {\epsilon \mid X}\right\rbrack = 0$, while the distribution $p\left( {\epsilon \mid X}\right)$ remains otherwise unrestricted. Ma and Zhu [2012] derived the class of all influence functions for $\beta$, equivalently the orthogonal complement of the nuisance tangent space, denoted by ${\Lambda }_{\eta }^{ \bot }$, as ${\Lambda }_{\eta }^{ \bot } = \left\{ {\left( {Y - \mathbb{E}\left\lbrack {Y \mid {X}^{T}\beta }\right\rbrack }\right) \times \left( {\alpha \left( X\right) - \mathbb{E}\left\lbrack {\alpha \left( X\right) \mid {X}^{T}\beta }\right\rbrack }\right) }\right\}$, where $\alpha \left( X\right)$ is any function of $X$; see the Appendix for a brief overview of influence functions, and [Van der Vaart, 2000, Bang and Robins, 2005, Tsiatis, 2007] for more details.

A well-known property of semiparametric models is that all elements of ${\Lambda }_{\eta }^{ \bot }$ are mean 0 under the true distribution. Hence, a general class of estimating equations can be obtained using the sample version of

$$
\mathbb{E}\left\lbrack {U\left( \beta \right) }\right\rbrack = \mathbb{E}\left\lbrack {\left( {Y - \mathbb{E}\left\lbrack {Y \mid {X}^{T}\beta }\right\rbrack }\right) \times \left( {\alpha \left( X\right) - \mathbb{E}\left\lbrack {\alpha \left( X\right) \mid {X}^{T}\beta }\right\rbrack }\right) }\right\rbrack = 0, \tag{1}
$$

where $U\left( \beta \right)$ is an arbitrary element in ${\Lambda }_{\eta }^{ \bot }$. The estimator obtained from (1) is doubly robust with respect to the models for $\mathbb{E}\left\lbrack {Y \mid {X}^{T}\beta }\right\rbrack$ and $\mathbb{E}\left\lbrack {\alpha \left( X\right) \mid {X}^{T}\beta }\right\rbrack$, meaning that the estimator remains consistent if either of these two models is correctly specified [Ma and Zhu, 2012].

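To make (1) concrete, the following toy simulation (our own illustration, not code from Ma and Zhu [2012]) evaluates the sample version of the moment with $\alpha \left( X\right) = X$, using Nadaraya-Watson kernel regression for the two conditional means; the data generating process and all names below are our own arbitrary choices. The empirical moment is close to zero at the true projection direction and clearly nonzero at an incorrect one.

```python
import numpy as np

def kreg(u, y, h=0.15):
    """Nadaraya-Watson estimate of E[y | u] at the sample points (Gaussian kernel)."""
    k = np.exp(-0.5 * ((u[:, None] - u[None, :]) / h) ** 2)
    return k @ y / k.sum(axis=1)

rng = np.random.default_rng(0)
n, p = 2000, 3
beta0 = np.array([1.0, 2.0, 0.0])            # true projection: Y depends on X only via X @ beta0
X = rng.normal(size=(n, p))
Y = np.sin(X @ beta0) + 0.3 * rng.normal(size=n)

def moment(beta):
    """Sample version of (1) with alpha(X) = X."""
    u = X @ beta
    ry = Y - kreg(u, Y)                      # Y - E[Y | X'beta]
    ra = X - np.column_stack([kreg(u, X[:, j]) for j in range(p)])
    return (ra * ry[:, None]).mean(axis=0)

good = np.linalg.norm(moment(beta0))                     # approx. 0 at the truth
bad = np.linalg.norm(moment(np.array([0.0, 0.0, 1.0])))  # nonzero at a wrong direction
```

An SDR estimate of $\beta$ can then be obtained by driving this empirical moment to zero, e.g., by minimizing its squared norm over $\beta$ subject to a scale normalization, since $\beta$ is only identified up to its column space.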
Causal inference. In causal inference, we seek to make inferences about the causal relationship between a treatment variable $A$ and an outcome variable $Y$ via counterfactual contrasts of the form $Y\left( a\right)$, representing a hypothetical experiment where treatment $A$ is set to $a$, possibly contrary to fact. A common setting considers, in addition to $A$ and $Y$, a vector of baseline variables $C$, yielding an observed data distribution of the form $p\left( {Y,A,C}\right)$. Under the standard assumptions of consistency, which states that the counterfactual outcome coincides with the observed outcome if treatment is set to its observed value, conditional ignorability, which states that $Y\left( a\right)$ is independent of $A$ conditional on $C$, and positivity of $p\left( {A \mid C}\right)$, the counterfactual distribution $p\left( {Y\left( a\right) }\right)$ is identified as the following function of the observed data:

$$
p\left( {Y\left( a\right) }\right) = \mathop{\sum }\limits_{c}p\left( {Y \mid A = a,C = c}\right) \times p\left( {C = c}\right) . \tag{2}
$$

The average causal effect (ACE) of a binary treatment on an outcome is defined as $\mathrm{{ACE}} = \mathbb{E}\left\lbrack {Y\left( 1\right) }\right\rbrack - \mathbb{E}\left\lbrack {Y\left( 0\right) }\right\rbrack$. Under the above assumptions, the counterfactual mean $\mathbb{E}\left\lbrack {Y\left( a\right) }\right\rbrack$ is given as the following function of the observed data, called the adjustment formula or g-formula,

$$
\mathbb{E}\left\lbrack {Y\left( a\right) }\right\rbrack = \mathbb{E}\left\lbrack {\mathbb{E}\left\lbrack {Y \mid A = a,C}\right\rbrack }\right\rbrack , \tag{3}
$$

where the outer expectation is taken with respect to $p\left( C\right)$; see Tian and Pearl [2002], Shpitser and Pearl [2006], Bhattacharya et al. [2020] for general identification algorithms in the presence of unmeasured confounders.

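As a concrete illustration of the plug-in approach to (3), the sketch below (a toy simulation of our own, with a correctly specified linear outcome model) contrasts the confounded naive treated-vs-control difference with the g-formula estimate, which averages the fitted $\mathbb{E}\left\lbrack {Y \mid A = a,C}\right\rbrack$ over the empirical distribution of $C$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
C = rng.normal(size=n)                          # confounder
A = rng.binomial(1, 1 / (1 + np.exp(-1.5 * C))) # treatment more likely when C is large
Y = 2.0 * A + 1.5 * C + rng.normal(size=n)      # true ACE = 2

naive = Y[A == 1].mean() - Y[A == 0].mean()     # confounded: C raises both A and Y

# Plug-in g-formula: fit E[Y | A, C] by least squares, then average over p(C).
D = np.column_stack([np.ones(n), A, C])
coef, *_ = np.linalg.lstsq(D, Y, rcond=None)
mu1 = coef[0] + coef[1] * 1 + coef[2] * C       # fitted E[Y | A=1, C_i]
mu0 = coef[0] + coef[1] * 0 + coef[2] * C       # fitted E[Y | A=0, C_i]
ace_hat = (mu1 - mu0).mean()
```

Here the naive contrast overstates the effect because treated units tend to have larger $C$, while the plug-in estimate recovers the true ACE of 2 up to sampling error.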
There are several different approaches to estimating the adjustment formula, such as plug-in, inverse probability weighting (IPW), and semiparametric estimators such as augmented IPW (AIPW) [Van der Vaart, 2000, Bang and Robins, 2005, Van der Laan et al., 2011]. An alternative class of IPW estimators models the relationship between $A$ and $Y$ via a marginal structural model (MSM), or a causal regression. A simple version of such a model takes the form $\mathbb{E}\left\lbrack {Y\left( a\right) }\right\rbrack = f\left( {a;\beta }\right)$, for a finite set of parameters $\beta$. Given such a model, inferences about $\mathbb{E}\left\lbrack {Y\left( a\right) }\right\rbrack$ reduce to inferences about $\beta$. For binary treatments, $f\left( {a;\beta }\right)$ can be written as ${\beta }_{0} + {\beta }_{a} \times a$ without loss of generality, with $\mathrm{{ACE}} = {\beta }_{a}$. An MSM is different from an ordinary regression model, since $\mathbb{E}\left\lbrack {Y\left( a\right) }\right\rbrack \neq \mathbb{E}\left\lbrack {Y \mid A = a}\right\rbrack$ under our causal assumptions. Thus, one approach to estimating $\beta$ is via the following estimating equation, appropriately reweighted by the treatment propensity score model ${W}_{a}\left( {C;{\eta }_{a}}\right) \mathrel{\text{ := }} p\left( {A \mid C;{\eta }_{a}}\right)$,

$$
{\mathbb{P}}_{n}\left\lbrack {\frac{{p}^{ * }\left( a\right) }{{W}_{a}\left( {C;{\widehat{\eta }}_{a}}\right) } \times \{ Y - f\left( {a;\beta }\right) \} }\right\rbrack = 0, \tag{4}
$$

where ${\mathbb{P}}_{n}\left\lbrack . \right\rbrack \mathrel{\text{ := }} \frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}{\left( . \right) }_{i}$, ${p}^{ * }\left( a\right)$ is an arbitrary function of $a$ with the same dimension as $\beta$, and ${\widehat{\eta }}_{a}$ is the maximum likelihood estimate of ${\eta }_{a}$. This IPW procedure is known to be inefficient. A more efficient (in fact, optimal in a wide class of reasonable estimators) approach is to use influence functions, described in detail in [Robins, 1999]. Our approach to causal SDR stands in the same relation to the semiparametric approach to SDR for regression problems in [Ma and Zhu, 2012] as fitting regression models does to fitting marginal structural models.

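For a binary treatment, solving (4) with $f\left( {a;\beta }\right) = {\beta }_{0} + {\beta }_{a} \times a$ amounts to a weighted least-squares fit of $Y$ on $\left( {1,A}\right)$ with inverse-propensity weights. The sketch below is our own illustration: the propensity model is logistic, fit by maximum likelihood via `scipy.optimize.minimize`, and the simulated data and parameter values are arbitrary choices (the choice of ${p}^{ * }\left( a\right)$ is implicit in the weighted least-squares formulation).

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n = 5000
C = rng.normal(size=n)
A = rng.binomial(1, 1 / (1 + np.exp(-1.5 * C)))      # confounded treatment assignment
Y = 2.0 * A + 1.5 * C + rng.normal(size=n)           # MSM truth: E[Y(a)] = beta_0 + 2a

naive = Y[A == 1].mean() - Y[A == 0].mean()          # confounded contrast, biased upward

# Fit the propensity model p(A=1|C) = expit(g0 + g1*C) by maximum likelihood.
def nll(g):
    logits = g[0] + g[1] * C
    return np.sum(np.log1p(np.exp(logits)) - A * logits)

g_hat = minimize(nll, np.zeros(2), method="BFGS").x
p1 = 1 / (1 + np.exp(-(g_hat[0] + g_hat[1] * C)))
w = A / p1 + (1 - A) / (1 - p1)                      # 1 / p(A_i | C_i)

# Weighted least-squares fit of Y on (1, A) solves the reweighted estimating equation.
D = np.column_stack([np.ones(n), A])
beta_hat = np.linalg.solve(D.T @ (D * w[:, None]), D.T @ (w * Y))
ba_hat = beta_hat[1]                                 # IPW estimate of the ACE
```

The reweighting makes the pseudo-population look as if treatment were randomly assigned, so the ordinary regression coefficient of $A$ in the weighted fit carries a causal interpretation.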
§ 3 CAUSAL SUFFICIENT DIMENSION REDUCTION

We are interested in the causal effect of a multidimensional treatment $A \in {\mathbb{R}}^{p}$ on an outcome $Y$, assuming all relevant covariates that need to be controlled for are observed and denoted by $C$. We would like to reduce the dimension of the treatment $A$ such that the causal relationship between $A$ and $Y$ is preserved. Let $g\left( {.;\beta }\right)$ be a function, parameterized by $\beta$, that maps values in ${\mathbb{R}}^{p}$ to values in ${\mathbb{R}}^{d}$, $d < p$, i.e., $g : A \in {\mathbb{R}}^{p} \mapsto g\left( {A;\beta }\right) \in {\mathbb{R}}^{d}$. We want to reduce the dimension of $A$ in such a way that the counterfactual response $\mathbb{E}\left\lbrack {Y\left( a\right) }\right\rbrack$ depends on $a$ only via $g\left( {a;\beta }\right)$. Specifically, we assume that if $\mathbb{E}\left\lbrack {Y\left( a\right) }\right\rbrack$ is identified, that is, if $\mathbb{E}\left\lbrack {Y\left( a\right) }\right\rbrack$ is a mapping $f$ from values $a$ of $A$ to functionals ${h}_{a}\left( {p\left( V\right) }\right)$ of the observed data distribution, where $p\left( V\right)$ denotes the joint distribution over the set of observed variables $V$, then $f\left( a\right) = f\left( {g\left( {a;\beta }\right) }\right)$. The methodology proposed in this paper does not depend on the choice of $g\left( {.;\beta }\right)$, although we fix a particular $g\left( {.;\beta }\right)$ in our experiments. We assume that the three identification assumptions discussed in the previous section, namely consistency, conditional ignorability, and positivity, hold in our analysis. Therefore, we fix ${h}_{a}\left( {p\left( {C,A,Y}\right) }\right) = \mathbb{E}\left\lbrack {\mathbb{E}\left\lbrack {Y \mid A = a,C}\right\rbrack }\right\rbrack$, as shown in (3).

The estimation procedure for MSMs shown in (4) can be viewed as a standard estimating equation for a regression model relating treatment and outcome, but applied to observed data reweighted via inverse probability weights in such a way that treatment appears randomly assigned. In other words, MSMs are regressions applied to a version of the observed data in which regression parameters can be interpreted causally. Unlike estimating equations that solve for $\beta$ by maximizing the feature-outcome relationship, the equation in (1) fits $\beta$ to maintain the identity $\mathbb{E}\left\lbrack {Y \mid X}\right\rbrack = \mathbb{E}\left\lbrack {Y \mid {X}^{T}\beta }\right\rbrack$. As a consequence, semiparametric causal SDR can be viewed as an MSM version of this regression problem, which seeks to find $\beta$ that maintains $\mathbb{E}\left\lbrack {Y\left( a\right) }\right\rbrack = \mathbb{E}\left\lbrack {Y\left( {g\left( {a;\beta }\right) }\right) }\right\rbrack$. In other words, our aim is to estimate $\beta$ by maintaining the following identity:

$$
\mathbb{E}\left\lbrack {\mathbb{E}\left\lbrack {Y \mid a,C}\right\rbrack }\right\rbrack = \mathbb{E}\left\lbrack {\mathbb{E}\left\lbrack {Y \mid g\left( {a;\beta }\right) ,C}\right\rbrack }\right\rbrack , \tag{5}
$$

where the outer expectation is taken with respect to the density $p\left( C\right)$.

We note here the different roles that variables play in regression SDR and causal SDR. The goal of regression SDR is to preserve the associative relationship between high dimensional features $X$ and outcome $Y$. The goal of causal SDR, as we view it here, is to preserve the causal relationship between a multidimensional treatment $A$ and outcome $Y$, which is made complicated by the presence of spurious associations induced by covariates $C$. Thus, the goal in causal SDR is not to maintain the regression relationship between covariates and outcome by assuming $\mathbb{E}\left\lbrack {Y\mid \{ A,C\} }\right\rbrack = \mathbb{E}\left\lbrack {Y \mid g\left( {\{ A,C\} ;\beta }\right) }\right\rbrack$, but to preserve the relationship as in (5), where $C$ is marginalized (adjusted for). The set of confounders $C$ could still be high dimensional, but they are not of primary interest in our problem. Incorporating baseline covariates into the dimension reduction strategy along with the treatment, as is done in some MSMs, is left as an interesting avenue for future work. Examples of work focusing on dimension reduction of common confounders include Imai and Ratkovic [2014], Hu et al. [2014], Shortreed and Ertefaie [2017], Banijamali et al. [2018], Ma et al. [2019], Luo and Zhu [2020], Cheng et al. [2020].

As stated earlier, our objective is to preserve the causal effect of $A$ on $Y$, which is of the form shown in (3). If the counterfactual response curve, i.e., $\mathbb{E}\left\lbrack {Y\left( a\right) }\right\rbrack$, is preserved under our dimension reduction scheme, then the causal effect is preserved as well. Hence, we stated our constraint in (5) in terms of the counterfactual mean rather than the counterfactual contrast that would define the effect. Moreover, even though the treatment is multidimensional, we emphasize that each unit still receives one treatment session, e.g., a single session of radiation therapy with no followups. Records of radiation treatment are usually stored as one-dimensional cumulative dose-volume histograms, and are summarized as the amount of radiation received by $k\%$ of the organ's volume, where $k$ ranges from 1 to 100.

In a conditionally ignorable causal model, intervention on $A$ corresponds to dropping the term $p\left( {A \mid C}\right)$ from the observed density $p\left( {Y,A,C}\right)$, yielding (2). Define $q\left( {Y,A,C}\right)$ as the following modified version of (2):

$$
q\left( {Y,A,C}\right) \mathrel{\text{ := }} p\left( {Y \mid A,C}\right) \times {p}^{ * }\left( A\right) \times p\left( C\right) ,
$$

where ${p}^{ * }\left( A\right)$ is any density with the same support as $p\left( A\right)$. Then (5) can be rewritten as

$$
{\mathbb{E}}_{q}\left\lbrack {Y \mid A = a}\right\rbrack = {\mathbb{E}}_{q}\left\lbrack {Y \mid g\left( {a;\beta }\right) }\right\rbrack , \tag{6}
$$

where ${\mathbb{E}}_{q}$ is the expectation taken with respect to the density $q\left( {Y,A,C}\right)$ defined above, and $q\left( {Y \mid A}\right) = \mathop{\sum }\limits_{C}q\left( {Y,C \mid A}\right) = \mathop{\sum }\limits_{C}p\left( {Y \mid A,C}\right) \times p\left( C\right)$ by definition. Equations (5) and (6) are equivalent forms of our constraint in the causal SDR problem, where the MSM for $\mathbb{E}\left\lbrack {Y\left( a\right) }\right\rbrack = {\mathbb{E}}_{q}\left\lbrack {Y \mid a}\right\rbrack$ is assumed to be a function of the multidimensional treatment intervention $a$ only through its lower dimensional representation $g\left( {a;\beta }\right)$. We now describe two approaches to estimating $\beta$.

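The construction of $q$, and in particular the fact that the choice of ${p}^{ * }\left( A\right)$ is immaterial for ${\mathbb{E}}_{q}\left\lbrack {Y \mid A}\right\rbrack$, can be checked numerically on a small discrete example of our own: under $q$, treatment is independent of $C$, so ${\mathbb{E}}_{q}\left\lbrack {Y \mid A = a}\right\rbrack$ collapses to the g-formula mean $\mathop{\sum }\limits_{c}\mathbb{E}\left\lbrack {Y \mid a,c}\right\rbrack p\left( c\right)$ for any ${p}^{ * }\left( A\right)$. The toy probability tables below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy discrete model: p(Y, A, C) = p(Y | A, C) p(A | C) p(C), all variables binary.
pC = np.array([0.4, 0.6])                        # p(C)
pY_AC = rng.dirichlet(np.ones(2), size=(2, 2))   # p(Y | A=a, C=c), indexed [a, c, y]

# g-formula mean for each a: sum_c E[Y | a, c] p(c), with Y in {0, 1}.
gform = pY_AC[:, :, 1] @ pC

def Eq_Y_given_A(p_star):
    """E_q[Y | A=a] under q(y, a, c) = p(y | a, c) p*(a) p(c).

    Note that p(A | C) has been dropped, so p*(a) cancels between the
    numerator and q(a): the result equals the g-formula mean for ANY p*.
    """
    q = pY_AC * p_star[:, None, None] * pC[None, :, None]  # q[a, c, y]
    return q[:, :, 1].sum(axis=1) / q.sum(axis=(1, 2))
```

Evaluating `Eq_Y_given_A` at any two valid densities over the binary treatment returns the same vector `gform`, illustrating why (5) and (6) impose the same constraint on $\beta$.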
§ 3.1 INVERSE PROBABILITY WEIGHTED SDR

Let $\ell \left( {g\left( {A;\beta }\right) }\right) \mathrel{\text{ := }} {\mathbb{E}}_{q}\left\lbrack {Y \mid g\left( {A;\beta }\right) }\right\rbrack$ and $\nu \left( {g\left( {A;\beta }\right) }\right) \mathrel{\text{ := }} {\mathbb{E}}_{q}\left\lbrack {\alpha \left( A\right) \mid g\left( {A;\beta }\right) }\right\rbrack$ be two unspecified smooth functions of $g\left( {A;\beta }\right)$. A simple estimation strategy for $\beta$, based on generalizing (4), entails solving

$$
\mathbb{E}\left\lbrack {\frac{{p}^{ * }\left( A\right) }{p\left( {A \mid C}\right) } \times \widetilde{U}\left( \beta \right) }\right\rbrack = 0, \tag{7}
$$

where $\widetilde{U}\left( \beta \right) = \{ Y - \ell \left( {g\left( {A;\beta }\right) }\right) \} \times \{ \alpha \left( A\right) - \nu \left( {g\left( {A;\beta }\right) }\right) \}$, ${p}^{ * }\left( A\right)$ is an arbitrary function of $A$, and $p\left( {A \mid C}\right)$ is a correctly specified statistical model which governs how the treatment $A$ is assigned based on the baseline characteristics $C$. The above equation may be solved using observed data by evaluating the expectation empirically.

Lemma 1. An estimator for $\beta$ based on solving (7) is unbiased under correct specification of $p\left( {A \mid C}\right)$ and either one of $\ell \left( {g\left( {A;\beta }\right) }\right) \mathrel{\text{ := }} {\mathbb{E}}_{q}\left\lbrack {Y \mid g\left( {A;\beta }\right) }\right\rbrack$ or $\nu \left( {g\left( {A;\beta }\right) }\right) \mathrel{\text{ := }} {\mathbb{E}}_{q}\left\lbrack {\alpha \left( A\right) \mid g\left( {A;\beta }\right) }\right\rbrack$.

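A minimal sketch of the IPW estimating function in (7), under strong simplifying assumptions of our own: the propensity density $p\left( {A \mid C}\right)$ is known exactly (Gaussian), $g\left( {a;\beta }\right) = {a}^{T}\beta$, $\alpha \left( A\right) = A$, ${p}^{ * }\left( A\right)$ is a standard normal density, and $\ell$ and $\nu$ are estimated by weighted kernel regression. Consistent with Lemma 1, the weighted moment is near zero at the true $\beta$ and clearly nonzero at a wrong one.

```python
import numpy as np

rng = np.random.default_rng(4)
n, gamma = 2000, 0.4
C = rng.normal(size=n)
A = gamma * C[:, None] + rng.normal(size=(n, 2))   # 2-dim treatment, confounded by C
beta0 = np.array([1.0, 1.0])                       # E[Y(a)] depends on a only via a @ beta0
Y = np.sin(A @ beta0) + 0.5 * C + 0.3 * rng.normal(size=n)

# Known (true) Gaussian propensity density p(A|C); reference density p*(A) = N(0, I).
# The shared normalizing constants cancel in the ratio, so only exponents are kept.
log_pA_C = -0.5 * ((A - gamma * C[:, None]) ** 2).sum(axis=1)
log_pstar = -0.5 * (A ** 2).sum(axis=1)
w = np.exp(log_pstar - log_pA_C)                   # IPW weights p*(A) / p(A|C)

def wkreg(u, y, h=0.2):
    """Weighted Nadaraya-Watson estimate of E_q[y | u] at the sample points."""
    k = np.exp(-0.5 * ((u[:, None] - u[None, :]) / h) ** 2) * w[None, :]
    return k @ y / k.sum(axis=1)

def ipw_moment(beta):
    """Sample version of the weighted moment in (7), with alpha(A) = A."""
    u = A @ beta
    ry = Y - wkreg(u, Y)                           # Y - l(g(A; beta))
    ra = A - np.column_stack([wkreg(u, A[:, j]) for j in range(2)])
    return (w[:, None] * ra * ry[:, None]).mean(axis=0)

good = np.linalg.norm(ipw_moment(beta0))                 # approx. 0 at the truth
bad = np.linalg.norm(ipw_moment(np.array([1.0, -1.0])))  # nonzero at a wrong beta
```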
§ 3.2 SEMIPARAMETRIC CAUSAL SDR

A general approach for deriving regular and asymptotically linear (RAL) estimators of $\beta$ is based on deriving ${\widetilde{\Lambda }}_{\eta }^{ \bot }$, the orthogonal complement of the nuisance tangent space of a semiparametric model that enforces the constraint (5) but places no other restrictions on the observed data distribution; ${\widetilde{\Lambda }}_{\eta }^{ \bot }$ is the class of all influence functions. One approach is to derive this space explicitly, as was done in [Ma and Zhu, 2012]. An alternative is to take advantage of the general theory relating orthogonal complements for regression problems to orthogonal complements for "causal regression problems," or MSMs, developed by Robins [1999]. Given the semiparametric model $\mathcal{M}$ induced by the restriction (5), we take advantage of this theory in the following result.

Theorem 1. The orthogonal complement of the nuisance tangent space ${\widetilde{\Lambda }}_{\eta }^{ \bot }$ for $\beta$ satisfying (6) is:

$$
{\widetilde{\Lambda }}_{\eta }^{ \bot } = \left\{ {\frac{\widetilde{U}\left( \beta \right) }{{W}_{a}\left( C\right) } - \phi \left( {A,C}\right) + \mathbb{E}\left\lbrack {\phi \left( {A,C}\right) \mid C}\right\rbrack }\right\} ,
$$

where $\phi \left( {A,C}\right)$ is an arbitrary function of $A$ and $C$, ${W}_{a}\left( C\right)$ is the weight $p\left( {A = a \mid C}\right) /{p}^{ * }\left( a\right)$ for a fixed ${p}^{ * }\left( a\right)$, and $\widetilde{U}\left( \beta \right)$ is of the form

$$
\widetilde{U}\left( \beta \right) = \{ Y - \ell \left( {g\left( {A;\beta }\right) }\right) \} \times \{ \alpha \left( A\right) - \nu \left( {g\left( {A;\beta }\right) }\right) \} ,
$$

where $\ell \left( {g\left( {A;\beta }\right) }\right) \mathrel{\text{ := }} {\mathbb{E}}_{q}\left\lbrack {Y \mid g\left( {A;\beta }\right) }\right\rbrack$ and $\nu \left( {g\left( {A;\beta }\right) }\right) \mathrel{\text{ := }} {\mathbb{E}}_{q}\left\lbrack {\alpha \left( A\right) \mid g\left( {A;\beta }\right) }\right\rbrack$. Moreover, the most efficient estimator in this class, for any fixed $\alpha \left( A\right)$, is recovered by setting ${\phi }^{\text{ opt }}\left( {A,C}\right) = \mathbb{E}\left\lbrack {\left. \frac{\widetilde{U}\left( \beta \right) }{{W}_{a}\left( C\right) }\right| \;A,C}\right\rbrack$.

Lemma 2. For a fixed choice of $\alpha \left( A\right)$ and normalized function ${p}^{ * }\left( A\right)$, the element $\widetilde{U}\left( {\beta }^{ * }\right) \in {\widetilde{\Lambda }}_{\eta }^{ \bot }$ corresponding to the optimal choice of $\phi \left( {A,C}\right)$ has the form

$$
\frac{{p}^{ * }\left( A\right) }{p\left( {A \mid C}\right) } \times \widetilde{U}\left( \beta \right) - \frac{{p}^{ * }\left( A\right) }{p\left( {A \mid C}\right) } \times \mathbb{E}\left\lbrack {\widetilde{U}\left( \beta \right) \mid A,C}\right\rbrack + {\mathbb{E}}_{q}\left\lbrack {\mathbb{E}\left\lbrack {\widetilde{U}\left( \beta \right) \mid A,C}\right\rbrack \mid C}\right\rbrack , \tag{8}
$$

where ${\mathbb{E}}_{q}\left\lbrack . \right\rbrack$ is the expectation taken with respect to the density $q\left( {Y,A,C}\right) \mathrel{\text{ := }} p\left( {Y \mid A,C}\right) \times {p}^{ * }\left( A\right) \times p\left( C\right)$.

§ 3.3 ROBUSTNESS PROPERTIES

Just as ${\Lambda }_{\eta }^{ \bot }$ in Section 2 entailed double robustness of $U\left( \beta \right)$ for semiparametric regression SDR, we now show that the structure of ${\widetilde{\Lambda }}_{\eta }^{ \bot }$ yields additional robustness properties.

Lemma 3. If one of $\left\{ {p\left( {A \mid C}\right) ,\mathbb{E}\left\lbrack {\widetilde{U}\left( \beta \right) \mid A,C}\right\rbrack }\right\}$ and one of $\left\{ {\ell \left( {g\left( {A;\beta }\right) }\right) \mathrel{\text{ := }} {\mathbb{E}}_{q}\left\lbrack {Y \mid g\left( {A;\beta }\right) }\right\rbrack ,\nu \left( {g\left( {A;\beta }\right) }\right) \mathrel{\text{ := }} {\mathbb{E}}_{q}\left\lbrack {\alpha \left( A\right) \mid g\left( {A;\beta }\right) }\right\rbrack }\right\}$ are correctly specified, then the estimator for $\beta$ based on (8) is consistent and asymptotically normal with mean zero and variance ${\tau }^{-1} \times \operatorname{Var}\left( {\widetilde{U}\left( {\beta }^{ * }\right) }\right) \times {\left( {\tau }^{-1}\right) }^{\prime }$, where $\widetilde{U}\left( {\beta }^{ * }\right)$ is given in (8) and $\tau = \mathbb{E}\left\lbrack {\partial \widetilde{U}\left( {\beta }^{ * }\right) /\partial \beta }\right\rbrack$.

This result implies that the estimating equation in (8) enjoys a "$2 \times 2$" robustness property. In practice, since we will be dealing with multidimensional problems, correct specification of models is difficult to ensure. However, the robustness properties of semiparametric estimators also imply that in regions where a sufficient subset of the models is approximately correct, the overall bias remains small. If $p\left( {A \mid C}\right)$ and one of the models in $\widetilde{U}\left( \beta \right)$ are correctly specified, the AIPW estimator based on (8) remains consistent for any choice of $\mathbb{E}\left\lbrack {\widetilde{U}\left( \beta \right) \mid A,C}\right\rbrack$. One promising direction for future work is to consider cases where $p\left( {A \mid C}\right)$ and $\widetilde{U}\left( \beta \right)$ are known and search for the $\mathbb{E}\left\lbrack {\widetilde{U}\left( \beta \right) \mid A,C}\right\rbrack$ which yields good properties of the overall estimator.

§ 4 ESTIMATION AND IMPLEMENTATION

In order to estimate the parameters $\beta$ in (6), we need to solve the estimating equation ${\mathbb{P}}_{n}\left\lbrack {\widetilde{U}\left( {\beta }^{ * }\right) }\right\rbrack = 0$, where $\widetilde{U}\left( {\beta }^{ * }\right)$ is given in (8). For any $\widetilde{U}\left( \beta \right)$ of the form given in Section 3.1, Theorem 1 provides the class of all RAL estimators for ${\beta }^{ * }$, along with the most efficient estimator in this class. Under the general form $\widetilde{U}\left( \beta \right) = \{ Y - \ell \left( {g\left( {A;\beta }\right) }\right) \} \times \{ \alpha \left( A\right) - \nu \left( {g\left( {A;\beta }\right) }\right) \}$, the term $\mathbb{E}\left\lbrack {\widetilde{U}\left( \beta \right) \mid A,C}\right\rbrack$ in $\widetilde{U}\left( {\beta }^{ * }\right)$ equals $\{ \mathbb{E}\left\lbrack {Y \mid A,C}\right\rbrack - \ell \left( {g\left( {A;\beta }\right) }\right) \} \times \{ \alpha \left( A\right) - \nu \left( {g\left( {A;\beta }\right) }\right) \}$.

Hence, in the expression in (8), four different models are involved in estimating $\widetilde{U}\left( {\beta }^{ * }\right)$, namely (i) $\ell \left( {g\left( {A;\beta }\right) }\right) \mathrel{\text{ := }} {\mathbb{E}}_{q}\left\lbrack {Y \mid g\left( {A;\beta }\right) }\right\rbrack$, (ii) $\nu \left( {g\left( {A;\beta }\right) }\right) \mathrel{\text{ := }} {\mathbb{E}}_{q}\left\lbrack {\alpha \left( A\right) \mid g\left( {A;\beta }\right) }\right\rbrack$, (iii) $p\left( {A \mid C}\right)$, and (iv) $\mathbb{E}\left\lbrack {Y \mid A,C}\right\rbrack = {\mathbb{E}}_{q}\left\lbrack {Y \mid A,C}\right\rbrack$. The last term in (8) is equal to ${\mathbb{E}}_{a}\left\lbrack {\mathbb{E}\left\lbrack {\widetilde{U}\left( \beta \right) \mid A,C}\right\rbrack }\right\rbrack$, where ${\mathbb{E}}_{a}\left\lbrack . \right\rbrack$ is the expectation with respect to the marginal distribution of $A$, which can be evaluated empirically without additional modeling.

For a pre-specified functional form of $\ell \left( {g\left( {A;\beta }\right) }\right)$, we need to fit three different nuisance models. Given models $\nu \left( {g\left( {A;\beta }\right) ;{\eta }_{\nu }}\right)$, $p\left( {A \mid C;{\eta }_{a}}\right)$, and $\mathbb{E}\left\lbrack {Y \mid A,C;{\eta }_{y}}\right\rbrack$ for $\nu \left( {g\left( {A;\beta }\right) }\right)$, $p\left( {A \mid C}\right)$, and $\mathbb{E}\left\lbrack {Y \mid A,C}\right\rbrack$, respectively, it can be shown that if ${n}^{\frac{1}{4} + \epsilon }\left( {\widehat{\eta } - {\eta }_{0}}\right)$ is bounded in probability for some $\epsilon > 0$, then the estimating equation ${\mathbb{P}}_{n}\left\lbrack {\widetilde{U}\left( {{\beta }^{ * },\widehat{\eta }}\right) }\right\rbrack = 0$ yields an estimate of $\beta$ with the same asymptotic properties as if the nuisance models were known. Here $\eta = \left\{ {{\eta }_{\nu },{\eta }_{a},{\eta }_{y}}\right\}$, and $\widehat{\eta }$ and ${\eta }_{0}$ denote the estimated and the true parameters of the nuisance models, respectively.

Theorem 2. Let ${\phi }_{0}$ denote the influence function of the estimator of $\beta$ obtained from the estimating equation ${\mathbb{P}}_{n}\left\lbrack {\widetilde{U}\left( {{\beta }^{ * },{\eta }_{0}}\right) }\right\rbrack = 0$. If ${n}^{\frac{1}{4} + \epsilon }\left( {\widehat{\eta } - {\eta }_{0}}\right)$ is bounded in probability for some $\epsilon > 0$, then the influence function corresponding to the estimator $\widehat{\beta }$ obtained from the estimating equation ${\mathbb{P}}_{n}\left\lbrack {\widetilde{U}\left( {{\beta }^{ * },\widehat{\eta }}\right) }\right\rbrack = 0$ is also ${\phi }_{0}$. In other words, $\widehat{\beta }$ has the same asymptotic properties as if we knew the true nuisance models.

The condition on the rate of convergence of the nuisance models in Theorem 2 is a sufficient condition and is potentially too conservative. In practice, we might be able to use models with slower convergence rates; see [Fisher and Kennedy, 2018] for more details. Stone [1982] provides a detailed analysis of the convergence rates of nonparametric models.

Implementation. We now describe in detail our procedure for estimating $\beta$ by solving the empirical version of $\mathbb{E}\left\lbrack {\widetilde{U}\left( {\beta }^{ * }\right) }\right\rbrack = 0$, where $\widetilde{U}\left( {\beta }^{ * }\right)$ is given in (8). In what follows, we assume the structural dimension $d$, i.e., the dimension of the range of $g\left( {.;\beta }\right)$, is known; we provide a discussion on choosing the structural dimension at the end of this section.

For a given choice of ${p}^{ * }\left( A\right)$ and $\alpha \left( A\right)$ :

1. Estimate ${\widehat{\eta }}_{a}$ and ${\widehat{\eta }}_{y}$ in $p\left( {A \mid C;{\eta }_{a}}\right)$ and $\mathbb{E}\left\lbrack {Y \mid A,C;{\eta }_{y}}\right\rbrack$ by maximum likelihood or nonparametric methods. These two models do not depend on $\beta$ and are not updated within the iterations below.

2. Pick a starting value ${\beta }^{\left( 1\right) }$.

3. At the ${j}^{\text{ th }}$ iteration, given the current ${\beta }^{\left( j\right) }$, estimate $\widehat{\ell }\left( {g\left( {A;{\beta }^{\left( j\right) }}\right) }\right)$ and $\widehat{\nu }\left( {g\left( {A;{\beta }^{\left( j\right) }}\right) }\right)$, and compute:

$$
{U}^{q}\left( {\beta }^{\left( j\right) }\right) = \left\{ {Y - \widehat{\ell }\left( {g\left( {A;{\beta }^{\left( j\right) }}\right) }\right) }\right\} \times \left\{ {\alpha \left( A\right) - \widehat{\nu }\left( {g\left( {A;{\beta }^{\left( j\right) }}\right) }\right) }\right\} ,
$$

$$
\mathbb{E}\left\lbrack {{U}^{q}\left( {\beta }^{\left( j\right) }\right) \mid A,C}\right\rbrack = \left\{ {\mathbb{E}\left\lbrack {Y \mid A,C;{\widehat{\eta }}_{y}}\right\rbrack - \widehat{\ell }\left( {g\left( {A;{\beta }^{\left( j\right) }}\right) }\right) }\right\} \times \left\{ {\alpha \left( A\right) - \widehat{\nu }\left( {g\left( {A;{\beta }^{\left( j\right) }}\right) }\right) }\right\} .
$$

+
+ 4. Form the sample version of $\mathbb{E}\left\lbrack {\widetilde{U}\left( {\beta }^{ * }\right) }\right\rbrack$ as follows.
+
+ $$
+ \zeta \left( {\beta }^{\left( j\right) }\right) = {\mathbb{P}}_{n}\left\lbrack {\frac{{p}^{ * }\left( A\right) }{p\left( {A \mid C;{\widehat{\eta }}_{a}}\right) } \times \left\{ {{U}^{q}\left( {\beta }^{\left( j\right) }\right) - \mathbb{E}\left\lbrack {{U}^{q}\left( {\beta }^{\left( j\right) }\right) \mid A,C}\right\rbrack }\right\} + {\mathbb{E}}_{q}\left\lbrack {\mathbb{E}\left\lbrack {{U}^{q}\left( {\beta }^{\left( j\right) }\right) \mid A,C}\right\rbrack \mid C}\right\rbrack }\right\rbrack ,
+ $$
+
+ where ${\mathbb{P}}_{n}\left\lbrack \cdot \right\rbrack \mathrel{\text{ := }} \frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}{\left\lbrack \cdot \right\rbrack }_{i}$ .
+
+ 5. Numerically compute the first and second derivatives of $\parallel \zeta \left( \beta \right) {\parallel }^{2}$ with respect to $\beta$ , evaluate them at ${\beta }^{\left( j\right) }$ , and use the Newton-Raphson update rule to obtain ${\beta }^{\left( {j + 1}\right) }$ .
+
+ 6. Repeat steps (3) through (5) until convergence.
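The iterative scheme above can be sketched as follows. This is a minimal illustration, not the estimator used in the paper's experiments: it assumes a hypothetical one-dimensional linear reduction $g(A;\beta) = A^{T}\beta$ with $\alpha(A) = A$, no confounding, a cubic-polynomial smoother standing in for the nonparametric fits of $\widehat{\ell}$ and $\widehat{\nu}$ in step 3, and SciPy's quasi-Newton routine standing in for the Newton-Raphson update in step 5.

```python
# Minimal sketch of steps 2-6 under simplifying assumptions (see lead-in):
# linear g(A; beta) = A @ beta with d = 1, alpha(A) = A, cubic smoother for
# l_hat and nu_hat, and BFGS in place of the Newton-Raphson update.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, p = 300, 4
A = rng.normal(size=(n, p))                     # treatments (no confounding here)
beta_true = np.array([1.0, -1.0, 0.5, 0.0])
Y = (A @ beta_true) ** 2 + 0.1 * rng.normal(size=n)

def zeta(beta):
    # sample moment P_n[{Y - l_hat(g)} x {alpha(A) - nu_hat(g)}]
    g = A @ beta
    G = np.vander(g, 4)                          # cubic basis in g(A; beta)
    coef, *_ = np.linalg.lstsq(G, np.column_stack([Y, A]), rcond=None)
    fitted = G @ coef
    l_hat, nu_hat = fitted[:, 0], fitted[:, 1:]
    return ((Y - l_hat)[:, None] * (A - nu_hat)).mean(axis=0)

objective = lambda b: float(np.sum(zeta(b) ** 2))
res = minimize(objective, x0=np.array([0.9, -0.8, 0.6, 0.1]), method="BFGS")
beta_hat = res.x / np.linalg.norm(res.x)         # beta is identified up to scale
```

Since $\beta$ is only identified up to scale, the final normalization reports a direction; the moment vector $\zeta$ is approximately zero at the true direction.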
+
+ The implementation of an empirical version of (7) follows a similar set of steps, except that all steps pertaining to the second and third terms of (8) are skipped. Moreover, in step 3 of the above implementation, we need to specify individual models for $\ell \left( {g\left( {A;\beta }\right) }\right) \mathrel{\text{ := }} {\mathbb{E}}_{q}\left\lbrack {Y \mid g\left( {A;\beta }\right) }\right\rbrack$ and ${\mathbb{E}}_{q}\left\lbrack {Y \mid A,C}\right\rbrack$ . However, due to the variation dependence of these models, it may be difficult to fit the two models in a congenial way in general. We provide an alternative approach below.
+
+ In order to deal with the issue of congeniality, we may opt to specify ${\mathbb{E}}_{q}\left\lbrack {Y \mid g\left( {A;\beta }\right) }\right\rbrack$ and $\widetilde{f}\left( {A,C,\beta }\right) = {\mathbb{E}}_{q}\left\lbrack {Y \mid A,C}\right\rbrack - {\mathbb{E}}_{q}\left\lbrack {Y \mid g\left( {A;\beta }\right) }\right\rbrack$ , which yields a variationally independent specification of ${\mathbb{E}}_{q}\left\lbrack {Y \mid g\left( {A;\beta }\right) }\right\rbrack$ and ${\mathbb{E}}_{q}\left\lbrack {Y \mid A,C}\right\rbrack = {\mathbb{E}}_{q}\left\lbrack {Y \mid g\left( {A;\beta }\right) }\right\rbrack + \widetilde{f}\left( {A,C,\beta }\right)$ . Consequently, the four variationally independent models we need to specify are $\ell \left( {g\left( {A;\beta }\right) }\right)$ , $\nu \left( {g\left( {A;\beta }\right) }\right)$ , $p\left( {A \mid C}\right)$ , and $\widetilde{f}\left( {A,C,\beta }\right)$ ; the last term in (8) can be evaluated empirically without additional modeling. Thus, we need to specify the additional nuisance model $\widetilde{f}$ . We propose to fit $\widetilde{f}$ by borrowing ideas from the theory of structural nested mean models (SNMMs) [Vansteelandt and Joffe, 2014, Robins, 1999]. We defer the description to the appendix and refer to $\widetilde{f}$ as an "inverted" structural nested mean model.
+
+ Choosing the structural dimension. Up until here, we assumed the structural dimension was known a priori. Finding the correct dimension is not a straightforward task, and incorrect choices may greatly affect performance. We adapt to causal SDR the technique of [Ma and Zhu, 2012] for selecting the structural dimension in regression SDR. Specifically, we utilize a resampling procedure to select the structural dimension. This procedure was originally described by [Dong and Li, 2010] and adapts the idea of [Ye and Weiss, 2003]. We consider a family of functions ${g}^{1}\left( { \cdot ;{\beta }^{1}}\right) ,\ldots ,{g}^{m}\left( { \cdot ;{\beta }^{m}}\right)$ with different structural dimensions, and use the cross-validation procedure described below to pick the best dimension.
+
+ Let ${\widehat{\beta }}_{\rho }$ be the estimate of $\beta$ from the original sample for the ${\rho }^{\text{ th }}$ working dimension, where $\rho = 1,\ldots ,p - 1$ , and let ${\widehat{\beta }}_{\rho ,b}$ be the estimate of $\beta$ from the ${b}^{\text{ th }}$ bootstrap sample, for $b = 1,\ldots ,B$ . The structural dimension is then estimated as the cardinality of the range of the function
+
+ $$
+ {g}^{ * } = \arg \mathop{\max }\limits_{{g}^{i}}\frac{1}{B}\mathop{\sum }\limits_{{b = 1}}^{B}{r}^{2}\left( {{g}^{i}\left( {A;{\widehat{\beta }}_{\rho }}\right) ,{g}^{i}\left( {A;{\widehat{\beta }}_{\rho ,b}}\right) }\right) ,
+ $$
+
+ where ${r}^{2}\left( {u,v}\right) = {k}^{-1}\mathop{\sum }\limits_{{i = 1}}^{k}{\lambda }_{i}$ and ${\lambda }_{i}$ ’s are the non-zero eigenvalues of
+
+ $$
+ \{ \operatorname{var}\left( u\right) {\} }^{-1/2}\operatorname{cov}\left( {u,v}\right) \{ \operatorname{var}\left( v\right) {\} }^{-1}\operatorname{cov}\left( {v,u}\right) \{ \operatorname{var}\left( u\right) {\} }^{-1/2}.
+ $$
+
+ This procedure uses resampling to select the working dimension for which the estimated reduction ${g}^{i}\left( { \cdot ;{\beta }^{i}}\right)$ is most stable across bootstrap samples, where each ${g}^{i}\left( { \cdot ;{\beta }^{i}}\right)$ is fitted in a way that aims to preserve the causal regression relationship between $A$ and the mean of $Y$ . Exploring other alternatives for choosing the structural dimension is an interesting area for future work.
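A minimal sketch of this bootstrap criterion, assuming a linear reduction $g(A;B) = AB$; the bootstrap refits are replaced here by hypothetical perturbed copies of the full-sample estimate, purely for illustration. The function `r2` computes the mean of the eigenvalues in the display above, i.e. the average squared canonical correlation.

```python
# Sketch of the resampling criterion for choosing the structural dimension,
# assuming (hypothetically) linear reductions and perturbed stand-ins for the
# bootstrap refits.
import numpy as np

def r2(U, V):
    # mean eigenvalue of var(u)^{-1/2} cov(u,v) var(v)^{-1} cov(v,u) var(u)^{-1/2}
    # (equivalently, the average squared canonical correlation between U and V)
    U = U - U.mean(axis=0)
    V = V - V.mean(axis=0)
    n = U.shape[0]
    Suu, Svv, Suv = U.T @ U / n, V.T @ V / n, U.T @ V / n
    M = np.linalg.solve(Suu, Suv) @ np.linalg.solve(Svv, Suv.T)
    return float(np.real(np.linalg.eigvals(M)).mean())

rng = np.random.default_rng(0)
n, p, B = 200, 6, 25
A = rng.normal(size=(n, p))
beta_rho = rng.normal(size=(p, 2))   # stand-in for the full-sample fit, rho = 2
scores = [r2(A @ beta_rho, A @ (beta_rho + 0.05 * rng.normal(size=(p, 2))))
          for _ in range(B)]
criterion = float(np.mean(scores))   # averaged r^2 for this working dimension
```

In the actual procedure, `criterion` would be computed for each working dimension $\rho$ and the dimension maximizing it selected.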
+
+ § 5 SIMULATION STUDY
+
+ Causal SDR is not well served by standard dimension reduction methods such as PCA, which do not take the feature-outcome relationship into account, nor by standard SDR methods, which do not account for confounding. We illustrate the utility of our proposal for causal SDR via simulation studies, and compare it with regression SDR and PCA. We also illustrate the consistency of our estimators and the procedure for selecting the structural dimension. To provide continuity with previous work, our simulation study is similar to that described in [Ma and Zhu, 2012]. All code necessary to reproduce the simulations is included with this submission and will be made publicly available on publication.
+
+ We perform 50 replications with fixed sample sizes, where the true response $\mathbb{E}\left\lbrack {Y\left( {g\left( a\right) }\right) }\right\rbrack$ is an object of dimension $d = 2$ , and the observed data distribution $p\left( {Y,A,C}\right)$ is set as follows. The dimension of the baseline factors $C$ is fixed at 4 and the observed treatment dimension $p$ is set to 6 and 12. The baseline factors $C$ are generated from a standard multivariate normal distribution. We consider two cases for the treatment vector: one where the linearity and constant covariance conditions in regular SDR are violated, and one where these assumptions are satisfied.
+
+ Case 1. We generated ${\left( {A}_{1},{A}_{2}\right) }^{T}$ (when $p = 6$ ) and ${\left( {A}_{1},{A}_{2},{A}_{7 : {12}}\right) }^{T}$ (when $p = {12}$ ) from a multivariate normal distribution where the mean of each component is given as ${\mu }_{1} = \mathop{\sum }\limits_{i}{C}_{i}$ , ${\mu }_{2} = \mathop{\sum }\limits_{i}{\left( -1\right) }^{i}{C}_{i}$ , ${\mu }_{7} = {C}_{1}$ , ${\mu }_{8} = {C}_{2}$ , ${\mu }_{9} = {C}_{3}$ , ${\mu }_{10} = - {C}_{1} + {C}_{2}$ , ${\mu }_{11} = - {C}_{2} + {C}_{3}$ , ${\mu }_{12} = - {C}_{3} + {C}_{4}$ , and the covariance matrix is ${\left( {\sigma }_{ij}\right) }_{\left( {p - 4}\right) \times \left( {p - 4}\right) }$ with ${\sigma }_{ij} = {0.5}^{\left| i - j\right| }$ . We generated ${A}_{3}$ from a normal distribution with mean $\left| {{A}_{1} + {A}_{2}}\right|$ and variance $\left| {A}_{1}\right|$ . ${A}_{4}$ has a normal distribution with mean ${\left| {A}_{1} + {A}_{2}\right| }^{1/2}$ and variance $\left| {A}_{2}\right|$ . ${A}_{5}$ and ${A}_{6}$ were generated from Bernoulli distributions with success probabilities $\exp \left( {A}_{2}\right) /\left\{ {1 + \exp \left( {A}_{2}\right) }\right\}$ and $\Phi \left( {A}_{2}\right)$ , respectively, where $\Phi \left( \cdot \right)$ denotes the standard normal cumulative distribution function.
+
+ Case 2. The treatment vector is generated from a multivariate normal distribution where the mean of each component is given as follows: ${\mu }_{1} = \mathop{\sum }\limits_{i}{C}_{i}$ , ${\mu }_{2} = \mathop{\sum }\limits_{i}{\left( -1\right) }^{i}{C}_{i}$ , ${\mu }_{3} = {C}_{1} - {C}_{2} - {C}_{3} + {C}_{4}$ , ${\mu }_{4} = - {C}_{1} + {C}_{2} + {C}_{3} - {C}_{4}$ , ${\mu }_{5} = \mathop{\sum }\limits_{i}{C}_{i} - 2{C}_{3}$ , ${\mu }_{6} = \mathop{\sum }\limits_{i}{C}_{i} - 2{C}_{1}$ , and ${\mu }_{6 + i} = {C}_{i}$ , ${\mu }_{9 + i} = - {C}_{i}$ for $i = 1,2,3$ , and the covariance matrix is ${\left( {\sigma }_{ij}\right) }_{p \times p}$ with ${\sigma }_{ij} = {0.5}^{\left| i - j\right| }$ .
+
+ The response variable is generated using
+
+ $$
+ Y = {A}^{T}{\beta }_{1} + {\left( {A}^{T}{\beta }_{2}\right) }^{2} + \mathop{\sum }\limits_{{i = 1}}^{4}{C}_{i} + \left\{ {\mathop{\sum }\limits_{{j = 1}}^{p}{A}_{j}}\right\} \times \left\{ {\mathop{\sum }\limits_{{i = 1}}^{4}{C}_{i}}\right\} + \epsilon ,
+ $$
+
+ where the error term $\epsilon$ is generated from a standard normal distribution. For $p = 6$ , we set ${\beta }_{1} = {\left( 1,1,1,1,1,1\right) }^{T}/\sqrt{6}$ and ${\beta }_{2} = {\left( 1, - 1,1, - 1,1, - 1\right) }^{T}/\sqrt{6}$ . For $p = {12}$ , the last 6 components of ${\beta }_{1}$ and ${\beta }_{2}$ are identically zero.
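For concreteness, the Case 2 generating process with $p = 6$ and $n = 200$ can be sketched as follows. This is illustrative only: we read the quadratic term of the response as $(A^{T}\beta_{2})^{2}$ and take $\beta_{1}$ with six components, which we believe is the intended specification.

```python
# Sketch of the Case 2 data-generating process (p = 6), under the assumptions
# stated in the lead-in.
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 6
C = rng.normal(size=(n, 4))                          # baseline factors
mu = np.column_stack([
    C.sum(axis=1),                                   # mu_1 = sum_i C_i
    (C * (-1.0) ** np.arange(1, 5)).sum(axis=1),     # mu_2 = sum_i (-1)^i C_i
    C[:, 0] - C[:, 1] - C[:, 2] + C[:, 3],           # mu_3
    -C[:, 0] + C[:, 1] + C[:, 2] - C[:, 3],          # mu_4
    C.sum(axis=1) - 2 * C[:, 2],                     # mu_5
    C.sum(axis=1) - 2 * C[:, 0],                     # mu_6
])
Sigma = 0.5 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
A = mu + rng.multivariate_normal(np.zeros(p), Sigma, size=n)
beta1 = np.ones(p) / np.sqrt(6)
beta2 = np.array([1.0, -1.0, 1.0, -1.0, 1.0, -1.0]) / np.sqrt(6)
Y = A @ beta1 + (A @ beta2) ** 2 + C.sum(axis=1) \
    + A.sum(axis=1) * C.sum(axis=1) + rng.normal(size=n)
```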
+
+ As mentioned in Section 3.2, Theorem 1 provides the whole class of estimating equations for a given $\widetilde{U}\left( \beta \right)$ . For simplicity, we assume $\mathbb{E}\left\lbrack {\alpha \left( A\right) \mid g\left( {A;\beta }\right) }\right\rbrack = 0$ , and therefore $\widetilde{U}\left( \beta \right) = \{ Y - \ell \left( {g\left( {A;\beta }\right) }\right) \} \times \alpha \left( A\right)$ in the following simulations. The performance of the estimates was computed using the distance between true $\beta$ and $\widehat{\beta }$ , defined as the Frobenius norm of the matrix $\widehat{\beta }{\left( {\widehat{\beta }}^{T}\widehat{\beta }\right) }^{-1}{\widehat{\beta }}^{T} - \beta {\left( {\beta }^{T}\beta \right) }^{-1}{\beta }^{T}$ .
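The reported performance metric is a distance between the subspaces spanned by the true and estimated $\beta$; it can be computed directly from the two bases:

```python
# Frobenius norm between the projection matrices onto the column spaces of the
# estimated and true beta (both p x d), as described in the text.
import numpy as np

def projection_distance(B_hat, B):
    proj = lambda M: M @ np.linalg.solve(M.T @ M, M.T)   # M (M^T M)^{-1} M^T
    return float(np.linalg.norm(proj(B_hat) - proj(B), "fro"))
```

Because the projection matrix depends only on the column space, this distance is invariant to invertible reparameterizations of the basis, which makes it a sensible comparison between subspaces rather than between raw parameter matrices.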
+
+ Simulation 1. In this set of simulations, we aim to evaluate the performance of different estimation strategies for $\beta$ and fix the sample size to 200. The results for both Case 1 and Case 2 when $p = 6$ are presented in Fig. 1, and the results for both cases when $p = {12}$ are deferred to the appendix. In each case, there are 4 different boxplots. The first one from the left, labeled Reg, corresponds to the semiparametric SDR estimating equation (1). Since regular SDR ignores the influence of the confounding variables $C$ , the estimates do not capture the true causal relationship between $A$ and $Y$ . In the second boxplot, labeled IPW, we use the IPW estimator in (7) with the correct model for $p\left( {A \mid C}\right)$ , properly adjusting for all the confounders. This recovers a more reasonable estimate of ${\beta }^{ * }$ than the first one. However, while IPW generally performs better than PCA or regression SDR, the improvement is relatively modest, possibly due to the inefficiency of naive IPW estimators at the reported sample size. The third plot, labeled AIPW, uses the augmented IPW (AIPW) estimator corresponding to (8), which greatly outperforms the other estimators. The last plot corresponds to the classical PCA dimension reduction technique, where the treatment-outcome relation is ignored and the first two principal directions are reported as the estimated basis of the lower dimensional space. As illustrated in the plots, this naive approach does not seek to preserve a causal, or indeed any, relationship to the outcome.
+
+ Our main objective was to reduce the dimension of the treatment such that the cause-effect relation between treatment and outcome is preserved. In order to show that our estimating procedures actually preserve this relation, we compute the contrast between $\mathbb{E}\left\lbrack {Y\left( {g\left( {{a}_{i};\beta }\right) }\right) }\right\rbrack$ and $\mathbb{E}\left\lbrack {Y\left( {g\left( {{a}_{j};\beta }\right) }\right) }\right\rbrack$ for $i,j = 1,\ldots ,n$ , given the true parameters and the estimated ones. The $n \times n$ heatmaps of effects are provided in Fig. 2 for the true effects and the ones estimated by regular SDR and AIPW. We used 500 sample points generated from Case 2 with $p = 6$ to plot these heatmaps. The plots in Fig. 2(a) and (c) demonstrate the close similarity between the true surface and the one estimated by AIPW, whereas the surface estimated by regression SDR appears to be very different. The root-mean-squared errors between the true causal surface and the ones estimated by AIPW and regular regression SDR are 0.48 and 14.29, respectively.
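A sketch of how such a pairwise contrast matrix can be assembled; here `beta_hat` and the fitted link `l_hat` are hypothetical placeholders standing in for the estimated reduction and the estimated counterfactual mean function.

```python
# Sketch of the n x n contrast matrix with (i, j) entry estimating
# E[Y(g(a_i; beta))] - E[Y(g(a_j; beta))]; beta_hat and l_hat are placeholders.
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(50, 6))                 # observed treatments a_1, ..., a_n
beta_hat = np.ones((6, 2)) / np.sqrt(6)      # placeholder estimate of beta
l_hat = lambda g: g[:, 0] + g[:, 1] ** 2     # placeholder fitted E_q[Y | g]
m = l_hat(A @ beta_hat)                      # estimated E[Y(g(a_i; beta))]
contrast = m[:, None] - m[None, :]           # (i, j): effect of a_i versus a_j
```

By construction the matrix is antisymmetric with a zero diagonal, matching the symmetry-with-opposite-tones pattern visible in the heatmaps.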
+
+
+
+ Figure 1: Boxplots of Frobenius norms between true and estimated parameters in simulations $\left( {p = 6}\right)$ .
+
+ Simulation 2. We also evaluate the performance of our bootstrap procedure for estimating the structural dimension $d$ , discussed in Section 4. We use the same data generating process as in Simulation 1, with $p = 6$ and $n = {200}$ . We set the bootstrap size to $B = {50}$ . The relative frequencies of the selected dimension are reported in the appendix; they reveal that the bootstrap procedure reliably recovers the true structural dimension, namely 2 in both cases (${98}\%$ of the time in Case 1 and ${90}\%$ of the time in Case 2).
+
+ Simulation 3. We also demonstrate the effect of sample size on the IPW and AIPW estimators of $\beta$ in the causal SDR model. Results are reported in the appendix.
+
+ § 6 DATA APPLICATION
+
+ We now illustrate our methods using a cohort of patients treated with radiation therapy for head and neck cancer. The cohort consists of 613 patients who received radiation therapy at the $\mathrm{X}$ hospital prior to 2016. Radiation therapy is one of the most effective modalities for the treatment of head and neck cancers. However, because of the complex shape of target volumes in close proximity to sensitive organs, it may be associated with acute and late radiation morbidities, such as xerostomia, mucositis, and dysphagia, that affect the patient's quality of life. Such morbidities can lead to a severe reduction in food intake and undesirable, possibly dangerous, weight loss. Prospective studies have evaluated risk factors for weight loss in patients who undergo radiation therapy [Johnston et al., 1982, Cacicedo et al., 2014]. However, a proper analysis of whether radiation causes weight loss has not yet been reported, likely due to the methodological challenges involved in using a high dimensional variable such as radiation therapy as the treatment in a causal analysis.
+
+ Here, we focus on the parotid glands, which are incidentally irradiated during treatment, and examine the summary measures of radiation therapy given by the cumulative dose-volume histograms extracted from the raw voxel maps of radiation doses. In particular, we looked at 5 equally spaced percentages of volume to construct a vector of treatment doses. We used weight loss as the outcome of interest, defined as the difference between the weight measured within 100 to 160 days after the completion of treatment and the weight measured during consultation before the start of treatment. The data include records on demographics, such as age, gender, and race, and baseline clinical factors, such as whether the patient had used feeding tubes and/or received chemotherapy before the initiation of treatment. We assumed these variables are sufficient to control for confounding and thus ensure the conditional ignorability assumption is met.
+
+ There exists a rich literature relating parotid dose-volume characteristics to radiotherapy-induced salivary toxicity. It has been shown that the mean dose to the parotid glands correlates strongly with xerostomia and salivary dysfunction, which are risk factors for weight loss [Deasy et al., 2010]. In light of such studies, we assume there exists a single dimension in the radiation exposure that captures the relationships between exposure and side effects, including weight loss. Therefore, we set the structural dimension $d$ to be one. We set the mapping function $g\left( { \cdot ;\beta }\right)$ to be linear in its parameters $\beta$ , and use Bayesian additive regression trees to fit all nuisance models. The code is provided as part of the supplementary materials. The oncology data were excluded for reasons of patient confidentiality.
+
+ We generated $n \times n$ heatmaps in Fig. 3 to illustrate the cause-effect relationship between radiation treatment and weight loss. We use the AIPW estimator obtained from Theorem 1.
+
+ Figure 2: Heatmaps of true causal effects and effects computed by estimating $\beta$ via the regular SDR and the AIPW estimators. Heatmaps are antidiagonally symmetric.
+
+ Figure 3: Heatmap to illustrate the causal effect of radiation on weight loss, where effects are computed by estimating $\beta$ via the AIPW estimator. Heatmap is antidiagonally symmetric with opposite color tones.
+
+ The absolute values on the plots are antidiagonally symmetric. Radiation doses were sorted in increasing order along both axes. We interpret the heatmaps as follows. Consider the ${\left( i,i\right) }^{\text{ th }}$ point on the plot and draw a line along the y-coordinate. Since radiation doses were sorted in increasing order, the radiation value at any point on the line to the right of $(i, i)$ is higher than the radiation value at the ${\left( i,i\right) }^{\text{ th }}$ point; for any point to the left of $(i, i)$ , the radiation value is lower. The value at the ${\left( k,i\right) }^{\text{ th }}$ coordinate corresponds to the contrast $\mathbb{E}\left\lbrack {Y\left( {g\left( {{a}_{k};\beta }\right) }\right) - Y\left( {g\left( {{a}_{i};\beta }\right) }\right) }\right\rbrack$ . Consequently, if $k > i$ , a red dot at the $(k, i)$ coordinate implies that an increase in radiation dose leads to an increase in weight loss, whereas a blue dot implies that it does not. Similarly, a blue dot at $(k, i)$ , for $k < i$ , implies that a decrease in radiation leads to a decrease in weight loss; the reverse is implied when the dot is red. Focusing on the bottom right triangle, we note that most of the area is filled with red. This implies that as we increase the amount of radiation, the severity of weight loss increases. Thus, radiation therapy is potentially a cause of weight loss among patients who undergo the treatment.
+
+ We investigated the relationship between the treatment and outcome as the treatment dimension increases by selecting larger numbers of equally spaced percentages of volume in the dose-volume histograms. The plots are provided in the supplement. Throughout the experiment, we used a crude summary of the treatment that itself had dimension greater than one. A more fine-grained approach is to look at the raw voxel maps. A voxel-based approach would identify the relations between radiation-induced morbidity and local dose release, thus providing potentially better insight into the spatial signature of radiation sensitivity in composite regions like the head and neck district [Monti et al., 2017]. Given the small cohort of patients that we have access to, a voxel-based approach would fall into the $p \gg n$ paradigm and would require strong sparsity assumptions [Li, 2007]. This is an interesting and challenging direction for future work.
+
+ § 7 CONCLUSIONS
+
+ In this paper, we have described a generalization of the semiparametric sufficient dimension reduction (SDR) approach for regression problems of [Ma and Zhu, 2012] to causal SDR. Specifically, we developed a method that reduces the dimension of a multidimensional treatment while preserving the causal relationship between the treatment and the outcome, quantified as a counterfactual mean. Using ideas from structural models [Robins, 1999], we provided semiparametric estimators for the parameters of the function that maps the multidimensional treatment to a lower dimensional subspace. We have shown that our estimator exhibits "2x2 robustness": for each of two pairs of nuisance models, the estimator remains consistent if at least one model in the pair is correctly specified. In order to scale our methods to high dimensional applied settings, such as fMRI scans, text data, or radiation oncology voxel data, we need to incorporate ideas from parametric modeling and sparsity within a semiparametric framework. Another natural extension for future work is to apply these methods to classical causal inference in longitudinal studies, where multiple time points render a collection of binary treatments a multidimensional object. Our causal SDR approach would provide an alternative to the parametric marginal structural models typically employed in such settings.
UAI/UAI 2022/UAI 2022 Conference/BFZL7ULicg5/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,913 @@
+ # Simplified and Unified Analysis of Various Learning Problems by Reduction to Multiple-Instance Learning
+
+ ## Abstract
+
+ In statistical learning, many problem formulations have been proposed so far, such as multi-class learning, complementarily labeled learning, multi-label learning, and multi-task learning, which provide theoretical models for various real-world tasks. Although they have been extensively studied, the relationships among them have not been fully investigated. In this work, we focus on a particular problem formulation called Multiple-Instance Learning (MIL), and show that various learning problems, including all the problems mentioned above as well as some new problems, can be reduced to MIL with theoretically guaranteed generalization bounds, where the reductions are established under a new reduction scheme we provide as a byproduct. The results imply that the MIL-reduction gives a simplified and unified framework for designing and analyzing algorithms for various learning problems. Moreover, we show that the MIL-reduction framework can be kernelized.
+
+ ## 1 INTRODUCTION
+
+ In this study, we explore how a large class of learning problems can be reduced to the Multiple-Instance Learning (MIL) problem. This is strongly motivated by the results of [Sabato and Tishby, 2012] and [Suehiro et al., 2020]. Suehiro et al. [2020] showed that some local-feature-based learning problems can be reduced to a MIL problem, which gave us the insight that MIL has a high capability of representing various learning problems. However, the reduced problem there is quite specific, whereas Sabato and Tishby [2012] proposed a much more general formulation of MIL; we therefore believe that a wider class of learning problems can be reduced to MIL.
+
+ We provide a MIL-reduction framework and reveal that various learning problems, such as multi-class learning, complementarily labeled learning, multi-label learning, and multi-task learning, can be reduced to MIL. By the reduction, we immediately derive generalization bounds from [Sabato and Tishby, 2012], as well as learning algorithms. That is, our reduction framework greatly simplifies the analyses of generalization bounds as compared with the analyses in previous works [e.g., Lei et al., 2019, Ishida et al., 2017, Yu et al., 2014, Pontil and Maurer, 2013]. Some of the obtained generalization bounds are competitive with or incomparable to the existing results. In particular, for multi-label learning, we derive an improved generalization bound, and for complementarily labeled learning, we derive a novel learning algorithm, which is the first polynomial-time algorithm in a certain setting. Moreover, we propose three new learning problems, multi-label learning with perfectionistic loss, top-1 ranking learning, and top-1 ranking learning with negative feedback, and we demonstrate that they can be reduced to MIL as well. The results imply that our MIL-reduction gives a unified framework for designing and analyzing algorithms for various learning problems.
+
+ To provide the MIL-reduction framework, we propose a general reduction scheme among learning problems. Our scheme has two remarkable features, as described below. First, our reduction transforms every instance-label pair $(x, y)$ in the given sample of the original learning problem into an instance-label pair $\left( {{x}^{\prime },{y}^{\prime }}\right)$ to form a sample of the reduced learning problem. In contrast, standard reduction schemes employ an instance transformation and a label transformation separately, constructing ${x}^{\prime }$ from $x$ and ${y}^{\prime }$ from $y$ , respectively. Therefore, our scheme enables us to design reduction algorithms among a wider class of learning problems, e.g., learning-to-rank to classification, and supervised learning to weakly supervised learning. Second, our reduction scheme ensures that Empirical Risk Minimization (ERM) for the reduced problem implies ERM for the original one, while the empirical Rademacher complexity of the hypothesis classes (composed with the loss function) is preserved through the reduction. This means that we can employ an existing ERM algorithm for the reduced problem to obtain an ERM algorithm for the original problem with a theoretically guaranteed generalization bound, which is immediately derived from a known generalization bound for the reduced problem. We also show that the MIL-reduction framework can be kernelized.
+
+ The main contributions are summarized as follows:
+
+ - We propose a general reduction scheme based on the ERM, which allows us to derive a generalization risk bound of the original problem immediately.
+
+ - We demonstrate that several learning problems, from traditional to new problems, can be reduced to MIL. The results imply that our MIL-reduction gives a simplified and unified framework for the analyses for various learning problems.
+
+ - We obtain novel theoretical results for some learning problems.
+
+ - We show that the MIL-reduction framework can be kernelized.
+
+ Several proofs are given in the supplementary materials.
+
+ ## 2 PRELIMINARIES
+
+ For an integer $u$ , $\left\lbrack u\right\rbrack$ denotes the set $\{ 1,\ldots , u\}$ . $I\left( \mathrm{e}\right)$ denotes the indicator function of the event $\mathrm{e}$ ; that is, $I\left( \mathrm{e}\right) = 1$ if $\mathrm{e}$ is true and $I\left( \mathrm{e}\right) = 0$ otherwise.
+
+ A learning problem is represented by a pair $\left( {\mathcal{H},\ell }\right)$ of a hypothesis class $\mathcal{H} \subseteq \{ h : \mathcal{X} \rightarrow \mathcal{Y}\}$ and a loss function $\ell : \mathcal{X} \times \mathcal{Y} \times \mathcal{H} \rightarrow \mathbb{R}$ for some input space $\mathcal{X}$ and output space $\mathcal{Y}$ . A learner receives a sample $S = \left( {\left( {{x}_{1},{y}_{1}}\right) ,\ldots ,\left( {{x}_{n},{y}_{n}}\right) }\right)$ where each input-output pair $\left( {{x}_{i},{y}_{i}}\right)$ is drawn i.i.d. according to an unknown distribution $\mathcal{D}$ over $\mathcal{X} \times \mathcal{Y}$ . The goal of the learner is to find, with high probability, a hypothesis $h \in \mathcal{H}$ so that the generalization risk ${R}_{\mathcal{D}}\left( h\right) =$ ${\mathbb{E}}_{\left( {x, y}\right) \sim \mathcal{D}}\ell \left( {x, y, h}\right)$ is small.
+
+ For a learning problem $\left( {\mathcal{H},\ell }\right)$ , we define a class of loss functions as $\widehat{\mathcal{H}} = \{ \left( {x, y}\right) \mapsto \ell \left( {x, y, h}\right) \mid h \in \mathcal{H}\}$ when the underlying loss function $\ell$ is clear from the context. We give the definition of the empirical Rademacher complexity, which is used to bound the generalization risk.
+
+ Definition 1 (Empirical Rademacher complexity [Bartlett and Mendelson, 2003]). Given a sample $S = \left( {\left( {{x}_{1},{y}_{1}}\right) ,\ldots ,\left( {{x}_{n},{y}_{n}}\right) }\right) \in {\left( \mathcal{X} \times \mathcal{Y}\right) }^{n}$ , the empirical Rademacher complexity ${\Re }_{S}\left( \widehat{\mathcal{H}}\right)$ of a class $\widehat{\mathcal{H}}$ w.r.t. $S$ is defined as ${\Re }_{S}\left( \widehat{\mathcal{H}}\right) = \frac{1}{n}{\mathbb{E}}_{\mathbf{\sigma }}\left\lbrack {\mathop{\sup }\limits_{{g \in \widehat{\mathcal{H}}}}\mathop{\sum }\limits_{{i = 1}}^{n}{\sigma }_{i}g\left( {{x}_{i},{y}_{i}}\right) }\right\rbrack$ , where $\mathbf{\sigma } \in \{ - 1,1{\} }^{n}$ and each ${\sigma }_{i}$ is an independent uniform random variable in $\{ - 1,1\}$ .
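For a finite class, the expectation in Definition 1 can be approximated by Monte Carlo over the Rademacher variables. The sketch below assumes the class is represented by a matrix `G_values` whose row $j$ holds the values $\left( g_j(x_1, y_1), \ldots, g_j(x_n, y_n) \right)$ on the fixed sample.

```python
# Monte Carlo approximation of the empirical Rademacher complexity of a finite
# class, directly following Definition 1.
import numpy as np

def rademacher_complexity(G_values, n_draws=2000, seed=0):
    rng = np.random.default_rng(seed)
    n = G_values.shape[1]                                # sample size
    sigma = rng.choice([-1.0, 1.0], size=(n_draws, n))   # Rademacher draws
    # per draw: sup over the class of sum_i sigma_i g(x_i, y_i), then average
    return float((sigma @ G_values.T).max(axis=1).mean() / n)
```

For example, a singleton class containing one constant function has complexity close to 0, while the class $\{g, -g\}$ has complexity $\frac{1}{n}\mathbb{E}\left| \sum_i \sigma_i g(x_i, y_i) \right| \ge 0$.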
+
+ Generalization risk bound [Mohri et al., 2018] Let $\left( {\mathcal{H},\ell }\right)$ be a learning problem and $S$ be a sample of size $n$ drawn according to a distribution $\mathcal{D}$ . Then, it holds with probability at least $1 - \delta$ that for all $h \in \mathcal{H}$ ,
+
+ $$
+ {R}_{\mathcal{D}}\left( h\right) \leq {\widehat{R}}_{S}\left( h\right) + 2{\Re }_{S}\left( \widehat{\mathcal{H}}\right) + 3\sqrt{\log \left( {2/\delta }\right) /\left( {2n}\right) },
+ $$
+
+ where ${\widehat{R}}_{S}\left( h\right) = \frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}\ell \left( {{x}_{i},{y}_{i}, h}\right)$ denotes the empirical risk of $h$ for sample $S$ .
+
+ ## 3 REDUCTION SCHEME FOR ERM
+
+ We introduce a general reduction scheme for empirical risk minimization and provide useful theoretical results.
48
+
49
+ Definition 2 (ERM-reduction). A learning problem $\left( {\mathcal{H},\ell }\right)$ over input-output space $\mathcal{X} \times \mathcal{Y}$ is ERM-reducible to another learning problem $\left( {{\mathcal{H}}^{\prime },{\ell }^{\prime }}\right)$ over input-output space ${\mathcal{X}}^{\prime } \times {\mathcal{Y}}^{\prime }$ if there exist polynomial-time computable functions $\alpha : \mathcal{X} \times \mathcal{Y} \rightarrow {\mathcal{X}}^{\prime } \times {\mathcal{Y}}^{\prime }$ and $\beta : {\mathcal{H}}^{\prime } \rightarrow \mathcal{H}$ such that for any $\left( {x, y}\right) \in \mathcal{X} \times \mathcal{Y}$ and for any ${h}^{\prime } \in {\mathcal{H}}^{\prime }$ ,
50
+
51
+ $$
52
+ \ell \left( {x, y, h}\right) = {\ell }^{\prime }\left( {{x}^{\prime },{y}^{\prime },{h}^{\prime }}\right) ,
53
+ $$
54
+
55
+ where $\left( {{x}^{\prime },{y}^{\prime }}\right) = \alpha \left( {x, y}\right)$ and $h = \beta \left( {h}^{\prime }\right)$ .
56
+
57
+ Here we show the remarkable relationship between the original problem and the reduced problem.
58
+
59
+ Proposition 1. Suppose that $\left( {\mathcal{H},\ell }\right)$ is ERM-reducible to $\left( {{\mathcal{H}}^{\prime },{\ell }^{\prime }}\right)$ with transformations $\alpha$ and $\beta$ . For any sample $S =$ $\left( {\left( {{x}_{1},{y}_{1}}\right) ,\ldots ,\left( {{x}_{n},{y}_{n}}\right) }\right) \in {\left( \mathcal{X} \times \mathcal{Y}\right) }^{n}$ , the following holds:
60
+
61
+ (i) (In)equality of the ERMs:
62
+
63
+ $$
64
+ \mathop{\min }\limits_{{h \in \mathcal{H}}}{\widehat{R}}_{S}\left( h\right) \leq \mathop{\min }\limits_{{h \in {\mathcal{H}}_{\beta }}}{\widehat{R}}_{S}\left( h\right)
65
+ $$
66
+
67
+ $$
68
+ = \mathop{\min }\limits_{{{h}^{\prime } \in {\mathcal{H}}^{\prime }}}{\widehat{R}}_{{S}^{\prime }}\left( {h}^{\prime }\right) ,
69
+ $$
70
+
71
+ where ${\mathcal{H}}_{\beta } = \left\{ {\beta \left( {h}^{\prime }\right) \mid {h}^{\prime } \in {\mathcal{H}}^{\prime }}\right\}$ and ${S}^{\prime } =$ $\left( {\left( {{x}_{1}^{\prime },{y}_{1}^{\prime }}\right) ,\ldots ,\left( {{x}_{n}^{\prime },{y}_{n}^{\prime }}\right) }\right)$ with $\left( {{x}_{i}^{\prime },{y}_{i}^{\prime }}\right) = \alpha \left( {{x}_{i},{y}_{i}}\right)$ for $i \in \left\lbrack n\right\rbrack$ .
72
+
73
+ (ii) Empirical Rademacher complexity preserving:
74
+
75
+ $$
76
+ {\Re }_{S}\left( {\widehat{\mathcal{H}}}_{\beta }\right) = {\Re }_{{S}^{\prime }}\left( {\widehat{\mathcal{H}}}^{\prime }\right) .
77
+ $$
78
+
79
+ We can design a reduction scheme in a straightforward way as follows. When given a sample $S$ of the original problem, we construct ${S}^{\prime }$ of the reduced problem by $\alpha$ and obtain ${h}^{\prime }$ by solving the ERM of the reduced problem. Then, we obtain the final hypothesis $h$ by $\beta$ .
80
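The scheme above can be written as a generic pipeline. A sketch with a hypothetical interface (the concrete $\alpha$, $\beta$, and reduced-ERM solver come from each reduction in Section 5):

```python
# Generic ERM-reduction pipeline of Section 3 (illustrative names):
# build S' via alpha, solve the reduced ERM, and map back via beta.

def reduce_and_learn(sample, alpha, solve_erm_reduced, beta):
    """sample: list of (x, y) pairs of the original problem."""
    s_prime = [alpha(x, y) for (x, y) in sample]  # reduced sample S'
    h_prime = solve_erm_reduced(s_prime)          # ERM in (H', l')
    return beta(h_prime)                          # hypothesis in H_beta

# Toy instantiation: identity maps and a "solver" returning a constant hypothesis.
h = reduce_and_learn(
    [(0.0, 1), (1.0, -1)],
    alpha=lambda x, y: (x, y),
    solve_erm_reduced=lambda s_prime: (lambda x: 1),
    beta=lambda h_prime: h_prime,
)
```

Proposition 1(i) is what makes this pipeline sound: the ERM value attained in the reduced problem equals the ERM value over $\mathcal{H}_{\beta}$.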
+
81
+ We derive the following generalization risk bound using the propositions on the empirical Rademacher complexity.
82
+
83
+ Corollary 2. Let $S = \left( {\left( {{x}_{1},{y}_{1}}\right) ,\ldots ,\left( {{x}_{n},{y}_{n}}\right) }\right)$ be a sample drawn i.i.d. according to an unknown distribution $\mathcal{D}$ in an original problem $\left( {\mathcal{H},\ell }\right)$ . If $\left( {\mathcal{H},\ell }\right)$ is ERM-reducible to $\left( {{\mathcal{H}}^{\prime },{\ell }^{\prime }}\right)$ , then for ${S}^{\prime } = \left( {\alpha \left( {{x}_{1},{y}_{1}}\right) ,\ldots ,\alpha \left( {{x}_{n},{y}_{n}}\right) }\right)$ and $h = \beta \left( {h}^{\prime }\right)$ , the following generalization risk bound holds with probability at least $1 - \delta$ for all $h \in {\mathcal{H}}_{\beta }$:
84
+
85
+ $$
86
+ {R}_{\mathcal{D}}\left( h\right) \leq {\widehat{R}}_{{S}^{\prime }}\left( {h}^{\prime }\right) + 2{\Re }_{{S}^{\prime }}\left( {\widehat{\mathcal{H}}}^{\prime }\right) + 3\sqrt{\log \left( {2/\delta }\right) /{2n}}.
87
+ $$
88
+
89
+ That is, we can guarantee a generalization bound for the original problem because the empirical Rademacher complexity is preserved under the reduction.
90
+
91
+ ## 4 MIL-REDUCTION FRAMEWORK
92
+
93
+ This section is the highlight of this paper. We define ERM-reducibility to MIL and show the conditions under which a problem is reducible. Moreover, we show that some theoretical analyses can be simplified. We use primed symbols (e.g., ${\mathcal{X}}^{\prime }$ ) to indicate that MIL is the reduced problem.
94
+
95
+ ### 4.1 PROBLEM FORMULATION OF MIL
96
+
97
+ Let $\mathcal{Z} \subseteq {\mathbb{R}}^{{d}^{\prime }}$ be the instance space. ${\mathcal{X}}^{\prime } \subseteq {2}^{\mathcal{Z}}$ is an input space and a bag ${x}^{\prime } \in {\mathcal{X}}^{\prime }$ is a finite set of instances chosen from $\mathcal{Z}$ . Let ${\mathcal{Y}}^{\prime } = \{ - 1,1\}$ be an output space. Following the formulation by [Sabato and Tishby, 2012], we define, for the rest of the paper, a MIL problem as a pair $\left( {{\mathcal{H}}^{\prime },{\ell }^{\prime }}\right)$ of a hypothesis class ${\mathcal{H}}^{\prime }$ and a loss function ${\ell }^{\prime }$ of the form:
98
+
99
+ $$
100
+ {\mathcal{H}}^{\prime } = \left\{ {{h}^{\prime } : {x}^{\prime } \mapsto {\Psi }_{p}\left( \left\{ {{f}_{2}\left( {g\left( z\right) }\right) \mid z \in {x}^{\prime }}\right\} \right) \mid g \in \mathcal{G}}\right\} , \tag{1}
101
+ $$
102
+
103
+ $$
104
+ {\ell }^{\prime } : \left( {{x}^{\prime },{y}^{\prime },{h}^{\prime }}\right) \mapsto {f}_{1}\left( {{y}^{\prime }{h}^{\prime }\left( {x}^{\prime }\right) }\right) , \tag{2}
105
+ $$
106
+
107
+ where $\mathcal{G} \subseteq \{ g : \mathcal{Z} \rightarrow \mathbb{R}\}$ , ${f}_{1} : \mathbb{R} \rightarrow \left\lbrack {0,1}\right\rbrack$ is an $a$ -Lipschitz function, ${f}_{2} : \mathbb{R} \rightarrow \left\lbrack {-1,1}\right\rbrack$ is a $b$ -Lipschitz function, and ${\Psi }_{p} : {2}^{\left\lbrack -1,1\right\rbrack } \rightarrow \left\lbrack {-1,1}\right\rbrack$ is a $p$ -norm-like function, which is defined for any $p \in \lbrack 1,\infty )$ as
108
+
109
+ $$
110
+ {\Psi }_{p}\left( V\right) = {\left( \frac{1}{m}\mathop{\sum }\limits_{{i = 1}}^{m}{\left( {v}_{i} + 1\right) }^{p}\right) }^{1/p} - 1
111
+ $$
112
+
113
+ for every finite set $V = \left\{ {{v}_{1},{v}_{2},\ldots ,{v}_{m}}\right\} \subseteq \left\lbrack {-1,1}\right\rbrack$ . We define ${\Psi }_{\infty }$ as $\mathop{\lim }\limits_{{p \rightarrow \infty }}{\Psi }_{p}$ . Note that ${\Psi }_{p}$ is 1-Lipschitz for any $p$ [see Sabato and Tishby, 2012]. ${\Psi }_{p}$ is a user-defined function that aggregates information over a bag. Typical choices of ${\Psi }_{p}$ are the max operator $\left( {p = \infty }\right)$ and the average $\left( {p = 1}\right)$ .
114
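As a quick illustration, $\Psi_p$ and its two typical special cases can be computed directly from the definition (toy values; `psi_p` is an illustrative name):

```python
import numpy as np

def psi_p(values, p):
    """p-norm-like aggregation Psi_p over a finite set V in [-1, 1].

    Psi_p(V) = ((1/m) * sum_i (v_i + 1)^p)^(1/p) - 1, and Psi_inf is the max.
    """
    v = np.asarray(values, dtype=float)
    if np.isinf(p):
        return float(v.max())
    return float(np.mean((v + 1.0) ** p) ** (1.0 / p) - 1.0)

vals = [-0.5, 0.0, 0.5]
avg = psi_p(vals, 1)       # p = 1 recovers the plain average
mx = psi_p(vals, np.inf)   # p = inf recovers the max
```

Intermediate values of $p$ interpolate monotonically between the average and the max.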
+
115
+ The only difference from the hypothesis class of [Sabato and Tishby, 2012] is ${f}_{2}$ . ${f}_{2}$ appears redundant (because ${f}_{2} \circ g$ can be replaced by a single function) but plays an important role in the reduction (examples are shown in Section 5).
116
+
117
+ ### 4.2 MIL-REDUCIBILITY
118
+
119
+ Now we give the definition of MIL-reducibility in a straightforward way.
120
+
121
+ Definition 3 (MIL-reducibility). A learning problem $\left( {\mathcal{H},\ell }\right)$ is said to be MIL-reducible if there exists a MIL problem $\left( {{\mathcal{H}}^{\prime },{\ell }^{\prime }}\right)$ such that $\left( {\mathcal{H},\ell }\right)$ is ERM-reducible to $\left( {{\mathcal{H}}^{\prime },{\ell }^{\prime }}\right)$ .
122
+
123
+ ### 4.3 RADEMACHER COMPLEXITY BOUND
124
+
125
+ We show the empirical Rademacher complexity bound for the MIL-reducible problems using our reduction scheme. As aforementioned, the main advantage of our reduction scheme is to allow us to apply the empirical Rademacher complexity bound of the reduced problem to the original problems. In this paper, we utilize the bound provided by Sabato and Tishby [2012].
126
+
127
+ Theorem 3 (An application of [Sabato and Tishby, 2012]). Let $\left( {{\mathcal{H}}^{\prime },{\ell }^{\prime }}\right)$ be a MIL problem defined in Eq. (1) and (2). Let ${S}^{\prime } = \left( {\left( {{x}_{1}^{\prime },{y}_{1}^{\prime }}\right) ,\ldots ,\left( {{x}_{n}^{\prime },{y}_{n}^{\prime }}\right) }\right)$ be a sample with average bag size ${r}_{{S}^{\prime }}$ . Let $\widehat{\mathcal{G}} = \left\{ {{f}_{2} \circ g \mid g \in \mathcal{G}}\right\}$ . If there exist $C,\rho \geq 0$ such that for all sufficiently large $n$ ,
128
+
129
+ $$
130
+ {\Re }_{{S}^{\prime }}\left( \widehat{\mathcal{G}}\right) \leq \frac{C{\ln }^{\rho }\left( n\right) }{\sqrt{n}},
131
+ $$
132
+
133
+ then
134
+
135
+ $$
136
+ {\Re }_{{S}^{\prime }}\left( {\widehat{\mathcal{H}}}^{\prime }\right) = O\left( \frac{\log \left( {{a}^{2}{n}^{2}{r}_{{S}^{\prime }}}\right) \left( {\frac{aC}{\rho + 1}{\ln }^{\rho + 1}\left( {{a}^{2}n}\right) }\right) }{\sqrt{n}}\right) ,
137
+ $$
138
+
139
+ where ${\widehat{\mathcal{H}}}^{\prime } = \left\{ {\left( {{x}^{\prime },{y}^{\prime }}\right) \mapsto {f}_{1}\left( {{y}^{\prime }{h}^{\prime }\left( {x}^{\prime }\right) }\right) \mid {h}^{\prime } \in {\mathcal{H}}^{\prime }}\right\}$ .
140
+
141
+ As mentioned in [Sabato and Tishby, 2012], we obtain the following bound when $\mathcal{G}$ is a set of linear functions.
142
+
143
+ Corollary 4. Let $\mathcal{G} = \left\{ {g : z \mapsto \left\langle {{w}^{\prime }, z}\right\rangle \mid {w}^{\prime } \in {\mathbb{R}}^{{d}^{\prime }},\begin{Vmatrix}{w}^{\prime }\end{Vmatrix} \leq {C}_{1}}\right\}$ and assume that $\begin{Vmatrix}z\end{Vmatrix} \leq {C}_{2}$ for all $z \in \mathcal{Z}$ . Then, the following bound holds:
144
+
145
+ $$
146
+ {\Re }_{{S}^{\prime }}\left( {\widehat{\mathcal{H}}}^{\prime }\right) = O\left( \frac{\log \left( {{a}^{2}{n}^{2}{r}_{{S}^{\prime }}}\right) \left( {{ab}{C}_{1}{C}_{2}\ln \left( {{a}^{2}n}\right) }\right) }{\sqrt{n}}\right) .
147
+ $$
148
+
149
+ The above bound is easily derived from the result on ${\Re }_{{S}^{\prime }}$ [see the proof of Theorem 20 of Sabato and Tishby, 2012] and ${\Re }_{{S}^{\prime }}\left( \widehat{\mathcal{G}}\right) \leq b{\Re }_{{S}^{\prime }}\left( \mathcal{G}\right) \leq b{C}_{1}{C}_{2}/\sqrt{n} = b{C}_{1}{C}_{2}{\ln }^{0}\left( n\right) /\sqrt{n}$ [see, e.g., Theorems 5.8 and 5.10 of Mohri et al., 2018].
150
+
151
+ Using Theorem 3 and Corollary 2, we obtain a generalization risk bound for MIL-reducible problems.
152
+
153
+ ### 4.4 LEARNING ALGORITHM
154
+
155
+ For reduced MIL problems that satisfy certain conditions, we can immediately design a learning algorithm according to those conditions. Suppose that $\mathcal{G}$ is a set of linear functions:
156
+
157
+ $$
158
+ \mathcal{G} = \left\{ {g : z \mapsto \left\langle {{w}^{\prime }, z}\right\rangle \mid {w}^{\prime } \in {\mathbb{R}}^{{d}^{\prime }},\begin{Vmatrix}{w}^{\prime }\end{Vmatrix} \leq {C}_{1}}\right\} . \tag{3}
159
+ $$
160
+
161
+ Let ${S}^{\prime } = \left( {\left( {{x}_{1}^{\prime },{y}_{1}^{\prime }}\right) ,\ldots ,\left( {{x}_{n}^{\prime },{y}_{n}^{\prime }}\right) }\right)$ . The ERM of MIL is formulated as follows:
162
+
163
+ $$
164
+ \mathop{\min }\limits_{{\begin{Vmatrix}{w}^{\prime }\end{Vmatrix} \leq {C}_{1}}}\lambda {\begin{Vmatrix}{w}^{\prime }\end{Vmatrix}}^{2} + \mathop{\sum }\limits_{{i = 1}}^{n}{f}_{1}\left( {{y}_{i}^{\prime }{\Psi }_{p}\left( \left\{ {{f}_{2}\left( \left\langle {{w}^{\prime }, z}\right\rangle \right) \mid z \in {x}_{i}^{\prime }}\right\} \right) }\right) . \tag{4}
165
+ $$
166
+
167
+
169
+ For the optimization problem (4), we show that the following propositions hold.
170
+
171
+ Proposition 5. If ${y}_{i}^{\prime } = - 1$ for all $i \in \left\lbrack n\right\rbrack$ in sample ${S}^{\prime }$ , ${f}_{1}$ is convex and nonincreasing ${}^{1}$ , ${f}_{2}$ is a nondecreasing convex function, and $\mathcal{G}$ is given as Eq. (3), then the ERM of $\left( {{\mathcal{H}}^{\prime },{\ell }^{\prime }}\right)$ is a convex programming problem.
172
+
173
+ Proposition 6. If ${f}_{1}$ is a nonincreasing convex function ${}^{1}$ , ${f}_{1}$ is homogeneous of degree 1 for $c \in \left\lbrack {-1,1}\right\rbrack$ ${}^{2}$ , ${f}_{2}$ is a nondecreasing convex function, and $\mathcal{G}$ is given as Eq. (3), then the ERM of $\left( {{\mathcal{H}}^{\prime },{\ell }^{\prime }}\right)$ is a DC programming problem.
174
+
175
+ Generally, it is difficult to find a global minimum for a DC programming problem; however, it is known that we can find a solution with $\epsilon$ -approximation of local optima [see, e.g., Le Thi and Dinh, 2018]. We introduce a standard DC algorithm to solve (4) in Algorithm 1 in Sec. D.
176
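To make the convex case of Proposition 5 concrete, the following is a toy projected-subgradient sketch of problem (4) with every ${y}_{i}^{\prime } = -1$, $\Psi_1$ (the average), a hinge-type $f_1$, and the identity as $f_2$ (the $[-1,1]$ normalization is ignored here). All data and constants are made up, and this is only a sketch, not the paper's DC algorithm:

```python
import numpy as np

# Convex instance of (4): y'_i = -1, Psi_1 = bag average, f1(c) = max(0, 1-c),
# f2 = identity. Then the per-bag loss is max(0, 1 + Psi_1) and subgradient
# descent with projection onto ||w|| <= C1 applies. Toy data throughout.
rng = np.random.default_rng(0)
bags = [rng.normal(size=(3, 2)) for _ in range(5)]  # 5 bags, 3 instances each
lam, C1, lr = 0.1, 10.0, 0.05
w = np.zeros(2)

for _ in range(200):
    grad = 2 * lam * w
    for bag in bags:
        psi = bag.mean(axis=0) @ w       # Psi_1({<w, z> : z in bag})
        if 1.0 + psi > 0.0:              # hinge is active
            grad = grad + bag.mean(axis=0)
    w = w - lr * grad
    norm = np.linalg.norm(w)
    if norm > C1:                        # project back onto ||w|| <= C1
        w = w * (C1 / norm)

# Final objective value of (4); it starts at 5.0 for w = 0.
obj = lam * (w @ w) + sum(max(0.0, 1.0 + bag.mean(axis=0) @ w) for bag in bags)
```

With hinge $f_1$ and identity $f_2$ the per-bag term is convex in $w'$, which is exactly the situation Proposition 5 describes.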
+
177
+ The propositions indicate that if $\left( {\mathcal{H},\ell }\right)$ is MIL-reducible to $\left( {{\mathcal{H}}^{\prime },{\ell }^{\prime }}\right)$ and satisfies either of the above conditions, then the solution $h \in {\mathcal{H}}_{\beta }$ of the original problem can be obtained by a unified learning algorithm.
178
+
179
+ ## 5 MIL-REDUCIBLE EXAMPLES
180
+
181
+ In this section, we demonstrate that various learning problems can be reduced to MIL by the proposed reduction framework. The results imply that our MIL-reduction gives a unified framework for designing and analyzing learning algorithms for various learning problems ${}^{3}$ .
182
+
183
+ ### 5.1 THE EXISTING PROBLEMS
184
+
185
+ #### 5.1.1 Multi-class learning problem
186
+
187
+ Problem setting: Let $\mathcal{X} \subseteq {\mathbb{R}}^{d}$ be an instance space, and $\mathcal{Y} = \left\lbrack k\right\rbrack$ be an output space. The learner receives the set of labeled instances $S = \left( {\left( {{x}_{1},{y}_{1}}\right) ,\ldots ,\left( {{x}_{n},{y}_{n}}\right) }\right) \in {\left( \mathcal{X} \times \mathcal{Y}\right) }^{n}$ , where each instance is drawn i.i.d. according to some unknown distribution $\mathcal{D}$ . The learner predicts the label of $x$ using a hypothesis $h \in \mathcal{H} = \left\{ {x \mapsto \arg \mathop{\max }\limits_{{y \in \left\lbrack k\right\rbrack }}\left\langle {{w}_{y}, x}\right\rangle \mid \forall j \in \left\lbrack k\right\rbrack ,{w}_{j} \in {\mathbb{R}}^{d}}\right\}$ . Let $\ell : \left( {x, y, h}\right) \mapsto \Gamma \left( {\left\langle {{w}_{y}, x}\right\rangle - \mathop{\max }\limits_{{j \in \mathcal{Y} \smallsetminus y}}\left\langle {{w}_{j}, x}\right\rangle }\right)$ be a loss function, where $\Gamma : \mathbb{R} \rightarrow \left\lbrack {0,1}\right\rbrack$ is a convex, nonincreasing and $a$ -Lipschitz function. The generalization risk and empirical risk of $h$ are defined as:
188
+
189
+ $$
190
+ {R}_{\mathcal{D}}\left( h\right) = \underset{\left( {x, y}\right) \sim \mathcal{D}}{\mathbb{E}}\ell \left( {x, y, h}\right) ,{\widehat{R}}_{S}\left( h\right) = \frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}\ell \left( {{x}_{i},{y}_{i}, h}\right) .
191
+ $$
192
+
193
+ We obtain the following by using the MIL-reduction framework:
194
+
195
+ Theorem 7. Multi-class learning problem is MIL-reducible.
196
+
197
+ Proof. For any $\left( {x, y}\right)$ , we define
198
+
199
+ $$
200
+ {\eta }_{\left( x, y\right) } = \left( {\mathbf{0},\ldots ,\mathbf{0},\underset{y - \text{th block }}{\underbrace{x}},\mathbf{0},\ldots ,\mathbf{0}}\right) ,
201
+ $$
202
+
203
+ where $\mathbf{0}$ is the $d$ -dimensional all-zero vector. In the MIL-reduction framework, suppose that $p = \infty$ ; ${f}_{1}\left( c\right) = \Gamma \left( {{2c}{C}_{1}{C}_{2}}\right)$ ; ${f}_{2}\left( c\right) = c/\left( {2{C}_{1}{C}_{2}}\right)$ (a scaling function into $\left\lbrack {-1, + 1}\right\rbrack$ ); $\alpha \left( {x, y}\right) = \left( {{x}_{\left( x, y\right) }^{\prime },{y}^{\prime }}\right)$ where ${x}_{\left( x, y\right) }^{\prime } = \left\{ {{\eta }_{\left( x, j\right) } - {\eta }_{\left( x, y\right) } \mid \forall j \in \mathcal{Y} \smallsetminus y}\right\}$ and ${y}^{\prime } = - 1$ ; for any $z \in {\mathbb{R}}^{kd}$ , $\mathcal{G} = \left\{ {g : z \mapsto \left\langle {\left( {{w}_{1}^{\prime },\ldots ,{w}_{k}^{\prime }}\right) , z}\right\rangle \mid {w}_{j}^{\prime } \in {\mathbb{R}}^{d},\forall j \in \left\lbrack k\right\rbrack ,\begin{Vmatrix}{W}^{\prime }\end{Vmatrix} \leq {C}_{1}}\right\}$ where ${W}^{\prime } = \left( {{w}_{1}^{\prime },\ldots ,{w}_{k}^{\prime }}\right)$ and $\begin{Vmatrix}{W}^{\prime }\end{Vmatrix} = \sqrt{\mathop{\sum }\limits_{{j = 1}}^{k}{\begin{Vmatrix}{w}_{j}^{\prime }\end{Vmatrix}}^{2}}$ ; $\beta \left( {h}^{\prime }\right) : x \mapsto \arg \mathop{\max }\limits_{{j \in \left\lbrack k\right\rbrack }}\left\langle {{w}_{j}^{\prime }, x}\right\rangle$ . Then, for any $\left( {x, y}\right)$ and $h \in \mathcal{H}$ ,
204
+
205
+ $$
206
+ {\ell }^{\prime }\left( {{x}^{\prime },{y}^{\prime },{h}^{\prime }}\right) = {f}_{1}\left( {{y}^{\prime }{\Psi }_{p}\left( \left\{ {{f}_{2}\left( {g\left( z\right) }\right) \mid z \in {x}_{\left( x, y\right) }^{\prime }}\right\} \right) }\right)
207
+ $$
208
+
209
+ $$
210
+ = \Gamma \left( {-{\Psi }_{\infty }\left( \left\{ {g\left( z\right) \mid z \in {x}_{\left( x, y\right) }^{\prime }}\right\} \right) }\right)
211
+ $$
212
+
213
+ $$
214
+ = \Gamma \left( {-\left( {\mathop{\max }\limits_{{j \in \mathcal{Y} \smallsetminus y}}\left( {\left\langle {{w}_{j}, x}\right\rangle - \left\langle {{w}_{y}, x}\right\rangle }\right) }\right) }\right)
215
+ $$
216
+
217
+ $$
218
+ = \ell \left( {x, y, h}\right)
219
+ $$
220
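The transform $\alpha$ in the proof above can be sketched directly: an example $(x, y)$ becomes a bag of the $k-1$ block-difference vectors, and the max over the bag recovers the multi-class margin term (names and weights below are illustrative):

```python
import numpy as np

def eta(x, y, k):
    """Embed x into the y-th block of a (k*d)-dimensional vector."""
    d = x.shape[0]
    out = np.zeros(k * d)
    out[y * d:(y + 1) * d] = x
    return out

def alpha(x, y, k):
    """Bag of difference vectors eta_(x,j) - eta_(x,y), with bag label -1."""
    bag = [eta(x, j, k) - eta(x, y, k) for j in range(k) if j != y]
    return bag, -1

x = np.array([1.0, 2.0])
bag, y_prime = alpha(x, y=0, k=3)

# max over the bag of <W', z> equals max_{j != y}(<w'_j, x> - <w'_y, x>).
W = np.array([0.5, -1.0, 2.0, 0.0, -0.3, 1.0])  # (w'_1, w'_2, w'_3) stacked
lhs = max(z @ W for z in bag)
```

Here `lhs` coincides with computing the per-class scores $\langle w'_j, x\rangle$ directly and taking the best non-true class minus the true class.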
+
221
+ The empirical Rademacher complexity is immediately derived as follows by observing the reduction process.
222
+
223
+ Corollary 8. We assume that $\begin{Vmatrix}{x}_{i}\end{Vmatrix} \leq {C}_{2}$ for any $i \in \left\lbrack n\right\rbrack$ . In the reduced MIL problem from multi-class learning problem, the empirical Rademacher complexity of ${\widehat{\mathcal{H}}}^{\prime }$ is given as:
224
+
225
+ $$
226
+ {\Re }_{{S}^{\prime }}\left( {\widehat{\mathcal{H}}}^{\prime }\right) = O\left( \frac{\log \left( {{\widehat{a}}^{2}{n}^{2}\left( {k - 1}\right) }\right) \left( {2\widehat{a}\ln \left( {{\widehat{a}}^{2}n}\right) }\right) }{\sqrt{n}}\right) ,
227
+ $$
228
+
229
+ where $\widehat{a} = {2a}{C}_{1}{C}_{2}$ and we assume $\begin{Vmatrix}{w}^{\prime }\end{Vmatrix} \leq {C}_{1}$ in the reduced MIL.
230
+
231
+ We used the fact that the bag size is $k - 1$ for all ${x}_{i}^{\prime }$ (i.e., ${r}_{{S}^{\prime }} = k - 1$ ) and that ${\Re }_{{S}^{\prime }}\left( \widehat{\mathcal{G}}\right) \leq 2/\sqrt{n}$ , which follows from setting ${f}_{2}\left( c\right) = c/\left( {2{C}_{1}{C}_{2}}\right)$ . Using Corollary 2, we can obtain the generalization risk bound for multi-class learning.
232
+
233
+ The learning algorithm is obtained by the following result.
234
+
235
+ Corollary 9. The reduced ERM of the MIL from multi-class learning is a convex programming problem.
236
+
237
+ The proof of Theorem 7 shows that ${f}_{2}$ is nondecreasing and convex and that ${y}_{i}^{\prime } = - 1$ for all $i \in \left\lbrack n\right\rbrack$ . Therefore, by Proposition 5, if we consider $\Gamma$ that is a nonincreasing and convex function, the ERM of the reduced MIL problem is a convex programming problem and can be solved in polynomial time.
238
+
239
+ ---
240
+
241
+ ${}^{1}$ More precisely, the extended-value extension of ${f}_{1}$ must also be nonincreasing (see [Boyd and Vandenberghe, 2004] for details).
242
+
243
+ ${}^{2}$ For example, the hinge-loss function $f : c \mapsto \max \{ 0,1 - c\}$ satisfies this condition.
244
+
245
+ ${}^{3}$ The reductions of multi-task learning and top-1 ranking learning with negative feedback are shown in Secs. 6 and 5 owing to space limitations.
246
+
247
+ ---
248
+
249
+ #### 5.1.2 Complementarily labeled learning problem
250
+
251
+ Complementarily labeled learning was proposed by Ishida et al. [2017]. In this problem, some training instances are complementarily labeled (e.g., instance ${x}_{i}$ is NOT ${y}_{i}$ ). We essentially follow the problem setting and some assumptions provided by Ishida et al. [2017].
252
+
253
+ Problem setting: Let $\mathcal{X} \subseteq {\mathbb{R}}^{d}$ be an instance space and $\mathcal{Y} = \left\lbrack k\right\rbrack$ be an output space. Let $\mathcal{D}$ be an unknown distribution over $\mathcal{X} \times \mathcal{Y}$ . We assume that the learner receives a sample $S$ drawn i.i.d. according to a distribution ${\mathcal{D}}^{\prime }$ which provides the true label with unknown probability $\theta$ and a complementary label with unknown probability $1 - \theta$ . Moreover, we assume that the complementary label is chosen uniformly at random (i.e., each complementary label is chosen with probability $1/\left( {k - 1}\right)$ ).${}^{4}$ More formally, we assume that the sample $S = \left( {\left( {{x}_{1},{y}_{1},{\gamma }_{1}}\right) ,\ldots ,\left( {{x}_{n},{y}_{n},{\gamma }_{n}}\right) }\right)$ is drawn i.i.d. according to a distribution ${\mathcal{D}}^{\prime }$ over $\mathcal{X} \times \mathcal{Y} \times \{$ False, True $\}$ , where ${\gamma }_{i} =$ True means that ${y}_{i}$ is the true label and ${\gamma }_{i} =$ False means that ${y}_{i}$ is a complementary label (i.e., it indicates that the label of ${x}_{i}$ is NOT ${y}_{i}$ ). For any $\left( {x, y}\right) \sim \mathcal{D}$ , ${\mathcal{D}}^{\prime }\left( {x, y,\text{ True }}\right) = \theta$ and ${\mathcal{D}}^{\prime }\left( {x,\bar{y},\text{ False }}\right) = \frac{1 - \theta }{k - 1}$ for any $\bar{y} \neq y$ . The other basic settings are the same as those for the aforementioned multi-class learning. The learner predicts the label of $x$ using a hypothesis $h \in \mathcal{H} = \left\{ {x \mapsto \arg \mathop{\max }\limits_{{y \in \left\lbrack k\right\rbrack }}\left\langle {{w}_{y}, x}\right\rangle \mid \forall j \in \left\lbrack k\right\rbrack ,{w}_{j} \in {\mathbb{R}}^{d}}\right\}$ . The final goal of the learner is to find $h \in \mathcal{H}$ with a small multi-class classification risk:
254
+
255
+ $$
256
+ {R}_{\mathcal{D}}^{\mathrm{{MC}}}\left( h\right) = \underset{\left( {x, y}\right) \sim \mathcal{D}}{\mathbb{E}}I\left( {y \neq h\left( x\right) }\right) .
257
+ $$
258
+
259
+ However, it is difficult to minimize the empirical multi-class classification risk directly using the complementarily labeled data. Therefore, we consider the following risk
260
+
261
+ $$
262
+ {R}_{{\mathcal{D}}^{\prime }}^{\mathrm{{LC}}}\left( h\right) = \underset{\left( {x, y,\gamma }\right) \sim {\mathcal{D}}^{\prime }}{\mathbb{E}}\left\lbrack {I\left( {\gamma = \left( {y \neq h\left( x\right) }\right) }\right) }\right\rbrack .
263
+ $$
264
+
265
+ This risk implies that, when $\gamma =$ True, the learner does not incur a loss if it predicts the true label. When $\gamma =$ False, the learner does not incur a loss unless it predicts the assigned complementary label. Thus, the risk measure is defined using the pair $\left( {y,\gamma }\right) \in \mathcal{Y} \times \{$ False, True $\}$ . We can show that achieving a small ${R}_{{\mathcal{D}}^{\prime }}^{\mathrm{{LC}}}\left( h\right)$ is consistent with achieving a small ${R}_{\mathcal{D}}^{\mathrm{{MC}}}\left( h\right)$ as follows:
266
+
267
+ Lemma 1. For any $h \in \mathcal{H},{R}_{\mathcal{D}}^{\mathrm{{MC}}}\left( h\right) = \frac{k - 1}{\theta \left( {k - 2}\right) + 1}{R}_{{\mathcal{D}}^{\prime }}^{\mathrm{{LC}}}\left( h\right)$ holds.
268
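Lemma 1 can be sanity-checked numerically on a toy finite distribution (hypothetical example with $k = 3$, $\theta = 0.3$, two inputs, and an arbitrary fixed classifier):

```python
# Numerical check of Lemma 1: R^MC = (k-1) / (theta*(k-2) + 1) * R^LC.
k, theta = 3, 0.3
D = {("a", 1): 0.5, ("b", 2): 0.5}   # true distribution over (x, y)
h = {"a": 1, "b": 0}                 # an arbitrary fixed classifier

# Multi-class risk R^MC under D.
R_MC = sum(p * (y != h[x]) for (x, y), p in D.items())

# Risk R^LC under D': true label with prob. theta, each complementary
# label with prob. (1 - theta)/(k - 1); loss is I(gamma = (y != h(x))).
R_LC = 0.0
for (x, y), p in D.items():
    R_LC += p * theta * (y != h[x])                             # gamma = True
    for ybar in range(k):
        if ybar != y:
            R_LC += p * (1 - theta) / (k - 1) * (ybar == h[x])  # gamma = False
```

The key step is that, for $\gamma =$ False, the loss $I(\bar{y} = h(x))$ summed over the $k-1$ complementary labels fires exactly when $h(x) \neq y$.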
+
269
+ Thus, minimizing ${R}_{{\mathcal{D}}^{\prime }}^{\mathrm{{LC}}}\left( h\right)$ is a reasonable way to achieve high multi-class classification accuracy. Generally, there is no loss function $\ell \left( {\left( {x,\gamma }\right) , y, h}\right)$ that is a convex upper bound on the zero-one loss $I\left( {\gamma = \left( {y \neq h\left( x\right) }\right) }\right)$ over the domain of $w$ . Here, let $I\left( {\gamma = \text{True}}\right)$ take the value $+ 1$ if $\gamma =$ True and $- 1$ otherwise. If $I\left( {\gamma = \text{True}}\right) = 1$ , then the max term is convex w.r.t. $w$ ; however, if $I\left( {\gamma = \text{True}}\right) = - 1$ , then $- \max = \min$ is concave w.r.t. $w$ . Therefore, we consider a loss that is a convex upper bound only on the risk for the complementarily labeled data (i.e., a concave risk for the normally labeled data) using $\Gamma : \mathbb{R} \rightarrow \left\lbrack {0,1}\right\rbrack$ as $\Gamma \left( {\mathop{\max }\limits_{{j \in \mathcal{Y} \smallsetminus y}}\left\langle {\left( {{w}_{j} - {w}_{y}}\right) , x}\right\rangle }\right)$ . We then define the nonconvex loss $\ell \left( {x,\left( {\gamma , y}\right) , h}\right) = \Gamma \left( {I\left( {\gamma = \text{ True }}\right) \times \left( {\mathop{\max }\limits_{{j \in \mathcal{Y} \smallsetminus y}}\left\langle {\left( {{w}_{j} - {w}_{y}}\right) , x}\right\rangle }\right) }\right)$ .${}^{5}$ The empirical risk is formulated as:
270
+
271
+ $$
272
+ {\widehat{R}}_{S}^{\mathrm{{LC}}}\left( h\right) = \frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}\ell \left( {{x}_{i},\left( {{\gamma }_{i},{y}_{i}}\right) , h}\right) .
273
+ $$
274
+
275
+ The following is obtained by the MIL-reduction framework.
276
+
277
+ Theorem 10. Complementarily labeled learning is MIL-reducible.
278
+
279
+ The difference from the reduction in multi-class learning is that ${y}^{\prime }$ now takes values in $\{ - 1,1\}$ . ${y}^{\prime }$ behaves as a switch between the losses for complementarily and normally labeled data.
280
+
281
+ The empirical Rademacher complexity is bounded as:
282
+
283
+ Corollary 11. We assume that $\begin{Vmatrix}{x}_{i}\end{Vmatrix} \leq {C}_{2}$ for any $i \in \left\lbrack n\right\rbrack$ . In the reduced MIL problem from complementarily labeled learning, the empirical Rademacher complexity of ${\widehat{\mathcal{H}}}^{\prime }$ is given by:
284
+
285
+ $$
286
+ {\Re }_{{S}^{\prime }}\left( {\widehat{\mathcal{H}}}^{\prime }\right) = O\left( \frac{\log \left( {{\widehat{a}}^{2}{n}^{2}\left( {k - 1}\right) }\right) \left( {2\widehat{a}\ln \left( {{\widehat{a}}^{2}n}\right) }\right) }{\sqrt{n}}\right) ,
287
+ $$
288
+
289
+ where $\widehat{a} = {2a}{C}_{1}{C}_{2}$ and we assume $\begin{Vmatrix}{w}^{\prime }\end{Vmatrix} \leq {C}_{1}$ in the reduced MIL problem.
290
+
291
+ We use the same argument as in Corollary 8. Using Corollary 2 and Lemma 1, we obtain the generalization bound for the complementarily labeled learning.
292
+
293
+ The learning algorithm is derived by the following result:
294
+
295
+ Corollary 12. The reduced ERM of the MIL from complementarily labeled learning is a DC programming problem. If the sample contains only complementarily labeled data, the learning problem is a convex programming problem.
296
+
297
+ Generally, ${y}^{\prime } \in \{ - 1,1\}$ in complementarily labeled learning. Using the proof of Theorem 10 and Proposition 6, if we consider $\Gamma$ that is nonincreasing, convex, and homogeneous of degree 1 for $c \in \left\lbrack {-1,1}\right\rbrack$ , such as the hinge-loss function, we can solve the problem by the DC algorithm shown in Algorithm 1. Note that, if the sample contains only complementarily labeled data (i.e., ${y}_{i}^{\prime } = - 1$ for all $i \in \left\lbrack n\right\rbrack$ ), the problem becomes a convex programming problem.
298
+
299
+ ---
300
+
301
+ ${}^{4}$ This assumption was proposed by Ishida et al. [2017] as a reasonable scenario in some practical tasks (e.g., crowdsourcing).
302
+
303
+ ${}^{5}$ Ishida et al. [2017] used a different surrogate risk. However, they and we share a common goal: to minimize ${R}_{\mathcal{D}}^{\mathrm{{MC}}}\left( h\right)$ .
304
+
305
+ ---
306
+
307
+ #### 5.1.3 Multi-label learning problem
308
+
309
+ Problem setting: Let $\mathcal{X} \subseteq {\mathbb{R}}^{d}$ be an instance space, $\mathcal{Y} = \{ - 1,1{\} }^{k}$ be an output space, and $\mathcal{D}$ be an unknown distribution over $\mathcal{X} \times \mathcal{Y}$ . Unlike the standard multi-class setting introduced in Section 5.1.1, each instance may have multiple labels (e.g., in text-categorization tasks, some texts have multiple topics such as IT and business). ${y}^{j}$ denotes the $j$ -th element of $y$ . The learner receives a labeled sample $S = \left( {\left( {{x}_{1},{y}_{1}}\right) ,\ldots ,\left( {{x}_{n},{y}_{n}}\right) }\right) \in {\left( \mathcal{X} \times \mathcal{Y}\right) }^{n}$ drawn i.i.d. according to the distribution $\mathcal{D}$ . The learner predicts whether $x$ belongs to class $j \in \left\lbrack k\right\rbrack$ or not using a hypothesis $h \in \mathcal{H} = \left\{ {\left( {x, j}\right) \mapsto \operatorname{sign}\left( \left\langle {{w}_{j}, x}\right\rangle \right) \mid \forall j \in \left\lbrack k\right\rbrack ,{w}_{j} \in {\mathbb{R}}^{d}}\right\}$ . Let $\ell : \left( {x, y, h}\right) \mapsto \frac{1}{k}\mathop{\sum }\limits_{{j = 1}}^{k}\Gamma \left( {-{y}^{j}\left\langle {{w}_{j}, x}\right\rangle }\right)$ , where $\Gamma : \mathbb{R} \rightarrow \left\lbrack {0,1}\right\rbrack$ is a convex, nondecreasing and $b$ -Lipschitz function ${}^{6}$ . The generalization and empirical risks of $h$ are defined as:
310
+
311
+ $$
312
+ {R}_{\mathcal{D}}\left( h\right) = \underset{\left( {x, y}\right) \sim \mathcal{D}}{\mathbb{E}}\left\lbrack {\ell \left( {x, y, h}\right) }\right\rbrack ,{\widehat{R}}_{S}\left( h\right) = \frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}\ell \left( {{x}_{i},{y}_{i}, h}\right) .
313
+ $$
314
+
315
+ Reduction to MIL
316
+
317
+ Theorem 13. Multi-label learning is MIL-reducible.
318
+
319
+ Proof. In the MIL-reduction framework, suppose that $p = 1$ ; ${f}_{1}\left( a\right) = - a$ for $a \in \mathbb{R}$ ; ${f}_{2} = \Gamma$ ; $\alpha \left( {x, y}\right) = \left( {{x}_{\left( x, y\right) }^{\prime },{y}^{\prime }}\right)$ where ${x}_{\left( x, y\right) }^{\prime } = \left\{ {\left( {-{y}^{1}x,1}\right) ,\ldots ,\left( {-{y}^{k}x, k}\right) }\right\}$ and ${y}^{\prime } = - 1$ ; $\mathcal{G} = \left\{ {g : \left( {z, j}\right) \mapsto \left\langle {{w}_{j}^{\prime }, z}\right\rangle \mid {w}_{j}^{\prime } \in {\mathbb{R}}^{d},\forall j \in \left\lbrack k\right\rbrack ,\begin{Vmatrix}{W}^{\prime }\end{Vmatrix} \leq {C}_{1}}\right\}$ where ${W}^{\prime } = \left( {{w}_{1}^{\prime },\ldots ,{w}_{k}^{\prime }}\right)$ ; $\beta \left( {h}^{\prime }\right) : \left( {x, j}\right) \mapsto \operatorname{sign}\left( \left\langle {{w}_{j}^{\prime }, x}\right\rangle \right)$ . For any $\left( {x, y}\right)$ and $h \in \mathcal{H}$ , we have that
320
+
321
+ $$
322
+ {\ell }^{\prime }\left( {{x}^{\prime },{y}^{\prime },{h}^{\prime }}\right) = {f}_{1}\left( {{y}^{\prime }{\Psi }_{p}\left( \left\{ {{f}_{2}\left( {g\left( z\right) }\right) \mid z \in {x}_{\left( x, y\right) }^{\prime }}\right\} \right) }\right)
323
+ $$
324
+
325
+ $$
326
+ = \frac{1}{k}\mathop{\sum }\limits_{{j = 1}}^{k}\Gamma \left( {-{y}^{j}\left\langle {{w}_{j}^{\prime }, x}\right\rangle }\right)
327
+ $$
328
+
329
+ $$
330
+ = \ell \left( {x, y, h}\right)
331
+ $$
332
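The transform in this proof can be checked numerically: the bag $\{(-y^j x, j)\}$ with $\Psi_1$ (the average) reproduces the multi-label loss exactly (toy data; `gamma` is an illustrative surrogate):

```python
import numpy as np

def gamma(c):
    """A toy surrogate loss into [0, 1] (illustrative, 0.5-Lipschitz)."""
    return min(1.0, max(0.0, 0.5 + 0.5 * c))

x = np.array([1.0, -2.0])
y = np.array([1, -1, 1])                               # k = 3 label vector
W = np.array([[0.3, 0.1], [-0.2, 0.4], [0.0, -0.5]])   # rows w'_1, ..., w'_3

# alpha maps (x, y) to the bag {(-y^j x, j)} with bag label y' = -1.
bag = [(-y[j] * x, j) for j in range(3)]

# Psi_1 (the average) of Gamma(g(z)) over the bag ...
mil_loss = np.mean([gamma(W[j] @ z) for (z, j) in bag])
# ... equals the multi-label loss (1/k) sum_j Gamma(-y^j <w'_j, x>).
direct = np.mean([gamma(-y[j] * (W[j] @ x)) for j in range(3)])
```

Because each label $y^j \in \{-1, 1\}$ is folded into the bag instance itself, a single linear $g$ per index $j$ suffices on the reduced side.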
+
333
+ The empirical Rademacher complexity is bounded as:
334
+
335
+ Corollary 14. We assume that $\begin{Vmatrix}{x}_{i}\end{Vmatrix} \leq {C}_{2}$ for any $i \in \left\lbrack n\right\rbrack$ . In the reduced MIL problem, the empirical Rademacher complexity of ${\widehat{\mathcal{H}}}^{\prime }$ is given as follows:
336
+
337
+ $$
338
+ {\Re }_{{S}^{\prime }}\left( {\widehat{\mathcal{H}}}^{\prime }\right) = O\left( \frac{\log \left( {2{n}^{2}k}\right) \left( {b{C}_{1}{C}_{2}\ln \left( n\right) }\right) }{\sqrt{n}}\right) ,
339
+ $$
340
+
341
+ where $\begin{Vmatrix}{w}^{\prime }\end{Vmatrix} \leq {C}_{1}$ in the reduced MIL.
342
+
343
+ We used the fact that the size of each bag is $k$ . Using Corollary 2 , we obtain the generalization risk bound for the multi-label learning.
344
+
345
+ The learning algorithm is obtained by the following result.
346
+
347
+ Corollary 15. The reduced ERM of the MIL from multi-label learning is a convex programming problem.
348
+
349
+ The proof of Theorem 13 shows that ${f}_{1}$ is nonincreasing and convex, and that ${y}_{i}^{\prime } = - 1$ for all $i \in \left\lbrack n\right\rbrack$ . Therefore, by Proposition 5, if we consider $\Gamma$ that is nondecreasing and convex, the reduced problem is a convex programming problem and can be solved in polynomial time.
350
+
351
+ ### 5.2 APPLICATION TO THE NEW PROBLEMS
352
+
353
+ #### 5.2.1 Multi-label learning with perfectionistic loss
354
+
355
+ Problem setting: In standard multi-label learning (see Sec. 5.1.3), the loss is the average prediction error over the classes. In contrast, here we consider a perfectionistic error in the multi-label learning problem. More formally, we consider the following loss in multi-label learning:
356
+
357
+ $$
358
+ \ell : \left( {x, y, h}\right) \mapsto \mathop{\max }\limits_{{j \in \left\lbrack k\right\rbrack }}\Gamma \left( {-{y}^{j}\left\langle {{w}_{j}, x}\right\rangle }\right) ,
359
+ $$
360
+
361
+ where $\Gamma : \mathbb{R} \rightarrow \left\lbrack {0,1}\right\rbrack$ is a convex, nondecreasing and $b$ -Lipschitz function. This loss means that the learner incurs a loss unless it perfectly predicts all the correct labels. The generalization and empirical risks of $h$ are given as ${R}_{\mathcal{D}}\left( h\right) = {\mathbb{E}}_{\left( {x, y}\right) \sim \mathcal{D}}\left\lbrack {\ell \left( {x, y, h}\right) }\right\rbrack$ and ${\widehat{R}}_{S}\left( h\right) = \frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}\ell \left( {{x}_{i},{y}_{i}, h}\right)$ , respectively.
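A small sketch contrasting the perfectionistic loss ($p = \infty$, i.e., a max over classes) with the averaged loss of Sec. 5.1.3, and checking it against its bag form; the sigmoid is a placeholder for $\Gamma$ and the dimensions are illustrative:

```python
import math
import random

random.seed(2)
d, k = 5, 4                                   # illustrative sizes
Gamma = lambda a: 1.0 / (1.0 + math.exp(-a))  # placeholder surrogate
dot = lambda u, v: sum(a * b for a, b in zip(u, v))

x = [random.gauss(0, 1) for _ in range(d)]
y = [random.choice([-1, 1]) for _ in range(k)]
W = [[random.gauss(0, 1) for _ in range(d)] for _ in range(k)]

per_class = [Gamma(-y[j] * dot(W[j], x)) for j in range(k)]
avg_loss = sum(per_class) / k            # standard multi-label loss (p = 1)
perf_loss = max(per_class)               # perfectionistic loss (p = infinity)

# reduced MIL form: bag {(-y^j x, j)}, y' = -1, f2 = Gamma, Psi_inf = max
bag_scores = [Gamma(dot(W[j], [-y[j] * xi for xi in x])) for j in range(k)]
assert abs(perf_loss - max(bag_scores)) < 1e-12
assert perf_loss >= avg_loss             # the perfectionistic loss dominates the average
```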
362
+
363
+ Using MIL-reduction framework, we obtain the following:
364
+
365
+ Theorem 16. Multi-label learning with perfectionistic loss is MIL-reducible.
366
+
367
+ This can be derived by the same argument as for multi-label learning, except that $p = \infty$ (see Sec. 4).
368
+
369
+ The empirical Rademacher complexity is bounded as:
370
+
371
+ Corollary 17. We assume that $\begin{Vmatrix}{x}_{i}\end{Vmatrix} \leq {C}_{2}$ for any $i \in \left\lbrack n\right\rbrack$ . In the reduced MIL problem, the empirical Rademacher complexity of ${\widehat{\mathcal{H}}}^{\prime }$ is given as follows:
372
+
373
+ $$
374
+ {\Re }_{{S}^{\prime }}\left( {\widehat{\mathcal{H}}}^{\prime }\right) = O\left( \frac{\log \left( {2{n}^{2}k}\right) \left( {b{C}_{1}{C}_{2}\ln \left( n\right) }\right) }{\sqrt{n}}\right) ,
375
+ $$
376
+
377
+ where we assume $\begin{Vmatrix}{w}^{\prime }\end{Vmatrix} \leq {C}_{1}$ .
378
+
379
+ Interestingly, we obtain the same generalization risk bound as for standard multi-label learning.
380
+
381
+ The learning algorithm is derived by the following result.
382
+
383
+ ---
384
+
385
+ ${}^{6}$ Note that we use the negative score $- {y}^{j}\left\langle {{w}_{j}, x}\right\rangle$ to employ a nondecreasing $\Gamma$ .
386
+
387
+ ---
388
+
389
+ Corollary 18. The reduced ERM of the MIL from multi-label learning with perfectionistic loss is a convex programming problem.
390
+
391
+ This is easily obtained by observing the reduction process shown in Sec. 4 and using Proposition 5.
392
+
393
+ A naive approach to multi-label learning with perfectionistic loss is to reduce it to multi-class learning. That is, we consider all combinations of the labels as separate classes and solve a ${2}^{k}$ -class learning problem, at high computational cost. However, by the above corollary, multi-label learning with perfectionistic loss can be solved efficiently.
394
+
395
+ #### 5.2.2 Top-1 ranking learning
396
+
397
+ Learning to rank is a fundamental problem, and many applications, such as recommendation systems, exist. We consider the following natural scenario in a recommendation problem; a learner has a set that contains several items, and it wishes to recommend an item to a target user from the set.
398
+
399
+ Problem setting: Let $\mathcal{X} \subseteq {\mathbb{R}}^{d}$ be an instance space, and $s : \mathcal{X} \rightarrow \mathbb{R}$ be a target scoring function. Set $A$ is a finite set of instances selected from $\mathcal{X}$ . The learner receives the sequence of the sets of items and the chosen item $S = \left( {{A}_{1},{x}_{1}^{ * }}\right) ,\ldots ,\left( {{A}_{n},{x}_{n}^{ * }}\right)$ , where each ${x}_{i}^{ * } \in {A}_{i}$ is the highest-valued item determined by the target function $s$ . $k$ denotes the average size of the item sets in $S$ , that is, $k = \frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}\left| {A}_{i}\right|$ . Each sample set of items is drawn i.i.d. from $\mathcal{X}$ according to an unknown distribution $\mathcal{D}$ over ${2}^{\mathcal{X}}$ . Assume that the learner predicts the item from the item set using the hypothesis $h \in \mathcal{H} = \left\{ {A \mapsto \arg \mathop{\max }\limits_{{x \in A}}\langle w, x\rangle \mid }\right.$ $w \in \left. {\mathbb{R}}^{d}\right\}$ . Let $\ell \left( {A,{x}^{ * }, h}\right)$ be a convex upper bound on the zero-one loss function $I\left( {y \neq \widehat{y}}\right)$ . Equivalently, we consider the zero-one loss $I\left( {\left\langle {w,{x}^{ * }}\right\rangle - \mathop{\max }\limits_{{x \in A \smallsetminus {x}^{ * }}}\langle w, x\rangle \leq 0}\right)$ and its convex upper-bounding loss $\ell : \left( {A,{x}^{ * }, h}\right) \mapsto$ $\Gamma \left( {\left\langle {w,{x}^{ * }}\right\rangle - \mathop{\max }\limits_{{x \in A \smallsetminus {x}^{ * }}}\langle w, x\rangle }\right)$ , where $\Gamma : \mathbb{R} \rightarrow \left\lbrack {0,1}\right\rbrack$ is a convex, nonincreasing and $a$ -Lipschitz function. The goal of the learner is to find $h \in \mathcal{H}$ with a small misranking risk w.r.t. the target $s$ . Thus, the generalization and empirical risks are formulated as follows:
400
+
401
+ $$
402
+ {R}_{\mathcal{D}}\left( h\right) = \underset{A \sim \mathcal{D}}{\mathbb{E}}\left\lbrack {\ell \left( {A,{x}^{ * }, h}\right) }\right\rbrack ,{\widehat{R}}_{S}\left( h\right) = \frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}\ell \left( {{A}_{i},{x}_{i}^{ * }, h}\right) ,
403
+ $$
404
+
405
+ where ${x}^{ * } = \arg \mathop{\max }\limits_{{x \in A}}s\left( x\right)$ and ${x}_{i}^{ * } = \arg \mathop{\max }\limits_{{x \in {A}_{i}}}s\left( x\right)$ .
406
+
407
+ We obtain the following by using the MIL-reduction framework:
408
+
409
+ Theorem 19. Top-1 ranking learning is MIL-reducible.
410
+
411
+ The reducible condition is satisfied when we set $\alpha \left( {A,{x}^{ * }}\right) = \left( {{x}^{\prime },{y}^{\prime }}\right)$ where ${x}^{\prime } = \left\{ {x - {x}^{ * } \mid x \in A \smallsetminus {x}^{ * }}\right\}$ and ${y}_{i}^{\prime } = - 1$ for all $i \in \left\lbrack n\right\rbrack$ . The details of the reduction process are given in Sec. I. The empirical Rademacher complexity bound is as follows:
412
+
413
+ Corollary 20. We assume that $\parallel x\parallel \leq {C}_{2}$ for any $x \in$ ${A}_{i},\forall i \in \left\lbrack n\right\rbrack$ . In the reduced MIL problem, the empirical Rademacher complexity of ${\widehat{\mathcal{H}}}^{\prime }$ is given as follows:
414
+
415
+ $$
416
+ {\Re }_{{S}^{\prime }}\left( {\widehat{\mathcal{H}}}^{\prime }\right) = O\left( \frac{\log \left( {{\widehat{a}}^{2}{n}^{2}\left( {k - 1}\right) }\right) \left( {\widehat{a}\ln \left( {2{\widehat{a}}^{2}n}\right) }\right) }{\sqrt{n}}\right) ,
417
+ $$
418
+
419
+ where $\widehat{a} = {2a}{C}_{1}{C}_{2}$ and we assume $\begin{Vmatrix}{w}^{\prime }\end{Vmatrix} \leq {C}_{1}$ .
420
+
421
+ The generalization bound can be derived by applying ${r}_{{S}^{\prime }} =$ $k - 1$ and using the fact that $\parallel z\parallel \leq 2{C}_{2}$ for any $z \in {x}_{i}^{\prime },\forall i \in$ $\left\lbrack n\right\rbrack$ in the reduced MIL. By using Corollary 2, we can obtain the generalization risk bound for the Top-1 ranking learning.
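The bag construction ${x}^{\prime } = \left\{ {x - {x}^{ * } \mid x \in A \smallsetminus {x}^{ * }}\right\}$ with ${y}^{\prime } = -1$ can be checked numerically: $\Gamma \left( \left\langle {w,{x}^{ * }}\right\rangle - \max _{x}\langle w, x\rangle \right) = \Gamma \left( -\max _{z \in {x}^{\prime }}\langle w, z\rangle \right)$. A minimal sketch with an illustrative item set and a sigmoid as a placeholder for $\Gamma$:

```python
import math
import random

random.seed(3)
d, m = 4, 6                                   # illustrative: feature dim, items per set
dot = lambda u, v: sum(a * b for a, b in zip(u, v))
Gamma = lambda a: 1.0 / (1.0 + math.exp(-a))  # placeholder surrogate stand-in

s_w = [random.gauss(0, 1) for _ in range(d)]  # target scoring function s(x) = <s_w, x>
A = [[random.gauss(0, 1) for _ in range(d)] for _ in range(m)]
x_star = max(A, key=lambda x: dot(s_w, x))    # top item under the target score

w = [random.gauss(0, 1) for _ in range(d)]    # learner's weight vector
others = [x for x in A if x is not x_star]

# original top-1 ranking loss
ell = Gamma(dot(w, x_star) - max(dot(w, x) for x in others))

# reduced MIL bag: x' = {x - x* | x in A \ {x*}}, y' = -1, Psi_inf = max
bag = [[xi - si for xi, si in zip(x, x_star)] for x in others]
ell_prime = Gamma(-max(dot(w, z) for z in bag))

assert abs(ell - ell_prime) < 1e-12
```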
422
+
423
+ The learning algorithm is designed by the following result:
424
+
425
+ Corollary 21. The reduced ERM of MIL from top-1 ranking learning is a convex programming problem.
426
+
427
+ The corollary can be easily derived from the reduction process detailed in [1].
428
+
429
+ Important extension: We can consider top-1 ranking learning with negative feedback, which is an extension of top-1 ranking learning. We show the details in Sec. J. Remarkably, the ERM of the reduced MIL problem is a DC programming problem.
430
+
431
+ ## 6 KERNELIZED EXTENSION
432
+
433
+ Although we consider a linear function class as $\mathcal{G}$ , in practice, a nonlinear kernel is required for various learning tasks. A straightforward method is to employ a kernel-approximation technique [see, e.g., Sec. 6.6 in Mohri et al., 2018], which constructs feature vectors $\Phi \left( x\right) \in {\mathbb{R}}^{D}$ with the theoretical guarantee that $\left\langle {\Phi \left( {x}_{1}\right) ,\Phi \left( {x}_{2}\right) }\right\rangle \approx K\left( {{x}_{1},{x}_{2}}\right)$ for a user-determined dimension $D$ . However, only a limited number of kernels can be used via the approximation technique. Therefore, we show the kernelized version of the reduction.
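As one concrete instance of such a kernel-approximation technique, random Fourier features approximate the Gaussian kernel. A minimal sketch (bandwidth, dimensions, and test points are illustrative, not from the paper):

```python
import math
import random

random.seed(4)
d, D, sigma = 3, 4000, 1.0     # input dim, feature dim D, kernel bandwidth (all illustrative)

# random Fourier features Phi(x) approximating K(x1, x2) = exp(-||x1 - x2||^2 / (2 sigma^2)):
# omega ~ N(0, I / sigma^2), b ~ Uniform[0, 2*pi], Phi_j(x) = sqrt(2/D) cos(<omega_j, x> + b_j)
omega = [[random.gauss(0, 1.0 / sigma) for _ in range(d)] for _ in range(D)]
b = [random.uniform(0, 2 * math.pi) for _ in range(D)]

def Phi(x):
    return [math.sqrt(2.0 / D) * math.cos(sum(o_s * x_s for o_s, x_s in zip(o, x)) + b_j)
            for o, b_j in zip(omega, b)]

x1 = [0.2, -0.5, 0.1]
x2 = [0.0, 0.3, -0.4]
exact = math.exp(-sum((a - c) ** 2 for a, c in zip(x1, x2)) / (2 * sigma ** 2))
approx = sum(p * q for p, q in zip(Phi(x1), Phi(x2)))
assert abs(exact - approx) < 0.1   # Monte Carlo error shrinks as O(1/sqrt(D))
```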
434
+
435
+ ### 6.1 SETTINGS
436
+
437
+ We assume that an original problem is defined by $\mathcal{H},\ell ,\mathcal{X},\mathcal{Y}$ , and $\Phi : \mathcal{X} \rightarrow \mathbb{H}$ , where $\mathbb{H}$ is a reproducing kernel Hilbert space associated with $K\left( {{x}_{1},{x}_{2}}\right) = \left\langle {\Phi \left( {x}_{1}\right) ,\Phi \left( {x}_{2}\right) }\right\rangle$ . Setting computability aside, we can virtually consider the sample as $S = \left( {\left( {\Phi \left( {x}_{1}\right) ,{y}_{1}}\right) ,\ldots ,\left( {\Phi \left( {x}_{n}\right) ,{y}_{n}}\right) }\right)$ . The ERM-reducible condition is that there exist $\left( {{x}^{\prime },{y}^{\prime }}\right) = \alpha \left( {\Phi \left( x\right) , y}\right)$ , $h = \beta \left( {h}^{\prime }\right)$ and ${\ell }^{\prime }$ that satisfy $\ell \left( {\Phi \left( x\right) , y, h}\right) = {\ell }^{\prime }\left( {{x}^{\prime },{y}^{\prime },{h}^{\prime }}\right)$ for any $\left( {x, y}\right) \in \mathcal{X} \times \mathcal{Y}$ .
438
+
439
+ Let ${S}^{\prime } = \left( {\left( {{x}_{1}^{\prime },{y}_{1}^{\prime }}\right) ,\ldots ,\left( {{x}_{n}^{\prime },{y}_{n}^{\prime }}\right) }\right)$ and let $\mathcal{G} = \{ g : z \mapsto$ $\left. {\left\langle {{w}^{\prime }, z}\right\rangle \mid {w}^{\prime } \in {\mathbb{H}}^{\prime }}\right\}$ . We assume that $\left( {\mathcal{H},\ell }\right)$ is MIL-reducible to ${\mathcal{H}}^{\prime },{\ell }^{\prime }$ . The ERM of the reduced MIL is formulated as:
440
+
441
+ $$
442
+ \mathop{\min }\limits_{{{w}^{\prime } \in {\mathbb{H}}^{\prime }}}\lambda {\begin{Vmatrix}{w}^{\prime }\end{Vmatrix}}_{{\mathbb{H}}^{\prime }} + {\mathcal{L}}_{{w}^{\prime }} \tag{5}
443
+ $$
444
+
445
+ where ${\mathcal{L}}_{{w}^{\prime }} = \mathop{\sum }\limits_{{i = 1}}^{n}{f}_{1}\left( {{y}_{i}^{\prime }{\Psi }_{p}\left( \left\{ {{f}_{2}\left( \left\langle {{w}^{\prime }, z}\right\rangle \right) \mid z \in {x}_{i}^{\prime }}\right\} \right) }\right)$ .
446
+
447
+ ---
448
+
449
+ ${}^{7}$ We consider an arg max with a fixed tie-breaking rule.
450
+
451
+ ---
452
+
453
+ ### 6.2 COMPUTABILITY
454
+
455
+ We show that the representer theorem holds for the optimization problem (5).
456
+
457
+ Theorem 22 (Representer theorem). An optimal solution of the ERM problem (5) has the form ${\widetilde{w}}^{\prime } = \mathop{\sum }\limits_{{z \in {P}_{{S}^{\prime }}}}{\mu }_{z}z$ , where ${P}_{{S}^{\prime }} = \mathop{\bigcup }\limits_{{i = 1}}^{n}{x}_{i}^{\prime }$ .
458
+
459
+ Thus, the ERM problem (5) is equivalently formulated as:
460
+
461
+ $$
462
+ \mathop{\min }\limits_{{\mathbf{\mu } \in {\mathbb{R}}^{\left| {P}_{{S}^{\prime }}\right| }}}\lambda \mathop{\sum }\limits_{{z,\widehat{z} \in {P}_{{S}^{\prime }}}}{\mu }_{z}{\mu }_{\widehat{z}}\langle z,\widehat{z}\rangle + {\mathcal{L}}_{\mathbf{\mu }},
463
+ $$
464
+
465
+ where ${\mathcal{L}}_{\mathbf{\mu }} = \mathop{\sum }\limits_{{i = 1}}^{n}{f}_{1}\left( {{y}_{i}^{\prime }{\Psi }_{p}\left( \left\{ {{f}_{2}\left( {\mathop{\sum }\limits_{{z \in {P}_{{S}^{\prime }}}}{\mu }_{z}\langle z,\widehat{z}\rangle }\right) \mid \widehat{z} \in {x}_{i}^{\prime }}\right\} \right) }\right)$ .
466
+
467
+ Therefore, if $\left\langle {{z}_{1},{z}_{2}}\right\rangle$ is polynomial-time computable for any ${z}_{1},{z}_{2} \in {x}^{\prime }$ using the original kernel function $K$ as an oracle, the ERM of the MIL can be solved similarly to the linear case according to the conditions in Propositions 5 and 6 (the DC algorithm for the kernel version is in Sec. I). For all MIL-reducible problems introduced in the paper, $\left\langle {{z}_{1},{z}_{2}}\right\rangle$ is polynomial-time computable using $K$ (see details in Sec. M). Moreover, we can construct $\beta$ in polynomial time.
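The representer form makes the scores computable from the kernel oracle alone: $\left\langle {\widetilde{w}}^{\prime }, {z}_{0}\right\rangle = \sum_{z \in {P}_{{S}^{\prime }}} {\mu }_{z}\left\langle z, {z}_{0}\right\rangle$. A minimal check with a degree-2 polynomial kernel, whose feature map happens to be explicit so both sides can be compared (pool, coefficients, and dimensions illustrative):

```python
import random

random.seed(5)
d = 3
dot = lambda u, v: sum(a * b for a, b in zip(u, v))
K = lambda u, v: dot(u, v) ** 2                       # degree-2 polynomial kernel

def Phi(x):                                           # explicit feature map: <Phi(u), Phi(v)> = K(u, v)
    return [x[i] * x[j] for i in range(d) for j in range(d)]

pool = [[random.gauss(0, 1) for _ in range(d)] for _ in range(5)]   # stands in for P_S'
mu = [random.gauss(0, 1) for _ in pool]
z0 = [random.gauss(0, 1) for _ in range(d)]

# representer form: w' = sum_z mu_z Phi(z); score of a new element z0 is <w', Phi(z0)>
w_prime = [sum(m * f for m, f in zip(mu, col)) for col in zip(*[Phi(z) for z in pool])]
explicit = dot(w_prime, Phi(z0))
kernelized = sum(m * K(z, z0) for m, z in zip(mu, pool))   # uses only the kernel oracle
assert abs(explicit - kernelized) < 1e-9
```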
468
+
469
+ ## 7 DISCUSSION AND CONCLUSION
470
+
471
+ ### 7.1 RELATED WORK
472
+
473
+ Other reduction techniques: Several machine-learning reduction schemes exist [see, e.g., Beygelzimer et al., 2015], including general reduction schemes such as [Pitt and Warmuth, 1990, Beygelzimer et al., 2005]. A major difference between the proposed scheme and existing approaches is that we focus on the reduction of ERM. There are various applications of machine-learning reductions, such as the reduction from multi-class learning to binary classification [James and Hastie, 1998, Ramaswamy et al., 2014] and from ranking to binary classification [Balcan et al., 2008, Ailon and Mohri, 2010, Agarwal, 2014]. To the best of our knowledge, the reduction to MIL has not yet been discussed.
474
+
475
+ Multi-Class Learning: Recently, Lei et al. [2019] achieved a $\log \left( k\right)$ -dependent generalization bound. Our generalization bound is competitive with theirs. However, our derivation is much simpler than the analysis of Lei et al. [2019] because the reduction allows us to apply the existing MIL bound of Sabato and Tishby [2012].
476
+
477
+ Complementarily-labeled learning: Ishida et al. [2017] provided a generalization risk bound for the case in which the training sample contains only complementarily labeled instances (i.e., $\theta = 0$ ). Our generalization bound is incomparable to theirs (see details in Sec. I). Ishida et al. [2017] selected nonconvex loss functions and optimized the empirical risks using a gradient-based algorithm in practice; however, there is no guarantee of the optimality of the solution. We show that the learning problem can be solved by the DC algorithm, which guarantees local optimality. Moreover, in the special case in which the sample contains only complementarily labeled data, the learning problem becomes a convex programming problem and we can obtain a global optimum. To the best of our knowledge, the provided learning algorithm is the first polynomial-time algorithm for this special case.
478
+
479
+ Multi-label learning: Various approaches and generalization analyses have been provided [Yu et al., 2014, Bhatia et al., 2015, Xu et al., 2016a, b]. However, to the best of our knowledge, this paper is the first to propose a $\log \left( k\right)$ -dependent generalization bound for the linear (or nonlinear kernel) hypothesis class, where $k$ is the number of classes.
480
+
481
+ Multi-task learning: A similar generalization bound was reported by Pontil and Maurer [2013]. Their results suggest the advantage of regularizing the weights ${w}_{1},\ldots ,{w}_{T}$ over $T$ tasks. However, our result is derived by an entirely different argument from that of Pontil and Maurer [2013], and the derivation is considerably simpler.
482
+
483
+ Top-1 ranking learning: The top-1 ranking measure was originally discussed in [Hidasi and Karatzoglou, 2018]. However, their basic problem setting differs from ours: they assumed that the recommender has i.i.d. positive and negative items as the sample. Moreover, they did not propose a general form of the problem or a theoretical analysis.
484
+
485
+ MIL: MIL, a form of weakly supervised learning, was originally proposed by Dietterich et al. [1997], and many real applications have been proposed [Gärtner et al., 2002, Andrews et al., 2003, Zhang et al., 2013, Doran and Ray, 2014, Carbonneau et al., 2018]. Moreover, generalization bounds and learning algorithms have been analyzed from a theoretical perspective [Sabato and Tishby, 2012, Doran, 2015, Suehiro et al., 2020]. Suehiro et al. [2020] found that a local-feature-based time-series classification problem can be reduced to a MIL problem. Our results are the first to show that various learning problems can be reduced to MIL.
486
+
487
+ ### 7.2 CONCLUSION AND FUTURE WORK
488
+
489
+ We revealed that various learning problems can be reduced to a MIL problem by our ERM-based reduction scheme. The results imply that our MIL-reduction gives a simplified and unified framework for the analysis of various learning problems. Moreover, we obtained novel theoretical results for some learning problems. As future work, we will consider other reduction frameworks based on the proposed ERM-reduction and explore relaxations of the reducible condition. In this paper, two learning algorithms are introduced, and all the demonstrated MIL-reducible problems are solved by one of them. However, there may be MIL-reducible problems that can be solved by other learning algorithms or that cannot be solved in polynomial time.
490
+
491
+ ## REFERENCES

+ Shivani Agarwal. Surrogate regret bounds for bipartite ranking via strongly proper losses. The Journal of Machine Learning Research, 15(1):1653-1674, 2014.
492
+
493
+ Nir Ailon and Mehryar Mohri. Preference-based learning to rank. Machine Learning, 80(2-3):189-211, 2010.
494
+
495
+ Stuart Andrews, Ioannis Tsochantaridis, and Thomas Hofmann. Support vector machines for multiple-instance learning. In Advances in Neural Information Processing Systems, pages 577-584, 2003.
496
+
497
+ Maria-Florina Balcan, Nikhil Bansal, Alina Beygelzimer, Don Coppersmith, John Langford, and Gregory B Sorkin. Robust reductions from ranking to classification. Machine learning, 72(1-2):139-153, 2008.
498
+
499
+ Peter L. Bartlett and Shahar Mendelson. Rademacher and gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3:463-482, 2003.
500
+
501
+ Alina Beygelzimer, Varsha Dani, Tom Hayes, John Langford, and Bianca Zadrozny. Error limiting reductions between classification tasks. In International Conference on machine Learning, pages 49-56, 2005.
502
+
503
+ Alina Beygelzimer, Hal Daumé, John Langford, and Paul Mineiro. Learning reductions that really work. Proceedings of the IEEE, 104(1):136-147, 2015.
504
+
505
+ Kush Bhatia, Himanshu Jain, Purushottam Kar, Manik Varma, and Prateek Jain. Sparse local embeddings for extreme multi-label classification. In Advances in Neural Information Processing Systems, volume 28, 2015.
506
+
507
+ Stephan Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
508
+
509
+ Marc-André Carbonneau, Veronika Cheplygina, Eric Granger, and Ghyslain Gagnon. Multiple instance learning: A survey of problem characteristics and applications. Pattern Recognition, 77:329 - 353, 2018.
510
+
511
+ Thomas G. Dietterich, Richard H. Lathrop, and Tomás Lozano-Pérez. Solving the multiple instance problem with axis-parallel rectangles. Artificial Intelligence, 89 (1-2):31-71, January 1997.
512
+
513
+ Gary Doran. Multiple Instance Learning from Distributions. PhD thesis, Case Western Reserve University, 2015.
514
+
515
+ Gary Doran and Soumya Ray. A theoretical and empirical analysis of support vector machine methods for multiple-instance classification. Machine Learning, 97(1-2):79- 102, 2014.
516
+
517
+ Thomas Gärtner, Peter A. Flach, Adam Kowalczyk, and Alex J. Smola. Multi-instance kernels. In International Conference on Machine Learning, pages 179-186, 2002.
518
+
519
+ Balázs Hidasi and Alexandros Karatzoglou. Recurrent neural networks with top-k gains for session-based recommendations. In International Conference on Information and Knowledge Management, pages 843-852, 2018.
522
+
523
+ Takashi Ishida, Gang Niu, Weihua Hu, and Masashi Sugiyama. Learning from complementary labels. In Advances in Neural Information Processing Systems, pages 5639-5649, 2017.
524
+
525
+ Gareth James and Trevor Hastie. The error coding method and picts. Journal of Computational and Graphical statistics, 7(3):377-387, 1998.
526
+
527
+ Hoai An Le Thi and Tao Pham Dinh. DC programming and DCA: thirty years of developments. Mathematical Programming, 169(1):5-68, 2018.
528
+
529
+ Y. Lei, Ü. Dogan, D. Zhou, and M. Kloft. Data-dependent generalization bounds for multi-class classification. IEEE Transactions on Information Theory, 65(5):2995-3021, 2019.
530
+
531
+ Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar. Foundations of machine learning. MIT press, 2018.
532
+
533
+ Leonard Pitt and Manfred K Warmuth. Prediction-preserving reducibility. Journal of Computer and System Sciences, 41(3):430-467, 1990.
534
+
535
+ Massimiliano Pontil and Andreas Maurer. Excess risk bounds for multitask learning with trace norm regularization. In Conference on Learning Theory, pages 55-76. PMLR, 2013.
536
+
537
+ Harish G Ramaswamy, Balaji Srinivasan Babu, Shivani Agarwal, and Robert C Williamson. On the consistency of output code based learning algorithms for multiclass learning problems. In Conference on Learning Theory, pages 885-902, 2014.
538
+
539
+ Sivan Sabato and Naftali Tishby. Multi-instance learning with any hypothesis class. Journal of Machine Learning Research, 13(1):2999-3039, October 2012. ISSN 1532-4435.
540
+
541
+ Daiki Suehiro, Kohei Hatano, Eiji Takimoto, Shuji Yamamoto, Kenichi Bannai, and Akiko Takeda. Theory and algorithms for shapelet-based multiple-instance learning. Neural Computation, 32(8):1580-1613, 2020.
542
+
543
+ Chang Xu, Tongliang Liu, Dacheng Tao, and Chao Xu. Local rademacher complexity for multi-label learning. IEEE Transactions on Image Processing, 25(3):1495-1507, 2016a.
544
+
545
+ Chang Xu, Dacheng Tao, and Chao Xu. Robust extreme multi-label learning. In ACM SIGKDD international conference on knowledge discovery and data mining, pages 1275-1284, 2016b.
546
+
547
+ Hsiang-Fu Yu, Prateek Jain, Purushottam Kar, and Inderjit Dhillon. Large-scale multi-label learning with missing labels. In International Conference on Machine Learning, pages 593-601, 2014.
548
+
549
+ Dan Zhang, Jingrui He, Luo Si, and Richard Lawrence. MILEAGE: Multiple instance learning with global embedding. In International Conference on Machine Learning, pages 82-90, 2013.
550
+
551
+ ## A PROOF OF THEOREM 3
552
+
553
+ Proof. Using the fact that ${\Psi }_{p}$ is 1-Lipschitz for all $p$ , together with the bound on ${\Re }_{S}$ shown in the proof of [Sabato and Tishby, 2012, Theorem 20], we obtain the target theorem.
554
+
555
+ ## B PROOF OF PROPOSITION 5
556
+
557
+ Proof of Proposition 5. First, $\widehat{f} = {f}_{2} \circ g$ is a convex function of ${w}^{\prime }$ because ${f}_{2}$ is nondecreasing and convex and $\left\langle {{w}^{\prime }, z}\right\rangle$ is a linear (hence convex) function of ${w}^{\prime }$ (see, e.g., Eq. (3.11) in Boyd and Vandenberghe [2004]). Subsequently, we show that ${\Psi }_{p} \circ \widehat{f}$ is a convex function. Without loss of generality, we can consider ${\Psi }_{p}$ as a function ${\mathbb{R}}^{m} \rightarrow \mathbb{R}$ , where $m$ is the size of the set ${x}^{\prime }$ . ${\Psi }_{p}$ is nondecreasing in each argument and $\widehat{f}$ is convex, and thus ${\Psi }_{p} \circ \widehat{f}$ is convex. Finally, because $- {\Psi }_{p}\left( \left\{ {{f}_{2}\left( \left\langle {{w}^{\prime }, z}\right\rangle \right) \mid z \in {x}^{\prime }}\right\} \right)$ is concave and ${f}_{1}$ is nonincreasing and convex, ${f}_{1}\left( {-{\Psi }_{p}\left( \left\{ {{f}_{2}\left( \left\langle {{w}^{\prime }, z}\right\rangle \right) \mid z \in {x}^{\prime }}\right\} \right) }\right)$ is convex [Boyd and Vandenberghe, 2004].
558
+
559
+ ## C PROOF OF PROPOSITION 6
560
+
561
+ Proof. Because ${f}_{1}\left( c\right)$ is a homogeneous function of degree 1 for $c \in \left\lbrack {-1,1}\right\rbrack$ , we have ${f}_{1}\left( {-{\Psi }_{p}\left( \left\{ {{f}_{2}\left( \left\langle {{w}^{\prime }, z}\right\rangle \right) \mid z \in {x}^{\prime }}\right\} \right) }\right) = - {f}_{1}\left( {{\Psi }_{p}\left( \left\{ {{f}_{2}\left( \left\langle {{w}^{\prime }, z}\right\rangle \right) \mid z \in {x}^{\prime }}\right\} \right) }\right)$ . As we proved in the proof of Proposition 5, ${f}_{1}\left( {-{\Psi }_{p}\left( \left\{ {{f}_{2}\left( \left\langle {{w}^{\prime }, z}\right\rangle \right) \mid z \in {x}^{\prime }}\right\} \right) }\right)$ is convex, and hence ${f}_{1}\left( {{\Psi }_{p}\left( \left\{ {{f}_{2}\left( \left\langle {{w}^{\prime }, z}\right\rangle \right) \mid z \in {x}^{\prime }}\right\} \right) }\right) = - {f}_{1}\left( {-{\Psi }_{p}\left( \left\{ {{f}_{2}\left( \left\langle {{w}^{\prime }, z}\right\rangle \right) \mid z \in {x}^{\prime }}\right\} \right) }\right)$ is concave. Therefore, ${f}_{1}\left( {{\Psi }_{p}\left( \left\{ {{f}_{2}\left( \left\langle {{w}^{\prime }, z}\right\rangle \right) \mid z \in {x}^{\prime }}\right\} \right) }\right) + {f}_{1}\left( {-{\Psi }_{p}\left( \left\{ {{f}_{2}\left( \left\langle {{w}^{\prime }, z}\right\rangle \right) \mid z \in {x}^{\prime }}\right\} \right) }\right)$ is a DC function.
562
+
563
+ ## D DC ALGORITHM FOR THE REDUCED MIL PROBLEM
564
+
565
+ The algorithm is shown in Algorithm 1.
566
+
567
+ Algorithm 1 MIL optimization via DC Algorithm
568
+
569
+ ---
570
+
571
+ Inputs:
572
+
573
+ ${S}^{\prime },\lambda$
574
+
575
+ Initialize:
576
+
577
+ ${w}^{\prime }{}_{0} \in {\mathbb{R}}^{{d}^{\prime }}$
578
+
579
+ for $t = 1,\ldots$ ,(until convergence) do
580
+
581
+ Compute the subgradient:
582
+
583
+ $$
584
+ {s}_{t} \in {\nabla }_{{w}^{\prime }}\left( {\mathop{\sum }\limits_{{i : {y}_{i} = - 1}}{f}_{1}\left( {{\Psi }_{p}\left( \left\{ {{f}_{2}\left( \left\langle {{w}^{\prime }, z}\right\rangle \right) \mid z \in {x}_{i}^{\prime }}\right\} \right) }\right) }\right)
585
+ $$
586
+
587
+ at ${w}_{t - 1}^{\prime }$ .
588
+
589
+ Solve the following subproblem:
590
+
591
+ $$
592
+ {w}_{t}^{\prime } \leftarrow \arg \mathop{\min }\limits_{{{w}^{\prime } : \begin{Vmatrix}{w}^{\prime }\end{Vmatrix} \leq {C}_{1}}}\lambda {\begin{Vmatrix}{w}^{\prime }\end{Vmatrix}}^{2} + \mathop{\sum }\limits_{{i : {y}_{i} = + 1}}{f}_{1}\left( {{\Psi }_{p}\left( \left\{ {{f}_{2}\left( \left\langle {{w}^{\prime }, z}\right\rangle \right) \mid z \in {x}_{i}^{\prime }}\right\} \right) }\right) - {s}_{t}^{\top }{w}^{\prime } \tag{6}
593
+ $$
594
+
595
+ end for

+ return ${w}_{t}^{\prime }$
596
+
597
+ ---
598
+
599
+ The subproblem (6) is a convex programming problem that can be solved in polynomial time.
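The DCA iteration in Algorithm 1 linearizes the concave part at the current iterate and solves the remaining convex subproblem, which makes the objective values monotonically non-increasing [Le Thi and Pham Dinh, 2018]. A toy scalar sketch of this mechanism (an illustrative DC objective, not the paper's MIL objective):

```python
# Toy DC program: minimize F(w) = w**2 - 2*abs(w - 1)  (convex minus convex).
# Each DCA step linearizes the concave part -2*abs(w - 1) at the current
# iterate via a subgradient and solves the remaining convex subproblem exactly.
def subgrad_h(w):                 # subgradient of h(w) = 2*abs(w - 1)
    return 2.0 if w >= 1 else -2.0

F = lambda w: w ** 2 - 2 * abs(w - 1)
w = 0.0
values = [F(w)]
for _ in range(20):
    s = subgrad_h(w)
    w = s / 2.0                   # closed-form argmin of the subproblem w**2 - s*w
    values.append(F(w))

# DCA guarantees monotone non-increase of the objective values
assert all(b <= a + 1e-12 for a, b in zip(values, values[1:]))
assert abs(w - (-1.0)) < 1e-12 and abs(F(w) - (-3.0)) < 1e-12
```

Here the iterates converge to the stationary point $w = -1$, which is also the global minimum of this toy objective; in general DCA only guarantees a local solution.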
600
+
601
+ ## E PROOF OF LEMMA 1
602
+
603
+ Proof. Based on the assumption of ${\mathcal{D}}^{\prime }$ , the expected risk ${R}_{{\mathcal{D}}^{\prime }}^{\mathrm{{LC}}}\left( h\right)$ is represented using $\mathcal{D}, k$ , and $\theta$ as follows:
604
+
605
+ $$
606
+ {R}_{{\mathcal{D}}^{\prime }}^{\mathrm{{LC}}}\left( h\right) = \underset{\left( {x, y}\right) \sim \mathcal{D}}{\mathbb{E}}\left\lbrack {{\theta I}\left( {y \neq h\left( x\right) }\right) + \left( {1 - \theta }\right) \mathop{\sum }\limits_{{\bar{y} \neq y}}\frac{1}{k - 1}I\left( {\bar{y} = h\left( x\right) }\right) }\right\rbrack .
607
+ $$
608
+
609
+ Let ${\rho }_{1} = I\left( {y \neq h\left( x\right) }\right)$ in ${R}_{\mathcal{D}}^{\mathrm{{MC}}}\left( h\right)$ and let ${\rho }_{2} = {\theta I}\left( {y \neq h\left( x\right) }\right) + \left( {1 - \theta }\right) \mathop{\sum }\limits_{{\bar{y} \neq y}}\frac{1}{k - 1}I\left( {\bar{y} = h\left( x\right) }\right)$ in ${R}_{{\mathcal{D}}^{\prime }}^{\mathrm{{LC}}}\left( h\right)$ . For a fixed $\left( {x, y}\right)$ , we consider two cases for any $h \in \mathcal{H}$ : (i) If $h\left( x\right) = y$ , then ${\rho }_{1} = 0$ and ${\rho }_{2} = 0$ , and thus there is no gap. (ii) If $h\left( x\right) \neq y$ , then the first term of ${\rho }_{2}$ is $\theta$ and the second term equals $\left( {1 - \theta }\right) /\left( {k - 1}\right)$ , because there exists a unique $\widehat{y} \neq y$ that satisfies $\widehat{y} = h\left( x\right)$ . Therefore, ${\rho }_{2}$ equals $\theta + \frac{1 - \theta }{k - 1}$ , while ${\rho }_{1} = 1$ . Thus, we have $\frac{k - 1}{\theta \left( {k - 2}\right) + 1}{R}_{{\mathcal{D}}^{\prime }}^{\mathrm{{LC}}}\left( h\right) = {R}_{\mathcal{D}}^{\mathrm{{MC}}}\left( h\right)$ .
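The case analysis can be verified numerically: in case (ii), $\rho_2 = \theta + \frac{1-\theta}{k-1} = \frac{\theta(k-2)+1}{k-1}$, so multiplying by the factor $\frac{k-1}{\theta(k-2)+1}$ recovers $\rho_1 = 1$ exactly:

```python
# Numeric check of case (ii) in the proof of Lemma 1: when h(x) != y,
# rho_2 = theta + (1 - theta)/(k - 1), and the rescaling factor
# (k - 1)/(theta*(k - 2) + 1) maps it back to rho_1 = 1.
for k in [2, 3, 5, 10]:                      # illustrative class counts
    for theta in [0.0, 0.25, 0.5, 1.0]:      # illustrative mixture weights
        rho2 = theta + (1 - theta) / (k - 1)
        factor = (k - 1) / (theta * (k - 2) + 1)
        assert abs(factor * rho2 - 1.0) < 1e-12
```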
610
+
611
+ ## F PROOF OF THEOREM 10
612
+
613
+ Proof. We use ${\eta }_{\left( x, y\right) }$ defined in (5.1.1). On the MIL-reduction framework, suppose that $p = \infty ;{f}_{1}\left( c\right) = \Gamma \left( {{2c}{C}_{1}{C}_{2}}\right)$ ; ${f}_{2}\left( c\right) = c/\left( {2{C}_{1}{C}_{2}}\right)$ (a shifting function into $\left\lbrack {-1, + 1}\right\rbrack$ ); $\alpha \left( {x,\left( {\gamma , y}\right) }\right) = \left( {{x}_{\left( x, y\right) }^{\prime },{y}^{\prime }}\right)$ where ${x}_{\left( x, y\right) }^{\prime } = \left\{ {{\eta }_{\left( x, j\right) } - {\eta }_{\left( x, y\right) }\mid \forall j \in \mathcal{Y} \smallsetminus y}\right\}$ ; ${y}^{\prime } = I\left( {\gamma = \text{ True }}\right)$ ; for any $z \in {\mathbb{R}}^{kd},\mathcal{G} = \left\{ {g : z \mapsto \left\langle {\left( {{w}_{1}^{\prime },\ldots ,{w}_{k}^{\prime }}\right) , z}\right\rangle \mid {w}_{j}^{\prime } \in {\mathbb{R}}^{d},\forall j \in \left\lbrack k\right\rbrack ,\parallel {W}^{\prime }\parallel \leq {C}_{1}}\right\}$ where ${W}^{\prime } = \left( {{w}_{1}^{\prime },\ldots ,{w}_{k}^{\prime }}\right)$ and $\begin{Vmatrix}{W}^{\prime }\end{Vmatrix} = \sqrt{\mathop{\sum }\limits_{{j = 1}}^{k}{\begin{Vmatrix}{w}_{j}^{\prime }\end{Vmatrix}}^{2}};\beta \left( {h}^{\prime }\right) : x \mapsto \arg \mathop{\max }\limits_{{j \in \left\lbrack k\right\rbrack }}\left\langle {\left( {{w}_{1}^{\prime },\ldots {w}_{k}^{\prime }}\right) , x}\right\rangle$ . Then, for any $\left( {x, y}\right)$ and $h \in \mathcal{H}$ ,
614
+
615
+ $$
616
+ {\ell }^{\prime }\left( {{x}^{\prime },{y}^{\prime },{h}^{\prime }}\right) = {f}_{1}\left( {{y}^{\prime }{\Psi }_{p}\left( \left\{ {{f}_{2}\left( {g\left( z\right) }\right) \mid z \in {x}_{\left( x, y\right) }^{\prime }}\right\} \right) }\right)
617
+ $$
618
+
619
+ $$
620
+ = \Gamma \left( {I\left( {\gamma = \text{ True }}\right) \times {\Psi }_{\infty }\left( \left\{ {g\left( z\right) \mid z \in {x}_{\left( x, y\right) }^{\prime }}\right\} \right) }\right)
621
+ $$
622
+
623
+ $$
624
+ = \Gamma \left( {I\left( {\gamma = \text{ True }}\right) \times \left( {\mathop{\max }\limits_{{j \in \mathcal{Y} \smallsetminus y}}\left( {\left\langle {{w}_{j}, x}\right\rangle - \left\langle {{w}_{y}, x}\right\rangle }\right) }\right) }\right)
625
+ $$
626
+
627
+ $$
628
+ = \ell \left( {x,\left( {\gamma , y}\right) , h}\right) \text{.}
629
+ $$
630
+
631
+ ## G REDUCTION FROM MULTI-TASK LEARNING PROBLEM
632
+
633
+ In multi-task learning, the learner finds a common rule across multiple tasks that correctly predicts the outputs of the instances. For example, in a multi-classification-task problem, there are three different binary classification tasks for image data: cat vs. dog, car vs. train, and apple vs. tomato.
634
+
635
+ Problem setting Let $\mathcal{X} \subseteq {\mathbb{R}}^{d}$ be an input space and $\mathcal{Y} = \{ - 1,1\}$ be an output space. We assume that the learner has $T$ different tasks with different data distributions. The learner receives $T$ sets of samples $S = {S}_{1},\ldots ,{S}_{T}$ where ${S}_{t} = \left( {\left( {{x}_{1}^{t},{y}_{1}^{t}}\right) ,\ldots ,\left( {{x}_{n}^{t},{y}_{n}^{t}}\right) }\right)$ is drawn i.i.d. according to an unknown distribution ${\mathcal{D}}_{t}$ . $\left( {{x}^{t},{y}^{t}}\right)$ denotes an instance and its label for task $t$ . Let $\mathcal{H} = \left\{ {h : {x}^{t} \mapsto \operatorname{sign}\left( \left\langle {{w}_{t},{x}^{t}}\right\rangle \right) \mid {w}_{t} \in {\mathbb{R}}^{d}}\right\}$ be a hypothesis class. Let $\ell : \left( {\left( {{x}^{1},\ldots ,{x}^{T}}\right) ,\left( {{y}^{1},\ldots ,{y}^{T}}\right) , h}\right) \mapsto \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}\Gamma \left( {-{y}^{t}\left\langle {{w}_{t},{x}^{t}}\right\rangle }\right)$ where $\Gamma : \mathbb{R} \rightarrow \left\lbrack {0,1}\right\rbrack$ is a convex, nondecreasing and $b$ -Lipschitz function. The generalization risk and empirical risk are formulated as:
636
+
637
+ $$
638
+ \underset{t}{\mathbb{E}}\left\lbrack {{R}_{{\mathcal{D}}_{t}}\left( h\right) }\right\rbrack = \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}\underset{\left( {{x}^{t},{y}^{t}}\right) \sim {\mathcal{D}}_{t}}{\mathbb{E}}\left\lbrack {\Gamma \left( {-{y}^{t}\left\langle {{w}_{t},{x}^{t}}\right\rangle }\right) }\right\rbrack ,
639
+ $$
640
+
641
+ $$
642
+ {\widehat{R}}_{S}\left( h\right) = \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}\frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}\Gamma \left( {-{y}_{i}^{t}\left\langle {{w}_{t},{x}_{i}^{t}}\right\rangle }\right) = \frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}\ell \left( {\left( {{x}_{i}^{1},\ldots ,{x}_{i}^{T}}\right) ,\left( {{y}_{i}^{1},\ldots ,{y}_{i}^{T}}\right) , h}\right) .
643
+ $$
644
+
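The empirical risk above can be computed directly. Below is a minimal numpy sketch; note that the paper only fixes $\Gamma$ abstractly, so the logistic sigmoid is used here as an illustrative stand-in (bounded in $[0,1]$, nondecreasing and Lipschitz, though not convex), and the function names and array shapes are our own choices.

```python
import numpy as np

def gamma(c):
    """Stand-in surrogate for the abstract Gamma: range [0, 1],
    nondecreasing and Lipschitz (though not convex)."""
    return 1.0 / (1.0 + np.exp(-c))

def multitask_empirical_risk(X, Y, W):
    """Empirical risk (1/T) sum_t (1/n) sum_i Gamma(-y_i^t <w_t, x_i^t>).

    X: (T, n, d) instances, Y: (T, n) labels in {-1, +1}, W: (T, d) weights.
    """
    margins = np.einsum('tnd,td->tn', X, W)   # <w_t, x_i^t> for all t, i
    return float(np.mean(gamma(-Y * margins)))
```

A well-aligned weight matrix drives the risk toward 0, while the negated weights drive it toward 1, reflecting the range of $\Gamma$.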
645
+ Reduction to MIL
646
+
647
+ Theorem 23. Multi-task learning is MIL-reducible.
648
+
649
+ Proof. For simplicity, we denote $\left( {{x}^{1},\ldots ,{x}^{T}}\right)$ by $\mathbf{x}$ and denote $\left( {{y}^{1},\ldots ,{y}^{T}}\right)$ by $\mathbf{y}$ . On the MIL-reduction framework, suppose that $p = 1;{f}_{1} : {f}_{1}\left( a\right) = - a;{f}_{2}$ is $\Gamma ;\alpha \left( {\mathbf{x},\mathbf{y}}\right) = \left( {{x}_{\left( \mathbf{x},\mathbf{y}\right) }^{\prime },{y}^{\prime }}\right)$ where ${x}_{\left( \mathbf{x},\mathbf{y}\right) }^{\prime } = \left\{ {\left( {-{y}^{1}{x}^{1},1}\right) ,\ldots ,\left( {-{y}^{T}{x}^{T}, T}\right) }\right\}$ ; ${y}^{\prime } = - 1;\mathcal{G} = \left\{ {g : \left( {z, t}\right) \mapsto \left\langle {{w}_{t}^{\prime }, z}\right\rangle \mid \forall t \in \left\lbrack T\right\rbrack ,{w}_{t}^{\prime } \in {\mathbb{R}}^{d}\text{ and }\begin{Vmatrix}{W}^{\prime }\end{Vmatrix} \leq {C}_{1}}\right\}$ where ${W}^{\prime } = \left( {{w}_{1}^{\prime },\ldots ,{w}_{T}^{\prime }}\right) ;\beta \left( {h}^{\prime }\right) : {x}^{t} \mapsto$ $\operatorname{sign}\left( \left\langle {{w}_{t}^{\prime },{x}^{t}}\right\rangle \right)$ . For any $\left( {\left( {{x}^{1},\ldots ,{x}^{T}}\right) ,\left( {{y}^{1},\ldots ,{y}^{T}}\right) }\right)$ and $h \in \mathcal{H}$ , we have that
650
+
651
+ $$
652
+ {\ell }^{\prime }\left( {{x}^{\prime },{y}^{\prime },{h}^{\prime }}\right) = {f}_{1}\left( {{y}^{\prime }{\Psi }_{p}\left( \left\{ {{f}_{2}\left( {g\left( z\right) }\right) \mid z \in {x}_{\left( \mathbf{x},\mathbf{y}\right) }^{\prime }}\right\} \right) }\right)
653
+ $$
654
+
655
+ $$
656
+ = \frac{1}{\left| {x}_{\left( \mathbf{x},\mathbf{y}\right) }^{\prime }\right| }\mathop{\sum }\limits_{{\left( {z, t}\right) \in {x}_{\left( \mathbf{x},\mathbf{y}\right) }^{\prime }}}\Gamma \left( {-{y}^{t}\left\langle {{w}_{t}^{\prime },{x}^{t}}\right\rangle }\right)
657
+ $$
658
+
659
+ $$
660
+ = \ell \left( {\left( {{x}^{1},\ldots ,{x}^{T}}\right) ,\left( {{y}^{1},\ldots ,{y}^{T}}\right) , h}\right)
661
+ $$
662
+
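The reduction in the proof can be checked numerically. The sketch below assumes bag elements of the form $(-y^t x^t, t)$ so that, with $f_1(a) = -a$, $f_2 = \Gamma$, and ${\Psi}_1$ the mean, the reduced loss reproduces the original loss; `gamma` is an illustrative stand-in for $\Gamma$ and all function names are ours.

```python
import numpy as np

def gamma(c):
    return 1.0 / (1.0 + np.exp(-c))   # illustrative stand-in for Gamma

def alpha(xs, ys):
    """Map a multi-task example ((x^1,...,x^T), (y^1,...,y^T)) to a MIL bag.
    Bag elements are (-y^t x^t, t); the bag label y' is always -1."""
    bag = [(-y * x, t) for t, (x, y) in enumerate(zip(xs, ys))]
    return bag, -1

def reduced_loss(bag, y_prime, W):
    """l' = f1(y' * Psi_1({f2(g(z))})) with f1(a) = -a, f2 = Gamma, Psi_1 = mean."""
    vals = [gamma(float(W[t] @ z)) for z, t in bag]
    return -(y_prime * float(np.mean(vals)))

def original_loss(xs, ys, W):
    """l = (1/T) sum_t Gamma(-y^t <w_t, x^t>)."""
    return float(np.mean([gamma(-y * float(W[t] @ x))
                          for t, (x, y) in enumerate(zip(xs, ys))]))
```

On any example, `reduced_loss` and `original_loss` agree exactly, which is the content of the identity $\ell' = \ell$ above.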
663
+ ## ERM algorithm
664
+
665
+ Corollary 24. The reduced ERM of the MIL from multi-task learning is a convex programming problem.
666
+
667
+ As shown in the proof of Theorem 23, ${f}_{1}$ is nonincreasing and ${y}_{i}^{\prime } = - 1$ for all $i \in \left\lbrack n\right\rbrack$ . Thus, by Proposition 5, if we consider a $\Gamma$ that is nondecreasing and convex, the reduced MIL problem is a convex programming problem and can be solved in polynomial time.
668
+
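Since the reduced objective is convex, plain gradient descent already suffices as an ERM sketch. The following minimal example uses the softplus surrogate (convex, nondecreasing, 1-Lipschitz, though unbounded rather than $[0,1]$-valued) as an assumed stand-in for $\Gamma$; the hyperparameters and function names are our own.

```python
import numpy as np

def erm_gradient_descent(X, Y, lam=0.01, lr=0.1, steps=300):
    """Minimize (1/(T*n)) sum_{t,i} softplus(-y_i^t <w_t, x_i^t>) + lam*||W||^2
    by plain gradient descent; with the convex surrogate the objective is
    convex in W, so this converges to the global minimum."""
    T, n, d = X.shape
    W = np.zeros((T, d))
    for _ in range(steps):
        M = np.einsum('tnd,td->tn', X, W)          # margins <w_t, x_i^t>
        S = 1.0 / (1.0 + np.exp(Y * M))            # d softplus(-y*m)/dm = -y*S
        G = -np.einsum('tn,tnd->td', Y * S, X) / (T * n) + 2 * lam * W
        W -= lr * G
    return W
```

On linearly separable per-task data, the learned $\operatorname{sign}(\langle w_t, x \rangle)$ predictions recover the labels.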
669
+ ## Generalization bound
670
+
671
+ Corollary 25. We assume that $\begin{Vmatrix}{x}_{i}^{t}\end{Vmatrix} \leq {C}_{2}$ for any $i \in \left\lbrack n\right\rbrack$ and $t \in \left\lbrack T\right\rbrack$ . In the reduced problem, the empirical Rademacher complexity of ${\widehat{\mathcal{H}}}^{\prime }$ is given as follows:
672
+
673
+ $$
674
+ {\Re }_{{S}^{\prime }}\left( {\widehat{\mathcal{H}}}^{\prime }\right) = O\left( \frac{\log \left( {2{n}^{2}T}\right) \left( {b{C}_{1}{C}_{2}\ln \left( n\right) }\right) }{\sqrt{n}}\right) ,
675
+ $$
676
+
677
+ where we assume $\begin{Vmatrix}{w}^{\prime }\end{Vmatrix} \leq {C}_{1}$ .
678
+
679
+ We can derive the above by the same argument as in the proof of Theorem 2.3. Using Corollary 2, we can obtain the generalization risk bound for the multi-task learning problem.
680
+
681
+ ## H PROOF OF THEOREM 16
682
+
683
+ Proof. On the MIL-reduction framework, suppose that $p = \infty ;{f}_{1} : {f}_{1}\left( a\right) = - a$ for $a \in \mathbb{R};{f}_{2}$ is $\Gamma ;\alpha \left( {x, y}\right) = \left( {{x}_{\left( x, y\right) }^{\prime },{y}^{\prime }}\right)$ where ${x}_{\left( x, y\right) }^{\prime } = \left\{ {\left( {-{y}^{1}x,1}\right) ,\ldots ,\left( {-{y}^{k}x, k}\right) }\right\} ;{y}^{\prime } = - 1;\mathcal{G} = \left\{ {g : \left( {z, j}\right) \mapsto \left\langle {{w}_{j}^{\prime }, z}\right\rangle \mid {w}_{j}^{\prime } \in {\mathbb{R}}^{d},\forall j \in \left\lbrack k\right\rbrack ,\parallel {W}^{\prime }\parallel \leq 1}\right\}$ where ${W}^{\prime } = \left( {{w}_{1}^{\prime },\ldots ,{w}_{k}^{\prime }}\right) ;\beta \left( {h}^{\prime }\right) : \left( {x, j}\right) \mapsto \left\langle {{w}_{j}^{\prime }, x}\right\rangle$ . For any $\left( {x, y}\right)$ and $h \in \mathcal{H}$ , we have that
684
+
685
+ $$
686
+ {\ell }^{\prime }\left( {{x}^{\prime },{y}^{\prime },{h}^{\prime }}\right) = {f}_{1}\left( {{y}^{\prime }{\Psi }_{p}\left( \left\{ {{f}_{2}\left( {g\left( z\right) }\right) \mid z \in {x}_{\left( x, y\right) }^{\prime }}\right\} \right) }\right)
687
+ $$
688
+
689
+ $$
690
+ = \mathop{\max }\limits_{{\left( { - {y}^{j}x, j}\right) \in {x}_{\left( x, y\right) }^{\prime }}}\Gamma \left( {-{y}^{j}\left\langle {{w}_{j}^{\prime }, x}\right\rangle }\right)
691
+ $$
692
+
693
+ $$
694
+ = \ell \left( {x, y, h}\right)
695
+ $$
696
+
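The identity established in this proof can be verified numerically. The sketch below assumes, as the displayed derivation suggests, that the loss of Theorem 16 is the per-label worst-case loss $\max_j \Gamma(-y^j \langle w_j, x \rangle)$ for a label vector $y \in \{-1,+1\}^k$; `gamma` is an illustrative stand-in for $\Gamma$ and all names are ours.

```python
import numpy as np

def gamma(c):
    return 1.0 / (1.0 + np.exp(-c))   # illustrative stand-in for Gamma

def multilabel_loss(x, y, W):
    """l(x, y, h) = max_j Gamma(-y^j <w_j, x>) for y in {-1, +1}^k."""
    return float(np.max(gamma(-y * (W @ x))))

def reduced_mil_loss(x, y, W):
    """Bag {(-y^j x, j)}, bag label y' = -1, p = infinity, f1(a) = -a, f2 = Gamma:
    l' = f1(y' * max_j Gamma(<w_j, -y^j x>)) = max_j Gamma(-y^j <w_j, x>)."""
    bag = [(-yj * x, j) for j, yj in enumerate(y)]
    y_prime = -1
    return -(y_prime * max(gamma(float(W[j] @ z)) for z, j in bag))
```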
697
+ ## I PROOF OF THEOREM 19
698
+
699
+ Proof. On the MIL-reduction framework, suppose that $p = \infty ;{f}_{1}\left( c\right) = \Gamma \left( {{2c}{C}_{1}{C}_{2}}\right) ;{f}_{2}\left( c\right) = c/2{C}_{1}{C}_{2};\alpha \left( {A,{x}^{ * }}\right) =$ $\left( {{x}^{\prime },{y}^{\prime }}\right)$ where ${x}^{\prime } = \left\{ {x - {x}^{ * } \mid x \in A \smallsetminus {x}^{ * }}\right\} ;{y}^{\prime } = - 1;\mathcal{G} = \left\{ {g : z \mapsto \left\langle {{w}^{\prime }, z}\right\rangle \mid \begin{Vmatrix}{w}^{\prime }\end{Vmatrix} \leq {C}_{1}}\right\} ;\beta \left( {h}^{\prime }\right) : A \mapsto \arg \mathop{\max }\limits_{{x \in A}}\left\langle {{w}^{\prime }, x}\right\rangle$ . For any $\left( {A,{x}^{ * }}\right)$ and $h \in \mathcal{H}$ , the following holds:
700
+
701
+ $$
702
+ {\ell }^{\prime }\left( {{x}^{\prime },{y}^{\prime },{h}^{\prime }}\right) = {f}_{1}\left( {{y}^{\prime }{\Psi }_{p}\left( \left\{ {{f}_{2}\left( {g\left( z\right) }\right) \mid z \in {x}^{\prime }}\right\} \right) }\right)
703
+ $$
704
+
705
+ $$
706
+ = \Gamma \left( {-{\Psi }_{\infty }\left( \left\{ {g\left( z\right) \mid z \in {x}^{\prime }}\right\} \right) }\right)
707
+ $$
708
+
709
+ $$
710
+ = \Gamma \left( {-\left( {\mathop{\max }\limits_{{x \in A \smallsetminus {x}^{ * }}}\left( {\left\langle {{w}^{\prime }, x}\right\rangle - \left\langle {{w}^{\prime },{x}^{ * }}\right\rangle }\right) }\right) }\right)
711
+ $$
712
+
713
+ $$
714
+ = \ell \left( {A,{x}^{ * }, h}\right)
715
+ $$
716
+
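The bag-of-differences construction in this proof is simple enough to check directly. In the sketch below, a nonincreasing sigmoid stands in for $\Gamma$ (the paper only fixes $\Gamma$ abstractly), and since the scaling $2{C}_{1}{C}_{2}$ introduced by $f_1$ cancels the one in $f_2$, the reduced loss is $\Gamma(-\max_z \langle w, z \rangle)$; all function names are ours.

```python
import numpy as np

def gamma(c):
    return 1.0 / (1.0 + np.exp(c))   # nonincreasing stand-in for Gamma

def top1_loss(A, x_star, w):
    """l(A, x*, h) = Gamma(<w, x*> - max_{x in A \\ {x*}} <w, x>)."""
    others = [x for x in A if x is not x_star]
    return float(gamma(float(w @ x_star) - max(float(w @ x) for x in others)))

def reduced_loss(A, x_star, w):
    """Bag of differences {x - x* | x in A \\ {x*}} with bag label y' = -1;
    the scalings in f1 and f2 cancel, so l' = Gamma(-max_z <w, z>)."""
    bag = [x - x_star for x in A if x is not x_star]
    return float(gamma(-max(float(w @ z) for z in bag)))
```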
717
+ ## J TOP-1 RANKING LEARNING WITH NEGATIVE FEEDBACK
718
+
719
+ As an extension of the Top-1 rank learning problem, we consider the following scenario. In practice, some item sets do not include the user-preferred item. Therefore, we assume that the item sets are partitioned into two types: the item sets that include the most preferred item and those that do not include the preferred item. For the second type of item set, we assume that we can receive information on non-preferred items as negative feedback from the user.
720
+
721
+ More formally, we assume that the target user has a scoring function $s$ and a parameter ${\gamma }_{i} \in \{ - 1, + 1\}$ , where ${\gamma }_{i}$ takes $+ 1$ for an item set that includes the preferred item and takes $-1$ otherwise. The learner receives the sequence of the sets of items and the chosen item with positive or negative information $S = \left( {\left( {{A}_{1},\left( {{x}_{1}^{ * },{\gamma }_{1}}\right) }\right) ,\ldots ,\left( {{A}_{n},\left( {{x}_{n}^{ * },{\gamma }_{n}}\right) }\right) }\right)$ . Here, ${\gamma }_{i} = + 1$ indicates that item set ${A}_{i}$ includes the preferred item, and ${\gamma }_{i} = - 1$ indicates that the item set ${A}_{i}$ does not include the preferred item. For an item set ${A}_{i}$ with ${\gamma }_{i} = + 1$ , ${x}_{i}^{ * } = \arg \mathop{\max }\limits_{{x \in {A}_{i}}}s\left( x\right)$ . Conversely, for an item set ${A}_{i}$ with ${\gamma }_{i} = - 1$ , ${x}_{i}^{ * } \in {A}_{i} \smallsetminus \left\{ {x}^{\prime }\right\}$ where ${x}^{\prime } = \arg \mathop{\max }\limits_{{x \in {A}_{i}}}s\left( x\right)$ ; that is, if ${\gamma }_{i} = - 1$ , the user selects an item other than the best-scored item by $s$ . Note that we assume that $\gamma$ is a known parameter only in the training phase. The other settings are the same as those in Sec. 5.2.2.
722
+
723
+ A reasonable goal of the learner is to predict the best item from a given set of items even in this setting. Therefore, the learner can recommend the most preferred item if $\gamma = + 1$ and can recommend a preferable item if $\gamma = - 1$ . Similar to top-1 ranking learning, we consider a loss function $\ell : \left( {A,\left( {{x}^{ * },\gamma }\right) , h}\right) \mapsto \Gamma \left( {\gamma \left( {\left\langle {w,{x}^{ * }}\right\rangle - \mathop{\max }\limits_{{x \in A \smallsetminus {x}^{ * }}}\left\langle {w, x}\right\rangle }\right) }\right)$ where $\Gamma : \mathbb{R} \rightarrow \left\lbrack {0,1}\right\rbrack$ is a convex, nonincreasing and $a$ -Lipschitz function. The generalization risk and empirical risk are formulated as follows:
724
+
725
+ $$
726
+ {R}_{\mathcal{D}}\left( h\right) = \underset{\left( {A,\gamma }\right) \sim \mathcal{D}}{\mathbb{E}}\left\lbrack {\ell \left( {A,\left( {{x}^{ * },\gamma }\right) , h}\right) }\right\rbrack ,
727
+ $$
728
+
729
+ $$
730
+ {\widehat{R}}_{S}\left( h\right) = \frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}\ell \left( {{A}_{i},\left( {{x}_{i}^{ * },{\gamma }_{i}}\right) , h}\right) ,
731
+ $$
732
+
733
+ where ${x}^{ * } = \arg \mathop{\max }\limits_{{x \in A}}s\left( x\right)$ .
734
+
735
+ Reduction to MIL
736
+
737
+ Theorem 26. Top-1 ranking learning with negative feedback is MIL-reducible.
738
+
739
+ The only difference from top-1 ranking learning is that ${y}_{i}^{\prime } = - {\gamma }_{i}$ , and thus the proof is straightforward.
740
+
741
+ Proof. On the MIL-reduction framework, suppose that $p = \infty ;{f}_{1}\left( c\right) = \Gamma \left( {{2c}{C}_{1}{C}_{2}}\right) ;{f}_{2}\left( c\right) = c/2{C}_{1}{C}_{2};\alpha \left( {A,\left( {{x}^{ * },\gamma }\right) }\right) =$ $\left( {{x}^{\prime },{y}^{\prime }}\right)$ where ${x}^{\prime } = \left\{ {x - {x}^{ * } \mid x \in A \smallsetminus {x}^{ * }}\right\} ;{y}^{\prime } = - \gamma ;\mathcal{G} = \left\{ {g : z \mapsto \left\langle {{w}^{\prime }, z}\right\rangle \mid \begin{Vmatrix}{w}^{\prime }\end{Vmatrix} \leq {C}_{1}}\right\} ;\beta \left( {h}^{\prime }\right) : A \mapsto \arg \mathop{\max }\limits_{{x \in A}}\left\langle {{w}^{\prime }, x}\right\rangle$ .
742
+
743
+ For any $\left( {A,\left( {{x}^{ * },\gamma }\right) }\right)$ and $h \in \mathcal{H}$ , the following holds:
744
+
745
+ $$
746
+ {\ell }^{\prime }\left( {{x}^{\prime },{y}^{\prime },{h}^{\prime }}\right) = {f}_{1}\left( {{y}^{\prime }{\Psi }_{p}\left( \left\{ {{f}_{2}\left( {g\left( z\right) }\right) \mid z \in {x}^{\prime }}\right\} \right) }\right)
747
+ $$
748
+
749
+ $$
750
+ = \Gamma \left( {-\gamma {\Psi }_{\infty }\left( \left\{ {g\left( z\right) \mid z \in {x}^{\prime }}\right\} \right) }\right)
751
+ $$
752
+
753
+ $$
754
+ = \Gamma \left( {-\gamma \mathop{\max }\limits_{{x \in A \smallsetminus {x}^{ * }}}\left( {\left\langle {{w}^{\prime }, x}\right\rangle - \left\langle {{w}^{\prime },{x}^{ * }}\right\rangle }\right) }\right)
755
+ $$
756
+
757
+ $$
758
+ = \ell \left( {A,\left( {{x}^{ * },\gamma }\right) , h}\right)
759
+ $$
760
+
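As in the previous reductions, the identity can be checked numerically for both feedback types. The sketch below uses a nonincreasing sigmoid as an assumed stand-in for $\Gamma$ and exploits that the scalings in $f_1$ and $f_2$ cancel, so the reduced loss is $\Gamma(y' \max_z \langle w, z \rangle)$ with $y' = -\gamma$; all function names are ours.

```python
import numpy as np

def surr(c):
    return 1.0 / (1.0 + np.exp(c))   # nonincreasing stand-in for Gamma

def nf_loss(A, x_star, gma, w):
    """l(A, (x*, gamma), h) = Gamma(gamma * (<w, x*> - max_{x != x*} <w, x>))."""
    others = [x for x in A if x is not x_star]
    return float(surr(gma * (float(w @ x_star)
                             - max(float(w @ x) for x in others))))

def nf_reduced_loss(A, x_star, gma, w):
    """Bag {x - x* | x in A \\ {x*}} with bag label y' = -gamma; the scalings
    in f1 and f2 cancel, so l' = Gamma(-gamma * max_z <w, z>)."""
    bag = [x - x_star for x in A if x is not x_star]
    return float(surr(-gma * max(float(w @ z) for z in bag)))
```

The two losses agree for $\gamma = +1$ and $\gamma = -1$ alike, since $-\gamma \max_x (\langle w, x \rangle - \langle w, x^* \rangle) = \gamma (\langle w, x^* \rangle - \max_x \langle w, x \rangle)$.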
761
+ ## Generalization bound
762
+
763
+ Corollary 27. We assume that $\parallel x\parallel \leq {C}_{2}$ for any $x \in {A}_{i}$ and $i \in \left\lbrack n\right\rbrack$ . In the reduced MIL problem, the empirical Rademacher complexity of ${\widehat{\mathcal{H}}}^{\prime }$ is given as follows:
764
+
765
+ $$
766
+ {\Re }_{{S}^{\prime }}\left( {\widehat{\mathcal{H}}}^{\prime }\right) = O\left( \frac{\log \left( {{\widehat{a}}^{2}{n}^{2}\left( {k - 1}\right) }\right) \left( {2\widehat{a}\ln \left( {{\widehat{a}}^{2}n}\right) }\right) }{\sqrt{n}}\right) ,
767
+ $$
768
+
769
+ where $\widehat{a} = {2a}{C}_{1}{C}_{2}$ and we assume $\begin{Vmatrix}{w}^{\prime }\end{Vmatrix} \leq {C}_{1}$ .
770
+
771
+ Using Corollary 2, we can obtain the generalization risk bound for top-1 ranking learning with negative feedback.
772
+
773
+ ## ERM algorithm
774
+
775
+ Corollary 28. The reduced ERM of MIL from top-1 ranking learning with negative feedback is a DC programming problem.
776
+
777
+ In top-1 ranking learning with negative feedback, ${y}^{\prime } \in \{ - 1,1\}$ . By the proof of Theorem 26 and by Proposition 6, if we consider a loss function $\Gamma \left( c\right)$ that is nondecreasing and homogeneous of degree 1 for $c \in \left\lbrack {-1,1}\right\rbrack$ , such as the hinge loss, we can solve the problem by the DC algorithm as shown in Algorithm 1.
778
+
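To make the linearize-and-solve structure of a DC algorithm concrete, here is a generic DCA loop on a toy one-dimensional DC objective. This is only an illustration of the iteration scheme, not the paper's Algorithm 1; the decomposition $f(w) = g(w) - h(w)$ and all names below are our own.

```python
def dca(grad_h, solve_linearized, w0, steps=50):
    """Generic DC algorithm for min_w g(w) - h(w) with g, h convex:
    repeatedly take a (sub)gradient s of h at the current iterate and
    solve the convex subproblem min_w g(w) - s*w."""
    w = w0
    for _ in range(steps):
        s = grad_h(w)
        w = solve_linearized(s)
    return w

# Toy DC decomposition: f(w) = w**2 - |w|, i.e. g(w) = w**2, h(w) = |w|.
grad_h = lambda w: 1.0 if w >= 0 else -1.0    # subgradient of |w|
solve_linearized = lambda s: s / 2.0          # argmin_w (w**2 - s*w)
```

Starting from any positive point, the iterates converge to the local minimizer $w = 1/2$ of the toy objective, with objective value $-1/4$.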
779
+ ## K PROOF OF THEOREM 22
780
+
781
+ Proof. For the optimization problem (5), we can apply the standard representer theorem (see, e.g., Theorem 6.11 of Mohri et al. [2018]). We define ${\mathbb{H}}_{1}$ as the subspace spanned by $\left\{ {\langle z, \cdot \rangle \mid z \in {P}_{{S}^{\prime }}}\right\}$ , namely, ${\mathbb{H}}_{1} = \left\{ {w \in \mathbb{H} \mid w = \mathop{\sum }\limits_{{z \in {P}_{{S}^{\prime }}}}{\mu }_{z}z,{\mu }_{z} \in \mathbb{R}}\right\}$ . For any $w \in \mathbb{H}$ , we can consider the decomposition $w = {w}_{1} + {w}_{1}^{ \bot }$ , where ${w}_{1} \in {\mathbb{H}}_{1}$ and ${w}_{1}^{ \bot } \in {\mathbb{H}}_{1}^{ \bot }$ is its orthogonal component. Because ${\mathbb{H}}_{1}$ is a subspace of $\mathbb{H}$ , $\parallel w{\parallel }_{\mathbb{H}} = \sqrt{{\begin{Vmatrix}{w}_{1}\end{Vmatrix}}_{\mathbb{H}}^{2} + {\begin{Vmatrix}{w}_{1}^{ \bot }\end{Vmatrix}}_{\mathbb{H}}^{2}} \geq {\begin{Vmatrix}{w}_{1}\end{Vmatrix}}_{\mathbb{H}}$ . Moreover, by the definition of ${\mathbb{H}}_{1}$ , $\langle w, z\rangle = \left\langle {{w}_{1}, z}\right\rangle$ for any $z \in {P}_{{S}^{\prime }}$ . Thus, ${f}_{1}\left( {{y}_{i}^{\prime }{\Psi }_{p}\left( \left\{ {{f}_{2}\left( {\langle w, z\rangle }\right) \mid z \in {x}_{i}^{\prime }}\right\} \right) }\right) = {f}_{1}\left( {{y}_{i}^{\prime }{\Psi }_{p}\left( \left\{ {{f}_{2}\left( {\left\langle {{w}_{1}, z}\right\rangle }\right) \mid z \in {x}_{i}^{\prime }}\right\} \right) }\right)$ and ${\begin{Vmatrix}{w}_{1}\end{Vmatrix}}_{\mathbb{H}} \leq \parallel w{\parallel }_{\mathbb{H}}$ . This implies that the optimal solution is contained in ${\mathbb{H}}_{1}$ .
782
+
783
+ ## L DC ALGORITHM FOR KERNELIZED EXTENSION
784
+
785
+ The algorithm is shown in Algorithm 2.
786
+
787
+ ## M EXAMPLE: REDUCTION FROM MULTI-CLASS LEARNING WITH KERNEL
788
+
789
+ ### M.1 REDUCTION TO MIL WITH KERNEL
790
+
791
+ Theorem 29. Multi-class learning with kernel is MIL-reducible.
792
+
793
+ Algorithm 2 MIL optimization via DC Algorithm
794
+
795
+ ---
796
+
797
+ Inputs:
798
+
799
+ ${S}^{\prime },\lambda$
800
+
801
+ Initialize:
802
+
803
+ ${\mu }_{0} \in {\mathbb{R}}^{\left| {P}_{{S}^{\prime }}\right| }$
804
+
805
+ for $t = 1,\ldots$ ,(until convergence) do
806
+
807
+ Compute the subgradient:
808
+
809
+ $$
810
+ {s}_{t} \in {\nabla }_{\mathbf{\mu }}\left( {\mathop{\sum }\limits_{{i : {y}_{i}^{\prime } = - 1}}{f}_{1}\left( {{\Psi }_{p}\left( \left\{ {{f}_{2}\left( {\mathop{\sum }\limits_{{v \in {P}_{{S}^{\prime }}}}{\mu }_{v}K\left( {v, z}\right) }\right) \mid z \in {x}_{i}^{\prime }}\right\} \right) }\right) }\right)
811
+ $$
812
+
813
+ at ${\mathbf{\mu }}_{t - 1}$ .
814
+
815
+ Solve the following subproblem:
816
+
817
+ $$
818
+ {\mathbf{\mu }}_{t} \leftarrow \arg \mathop{\min }\limits_{{\mathbf{\mu } \in {\mathbb{R}}^{\left| {P}_{{S}^{\prime }}\right| }}}\lambda \mathop{\sum }\limits_{{v,\widehat{v} \in {P}_{{S}^{\prime }}}}{\mu }_{v}{\mu }_{\widehat{v}}K\left( {v,\widehat{v}}\right)
819
+ $$
820
+
821
+ $$
822
+ + \mathop{\sum }\limits_{{i : {y}_{i}^{\prime } = + 1}}{f}_{1}\left( {{\Psi }_{p}\left( \left\{ {{f}_{2}\left( {\mathop{\sum }\limits_{{v \in {P}_{{S}^{\prime }}}}{\mu }_{v}K\left( {v, z}\right) }\right) \mid z \in {x}_{i}^{\prime }}\right\} \right) }\right)
823
+ $$
824
+
825
+ $$
826
+ - {s}_{t}^{\top }\mu
827
+ $$
828
+
829
+ end for
830
+
831
+ return ${\mu }_{t}$
832
+
833
+ ---
834
+
835
+ Proof. For any $\left( {x, y}\right)$ , we define
836
+
837
+ $$
838
+ {\eta }_{\left( x, y\right) } = \left( {{0}_{\mathbb{H}},\ldots ,{0}_{\mathbb{H}},\underset{y - \text{ th block }}{\underbrace{\Phi \left( x\right) }},{0}_{\mathbb{H}},\ldots ,{0}_{\mathbb{H}}}\right) \in {\mathbb{H}}^{k}, \tag{7}
839
+ $$
840
+
841
+ where ${0}_{\mathbb{H}}$ is a point in $\mathbb{H}$ satisfying $\left\langle {{0}_{\mathbb{H}}, v}\right\rangle = 0$ for any $v \in \mathbb{H}$ . On the MIL-reduction framework, suppose that $p = \infty$ ; ${f}_{1}\left( c\right) = \Gamma \left( {c{C}_{1}{C}_{2}}\right) ;{f}_{2}\left( c\right) = c/{C}_{1}{C}_{2};\alpha \left( {x, y}\right) = \left( {{x}_{\left( x, y\right) }^{\prime },{y}^{\prime }}\right)$ where ${x}_{\left( x, y\right) }^{\prime } = \left\{ {{\eta }_{\left( x, j\right) } - {\eta }_{\left( x, y\right) }\mid \forall j \in \mathcal{Y} \smallsetminus y}\right\} ;{y}^{\prime } = - 1;\mathcal{G} =$ $\left\{ {g : z \mapsto \left\langle {\left( {{w}_{1}^{\prime },\ldots ,{w}_{k}^{\prime }}\right) , z}\right\rangle \mid \forall j \in \left\lbrack k\right\rbrack ,{w}_{j}^{\prime } \in \mathbb{H},{\begin{Vmatrix}{W}^{\prime }\end{Vmatrix}}_{{\mathbb{H}}^{k}} \leq {C}_{1}}\right\}$ where ${W}^{\prime } = \left( {{w}_{1}^{\prime },\ldots ,{w}_{k}^{\prime }}\right) ,{\begin{Vmatrix}{W}^{\prime }\end{Vmatrix}}_{{\mathbb{H}}^{k}} = \sqrt{\mathop{\sum }\limits_{{j = 1}}^{k}{\begin{Vmatrix}{w}_{j}^{\prime }\end{Vmatrix}}_{\mathbb{H}}^{2}}$ . Then, for any $\left( {x, y}\right)$ and $h \in \mathcal{H}$ ,
842
+
843
+ $$
844
+ {\ell }^{\prime }\left( {{x}^{\prime },{y}^{\prime },{h}^{\prime }}\right) = {f}_{1}\left( {{y}^{\prime }{\Psi }_{p}\left( \left\{ {{f}_{2}\left( {g\left( z\right) }\right) \mid z \in {x}_{\left( x, y\right) }^{\prime }}\right\} \right) }\right)
845
+ $$
846
+
847
+ $$
848
+ = \Gamma \left( {-{\Psi }_{\infty }\left( \left\{ {g\left( z\right) \mid z \in {x}_{\left( x, y\right) }^{\prime }}\right\} \right) }\right)
849
+ $$
850
+
851
+ $$
852
+ = \Gamma \left( {-\left( {\mathop{\max }\limits_{{j \in \mathcal{Y} \smallsetminus y}}\left( \left\langle {{W}^{\prime },{\eta }_{\left( x, j\right) } - {\eta }_{\left( x, y\right) }}\right\rangle \right) }\right) }\right)
853
+ $$
854
+
855
+ $$
856
+ = \Gamma \left( {-\left( {\mathop{\max }\limits_{{j \in \mathcal{Y} \smallsetminus y}}\left( {\left\langle {{w}_{j},\Phi \left( x\right) }\right\rangle - \left\langle {{w}_{y},\Phi \left( x\right) }\right\rangle }\right) }\right) }\right)
857
+ $$
858
+
859
+ $$
860
+ = \ell \left( {x, y, h}\right)
861
+ $$
862
+
863
+ ### M.2 CONSTRUCTION OF $\beta$
864
+
865
+ By Theorem 22, ${W}^{\prime }$ can be recovered from $\mathbf{\mu }$ as
866
+
867
+ $$
868
+ {W}^{\prime } = \mathop{\sum }\limits_{{z \in {P}_{{S}^{\prime }}}}{\mu }_{z}z
869
+ $$
870
+
871
+ Moreover, ${w}_{j}^{\prime }$ can be represented as:
872
+
873
+ $$
874
+ {w}_{j}^{\prime } = \mathop{\sum }\limits_{{z\left\lbrack j\right\rbrack \in {P}_{{S}^{\prime }, j}}}{\mu }_{z\left\lbrack j\right\rbrack }z\left\lbrack j\right\rbrack
875
+ $$
876
+
877
+ where ${P}_{{S}^{\prime }, j} = \left\{ {z\left\lbrack j\right\rbrack \mid z \in \mathop{\bigcup }\limits_{{i = 1}}^{n}{x}_{i}^{\prime }}\right\}$ and $z\left\lbrack j\right\rbrack$ is the $j$ -th block of $z$ . That is, $z\left\lbrack j\right\rbrack$ can be rewritten as $\Phi \left( {\widetilde{x}}_{j}\right)$ for some ${\widetilde{x}}_{j}$ . Note that, because $z$ is based on ${\eta }_{\left( x, y\right) }$ as shown in (7), $z\left\lbrack j\right\rbrack$ lies in the Hilbert space $\mathbb{H}$ of the original problem. Therefore, based on the relationship between ${W}^{\prime } = \left( {{w}_{1}^{\prime },\ldots ,{w}_{k}^{\prime }}\right)$ and $W = \left( {{w}_{1},\ldots ,{w}_{k}}\right)$ , the hypothesis $h\left( x\right)$ in the original problem is obtained by:
878
+
879
+ $$
880
+ h\left( x\right) = \arg \mathop{\max }\limits_{{j \in \left\lbrack k\right\rbrack }}\left\langle {{w}_{j},\Phi \left( x\right) }\right\rangle
881
+ $$
882
+
883
+ $$
884
+ = \arg \mathop{\max }\limits_{{j \in \left\lbrack k\right\rbrack }}\left\langle {{w}_{j}^{\prime },\Phi \left( x\right) }\right\rangle
885
+ $$
886
+
887
+ $$
888
+ = \arg \mathop{\max }\limits_{{j \in \left\lbrack k\right\rbrack }}\mathop{\sum }\limits_{{z\left\lbrack j\right\rbrack \in {P}_{{S}^{\prime }, j}}}{\mu }_{z\left\lbrack j\right\rbrack }\langle z\left\lbrack j\right\rbrack ,\Phi \left( x\right) \rangle
889
+ $$
890
+
891
+ $$
892
+ = \arg \mathop{\max }\limits_{{j \in \left\lbrack k\right\rbrack }}\mathop{\sum }\limits_{{\widetilde{x}}_{j}}{\mu }_{{\widetilde{x}}_{j}}K\left( {{\widetilde{x}}_{j}, x}\right) .
893
+ $$
894
+
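The final display above says the kernelized hypothesis is evaluated purely through kernel calls against the support points of each class. A minimal sketch of such a predictor follows; the RBF kernel and all names (`anchors`, `mus`, `kernel_hypothesis`) are our own illustrative choices, standing in for the $\widetilde{x}_j$ and $\mu_{\widetilde{x}_j}$ of the derivation.

```python
import numpy as np

def rbf(u, v, s=1.0):
    """Gaussian RBF kernel K(u, v)."""
    return float(np.exp(-np.sum((np.asarray(u) - np.asarray(v)) ** 2)
                        / (2 * s * s)))

def kernel_hypothesis(x, anchors, mus, K=rbf):
    """h(x) = argmax_j sum_i mus[j][i] * K(anchors[j][i], x): each class j
    is scored through its own kernel expansion, as in the display above."""
    scores = [sum(m * K(a, x) for a, m in zip(anchors[j], mus[j]))
              for j in range(len(anchors))]
    return int(np.argmax(scores))
```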
895
+ We can show that the other learning problems presented in this paper can also be kernelized. For the other learning problems introduced in this study, there are two types of domains of $z$ : concatenations of Hilbert-space vectors (complementarily labeled learning, multi-label learning, multi-task learning) and differences of Hilbert-space vectors (top-1 ranking learning). For a difference of Hilbert-space vectors, that is, for $z = \Phi \left( {x}_{1}\right) - \Phi \left( {x}_{2}\right)$ , the inner product $\langle z,\Phi \left( x\right) \rangle$ can be computed as:
896
+
897
+ $$
898
+ \langle z,\Phi \left( x\right) \rangle
899
+ $$
900
+
901
+ $$
902
+ = \left\langle {\Phi \left( {x}_{1}\right) - \Phi \left( {x}_{2}\right) ,\Phi \left( x\right) }\right\rangle
903
+ $$
904
+
905
+ $$
906
+ = K\left( {{x}_{1}, x}\right) - K\left( {{x}_{2}, x}\right) \text{,}
907
+ $$
908
+
909
+ and thus $h\left( x\right)$ can be computed in polynomial time from ${h}^{\prime }$ .
910
+
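The bilinearity identity above is a one-liner in code. With the linear kernel (where $\Phi$ is the identity) the claim can be checked against a direct inner-product computation; the helper names are ours.

```python
import numpy as np

def inner_with_difference(x1, x2, x, K):
    """<Phi(x1) - Phi(x2), Phi(x)> evaluated purely through kernel calls,
    as K(x1, x) - K(x2, x)."""
    return K(x1, x) - K(x2, x)

# With the linear kernel (Phi = identity) the identity can be checked directly.
k_lin = lambda u, v: float(np.dot(u, v))
```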
911
+ ## N COMPARISON TO THE EXISTING GENERALIZATION BOUND FOR COMPLEMENTARILY LABELED LEARNING
912
+
913
+ Ishida et al. [2017] stated that, for a linear-hypothesis class, the following bound holds with probability at least $1 - \delta$ : ${R}_{\mathcal{D}}^{\mathrm{{MC}}}\left( h\right) \leq \widehat{R}\left( h\right) + k\left( {k - 1}\right) L\sqrt{{R\Lambda }/n} + \left( {k - 1}\right) \sqrt{8\ln \left( {2/\delta }\right) /n}$ . They used the empirical risk $\widehat{R}\left( h\right)$ for complementarily labeled instances, which is different from the risk that we defined (see Ishida et al. [2017] for details). Owing to this difference, the proposed generalization bound is incomparable to the existing bound. However, we can say that if we achieve an empirical risk close to zero, the proposed risk bound is $k$ times tighter than the existing bound.
UAI/UAI 2022/UAI 2022 Conference/BFZL7ULicg5/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,473 @@
1
+ § SIMPLIFIED AND UNIFIED ANALYSIS OF VARIOUS LEARNING PROBLEMS BY REDUCTION TO MULTIPLE-INSTANCE LEARNING
2
+
3
+ § ABSTRACT
4
+
5
+ In statistical learning, many problem formulations have been proposed so far, such as multi-class learning, complementarily labeled learning, multi-label learning, and multi-task learning, which provide theoretical models for various real-world tasks. Although they have been extensively studied, the relationship among them has not been fully investigated. In this work, we focus on a particular problem formulation called Multiple-Instance Learning (MIL), and show that various learning problems, including all the problems mentioned above together with some new problems, can be reduced to MIL with theoretically guaranteed generalization bounds, where the reductions are established under a new reduction scheme we provide as a byproduct. The results imply that the MIL-reduction gives a simplified and unified framework for designing and analyzing algorithms for various learning problems. Moreover, we show that the MIL-reduction framework can be kernelized.
6
+
7
+ § 1 INTRODUCTION
8
+
9
+ In this study, we explore how a large class of learning problems can be reduced to the Multiple-Instance Learning (MIL) problem. This is strongly motivated by the results of [Sabato and Tishby, 2012] and [Suehiro et al., 2020]. Suehiro et al. [2020] showed that some local-feature-based learning problems can be reduced to a MIL problem, which gave us an insight that MIL would have a high capability of representing various learning problems. Indeed, although the reduced problem there is rather specific, Sabato and Tishby [2012] proposed a much more general formulation of MIL, and thus we believe that a wider class of learning problems can be reduced to MIL.
10
+
11
+ We provide a MIL-reduction framework and reveal that various learning problems, such as multi-class learning, complementarily labeled learning, multi-label learning, and multitask learning, can be reduced to MIL. By the reduction, we immediately derive generalization bounds from [Sabato and Tishby, 2012], as well as learning algorithms. That is, our reduction framework greatly simplifies the analyses of generalization bounds as compared with the analyses in the previous works [e.g., Lei et al., 2019, Ishida et al., 2017, Yu et al., 2014, Pontil and Maurer, 2013]. Some of the obtained generalization bounds are competitive or incomparable to the existing results. In particular, for multi-label learning, we derive an improved generalization bound, and for complementarily labeled learning, we derive a novel learning algorithm, which is the first polynomial-time algorithm in a certain setting. Moreover, we propose three new learning problems, multi-label learning with perfectionistic loss, top- 1 ranking learning and top-1 ranking learning with negative feedback, and we demonstrate that they can be reduced to MIL as well. The results imply that our MIL-reduction gives a unified framework for designing and analyzing algorithms for various learning problems.
12
+
13
+ To provide the MIL-reduction framework, we propose a general reduction scheme among learning problems. Our scheme has two remarkable features, as described below. First, our reduction transforms every instance-label pair $\left( {x,y}\right)$ in the given sample of the original learning problem to an instance-label pair $\left( {{x}^{\prime },{y}^{\prime }}\right)$ to form a sample of the reduced learning problem. In contrast, standard reduction schemes employ an instance transformation and a label transformation separately, to construct ${x}^{\prime }$ from $x$ and ${y}^{\prime }$ from $y$ , respectively. Therefore, our scheme enables us to design reduction algorithms among a wider class of learning problems, e.g., learning-to-rank to classification, and supervised learning to weakly supervised learning. Second, our reduction scheme ensures that the Empirical Risk Minimization (ERM) of the reduced problem implies the ERM of the original one, while the empirical Rademacher complexities of the hypothesis classes (composed with the loss functions) are preserved through the reduction. This means that we can employ an existing ERM algorithm for the reduced problem to obtain an ERM algorithm for the original problem with a theoretically guaranteed generalization bound, which is immediately derived from a known generalization bound for the reduced problem. We also show that the MIL-reduction framework can be kernelized.
14
+
15
+ The main contributions are summarized as follows:
16
+
17
+ * We propose a general reduction scheme based on the ERM, which allows us to derive a generalization risk bound of the original problem immediately.
18
+
19
+ * We demonstrate that several learning problems, from traditional to new problems, can be reduced to MIL. The results imply that our MIL-reduction gives a simplified and unified framework for the analyses for various learning problems.
20
+
21
+ * We obtain novel theoretical results for some learning problems.
22
+
23
+ * We show that the MIL-reduction framework can be kernelized.
24
+
25
+ Several proofs are shown in supplementary materials.
26
+
27
+ § 2 PRELIMINARIES
28
+
29
+ For an integer $u$ , $\left\lbrack u\right\rbrack$ denotes the set $\{ 1,\ldots ,u\}$ . $I\left( \mathrm{e}\right)$ denotes the indicator function of the event $\mathrm{e}$ , that is, $I\left( \mathrm{e}\right) = 1$ if $\mathrm{e}$ is true and $I\left( \mathrm{e}\right) = 0$ otherwise.
30
+
31
+ A learning problem is represented by a pair $\left( {\mathcal{H},\ell }\right)$ of a hypothesis class $\mathcal{H} \subseteq \{ h : \mathcal{X} \rightarrow \mathcal{Y}\}$ and a loss function $\ell : \mathcal{X} \times \mathcal{Y} \times \mathcal{H} \rightarrow \mathbb{R}$ for some input space $\mathcal{X}$ and output space $\mathcal{Y}$ . A learner receives a sample $S = \left( {\left( {{x}_{1},{y}_{1}}\right) ,\ldots ,\left( {{x}_{n},{y}_{n}}\right) }\right)$ where each input-output pair $\left( {{x}_{i},{y}_{i}}\right)$ is drawn i.i.d. according to an unknown distribution $\mathcal{D}$ over $\mathcal{X} \times \mathcal{Y}$ . The goal of the learner is to find, with high probability, a hypothesis $h \in \mathcal{H}$ so that the generalization risk ${R}_{\mathcal{D}}\left( h\right) =$ ${\mathbb{E}}_{\left( {x,y}\right) \sim \mathcal{D}}\ell \left( {x,y,h}\right)$ is small.
32
+
33
+ For a learning problem $\left( {\mathcal{H},\ell }\right)$ , we define a class of loss functions as $\widehat{\mathcal{H}} = \{ \left( {x,y}\right) \mapsto \ell \left( {x,y,h}\right) \mid h \in \mathcal{H}\}$ when the underlying loss function $\ell$ is clear from the context. We give the definition of the empirical Rademacher complexity, which is used to bound the generalization risk.
34
+
35
+ Definition 1 (Empirical Rademacher complexity [Bartlett and Mendelson, 2003]). Given a sample $S = \left( {\left( {{x}_{1},{y}_{1}}\right) ,\ldots ,\left( {{x}_{n},{y}_{n}}\right) }\right) \in {\left( \mathcal{X} \times \mathcal{Y}\right) }^{n}$ , the empirical Rademacher complexity ${\Re }_{S}\left( \widehat{\mathcal{H}}\right)$ of a class $\widehat{\mathcal{H}}$ w.r.t. $S$ is defined as ${\Re }_{S}\left( \widehat{\mathcal{H}}\right) = \frac{1}{n}{\mathbb{E}}_{\mathbf{\sigma }}\left\lbrack {\mathop{\sup }\limits_{{g \in \widehat{\mathcal{H}}}}\mathop{\sum }\limits_{{i = 1}}^{n}{\sigma }_{i}g\left( {{x}_{i},{y}_{i}}\right) }\right\rbrack$ , where $\mathbf{\sigma } \in \{ - 1,1{\} }^{n}$ and each ${\sigma }_{i}$ is an independent uniform random variable in $\{ - 1,1\}$ .
36

Generalization risk bound [Mohri et al., 2018]. Let $(\mathcal{H},\ell)$ be a learning problem and $S$ be a sample of size $n$ drawn according to a distribution $\mathcal{D}$. Then, it holds with probability at least $1 - \delta$ that for all $h \in \mathcal{H}$,

$$
{R}_{\mathcal{D}}(h) \leq {\widehat{R}}_{S}(h) + 2\Re_S(\widehat{\mathcal{H}}) + 3\sqrt{\log(2/\delta)/(2n)},
$$

where ${\widehat{R}}_{S}(h) = \frac{1}{n}\sum_{i=1}^{n}\ell(x_i,y_i,h)$ denotes the empirical risk of $h$ for sample $S$.
§ 3 REDUCTION SCHEME FOR ERM

We introduce a general reduction scheme for empirical risk minimization and provide useful theoretical results.

Definition 2 (ERM-reduction). A learning problem $(\mathcal{H},\ell)$ over input-output space $\mathcal{X} \times \mathcal{Y}$ is ERM-reducible to another learning problem $(\mathcal{H}',\ell')$ over input-output space $\mathcal{X}' \times \mathcal{Y}'$ if there exist polynomial-time computable functions $\alpha : \mathcal{X} \times \mathcal{Y} \rightarrow \mathcal{X}' \times \mathcal{Y}'$ and $\beta : \mathcal{H}' \rightarrow \mathcal{H}$ such that for any $(x,y) \in \mathcal{X} \times \mathcal{Y}$ and for any $h' \in \mathcal{H}'$,

$$
\ell(x,y,h) = \ell'(x',y',h'),
$$

where $(x',y') = \alpha(x,y)$ and $h = \beta(h')$.
We now show the key relationship between the original problem and the reduced problem.

Proposition 1. Suppose that $(\mathcal{H},\ell)$ is ERM-reducible to $(\mathcal{H}',\ell')$ with transformations $\alpha$ and $\beta$. For any sample $S = ((x_1,y_1),\ldots,(x_n,y_n)) \in (\mathcal{X} \times \mathcal{Y})^n$, the following holds:

(i) (In)equality of the ERMs:

$$
\min_{h \in \mathcal{H}}{\widehat{R}}_{S}(h) \leq \min_{h \in \mathcal{H}_{\beta}}{\widehat{R}}_{S}(h) = \min_{h' \in \mathcal{H}'}{\widehat{R}}_{S'}(h'),
$$

where $\mathcal{H}_{\beta} = \{\beta(h') \mid h' \in \mathcal{H}'\}$ and $S' = ((x_1',y_1'),\ldots,(x_n',y_n'))$ with $(x_i',y_i') = \alpha(x_i,y_i)$ for $i \in [n]$.

(ii) Preservation of the empirical Rademacher complexity:

$$
\Re_{S}(\widehat{\mathcal{H}}_{\beta}) = \Re_{S'}(\widehat{\mathcal{H}}').
$$
We can design a reduction scheme in a straightforward way as follows. Given a sample $S$ of the original problem, we construct the sample $S'$ of the reduced problem by $\alpha$ and obtain $h'$ by solving the ERM of the reduced problem. Then, we obtain the final hypothesis $h$ by $\beta$.
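
The scheme above can be sketched in a few lines of code. Everything concrete below (the label-flipping $\alpha$, the finite threshold grid in the reduced ERM, the sign-flipping $\beta$) is a toy instantiation chosen only to make the sketch runnable, not a construction from the paper:

```python
def reduce_and_learn(sample, alpha, beta, solve_reduced_erm):
    """Reduction scheme: build S' via alpha, solve the reduced ERM, map back via beta."""
    reduced_sample = [alpha(x, y) for (x, y) in sample]
    h_prime = solve_reduced_erm(reduced_sample)
    return beta(h_prime)

# Toy instantiation (illustrative assumption): alpha negates labels, the reduced
# ERM picks the best threshold t from a finite grid (classify +1 iff x > t), and
# beta maps the threshold back to a hypothesis for the original labels.
alpha = lambda x, y: (x, -y)
beta = lambda t: (lambda x: 1 if x <= t else -1)

def solve_reduced_erm(reduced_sample):
    grid = [0.0, 1.0, 2.0, 3.0]
    errors = lambda t: sum(1 for x, y in reduced_sample if (1 if x > t else -1) != y)
    return min(grid, key=errors)
```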

We derive the following generalization risk bound using the propositions on the empirical Rademacher complexity.

Corollary 2. Let $S = ((x_1,y_1),\ldots,(x_n,y_n))$ be a sample drawn i.i.d. according to an unknown distribution $\mathcal{D}$ in an original problem $(\mathcal{H},\ell)$. If $(\mathcal{H},\ell)$ is ERM-reducible to $(\mathcal{H}',\ell')$, then for $S' = (\alpha(x_1,y_1),\ldots,\alpha(x_n,y_n))$ and $h = \beta(h')$, the following generalization risk bound holds with probability at least $1 - \delta$ for all $h \in \mathcal{H}_{\beta}$:

$$
{R}_{\mathcal{D}}(h) \leq {\widehat{R}}_{S'}(h') + 2\Re_{S'}(\widehat{\mathcal{H}}') + 3\sqrt{\log(2/\delta)/(2n)}.
$$

That is, we can guarantee the generalization bound of the original problem because the empirical Rademacher complexity is preserved.

§ 4 MIL-REDUCTION FRAMEWORK

This section is the highlight of this paper. We define ERM-reducibility to MIL and show the reducibility condition. Moreover, we show that some theoretical analyses can be simplified. We use primed symbols (e.g., $\mathcal{X}'$) to indicate that MIL is the reduced problem.

§ 4.1 PROBLEM FORMULATION OF MIL

Let $\mathcal{Z} \subseteq \mathbb{R}^{d'}$ be the instance space. $\mathcal{X}' \subseteq 2^{\mathcal{Z}}$ is an input space, and a bag $x' \in \mathcal{X}'$ is a finite set of instances chosen from $\mathcal{Z}$. Let $\mathcal{Y}' = \{-1,1\}$ be an output space. Following the formulation of [Sabato and Tishby, 2012], we define, for the rest of the paper, a MIL problem as a pair $(\mathcal{H}',\ell')$ of a hypothesis class $\mathcal{H}'$ and a loss function $\ell'$ of the form:

$$
\mathcal{H}' = \{h' : x' \mapsto \Psi_{p}(\{f_{2}(g(z)) \mid z \in x'\}) \mid g \in \mathcal{G}\}, \tag{1}
$$

$$
\ell' : (x',y',h') \mapsto f_{1}(y'h'(x')), \tag{2}
$$

where $\mathcal{G} \subseteq \{g : \mathcal{Z} \rightarrow \mathbb{R}\}$, $f_{1} : \mathbb{R} \rightarrow [0,1]$ is an $a$-Lipschitz function, $f_{2} : \mathbb{R} \rightarrow [-1,1]$ is a $b$-Lipschitz function, and $\Psi_{p} : 2^{[-1,1]} \rightarrow [-1,1]$ is a $p$-norm-like function, which is defined for any $p \in [1,\infty)$ as

$$
\Psi_{p}(V) = \left(\frac{1}{m}\sum_{i=1}^{m}(v_i + 1)^p\right)^{1/p} - 1
$$

for every finite set $V = \{v_1,v_2,\ldots,v_m\} \subseteq [-1,1]$. We define $\Psi_{\infty}$ as $\lim_{p \rightarrow \infty}\Psi_{p}$. Note that $\Psi_{p}$ is 1-Lipschitz for any $p$ [see Sabato and Tishby, 2012]. $\Psi_{p}$ is a user-defined function and behaves as an aggregation of bag information. Typical choices of $\Psi_{p}$ are the max operator ($p = \infty$) and the average ($p = 1$).
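
The aggregation $\Psi_p$ can be implemented directly from its definition; a minimal sketch:

```python
def psi_p(values, p):
    """The p-norm-like aggregation Psi_p over a finite set V of values in [-1, 1].

    Psi_1 is the average and Psi_inf (here: p = float('inf')) is the max operator.
    """
    if p == float("inf"):
        return max(values)
    m = len(values)
    return (sum((v + 1.0) ** p for v in values) / m) ** (1.0 / p) - 1.0
```

For growing $p$, `psi_p` approaches the max, consistent with the definition of $\Psi_\infty$ as a limit.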

The only difference from the hypothesis class of [Sabato and Tishby, 2012] is $f_{2}$. $f_{2}$ appears redundant (because $f_{2} \circ g$ can be replaced by a single function) but plays an important role in the reduction (examples are shown in Section 5).

§ 4.2 MIL-REDUCIBILITY

Now we define MIL-reducibility in a straightforward way.

Definition 3 (MIL-reducibility). A learning problem $(\mathcal{H},\ell)$ is said to be MIL-reducible if there exists a MIL problem $(\mathcal{H}',\ell')$ such that $(\mathcal{H},\ell)$ is ERM-reducible to $(\mathcal{H}',\ell')$.

§ 4.3 RADEMACHER COMPLEXITY BOUND

We show the empirical Rademacher complexity bound for MIL-reducible problems using our reduction scheme. As mentioned above, the main advantage of our reduction scheme is that it allows us to apply the empirical Rademacher complexity bound of the reduced problem to the original problem. In this paper, we utilize the bound provided by Sabato and Tishby [2012].

Theorem 3 (An application of [Sabato and Tishby, 2012]). Let $(\mathcal{H}',\ell')$ be a MIL problem defined in Eq. (1) and (2). Let $S' = ((x_1',y_1'),\ldots,(x_n',y_n'))$ be a sample with average bag size $r_{S'}$. Let $\widehat{\mathcal{G}} = \{f_{2} \circ g \mid g \in \mathcal{G}\}$. If there exist $C,\rho \geq 0$ such that for all sufficiently large $n$,

$$
\Re_{S'}(\widehat{\mathcal{G}}) \leq \frac{C\ln^{\rho}(n)}{\sqrt{n}},
$$

then

$$
\Re_{S'}(\widehat{\mathcal{H}}') = O\left(\frac{\log(a^{2}n^{2}r_{S'})\left(\frac{aC}{\rho+1}\ln^{\rho+1}(a^{2}n)\right)}{\sqrt{n}}\right),
$$

where $\widehat{\mathcal{H}}' = \{\widehat{h}' : (x',y') \mapsto f_{1}(y'h'(x')) \mid h' \in \mathcal{H}'\}$.

As mentioned in [Sabato and Tishby, 2012], we obtain the following bound when $\mathcal{G}$ is a set of linear functions.

Corollary 4. Let $\mathcal{G} = \{g : z \mapsto \langle w',z\rangle \mid w' \in \mathbb{R}^{d'}, \|w'\| \leq C_{1}\}$ and assume that $\|z\| \leq C_{2}$. Then, the following bound holds:

$$
\Re_{S'}(\widehat{\mathcal{H}}') = O\left(\frac{\log(a^{2}n^{2}r_{S'})\left(ab\,C_{1}C_{2}\ln(a^{2}n)\right)}{\sqrt{n}}\right).
$$

The above bound is easily derived from Theorem 3 [see the proof of Theorem 20 of Sabato and Tishby, 2012] and $\Re_{S'}(\widehat{\mathcal{G}}) \leq b\Re_{S'}(\mathcal{G}) \leq bC_{1}C_{2}/\sqrt{n} = bC_{1}C_{2}\ln^{0}(n)/\sqrt{n}$ [see, e.g., Theorems 5.8 and 5.10 of Mohri et al., 2018].

Using Theorem 3 and Corollary 2, we obtain a generalization risk bound for MIL-reducible problems.

§ 4.4 LEARNING ALGORITHM

For reduced MIL problems that satisfy certain conditions, we can immediately design a learning algorithm according to the condition. Suppose that $\mathcal{G}$ is a set of linear functions:

$$
\mathcal{G} = \{g : z \mapsto \langle w',z\rangle \mid w' \in \mathbb{R}^{d'}, \|w'\| \leq C_{1}\}. \tag{3}
$$

Let $S' = ((x_1',y_1'),\ldots,(x_n',y_n'))$. The ERM of MIL is formulated as follows:

$$
\min_{\|w'\| \leq C_{1}} \lambda\|w'\|^{2} + \sum_{i=1}^{n}f_{1}\left(y_{i}'\,\Psi_{p}\left(\{f_{2}(\langle w',z\rangle) \mid z \in x_{i}'\}\right)\right). \tag{4}
$$

For the optimization problem (4), we show that the following propositions hold.

Proposition 5. If $y_{i}' = -1$ for any $i \in [n]$ for sample $S'$, $f_{1}$ is convex and nonincreasing${}^{1}$, $f_{2}$ is a nondecreasing convex function, and $\mathcal{G}$ is given as Eq. (3), then the ERM of $(\mathcal{H}',\ell')$ is a convex programming problem.

Proposition 6. If $f_{1}$ is nonincreasing and convex${}^{1}$, $f_{1}(c)$ is a homogeneous function of degree 1 for $c \in [-1,1]$${}^{2}$, $f_{2}$ is a nondecreasing convex function, and $\mathcal{G}$ is given as Eq. (3), then the ERM of $(\mathcal{H}',\ell')$ is a DC programming problem.

Generally, it is difficult to find a global minimum of a DC programming problem; however, it is known that we can find a solution that is an $\epsilon$-approximation of a local optimum [see, e.g., Le Thi and Dinh, 2018]. We introduce a standard DC algorithm to solve (4) in Algorithm 1 in Sec. D.
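
Although Algorithm 1 is deferred to Sec. D, the generic DC algorithm (DCA) pattern it follows is easy to sketch: repeatedly linearize the concave part and solve the resulting convex subproblem. The toy objective below ($x^4 - x^2$ with the split $g(x) = x^4$, $h(x) = x^2$) is an illustrative assumption, not the paper's objective (4):

```python
def dca(grad_h, argmin_linearized, x0, iters=100):
    """DC algorithm for min f = g - h (g, h convex): at each step, linearize h
    at the current iterate and minimize the convex surrogate g(x) - <grad_h(x_k), x>."""
    x = x0
    for _ in range(iters):
        x = argmin_linearized(grad_h(x))
    return x

# Toy DC instance: f(x) = x^4 - x^2 with g(x) = x^4 and h(x) = x^2.
# The surrogate min_x x^4 - t*x has the closed form x = sign(t) * (|t|/4)^(1/3).
grad_h = lambda x: 2.0 * x
argmin_linearized = lambda t: (1.0 if t >= 0 else -1.0) * (abs(t) / 4.0) ** (1.0 / 3.0)
```

Starting from $x_0 = 1.0$, the iterates converge to the local minimizer $1/\sqrt{2}$ of the toy objective, illustrating the local-optimum guarantee mentioned above.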

The propositions indicate that, if $(\mathcal{H},\ell)$ is MIL-reducible to $(\mathcal{H}',\ell')$ and satisfies either of the above conditions, then the solution $h \in \mathcal{H}_{\beta}$ of the original problem can be obtained by a unified learning algorithm.

§ 5 MIL-REDUCIBLE EXAMPLES

In this section, we demonstrate that various learning problems can be reduced to MIL by the proposed reduction framework. The results imply that our MIL-reduction gives a unified framework for designing and analyzing learning algorithms for various learning problems${}^{3}$.

§ 5.1 THE EXISTING PROBLEMS

§ 5.1.1 MULTI-CLASS LEARNING PROBLEM

Problem setting: Let $\mathcal{X} \subseteq \mathbb{R}^{d}$ be an instance space and $\mathcal{Y} = [k]$ be an output space. The learner receives a set of labeled instances $S = ((x_1,y_1),\ldots,(x_n,y_n)) \in (\mathcal{X} \times \mathcal{Y})^n$, where each instance is drawn i.i.d. according to some unknown distribution $\mathcal{D}$. The learner predicts the label of $x$ using a hypothesis $h \in \mathcal{H} = \{x \mapsto \arg\max_{y \in [k]}\langle w_{y},x\rangle \mid \forall j \in [k], w_{j} \in \mathbb{R}^{d}\}$. Let $\ell : (x,y,h) \mapsto \Gamma(\langle w_{y},x\rangle - \max_{j \in \mathcal{Y} \smallsetminus y}\langle w_{j},x\rangle)$ be a loss function, where $\Gamma : \mathbb{R} \rightarrow [0,1]$ is a convex, nonincreasing and $a$-Lipschitz function. The generalization risk and empirical risk of $h$ are defined as:

$$
{R}_{\mathcal{D}}(h) = \mathbb{E}_{(x,y) \sim \mathcal{D}}\,\ell(x,y,h), \qquad {\widehat{R}}_{S}(h) = \frac{1}{n}\sum_{i=1}^{n}\ell(x_i,y_i,h).
$$

Using the MIL-reduction framework, we obtain the following:

Theorem 7. The multi-class learning problem is MIL-reducible.

Proof. For any $(x,y)$, we define

$$
\eta_{(x,y)} = (\mathbf{0},\ldots,\mathbf{0},\underbrace{x}_{y\text{-th block}},\mathbf{0},\ldots,\mathbf{0}),
$$

where $\mathbf{0}$ is a $d$-dimensional vector whose elements are all 0. On the MIL-reduction framework, suppose that $p = \infty$; $f_{1}(c) = \Gamma(2cC_{1}C_{2})$ and $f_{2}(c) = c/(2C_{1}C_{2})$ (a scaling function into $[-1,+1]$); $\alpha(x,y) = (x'_{(x,y)},y')$ where $x'_{(x,y)} = \{\eta_{(x,j)} - \eta_{(x,y)} \mid \forall j \in \mathcal{Y} \smallsetminus y\}$ and $y' = -1$; for any $z \in \mathbb{R}^{kd}$, $\mathcal{G} = \{g : z \mapsto \langle(w_{1}',\ldots,w_{k}'),z\rangle \mid w_{j}' \in \mathbb{R}^{d}, \forall j \in [k], \|W'\| \leq C_{1}\}$ where $W' = (w_{1}',\ldots,w_{k}')$ and $\|W'\| = \sqrt{\sum_{j=1}^{k}\|w_{j}'\|^{2}}$; $\beta(h') : x \mapsto \arg\max_{j \in [k]}\langle w_{j}',x\rangle$. Then, for any $(x,y)$ and $h \in \mathcal{H}$,

$$
\ell'(x',y',h') = f_{1}\left(y'\,\Psi_{p}\left(\{f_{2}(g(z)) \mid z \in x'_{(x,y)}\}\right)\right)
$$

$$
= \Gamma\left(-\Psi_{\infty}\left(\{g(z) \mid z \in x'_{(x,y)}\}\right)\right)
$$

$$
= \Gamma\left(-\max_{j \in \mathcal{Y} \smallsetminus y}\left(\langle w_{j},x\rangle - \langle w_{y},x\rangle\right)\right)
$$

$$
= \ell(x,y,h).
$$
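
The loss identity in the proof can be checked numerically: applying $g$ to a bag element $\eta_{(x,j)} - \eta_{(x,y)}$ yields $\langle w_j, x\rangle - \langle w_y, x\rangle$, and the $f_1$, $f_2$ scalings cancel because $\Psi_\infty$ commutes with positive scaling. A small sketch, where the clipped surrogate chosen for $\Gamma$ is an illustrative assumption:

```python
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def original_loss(W, x, y, gamma):
    # l(x, y, h) = Gamma(<w_y, x> - max_{j != y} <w_j, x>)
    return gamma(dot(W[y], x) - max(dot(W[j], x) for j in range(len(W)) if j != y))

def reduced_loss(W, x, y, gamma):
    # MIL side: y' = -1, Psi_inf = max over the bag {eta_(x,j) - eta_(x,y)};
    # g applied to a bag element equals <w_j, x> - <w_y, x>.
    bag = [dot(W[j], x) - dot(W[y], x) for j in range(len(W)) if j != y]
    return gamma(-max(bag))

rng = random.Random(0)
gamma = lambda c: min(1.0, max(0.0, 1.0 - c))  # illustrative clipped hinge into [0, 1]
W = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(4)]  # k = 4 classes, d = 3
x = [rng.uniform(-1, 1) for _ in range(3)]
```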
The empirical Rademacher complexity is immediately derived as follows by observing the reduction process.

Corollary 8. We assume that $\|x_{i}\| \leq C_{2}$ for any $i \in [n]$. In the MIL problem reduced from the multi-class learning problem, the empirical Rademacher complexity of $\widehat{\mathcal{H}}'$ is given as:

$$
\Re_{S'}(\widehat{\mathcal{H}}') = O\left(\frac{\log(\widehat{a}^{2}2n^{2}(k-1))\left(2\widehat{a}\ln(\widehat{a}^{2}n)\right)}{\sqrt{n}}\right),
$$

where $\widehat{a} = 2aC_{1}C_{2}$ and we assume $\|w'\| \leq C_{1}$ in the reduced MIL problem.

We used the fact that the bag size is $(k-1)$ for all $x_{i}'$ (i.e., $r_{S'} = k-1$) and $\Re_{S'}(\widehat{\mathcal{G}}) \leq 2/\sqrt{n}$ by setting $f_{2}(c) = c/C_{1}C_{2}$. Using Corollary 2, we can obtain the generalization risk bound for multi-class learning.

The learning algorithm is obtained by the following result.

Corollary 9. The ERM of the MIL problem reduced from multi-class learning is a convex programming problem.

The proof of Theorem 7 shows that $f_{2}$ is nondecreasing and convex and that $y_{i}' = -1$ for all $i \in [n]$. Therefore, by Proposition 5, if we consider a $\Gamma$ that is nonincreasing and convex, the ERM of the reduced MIL problem is a convex programming problem and can be solved in polynomial time.

${}^{1}$ More precisely, the extended-value extension of $f_{1}$ must also be nonincreasing (see details in [Boyd and Vandenberghe, 2004]).

${}^{2}$ For example, the hinge-loss function $f : c \mapsto \max\{0, 1-c\}$ satisfies this condition.

${}^{3}$ The reductions of multi-task learning and top-1 ranking learning with negative feedback are shown in Sec. 6 and 5 owing to space limitations.

§ 5.1.2 COMPLEMENTARILY LABELED LEARNING PROBLEM

Complementarily labeled learning was proposed by Ishida et al. [2017]. In this problem, some training instances are complementarily labeled (e.g., "instance $x_{i}$ is NOT $y_{i}$"). We essentially follow the problem setting and assumptions provided by Ishida et al. [2017].

Problem setting: Let $\mathcal{X} \subseteq \mathbb{R}^{d}$ be an instance space and $\mathcal{Y} = [k]$ be an output space. Let $\mathcal{D}$ be an unknown distribution over $\mathcal{X} \times \mathcal{Y}$. We assume that the learner receives a sample $S$ drawn i.i.d. according to a distribution $\mathcal{D}'$ which provides the true label with unknown probability $\theta$ and a complementary label with unknown probability $1-\theta$. Moreover, we assume that the complementary label is chosen uniformly (i.e., each complementary label is chosen with probability $1/(k-1)$)${}^{4}$. More formally, we assume that the sample is given as $S = ((x_1,y_1,\gamma_1),\ldots,(x_n,y_n,\gamma_n))$, drawn i.i.d. according to the distribution $\mathcal{D}'$ over $\mathcal{X} \times \mathcal{Y} \times \{\text{False},\text{True}\}$, where $\gamma_{i} = \text{True}$ means that $y_{i}$ is the true label and $\gamma_{i} = \text{False}$ means that $y_{i}$ is a complementary label (i.e., it indicates that $x_{i}$ is NOT $y_{i}$). For any $(x,y) \sim \mathcal{D}$, $\mathcal{D}'(x,y,\text{True}) = \theta$ and $\mathcal{D}'(x,\bar{y},\text{False}) = \frac{1-\theta}{k-1}$ for any $\bar{y} \neq y$. The other basic settings are the same as those of the aforementioned multi-class learning. The learner predicts the label of $x$ using a hypothesis $h \in \mathcal{H} = \{x \mapsto \arg\max_{y \in [k]}\langle w_{y},x\rangle \mid \forall j \in [k], w_{j} \in \mathbb{R}^{d}\}$. The final goal of the learner is to find $h \in \mathcal{H}$ with a small multi-class classification risk:

$$
{R}_{\mathcal{D}}^{\mathrm{MC}}(h) = \mathbb{E}_{(x,y) \sim \mathcal{D}}\,I(y \neq h(x)).
$$

However, it is difficult to minimize the empirical multi-class classification risk directly using the complementarily labeled data. Therefore, we consider the following risk:

$$
{R}_{\mathcal{D}'}^{\mathrm{LC}}(h) = \mathbb{E}_{(x,y,\gamma) \sim \mathcal{D}'}\left[I(\gamma = (y \neq h(x)))\right].
$$

This risk implies that when $\gamma = \text{True}$, the learner does not incur a risk if it predicts the true label. When $\gamma = \text{False}$, the learner does not incur a risk unless it predicts the assigned complementary (non-true) label. Thus, the risk measure is defined using the pair $(y,\gamma) \in \mathcal{Y} \times \{\text{False},\text{True}\}$. We can show that achieving a small ${R}_{\mathcal{D}'}^{\mathrm{LC}}(h)$ is consistent with achieving a small ${R}_{\mathcal{D}}^{\mathrm{MC}}(h)$ as follows:

Lemma 1. For any $h \in \mathcal{H}$, ${R}_{\mathcal{D}}^{\mathrm{MC}}(h) = \frac{k-1}{\theta(k-2)+1}{R}_{\mathcal{D}'}^{\mathrm{LC}}(h)$ holds.
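
Lemma 1 can be verified by exact enumeration over a small finite distribution. The dictionary representation of $\mathcal{D}$, the 0-indexed labels, and the concrete numbers below are illustrative assumptions:

```python
def mc_risk(h, D):
    # R^MC_D(h) = E[ I(y != h(x)) ] for a finite distribution D: {(x, y): prob}
    return sum(p for (x, y), p in D.items() if h(x) != y)

def lc_risk(h, D, theta, k):
    # R^LC_{D'}(h) = E[ I(gamma = (y != h(x))) ] under the complementary model:
    # with prob theta the true label is shown (gamma = True); otherwise one of
    # the k-1 complementary labels is shown uniformly (gamma = False).
    total = 0.0
    for (x, y), p in D.items():
        total += p * theta * (1.0 if y != h(x) else 0.0)  # gamma = True
        for ybar in range(k):
            if ybar != y:
                # gamma = False: risk is incurred iff the learner predicts the shown label
                total += p * (1.0 - theta) / (k - 1) * (1.0 if ybar == h(x) else 0.0)
    return total
```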

Thus, minimizing ${R}_{\mathcal{D}'}^{\mathrm{LC}}(h)$ is a reasonable way to achieve a high multi-class classification accuracy. Generally, there is no loss function $\ell((x,\gamma),y,h)$ that is a convex upper bound on the zero-one loss $I(\gamma = (y \neq h(x)))$ w.r.t. $w$. This is because if $\gamma = \text{True}$ the relevant term is a max, which is convex w.r.t. $w$; however, if $\gamma = \text{False}$ the relevant term is $-\max = \min$, which is concave w.r.t. $w$. Therefore, we consider a convex upper-bounded loss only on the risk for the complementarily labeled data (i.e., a concave risk for the normally labeled data) using $\Gamma : \mathbb{R} \rightarrow [0,1]$ as $\Gamma(\max_{j \in \mathcal{Y} \smallsetminus y}\langle(w_{j}-w_{y}),x\rangle)$. We then define the nonconvex risk $\ell(x,(\gamma,y),h) = \Gamma(s(\gamma) \cdot \max_{j \in \mathcal{Y} \smallsetminus y}\langle(w_{j}-w_{y}),x\rangle)$, where $s(\gamma) = 1$ if $\gamma = \text{True}$ and $s(\gamma) = -1$ otherwise. The empirical risk is formulated as:

$$
{\widehat{R}}_{S}^{\mathrm{LC}}(h) = \frac{1}{n}\sum_{i=1}^{n}\ell(x_i,(\gamma_i,y_i),h).
$$

The following is obtained by the MIL-reduction framework.

Theorem 10. Complementarily labeled learning is MIL-reducible.

The difference from the reduction for multi-class learning is that $y'$ now takes values in $\{-1,1\}$: $y'$ behaves as a switch that selects between the loss for complementarily labeled and normally labeled data.

The empirical Rademacher complexity is bounded as follows:

Corollary 11. We assume that $\|x_{i}\| \leq C_{2}$ for any $i \in [n]$. In the MIL problem reduced from complementarily labeled learning, the empirical Rademacher complexity of $\widehat{\mathcal{H}}'$ is given by:

$$
\Re_{S'}(\widehat{\mathcal{H}}') = O\left(\frac{\log(\widehat{a}^{2}n^{2}(k-1))\left(2\widehat{a}\ln(\widehat{a}^{2}n)\right)}{\sqrt{n}}\right),
$$

where $\widehat{a} = 2aC_{1}C_{2}$ and we assume $\|w'\| \leq C_{1}$ in the reduced MIL problem.

We use the same argument as in Corollary 8. Using Corollary 2 and Lemma 1, we obtain the generalization bound for complementarily labeled learning.

The learning algorithm is derived by the following result:

Corollary 12. The ERM of the MIL problem reduced from complementarily labeled learning is a DC programming problem. If the sample contains only complementarily labeled data, the learning problem is a convex programming problem.

Generally, $y' \in \{-1,1\}$ in complementarily labeled learning. Using the proof of Theorem 10 and Proposition 6, if we consider a $\Gamma(c)$ that is nonincreasing and homogeneous of degree 1 for $c \in [-1,1]$, such as the hinge-loss function, we can solve the problem by the DC algorithm shown in Algorithm 1. Note that if the sample contains only complementarily labeled data (i.e., $\forall i \in [n], y_{i}' = -1$), the ERM becomes a convex programming problem.

${}^{4}$ This assumption was proposed by Ishida et al. [2017] as a reasonable scenario in some practical tasks (e.g., crowdsourcing).

${}^{5}$ Ishida et al. [2017] used a different surrogate risk. However, they and we have a common goal: to minimize ${R}_{\mathcal{D}}^{\mathrm{MC}}(h)$.

§ 5.1.3 MULTI-LABEL LEARNING PROBLEM

Problem setting: Let $\mathcal{X} \subseteq \mathbb{R}^{d}$ be an instance space, $\mathcal{Y} = \{-1,1\}^{k}$ be an output space, and $\mathcal{D}$ be an unknown distribution over $\mathcal{X} \times \mathcal{Y}$. Unlike the standard multi-class setting introduced in Section 5.1.1, each instance may have multiple labels (e.g., in text-categorization tasks, some texts have multiple topics such as IT and business). $y^{j}$ denotes the $j$-th element of $y$. The learner receives a labeled sample $S = ((x_1,y_1),\ldots,(x_n,y_n)) \in (\mathcal{X} \times \mathcal{Y})^{n}$ drawn i.i.d. according to the distribution $\mathcal{D}$. The learner predicts whether $x$ belongs to class $j \in [k]$ or not using a hypothesis $h \in \mathcal{H} = \{(x,j) \mapsto \operatorname{sign}(\langle w_{j},x\rangle) \mid \forall w_{j} \in \mathbb{R}^{d}\}$. Let $\ell : (x,y,h) \mapsto \frac{1}{k}\sum_{j=1}^{k}\Gamma(-y^{j}\langle w_{j},x\rangle)$, where $\Gamma : \mathbb{R} \rightarrow [0,1]$ is a convex, nondecreasing and $b$-Lipschitz function${}^{6}$. The generalization and empirical risks of $h$ are defined as:

$$
{R}_{\mathcal{D}}(h) = \mathbb{E}_{(x,y) \sim \mathcal{D}}\left[\ell(x,y,h)\right], \qquad {\widehat{R}}_{S}(h) = \frac{1}{n}\sum_{i=1}^{n}\ell(x_i,y_i,h).
$$

Reduction to MIL

Theorem 13. Multi-label learning is MIL-reducible.

Proof. On the MIL-reduction framework, suppose that $p = 1$; $f_{1}(a) = -a$ for $a \in \mathbb{R}$; $f_{2}$ is $\Gamma$; $\alpha(x,y) = (x'_{(x,y)},y')$ where $x'_{(x,y)} = \{(-y^{1}x,1),\ldots,(-y^{k}x,k)\}$ and $y' = -1$; $\mathcal{G} = \{g : (z,j) \mapsto \langle w_{j}',z\rangle \mid w_{j}' \in \mathbb{R}^{d}, \forall j \in [k], \|W'\| \leq C_{1}\}$ where $W' = (w_{1}',\ldots,w_{k}')$; $\beta(h') : (x,j) \mapsto \operatorname{sign}(\langle w_{j}',x\rangle)$. For any $(x,y)$ and $h \in \mathcal{H}$, we have that

$$
\ell'(x',y',h') = f_{1}\left(y'\,\Psi_{p}\left(\{f_{2}(g(z)) \mid z \in x'_{(x,y)}\}\right)\right)
$$

$$
= \frac{1}{|x'_{(x,y)}|}\sum_{(-y^{j}x,\,j) \in x'_{(x,y)}}\Gamma\left(-y^{j}\langle w_{j}',x\rangle\right)
$$

$$
= \ell(x,y,h).
$$
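
The identity in the proof of Theorem 13 can also be checked numerically: with $\Psi_1$ (the average), $f_1(a) = -a$, and $y' = -1$, the reduced loss equals the average per-class loss. A small sketch, where the clipped nondecreasing surrogate for $\Gamma$ and the concrete dimensions are illustrative assumptions:

```python
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def original_ml_loss(W, x, y, gamma):
    # l(x, y, h) = (1/k) sum_j Gamma(-y^j <w_j, x>)
    k = len(W)
    return sum(gamma(-y[j] * dot(W[j], x)) for j in range(k)) / k

def reduced_ml_loss(W, x, y, gamma):
    # Bag {(-y^j x, j)}, g((z, j)) = <w_j', z>, f2 = Gamma, y' = -1,
    # Psi_1 = average, f1(a) = -a, so f1(y' * Psi_1(bag)) = mean(bag).
    bag = [gamma(dot(W[j], [-y[j] * xi for xi in x])) for j in range(len(W))]
    return sum(bag) / len(bag)

rng = random.Random(1)
gamma = lambda c: min(1.0, max(0.0, 0.5 + 0.5 * c))  # illustrative nondecreasing surrogate into [0, 1]
W = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(4)]  # k = 4 labels, d = 3
x = [rng.uniform(-1, 1) for _ in range(3)]
y = [rng.choice((-1, 1)) for _ in range(4)]
```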

The empirical Rademacher complexity is bounded as follows:

Corollary 14. We assume that $\|x_{i}\| \leq C_{2}$ for any $i \in [n]$. In the reduced MIL problem, the empirical Rademacher complexity of $\widehat{\mathcal{H}}'$ is given as follows:

$$
\Re_{S'}(\widehat{\mathcal{H}}') = O\left(\frac{\log(2n^{2}k)\left(bC_{1}C_{2}\ln(n)\right)}{\sqrt{n}}\right),
$$

where $\|w'\| \leq C_{1}$ in the reduced MIL problem.
+
335
+ We used the fact that the size of each bag is $k$ . Using Corollary 2, we obtain the generalization risk bound for the multi-label learning.
336
+
337
+ The learning algorithm is obtained by the following result.
338
+
339
+ Corollary 15. The reduced ERM of the MIL from multi-label learning is a convex programming problem.
340
+
341
+ The proof of Theorem 13 shows that ${f}_{1}$ is nonincreasing and convex, and that ${y}_{i}^{\prime } = - 1$ for all $i \in \left\lbrack n\right\rbrack$. Therefore, by Proposition 5, if we choose a $\Gamma$ that is nondecreasing and convex, the reduced problem is a convex programming problem and can be solved in polynomial time.
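As a sanity check on the reduction in the proof of Theorem 13, the identity ${\ell}^{\prime}\left({x}^{\prime},{y}^{\prime},{h}^{\prime}\right) = \ell\left(x,y,h\right)$ can be verified numerically. The sketch below uses hypothetical data and takes ${\Psi}_{1}$ to be the bag average and the softplus as a stand-in for $\Gamma$ (illustrative assumptions; softplus is convex, nondecreasing and $1$-Lipschitz, though not $\left[0,1\right]$-valued):

```python
import math

def gamma(a):
    # softplus: a convex, nondecreasing, 1-Lipschitz stand-in for Gamma
    # (an illustrative assumption; the paper additionally restricts the range)
    return math.log1p(math.exp(a))

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# hypothetical instance with d = 2 features and k = 3 classes
x = [1.0, -2.0]
y = [+1, -1, +1]                                   # y in {-1, +1}^k
W = [[0.5, 0.1], [-0.3, 0.2], [0.0, 1.0]]          # w_1, ..., w_k (= w'_j)

# original multi-label loss: average of Gamma(-y^j <w_j, x>) over the classes
loss = sum(gamma(-yj * dot(wj, x)) for yj, wj in zip(y, W)) / len(y)

# reduction: bag x'_{(x,y)} = {(-y^j x, j)}, y' = -1, f_1(a) = -a, Psi_1 = mean
bag = [([-yj * xi for xi in x], j) for j, yj in enumerate(y)]
scores = [gamma(dot(W[j], z)) for z, j in bag]       # f_2(g(z, j)) with f_2 = Gamma
reduced_loss = -(-1) * (sum(scores) / len(scores))   # f_1(y' * Psi_1({...}))

assert abs(loss - reduced_loss) < 1e-9
```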
342
+
343
+ § 5.2 APPLICATION TO THE NEW PROBLEMS
344
+
345
+ § 5.2.1 MULTI-LABEL LEARNING WITH PERFECTIONISTIC LOSS
346
+
347
+ Problem setting: In standard multi-label learning (see Sec. 5.1.3), we consider the average prediction error (loss) over the classes. By contrast, here we consider a perfectionistic error in the multi-label learning problem. More formally, we consider the following loss:
348
+
349
+ $$
350
+ \ell : \left( {x,y,h}\right) \mapsto \mathop{\max }\limits_{{j \in \left\lbrack k\right\rbrack }}\Gamma \left( {-{y}^{j}\left\langle {{w}_{j},x}\right\rangle }\right) ,
351
+ $$
352
+
353
+ where $\Gamma : \mathbb{R} \rightarrow \left\lbrack {0,1}\right\rbrack$ is a convex, nondecreasing and $b$-Lipschitz function. This loss means that the learner incurs a loss unless it perfectly predicts all the correct labels. The generalization and empirical risks of $h$ are given as ${R}_{\mathcal{D}}\left( h\right) = {\mathbb{E}}_{\left( {x,y}\right) \sim \mathcal{D}}\left\lbrack {\ell \left( {x,y,h}\right) }\right\rbrack$ and ${\widehat{R}}_{S}\left( h\right) = \frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}\ell \left( {{x}_{i},{y}_{i},h}\right)$ , respectively.
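To illustrate the difference from the averaged loss of Sec. 5.1.3, the sketch below evaluates both losses on hypothetical data (softplus as a stand-in for $\Gamma$; all values illustrative). A single badly predicted class dominates the perfectionistic loss, which always upper-bounds the average:

```python
import math

def gamma(a):
    # softplus stand-in for Gamma (illustrative assumption)
    return math.log1p(math.exp(a))

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# hypothetical instance: the third class is badly predicted (y^3 = +1, <w_3, x> < 0)
x = [1.0, -2.0]
y = [+1, -1, +1]
W = [[0.5, 0.1], [-0.3, 0.2], [0.0, 1.0]]

margins = [-yj * dot(wj, x) for yj, wj in zip(y, W)]
perfectionistic = max(gamma(m) for m in margins)           # this section's loss
average = sum(gamma(m) for m in margins) / len(margins)    # averaged loss of Sec. 5.1.3

assert perfectionistic >= average
```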
354
+
355
+ Using MIL-reduction framework, we obtain the following:
356
+
357
+ Theorem 16. Multi-label learning with perfectionistic loss is MIL-reducible.
358
+
359
+ This can be derived by the same argument as for multi-label learning, except that $p = \infty$ (see Sec. 4).
360
+
361
+ The empirical Rademacher complexity is bounded as:
362
+
363
+ Corollary 17. We assume that $\begin{Vmatrix}{x}_{i}\end{Vmatrix} \leq {C}_{2}$ for any $i \in \left\lbrack n\right\rbrack$ . In the reduced MIL problem, the empirical Rademacher complexity of ${\widehat{\mathcal{H}}}^{\prime }$ is given as follows:
364
+
365
+ $$
366
+ {\Re }_{{S}^{\prime }}\left( {\widehat{\mathcal{H}}}^{\prime }\right) = O\left( \frac{\log \left( {2{n}^{2}k}\right) \left( {b{C}_{1}{C}_{2}\ln \left( n\right) }\right) }{\sqrt{n}}\right) ,
367
+ $$
368
+
369
+ where we assume $\begin{Vmatrix}{w}^{\prime }\end{Vmatrix} \leq {C}_{1}$ .
370
+
371
+ Interestingly, we obtain the same generalization risk bound as for standard multi-label learning.
372
+
373
+ The learning algorithm is derived by the following result.
374
+
375
+ ${}^{6}$ Note that we use the negative score $- {y}^{j}\left\langle {{w}_{j},x}\right\rangle$ to employ a nondecreasing $\Gamma$ .
376
+
377
+ Corollary 18. The reduced ERM of the MIL from multi-label learning with perfectionistic loss is a convex programming problem.
378
+
379
+ This is easily obtained by observing the reduction process shown in Sec. 4 and using Proposition 5.
380
+
381
+ A naive approach for multi-label learning with perfectionistic loss is to reduce it to multi-class learning. That is, we regard all combinations of the labels as separate classes and solve a ${2}^{k}$-class learning problem, at high computational cost. However, by the above corollary, multi-label learning with perfectionistic loss can be solved efficiently.
382
+
383
+ § 5.2.2 TOP-1 RANKING LEARNING
384
+
385
+ Learning to rank is a fundamental problem, and many applications, such as recommendation systems, exist. We consider the following natural scenario in a recommendation problem; a learner has a set that contains several items, and it wishes to recommend an item to a target user from the set.
386
+
387
+ Problem setting: Let $\mathcal{X} \subseteq {\mathbb{R}}^{d}$ be an instance space, and $s : \mathcal{X} \rightarrow \mathbb{R}$ be a target scoring function. $A$ denotes a finite set of instances selected from $\mathcal{X}$ . The learner receives the sequence of the sets of items and the chosen items $S = \left( {{A}_{1},{x}_{1}^{ * }}\right) ,\ldots ,\left( {{A}_{n},{x}_{n}^{ * }}\right)$ , where each ${x}_{i}^{ * } \in {A}_{i}$ is the highest-valued item determined by the target function $s$ . $k$ denotes the average size of the item sets in $S$ , that is, $k = \frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}\left| {A}_{i}\right|$ . Each sample set of items is drawn i.i.d. according to an unknown distribution $\mathcal{D}$ over ${2}^{\mathcal{X}}$ . Assume that the learner predicts the item from the item set using the hypothesis $h \in \mathcal{H} = \left\{ {A \mapsto \arg \mathop{\max }\limits_{{x \in A}}\langle w,x\rangle \mid w \in {\mathbb{R}}^{d}}\right\}$ . Let $\ell \left( {A,{x}^{ * },h}\right)$ be a convex upper bound on the zero-one loss function $I\left( {{x}^{ * } \neq h\left( A\right) }\right)$ . Equivalently, we consider the zero-one loss $I\left( {\left\langle {w,{x}^{ * }}\right\rangle - \mathop{\max }\limits_{{x \in A \smallsetminus {x}^{ * }}}\langle w,x\rangle \leq 0}\right)$ and its convex upper-bound loss $\ell : \left( {A,{x}^{ * },h}\right) \mapsto \Gamma \left( {\left\langle {w,{x}^{ * }}\right\rangle - \mathop{\max }\limits_{{x \in A \smallsetminus {x}^{ * }}}\langle w,x\rangle }\right)$ where $\Gamma : \mathbb{R} \rightarrow \left\lbrack {0,1}\right\rbrack$ is a convex, nonincreasing and $a$-Lipschitz function. The goal of the learner is to find $h \in \mathcal{H}$ with a small misranking risk w.r.t. the target $s$ . Thus, the generalization and empirical risks are formulated as follows:
388
+
389
+ $$
390
+ {R}_{\mathcal{D}}\left( h\right) = \underset{A \sim \mathcal{D}}{\mathbb{E}}\left\lbrack {\ell \left( {A,{x}^{ * },h}\right) }\right\rbrack ,\;{\widehat{R}}_{S}\left( h\right) = \frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}\ell \left( {{A}_{i},{x}_{i}^{ * },h}\right) ,
391
+ $$
392
+
393
+ where ${x}^{ * } = \arg \mathop{\max }\limits_{{x \in A}}s\left( x\right)$ .
394
+
395
+ We obtain the following by using MIL-reduction framework:
396
+
397
+ Theorem 19. Top-1 ranking learning is MIL-reducible.
398
+
399
+ The reducible condition is satisfied when we set $\alpha \left( {A,{x}^{ * }}\right) = \left( {{x}^{\prime },{y}^{\prime }}\right)$ where ${x}^{\prime } = \left\{ {x - {x}^{ * } \mid x \in A \smallsetminus {x}^{ * }}\right\}$ and ${y}^{\prime } = - 1$ (so that ${y}_{i}^{\prime } = - 1$ for all $i \in \left\lbrack n\right\rbrack$ ). The details of the reduction process are in Sec. 1. The empirical Rademacher complexity bound is as follows:
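Since $\ell\left( {A,{x}^{ * },h}\right) = \Gamma\left( {\left\langle {w,{x}^{ * }}\right\rangle - \mathop{\max }\limits_{{x \in A \smallsetminus {x}^{ * }}}\langle w,x\rangle }\right) = \Gamma\left( { - \mathop{\max }\limits_{{z \in {x}^{\prime }}}\langle w,z\rangle }\right)$, one consistent choice of the remaining components is ${f}_{2}$ the identity, ${\Psi }_{\infty }$ the bag maximum, and ${f}_{1} = \Gamma$ (the exact factorization is given in the appendix the text refers to). The sketch below checks the equality numerically on hypothetical data with a hinge-type $\Gamma$:

```python
def gamma(a):
    # hinge-type surrogate: a convex, nonincreasing stand-in for Gamma
    return max(0.0, 1.0 - a)

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# hypothetical item set A, target choice x*, and hypothesis weights w
A = [[1.0, 0.0], [0.3, 0.7], [-0.5, 0.2]]
w = [0.8, -0.1]
x_star = A[0]                            # item chosen by the target scoring function
rest = [item for item in A if item is not x_star]

# original top-1 ranking loss
loss = gamma(dot(w, x_star) - max(dot(w, item) for item in rest))

# reduced MIL bag: x' = {x - x* | x in A \ {x*}}, with label y' = -1
bag = [[xi - si for xi, si in zip(item, x_star)] for item in rest]
# f_1(y' * Psi_inf({f_2(<w, z>)})) with f_1 = Gamma, f_2 = id, Psi_inf = max
reduced = gamma(-1 * max(dot(w, z) for z in bag))

assert abs(loss - reduced) < 1e-9
```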
400
+
401
+ Corollary 20. We assume that $\parallel x\parallel \leq {C}_{2}$ for any $x \in$ ${A}_{i},\forall i \in \left\lbrack n\right\rbrack$ . In the reduced MIL problem, the empirical Rademacher complexity of ${\widehat{\mathcal{H}}}^{\prime }$ is given as follows:
402
+
403
+ $$
404
+ {\Re }_{{S}^{\prime }}\left( {\widehat{\mathcal{H}}}^{\prime }\right) = O\left( \frac{\log \left( {{\widehat{a}}^{2}{n}^{2}\left( {k - 1}\right) }\right) \left( {\widehat{a}\ln \left( {2{\widehat{a}}^{2}n}\right) }\right) }{\sqrt{n}}\right) ,
405
+ $$
406
+
407
+ where $\widehat{a} = {2a}{C}_{1}{C}_{2}$ and we assume $\begin{Vmatrix}{w}^{\prime }\end{Vmatrix} \leq {C}_{1}$ .
408
+
409
+ The generalization bound can be derived by applying ${r}_{{S}^{\prime }} =$ $k - 1$ and using the fact that $\parallel z\parallel \leq 2{C}_{2}$ for any $z \in {x}_{i}^{\prime },\forall i \in$ $\left\lbrack n\right\rbrack$ in the reduced MIL. By using Corollary 2, we can obtain the generalization risk bound for the Top-1 ranking learning.
410
+
411
+ The learning algorithm is designed by the following result:
412
+
413
+ Corollary 21. The reduced ERM of MIL from top-1 ranking learning is a convex programming problem.
414
+
415
+ The corollary can be easily derived from the reduction process detailed in [1].
416
+
417
+ Important extension: We can consider top-1 ranking learning with negative feedback, which is an extension of top-1 ranking learning. We show the details in Sec. J. Remarkably, the ERM of the reduced MIL problem is a DC programming problem.
418
+
419
+ § 6 KERNELIZED EXTENSION
420
+
421
+ Although we have so far considered a linear function class $\mathcal{G}$, in practice a nonlinear kernel is required for various learning tasks. A straightforward method is to employ a kernel-approximation technique [see, e.g., Sec.6.6 in Mohri et al., 2018], which constructs feature vectors $\Phi \left( x\right) \in {\mathbb{R}}^{D}$ with the theoretical guarantee that $\left\langle {\Phi \left( {x}_{1}\right) ,\Phi \left( {x}_{2}\right) }\right\rangle \approx K\left( {{x}_{1},{x}_{2}}\right)$ for a user-determined dimension $D$ . However, only a limited number of kernels can be used via the approximation technique. Therefore, we show the kernelized version of the reduction.
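As an aside, a standard instance of such a kernel-approximation technique is the random Fourier feature construction of Rahimi and Recht for shift-invariant kernels. The sketch below is illustrative only (standard library, hypothetical inputs): it samples $D$ random features for the RBF kernel $K\left( {{x}_{1},{x}_{2}}\right) = \exp \left( { - {\begin{Vmatrix}{x}_{1} - {x}_{2}\end{Vmatrix}}^{2}/2}\right)$ and checks that $\left\langle {\Phi \left( {x}_{1}\right) ,\Phi \left( {x}_{2}\right) }\right\rangle$ is close to $K\left( {{x}_{1},{x}_{2}}\right)$:

```python
import math
import random

random.seed(0)

def rbf(x, y):
    # target kernel K(x, y) = exp(-||x - y||^2 / 2)
    return math.exp(-sum((a - b) ** 2 for a, b in zip(x, y)) / 2.0)

d, D = 2, 20000                       # input dimension, number of random features
Ws = [[random.gauss(0.0, 1.0) for _ in range(d)] for _ in range(D)]
Bs = [random.uniform(0.0, 2.0 * math.pi) for _ in range(D)]

def phi(x):
    # random Fourier features: Phi_i(x) = sqrt(2/D) * cos(<w_i, x> + b_i)
    s = math.sqrt(2.0 / D)
    return [s * math.cos(sum(wi * xi for wi, xi in zip(w, x)) + b)
            for w, b in zip(Ws, Bs)]

x1, x2 = [0.3, -0.8], [1.0, 0.4]
approx = sum(a * b for a, b in zip(phi(x1), phi(x2)))
exact = rbf(x1, x2)
assert abs(approx - exact) < 0.05     # the error concentrates as D grows
```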
422
+
423
+ § 6.1 SETTINGS
424
+
425
+ We assume that an original problem is defined by $\mathcal{H},\ell ,\mathcal{X},\mathcal{Y}$ , and $\Phi : \mathcal{X} \rightarrow \mathbb{H}$ , where $\mathbb{H}$ is a reproducing kernel Hilbert space associated with $K\left( {{x}_{1},{x}_{2}}\right) = \left\langle {\Phi \left( {x}_{1}\right) ,\Phi \left( {x}_{2}\right) }\right\rangle$ . Setting computability aside, we can virtually consider the sample as $S = \left( {\left( {\Phi \left( {x}_{1}\right) ,{y}_{1}}\right) ,\ldots ,\left( {\Phi \left( {x}_{n}\right) ,{y}_{n}}\right) }\right)$ . The ERM-reducible condition is that there exist $\left( {{x}^{\prime },{y}^{\prime }}\right) = \alpha \left( {\Phi \left( x\right) ,y}\right)$ , $h = \beta \left( {h}^{\prime }\right)$ and ${\ell }^{\prime }$ that satisfy $\ell \left( {\Phi \left( x\right) ,y,h}\right) = {\ell }^{\prime }\left( {{x}^{\prime },{y}^{\prime },{h}^{\prime }}\right)$ for any $\left( {x,y}\right) \in \mathcal{X} \times \mathcal{Y}$ .
426
+
427
+ Let ${S}^{\prime } = \left( {\left( {{x}_{1}^{\prime },{y}_{1}^{\prime }}\right) ,\ldots ,\left( {{x}_{n}^{\prime },{y}_{n}^{\prime }}\right) }\right)$ and let $\mathcal{G} = \left\{ {g : z \mapsto \left\langle {{w}^{\prime },z}\right\rangle \mid {w}^{\prime } \in {\mathbb{H}}^{\prime }}\right\}$ . We assume that $\left( {\mathcal{H},\ell }\right)$ is MIL-reducible to $\left( {{\mathcal{H}}^{\prime },{\ell }^{\prime }}\right)$ . The ERM of the reduced MIL is formulated as:
428
+
429
+ $$
430
+ \mathop{\min }\limits_{{{w}^{\prime } \in {\mathbb{H}}^{\prime }}}\lambda {\begin{Vmatrix}{w}^{\prime }\end{Vmatrix}}_{{\mathbb{H}}^{\prime }} + {\mathcal{L}}_{{w}^{\prime }} \tag{5}
431
+ $$
432
+
433
+ where ${\mathcal{L}}_{{w}^{\prime }} = \mathop{\sum }\limits_{{i = 1}}^{n}{f}_{1}\left( {{y}_{i}^{\prime }{\Psi }_{p}\left( \left\{ {{f}_{2}\left( \left\langle {{w}^{\prime },z}\right\rangle \right) \mid z \in {x}_{i}^{\prime }}\right\} \right) }\right)$ .
434
+
435
+ ${}^{7}$ We consider an arg max with a fixed tie-breaking rule.
436
+
437
+ § 6.2 COMPUTABILITY
438
+
439
+ We show that the representer theorem holds for the optimization problem (5).
440
+
441
+ Theorem 22 (Representer theorem). An optimal solution of the ERM problem (5) has the form ${\widetilde{w}}^{\prime } = \mathop{\sum }\limits_{{z \in {P}_{{S}^{\prime }}}}{\mu }_{z}z$ , where ${P}_{{S}^{\prime }} = \mathop{\bigcup }\limits_{{i = 1}}^{n}{x}_{i}^{\prime }$ .
442
+
443
+ Thus, the ERM problem (5) is equivalently formulated as:
444
+
445
+ $$
446
+ \mathop{\min }\limits_{{\mathbf{\mu } \in {\mathbb{R}}^{\left| {P}_{{S}^{\prime }}\right| }}}\lambda \mathop{\sum }\limits_{{z,\widehat{z} \in {P}_{{S}^{\prime }}}}{\mu }_{z}{\mu }_{\widehat{z}}\langle z,\widehat{z}\rangle + {\mathcal{L}}_{\mathbf{\mu }},
447
+ $$
448
+
449
+ where ${\mathcal{L}}_{\mathbf{\mu }} = \mathop{\sum }\limits_{{i = 1}}^{n}{f}_{1}\left( {{y}_{i}^{\prime }{\Psi }_{p}\left( \left\{ {{f}_{2}\left( {\mathop{\sum }\limits_{{z \in {P}_{{S}^{\prime }}}}{\mu }_{z}\langle z,\widehat{z}\rangle }\right) \mid \widehat{z} \in {x}_{i}^{\prime }}\right\} \right) }\right)$ .
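Every term in this reformulated objective depends on ${w}^{\prime }$ only through inner products $\langle z,\widehat{z}\rangle$ between bag elements, i.e. kernel evaluations. The sketch below evaluates such an objective for an RBF kernel on a toy reduced-MIL sample; the choices of ${f}_{1}$ (hinge-type), ${f}_{2}$ (identity), ${\Psi }_{p}$ (the maximum, i.e. $p = \infty$ ), $\lambda$ and the data are illustrative assumptions:

```python
import math

def K(z1, z2):
    # RBF kernel on bag elements (illustrative choice of the original kernel)
    return math.exp(-sum((a - b) ** 2 for a, b in zip(z1, z2)))

# toy reduced-MIL sample: bags of elements with labels y'_i
bags = [([(-1.0, 0.5), (0.2, 0.3)], -1.0),
        ([(0.4, -0.2)], -1.0)]
P = [z for bag, _ in bags for z in bag]        # P_{S'}: union of all bag elements

def f1(a):
    # nonincreasing convex choice (assumption), cf. Proposition 5
    return max(0.0, 1.0 - a)

def f2(a):
    return a                                    # identity (assumption)

def objective(mu, lam=0.1):
    # lam * sum_{z, zhat} mu_z mu_zhat <z, zhat>  +  empirical-risk term
    reg = lam * sum(mu[i] * mu[j] * K(P[i], P[j])
                    for i in range(len(P)) for j in range(len(P)))
    emp = 0.0
    for bag, y in bags:
        scores = [f2(sum(mu[i] * K(P[i], zhat) for i in range(len(P))))
                  for zhat in bag]
        emp += f1(y * max(scores))              # Psi_p = max (p = infinity)
    return reg + emp

mu0 = [0.0] * len(P)
# with mu = 0 every score is 0, so the objective is n * f1(0)
assert abs(objective(mu0) - len(bags) * f1(0.0)) < 1e-12
```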
450
+
451
+ Therefore, if $\left\langle {{z}_{1},{z}_{2}}\right\rangle$ is polynomial-time computable for any ${z}_{1},{z}_{2} \in {x}^{\prime }$ using the original kernel function $K$ as an oracle, the ERM of the MIL can be solved similarly to the linear case according to the conditions in Propositions 5 and 6 (the DC algorithm for the kernel version is in Sec. 1). For all MIL-reducible problems introduced in the paper, $\left\langle {{z}_{1},{z}_{2}}\right\rangle$ is polynomial-time computable using $K$ (see details in Sec. M). Moreover, we can construct $\beta$ in polynomial time.
452
+
453
+ § 7 DISCUSSION AND CONCLUSION
454
+
455
+ § 7.1 RELATED WORK
456
+
457
+ Other reduction techniques: Several machine-learning reduction schemes exist [see, e.g., Beygelzimer et al., 2015], including general reduction schemes such as [Pitt and Warmuth, 1990, Beygelzimer et al., 2005]. A major difference between the proposed scheme and existing approaches is that we focus on the reduction of ERM. Various applications of machine learning reductions exist, such as reductions from multi-class learning to binary classification [James and Hastie, 1998, Ramaswamy et al., 2014] and from ranking to binary classification [Balcan et al., 2008, Ailon and Mohri, 2010, Agarwal, 2014]. To the best of our knowledge, the reduction to MIL has not yet been discussed.
458
+
459
+ Multi-class learning: Recently, Lei et al. [2019] achieved a $\log \left( k\right)$-dependent generalization bound. Our generalization bound is competitive with theirs. However, our derivation is considerably simpler than the analysis of [Lei et al., 2019] because the reduction allows us to apply the existing MIL bound of [Sabato and Tishby, 2012].
460
+
461
+ Complementarily-labeled learning: Ishida et al. [2017] provided a generalization risk bound for the case in which the training sample contains only complementarily labeled instances (i.e., $\theta = 0$ ). Our generalization bound is incomparable to theirs (see details in Sec. 1). Ishida et al. [2017] selected nonconvex loss functions and, in practice, optimized the empirical risks using a gradient-based algorithm. However, there is no guarantee of the optimality of the resulting solution. We show that the learning problem can be solved by a DC algorithm with a guarantee of local optimality. Moreover, in the special case where the sample contains only complementarily labeled data, the learning problem becomes a convex programming problem and we can obtain a global optimum. To the best of our knowledge, the provided learning algorithm is the first polynomial-time algorithm for this special case.
462
+
463
+ Multi-label learning: Various approaches and generalization analyses have been provided [Yu et al., 2014, Bhatia et al., 2015, Xu et al., 2016a, b]. However, to the best of our knowledge, this paper is the first to propose a $\log \left( k\right)$-dependent generalization bound for the linear (or nonlinear kernel) hypothesis class, where $k$ is the number of classes.
464
+
465
+ Multi-task learning: A similar generalization bound was reported by [Pontil and Maurer, 2013]. Their results suggest the advantage of regularizing the weights ${w}_{1},\ldots ,{w}_{T}$ over $T$ tasks. However, our result is derived from an entirely different argument than that of [Pontil and Maurer, 2013], and the derivation is much simpler.
466
+
467
+ Top-1 ranking learning: The top-1 ranking measure was originally discussed in [Hidasi and Karatzoglou, 2018]. However, their basic problem setting differs from ours: they assumed that the recommender receives i.i.d. positive and negative items as the sample. Moreover, they proposed neither a general form of the problem nor a theoretical analysis.
468
+
469
+ MIL: MIL, a form of weakly supervised learning, was originally proposed by Dietterich et al. [1997], and many real-world applications have been developed [Gärtner et al., 2002, Andrews et al., 2003, Zhang et al., 2013, Doran and Ray, 2014, Carbonneau et al., 2018]. Moreover, generalization bounds and learning algorithms for MIL have been analyzed from a theoretical perspective [Sabato and Tishby, 2012, Doran, 2015, Suehiro et al., 2020]. Suehiro et al. [2020] found that a local-feature-based time-series classification problem can be reduced to a MIL problem. Our results are the first to show that various learning problems can be reduced to MIL.
470
+
471
+ § 7.2 CONCLUSION AND FUTURE WORK
472
+
473
+ We revealed that various learning problems can be reduced to a MIL problem by our ERM-based reduction scheme. The results imply that our MIL-reduction gives a simplified and unified framework for the analysis of various learning problems. Moreover, we obtained novel theoretical results for some learning problems. As future work, we will consider other reduction frameworks based on the proposed ERM-reduction, and we will explore relaxations of the reducible condition. In this paper, two learning algorithms were introduced, and all of the demonstrated MIL-reducible problems can be solved by one of them. However, there may be MIL-reducible problems that can be solved by other learning algorithms or that cannot be solved in polynomial time.
UAI/UAI 2022/UAI 2022 Conference/BGGevIUicl9/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,383 @@
1
+ ## Robustness of Model Predictions under Extension
2
+
3
+ ## Abstract
4
+
5
+ Mathematical models of the real world are simplified representations of complex systems. A caveat to using mathematical models is that predicted causal effects and conditional independences may not be robust under model extensions, limiting applicability of such models. In this work, we consider conditions under which qualitative model predictions are preserved when two models are combined. Under mild assumptions, we show how to use the technique of causal ordering to efficiently assess the robustness of qualitative model predictions. We also characterize a large class of model extensions that preserve qualitative model predictions. For dynamical systems at equilibrium, we demonstrate how novel insights help to select appropriate model extensions and to reason about the presence of feedback loops. We apply our ideas to a viral infection model with immune responses.
6
+
7
+ ## 1 INTRODUCTION
8
+
9
+ There are several interesting systems for which the causal relations and Markov properties cannot be modelled by Structural Causal Models (SCMs) [Pearl, 2009, Bongers et al., 2021]. The causal ordering algorithm, first introduced by Simon [1953], can be used to better understand the qualitative model predictions of these systems [Blom et al. 2021]. In this paper, we take a closer look at what happens to these predictions when two systems are combined. Particularly, we give conditions under which properties of the whole system can be understood in terms of properties of its parts. We discuss how a holistic approach towards causal modelling may result in novel insights when we derive and test the predictions of systems for which new properties emerge from the combination of its parts.
10
+
11
+ In the first part of the paper, we focus on the practical issue of assessing whether qualitative model predictions are robust under model extensions. We revisit the observations of De Boer [2012], who demonstrated that qualitative predictions of a certain viral infection model change dramatically when the model is extended with extra equations describing simple immune responses. To assess the robustness of predicted causal relations or conditional independences under such an alteration of the model, it is useful to characterize a class of model extensions that lead to unaltered qualitative model predictions. In this work, we propose the technique of causal ordering [Simon, 1953] as an efficient method to assess the robustness of qualitative causal predictions. Under mild conditions, this allows us to characterize a large class of model extensions that preserve qualitative causal predictions. We also consider the class of models that are obtained from the equilibrium equations of dynamical models where each variable is self-regulating. For this class, we show that the predicted presence of causal relations and absence of conditional independences is robust when the model is extended with new equations.
12
+
13
+ Key aspects of the scientific method include generating a model or hypothesis that explains a phenomenon, deriving testable predictions from this model or hypothesis, and designing an experiment to test these predictions in the real world. The promise of causal discovery algorithms is that they are able to learn causal relations from a combination of background knowledge and data. The general idea of many constraint-based approaches (e.g. PC or FCI and variants thereof [Spirtes et al., 2000, Zhang, 2008, Colombo et al., 2012]) is to exploit information about conditional independences in a probability distribution to construct an equivalence class of graphs that encode certain aspects of the probability distribution, and then draw conclusions about the causal relations from the graphs. There is a large amount of literature concerning particular algorithms for which the learned structure expresses causal relations under certain conditions (e.g. linearity, causal sufficiency, absence of feedback loops), see for example [Richardson and Spirtes, 1999, Spirtes et al., 2000, Lacerda et al., 2008, Zhang, 2008, Colombo et al., 2012, Hyttinen et al., 2012, Forré and Mooij, 2018, Strobl, 2019, Mooij and Claassen, 2020]. In the last part of this paper, our main interest is in dynamical models with the property that graphs representing relations between variables by encoding the conditional independences of their equilibrium distribution should not be interpreted causally at all. For the case that a model for a subsystem is given, we present novel insights that enable us to reject model extensions based on conditional independences in equilibrium data of the subsystem. We demonstrate how this approach allows us to reason about the presence of variables that are not self-regulating and feedback mechanisms that involve unobserved variables from the equilibrium distribution of certain dynamical models.
14
+
15
+ ### 1.1 CAUSAL ORDERING GRAPH
16
+
17
+ Here, we give a concise introduction to the technique of causal ordering, introduced by Simon [1953]. ${}^{1}$ In short, the causal ordering algorithm takes a set of equations as input and returns a causal ordering graph that encodes the effects of interventions and a Markov ordering graph that implies conditional independences between variables in the model [Theorem 17, Blom et al., 2021]. Compared with the popular framework of structural causal models [Pearl, 2009], the distinction between the causal ordering and Markov ordering graphs does not provide new insights for acyclic models, but it results in non-trivial conclusions for models with feedback, as suggested in the discussion in Section 2.4 and explained in detail in [Blom et al., 2021].
18
+
19
+ We consider models consisting of equations $F$ that contain endogenous variables $V$ , independent exogenous random variables $W$ , and (constant, exogenous) parameters $P$ . The structure of equations and the endogenous variables that appear in them can be represented by the associated bipartite graph $\mathcal{B} = \langle V, F, E\rangle$ , where each endogenous variable is associated with a distinct vertex in $V$ , and each equation is associated with a distinct vertex in $F$ . There is an edge $\left( {v - f}\right) \in E$ if and only if variable $v \in V$ appears in equation $f \in F$ . The causal ordering algorithm constructs a directed cluster graph $\langle \mathcal{V},\mathcal{E}\rangle$ , where $\mathcal{V}$ is a partition of vertices $V$ into clusters and $\mathcal{E}$ is a set of directed edges from vertices in $V$ to clusters in $\mathcal{V}$ . Given a bipartite graph $\mathcal{B} = \langle V, F, E\rangle$ with a perfect matching $M$ , the causal ordering algorithm proceeds with the following three steps [Nayak, 1995, Blom et al., 2021]:
20
+
21
+ 1. For $v \in V, f \in F$ orient edges $\left( {v - f}\right)$ as $\left( {v \leftarrow f}\right)$ when $\left( {v - f}\right) \in M$ and as $\left( {v \rightarrow f}\right)$ otherwise; this yields a directed graph $\mathcal{G}\left( {\mathcal{B}, M}\right)$ .
22
+
23
+ 2. Find all strongly connected components ${S}_{1},{S}_{2},\ldots ,{S}_{n}$ of $\mathcal{G}\left( {\mathcal{B}, M}\right)$ . Let $\mathcal{V}$ be the set of clusters ${S}_{i} \cup M\left( {S}_{i}\right)$ for $i \in \{ 1,\ldots , n\}$ , where $M\left( {S}_{i}\right)$ denotes the set of vertices that are matched to vertices in ${S}_{i}$ in matching $M$ .
24
+
25
+ 3. Let $\operatorname{cl}\left( f\right)$ denote the cluster in $\mathcal{V}$ containing $f$ . For each $\left( {v \rightarrow f}\right)$ such that $v \notin \operatorname{cl}\left( f\right)$ add an edge $\left( {v \rightarrow \operatorname{cl}\left( f\right) }\right)$ to $\mathcal{E}$ .
26
+
27
+ Independent exogenous random variables and parameters are then added as singleton clusters with edges towards the clusters of the equations in which they appear. It has been shown that the resulting directed cluster graph $\operatorname{CO}\left( \mathcal{B}\right) =$ $\langle \mathcal{V},\mathcal{E}\rangle$ , which we refer to as the causal ordering graph, is independent of the choice of perfect matching [Theorem 4, Blom et al. 2021]. Example 1 shows how the algorithm works and a graphical illustration of the algorithm for a more elaborate cyclic model can be found in Appendix A.2.
28
+
29
+ Example 1. Let $V = \left\{ {{v}_{1},{v}_{2}}\right\} , W = \left\{ {{w}_{1},{w}_{2}}\right\}$ , and $P =$ $\left\{ {{p}_{1},{p}_{2}}\right\}$ be index sets. Consider model equations ${f}_{1}$ and ${f}_{2}$ with endogenous variables ${\left( {X}_{v}\right) }_{v \in V}$ , exogenous random variables ${\left( {U}_{w}\right) }_{w \in W}$ and parameters ${C}_{p}$ with $p \in P$ below.
30
+
31
+ $$
32
+ {f}_{1} : \;{C}_{{p}_{1}}{X}_{{v}_{1}} - {U}_{{w}_{1}} = 0, \tag{1}
33
+ $$
34
+
35
+ $$
36
+ {f}_{2} : \;{C}_{{p}_{2}}{X}_{{v}_{2}} + {X}_{{v}_{1}} + {U}_{{w}_{2}} = 0. \tag{2}
37
+ $$
38
+
39
+ The bipartite graph $\mathcal{B} = \langle V, F, E\rangle$ in Figure 1a, with $E = \left\{ {\left( {{v}_{1} - {f}_{1}}\right) ,\left( {{v}_{1} - {f}_{2}}\right) ,\left( {{v}_{2} - {f}_{2}}\right) }\right\}$ is a compact representation of the model structure. This graph has a perfect matching $M = \left\{ {\left( {{v}_{1} - {f}_{1}}\right) ,\left( {{v}_{2} - {f}_{2}}\right) }\right\}$ . By orienting edges in $\mathcal{B}$ according to the rules in step 1 of the causal ordering algorithm we obtain the directed graph $\left\langle {V \cup F,{E}_{\text{dir }}}\right\rangle$ with ${E}_{\text{dir }} = \left\{ {\left( {{f}_{1} \rightarrow {v}_{1}}\right) ,\left( {{f}_{2} \rightarrow {v}_{2}}\right) ,\left( {{v}_{1} \rightarrow {f}_{2}}\right) }\right\}$ . The clusters ${C}_{1} = \left\{ {{v}_{1},{f}_{1}}\right\}$ and ${C}_{2} = \left\{ {{v}_{2},{f}_{2}}\right\}$ are added to $\mathcal{V}$ in step 2 of the algorithm, and the edge $\left( {{v}_{1} \rightarrow {C}_{2}}\right)$ is added to $\mathcal{E}$ in step 3 . Finally, we add the parameters $P$ and independent exogenous random variables $W$ as singleton clusters to $\mathcal{V}$ , and the edges $\left( {{p}_{1} \rightarrow {C}_{1}}\right) ,\left( {{w}_{1} \rightarrow {C}_{1}}\right) ,\left( {{p}_{2} \rightarrow {C}_{2}}\right)$ , and $\left( {{w}_{2} \rightarrow {C}_{2}}\right)$ to $\mathcal{E}$ . The resulting causal ordering graph is given in Figure 1b.
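The run of the causal ordering algorithm on Example 1 can be reproduced mechanically. The sketch below (plain Python; the perfect matching is taken from the example, and strongly connected components are computed by brute-force reachability, which is fine at this scale) recovers the clusters ${C}_{1} = \left\{ {{v}_{1},{f}_{1}}\right\}$ and ${C}_{2} = \left\{ {{v}_{2},{f}_{2}}\right\}$ and the edge $\left( {{v}_{1} \rightarrow {C}_{2}}\right)$; exogenous variables and parameters would be added afterwards as singleton clusters:

```python
# bipartite structure and perfect matching from Example 1
V = ["v1", "v2"]
F = ["f1", "f2"]
E = [("v1", "f1"), ("v1", "f2"), ("v2", "f2")]
M = {("v1", "f1"), ("v2", "f2")}

# Step 1: orient (v - f) as f -> v when matched, v -> f otherwise
arcs = [(f, v) if (v, f) in M else (v, f) for (v, f) in E]

def reachable(src):
    # vertices reachable from src along the directed arcs (brute force)
    seen, stack = set(), [src]
    while stack:
        u = stack.pop()
        for a, b in arcs:
            if a == u and b not in seen:
                seen.add(b)
                stack.append(b)
    return seen

# Step 2: strongly connected components, each merged with its matched partners
partner = {}
for v, f in M:
    partner[v], partner[f] = f, v
clusters = set()
for u in V + F:
    scc = {u} | {w for w in reachable(u) if u in reachable(w)}
    clusters.add(frozenset(scc | {partner[n] for n in scc}))

# Step 3: edges from variables into the clusters of equations they appear in
def cl(vertex):
    return next(c for c in clusters if vertex in c)

cluster_edges = {(v, cl(f)) for (v, f) in arcs if v in V and v not in cl(f)}

assert clusters == {frozenset({"v1", "f1"}), frozenset({"v2", "f2"})}
assert cluster_edges == {("v1", frozenset({"v2", "f2"}))}
```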
40
+
41
+ Throughout this work, we will assume that models are uniquely solvable with respect to the causal ordering graph, which roughly means that for each cluster, the equations in that cluster can be solved uniquely for the endogenous variables in that cluster (see [Definition 14, Blom et al., 2021] for details). A perfect intervention on a cluster that contains equation vertices represents a model change where the equations in the targeted cluster are replaced by equations that set the endogenous variables in that cluster equal to constant values. A soft intervention targets an equation, parameter, or exogenous variable, but does not affect which variables appear in the equations. We say that there is a directed path from a vertex $x$ to a vertex $y$ in a causal ordering graph $\langle \mathcal{V},\mathcal{E}\rangle$ if either $\operatorname{cl}\left( x\right) = \operatorname{cl}\left( y\right)$ or there is a sequence of clusters ${C}_{1} = \operatorname{cl}\left( x\right) ,{C}_{2},\ldots ,{C}_{k - 1},{C}_{k} = \operatorname{cl}\left( y\right)$ so that for all $i \in \{ 1,\ldots , k - 1\}$ there is a vertex ${z}_{i} \in {C}_{i}$ such that $\left( {{z}_{i} \rightarrow {C}_{i + 1}}\right) \in \mathcal{E}$ . It can be shown that a) the presence of a directed path from a cluster, equation, parameter, or exogenous variable that is targeted by a soft intervention towards a certain variable in the causal ordering graph implies that the intervention has a generic effect on that variable and b) if no such path exists there is no causal effect of the intervention on that variable [Theorem 20, Blom et al., 2021].
42
+
43
+ ---
44
+
45
+ ${}^{1}$ Actually, we consider an equivalent algorithm for causal ordering that was shown to be more computationally efficient by [Nayak, 1995, Gonçalves and Porto, 2016]. For more details, see [Blom et al., 2021].
46
+
47
+ ${}^{2}$ A perfect matching $M$ is a subset of edges in a bipartite graph so that every vertex is adjacent to exactly one edge in $M$ . Note that not every bipartite graph has a perfect matching.
48
+
49
+ ---
50
+
51
+ ![019639ad-572a-75ca-872c-15699fe78324_2_219_636_612_314_0.jpg](images/019639ad-572a-75ca-872c-15699fe78324_2_219_636_612_314_0.jpg)
52
+
53
+ Figure 1: The bipartite graph in Figure 1a is a compact representation of the model in Example 1. The corresponding causal ordering graph and Markov ordering graph are given in Figures 1b and 1c respectively. Exogenous variables are denoted by dashed circles and parameters by black dots.
54
+
55
+ ### 1.2 MARKOV ORDERING GRAPH
56
+
57
+ The causal ordering graph $\mathrm{{CO}}\left( \mathcal{B}\right) = \langle \mathcal{V},\mathcal{E}\rangle$ of model equations $F$ with endogenous variables $V$ , exogenous random variables $W$ , parameters $P$ , and bipartite graph $\mathcal{B}$ can be used to construct the Markov ordering graph, which is a DAG $\operatorname{MO}\left( \mathcal{B}\right) = \langle V \cup W, E\rangle$ , with $\left( {x \rightarrow y}\right) \in E$ if and only if $\left( {x \rightarrow \operatorname{cl}\left( y\right) }\right) \in \mathcal{E}$ . The Markov ordering graph for the model equations in Example 1 is given in Figure 1c. It has been shown that, under the assumption of unique solvability w.r.t. the causal ordering graph, d-separations in the Markov ordering graph imply conditional independences between the corresponding variables [Blom et al., 2021]. Henceforth, we will assume that the probability distribution of the solution ${\left( {X}_{v}\right) }_{v \in V}$ to a set of model equations is faithful to the Markov ordering graph. In other words, each conditional independence in the distribution implies a d-separation in the Markov ordering graph. Under the assumption that data is generated from such a model, some causal discovery algorithms, such as the PC algorithm [Spirtes et al., 2000], aim to construct the Markov equivalence class of the Markov ordering graph. In this work, we will specifically focus on feedback models for which the Markov ordering graph of the equilibrium distribution, and consequently the output of many causal discovery algorithms, does not have a straightforward causal interpretation.
58
+
59
+ ## 2 CAUSAL ORDERING FOR A VIRAL INFECTION MODEL
60
+
61
+ This work was inspired by a viral infection model discussed by De Boer [2012], who showed through explicit calculations that the predictions of the model are not robust under the addition of an immune response. This casts doubt on the correct interpretation of variables and parameters in the model. For many systems it is intrinsically difficult to study their behaviour in detail. The use of simplified mathematical models that capture key characteristics aids in the analysis of certain properties of the system. The hope is that the explanations inferred from model equations are legitimate accounts of the true underlying system [De Boer, 2012]. In reality, a modeller must take into account that the outcome of these studies may be contingent on the specifics of the model design. Here, we demonstrate how causal ordering can be used as a scalable tool to assess the robustness of model predictions without requiring explicit calculations.
62
+
63
+ ### 2.1 VIRAL INFECTION WITHOUT IMMUNE RESPONSE
64
+
65
+ Let ${U}_{\sigma }$ be a production term for target cells, ${d}_{T}$ the death rate for target cells, ${U}_{f}$ the fraction of successful infections, and ${U}_{\delta }$ the death rate of productively infected cells. Define $\beta = \frac{bp}{c}$ , where $b$ is the infection rate, $p$ the amount of virus produced per infected cell, and $c$ the clearance rate of viral particles. The following first-order differential equations describe how the amount of target cells ${X}_{T}\left( t\right)$ and the amount of infected cells ${X}_{I}\left( t\right)$ evolve over time [De Boer, 2012]:
66
+
67
+ $$
68
+ {\dot{X}}_{T}\left( t\right) = {U}_{\sigma } - {d}_{T}{X}_{T}\left( t\right) - \beta {X}_{T}\left( t\right) {X}_{I}\left( t\right) , \tag{3}
69
+ $$
70
+
71
+ $$
72
+ {\dot{X}}_{I}\left( t\right) = \left( {{U}_{f}\beta {X}_{T}\left( t\right) - {U}_{\delta }}\right) {X}_{I}\left( t\right) . \tag{4}
73
+ $$
74
+
75
+ Suppose that we want to use this simple viral infection model to explain why the set-point viral load (i.e. the total amount of virus circulating in the bloodstream) of chronically infected HIV-patients differs by several orders of magnitude, as De Boer [2012] does. To analyse this problem we look at the equilibrium equations that are implied by equations (3) and (4):${}^{3}$
76
+
77
+ $$
78
+ {f}_{T} : \;{U}_{\sigma } - {d}_{T}{X}_{T} - \beta {X}_{T}{X}_{I} = 0, \tag{5}
79
+ $$
80
+
81
+ $$
82
+ {f}_{I}^{ + } : \;{U}_{f}\beta {X}_{T} - {U}_{\delta } = 0. \tag{6}
83
+ $$
84
+
85
+ Throughout the remainder of this work we will use this natural labelling of equilibrium equations, where the equation derived from the derivative ${\dot{X}}_{i}\left( t\right)$ is labelled ${f}_{i}$ . For first-order differential equations that are written in canonical form, ${\dot{X}}_{i}\left( t\right) = {g}_{i}\left( {X\left( t\right) }\right)$ , the natural labelling always exists.
86
+
87
+ ---
88
+
89
+ ${}^{3}$ Since we are only interested in strictly positive solutions, we removed ${X}_{I}$ from the equilibrium equation ${f}_{I} : \left( {{U}_{f}\beta {X}_{T} - {U}_{\delta }}\right) {X}_{I} = 0$ to obtain ${f}_{I}^{ + }$ .
90
+
91
+ ---
92
+
93
+ ![019639ad-572a-75ca-872c-15699fe78324_3_187_173_648_417_0.jpg](images/019639ad-572a-75ca-872c-15699fe78324_3_187_173_648_417_0.jpg)
94
+
95
+ Figure 2: Graphical representations of the viral infection model in equations (5) and (6). Vertices ${v}_{i}$ and ${w}_{j}$ correspond to variables ${X}_{i}$ and ${U}_{j}$ , respectively. The causal ordering graph represents generic effects of interventions. The d-separations in Figure 2c imply conditional independences.
96
+
97
+ Suppose that ${U}_{\sigma },{U}_{f}$ and ${U}_{\delta }$ are independent exogenous random variables taking values in ${\mathbb{R}}_{ > 0}$ and ${d}_{T},\beta$ are strictly positive parameters. The associated bipartite graph, causal ordering graph, and Markov ordering graph are given in Figure 2. The causal ordering graph tells us that soft interventions targeting ${U}_{\sigma },{U}_{f},{U}_{\delta },{d}_{T}$ , or $\beta$ generically have an effect on the equilibrium distribution of the amount of infected cells ${X}_{I}$ . From here on, we say that the causal ordering graph of a model predicts the generic presence or absence of causal effects. The Markov ordering graph shows that ${v}_{T}$ and ${w}_{\sigma }$ are d-separated. This implies that the amount of target cells ${X}_{T}$ should be independent of the production rate ${U}_{\sigma }$ when the system is at equilibrium. Henceforth, we will say that the Markov ordering graph predicts the generic presence or absence of conditional dependences.
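These graphical predictions can be checked numerically. The sketch below (our own, not from the paper) samples the exogenous variables, solves equations (5) and (6) in closed form for the positive equilibrium, and verifies that the sample correlation of ${U}_{\sigma }$ with ${X}_{T}$ is negligible while its correlation with ${X}_{I}$ is not; the parameter values and distributions are arbitrary choices:

```python
import random

def corr(xs, ys):
    # Pearson sample correlation, computed from scratch to stay stdlib-only.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

random.seed(0)
n = 5000
d_T, beta = 0.1, 1.0  # arbitrary strictly positive parameters
U_sigma = [random.uniform(1.0, 2.0) for _ in range(n)]
U_f = [random.uniform(0.4, 0.6) for _ in range(n)]
U_delta = [random.uniform(0.9, 1.1) for _ in range(n)]

# Positive equilibrium: f_I^+ determines X_T, then f_T determines X_I.
X_T = [ud / (uf * beta) for uf, ud in zip(U_f, U_delta)]
X_I = [(us - d_T * xt) / (beta * xt) for us, xt in zip(U_sigma, X_T)]

print(corr(U_sigma, X_T), corr(U_sigma, X_I))
```

With these draws the first correlation is close to zero (the closed-form solution for ${X}_{T}$ does not involve ${U}_{\sigma }$ at all), while the second is clearly positive, matching the d-separation of ${v}_{T}$ and ${w}_{\sigma }$ and the d-connection of ${v}_{I}$ and ${w}_{\sigma }$ in Figure 2c.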
98
+
99
+ ### 2.2 VIRAL INFECTION WITH A SINGLE IMMUNE RESPONSE
100
+
101
+ The viral infection model in equations (3) and (4) can be extended with a simple immune response ${X}_{E}\left( t\right)$ by adding the following dynamic and static equations:
102
+
103
+ $$
104
+ {\dot{X}}_{E}\left( t\right) = \left( {{U}_{a}{X}_{I}\left( t\right) - {d}_{E}}\right) {X}_{E}\left( t\right) , \tag{7}
105
+ $$
106
+
107
+ $$
108
+ {X}_{\delta }\left( t\right) = {d}_{I} + {U}_{k}{X}_{E}\left( t\right) , \tag{8}
109
+ $$
110
+
111
+ where ${U}_{a}$ is an activation rate, ${d}_{E}$ and ${d}_{I}$ are turnover rates, and ${U}_{k}$ is a mass-action killing rate [De Boer, 2012]. Note that the exogenous random variable ${U}_{\delta }$ is now treated as an endogenous variable ${X}_{\delta }\left( t\right)$ instead. We derive the following equilibrium equations using the natural labelling provided by equations (7) and (8):${}^{4}$
112
+
113
+ $$
114
+ {f}_{E}^{ + } : \;{U}_{a}{X}_{I} - {d}_{E} = 0, \tag{9}
115
+ $$
116
+
117
+ $$
118
+ {f}_{\delta } : \;{X}_{\delta } - {d}_{I} - {U}_{k}{X}_{E} = 0. \tag{10}
119
+ $$
120
+
121
+ Henceforth, we will call the addition of equations ${F}_{ + }$ to $F$ a model extension. Notice that when two sets of equations are combined, there may exist variables that were exogenous in the submodel (i.e. the original model) but that are endogenous within the whole model (i.e. the extended model). Generally, equations ${F}_{ + }$ may contain endogenous variables in $V$ and exogenous variables in $W$ but they may also contain additional endogenous variables ${V}_{ + }$ , additional exogenous variables ${W}_{ + }$ and additional parameters ${P}_{ + }$ . Parameters and exogenous random variables that appear in equations $F$ can appear as endogenous variables in ${V}_{ + }$ and in the extended model ${F}_{\text{ext }} = F \cup {F}_{ + }$ . In that case, these variables are no longer considered to be parameters or exogenous variables within the extended model.
122
+
123
+ ![019639ad-572a-75ca-872c-15699fe78324_3_893_716_710_408_0.jpg](images/019639ad-572a-75ca-872c-15699fe78324_3_893_716_710_408_0.jpg)
124
+
125
+ Figure 3: Graphical representations of the viral infection model with a single immune response. The presence or absence of causal relations and d-connections implied by the graphs in Figure 2 is not preserved if a single immune response is added.
126
+
127
+ Suppose that ${U}_{a}$ and ${U}_{k}$ are independent exogenous random variables taking values in ${\mathbb{R}}_{ > 0}$ and ${d}_{E},{d}_{I}$ are parameters taking value in ${\mathbb{R}}_{ > 0}$ . The bipartite graph, causal ordering graph, and Markov ordering graph associated with equations (5),(6),(9), and (10) (with ${X}_{\delta }$ replacing ${U}_{\delta }$ ) are given in Figure 3. The causal ordering graph predicts a causal effect of ${U}_{\sigma }$ and ${d}_{T}$ on ${X}_{T}$ but not on ${X}_{I}$ . By comparing with the predictions of the causal ordering graph in Figure 2b, we find that effects of interventions targeting ${U}_{\sigma }$ and ${d}_{T}$ are not robust under the model extension. The Markov ordering graph of the extended model shows that ${w}_{\sigma }$ is d-connected to ${v}_{T}$ , and hence ${U}_{\sigma }$ and ${X}_{T}$ are dependent. We conclude that the independence between ${U}_{\sigma }$ and ${X}_{T}$ that was implied by the Markov ordering graph of the viral infection model without immune response is not robust under the model extension.
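The flip in (in)dependences can again be verified numerically with the same kind of sketch (our own; parameter values and distributions arbitrary): at the positive equilibrium, ${f}_{E}^{ + }$ gives ${X}_{I} = {d}_{E}/{U}_{a}$ and ${f}_{T}$ then gives ${X}_{T} = {U}_{\sigma }/\left( {{d}_{T} + \beta {X}_{I}}\right)$ , so ${X}_{I}$ no longer depends on ${U}_{\sigma }$ while ${X}_{T}$ does:

```python
import random

def corr(xs, ys):
    # Pearson sample correlation (stdlib-only helper).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

random.seed(1)
n = 5000
d_T, beta, d_E = 0.1, 1.0, 0.5  # arbitrary strictly positive parameters
U_sigma = [random.uniform(1.0, 2.0) for _ in range(n)]
U_a = [random.uniform(0.4, 0.6) for _ in range(n)]

X_I = [d_E / ua for ua in U_a]                                  # from f_E^+
X_T = [us / (d_T + beta * xi) for us, xi in zip(U_sigma, X_I)]  # from f_T

print(corr(U_sigma, X_T), corr(U_sigma, X_I))
```

Now the correlation of ${U}_{\sigma }$ with ${X}_{T}$ is clearly positive and the correlation with ${X}_{I}$ is negligible, exactly the opposite of the model without an immune response.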
128
+
129
+ The systematic graphical procedure followed here easily leads to the same causal conclusions as De Boer [2012] obtained by explicitly solving the equilibrium equations. In addition, it leads to predictions regarding the conditional (in)dependences in the equilibrium distribution.
130
+
131
+ ---
132
+
133
+ ${}^{4}$ Analogous to changing ${f}_{I}$ to ${f}_{I}^{ + }$ for strictly positive solutions, we will look at ${f}_{E}^{ + }$ instead of ${f}_{E}$ .
134
+
135
+ ---
136
+
137
+ ### 2.3 VIRAL INFECTION WITH MULTIPLE IMMUNE RESPONSES
138
+
139
+ The following dynamical and static equations describe multiple immune responses:
140
+
141
+ $$
142
+ {\dot{X}}_{{E}_{i}}\left( t\right) = \frac{{p}_{E}{X}_{{E}_{i}}\left( t\right) {U}_{{a}_{i}}{X}_{I}\left( t\right) }{h + {X}_{{E}_{i}}\left( t\right) + {U}_{{a}_{i}}{X}_{I}\left( t\right) } - {d}_{E}{X}_{{E}_{i}}\left( t\right) , \tag{11}
143
+ $$
144
+
145
+ $$
146
+ i = 1,2,\ldots , n
147
+ $$
148
+
149
+ $$
150
+ {X}_{\delta }\left( t\right) = {d}_{I} + {U}_{k}\mathop{\sum }\limits_{{i = 1}}^{n}{U}_{{a}_{i}}{X}_{{E}_{i}}\left( t\right) , \tag{12}
151
+ $$
152
+
153
+ where there are $n$ immune responses, ${U}_{{a}_{i}}$ is the avidity of immune response $i$ , ${p}_{E}$ is the maximum division rate, and $h$ is a saturation constant [De Boer, 2012]. For $n = 2$ we can derive equilibrium equations ${f}_{{E}_{1}},{f}_{{E}_{2}}$ , and ${f}_{\delta }$ using the natural labelling, as we did for the equilibrium equations in the previous section. Together with the equilibrium equations (5) and (6) (with ${X}_{\delta }$ replacing ${U}_{\delta }$ ) for the viral infection model, this is another extended model. The bipartite graph of this extended model is given in Figure 5a, while the causal ordering graph can be found in Figure 4a. By comparing the directed paths in this causal ordering graph with those of the original viral infection model (i.e. the model without an immune response) in Figure 2b, it can be seen that the predicted presence of causal relations is preserved under extension of the model with multiple immune responses, while the predicted absence of causal relations is not. Similarly, by comparing d-separations in the Markov ordering graph in Figure 2c with those in Figure 4b, we find that predicted conditional dependences are preserved under the extension, while the predicted conditional independences are not.
154
+
155
+ ![019639ad-572a-75ca-872c-15699fe78324_4_268_1389_469_601_0.jpg](images/019639ad-572a-75ca-872c-15699fe78324_4_268_1389_469_601_0.jpg)
156
+
157
+ Figure 4: Graphical representations of the viral infection model with multiple immune responses. The presence of causal relations and d-connections in Figure 2 is preserved.
158
+
159
+ ### 2.4 MARKOV ORDERING GRAPHS AND CAUSAL INTERPRETATIONS
160
+
161
+ In [Blom et al., 2021], it was shown that the Markov ordering graph may not have a straightforward causal interpretation. Here, we illustrate for the viral infection models that the Markov ordering graphs have a straightforward causal interpretation at equilibrium neither in terms of soft interventions targeting parameters, exogenous variables, or equations, nor in terms of perfect interventions on variables in the dynamical model. To see this, consider the Markov ordering graph in Figure 3b for the viral infection with a single immune response. The edge $\left( {{v}_{I} \rightarrow {v}_{T}}\right)$ cannot correspond to the effect of a soft intervention targeting ${f}_{I}^{ + }$ , because the causal ordering graph in Figure 3c shows that there is no such effect. Clearly, directed paths in the Markov ordering graph do not necessarily represent the effects of soft interventions. The natural way to model a perfect intervention targeting a variable in the Markov ordering graph is to replace the (differential) equation of that variable with an equation setting that variable equal to a certain value in the underlying dynamical model [Mooij et al., 2013]. By explicitly solving equilibrium equations it is easy to check that replacing ${f}_{\delta }$ with an equation setting ${X}_{\delta }$ equal to a constant generically changes the distribution of ${X}_{I}$ . Since there is no directed path from ${v}_{\delta }$ to ${v}_{I}$ in the Markov ordering graph, the effect of this perfect intervention would not have been predicted by the Markov ordering graph, if it were interpreted causally. Therefore, contrary to the causal ordering graph, the Markov ordering graph does not have a causal interpretation in terms of soft or perfect interventions on the true underlying dynamical model.
162
+
163
+ ## 3 ROBUST CAUSAL PREDICTIONS UNDER MODEL EXTENSIONS
164
+
165
+ One way to gauge the robustness of model predictions is to check to what extent they depend on the model design. The example of a viral infection with different immune responses in the previous section indicates that qualitative causal predictions entailed by the causal ordering graph of a mathematical model may strongly depend on the particulars of the model. Both the implied presence or absence of causal relations at equilibrium and the implied presence or absence of conditional independences at equilibrium may change under certain model extensions. Under what conditions are these qualitative predictions preserved under model extensions? In this section, we characterize a large class of model extensions under which qualitative equilibrium predictions are preserved.
166
+
167
+ Theorem 1 gives a sufficient condition on model extensions under which the predicted generic presence of causal relations and conditional dependences at equilibrium is preserved. The proof is given in Appendix A.2.
170
+
171
+ Theorem 1. Consider model equations $F$ containing endogenous variables $V$ with bipartite graph $\mathcal{B}$ . Suppose $F$ is extended with equations ${F}_{ + }$ containing endogenous variables in $V \cup {V}_{ + }$ , where ${V}_{ + }$ contains the endogenous variables that are added by the model extension. ${}^{5}$ Let ${\mathcal{B}}_{\text{ext }}$ be the bipartite graph associated with ${F}_{\text{ext }} = F \cup {F}_{ + }$ and ${V}_{\text{ext }} = V \cup {V}_{ + }$ , and ${\mathcal{B}}_{ + }$ the bipartite graph associated with the extension ${F}_{ + }$ and ${V}_{ + }$ , where variables in $V$ appearing in ${F}_{ + }$ are treated as exogenous variables (i.e. they are not added as vertices in ${\mathcal{B}}_{ + }$ ). If $\mathcal{B}$ and ${\mathcal{B}}_{ + }$ both have a perfect matching, then:
172
+
173
+ 1. ${\mathcal{B}}_{\text{ext }}$ has a perfect matching,
174
+
175
+ 2. ancestral relations in $\mathrm{{CO}}\left( \mathcal{B}\right)$ are also present in $\operatorname{CO}\left( {\mathcal{B}}_{\text{ext }}\right)$ ,
176
+
177
+ 3. d-connections in $\mathrm{{MO}}\left( \mathcal{B}\right)$ are also present in $\operatorname{MO}\left( {\mathcal{B}}_{\text{ext }}\right)$ .
178
+
179
+ This result characterizes a large set of extensions under which the implied causal effects and conditional dependences of a model are preserved. Consider again the equilibrium behaviour of the viral infection models in Section 2. We already showed explicitly that the extension of the viral infection model with multiple immune responses preserved the predicted presence of causal relations and conditional dependences, but with the help of Theorem 1 we would only have needed to check whether the bipartite graph in Figure 5c has a perfect matching to arrive at the same conclusion. The bipartite graph for the extension with a single immune response in Figure 5b does not have a perfect matching, and hence the conditions of Theorem 1 do not hold. Recall that this model extension did not preserve the predicted presence of causal relations.
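The matching check that Theorem 1 asks for is easy to automate. Below is a minimal sketch (ours, not the paper's code) using a standard augmenting-path maximum matching; the adjacency lists encode the extension bipartite graphs ${\mathcal{B}}_{ + }$ of Figures 5b and 5c, with the original model's variables treated as exogenous and therefore omitted:

```python
def max_matching(adj):
    """Maximum matching in a bipartite graph given as {equation: [variables]}."""
    match = {}  # variable -> equation currently matched to it

    def augment(f, seen):
        # Try to match equation f, recursively rematching along an
        # augmenting path if its candidate variables are taken.
        for v in adj[f]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match or augment(match[v], seen):
                match[v] = f
                return True
        return False

    return sum(augment(f, set()) for f in adj)

# Single immune response (Figure 5b): f_E^+ contains no added endogenous
# variable (X_E occurs only in f_delta), so a perfect matching is impossible.
single = {"f_E+": [], "f_delta": ["v_delta", "v_E"]}
# Two immune responses (Figure 5c): each f_Ei contains its own v_Ei.
multiple = {"f_E1": ["v_E1"], "f_E2": ["v_E2"],
            "f_delta": ["v_delta", "v_E1", "v_E2"]}

print(max_matching(single) == len(single))      # False: no perfect matching
print(max_matching(multiple) == len(multiple))  # True: perfect matching exists
```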
180
+
181
+ The theorem below gives a stronger condition under which (conditional) independence relations and the absence of causal relations that are implied by a model are also predicted by the extended model. The proof is provided in the supplement.
182
+
183
+ Theorem 2. Let $F,{F}_{ + },{F}_{\text{ext }}, V,{V}_{ + },{V}_{\text{ext }},\mathcal{B},{\mathcal{B}}_{ + }$ , and ${\mathcal{B}}_{\text{ext }}$ be as in Theorem 1. If $\mathcal{B}$ and ${\mathcal{B}}_{ + }$ both have perfect matchings and no vertex in ${V}_{ + }$ is adjacent to a vertex in $F$ in ${\mathcal{B}}_{\text{ext }}$ , then: ${}^{6}$
184
+
185
+ 1. ancestral relations absent in $\mathrm{{CO}}\left( \mathcal{B}\right)$ are also absent in $\operatorname{CO}\left( {\mathcal{B}}_{\text{ext }}\right)$ ,
186
+
187
+ 2. d-connections absent in $\mathrm{{MO}}\left( \mathcal{B}\right)$ are also absent in $\operatorname{MO}\left( {\mathcal{B}}_{\text{ext }}\right)$ .
+
+ Together with Theorem 1, this result characterizes a large class of model extensions under which all qualitative model predictions are preserved. Consider again the equilibrium models for the viral infection in Section 2. The bipartite graph for the extension with a single immune response, which we obtain by adding equations (9) and (10), does not have a perfect matching. In the bipartite graph associated with the viral infection model with multiple immune responses, the additional endogenous variable ${v}_{\delta }$ is adjacent to ${f}_{I}$ . Neither of the model extensions satisfies the conditions of Theorem 2, and we already demonstrated that neither of them preserves all qualitative model predictions. An example of a model extension that does satisfy the conditions of Theorems 1 and 2 is an acyclic structural causal model that is extended with another acyclic structural causal model such that the additional variables are non-ancestors of the original ones. Together, Theorems 1 and 2 can be used to understand when the causal and Markov properties of a system can be understood by studying the corresponding properties of its parts.
188
+
189
+ ## 4 SELECTION OF MODEL EXTENSIONS
190
+
191
+ So far, we have considered methods to assess the robustness of qualitative model predictions. In this section we will show how this idea results in novel opportunities regarding causal discovery. In particular, if we assume that the systems that we observe are part of a larger partially observed system, then we can use the methods in this paper to reason about causal mechanisms of unobserved variables. Consider, for example, the viral infection model for which we have demonstrated that extensions with different immune responses imply different (conditional) independences between variables in the original model. The Markov ordering graphs in Figures 2c, 3b, and 4b imply the following (in)dependences:
192
+
193
+ 1. Viral infection without immune response: ${U}_{\sigma } \perp\!\!\!\perp {X}_{T}$ , ${U}_{\sigma } \not\perp\!\!\!\perp {X}_{I}$ .
194
+
195
+ 2. Viral infection with single immune response: ${U}_{\sigma } \not\perp\!\!\!\perp {X}_{T}$ , ${U}_{\sigma } \perp\!\!\!\perp {X}_{I}$ .
196
+
197
+ 3. Viral infection with multiple immune responses: ${U}_{\sigma } \not\perp\!\!\!\perp {X}_{T}$ , ${U}_{\sigma } \not\perp\!\!\!\perp {X}_{I}$ .
198
+
199
+ Given a model for variables ${X}_{T}$ and ${X}_{I}$ only, we can reject model extensions based on the (conditional) independences for variables ${X}_{T},{X}_{I}$ , and ${U}_{\sigma }$ . Using this holistic modelling approach, we can reason about an unknown model extension without observing the new mechanisms or variables. In the remainder of this section, we further discuss how this idea can be applied to equilibrium data of dynamical systems.
200
+
201
+ ---
202
+
203
+ ${}^{5}{V}_{ + }$ may also contain parameters or exogenous variables that appear in $F$ and become endogenous in the extended model.
204
+
205
+ ${}^{6}$ A vertex in ${V}_{ + }$ is considered adjacent to $F$ if it corresponds with one of the exogenous random variables or parameters in $F$ that become endogenous in the model extension.
206
+
207
+ ---
208
+
209
+ ### 4.1 REASONING ABOUT SELF-REGULATING VARIABLES
210
+
211
+ We say that a variable in a set of first-order differential equations in canonical form is self-regulating if it can be solved uniquely from the equilibrium equation that is constructed from its derivative. For models in which every variable is self-regulating there exists a perfect matching where each variable ${v}_{i}$ is matched to its associated equilibrium equation ${f}_{i}$ according to the natural labelling; for more details, see Lemma 1 in the supplement. It then follows from Theorem 1 that the presence of ancestral relations and d-connections is robust under dynamical model extensions in which each variable is self-regulating, as is stated more formally in Corollary 1 below.
212
+
213
+ Corollary 1. Consider a first-order dynamical model in canonical form for endogenous variables $V$ and an extension consisting of canonical first-order differential equations for additional endogenous variables ${V}_{ + }$ . Let $F$ and ${F}_{\text{ext }} = F \cup {F}_{ + }$ be the equilibrium equations of the original and extended model respectively. If all variables in $V \cup {V}_{ + }$ are self-regulating then 2 and 3 of Theorem 1 hold.
214
+
215
+ Corollary 1 characterizes a class of models under which certain qualitative predictions for the equilibrium distribution are robust, but the result can also be interpreted from a different angle. Suppose that we have equilibrium data that is generated by an extended dynamical model with equilibrium equations ${F}_{\text{ext }}$ , but we only have a partial model consisting of equations in $F$ for a subset $V \subseteq {V}_{\text{ext }} = V \cup {V}_{ + }$ of variables that appear in ${F}_{\text{ext }} = F \cup {F}_{ + }$ . If we find conditional independences between variables in $V$ that do not correspond to d-separations in the Markov ordering graph of the partial model, this does not necessarily mean that the model equations are wrong. It could also be the case, for example, that we are wrong to assume that the system can be studied in a reductionist manner and that the model should be extended. Furthermore, under the assumption that data is generated from the equilibrium distribution of a dynamical model, Corollary 1 tells us that conditional independences in the data that are not predicted by the equations of a partial model imply the presence of variables that are not self-regulating, if we assume faithfulness. This shows that, given a model for a subsystem, we can reason about the properties of unobserved and unknown variables in the whole system. Consider, for example, the model of the viral infection without immune response and assume that this is a submodel of a larger system. Suppose that we observe a conditional independence between ${U}_{\sigma }$ and ${X}_{I}$ and assume that the model equations of the submodel are correct. Since the Markov ordering graph in Figure 2c implies that ${U}_{\sigma }$ and ${X}_{I}$ are dependent, Corollary 1 tells us that there must be variables that are not self-regulating in the extended system. If the extended system can be described by the strictly positive solutions of the viral infection model with a single immune response, so that ${U}_{\sigma }$ and ${X}_{I}$ are independent, then we see from equations (5), (6), (9), and (10) that both ${X}_{E}\left( t\right)$ and ${X}_{I}\left( t\right)$ are not self-regulating.
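Graphically, a necessary condition for ${X}_{i}$ to be self-regulating is that ${v}_{i}$ occurs in its own equilibrium equation ${f}_{i}$ . A small sketch (ours) of this check for the positive-branch equilibrium equations (5), (6), (9), and (10) of the extended model:

```python
# Endogenous variables occurring in each equilibrium equation of the
# extended model (exogenous variables and parameters omitted).
eqs = {
    "f_T": {"T", "I"},          # equation (5)
    "f_I+": {"T", "delta"},     # equation (6), with X_delta endogenous
    "f_E+": {"I"},              # equation (9)
    "f_delta": {"delta", "E"},  # equation (10)
}
# Natural labelling: each variable paired with the equation derived
# from its own derivative (or static equation).
natural = {"T": "f_T", "I": "f_I+", "E": "f_E+", "delta": "f_delta"}

not_self_regulating = sorted(v for v, f in natural.items() if v not in eqs[f])
print(not_self_regulating)  # -> ['E', 'I']
```

The check recovers exactly the two variables, ${X}_{E}\left( t\right)$ and ${X}_{I}\left( t\right)$ , identified as not self-regulating above.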
216
+
217
+ ### 4.2 REASONING ABOUT FEEDBACK LOOPS
218
+
219
+ We say that an extension of a dynamical model introduces a new feedback loop with the original dynamical model when there is feedback in the extended dynamical model that involves variables in both the original model and the model extension. To make this definition more precise, consider the set ${E}_{\text{nat }}$ of edges $\left( {{v}_{i} - {f}_{i}}\right)$ that are associated with the natural labelling of the equilibrium equations of the extended dynamical model. The feedback loops in the dynamical model coincide with cycles in the directed graph $\mathcal{G}\left( {{\mathcal{B}}_{\text{nat }},{M}_{\text{nat }}}\right)$ that is obtained by applying step 1 of the causal ordering algorithm to the bipartite graph ${\mathcal{B}}_{\text{nat }} = \left\langle {{V}_{\text{ext }},{F}_{\text{ext }},{E}_{\text{ext }} \cup {E}_{\text{nat }}}\right\rangle$ using the perfect matching ${M}_{\text{nat }} = {E}_{\text{nat }}$ . The following proposition can be used to reason about the presence of partially unobserved feedback loops given a model and observations for a subsystem.
220
+
221
+ Proposition 1. Consider a first-order dynamical model in canonical form for endogenous variables $V$ and an extension consisting of canonical first-order differential equations for additional endogenous variables ${V}_{ + }$ . Let $F$ and ${F}_{\text{ext }} = F \cup {F}_{ + }$ be the equilibrium equations of the original and extended model respectively. Let $\mathcal{B} = \langle V, F, E\rangle$ be the bipartite graph associated with $F$ and ${\mathcal{B}}_{\text{ext }} = \left\langle {{V}_{\text{ext }},{F}_{\text{ext }},{E}_{\text{ext }}}\right\rangle$ the bipartite graph associated with ${F}_{\text{ext }}$ . Assume that $\mathcal{B}$ and ${\mathcal{B}}_{\text{ext }}$ both have perfect matchings. If the model extension does not introduce a new feedback loop with the original dynamical model, then d-connections in $\mathrm{{MO}}\left( \mathcal{B}\right)$ are also present in $\operatorname{MO}\left( {\mathcal{B}}_{\text{ext }}\right)$ .
222
+
223
+ ---
224
+
225
+ ${}^{7}$ Interestingly, the Markov ordering graph for the equilibrium equations of such a model always has a causal interpretation. By construction of the causal ordering graph from the bipartite graph and the perfect matching provided by the natural labelling, we know that a vertex ${v}_{i}$ always appears in a cluster with ${f}_{i}$ in the causal ordering graph. The presence or absence of directed paths in the Markov ordering graph can then easily be associated with the presence or absence of directed paths in the causal ordering graph. Consequently, the Markov ordering graph can be interpreted in terms of both soft interventions targeting equations and perfect interventions that set variables equal to a constant by replacement of the associated dynamical and equilibrium equations. Note that dynamical systems with only self-regulating variables were also considered in [Mooij et al., 2013], where it was shown that their equilibria can be modelled as structural causal models without self-cycles.
226
+
227
+ ${}^{8}$ Note that a feedback loop in the dynamical model does not imply a feedback loop in the equilibrium equations as well. For example, there is feedback in the dynamical equations (3), (4), but there is no feedback in the causal ordering graph of the equilibrium equations in Figure 2b nor in the directed graph that is constructed in step 1 of the causal ordering algorithm.
228
+
229
+ ---
230
+
231
+ ![019639ad-572a-75ca-872c-15699fe78324_7_148_172_715_226_0.jpg](images/019639ad-572a-75ca-872c-15699fe78324_7_148_172_715_226_0.jpg)
232
+
233
+ Figure 5: The bipartite graphs associated with the viral infection model with multiple immune responses, the single immune response extension, and the multiple immune response extension are given in Figures 5a, 5b, and 5c, respectively.
234
+
235
+ Proposition 1 characterizes a class of model extensions under which certain qualitative model predictions are robust, but it also shows how we can reason about the existence of unobserved feedback loops. To be more precise, it shows that, given a submodel for a subsystem, the presence of conditional independences that are not predicted by the submodel implies the existence of an unobserved feedback loop, if we assume faithfulness. If, for example, we assume that the viral infection model without an immune response is a submodel of the system that is described by the strictly positive equilibrium solutions of the viral infection model with a single immune response, then we would observe an independence between ${U}_{\sigma }$ and ${X}_{I}$ that is not predicted by the model equations of the submodel. Proposition 1 would then imply that there is an unobserved feedback loop. Indeed, it can be seen from equations (3), (4), (7), and (8) that there is an unobserved feedback loop from ${X}_{I}\left( t\right)$ to ${X}_{E}\left( t\right)$ to ${X}_{\delta }\left( t\right)$ and back to ${X}_{I}\left( t\right)$ , while the Markov ordering graphs in Figures 2c and 3b imply that ${U}_{\sigma }$ and ${X}_{I}$ are dependent in the original model and independent in the extended model. We consider the use of existing structure learning algorithms for the detection of feedback loops in models with variables that are not self-regulating from a combination of background knowledge and observational equilibrium data to be an interesting topic for future work.
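The reasoning above can be mechanized with a reachability check (our sketch): build the directed dependence graph of the extended dynamical model under the natural labelling (an edge $j \rightarrow i$ whenever ${X}_{j}$ appears in the equation for ${X}_{i}$ , self-loops omitted) and test whether some added variable lies on a cycle with an original one:

```python
def reachable(graph, start):
    """All vertices reachable from start by a directed path (iterative DFS)."""
    seen, stack = set(), [start]
    while stack:
        for v in graph.get(stack.pop(), ()):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

# Dependence graph of equations (3), (4), (7), (8): j -> i when X_j
# appears in the equation for X_i (self-dependences omitted).
graph = {"I": ["T", "E"], "T": ["I"], "delta": ["I"], "E": ["delta"]}
original, added = {"T", "I"}, {"E", "delta"}

# A *new* feedback loop exists iff some original variable u and some added
# variable v are mutually reachable, i.e. lie on a common cycle.
new_feedback_loop = any(
    v in reachable(graph, u) and u in reachable(graph, v)
    for u in original for v in added
)
print(new_feedback_loop)  # -> True: X_I -> X_E -> X_delta -> X_I
```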
236
+
237
+ ## 5 DISCUSSION
238
+
239
+ In this work we revisited several models of viral infections and immune responses. In our treatment of these models we closely followed the approach in De Boer [2012] and therefore we only considered strictly positive solutions. If we had modelled all solutions then, for example, we would have considered the equilibrium equation ${f}_{I}$ : $\left( {{U}_{f}\beta {X}_{T} - {U}_{\delta }}\right) {X}_{I} = 0$ instead of ${f}_{I}^{ + }$ in equation (6). In that case, we would have obtained the causal ordering graph in Figure 6 instead of that in Figure 2b. Clearly, the model predictions of the causal ordering graph for the positive solutions in Figure 2b are more informative. The choice of only modelling strictly positive solutions depends on the application.
240
+
241
+ ![019639ad-572a-75ca-872c-15699fe78324_7_1116_174_251_142_0.jpg](images/019639ad-572a-75ca-872c-15699fe78324_7_1116_174_251_142_0.jpg)
242
+
243
+ Figure 6: Causal ordering graph for positive and nonpositive solutions of the viral infection model.
244
+
245
+ In many application domains mathematical models are used to predict the equilibrium behaviour of complex systems. An important issue is that (causal and Markov) predictions may strongly depend on the specifics of the model design. We revisited an example of a viral infection model [De Boer, 2012], in which implied causal relations and conditional independences change dramatically when equations describing immune reactions are added. Analysis of this behaviour through explicit calculations is neither insightful nor scalable. We showed how the technique of causal ordering can be used to efficiently analyse the robustness of implied causal effects and conditional independences under certain solvability assumptions. Using key insights provided by this approach we characterized large classes of model extensions under which predicted causal relations and conditional independences are robust. We hope that the results presented in this paper are a step towards bringing the world of causal modeling and reasoning closer to practical applications.
246
+
247
+ Our results for the characterization of the robustness of model extensions can also be used to reason about the properties of models that are the combination of two submodels. This way, we can study systems whose causal and Markov properties can be understood in a reductionist manner by considering the properties of their parts. When the properties of the whole model differ from those of its parts, a holistic modelling approach is required. For models of the equilibrium distribution of dynamical systems, we proved that extensions of dynamical models where each variable is self-regulating preserve the predicted presence of causal effects and d-connections in the original model. Based on those insights, we proposed a novel approach to model selection, where information about conditional independences can be used in combination with model equations to reason about possible model extensions or the presence of feedback mechanisms. For dynamical models with feedback, the output of structure learning algorithms does not always have a causal interpretation in terms of soft or perfect interventions for the equilibrium distribution. We have shown that in dynamical systems where each variable is self-regulating the identifiable directed edges in the learned graph do express causal relations between variables.
248
+
249
+ ## REFERENCES
+
+ Tineke Blom, Mirthe M. van Diepen, and Joris M. Mooij. Conditional independences and causal relations implied by sets of equations. Journal of Machine Learning Research, 22(178):1-62, 2021.
+
+ Stephan Bongers, Patrick Forré, Jonas Peters, and Joris M. Mooij. Foundations of structural causal models with cycles and latent variables. Annals of Statistics, 49(5):2885-2915, 2021.
+
+ D. Colombo, M. Maathuis, M. Kalisch, and Thomas S. Richardson. Learning high-dimensional directed acyclic graphs with latent and selection variables. Annals of Statistics, 40:294-321, 2012.
+
+ Rob J. De Boer. Which of our modeling predictions are robust? PLOS Computational Biology, 8(7), 2012.
+
+ Patrick Forré and Joris M. Mooij. Markov properties for graphical models with cycles and latent variables. arXiv.org preprint, arXiv:1710.08775 [math.ST], 2017. URL https://arxiv.org/abs/1710.08775.
+
+ Patrick Forré and Joris M. Mooij. Constraint-based causal discovery for non-linear structural causal models with cycles and latent confounders. In Proceedings of the 34th Annual Conference on Uncertainty in Artificial Intelligence (UAI-18), 2018.
+
+ Bernardo Gonçalves and Fabio Porto. A note on the complexity of the causal ordering problem. Artificial Intelligence, 238:154-165, 2016.
+
+ Antti Hyttinen, Frederick Eberhardt, and Patrik O. Hoyer. Learning linear cyclic causal models with latent variables. The Journal of Machine Learning Research, 13(1):3387-3439, 2012.
+
+ Gustavo Lacerda, Peter Spirtes, Joseph Ramsey, and Patrick O. Hoyer. Discovering cyclic causal models by independent components analysis. In Proceedings of the Twenty-Fourth Conference on Uncertainty in Artificial Intelligence, pages 1159-1168, 2008.
+
+ Joris M. Mooij and Tom Claassen. Constraint-based causal discovery using partial ancestral graphs in the presence of cycles. In Jonas Peters and David Sontag, editors, Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI-20), volume 124, pages 1159-1168. PMLR, 2020.
+
+ Joris M. Mooij, Dominik Janzing, and Bernhard Schölkopf. From ordinary differential equations to structural causal models: the deterministic case. In Proceedings of the 29th Annual Conference on Uncertainty in Artificial Intelligence (UAI-13), pages 440-448, 2013.
+
+ P. Nayak. Automated Modeling of Physical Systems. Springer-Verlag Berlin Heidelberg, 1995.
+
+ Judea Pearl. Causality: Models, Reasoning, and Inference. Cambridge University Press, 2009.
+
+ Alex Pothen and Chin-Ju Fan. Computing the block triangular form of a sparse matrix. ACM Transactions on Mathematical Software (TOMS), 16(4):303-324, 1990.
+
+ Thomas Richardson and Peter Spirtes. Automated discovery of linear feedback models. In Computation, Causation, and Discovery, pages 254-304, 1999.
+
+ Herbert A. Simon. On the definition of the causal relation. The Journal of Philosophy, 49(16):517-528, 1952.
+
+ Herbert A. Simon. Causal ordering and identifiability. In Studies in Econometric Methods, pages 49-74. John Wiley & Sons, 1953.
+
+ P. Spirtes, C. Glymour, and R. Scheines. Causation, Prediction, and Search. MIT Press, 2000.
+
+ Erik V. Strobl. A constraint-based algorithm for causal discovery with cycles, latent variables and selection bias. International Journal of Data Science and Analytics, 8:33-56, 2019.
+
+ J. Zhang. On the completeness of orientation rules for causal discovery in the presence of latent confounders and selection bias. Artificial Intelligence, 172(16-17):1873-1896, 2008.
+
+ ## A APPENDIX
+
+ Section A.1 provides a graphical illustration of the causal ordering algorithm applied to the equations of a cyclic model. Section A.2 contains proofs of the results in the main paper.
+
+ ### A.1 CAUSAL ORDERING ALGORITHM APPLIED TO A CYCLIC MODEL
+
+ In this section we demonstrate how the causal ordering algorithm works on the equations of a cyclic model, illustrating each step graphically. Consider the following equations for endogenous variables $\mathbf{X}$ and exogenous random variables $\mathbf{U}$:
+
+ $$
298
+ {f}_{1} : \;{g}_{1}\left( {{X}_{{v}_{1}},{U}_{{w}_{1}}}\right) = 0, \tag{13}
299
+ $$
300
+
301
+ $$
302
+ {f}_{2} : \;{g}_{2}\left( {{X}_{{v}_{2}},{X}_{{v}_{1}},{X}_{{v}_{4}},{U}_{{w}_{2}}}\right) = 0, \tag{14}
303
+ $$
304
+
305
+ $$
306
+ {f}_{3} : \;{g}_{3}\left( {{X}_{{v}_{3}},{X}_{{v}_{2}},{U}_{{w}_{3}}}\right) = 0, \tag{15}
307
+ $$
308
+
309
+ $$
310
+ {f}_{4} : \;{g}_{4}\left( {{X}_{{v}_{4}},{X}_{{v}_{3}},{U}_{{w}_{4}}}\right) = 0, \tag{16}
311
+ $$
312
+
313
+ $$
314
+ {f}_{5} : \;{g}_{5}\left( {{X}_{{v}_{5}},{X}_{{v}_{4}},{U}_{{w}_{5}}}\right) = 0. \tag{17}
315
+ $$
316
+
317
+ The associated bipartite graph in Figure 7a consists of variable vertices $V = \left\{ {{v}_{1},\ldots ,{v}_{5}}\right\}$ and equation vertices $F = \left\{ {{f}_{1},\ldots ,{f}_{5}}\right\}$ . There is an edge between a variable vertex and an equation vertex whenever that variable appears in the equation. The associated bipartite graph has exactly two perfect matchings:
+
+ $$
320
+ {M}_{1} = \left\{ {\left( {{v}_{1} - {f}_{1}}\right) ,\left( {{v}_{2} - {f}_{2}}\right) ,\left( {{v}_{3} - {f}_{3}}\right) ,\left( {{v}_{4} - {f}_{4}}\right) ,\left( {{v}_{5} - {f}_{5}}\right) }\right\}
321
+ $$
322
+
323
+ $$
324
+ {M}_{2} = \left\{ {\left( {{v}_{1} - {f}_{1}}\right) ,\left( {{v}_{2} - {f}_{3}}\right) ,\left( {{v}_{3} - {f}_{4}}\right) ,\left( {{v}_{4} - {f}_{2}}\right) ,\left( {{v}_{5} - {f}_{5}}\right) }\right\} .
325
+ $$
326
+
327
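The claim that this bipartite graph has exactly these two perfect matchings is easy to verify mechanically. The brute-force sketch below is our own illustration (the `adj` encoding of equations (13)-(17) and all names are ours, not the paper's); it assigns each equation a distinct variable that appears in it:

```python
from itertools import permutations

# Bipartite structure of equations (13)-(17): each equation vertex is
# adjacent to the endogenous variables appearing in that equation.
adj = {
    "f1": {"v1"},
    "f2": {"v1", "v2", "v4"},
    "f3": {"v2", "v3"},
    "f4": {"v3", "v4"},
    "f5": {"v4", "v5"},
}
equations = sorted(adj)
variables = ["v1", "v2", "v3", "v4", "v5"]

# A perfect matching assigns every equation a distinct adjacent variable.
matchings = [
    frozenset(zip(equations, perm))
    for perm in permutations(variables)
    if all(v in adj[f] for f, v in zip(equations, perm))
]

M1 = frozenset({("f1", "v1"), ("f2", "v2"), ("f3", "v3"), ("f4", "v4"), ("f5", "v5")})
M2 = frozenset({("f1", "v1"), ("f2", "v4"), ("f3", "v2"), ("f4", "v3"), ("f5", "v5")})
print(len(matchings))  # -> 2, namely M1 and M2
```

Brute force over $5!$ permutations is only viable for tiny graphs; for larger bipartite graphs one would use an augmenting-path algorithm such as Hopcroft-Karp.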
+ Application of the first step of the causal ordering algorithm results either in the directed graph in Figure 7b or that in Figure 7c, depending on the choice of the perfect matching. The segmentation of vertices into strongly connected components, which takes place in the second step of the algorithm, results in the clusters $\left\{ {v}_{1}\right\}, \left\{ {f}_{1}\right\}, \left\{ {{v}_{2},{v}_{3},{v}_{4},{f}_{2},{f}_{3},{f}_{4}}\right\}$, $\left\{ {v}_{5}\right\}$, and $\left\{ {f}_{5}\right\}$. To construct the clusters of the causal ordering graph we add ${S}_{i} \cup M\left( {S}_{i}\right)$ to a cluster set $\mathcal{V}$ for each ${S}_{i}$ in the segmentation. The segmentation of vertices into strongly connected components is displayed in Figures 7d and 7e; note that it is the same in both figures. It is known that the segmentation into strongly connected components is unique, i.e. it does not depend on the choice of the perfect matching [Pothen and Fan, 1990, Blom et al., 2021]. The cluster set $\mathcal{V}$ for the causal ordering graph in Figure 7f is constructed by merging clusters in the segmented graph whenever two clusters contain vertices that are matched, and by adding exogenous variables as singleton clusters. The edge set $\mathcal{E}$ for the causal ordering graph is obtained by adding edges $\left( {v \rightarrow C}\right)$ from an endogenous vertex $v$ to a cluster $C$ whenever $v \notin C$ and there is an edge from $v$ to $f \in C$ in the directed graph.
+
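Steps 1 and 2 of the algorithm can likewise be checked programmatically. The stdlib-only sketch below is our own code (mutual reachability is used as a deliberately simple strongly-connected-component test); it orients the bipartite graph under both matchings and confirms that the segmentations coincide:

```python
def orient(adj, matching):
    # Step 1: edge (f -> v) if (v - f) is in the matching, otherwise (v -> f).
    g = {x: set() for f in adj for x in (f, *adj[f])}
    for f, vs in adj.items():
        for v in vs:
            if matching[f] == v:
                g[f].add(v)
            else:
                g[v].add(f)
    return g

def reach(g, s):
    # All vertices reachable from s (including s itself).
    seen, stack = {s}, [s]
    while stack:
        for w in g[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

def segmentation(g):
    # Step 2: strongly connected components via mutual reachability.
    r = {u: reach(g, u) for u in g}
    return {frozenset(w for w in g if u in r[w] and w in r[u]) for u in g}

adj = {
    "f1": {"v1"},
    "f2": {"v1", "v2", "v4"},
    "f3": {"v2", "v3"},
    "f4": {"v3", "v4"},
    "f5": {"v4", "v5"},
}
M1 = {"f1": "v1", "f2": "v2", "f3": "v3", "f4": "v4", "f5": "v5"}
M2 = {"f1": "v1", "f2": "v4", "f3": "v2", "f4": "v3", "f5": "v5"}

S1, S2 = segmentation(orient(adj, M1)), segmentation(orient(adj, M2))
print(S1 == S2)  # True: the segmentation does not depend on the matching
print(frozenset({"v2", "v3", "v4", "f2", "f3", "f4"}) in S1)  # True
```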
+ We also add edges from exogenous vertices to clusters that contain equations in which the corresponding exogenous random variables appear.
+
+ ![019639ad-572a-75ca-872c-15699fe78324_9_908_312_675_628_0.jpg](images/019639ad-572a-75ca-872c-15699fe78324_9_908_312_675_628_0.jpg)
+
+ Figure 7: Graphical illustration of the causal ordering algorithm that was described in Section 1.1. Figure 7a shows the bipartite graph that is associated with equations (13) to (17). Application of the first step of the causal ordering algorithm results in the directed graph in Figure 7b for perfect matching $M_1$ and in Figure 7c for perfect matching $M_2$. The blue and orange edges correspond to the edges in the perfect matchings $M_1$ and $M_2$, respectively. Figures 7d and 7e show that the segmentation into strongly connected components does not depend on the choice of the perfect matching. Exogenous vertices and edges from these vertices to clusters were added to the causal ordering graph in Figure 7f.
+
+ ### A.2 PROOFS
+
+ Theorem 1. Consider model equations $F$ containing endogenous variables $V$ with bipartite graph $\mathcal{B}$. Suppose $F$ is extended with equations ${F}_{ + }$ containing endogenous variables in $V \cup {V}_{ + }$, where ${V}_{ + }$ contains the endogenous variables that are added by the model extension.${}^{9}$ Let ${\mathcal{B}}_{\text{ext }}$ be the bipartite graph associated with ${F}_{\text{ext }} = F \cup {F}_{ + }$ and ${V}_{\text{ext }} = V \cup {V}_{ + }$, and ${\mathcal{B}}_{ + }$ the bipartite graph associated with the extension ${F}_{ + }$ and ${V}_{ + }$, where variables in $V$ appearing in ${F}_{ + }$ are treated as exogenous variables (i.e. they are not added as vertices in ${\mathcal{B}}_{ + }$). If $\mathcal{B}$ and ${\mathcal{B}}_{ + }$ both have a perfect matching then:
+
+ 1. ${\mathcal{B}}_{\text{ext }}$ has a perfect matching,
+
+ 2. ancestral relations in $\mathrm{{CO}}\left( \mathcal{B}\right)$ are also present in $\operatorname{CO}\left( {\mathcal{B}}_{\text{ext }}\right)$,
+
+ 3. d-connections in $\mathrm{{MO}}\left( \mathcal{B}\right)$ are also present in $\operatorname{MO}\left( {\mathcal{B}}_{\text{ext }}\right)$.
+
+ ---
+
+ ${}^{9}$ ${V}_{ + }$ may also contain parameters or exogenous variables that appear in $F$ and become endogenous in the extended model.
+
+ ---
+
+ Proof. The causal ordering graph $\mathrm{{CO}}\left( \mathcal{B}\right)$ is constructed from a perfect matching $M$ for the bipartite graph $\mathcal{B} = \langle V, F, E\rangle$. Let ${M}_{ + }$ be a perfect matching for ${\mathcal{B}}_{ + }$. Note that ${M}_{\text{ext }} = M \cup {M}_{ + }$ is a perfect matching for ${\mathcal{B}}_{\text{ext }} = \left\langle {V \cup {V}_{ + }, F \cup {F}_{ + },{E}_{\text{ext }}}\right\rangle$. Following the causal ordering algorithm for $\mathcal{B}, M$ and ${\mathcal{B}}_{\text{ext }},{M}_{\text{ext }}$, we note that $\mathcal{G}\left( {\mathcal{B}, M}\right)$ is a subgraph of $\mathcal{G}\left( {{\mathcal{B}}_{\text{ext }},{M}_{\text{ext }}}\right)$ and hence clusters in $\mathrm{{CO}}\left( \mathcal{B}\right)$ are fully contained in clusters in $\mathrm{{CO}}\left( {\mathcal{B}}_{\text{ext }}\right)$. Therefore ancestral relations in $\mathrm{{CO}}\left( \mathcal{B}\right)$ are also present in $\mathrm{{CO}}\left( {\mathcal{B}}_{\text{ext }}\right)$.
+
+ It follows directly from the definition [Forré and Mooij, 2017] that $\sigma$ -connections in a graph remain present if the graph is extended with additional vertices and edges. The directed graphs $\mathcal{G}\left( {\mathcal{B}, M}\right)$ and $\mathcal{G}\left( {{\mathcal{B}}_{\text{ext }},{M}_{\text{ext }}}\right)$ can be augmented with exogenous variables by adding exogenous vertices to these graphs with directed edges towards the equations in which they appear. The $\sigma$ -connections in the augmentation of $\mathcal{G}\left( {\mathcal{B}, M}\right)$ must also be present in the augmentation of $\mathcal{G}\left( {{\mathcal{B}}_{\text{ext }},{M}_{\text{ext }}}\right)$ . By [Corollary 2.8.4, Forré and Mooij, 2017] and [Lemma 43, Blom et al., 2021] we have that d-connections in $\operatorname{MO}\left( \mathcal{B}\right)$ must also be present in $\operatorname{MO}\left( {\mathcal{B}}_{\text{ext }}\right)$ .
+
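As a small numerical illustration of the second claim of Theorem 1 (a toy check, not a substitute for the proof; the two-equation model and its one-equation extension below are invented for this example), one can verify that every ancestral relation of the original directed graph survives the extension:

```python
def orient(adj, matching):
    # Edge (f -> v) for matched pairs, (v -> f) otherwise (step 1 of causal ordering).
    g = {x: set() for f in adj for x in (f, *adj[f])}
    for f, vs in adj.items():
        for v in vs:
            if matching[f] == v:
                g[f].add(v)
            else:
                g[v].add(f)
    return g

def ancestral_pairs(g):
    # All pairs (x, y) such that there is a directed path from x to y.
    pairs = set()
    for s in g:
        seen, stack = set(), [s]
        while stack:
            for w in g[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    pairs.add((s, w))
                    stack.append(w)
    return pairs

# Original model B: f1 determines v1; f2 determines v2 from v1.
adj = {"f1": {"v1"}, "f2": {"v1", "v2"}}
M = {"f1": "v1", "f2": "v2"}

# Extension B_+: one new equation f3 for one new variable v3, reading v2.
adj_ext = {**adj, "f3": {"v2", "v3"}}
M_ext = {**M, "f3": "v3"}

old = ancestral_pairs(orient(adj, M))
new = ancestral_pairs(orient(adj_ext, M_ext))
print(old <= new)  # True: ancestral relations of the original model are preserved
```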
+ Theorem 2. Let $F, {F}_{ + }, {F}_{\text{ext }}, V, {V}_{ + }, {V}_{\text{ext }}, \mathcal{B}, {\mathcal{B}}_{ + }$, and ${\mathcal{B}}_{\text{ext }}$ be as in Theorem 1. If $\mathcal{B}$ and ${\mathcal{B}}_{ + }$ both have perfect matchings and no vertex in ${V}_{ + }$ is adjacent${}^{10}$ to a vertex in $F$ in ${\mathcal{B}}_{\text{ext }}$ then:
+
+ 1. ancestral relations absent in $\mathrm{{CO}}\left( \mathcal{B}\right)$ are also absent in $\operatorname{CO}\left( {\mathcal{B}}_{\text{ext }}\right)$ ,
+
+ 2. d-connections absent in $\mathrm{{MO}}\left( \mathcal{B}\right)$ are also absent in $\operatorname{MO}\left( {\mathcal{B}}_{\text{ext }}\right)$ .
+
+ Proof. Since $\mathcal{B}$ and ${\mathcal{B}}_{ + }$ both have perfect matchings, the results of Theorem 1 hold. Let $\mathcal{G}\left( {\mathcal{B}, M}\right)$ and $\mathcal{G}\left( {{\mathcal{B}}_{\text{ext }},{M}_{\text{ext }}}\right)$ be as in the proof of Theorem 1. Note that in ${M}_{\text{ext }}$ vertices in ${F}_{ + }$ are matched to vertices in ${V}_{ + }$ and therefore edges between ${f}_{ + } \in {F}_{ + }$ and $v \in {\operatorname{adj}}_{{\mathcal{B}}_{\text{ext }}}\left( {F}_{ + }\right) \smallsetminus {V}_{ + }$ are oriented as $\left( {{f}_{ + } \leftarrow v}\right)$ in $\mathcal{G}\left( {{\mathcal{B}}_{\text{ext }},{M}_{\text{ext }}}\right)$. By assumption, we therefore have that vertices in ${V}_{ + }$ are non-ancestors of vertices in $V \cup F$ in $\mathcal{G}\left( {{\mathcal{B}}_{\text{ext }},{M}_{\text{ext }}}\right)$. Since $M \subseteq {M}_{\text{ext }}$ we know that the same directed edges between vertices in $V$ and $F$ appear in both $\mathcal{G}\left( {\mathcal{B}, M}\right)$ and $\mathcal{G}\left( {{\mathcal{B}}_{\text{ext }},{M}_{\text{ext }}}\right)$. Notice that the subgraph of $\mathcal{G}\left( {{\mathcal{B}}_{\text{ext }},{M}_{\text{ext }}}\right)$ induced by the vertices $V \cup F$ coincides with $\mathcal{G}\left( {\mathcal{B}, M}\right)$. Hence $\mathrm{{CO}}\left( \mathcal{B}\right)$ is an induced subgraph of $\mathrm{{CO}}\left( {\mathcal{B}}_{\text{ext }}\right)$ and $\mathrm{{MO}}\left( \mathcal{B}\right)$ is an induced subgraph of $\operatorname{MO}\left( {\mathcal{B}}_{\text{ext }}\right)$.
+
+ Lemma 1. Consider a first-order dynamical model in canonical form for endogenous variables $V$ and let $F$ be the equilibrium equations of the model. If all variables in $V$ are self-regulating then $\mathcal{B}$ has a perfect matching.
+
+ Proof. Recall that the equilibrium equation constructed from the derivative of a variable $i$ is labelled ${f}_{i}$ according to the natural labelling. When a variable ${v}_{i} \in V$ is self-regulating, it can be matched to its equilibrium equation ${f}_{i}$. If this holds for all variables in $V$ then $\mathcal{B}$ has a perfect matching.
+
+ Lemma 2. Let $\mathcal{B}$ be a bipartite graph and let $M$ and ${M}^{\prime }$ be two distinct perfect matchings. The associated directed graphs $\mathcal{G}\left( {\mathcal{B}, M}\right)$ and $\mathcal{G}\left( {\mathcal{B},{M}^{\prime }}\right)$ that are obtained in step 1 of the causal ordering algorithm differ only in the direction of cycles.
+
+ Proof. This follows directly from the fact that the output of the causal ordering algorithm does not depend on the choice of the perfect matching. This result is a direct consequence of Theorem 4 and Theorem 6 in Blom et al. [2021].
+
+ Proposition 1. Consider a first-order dynamical model in canonical form for endogenous variables $V$ and an extension consisting of canonical first-order differential equations for additional endogenous variables ${V}_{ + }$ . Let $F$ and ${F}_{\text{ext }} =$ $F \cup {F}_{ + }$ be the equilibrium equations of the original and extended model respectively. Let $\mathcal{B} = \langle V, F, E\rangle$ be the bipartite graph associated with $F$ and ${\mathcal{B}}_{\text{ext }} = \left\langle {{V}_{\text{ext }},{F}_{\text{ext }},{E}_{\text{ext }}}\right\rangle$ the bipartite graph associated with ${F}_{\text{ext }}$ . Assume that $\mathcal{B}$ and ${\mathcal{B}}_{\text{ext }}$ both have perfect matchings. If the model extension does not introduce a new feedback loop with the original dynamical model, then d-connections in $\mathrm{{MO}}\left( \mathcal{B}\right)$ are also present in $\operatorname{MO}\left( {\mathcal{B}}_{\text{ext }}\right)$ .
+
+ Proof. Let ${E}_{\text{nat }}$ be the set of edges $\left( {{v}_{i} - {f}_{i}}\right)$ associated with the natural labelling of the equilibrium equations of the extended dynamical model. Note that the feedback loops in the dynamical model coincide with cycles in the directed graph $\mathcal{G}\left( {{\mathcal{B}}_{\text{nat }},{M}_{\text{nat }}}\right)$ that is obtained by applying step 1 of the causal ordering algorithm to the bipartite graph ${\mathcal{B}}_{\text{nat }} = \left\langle {{V}_{\text{ext }},{F}_{\text{ext }},{E}_{\text{ext }} \cup {E}_{\text{nat }}}\right\rangle$ using the perfect matching ${M}_{\text{nat }} = {E}_{\text{nat }}$ .
+
+ By Theorem 1, we know that if $\mathcal{B}$ and ${\mathcal{B}}_{ + }$ (the subgraph of ${\mathcal{B}}_{\text{ext }}$ induced by ${V}_{ + } \cup {F}_{ + }$ ) both have perfect matchings then d-connections in $\mathrm{{MO}}\left( \mathcal{B}\right)$ must also be present in $\mathrm{{MO}}\left( {\mathcal{B}}_{\text{ext }}\right)$ . Therefore, if there exists a perfect matching ${M}_{\text{ext }}$ for ${\mathcal{B}}_{\text{ext }}$ so that each $f \in F$ is ${M}_{\text{ext }}$ -matched to a vertex $v \in V$ and each ${f}_{ + } \in {F}_{ + }$ is ${M}_{\text{ext }}$ -matched to a vertex ${v}_{ + } \in$ ${V}_{ + }$ in ${\mathcal{B}}_{\text{ext }}$ , d-connections in $\operatorname{MO}\left( \mathcal{B}\right)$ are also present in $\operatorname{MO}\left( {\mathcal{B}}_{\text{ext }}\right)$ .
+
+ We will prove the contrapositive of the proposition, so we start with the assumption that the d-connections in $\operatorname{MO}\left( \mathcal{B}\right)$ are not preserved in $\operatorname{MO}\left( {\mathcal{B}}_{\text{ext }}\right)$. In that case, there must exist a perfect matching ${M}_{\text{ext }}$ for ${\mathcal{B}}_{\text{ext }}$ so that there is an $f \in F$ that is ${M}_{\text{ext }}$-matched to a ${v}_{ + } \in {V}_{ + }$ and a $v \in V$ that is ${M}_{\text{ext }}$-matched to an ${f}_{ + } \in {F}_{ + }$. Note that since ${\mathcal{B}}_{\text{ext }}$ is a subgraph of ${\mathcal{B}}_{\text{nat }}$, this perfect matching ${M}_{\text{ext }}$ is also a perfect matching for ${\mathcal{B}}_{\text{nat }}$. Lemma 2 says that $\mathcal{G}\left( {{\mathcal{B}}_{\text{nat }},{M}_{\text{nat }}}\right)$ and $\mathcal{G}\left( {{\mathcal{B}}_{\text{nat }},{M}_{\text{ext }}}\right)$ only differ in the direction of cycles. We know that vertices in $V$ are only ${M}_{\text{nat }}$-matched to vertices in $F$, while vertices in ${V}_{ + }$ are only ${M}_{\text{nat }}$-matched to vertices in ${F}_{ + }$. Therefore, the vertices ${v}_{ + }$ and $f$ must be on a directed cycle in both directed graphs, as must $v$ and ${f}_{ + }$. Hence the model extension ${F}_{ + }$ introduces a new feedback loop that includes variables in the original model.
+
+ ---
+
+ ${}^{10}$ A vertex in ${V}_{ + }$ is considered adjacent to $F$ if it corresponds with one of the exogenous random variables or parameters in $F$ that become endogenous in the model extension.
+
+ ---
UAI/UAI 2022/UAI 2022 Conference/BGGevIUicl9/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,227 @@
+ § ROBUSTNESS OF MODEL PREDICTIONS UNDER EXTENSION
+
+ § ABSTRACT
+
+ Mathematical models of the real world are simplified representations of complex systems. A caveat to using mathematical models is that predicted causal effects and conditional independences may not be robust under model extensions, limiting applicability of such models. In this work, we consider conditions under which qualitative model predictions are preserved when two models are combined. Under mild assumptions, we show how to use the technique of causal ordering to efficiently assess the robustness of qualitative model predictions. We also characterize a large class of model extensions that preserve qualitative model predictions. For dynamical systems at equilibrium, we demonstrate how novel insights help to select appropriate model extensions and to reason about the presence of feedback loops. We apply our ideas to a viral infection model with immune responses.
+
+ § 1 INTRODUCTION
+
+ There are several interesting systems for which the causal relations and Markov properties cannot be modelled by Structural Causal Models (SCMs) [Pearl, 2009, Bongers et al., 2021]. The causal ordering algorithm, first introduced by Simon [1953], can be used to better understand the qualitative model predictions of these systems [Blom et al., 2021]. In this paper, we take a closer look at what happens to these predictions when two systems are combined. Particularly, we give conditions under which properties of the whole system can be understood in terms of properties of its parts. We discuss how a holistic approach towards causal modelling may result in novel insights when we derive and test the predictions of systems for which new properties emerge from the combination of their parts.
+
+ In the first part of the paper, we focus on the practical issue of assessing whether qualitative model predictions are robust under model extensions. We revisit the observations of De Boer [2012] who demonstrated that qualitative predictions of a certain viral infection model change dramatically when the model is extended with extra equations describing simple immune responses. To assess the robustness of predicted causal relations or conditional independences under such an alteration of the model, it is useful to characterize a class of model extensions that lead to unaltered qualitative model predictions. In this work, we propose the technique of causal ordering [Simon, 1953] as an efficient method to assess the robustness of qualitative causal predictions. Under mild conditions, this allows us to characterize a large class of model extensions that preserve qualitative causal predictions. We also consider the class of models that are obtained from the equilibrium equations of dynamical models where each variable is self-regulating. For this class, we show that the predicted presence of causal relations and absence of conditional independences is robust when the model is extended with new equations.
+
+ Key aspects of the scientific method include generating a model or hypothesis that explains a phenomenon, deriving testable predictions from this model or hypothesis, and designing an experiment to test these predictions in the real world. The promise of causal discovery algorithms is that they are able to learn causal relations from a combination of background knowledge and data. The general idea of many constraint-based approaches (e.g. PC or FCI and variants thereof [Spirtes et al., 2000, Zhang, 2008, Colombo et al., 2012]) is to exploit information about conditional independences in a probability distribution to construct an equivalence class of graphs that encode certain aspects of the probability distribution, and then draw conclusions about the causal relations from the graphs. There is a large amount of literature concerning particular algorithms for which the learned structure expresses causal relations under certain conditions (e.g. linearity, causal sufficiency, absence of feedback loops), see for example [Richardson and Spirtes, 1999, Spirtes et al., 2000, Lacerda et al., 2008, Zhang, 2008, Colombo et al., 2012, Hyttinen et al., 2012, Forré and Mooij, 2018, Strobl, 2019, Mooij and Claassen, 2020]. In the last part of this paper, our main interest is in dynamical models with the property that graphs representing relations between variables by encoding the conditional independences of their equilibrium distribution should not be interpreted causally at all. For the case that a model for a subsystem is given, we present novel insights that enable us to reject model extensions based on conditional independences in equilibrium data of the subsystem. We demonstrate how this approach allows us to reason about the presence of variables that are not self-regulating and feedback mechanisms that involve unobserved variables from the equilibrium distribution of certain dynamical models.
+
+ § 1.1 CAUSAL ORDERING GRAPH
+
+ Here, we give a concise introduction to the technique of causal ordering, introduced by Simon [1952]. ${}^{1}$ In short, the causal ordering algorithm takes a set of equations as input and returns a causal ordering graph that encodes the effects of interventions and a Markov ordering graph that implies conditional independences between variables in the model [Theorem 17, Blom et al., 2021]. Compared with the popular framework of structural causal models [Pearl, 2009], the distinction between the causal ordering and Markov ordering graphs does not provide new insights for acyclic models, but it results in non-trivial conclusions for models with feedback, as suggested in the discussion in Section 2.4 and explained in detail in [Blom et al., 2021].
+
+ We consider models consisting of equations $F$ that contain endogenous variables $V$ , independent exogenous random variables $W$ , and (constant, exogenous) parameters $P$ . The structure of equations and the endogenous variables that appear in them can be represented by the associated bipartite graph $\mathcal{B} = \langle V,F,E\rangle$ , where each endogenous variable is associated with a distinct vertex in $V$ , and each equation is associated with a distinct vertex in $F$ . There is an edge $\left( {v - f}\right) \in E$ if and only if variable $v \in V$ appears in equation $f \in F$ . The causal ordering algorithm constructs a directed cluster graph $\langle \mathcal{V},\mathcal{E}\rangle$ , where $\mathcal{V}$ is a partition of vertices $V$ into clusters and $\mathcal{E}$ is a set of directed edges from vertices in $V$ to clusters in $\mathcal{V}$ . Given a bipartite graph $\mathcal{B} = \langle V,F,E\rangle$ with a perfect matching $M$ , the causal ordering algorithm proceeds with the following three steps [Nayak, 1995, Blom et al., 2021]:
+
+ 1. For $v \in V, f \in F$, orient edges $(v - f)$ as $\left( {v \leftarrow f}\right)$ when $\left( {v - f}\right) \in M$ and as $\left( {v \rightarrow f}\right)$ otherwise; this yields a directed graph $\mathcal{G}\left( {\mathcal{B},M}\right)$.
+
+ 2. Find all strongly connected components ${S}_{1},{S}_{2},\ldots ,{S}_{n}$ of $\mathcal{G}\left( {\mathcal{B},M}\right)$ . Let $\mathcal{V}$ be the set of clusters ${S}_{i} \cup M\left( {S}_{i}\right)$ for $i \in \{ 1,\ldots ,n\}$ , where $M\left( {S}_{i}\right)$ denotes the set of vertices that are matched to vertices in ${S}_{i}$ in matching $M$ .
+
+ 3. Let $\operatorname{cl}\left( f\right)$ denote the cluster in $\mathcal{V}$ containing $f$ . For each $\left( {v \rightarrow f}\right)$ such that $v \notin \operatorname{cl}\left( f\right)$ add an edge $\left( {v \rightarrow \operatorname{cl}\left( f\right) }\right)$ to $\mathcal{E}$ .
+
+ Independent exogenous random variables and parameters are then added as singleton clusters with edges towards the clusters of the equations in which they appear. It has been shown that the resulting directed cluster graph $\operatorname{CO}\left( \mathcal{B}\right) = \langle \mathcal{V},\mathcal{E}\rangle$, which we refer to as the causal ordering graph, is independent of the choice of perfect matching [Theorem 4, Blom et al., 2021]. Example 1 shows how the algorithm works, and a graphical illustration of the algorithm for a more elaborate cyclic model can be found in Appendix A.1.
+
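The three steps above can be turned into a short program. The sketch below is our own stdlib-only illustration, not code from the paper; run on the bipartite graph of Example 1 it recovers the clusters $\{v_1, f_1\}$ and $\{v_2, f_2\}$ and the edge from $v_1$ into the second cluster (exogenous singleton clusters are omitted for brevity):

```python
def causal_ordering(adj, matching):
    # adj maps each equation vertex to the endogenous variables appearing in it;
    # matching maps each equation to its matched variable (a perfect matching).
    g = {x: set() for f in adj for x in (f, *adj[f])}
    for f, vs in adj.items():               # step 1: orient edges
        for v in vs:
            if matching[f] == v:
                g[f].add(v)
            else:
                g[v].add(f)

    def reach(s):
        seen, stack = {s}, [s]
        while stack:
            for w in g[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen

    r = {u: reach(u) for u in g}            # step 2: SCCs, then merge matched partners
    partner = {**matching, **{v: f for f, v in matching.items()}}
    clusters = {
        frozenset(set(scc) | {partner[x] for x in scc})
        for scc in ({w for w in g if u in r[w] and w in r[u]} for u in g)
    }
    cl = {x: C for C in clusters for x in C}
    edges = {                               # step 3: edges v -> cl(f) with v outside cl(f)
        (v, cl[f]) for v in g for f in g[v] if f in adj and v not in cl[f]
    }
    return clusters, edges

# Example 1: f1 contains v1; f2 contains v1 and v2; matching M = {(v1-f1), (v2-f2)}.
clusters, edges = causal_ordering({"f1": {"v1"}, "f2": {"v1", "v2"}},
                                  {"f1": "v1", "f2": "v2"})
print(sorted(map(sorted, clusters)))  # [['f1', 'v1'], ['f2', 'v2']]
print(edges)                          # the single edge from v1 into the cluster {v2, f2}
```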
+ Example 1. Let $V = \left\{ {{v}_{1},{v}_{2}}\right\} ,W = \left\{ {{w}_{1},{w}_{2}}\right\}$ , and $P =$ $\left\{ {{p}_{1},{p}_{2}}\right\}$ be index sets. Consider model equations ${f}_{1}$ and ${f}_{2}$ with endogenous variables ${\left( {X}_{v}\right) }_{v \in V}$ , exogenous random variables ${\left( {U}_{w}\right) }_{w \in W}$ and parameters ${C}_{p}$ with $p \in P$ below.
+
+ $$
32
+ {f}_{1} : \;{C}_{{p}_{1}}{X}_{{v}_{1}} - {U}_{{w}_{1}} = 0, \tag{1}
33
+ $$
34
+
35
+ $$
36
+ {f}_{2} : \;{C}_{{p}_{2}}{X}_{{v}_{2}} + {X}_{{v}_{1}} + {U}_{{w}_{2}} = 0. \tag{2}
37
+ $$
38
+
39
+ The bipartite graph $\mathcal{B} = \langle V,F,E\rangle$ in Figure 1a, with $E = \left\{ {\left( {{v}_{1} - {f}_{1}}\right) ,\left( {{v}_{1} - {f}_{2}}\right) ,\left( {{v}_{2} - {f}_{2}}\right) }\right\}$ is a compact representation of the model structure. This graph has a perfect matching $M = \left\{ {\left( {{v}_{1} - {f}_{1}}\right) ,\left( {{v}_{2} - {f}_{2}}\right) }\right\}$ . By orienting edges in $\mathcal{B}$ according to the rules in step 1 of the causal ordering algorithm we obtain the directed graph $\left\langle {V \cup F,{E}_{\text{ dir }}}\right\rangle$ with ${E}_{\text{ dir }} = \left\{ {\left( {{f}_{1} \rightarrow {v}_{1}}\right) ,\left( {{f}_{2} \rightarrow {v}_{2}}\right) ,\left( {{v}_{1} \rightarrow {f}_{2}}\right) }\right\}$ . The clusters ${C}_{1} = \left\{ {{v}_{1},{f}_{1}}\right\}$ and ${C}_{2} = \left\{ {{v}_{2},{f}_{2}}\right\}$ are added to $\mathcal{V}$ in step 2 of the algorithm, and the edge $\left( {{v}_{1} \rightarrow {C}_{2}}\right)$ is added to $\mathcal{E}$ in step 3 . Finally, we add the parameters $P$ and independent exogenous random variables $W$ as singleton clusters to $\mathcal{V}$ , and the edges $\left( {{p}_{1} \rightarrow {C}_{1}}\right) ,\left( {{w}_{1} \rightarrow {C}_{1}}\right) ,\left( {{p}_{2} \rightarrow {C}_{2}}\right)$ , and $\left( {{w}_{2} \rightarrow {C}_{2}}\right)$ to $\mathcal{E}$ . The resulting causal ordering graph is given in Figure 1b.
+
+ Throughout this work, we will assume that models are uniquely solvable with respect to the causal ordering graph, which roughly means that for each cluster, the equations in that cluster can be solved uniquely for the endogenous variables in that cluster (see [Definition 14, Blom et al., 2021] for details). A perfect intervention on a cluster that contains equation vertices represents a model change where the equations in the targeted cluster are replaced by equations that set the endogenous variables in that cluster equal to constant values. A soft intervention targets an equation, parameter, or exogenous variable, but does not affect which variables appear in the equations. We say that there is a directed path from a vertex $x$ to a vertex $y$ in a causal ordering graph $\langle \mathcal{V},\mathcal{E}\rangle$ if either $\operatorname{cl}\left( x\right) = \operatorname{cl}\left( y\right)$ or there is a sequence of clusters ${C}_{1} = \operatorname{cl}\left( x\right) ,{C}_{2},\ldots ,{C}_{k - 1},{C}_{k} = \operatorname{cl}\left( y\right)$ so that for all $i \in \{ 1,\ldots ,k - 1\}$ there is a vertex ${z}_{i} \in {C}_{i}$ such that $\left( {{z}_{i} \rightarrow {C}_{i + 1}}\right) \in \mathcal{E}$. It can be shown that a) the presence of a directed path from a cluster, equation, parameter, or exogenous variable that is targeted by a soft intervention towards a certain variable in the causal ordering graph implies that the intervention has a generic effect on that variable and b) if no such path exists there is no causal effect of the intervention on that variable [Theorem 20, Blom et al., 2021].
+
+ ${}^{1}$ Actually, we consider an equivalent algorithm for causal ordering that was shown to be more computationally efficient by [Nayak, 1995, Gonçalves and Porto, 2016]. For more details, see [Blom et al., 2021].
44
+
45
+ ${}^{2}$ A perfect matching $M$ is a subset of edges in a bipartite graph so that every vertex is incident to exactly one edge in $M$ . Note that not every bipartite graph has a perfect matching.
46
+
47
48
+
49
+ Figure 1: The bipartite graph in Figure 1a is a compact representation of the model in Example 1. The corresponding causal ordering graph and Markov ordering graph are given in Figures 1b and 1c respectively. Exogenous variables are denoted by dashed circles and parameters by black dots.
50
+
51
+ § 1.2 MARKOV ORDERING GRAPH
52
+
53
+ The causal ordering graph $\mathrm{{CO}}\left( \mathcal{B}\right) = \langle \mathcal{V},\mathcal{E}\rangle$ of model equations $F$ with endogenous variables $V$ , exogenous random variables $W$ , parameters $P$ , and bipartite graph $\mathcal{B}$ can be used to construct the Markov ordering graph, which is a DAG $\operatorname{MO}\left( \mathcal{B}\right) = \langle V \cup W,E\rangle$ , with $\left( {x \rightarrow y}\right) \in E$ if and only if $\left( {x \rightarrow \operatorname{cl}\left( y\right) }\right) \in \mathcal{E}$ . The Markov ordering graph for the model equations in Example 1 is given in Figure 1c. It has been shown that, under the assumption of unique solvability w.r.t. the causal ordering graph, d-separations in the Markov ordering graph imply conditional independences between the corresponding variables [Blom et al., 2021]. Henceforth, we will assume that the probability distribution of the solution ${\left( {X}_{v}\right) }_{v \in V}$ to a set of model equations is faithful to the Markov ordering graph. In other words, each conditional independence in the distribution implies a d-separation in the Markov ordering graph. Under the assumption that data is generated from such a model, some causal discovery algorithms, such as the PC algorithm [Spirtes et al., 2000], aim to construct the Markov equivalence class of the Markov ordering graph. In this work, we will specifically focus on feedback models for which the Markov ordering graph of the equilibrium distribution, and consequently the output of many causal discovery algorithms, does not have a straightforward causal interpretation.
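The step from causal ordering graph to Markov ordering graph is purely mechanical: keep only the vertices in $V \cup W$ and draw $\left( {x \rightarrow y}\right)$ whenever $\left( {x \rightarrow \operatorname{cl}\left( y\right) }\right)$ is a cluster edge. A small sketch for Example 1 (hypothetical Python; names as in Figure 1):

```python
# Clusters and cluster edges of CO(B) for Example 1, as in Figure 1b.
C1, C2 = frozenset({"v1", "f1"}), frozenset({"v2", "f2"})
cl = {"v1": C1, "f1": C1, "v2": C2, "f2": C2,
      "w1": frozenset({"w1"}), "w2": frozenset({"w2"})}
co_edges = {("v1", C2), ("p1", C1), ("w1", C1), ("p2", C2), ("w2", C2)}

# MO(B) keeps only variable vertices V ∪ W; parameters p1, p2 drop out.
mo_vertices = ["v1", "v2", "w1", "w2"]
mo_edges = {(x, y) for (x, C) in co_edges if x in mo_vertices
            for y in mo_vertices if cl[y] == C}
```

For this example the resulting edges are exactly ${w}_{1} \rightarrow {v}_{1}$ , ${w}_{2} \rightarrow {v}_{2}$ , and ${v}_{1} \rightarrow {v}_{2}$ , matching Figure 1c.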
54
+
55
+ § 2 CAUSAL ORDERING FOR A VIRAL INFECTION MODEL
56
+
57
+ This work was inspired by a viral infection model discussed by De Boer [2012], who showed through explicit calculations that the predictions of the model are not robust under addition of an immune response. This casts doubt on the correct interpretation of variables and parameters in the model. For many systems it is intrinsically difficult to study their behaviour in detail. The use of simplified mathematical models that capture key characteristics aids in the analysis of certain properties of the system. The hope is that the explanations inferred from model equations are legitimate accounts of the true underlying system [De Boer, 2012]. In reality, a modeller must take into account that the outcome of these studies may be contingent on the specifics of the model design. Here, we demonstrate how causal ordering can be used as a scalable tool to assess the robustness of model predictions without requiring explicit calculations.
58
+
59
+ § 2.1 VIRAL INFECTION WITHOUT IMMUNE RESPONSE
60
+
61
+ Let ${U}_{\sigma }$ be a production term for target cells, ${d}_{T}$ the death rate for target cells, ${U}_{f}$ the fraction of successful infections, and ${U}_{\delta }$ the death rate of productively infected cells. Define $\beta = \frac{bp}{c}$ , where $b$ is the infection rate, $p$ the amount of virus produced per infected cell, and $c$ the clearance rate of viral particles. The following first-order differential equations describe how the amount of target cells ${X}_{T}\left( t\right)$ and the amount of infected cells ${X}_{I}\left( t\right)$ evolve over time [De Boer, 2012]:
62
+
63
+ $$
64
+ {\dot{X}}_{T}\left( t\right) = {U}_{\sigma } - {d}_{T}{X}_{T}\left( t\right) - \beta {X}_{T}\left( t\right) {X}_{I}\left( t\right) , \tag{3}
65
+ $$
66
+
67
+ $$
68
+ {\dot{X}}_{I}\left( t\right) = \left( {{U}_{f}\beta {X}_{T}\left( t\right) - {U}_{\delta }}\right) {X}_{I}\left( t\right) . \tag{4}
69
+ $$
70
+
71
+ Suppose that we want to use this simple viral infection model to explain why the set-point viral load (i.e. the total amount of virus circulating in the bloodstream) of chronically infected HIV-patients differs by several orders of magnitude, as De Boer [2012] does. To analyse this problem we look at the equilibrium equations that are implied by equations (3) and (4):${}^{3}$
72
+
73
+ $$
74
+ {f}_{T} : \;{U}_{\sigma } - {d}_{T}{X}_{T} - \beta {X}_{T}{X}_{I} = 0, \tag{5}
75
+ $$
76
+
77
+ $$
78
+ {f}_{I}^{ + } : \;{U}_{f}\beta {X}_{T} - {U}_{\delta } = 0. \tag{6}
79
+ $$
80
+
81
+ Throughout the remainder of this work we will use this natural labelling of equilibrium equations, where the equation derived from the derivative ${\dot{X}}_{i}\left( t\right)$ is labelled ${f}_{i}$ . For first-order differential equations that are written in canonical form, ${\dot{X}}_{i}\left( t\right) = {g}_{i}\left( {X\left( t\right) }\right)$ , the natural labelling always exists.
82
+
83
+ ${}^{3}$ Since we are only interested in strictly positive solutions we removed ${X}_{I}$ from the equilibrium equation ${f}_{I} : \left( {{U}_{f}\beta {X}_{T} - {U}_{\delta }}\right) {X}_{I} = 0$ to obtain ${f}_{I}^{ + }$ .
84
+
85
86
+
87
+ Figure 2: Graphical representations of the viral infection model in equations (5) and (6). Vertices ${v}_{i}$ and ${w}_{j}$ correspond to variables ${X}_{i}$ and ${U}_{j}$ , respectively. The causal ordering graph represents generic effects of interventions. The d-separations in Figure 2c imply conditional independences.
88
+
89
+ Suppose that ${U}_{\sigma },{U}_{f}$ and ${U}_{\delta }$ are independent exogenous random variables taking values in ${\mathbb{R}}_{ > 0}$ and ${d}_{T},\beta$ are strictly positive parameters. The associated bipartite graph, causal ordering graph, and Markov ordering graph are given in Figure 2. The causal ordering graph tells us that soft interventions targeting ${U}_{\sigma },{U}_{f},{U}_{\delta },{d}_{T}$ , or $\beta$ generically have an effect on the equilibrium distribution of the amount of infected cells ${X}_{I}$ . From here on, we say that the causal ordering graph of a model predicts the generic presence or absence of causal effects. The Markov ordering graph shows that ${v}_{T}$ and ${w}_{\sigma }$ are d-separated. This implies that the amount of target cells ${X}_{T}$ should be independent of the production rate ${U}_{\sigma }$ when the system is at equilibrium. Henceforth, we will say that the Markov ordering graph predicts the generic presence or absence of conditional dependences.
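Solving equations (5) and (6) by hand gives ${X}_{T} = {U}_{\delta }/\left( {{U}_{f}\beta }\right)$ and ${X}_{I} = \left( {{U}_{\sigma } - {d}_{T}{X}_{T}}\right) /\left( {\beta {X}_{T}}\right)$ , so the d-separation of ${v}_{T}$ and ${w}_{\sigma }$ can be checked numerically: changing ${U}_{\sigma }$ moves ${X}_{I}$ but leaves ${X}_{T}$ untouched (illustrative Python with arbitrary parameter values):

```python
def equilibrium(U_sigma, U_f, U_delta, d_T, beta):
    # Positive equilibrium of (5)-(6): f_I^+ fixes X_T, then f_T fixes X_I.
    X_T = U_delta / (U_f * beta)
    X_I = (U_sigma - d_T * X_T) / (beta * X_T)
    return X_T, X_I

# Vary only U_sigma (parameter values are arbitrary illustrations):
a = equilibrium(U_sigma=10.0, U_f=0.5, U_delta=2.0, d_T=0.1, beta=1.0)
b = equilibrium(U_sigma=20.0, U_f=0.5, U_delta=2.0, d_T=0.1, beta=1.0)
```

Here `a` and `b` share the same ${X}_{T}$ but differ in ${X}_{I}$ , as the Markov ordering graph predicts.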
90
+
91
+ § 2.2 VIRAL INFECTION WITH A SINGLE IMMUNE RESPONSE
92
+
93
+ The viral infection model in equations (3) and (4) can be extended with a simple immune response ${X}_{E}\left( t\right)$ by adding the following dynamic and static equations:
94
+
95
+ $$
96
+ {\dot{X}}_{E}\left( t\right) = \left( {{U}_{a}{X}_{I}\left( t\right) - {d}_{E}}\right) {X}_{E}\left( t\right) , \tag{7}
97
+ $$
98
+
99
+ $$
100
+ {X}_{\delta }\left( t\right) = {d}_{I} + {U}_{k}{X}_{E}\left( t\right) , \tag{8}
101
+ $$
102
+
103
+ where ${U}_{a}$ is an activation rate, ${d}_{E}$ and ${d}_{I}$ are turnover rates, and ${U}_{k}$ is a mass-action killing rate [De Boer, 2012]. Note that the exogenous random variable ${U}_{\delta }$ is now treated as an endogenous variable ${X}_{\delta }\left( t\right)$ instead. We derive the following equilibrium equations using the natural labelling provided by equations (7) and (8):${}^{4}$
104
+
105
+ $$
106
+ {f}_{E}^{ + } : \;{U}_{a}{X}_{I} - {d}_{E} = 0, \tag{9}
107
+ $$
108
+
109
+ $$
110
+ {f}_{\delta } : \;{X}_{\delta } - {d}_{I} - {U}_{k}{X}_{E} = 0. \tag{10}
111
+ $$
112
+
113
+ Henceforth, we will call the addition of equations ${F}_{ + }$ to $F$ a model extension. Notice that when two sets of equations are combined, there may exist variables that were exogenous in the submodel (i.e. the original model) but that are endogenous within the whole model (i.e. the extended model). Generally, equations ${F}_{ + }$ may contain endogenous variables in $V$ and exogenous variables in $W$ but they may also contain additional endogenous variables ${V}_{ + }$ , additional exogenous variables ${W}_{ + }$ and additional parameters ${P}_{ + }$ . Parameters and exogenous random variables that appear in equations $F$ can appear as endogenous variables in ${V}_{ + }$ and in the extended model ${F}_{\text{ ext }} = F \cup {F}_{ + }$ . In that case, these variables are no longer considered to be parameters or exogenous variables within the extended model.
114
+
115
116
+
117
+ Figure 3: Graphical representations of the viral infection model with a single immune response. The presence or absence of causal relations and d-connections implied by the graphs in Figure 2 are not preserved if a single immune response is added.
118
+
119
+ Suppose that ${U}_{a}$ and ${U}_{k}$ are independent exogenous random variables taking values in ${\mathbb{R}}_{ > 0}$ and ${d}_{E},{d}_{I}$ are parameters taking value in ${\mathbb{R}}_{ > 0}$ . The bipartite graph, causal ordering graph, and Markov ordering graph associated with equations (5),(6),(9), and (10) (with ${X}_{\delta }$ replacing ${U}_{\delta }$ ) are given in Figure 3. The causal ordering graph predicts a causal effect of ${U}_{\sigma }$ and ${d}_{T}$ on ${X}_{T}$ but not on ${X}_{I}$ . By comparing with the predictions of the causal ordering graph in Figure 2b, we find that effects of interventions targeting ${U}_{\sigma }$ and ${d}_{T}$ are not robust under the model extension. The Markov ordering graph of the extended model shows that ${w}_{\sigma }$ is d-connected to ${v}_{T}$ , and hence ${U}_{\sigma }$ and ${X}_{T}$ are dependent. We conclude that the independence between ${U}_{\sigma }$ and ${X}_{T}$ that was implied by the Markov ordering graph of the viral infection model without immune response is not robust under the model extension.
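Solving equations (5), (6), (9), and (10) by hand makes the reversal explicit: now ${X}_{I} = {d}_{E}/{U}_{a}$ no longer involves ${U}_{\sigma }$ , while ${X}_{T} = {U}_{\sigma }/\left( {{d}_{T} + \beta {X}_{I}}\right)$ does (illustrative Python, arbitrary parameter values):

```python
def equilibrium_ext(U_sigma, U_f, U_a, U_k, d_T, d_E, d_I, beta):
    # Positive equilibrium of (5), (6), (9), (10), solved by hand.
    X_I = d_E / U_a                      # from f_E^+
    X_T = U_sigma / (d_T + beta * X_I)   # from f_T
    X_delta = U_f * beta * X_T           # from f_I^+
    X_E = (X_delta - d_I) / U_k          # from f_delta
    return X_T, X_I, X_E, X_delta

# Vary only U_sigma (arbitrary illustrative values):
a = equilibrium_ext(10.0, 0.5, 1.0, 1.0, 0.1, 2.0, 0.5, 1.0)
b = equilibrium_ext(20.0, 0.5, 1.0, 1.0, 0.1, 2.0, 0.5, 1.0)
```

In contrast with the model without an immune response, `a` and `b` now share ${X}_{I}$ but differ in ${X}_{T}$ .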
120
+
121
+ The systematic graphical procedure followed here easily leads to the same causal conclusions as De Boer [2012] obtained by explicitly solving the equilibrium equations. In addition, it leads to predictions regarding the conditional (in)dependences in the equilibrium distribution.
122
+
123
+ ${}^{4}$ Analogous to changing ${f}_{I}$ to ${f}_{I}^{ + }$ for strictly positive solutions, we will look at ${f}_{E}^{ + }$ instead of ${f}_{E}$ .
124
+
125
+ § 2.3 VIRAL INFECTION WITH MULTIPLE IMMUNE RESPONSES
126
+
127
+ The following static and dynamical equations describe multiple immune responses:
128
+
129
+ $$
130
+ {\dot{X}}_{{E}_{i}}\left( t\right) = \frac{{p}_{E}{X}_{{E}_{i}}\left( t\right) {U}_{{a}_{i}}{X}_{I}\left( t\right) }{h + {X}_{{E}_{i}}\left( t\right) + {U}_{{a}_{i}}{X}_{I}\left( t\right) } - {d}_{E}{X}_{{E}_{i}}\left( t\right) , \tag{11}
131
+ $$
132
+
133
+ $$
134
+ i = 1,2,\ldots ,n
135
+ $$
136
+
137
+ $$
138
+ {X}_{\delta }\left( t\right) = {d}_{I} + {U}_{k}\mathop{\sum }\limits_{{i = 1}}^{n}{U}_{{a}_{i}}{X}_{{E}_{i}}\left( t\right) , \tag{12}
139
+ $$
140
+
141
+ where there are $n$ immune responses, ${U}_{{a}_{i}}$ is the avidity of immune response $i$ , ${p}_{E}$ is the maximum division rate, and $h$ is a saturation constant [De Boer, 2012]. For $n = 2$ we can derive equilibrium equations ${f}_{{E}_{1}},{f}_{{E}_{2}}$ , and ${f}_{\delta }$ using the natural labelling as we did for the equilibrium equations in the previous section. Together with the equilibrium equations (5) and (6) (with ${X}_{\delta }$ replacing ${U}_{\delta }$ ) for the viral infection model this is another extended model. The bipartite graph of this extended model is given in Figure 5a, while the causal ordering graph can be found in Figure 4a. By comparing the directed paths in this causal ordering graph with those of the original viral infection model (i.e. the model without an immune response) in Figure 2b, it can be seen that the predicted presence of causal relations is preserved under extension of the model with multiple immune responses, while the predicted absence of causal relations is not. Similarly, by comparing d-separations in the Markov ordering graph in Figure 2c with those in Figure 4b, we find that predicted conditional dependences are preserved under the extensions, while the predicted conditional independences are not.
142
+
143
144
+
145
+ Figure 4: Graphical representations of the viral infection model with multiple immune responses. The presence of causal relations and d-connections in Figure 2 is preserved.
146
+
147
+ § 2.4 MARKOV ORDERING GRAPHS AND CAUSAL INTERPRETATIONS
148
+
149
+ In [Blom et al., 2021], it was shown that the Markov ordering graph may not have a straightforward causal interpretation. Here, we illustrate for the viral infection models that the Markov ordering graphs have a straightforward causal interpretation at equilibrium neither in terms of soft interventions targeting parameters, exogenous variables, or equations, nor in terms of perfect interventions on variables in the dynamical model. To see this, consider the Markov ordering graph in Figure 3b for the viral infection with a single immune response. The edge $\left( {{v}_{I} \rightarrow {v}_{T}}\right)$ cannot correspond to the effect of a soft intervention targeting ${f}_{I}^{ + }$ , because the causal ordering graph in Figure 3c shows that there is no such effect. Clearly, directed paths in the Markov ordering graph do not necessarily represent the effects of soft interventions. The natural way to model a perfect intervention targeting a variable in the Markov ordering graph is to replace the (differential) equation of that variable with an equation setting that variable equal to a certain value in the underlying dynamical model [Mooij et al., 2013]. By explicitly solving equilibrium equations it is easy to check that replacing ${f}_{\delta }$ with an equation setting ${X}_{\delta }$ equal to a constant generically changes the distribution of ${X}_{I}$ . Since there is no directed path from ${v}_{\delta }$ to ${v}_{I}$ in the Markov ordering graph, the effect of this perfect intervention would not have been predicted by the Markov ordering graph, had it been interpreted causally. Therefore, contrary to the causal ordering graph, the Markov ordering graph does not have a causal interpretation in terms of soft or perfect interventions on the true underlying dynamical model.
150
+
151
+ § 3 ROBUST CAUSAL PREDICTIONS UNDER MODEL EXTENSIONS
152
+
153
+ One way to gauge the robustness of model predictions is to check to what extent they depend on the model design. The example of a viral infection with different immune responses in the previous section indicates that qualitative causal predictions entailed by the causal ordering graph of a mathematical model may strongly depend on the particulars of the model. Both the implied presence or absence of causal relations at equilibrium and the implied presence or absence of conditional independences at equilibrium may change under certain model extensions. Under what conditions are these qualitative predictions preserved under model extensions? In this section, we characterize a large class of model extensions under which qualitative equilibrium predictions are preserved.
154
+
155
+ Theorem 1 gives a sufficient condition on model extensions under which the predicted generic presence of causal relations and of conditional dependences at equilibrium is preserved. The proof is given in Appendix A.2.
158
+
159
+ Theorem 1. Consider model equations $F$ containing endogenous variables $V$ with bipartite graph $\mathcal{B}$ . Suppose $F$ is extended with equations ${F}_{ + }$ containing endogenous variables in $V \cup {V}_{ + }$ , where ${V}_{ + }$ contains endogenous variables that are added by the model extension. ${}^{5}$ Let ${\mathcal{B}}_{\text{ ext }}$ be the bipartite graph associated with ${F}_{\text{ ext }} = F \cup {F}_{ + }$ and ${V}_{\text{ ext }} = V \cup {V}_{ + }$ , and ${\mathcal{B}}_{ + }$ the bipartite graph associated with the extension ${F}_{ + }$ and ${V}_{ + }$ , where variables in $V$ appearing in ${F}_{ + }$ are treated as exogenous variables (i.e. they are not added as vertices in ${\mathcal{B}}_{ + }$ ). If $\mathcal{B}$ and ${\mathcal{B}}_{ + }$ both have a perfect matching then:
160
+
161
+ 1. ${\mathcal{B}}_{\text{ ext }}$ has a perfect matching,
162
+
163
+ 2. ancestral relations in $\mathrm{{CO}}\left( \mathcal{B}\right)$ are also present in $\operatorname{CO}\left( {\mathcal{B}}_{\text{ ext }}\right)$ ,
164
+
165
+ 3. d-connections in $\mathrm{{MO}}\left( \mathcal{B}\right)$ are also present in $\operatorname{MO}\left( {\mathcal{B}}_{\text{ ext }}\right)$ .
166
+
167
+ This result characterizes a large set of extensions under which the implied causal effects and conditional dependences of a model are preserved. Consider again the equilibrium behaviour of the viral infection models in Section 2. We already showed explicitly that the extension of the viral infection model with multiple immune responses preserves the predicted presence of causal relations and conditional dependences, but with the help of Theorem 1 we would only have needed to check whether the bipartite graph in Figure 5c has a perfect matching to reach the same conclusion. The bipartite graph for the extension with a single immune response in Figure 5b does not have a perfect matching and hence the conditions of Theorem 1 do not hold. Recall that this model extension did not preserve the predicted presence of causal relations.
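Checking the perfect-matching condition of Theorem 1 only requires a standard bipartite matching routine. Below is a sketch (hypothetical Python) using Kuhn's augmenting-path algorithm, applied to both extensions: in the single-immune-response extension the equilibrium equation derived from (7) contains no additional endogenous variable (recall that ${X}_{I}$ is treated as exogenous within the extension), so no perfect matching exists, whereas the multiple-immune-response extension has one:

```python
def has_perfect_matching(eqs, n_vars):
    """Kuhn's augmenting-path algorithm. `eqs` maps each equation of the
    extension to the set of additional endogenous variables it contains."""
    match = {}  # variable -> equation currently matched to it

    def try_assign(eq, seen):
        for v in eqs[eq]:
            if v not in seen:
                seen.add(v)
                # Either v is free, or its current equation can be re-routed.
                if v not in match or try_assign(match[v], seen):
                    match[v] = eq
                    return True
        return False

    return all(try_assign(eq, set()) for eq in eqs) and len(match) == n_vars

# Single immune response (equations (9)-(10)): f_E^+ contains no variable
# from V_+ = {v_E, v_delta}, so there is no perfect matching.
single_ok = has_perfect_matching({"f_E": set(),
                                  "f_delta": {"v_delta", "v_E"}}, 2)
# Two immune responses (equilibrium versions of (11)-(12)): a perfect
# matching exists, e.g. (v_E1 - f_E1), (v_E2 - f_E2), (v_delta - f_delta).
multi_ok = has_perfect_matching({"f_E1": {"v_E1"}, "f_E2": {"v_E2"},
                                 "f_delta": {"v_delta", "v_E1", "v_E2"}}, 3)
```

This mechanizes the comparison between Figures 5b and 5c: `single_ok` is false and `multi_ok` is true.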
168
+
169
+ The theorem below gives a stronger condition under which (conditional) independence relations and the absence of causal relations that are implied by a model are also predicted by the extended model. The proof is provided in the supplement.
170
+
171
+ Theorem 2. Let $F,{F}_{ + },{F}_{\text{ ext }},V,{V}_{ + },{V}_{\text{ ext }},\mathcal{B},{\mathcal{B}}_{ + }$ , and ${\mathcal{B}}_{\text{ ext }}$ be as in Theorem 1. If $\mathcal{B}$ and ${\mathcal{B}}_{ + }$ both have perfect matchings and no vertex in ${V}_{ + }$ is adjacent to a vertex in $F$ in ${\mathcal{B}}_{\text{ ext }}$ , then:${}^{6}$
172
+
173
+ 1. ancestral relations absent in $\mathrm{{CO}}\left( \mathcal{B}\right)$ are also absent in $\operatorname{CO}\left( {\mathcal{B}}_{\text{ ext }}\right)$ ,
174
+
175
+ 2. d-connections absent in $\mathrm{{MO}}\left( \mathcal{B}\right)$ are also absent in $\operatorname{MO}\left( {\mathcal{B}}_{\text{ ext }}\right)$ .

+ Together with Theorem 1, this result characterizes a large class of model extensions under which all qualitative model predictions are preserved. Consider again the equilibrium models for the viral infection in Section 2. The bipartite graph for the extension with a single immune response, which we obtain by adding equations (9) and (10), does not have a perfect matching. In the bipartite graph associated with the viral infection model with multiple immune responses, the additional endogenous variable ${v}_{\delta }$ is adjacent to ${f}_{I}$ . Neither of the model extensions satisfies the conditions of Theorem 2, and indeed we already demonstrated that neither preserves all qualitative model predictions. An example of a model extension that does satisfy the conditions of Theorems 1 and 2 is an acyclic structural causal model extended with another acyclic structural causal model such that the additional variables are non-ancestors of the original ones. Together, Theorems 1 and 2 can be used to understand when the causal and Markov properties of a system can be understood by studying the corresponding properties of its parts.
176
+
177
+ § 4 SELECTION OF MODEL EXTENSIONS
178
+
179
+ So far, we have considered methods to assess the robustness of qualitative model predictions. In this section we will show how this idea results in novel opportunities regarding causal discovery. In particular, if we assume that the systems that we observe are part of a larger partially observed system, then we can use the methods in this paper to reason about causal mechanisms of unobserved variables. Consider, for example, the viral infection model for which we have demonstrated that extensions with different immune responses imply different (conditional) independences between variables in the original model. The Markov ordering graphs in Figures 2c, 3b, and 4b imply the following (in)dependences:
180
+
181
+ 1. Viral infection without immune response: ${U}_{\sigma } ⫫ {X}_{T}$ , ${U}_{\sigma } \not⫫ {X}_{I}$ .
182
+
183
+ 2. Viral infection with single immune response: ${U}_{\sigma } \not⫫ {X}_{T}$ , ${U}_{\sigma } ⫫ {X}_{I}$ .
184
+
185
+ 3. Viral infection with multiple immune responses: ${U}_{\sigma } \not⫫ {X}_{T}$ , ${U}_{\sigma } \not⫫ {X}_{I}$ .
186
+
187
+ Given a model for variables ${X}_{T}$ and ${X}_{I}$ only, we can reject model extensions based on the (conditional) independences for variables ${X}_{T},{X}_{I}$ , and ${U}_{\sigma }$ . Using this holistic modelling approach, we can reason about an unknown model extension without observing the new mechanisms or variables. In the remainder of this section, we further discuss how this idea can be applied to equilibrium data of dynamical systems.
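The rejection step amounts to comparing an observed (in)dependence pattern against each candidate's predictions. A minimal sketch (hypothetical Python; the predictions are the three listed above):

```python
# Marginal (in)dependences predicted by each candidate's Markov ordering
# graph (True = independent), as listed above.
predictions = {
    "no immune response":        {("U_sigma", "X_T"): True,
                                  ("U_sigma", "X_I"): False},
    "single immune response":    {("U_sigma", "X_T"): False,
                                  ("U_sigma", "X_I"): True},
    "multiple immune responses": {("U_sigma", "X_T"): False,
                                  ("U_sigma", "X_I"): False},
}

def compatible(observed):
    # Keep only the candidates whose predictions match the observed pattern.
    return [m for m, pred in predictions.items() if pred == observed]

# Observing U_sigma independent of X_I but dependent on X_T singles out
# the single-immune-response extension without observing X_E or X_delta.
survivors = compatible({("U_sigma", "X_T"): False,
                        ("U_sigma", "X_I"): True})
```

In practice the observed pattern would come from conditional independence tests on equilibrium data; here it is supplied by hand for illustration.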
188
+
189
+ ${}^{5}{V}_{ + }$ may also contain parameters or exogenous variables that appear in $F$ and become endogenous in the extended model.
190
+
191
+ ${}^{6}$ A vertex in ${V}_{ + }$ is considered adjacent to $F$ if it corresponds with one of the exogenous random variables or parameters in $F$ that become endogenous in the model extension.
192
+
193
+ § 4.1 REASONING ABOUT SELF-REGULATING VARIABLES
194
+
195
+ We say that a variable in a set of first-order differential equations in canonical form is self-regulating if it can be solved uniquely from the equilibrium equation that is constructed from its derivative. For models in which every variable is self-regulating there exists a perfect matching where each variable ${v}_{i}$ is matched to its associated equilibrium equation ${f}_{i}$ according to the natural labelling; see Lemma 1 in the supplement for details. It then follows from Theorem 1 that the presence of ancestral relations and d-connections is robust under dynamical model extensions in which each variable is self-regulating,${}^{7}$ as stated more formally in Corollary 1 below.
196
+
197
+ Corollary 1. Consider a first-order dynamical model in canonical form for endogenous variables $V$ and an extension consisting of canonical first-order differential equations for additional endogenous variables ${V}_{ + }$ . Let $F$ and ${F}_{\text{ ext }} = F \cup {F}_{ + }$ be the equilibrium equations of the original and extended model respectively. If all variables in $V \cup {V}_{ + }$ are self-regulating, then statements 2 and 3 of Theorem 1 hold.
198
+
199
+ Corollary 1 characterizes a class of models under which certain qualitative predictions for the equilibrium distribution are robust, but the result can also be interpreted from a different angle. Suppose that we have equilibrium data that is generated by an extended dynamical model with equilibrium equations ${F}_{\text{ ext }}$ , but we only have a partial model consisting of equations in $F$ for a subset $V \subseteq {V}_{\text{ ext }} = V \cup {V}_{ + }$ of variables that appear in ${F}_{\text{ ext }} = F \cup {F}_{ + }$ . If we find conditional independences between variables in $V$ that do not correspond to d-separations in the Markov ordering graph of the partial model, this does not necessarily mean that the model equations are wrong. It could also be the case, for example, that we are wrong to assume that the system can be studied in a reductionist manner and that the model should be extended. Furthermore, under the assumption that data is generated from the equilibrium distribution of a dynamical model, Corollary 1 tells us that conditional independences in the data that are not predicted by the equations of a partial model imply the presence of variables that are not self-regulating, if we assume faithfulness. This shows that, given a model for a subsystem, we can reason about the properties of unobserved and unknown variables in the whole system. Consider, for example, the model of the viral infection without immune response and assume that this is a submodel of a larger system. Suppose that we observe a conditional independence between ${U}_{\sigma }$ and ${X}_{I}$ and assume that the model equations of the submodel are correct. Since the Markov ordering graph in Figure 2c implies that ${U}_{\sigma }$ and ${X}_{I}$ are dependent, Corollary 1 tells us that there must be variables that are not self-regulating in the extended system. If the extended system can be described by the strictly positive solutions of the viral infection model with a single immune response, so that ${U}_{\sigma }$ and ${X}_{I}$ are independent, then we see from equations (5), (6), (9), and (10) that both ${X}_{E}\left( t\right)$ and ${X}_{I}\left( t\right)$ are not self-regulating.
200
+
201
+ § 4.2 REASONING ABOUT FEEDBACK LOOPS
202
+
203
+ We say that an extension of a dynamical model introduces a new feedback loop with the original dynamical model when there is feedback in the extended dynamical model that involves variables in both the original model and the model extension. To make this definition more precise, consider the set ${E}_{\text{ nat }}$ of edges $\left( {{v}_{i} - {f}_{i}}\right)$ that are associated with the natural labelling of the equilibrium equations of the extended dynamical model. The feedback loops in the dynamical model coincide with cycles in the directed graph $\mathcal{G}\left( {{\mathcal{B}}_{\text{ nat }},{M}_{\text{ nat }}}\right)$ that is obtained by applying step 1 of the causal ordering algorithm to the bipartite graph ${\mathcal{B}}_{\text{ nat }} = \left\langle {{V}_{\text{ ext }},{F}_{\text{ ext }},{E}_{\text{ ext }} \cup {E}_{\text{ nat }}}\right\rangle$ using the perfect matching ${M}_{\text{ nat }} = {E}_{\text{ nat }}$ .${}^{8}$ The following proposition can be used to reason about the presence of partially unobserved feedback loops given a model and observations for a subsystem.
204
+
205
+ Proposition 1. Consider a first-order dynamical model in canonical form for endogenous variables $V$ and an extension consisting of canonical first-order differential equations for additional endogenous variables ${V}_{ + }$ . Let $F$ and ${F}_{\text{ ext }} = F \cup {F}_{ + }$ be the equilibrium equations of the original and extended model respectively. Let $\mathcal{B} = \langle V,F,E\rangle$ be the bipartite graph associated with $F$ and ${\mathcal{B}}_{\text{ ext }} = \left\langle {{V}_{\text{ ext }},{F}_{\text{ ext }},{E}_{\text{ ext }}}\right\rangle$ the bipartite graph associated with ${F}_{\text{ ext }}$ . Assume that $\mathcal{B}$ and ${\mathcal{B}}_{\text{ ext }}$ both have perfect matchings. If the model extension does not introduce a new feedback loop with the original dynamical model, then d-connections in $\mathrm{{MO}}\left( \mathcal{B}\right)$ are also present in $\operatorname{MO}\left( {\mathcal{B}}_{\text{ ext }}\right)$ .
206
+
207
+ ${}^{7}$ Interestingly, the Markov ordering graph for the equilibrium equations of such a model always has a causal interpretation. By construction of the causal ordering graph from the bipartite graph and the perfect matching provided by the natural labelling, we know that a vertex ${v}_{i}$ always appears in a cluster with ${f}_{i}$ in the causal ordering graph. The presence or absence of directed paths in the Markov ordering graph can then easily be associated with the presence or absence of directed paths in the causal ordering graph. Consequently, the Markov ordering graph can be interpreted in terms of both soft interventions targeting equations and perfect interventions that set variables equal to a constant by replacement of the associated dynamical and equilibrium equations. Note that dynamical systems with only self-regulating variables were also considered in [Mooij et al., 2013], where it was shown that their equilibria can be modelled as structural causal models without self-cycles.
208
+
209
+ ${}^{8}$ Note that a feedback loop in the dynamical model does not imply a feedback loop in the equilibrium equations as well. For example, there is feedback in the dynamical equations (3), (4), but there is no feedback in the causal ordering graph of the equilibrium equations in Figure 2b nor in the directed graph that is constructed in step 1 of the causal ordering algorithm.
210
+
211
212
+
213
+ Figure 5: The bipartite graphs associated with the viral infection model with multiple immune responses, the single immune response extension, and the multiple immune response extension are given in Figures 5a, 5b, and 5c, respectively.
214
+
215
+ Proposition 1 characterizes a class of model extensions under which certain qualitative model predictions are robust, but it also shows how we can reason about the existence of unobserved feedback loops. To be more precise, it shows that, given a submodel for a subsystem, the presence of conditional independences that are not predicted by the submodel implies the existence of an unobserved feedback loop, if we assume faithfulness. If, for example, we assume that the viral infection model without an immune response is a submodel of the system that is described by the strictly positive equilibrium solutions of the viral infection model with a single immune response, then we would observe an independence between ${U}_{\sigma }$ and ${X}_{I}$ that is not predicted by the model equations of the submodel. Proposition 1 would then imply that there is an unobserved feedback loop. Indeed, it can be seen from equations (3), (4), (7), (8) that there is an unobserved feedback loop from ${X}_{I}\left( t\right)$ to ${X}_{E}\left( t\right)$ to ${X}_{\delta }\left( t\right)$ and back to ${X}_{I}\left( t\right)$ , while the Markov ordering graphs in Figures 2c and 3b imply that ${U}_{\sigma }$ and ${X}_{I}$ are dependent in the original model and independent in the extended model. We consider the use of existing structure learning algorithms for the detection of feedback loops in models with variables that are not self-regulating, from a combination of background knowledge and observational equilibrium data, to be an interesting topic for future work.
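This test can be mechanized: orient ${\mathcal{B}}_{\text{ nat }}$ with the natural matching and search for directed cycles. The sketch below (hypothetical Python) does this for the single-immune-response extension of Section 2.2; it recovers the feedback loop through ${v}_{I}$ , ${v}_{E}$ , and ${v}_{\delta }$ that mixes original and added vertices:

```python
# Bipartite adjacency of the extended equilibrium model (Section 2.2); the
# natural edges (v_i - f_i) are included even where the simplified equation
# dropped X_i (f_I^+ lost X_I, f_E^+ lost X_E).
adj = {"f_T": {"v_T", "v_I"},
       "f_I": {"v_I", "v_T", "v_delta"},
       "f_E": {"v_E", "v_I"},
       "f_delta": {"v_delta", "v_E"}}
M_nat = {"v_T": "f_T", "v_I": "f_I", "v_E": "f_E", "v_delta": "f_delta"}

# Step 1 of causal ordering: matched edges f_i -> v_i, all others v -> f.
succ = {x: set() for x in list(adj) + list(M_nat)}
for f, vs in adj.items():
    for v in vs:
        if M_nat[v] == f:
            succ[f].add(v)   # matched: equation -> variable
        else:
            succ[v].add(f)   # unmatched: variable -> equation

def on_cycle(start):
    # Iterative DFS: is `start` reachable from itself?
    stack, seen = [start], set()
    while stack:
        for y in succ[stack.pop()]:
            if y == start:
                return True
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return False

feedback = {x for x in succ if on_cycle(x)}
```

The added vertices ${v}_{E}$ and ${v}_{\delta }$ lie on a cycle together with the original vertex ${v}_{I}$ , confirming that the extension introduces a new feedback loop with the original model (the original ${X}_{T}$ - ${X}_{I}$ loop from equations (3) and (4) is also detected).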
216
+
217
+ § 5 DISCUSSION
218
+
219
+ In this work we revisited several models of viral infections and immune responses. In our treatment of these models we closely followed the approach in De Boer [2012] and therefore only considered strictly positive solutions. Had we modelled all solutions, we would, for example, have considered the equilibrium equation ${f}_{I} : \left( {{U}_{f}\beta {X}_{T} - {U}_{\delta }}\right) {X}_{I} = 0$ instead of ${f}_{I}^{ + }$ in equation (6). In that case, we would have obtained the causal ordering graph in Figure 6 instead of that in Figure 2b. Clearly, the model predictions of the causal ordering graph for the positive solutions in Figure 2b are more informative. Whether to model only strictly positive solutions depends on the application.
220
+
221
222
+
223
+ Figure 6: Causal ordering graph for positive and nonpositive solutions of the viral infection model.
224
+
225
+ In many application domains mathematical models are used to predict the equilibrium behaviour of complex systems. An important issue is that (causal and Markov) predictions may strongly depend on the specifics of the model design. We revisited an example of a viral infection model [De Boer, 2012], in which implied causal relations and conditional independences change dramatically when equations describing immune reactions are added. Analysis of this behaviour through explicit calculations is neither insightful nor scalable. We showed how the technique of causal ordering can be used to efficiently analyse the robustness of implied causal effects and conditional independences under certain solvability assumptions. Using key insights provided by this approach we characterized large classes of model extensions under which predicted causal relations and conditional independences are robust. We hope that the results presented in this paper are a step towards bringing the world of causal modelling and reasoning closer to practical applications.
226
+
227
+ Our results for the characterization of the robustness of model extensions can also be used to reason about the properties of models that are the combination of two submodels. This way, we can study systems whose causal and Markov properties can be understood in a reductionistic manner by considering the properties of their parts. When the properties of the whole model differ from those of its parts, a holistic modelling approach is required. For models of the equilibrium distribution of dynamical systems, we proved that extensions of dynamical models where each variable is self-regulating preserve the predicted presence of causal effects and d-connections in the original model. Based on those insights, we proposed a novel approach to model selection, where information about conditional independences can be used in combination with model equations to reason about possible model extensions or the presence of feedback mechanisms. For dynamical models with feedback, the output of structure learning algorithms does not always have a causal interpretation in terms of soft or perfect interventions for the equilibrium distribution. We have shown that in dynamical systems where each variable is self-regulating the identifiable directed edges in the learned graph do express causal relations between variables.
UAI/UAI 2022/UAI 2022 Conference/BGe6r8i9x5/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,334 @@
1
+ Learning Binary Multi-Scale Games on Networks
2
+
3
+ ## Abstract
4
+
5
+ Network games are a natural modeling framework for strategic interactions of agents whose actions have local impact on others. Recently, a multi-scale network game model has been proposed to capture local effects at multiple network scales, such as among both individuals and groups. We propose a framework to learn the utility functions of binary multi-scale games from agents' behavioral data. Departing from much prior work in this area, we model agent behavior as following logit-response dynamics, rather than acting according to a Nash equilibrium. This defines a generative time-series model of joint behavior of both agents and groups, which enables us to naturally cast the learning problem as maximum likelihood estimation (MLE). We show that in the important special case of multi-scale linear-quadratic games, this MLE problem is convex. Extensive experiments using both synthetic and real data demonstrate that our proposed modeling and learning approach is effective in both game parameter estimation and prediction of future behavior, even when we learn the game from only a single behavior time series. Furthermore, we show how to use our framework to develop a statistical test for the existence of multi-scale structure in the game, and use it to demonstrate that real time-series data indeed exhibits such structure.
6
+
7
+ ## 2 INTRODUCTION
8
+
9
+ A broad class of scenarios involving strategic interaction among a large collection of agents can be modeled by network (graphical) games, including investment in a public good [Bramoullé and Kranton, 2007, Grossklags et al., 2008], information diffusion [Galeotti et al., 2010], peer effects in social networks [Ballester et al., 2006], and adoption of innovation [Jackson, 2010]. A prominent feature of network games is local effects, where an agent's utility depends only on the actions of its network neighbors [Kearns et al., 2001]. Many real networks, however, additionally exhibit group or community structure [Girvan and Newman, 2002], and Jin et al. [2021] recently proposed a multi-scale network game model that embeds such structure into the network game representation. However, a multi-scale game representation is often not given a priori, and instead what is available is time-series data of actual behavior, such as trade interactions among nations, or homicides arising from organized crime activities. Our goal is to develop a scalable framework for learning parametric models of multi-scale network games from such time-series data.
10
+
11
+ The general problem of learning utility functions in games from observed behavior has been extensively studied [Chajewska et al., 2001, Vorobeychik et al., 2007, Waugh et al., 2011, Honorio and Ortiz, 2015, Garg and Jaakkola, 2016, Leng et al., 2020]. A common assumption in this line of work is that agents are fully rational in that they act according to a Nash equilibrium. However, much experimental evidence suggests that this assumption is commonly violated [Andreoni and Miller, 1993, Camerer, 2003]. In addition, time-series behavior data often exhibits intertemporal dependence, such as the self-exciting nature of crime data [Mohler et al., 2011], a feature that is lost if behavior is modeled by a Nash equilibrium of a single-shot game.
12
+
13
+ We propose to use logit-response dynamics (LRD), a classic framework for capturing boundedly rational behavior in games [Blume, 1993, Alós-Ferrer and Netzer, 2010], as a solution concept in learning utility functions from time-series data representing behavior in repeated strategic interactions. In LRD, each action by a player is played with a probability proportional to its utility, with actions of the other players fixed to what was played in the previous time step. LRD has two advantages over Nash equilibrium. First, it explicitly captures intertemporal dependence in behavior, since agents are responding to previously observed choices by others; in contrast, Nash equilibrium behavior in a one-shot game exhibits no temporal dependence. Second, the LRD solution concept is more psychologically plausible than Nash equilibrium behavior [Haile et al., 2008, Fudenberg et al., 1998, Stahl II and Wilson, 1994]. While Duong et al. [2010] also explicitly modeled intertemporal dependence in behavior, their approach was limited to consensus games, and required knowledge of utilities associated with player actions. Finally, ours is the first approach to consider multi-scale structure of strategic interactions on networks.
14
+
15
+ Armed with the game-theoretic generative model of time-series behavior data, we formulate the game learning problem as maximum likelihood estimation (MLE). In general, this problem can be (approximately) solved using gradient ascent; however, neither optimality nor consistency of estimation is guaranteed in our setting, where data is not generated i.i.d. To address this, we instantiate our framework in the context of parametric multi-scale linear-quadratic utility models. We prove that in this special case, the MLE problem is convex and can thus be solved efficiently. Our final technical contribution is a likelihood ratio test that enables us to statistically determine whether behavioral data generated by a multi-scale game model actually reflects multi-scale structure, where the null hypothesis is that only single-scale interactions significantly impact behavior.
16
+
17
+ We use extensive experiments on both synthetic and real datasets to demonstrate that the proposed approach effectively learns game parameters from time-series data. Furthermore, we show that our approach outperforms state-of-the-art baselines in predicting future agent behavior. Finally, we show that the game models we learn on real data offer interesting insights about behavior in the associated settings. For example, in the case of gang violence data, we show that the model we learn exhibits temporal self-excitation of homicides at multiple scales (that is, stemming from both individual gang member interaction, as well as interactions among gangs), generalizing insights from prior literature [Mohler et al., 2011].
18
+
19
+ Related Work Preference (or utility) elicitation, or inferring preferences of agents through active interaction, is a classic problem in decision theory [Fischhoff and Manski, 2000, Blum et al., 2004]. The passive counterpart of preference elicitation is preference or utility learning from observed time-series data of behavior [Chajewska et al., 2001, Nielsen and Jensen, 2004]. Of direct relevance to our work is the literature on learning utility functions of players in game-theoretic models of their behavior. In this literature there are two major strands: learning utilities from observations of behavior time-series [Honorio and Ortiz, 2015, Garg and Jaakkola, 2016, Leng et al., 2020, Ling et al., 2018, Waugh et al., 2011], and learning utilities from observed payoffs [Duong et al., 2009, Vorobeychik et al., 2007]. The principal difference between our framework and the former set of approaches stems from our use of the LRD model of behavior, which considerably simplifies the learning problem and naturally allows us to capture temporal interdependence. Our approach draws some inspiration from the framework for learning from collective behavior by Kearns and Wortman [2008]. However, the key general result in Kearns and Wortman [2008] requires learning with reset (i.e., a large collection of independently generated sequences of behavior), whereas we learn from only a single observed behavior sequence. Duong et al. [2010], like us, explicitly modeled intertemporal dependence in behavior. However, their approach was limited to consensus games, and required knowledge of player utilities.
20
+
21
+ ## 3 MODEL
22
+
23
+ ### 3.1 BINARY MULTI-SCALE GAME ON NETWORKS
24
+
25
+ A binary multi-scale game is defined on a network, which we represent by the adjacency matrix $\mathbf{A}$ . The network can be directed or undirected, weighted or unweighted. We only assume that there are no self-loops in the network. For expository purposes, $\mathbf{A}$ is unweighted and undirected in the present paper. The agents in the game are situated on the vertices of $\mathbf{A}$ , denoted by $\mathcal{V} = \left\{ {{v}_{1},\ldots ,{v}_{n}}\right\}$ , and are partitioned into $K$ groups, i.e., $\mathcal{V} = { \cup }_{i = 1}^{K}{\mathcal{G}}_{i}$ , and ${\mathcal{G}}_{i} \cap {\mathcal{G}}_{j} = \varnothing$ for any $i \neq j$ . We use the set $\mathcal{J} = \left\{ {{\mathcal{G}}_{i} \mid i = 1,\ldots , K}\right\}$ to represent the $K$ groups. Intuitively, we can use each group ${\mathcal{G}}_{i}$ to represent a neighborhood when the underlying network is an urban network, or an interest group if the underlying network is a social network. The group membership of agent $i$ is encoded by a mapping $\alpha \left( i\right)$ from the agent’s index to its group index, i.e., $\alpha \left( i\right) = j$ for $i \in {\mathcal{G}}_{j}$ . Throughout, we assume that the network structure $\mathbf{A}$ , the mapping $\alpha \left( i\right)$ , and the group structure $\mathcal{J}$ are known.
26
+
27
+ We use ${x}_{i} \in {\mathcal{S}}_{i}$ to represent agent $i$ ’s action, where ${\mathcal{S}}_{i} = \{ 0,1\}$ . We use public goods investment as a running example, where ${x}_{i} = 1$ (resp. ${x}_{i} = 0$ ) means that agent $i$ invests (resp. does not invest) in the public goods. Consequently, we will refer to the choice ${x}_{i} = 1$ as an agent’s decision to invest, while ${x}_{i} = 0$ means that $i$ decides not to invest. The marginal cost of making an investment is captured by a constant ${c}_{i} \in {\mathbb{R}}_{ + }$ , e.g., monetary cost, time, and/or effort exerted. The action profile of all agents is represented by $\mathbf{x} \in \{ 0,1{\} }^{n}$ , where the $i$ -th entry is ${x}_{i}$ . We use the set $\mathcal{N}\left( i\right)$ to represent agent $i$ ’s neighbors. The action profile restricted to agent $i$ ’s neighbors is ${\mathbf{x}}_{\mathcal{N}\left( i\right) }$ .
28
+
29
+ To capture multi-scale (group) structure of the game, we define a vector $\mathbf{y} \in {\mathbb{R}}^{K}$ , which represents some aggregate statistic at the group level. Typically, ${y}_{i}$ will be the total investment by group $i$ , i.e., ${y}_{i} = \mathop{\sum }\limits_{{j \in {\mathcal{G}}_{i}}}{x}_{j}$ . We emphasize, however, that the definition of $\mathbf{y}$ is quite general, e.g., ${y}_{i}$ can also be the median investment from group $i$ , or any other reasonable group-level statistic. The key idea behind the multi-scale representation is that while agents have concrete knowledge about the behavior of those they regularly interact with (network neighbors), they only have higher-level knowledge about other groups, as captured by the associated statistics for those groups. A concrete example is vaccination: an agent usually has more specific knowledge about the vaccination status of her close friends, which is encoded by ${\mathbf{x}}_{\mathcal{N}\left( i\right) }$ , but only aggregate vaccination information at the level of counties or states, which is captured by $\mathbf{y}$ . The utility function of agent $i$ is defined as follows:
30
+
31
+ $$
32
+ {u}_{i}\left( {{x}_{i},{\mathbf{x}}_{-i}}\right) = {g}_{i}\left( {{x}_{i},{\mathbf{x}}_{\mathcal{N}\left( i\right) }}\right) + {h}_{i}\left( {{x}_{i},\mathbf{y}}\right) - {c}_{i}{x}_{i}, \tag{1}
33
+ $$
34
+
35
+ where $\mathbf{y}$ is implicitly a function of the full action profile $\mathbf{x}$ . The function ${g}_{i}$ models local effects between an agent and its direct neighbors, capturing the externality that agent $i$ experiences from its neighbors' (and its own) investment. The function ${h}_{i}$ generalizes local effects from the individual level to the group level, encoding the multi-scale structure in the game. The term ${c}_{i}{x}_{i}$ captures the cost of investment. Putting everything together, we define a binary multi-scale game on networks as a tuple $\mathrm{b} - \operatorname{MSGN}\left( {\mathbf{A},\mathcal{J},\left\{ {\mathcal{S}}_{i}\right\} ,{\left\{ {u}_{i}\right\} }_{i = 1}^{n}}\right)$ , where $\mathbf{A}$ is the underlying network, $\mathcal{J}$ is the group structure, ${\mathcal{S}}_{i}$ are pure strategy sets of players, and ${u}_{i}$ are player utilities defined in Equation (1).
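As a concrete illustration, the typical group-level statistic ${y}_{i} = \mathop{\sum }\limits_{{j \in {\mathcal{G}}_{i}}}{x}_{j}$ can be computed directly from an action profile. A minimal Python sketch on a hypothetical 5-agent, 2-group instance (all values illustrative):

```python
import numpy as np

def group_totals(x, groups):
    """Group-level statistic y: total investment per group (one possible
    aggregate; the model allows any group-level statistic)."""
    return np.array([x[list(g)].sum() for g in groups])

# Hypothetical toy instance: 5 agents partitioned into 2 groups.
x = np.array([1, 0, 1, 1, 0])      # binary action profile
groups = [[0, 1], [2, 3, 4]]       # partition of the vertex set
y = group_totals(x, groups)
```

Swapping `sum` for a median would instead give the median-investment statistic mentioned above.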
36
+
37
+ ### 3.2 LOGIT-RESPONSE DYNAMICS
38
+
39
+ When modeling agents' strategic behavior, a common assumption is that agents are rational, i.e., they always choose the action with the highest utility. This is formally modeled by the best-response rule: ${x}_{i} \in \arg \mathop{\max }\limits_{{x}_{i}^{\prime }}{u}_{i}\left( {{x}_{i}^{\prime },{\mathbf{x}}_{-i}}\right)$ , where ${\mathbf{x}}_{-i}$ represents all agents’ actions other than agent $i$ . In the conventional Nash equilibrium solution concept that has been common in prior literature on learning games from data [Honorio and Ortiz, 2015, Leng et al., 2020], all players are assumed to simultaneously choose a best response to each other. In reality, however, an agent may not make completely rational decisions, due to 1) limited resources or computational power needed to precisely solve the argmax problem and 2) inability to perfectly assess small differences in its utility. Furthermore, a Nash equilibrium of a static game cannot capture intertemporal dependencies that may be present in time-series behavior data, and multiplicity of equilibria creates a further practical challenge in learning general-sum games from data. A common alternative to the Nash equilibrium solution concept is a quantal response equilibrium (QRE) [McKelvey and Palfrey, 1995], which was recently used in a framework for learning two-player zero-sum games from data [Ling et al., 2018]. However, multiplicity of equilibria (both Nash and QRE) in general-sum games has limited further progress.
40
+
41
+ Our key conceptual contribution is to combine bounded rationality in action choices with bounded rationality in dynamic agent behavior. While such a combination seems entirely natural, we are the first to explore it in the context of learning games from time-series data. Our experiments below vindicate this approach, which resolves both the issue of equilibrium multiplicity and that of dynamic interdependencies in behavior. Specifically, we adopt a classic model of boundedly rational dynamic behavior: logit-response dynamics (LRD) [Blume, 1993, Alós-Ferrer and Netzer, 2010]. LRD presumes a repeated one-shot game in which agents select actions with probabilities proportional to their utilities (as in QRE) in every step, taking choices made by others as given from the previous step (unlike QRE). In our context, the probability of agent $i$ choosing to invest $\left( {{x}_{i} = 1}\right)$ in the next time step is
42
+
43
+ $$
44
+ p\left( {{x}_{i}^{t + 1} = 1 \mid {\mathbf{x}}^{t},{\mathbf{y}}^{t}}\right) = \frac{{e}^{\gamma \cdot {u}_{i}\left( {1,{\mathbf{x}}_{-i}^{t},{\mathbf{y}}^{t}}\right) }}{{e}^{\gamma \cdot {u}_{i}\left( {1,{\mathbf{x}}_{-i}^{t},{\mathbf{y}}^{t}}\right) } + {e}^{\gamma \cdot {u}_{i}\left( {0,{\mathbf{x}}_{-i}^{t},{\mathbf{y}}^{t}}\right) }}
45
+ $$
46
+
47
+ $$
48
+ = \frac{1}{1 + {e}^{\gamma \left( {{u}_{i}\left( {0,{\mathbf{x}}_{-i}^{t},{\mathbf{y}}^{t}}\right) - {u}_{i}\left( {1,{\mathbf{x}}_{-i}^{t},{\mathbf{y}}^{t}}\right) }\right) }}.
49
+ $$
50
+
51
+ (2)
52
+
53
+ The scalar $\gamma$ quantifies the noise level in the agent’s decision-making. As $\gamma$ goes to infinity, the logit response converges to the best-response rule. For any $0 < \gamma < \infty$ , the agent chooses a non-best response with positive probability, and actions yielding larger utility are chosen with higher probability. Throughout the paper, we assume that $\gamma$ is known (in practice, we set it to 1, as it simply scales the utility functions). We define the probability $p\left( {{x}_{i}^{t + 1} = 1 \mid {\mathbf{x}}^{t},{\mathbf{y}}^{t}}\right)$ as the investment probability at time step $t + 1$ . When the context is clear we use $p\left( {x}_{i}^{t + 1}\right)$ to represent the investment probability, omitting the dependence on ${\mathbf{x}}^{t}$ and ${\mathbf{y}}^{t}$ .
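Equation (2) is a sigmoid of the scaled utility difference, so it is straightforward to evaluate once the two utility values are known. A minimal sketch (the utility values below are hypothetical):

```python
import numpy as np

def logit_response_prob(u1, u0, gamma=1.0):
    """Eq. (2): probability of investing (x_i = 1) under logit response,
    a sigmoid of the scaled utility difference u1 - u0."""
    return 1.0 / (1.0 + np.exp(gamma * (u0 - u1)))

p = logit_response_prob(u1=2.0, u0=2.0)                   # indifference
p_sharp = logit_response_prob(u1=2.0, u0=1.0, gamma=50.0) # near best response
```

As the second call illustrates, large $\gamma$ pushes the choice toward the best response.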
54
+
55
+ In LRD, we assume that at each time step each agent updates its action independently according to the logit response function (2). Consequently, given ${\mathbf{x}}^{t}$ and ${\mathbf{y}}^{t}$ the agents’ investment probabilities at time step $t + 1$ are conditionally independent, i.e., $p\left( {x}_{i}^{t + 1}\right)$ and $p\left( {x}_{j}^{t + 1}\right)$ are independent for $i \neq j$ . The assumption of conditional independence conceptually utilizes the classic idea of Maximum Pseudo-Likelihood [Besag, 1974], which simplifies the derivation of the data likelihood by avoiding the computation of the normalization constant. Additionally, this property implies convergence of agents' behavior to a stationary distribution. Specifically, let $\mathcal{M}$ be the discrete Markov chain induced by the logit-response dynamics, with state space $\mathcal{S} = \{ 0,1{\} }^{n}$ . The transition probability $p\left( {{\mathbf{x}}^{t + 1} \mid {\mathbf{x}}^{t}}\right)$ equals $\mathop{\prod }\limits_{{i = 1}}^{n}p\left( {{x}_{i}^{t + 1} \mid {\mathbf{x}}^{t},{\mathbf{y}}^{t}}\right)$ , which by definition is always positive, including the transition probability from a state to itself. Consequently, the state transition graph of $\mathcal{M}$ is strongly connected and aperiodic. This in turn implies that the stationary distribution $\pi$ of the Markov chain exists and is unique [Chung and Graham, 1997, Wildstrom, 2005].
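The ergodicity argument can be checked numerically on a toy instance: every entry of the induced transition matrix is positive, and the unique stationary distribution is recovered by power iteration. The sketch below assumes a hypothetical 3-agent game whose utility difference for investing is $b + \beta {\left( \mathbf{A}\mathbf{x}\right) }_{i}$; all parameter values are illustrative.

```python
import itertools
import numpy as np

# Hypothetical 3-agent game on a path graph.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
b, beta, gamma = 0.2, 0.5, 1.0

def invest_prob(x):
    """Per-agent investment probabilities given the current profile x."""
    du = b + beta * (A @ x)                  # u_i(1, .) - u_i(0, .)
    return 1.0 / (1.0 + np.exp(-gamma * du))

states = [np.array(s, dtype=float) for s in itertools.product([0, 1], repeat=3)]
P = np.zeros((8, 8))
for s, x in enumerate(states):
    p = invest_prob(x)
    for t, xn in enumerate(states):          # agents update independently
        P[s, t] = np.prod(np.where(xn == 1, p, 1 - p))

# All entries positive => strongly connected and aperiodic, so a unique
# stationary distribution exists; recover it by power iteration.
pi = np.full(8, 1 / 8)
for _ in range(2000):
    pi = pi @ P
```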
56
+
57
+ ---
58
+
59
+ ${}^{1}$ The state transition graph of a discrete Markov chain is aperiodic if the transition probability from a state to itself is positive.
60
+
61
+ ---
62
+
63
+ ## 4 THE LEARNING FRAMEWORK
64
+
65
+ Since in practice we typically only have a single trace of past behavior to learn from, we consider the problem of learning the parameters of a game model from a single behavior sequence collected over $l$ time steps, i.e., ${\mathcal{D}}_{l} =$ $\left\{ {\left( {{\mathbf{x}}^{1},{\mathbf{y}}^{1}}\right) ,\ldots ,\left( {{\mathbf{x}}^{l},{\mathbf{y}}^{l}}\right) }\right\}$ , where ${\mathbf{x}}^{k}$ is the action profile of all agents at time step $k$ and ${\mathbf{y}}^{k}$ is the vector of group-level statistics that captures aggregate behavior by each group in the multi-scale game. We assume that the utility functions of players ${u}_{i}$ have parametric representations, with associated parameter vectors denoted by ${\mathbf{\theta }}_{i} \in {\mathcal{F}}_{i} \mathrel{\text{:=}} {\left\lbrack -1,1\right\rbrack }^{m}$ , where $m$ is the dimension of ${\mathbf{\theta }}_{i}$ ; these are concatenations of the parameters of ${g}_{i}$ and ${h}_{i}$ (and the cost ${c}_{i}$ ), the two main constituent functions in player utilities. We use $\Theta = \left\{ {{\mathbf{\theta }}_{1},\ldots ,{\mathbf{\theta }}_{n}}\right\}$ to represent all learnable parameters of the game, where $\Theta \in \Pi = { \times }_{i = 1}^{n}{\mathcal{F}}_{i}$ . The utility function in (1) is a high-level description; we will instantiate ${g}_{i}$ and ${h}_{i}$ to specific parametric functions below. We present a general likelihood-based approach for learning multi-scale games from such data, and subsequently study an important special case which admits efficient learning.
66
+
67
+ ### 4.1 THE GENERAL CASE
68
+
69
+ The binary multi-scale game together with the logit-response dynamics define a generative time-series model of joint behavior of both agents and groups. We assume that ${\mathbf{y}}^{t}$ is a deterministic function of the individual-level action profile ${\mathbf{x}}^{t}$ , which simplifies the derivation of the data likelihood, as the joint probability of ${\mathbf{x}}^{t + 1}$ and ${\mathbf{y}}^{t + 1}$ reduces to the marginal probability of ${\mathbf{x}}^{t + 1}$ . The generative model is a discrete Markov chain over action profiles. Omitting the dependence of the investment probability on ${\mathbf{x}}^{t}$ and ${\mathbf{y}}^{t}$ , the data likelihood $\mathcal{L}\left( {{\mathcal{D}}_{l};\Theta }\right)$ is formulated as follows:
70
+
71
+ $$
72
+ \mathcal{L}\left( {{\mathcal{D}}_{l};\Theta }\right) = p\left( {\mathbf{x}}^{1}\right) \mathop{\prod }\limits_{{t = 1}}^{{l - 1}}p\left( {{\mathbf{x}}^{t + 1} \mid {\mathbf{x}}^{t},{\mathbf{y}}^{t}}\right) =
73
+ $$
74
+
75
+ $$
76
+ \mathop{\prod }\limits_{{t = 1}}^{{l - 1}}\mathop{\prod }\limits_{{i = 1}}^{n}{\left\lbrack p\left( {x}_{i}^{t + 1} = 1\right) \right\rbrack }^{{x}_{i}^{t + 1}}{\left\lbrack 1 - p\left( {x}_{i}^{t + 1} = 1\right) \right\rbrack }^{1 - {x}_{i}^{t + 1}},
77
+ $$
78
+
79
+ (3)
80
+
81
+ where the last equality utilizes the independence of investment probabilities and the fact that $p\left( {\mathbf{x}}^{1}\right) = 1$ . We learn the parameters $\Theta$ via maximum likelihood estimation (MLE). In general, we can leverage gradient-based methods and automatic differentiation tools to maximize the likelihood, as long as the utility functions are differentiable.
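The log of the likelihood in Equation (3) decomposes over time steps and agents. A minimal sketch, parameterized by any function mapping the current profile to next-step investment probabilities (the constant-probability model here is purely illustrative):

```python
import numpy as np

def log_likelihood(X, prob_invest):
    """Log of Eq. (3) for a behavior sequence X of shape (l, n);
    prob_invest(x) returns the n next-step investment probabilities."""
    ll = 0.0
    for t in range(len(X) - 1):
        p = prob_invest(X[t])
        xn = X[t + 1]
        ll += np.sum(xn * np.log(p) + (1 - xn) * np.log(1 - p))
    return ll

# Illustrative: 2 agents, 3 steps, and a memoryless p = 0.5 model.
X = np.array([[1, 0], [0, 1], [1, 1]])
ll = log_likelihood(X, lambda x: np.full(len(x), 0.5))
```

Plugging in a differentiable parametric `prob_invest` makes this objective amenable to gradient ascent via automatic differentiation.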
82
+
83
+ With a slight abuse of notation, we use $\mathrm{b} - \operatorname{MSGN}\left( \Theta \right)$ to represent the generative model (consisting of the game together with the logit-response dynamics solution concept), with the utility functions parameterized by $\Theta$ . We now instantiate the utility function to a specific parametric form. In particular, we consider games with linear-quadratic utility functions, augmented with ${h}_{i}$ to account for the multi-scale structure. The resulting MLE problem is convex, and can thus be (near-)optimally solved using interior point methods. We also develop a statistical test for the existence of multi-scale structure in this game based on the classic likelihood ratio test.
84
+
85
+ ### 4.2 LEARNING MULTI-SCALE LINEAR-QUADRATIC GAMES
86
+
87
+ Linear-quadratic games have been used in much prior literature on network game modeling both in economics and machine learning [Ballester et al., 2006, Bramoullé and Kranton, 2007, Galeotti et al., 2020, Leng et al., 2020], with Leng et al. [2020] specifically considering the problem of learning network structure in such models from Nash equilibrium behavior by the agents. The standard utility function in linear-quadratic network games is defined as
88
+
89
+ $$
90
+ {u}_{i}\left( {{x}_{i},{\mathbf{x}}_{-i}}\right) = {b}_{i}{x}_{i} + {\beta }_{i}{x}_{i}\mathop{\sum }\limits_{{j \in \mathcal{V}}}{A}_{i, j}{x}_{j} - {c}_{i}{x}_{i}^{2}, \tag{4}
91
+ $$
92
+
93
+ where ${b}_{i} \geq 0$ is the marginal benefit of investing, ${c}_{i} \geq 0$ is the cost to invest, and ${\beta }_{i} \in \mathbb{R}$ captures peer effects from the neighbors’ investment. When ${\beta }_{i} > 0$ (resp. ${\beta }_{i} < 0$ ) higher investment from the neighbors encourages agent $i$ to make more (resp., less) investment.
94
+
95
+ To model the multi-scale structure in the game, we consider the following group-level aggregate function ${h}_{i}$ :
96
+
97
+ $$
98
+ {h}_{i}\left( {{x}_{i},\mathbf{y}}\right) = {\eta }_{i}{x}_{i}\left( {{y}_{\alpha \left( i\right) } - \frac{\mathop{\sum }\limits_{{g \in \mathcal{J} \smallsetminus \left\{ {\mathcal{G}}_{\alpha \left( i\right) }\right\} }}{y}_{g}}{\left| \mathcal{J}\right| - 1}}\right) , \tag{5}
99
+ $$
100
+
101
+ where ${y}_{\alpha \left( i\right) }$ is the group-level statistic of agent $i$ ’s group and the second term in the parentheses is the average of the statistics of the other groups. The difference models the relative magnitude of the statistic between agent $i$ ’s group and the other groups. When ${\eta }_{i} > 0$ (resp., ${\eta }_{i} < 0$ ), higher relative investment by agent $i$ ’s group compared to other groups encourages (resp., discourages) $i$ ’s own investment.
102
+
103
+ We augment the linear-quadratic payoff with the function ${h}_{i}$ , leading to the multi-scale linear-quadratic utility:
104
+
105
+ $$
106
+ {u}_{i}\left( {{x}_{i},{\mathbf{x}}_{-i}}\right) = \left( {{b}_{i} - {c}_{i}}\right) {x}_{i} + {\beta }_{i}{x}_{i}\mathop{\sum }\limits_{{j \in \mathcal{V}}}{A}_{i, j}{x}_{j} + {h}_{i}\left( {{x}_{i},\mathbf{y}}\right) .
107
+ $$
108
+
109
+ (6)
110
+
111
+ The set ${\mathbf{\theta }}_{i} = \left\{ {{b}_{i},{\beta }_{i},{\eta }_{i},{c}_{i}}\right\}$ consists of the parameters we aim to learn from data. Note that as the action space in our setting is binary, the term ${b}_{i}{x}_{i} - {c}_{i}{\left( {x}_{i}\right) }^{2}$ becomes $\left( {{b}_{i} - {c}_{i}}\right) {x}_{i}$ . As a result, accurately estimating the two parameters separately may not be feasible, as they can be shifted by the same amount without changing the difference. Therefore, we treat ${b}_{i} - {c}_{i}$ as a single marginal-benefit parameter that we estimate from data. As we now show, the key property of this multi-scale linear-quadratic game model is that the resulting MLE problem is convex.
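With ${\mathbf{y}}^{t}$ held fixed as in the logit response (2), the gain from investing under Equations (5) and (6) reduces to $\left( {{b}_{i} - {c}_{i}}\right) + {\beta }_{i}{\left( \mathbf{A}\mathbf{x}\right) }_{i} + {h}_{i}\left( {1,\mathbf{y}}\right)$. A sketch on a hypothetical 4-agent, 2-group instance (all parameter values illustrative):

```python
import numpy as np

def utility_gain(i, x, y, A, alpha, theta):
    """u_i(1, x_-i, y) - u_i(0, x_-i, y) for the multi-scale
    linear-quadratic utility; theta = (b - c, beta, eta)."""
    bc, beta, eta = theta
    others = np.delete(y, alpha[i])          # statistics of other groups
    return bc + beta * (A[i] @ x) + eta * (y[alpha[i]] - others.mean())

# Hypothetical instance: path graph, groups {0, 1} and {2, 3}.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0],
              [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
alpha = [0, 0, 1, 1]                         # group membership map
x = np.array([1.0, 1.0, 0.0, 1.0])
y = np.array([2.0, 1.0])                     # group totals of x
gain = utility_gain(0, x, y, A, alpha, theta=(0.1, 0.5, 0.2))
```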
112
+
113
+ ---
114
+
115
+ ${}^{2}$ This problem is not specific to our model: in prior literature, the cost constant ${c}_{i}$ is usually set to $\frac{1}{2}$ in order to avoid the invariance of ${b}_{i} - {c}_{i}$ to the shifting.
116
+
117
+ ---
118
+
119
+ Proposition 4.1. Consider a $b - \operatorname{MSGN}\left( {\mathbf{A},\mathcal{J},{\left\{ {u}_{i}\right\} }_{i = 1}^{n}}\right)$ . If ${\left\{ {u}_{i}\right\} }_{i = 1}^{n}$ are instantiated as the multi-scale linear-quadratic utilities, the resulting MLE optimization problem is convex.
120
+
121
+ Proof. Recall that $\Theta \in \Pi = { \times }_{i = 1}^{n}{\mathcal{F}}_{i}$ , that is, a Cartesian product of convex sets. Thus, the feasible region $\Pi$ of the MLE is convex. In what follows, we show that the log-likelihood function $\log \mathcal{L}\left( {{\mathcal{D}}_{l};\Theta }\right)$ is concave w.r.t. $\Theta$ .
122
+
123
+ Note that $\log \mathcal{L}\left( {{\mathcal{D}}_{l};\Theta }\right) = \mathop{\sum }\limits_{{t = 1}}^{{l - 1}}\log p\left( {{\mathbf{x}}^{t + 1} \mid {\mathbf{x}}^{t}}\right)$ ; it is sufficient to show that $\log p\left( {{\mathbf{x}}^{t + 1} \mid {\mathbf{x}}^{t}}\right)$ is concave w.r.t. $\Theta$ for any $1 \leq t \leq l - 1$ . We expand $\log p\left( {{\mathbf{x}}^{t + 1} \mid {\mathbf{x}}^{t}}\right)$ as follows:
124
+
125
+ $\log p\left( {{\mathbf{x}}^{t + 1} \mid {\mathbf{x}}^{t}}\right) =$
126
+
127
+ $$
128
+ \mathop{\sum }\limits_{{i = 1}}^{n}\left\lbrack {{x}_{i}^{t + 1}\log p\left( {{x}_{i}^{t + 1} = 1}\right) }\right.
129
+ $$
130
+
131
+ $$
132
+ \left. {+\left( {1 - {x}_{i}^{t + 1}}\right) \log \left\lbrack {1 - p\left( {{x}_{i}^{t + 1} = 1}\right) }\right\rbrack }\right\rbrack \text{,}
133
+ $$
134
+
135
+ The logarithm of the investment probability is as follows:
136
+
137
+ $$
138
+ \log p\left( {{x}_{i}^{t + 1} = 1}\right) = \log \left\lbrack \frac{1}{1 + {e}^{-\gamma \cdot {u}_{i}\left( {1 \mid {\mathbf{x}}^{t},{\mathbf{y}}^{t},{\mathbf{\theta }}_{i}}\right) }}\right\rbrack .
139
+ $$
140
+
141
+ It is direct that ${u}_{i}\left( {1 \mid {\mathbf{x}}^{t},{\mathbf{y}}^{t},{\mathbf{\theta }}_{i}}\right)$ is a linear function of ${\mathbf{\theta }}_{i}$ . In addition, $\log p\left( {{x}_{i}^{t + 1} = 1}\right)$ is concave w.r.t. ${u}_{i}\left( {1 \mid {\mathbf{x}}^{t},{\mathbf{y}}^{t},{\mathbf{\theta }}_{i}}\right)$ , as the second derivative is negative over the domain, i.e.,
142
+
143
+ $$
144
+ \frac{{\partial }^{2}\log p\left( {{x}_{i}^{t + 1} = 1}\right) }{{\partial }^{2}{u}_{i}\left( {1 \mid {\mathbf{x}}^{t},{\mathbf{y}}^{t},{\mathbf{\theta }}_{i}}\right) } = - \frac{{e}^{\gamma \cdot {u}_{i}\left( {1 \mid {\mathbf{x}}^{t},{\mathbf{y}}^{t},{\mathbf{\theta }}_{i}}\right) } \cdot {\gamma }^{2}}{{\left( 1 + {e}^{\gamma \cdot {u}_{i}\left( {1 \mid {\mathbf{x}}^{t},{\mathbf{y}}^{t},{\mathbf{\theta }}_{i}}\right) }\right) }^{2}} < 0.
145
+ $$
146
+
147
+ The composition of a concave function with a linear function is concave (Chapter 3.2.2 in Boyd and Vandenberghe [2004]); thus, $\log p\left( {{x}_{i}^{t + 1} = 1}\right)$ is concave w.r.t. ${\mathbf{\theta }}_{i}$ . We can similarly show that $\log \left\lbrack {1 - p\left( {{x}_{i}^{t + 1} = 1}\right) }\right\rbrack$ is concave w.r.t. ${\mathbf{\theta }}_{i}$ , which implies that $\left( {1 - {x}_{i}^{t + 1}}\right) \log \left\lbrack {1 - p\left( {{x}_{i}^{t + 1} = 1}\right) }\right\rbrack$ is concave w.r.t. ${\mathbf{\theta }}_{i}$ . A sum of concave functions is concave, so $\log p\left( {{\mathbf{x}}^{t + 1} \mid {\mathbf{x}}^{t}}\right)$ is concave w.r.t. $\Theta$ . $\square$
148
+
149
+ A Statistical Test for Multi-Scale Structure We now further leverage the proposed framework to develop a statistical test to check whether the game exhibits the multi-scale structure. This test is based on the classic likelihood ratio test [Wasserman,2013]. Specifically, let $\widehat{\Theta } = \{ \widehat{\mathbf{b}},\widehat{\mathbf{c}},\widehat{\mathbf{\beta }},\widehat{\mathbf{\eta }}\}$ be the MLE estimator. The feasible region of $\widehat{\Theta }$ is $\mathcal{F} = \{ \widehat{\Theta } \mid$ $\widehat{\mathbf{b}} \geq 0,\widehat{\mathbf{c}} \geq 0,\widehat{\mathbf{\beta }} \in \left\lbrack {-\mathbf{1},\mathbf{1}}\right\rbrack ,\widehat{\mathbf{\eta }} \in \left\lbrack {-\mathbf{1},\mathbf{1}}\right\rbrack \}$ . The null hypothesis set is ${\mathcal{F}}_{0} = \{ \widehat{\Theta } \in \mathcal{F} \mid \widehat{\mathbf{\eta }} = \mathbf{0}\}$ , encoding the hypothesis that group-level statistics have no impact on agents' utilities. The test statistic is as follows:
150
+
151
+ $$
152
+ \lambda = 2\log \left( \frac{\mathop{\max }\limits_{{\Theta \in \mathcal{F}}}\mathcal{L}\left( {{\mathcal{D}}_{l};\Theta }\right) }{\mathop{\max }\limits_{{\Theta \in {\mathcal{F}}_{0}}}\mathcal{L}\left( {{\mathcal{D}}_{l};\Theta }\right) }\right) . \tag{7}
153
+ $$
154
+
155
+ Intuitively, $\lambda$ is large if there is some estimator $\widehat{\Theta }$ in the feasible region $\mathcal{F}$ for which the data ${\mathcal{D}}_{l}$ is much more likely than for any estimator in the null hypothesis set ${\mathcal{F}}_{0}$ . The p-value is equal to $p\left( {{\chi }_{n}^{2} > \lambda }\right)$ , where ${\chi }_{n}^{2}$ follows a chi-square distribution with $n$ degrees of freedom. In the Experiments section, we present experiments on synthetic data to show that the test is indeed effective at identifying multi-scale structure in games. We then use it on real data to demonstrate that such data also exhibits statistically significant multi-scale behavior dependence.
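A minimal sketch of the test in Equation (7); `lr_test_pvalue` is a hypothetical helper that takes the two maximized log-likelihoods, and for simplicity assumes an even number of degrees of freedom $n$, for which the chi-square tail probability has a closed form:

```python
import math

def lr_test_pvalue(ll_full, ll_null, df):
    """Likelihood-ratio test: lambda = 2 * (max log-lik over F minus
    max log-lik over F0); p-value = P(chi2_df > lambda).

    df is assumed even here, so the tail probability equals
    exp(-lambda/2) * sum_{j < df/2} (lambda/2)^j / j!."""
    lam = 2.0 * (ll_full - ll_null)
    half = lam / 2.0
    p = math.exp(-half) * sum(half**j / math.factorial(j) for j in range(df // 2))
    return lam, p
```

For example, with a log-likelihood gap of $\ln 2$ and $n = 2$, $\lambda = 2\ln 2 \approx 1.386$ and the p-value is exactly $e^{-\lambda/2} = 0.5$.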
156
+
157
+ ## 5 EXPERIMENTS
158
+
159
+ We focus our experimental study on learning a multi-scale linear-quadratic game $\text{b-MSGN}\left( {\Theta }^{ * }\right)$ . In all cases, we learn the game from a single sequence ${\mathcal{D}}_{l}$ , and experiment on both synthetic and real-world data. We use synthetic data to demonstrate the effectiveness of our approach at recovering the ground-truth parameters of the linear-quadratic games, and additionally show that the statistical test successfully identifies multi-scale game structure.
160
+
161
+ In addition, we evaluate the efficacy of the proposed approach at predicting future time-series behavior. For both synthetic and real data, we first compare the predictive efficacy of the proposed game learning approach with three conventional generative baselines commonly applied in similar settings with the primary purpose of time-series prediction: a discrete Markov chain, a homogeneous Poisson process, and the Hawkes process [Mohler et al., 2011]. Specifically, our experiments use a discrete-time Hawkes process with an exponential decay function; the intensity function at time step $t$ is $\lambda \left( t\right) = {\lambda }_{0} + \alpha \mathop{\sum }\limits_{{{t}_{i} < t}}{z}_{{t}_{i}}{e}^{-\beta \left( {t - {t}_{i}}\right) }$ , where ${\lambda }_{0}$ and $\alpha$ are estimated through MLE, $\beta$ is selected by cross-validation, and ${z}_{{t}_{i}}$ is the sum of ${\mathbf{x}}^{{t}_{i}}$ , i.e., $\mathop{\sum }\limits_{{j = 1}}^{n}{x}_{j}^{{t}_{i}}$ . We show that the proposed approach outperforms these baselines in terms of prediction efficacy.
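The discrete-time Hawkes intensity used by this baseline can be sketched as follows (a minimal illustration, not the authors' implementation):

```python
import math

def hawkes_intensity(t, lam0, alpha, beta, z):
    """lambda(t) = lam0 + alpha * sum_{t_i < t} z[t_i] * exp(-beta * (t - t_i)),
    where z maps past time steps t_i to the number of investing agents."""
    return lam0 + alpha * sum(
        z_ti * math.exp(-beta * (t - t_i)) for t_i, z_ti in z.items() if t_i < t
    )
```

With `beta = 0` every past event contributes fully, so the intensity reduces to `lam0 + alpha * sum(z)`; larger `beta` makes older events decay faster.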
162
+
163
+ Additionally, we compare our approach with a method for learning Linear Influence Games (LIGs) [Honorio and Ortiz, 2015], a state-of-the-art game-theoretic baseline for learning utility functions from time-series behavior in network games. LIG is a generative model that assumes that behavior in each step in a time-series is generated according to a mixture of two distributions: a uniform distribution over the set of all pure-strategy Nash equilibria, and a uniform distribution over the set of all non-equilibrium strategy profiles. ${}^{3}$ The learnable parameters of an LIG include the parameters of the players' utility functions as well as a parameter deciding which distribution an action profile comes from. The parameters are learned by maximizing the proportion of equilibria observed in the training data.
164
+
165
+ ---
166
+
167
+ ${}^{3}$ Note that the LIG approach assumes that the set of all pure-strategy Nash equilibria can be efficiently sampled. Another advantage of the proposed approach over LIG is that we do not need this assumption.
168
+
169
+ ---
170
+
171
+ ### 5.1 SYNTHETIC DATA
172
+
173
+ We generate a synthetic sequence ${\mathcal{D}}_{l}$ by simulating b-MSGN $\left( {\Theta }^{ * }\right)$ for $l - 1$ iterations, with starting action profile initialized as zeros. In each time step, every agent makes a decision according to the Bernoulli distribution with success rate equal to the investment probability (i.e., Equation (2)). The ground-truth parameters ${\Theta }^{ * } = \left\{ {{\mathbf{b}}^{ * },{\mathbf{c}}^{ * },{\mathbf{\beta }}^{ * },{\mathbf{\eta }}^{ * }}\right\}$ are specified as follows: ${b}_{i}^{ * } \sim$ $\mathcal{N}\left( {{0.3},{0.01}^{2}}\right) ,{c}_{i}^{ * } \sim \mathcal{N}\left( {{1.3},{0.1}^{2}}\right) ,{\beta }_{i}^{ * } \sim \mathcal{N}\left( {-1,{0.01}^{2}}\right)$ and ${\eta }_{i}^{ * } \sim \mathcal{N}\left( {{0.1},{0.01}^{2}}\right)$ . The parameter $\gamma$ is set to 5 . We consider three classes of synthetic networks: Barabási-Albert (BA) [Barabási and Albert, 1999], Watts-Strogatz (WS) [Watts and Strogatz, 1998], and Block Two-level Erdős-Rényi (BTER) [Seshadhri et al., 2012] networks. For each class, we randomly generate 20 networks with 100 nodes each. For each randomly generated network, we run the community detection algorithm proposed by Clauset et al. [2004] and use the resulting communities as groups.
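The simulation loop can be sketched as below. The linear utility `u_i = (b_i - c_i) + beta_i * (weighted sum of neighbors' previous actions) + eta_i * (own group's statistic)` is an assumed instantiation for illustration; the exact multi-scale linear-quadratic form is defined earlier in the paper.

```python
import math
import random

def simulate_step(x, A, groups, b, c, beta, eta, gamma=5.0):
    """One logit-response step of b-MSGN: each agent i invests (x_i = 1)
    with probability sigmoid(gamma * u_i), cf. Equation (2)."""
    n = len(x)
    y = [sum(x[j] for j in g) for g in groups]             # group-level statistics
    group_of = {j: gi for gi, g in enumerate(groups) for j in g}
    new_x = []
    for i in range(n):
        # assumed linear utility form (illustrative)
        u = (b[i] - c[i]) \
            + beta[i] * sum(A[i][j] * x[j] for j in range(n)) \
            + eta[i] * y[group_of[i]]
        p = 1.0 / (1.0 + math.exp(-gamma * u))             # investment probability
        new_x.append(1 if random.random() < p else 0)
    return new_x
```

Running this for $l - 1$ iterations from the all-zeros profile produces a synthetic sequence $\mathcal{D}_l$.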
174
+
175
+ Figure 1 shows the effectiveness of learning the game parameters from synthetic data. As the length $l$ increases, the Root Mean Squared Error (RMSE) between the estimated parameters and the true parameters consistently decreases, converging to near-zero; this indicates that the MLE estimator approximates the ground-truth ${\Theta }^{ * }$ reasonably well.
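The RMSE metric used here is simply:

```python
import math

def rmse(estimates, truth):
    """Root mean squared error between estimated and true parameter vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(estimates, truth)) / len(truth))
```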
176
+
177
+ ![01963934-8687-7a2c-9533-abca4c913c2f_5_223_1100_555_207_0.jpg](images/01963934-8687-7a2c-9533-abca4c913c2f_5_223_1100_555_207_0.jpg)
178
+
179
+ Figure 1: The RMSE between the estimated parameters and the true parameters across various lengths $l$ . Left: BA (avg. degree=5.82, avg. clustering coeff.=0.1067); Middle: WS (avg. degree=9.1064, avg. clustering coeff.=0.3542); Right: BTER (avg. degree=9.3200, avg. clustering coeff.=0.1299).
180
+
181
+ Next, we show that the statistical test successfully determines the existence of the multi-scale structure in the game. We simulate two sets of data, one called "with groups" and the other "without groups". The "with groups" data is simulated as usual, so that the agents' utilities are influenced by the multi-scale structure. The "without groups" data is simulated with ${\eta }_{i}^{ * }$ set to zero, which implies that the multi-scale structure does not have a direct impact on the agents' utilities. The p-values for the two sets of data are shown in Figure 2. The red horizontal lines represent where $p\left( {{\chi }_{n}^{2} > \lambda }\right) = {0.05}$ : we reject the null hypothesis when the p-value is below the red line. The blue lines represent the p-values for the "with groups" data. We can see that as $l$ (the number of observations) increases, the p-values consistently decrease. In particular, for BA (resp. WS) networks, when $l > {1500}$ (resp. $l > {750}$ ) we correctly reject the null hypothesis. One exception is the BTER network, where the null hypothesis is not rejected even with 2000 steps; the average p-value is ${0.18}\left( {\pm {0.074}}\right)$ . This may be due to the greater structural complexity of BTER networks compared to BA and WS. The dashed orange lines represent the p-values for the "without groups" data. Note that the orange lines are consistently above 0.05 by a large margin, which means that we never incorrectly reject the null hypothesis (i.e., never falsely claim the existence of multi-scale structure).
182
+
183
+ ![01963934-8687-7a2c-9533-abca4c913c2f_5_966_473_553_192_0.jpg](images/01963934-8687-7a2c-9533-abca4c913c2f_5_966_473_553_192_0.jpg)
184
+
185
+ Figure 2: Experimental results for the statistical test. The blue solid lines (resp. orange dashed lines) represent the p-values evaluated on the data with (resp. without) the multi-scale structure. Left: BA; Middle: WS; Right: BTER.
186
+
187
+ ### 5.2 REAL-WORLD DATA
188
+
189
+ Gang-Related Homicides. We learn the game on gang-related homicide data from Los Angeles [Valasik et al., 2017]. The data includes 1425 incidents from 1978 to 2012. Each incident consists of several attributes, including date, address, coordinates $(X$ and $Y$ correspond to latitude and longitude, respectively), and demographic information of the victim and the suspect. Each incident includes a label indicating whether the homicide is gang-related, and if so, an attribute recording the suspect's gang affiliation. All sensitive attributes in the experimental results are anonymized with numerical values. The data is preprocessed as follows. First, we keep only the incidents that are gang-related, and discard the incidents with missing attributes. Second, to correct errors in incident coordinates, we compute the geometric center of the incidents' coordinates, fit a standard Gaussian distribution on their distances to the center, and finally discard any incidents that are three standard deviations away from the center. After preprocessing, the data contains 606 incidents committed by suspects from 54 gangs. A gang's location is approximated by the geometric center of its associated incidents. We treat the 54 gangs as the agents in the game; they are partitioned into three groups according to their neighborhood information. The network $\mathbf{A}$ is weighted, undirected, and complete, with the gangs as nodes. The weight on an edge is the inverse of the driving time between the two endpoints (gangs) obtained by querying the Google Maps API. A visualization of the processed data is provided in the Supplement.
190
+
191
+ Next, we construct a sequence ${\mathcal{D}}_{l}$ of action profiles from the processed data by discretizing time and grouping the incidents that occur in each time interval, where $T$ is a hyperparameter corresponding to the length of the interval in days (i.e., how finely the data is discretized). We experiment with different values of $T$ , e.g., $T = {30},{60},{90}$ . We set ${x}_{j}^{t} = 1$ if there is at least one incident associated with the $j$ -th gang at time step $t$ , and set ${x}_{j}^{t} = 0$ otherwise. The aggregate statistic ${y}_{i}^{t} = \mathop{\sum }\limits_{{j \in {\mathcal{G}}_{i}}}{x}_{j}^{t}$ measures the overall level of violence in group ${\mathcal{G}}_{i}$ .
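This discretization can be sketched as below (incident timestamps in days, 0-based gang indices; the helper name is illustrative):

```python
def build_action_profiles(incidents, n_gangs, T):
    """x_j^t = 1 iff gang j has at least one incident in time step t,
    where each step covers T days; incidents is a list of (day, gang)."""
    n_steps = max(day for day, _ in incidents) // T + 1
    X = [[0] * n_gangs for _ in range(n_steps)]
    for day, gang in incidents:
        X[day // T][gang] = 1
    return X
```

The group statistic $y_i^t$ is then the row-sum of `X[t]` restricted to the gangs in $\mathcal{G}_i$.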
192
+
193
+ We first apply the statistical test on data aggregated with different values of $T$ . The p-values are below 0.05 for every value of $T$ except $T = {30}$ and $T = {120}$ . The overall observation is that the data consistently exhibits statistically significant multi-scale behavior dependence, an effect that is relatively robust to the time discretization.
194
+
195
+ ![01963934-8687-7a2c-9533-abca4c913c2f_6_275_666_454_271_0.jpg](images/01963934-8687-7a2c-9533-abca4c913c2f_6_275_666_454_271_0.jpg)
196
+
197
+ Figure 3: Comparison of our approach with the game-theoretic baseline LIG and three conventional generative approaches in terms of predictive log-likelihood on test data.
198
+
199
+ ![01963934-8687-7a2c-9533-abca4c913c2f_6_225_1083_552_207_0.jpg](images/01963934-8687-7a2c-9533-abca4c913c2f_6_225_1083_552_207_0.jpg)
200
+
201
+ Figure 4: A visualization of the predicted total crimes on test data with $T = {30}$ (i.e., each time step represents 30 days). We omit Poisson and LIG as their predictions are far from the ground-truth. The shaded area represents two standard deviations of the prediction from b-MSGN.
202
+
203
+ To compare the proposed approach, in which we learn the linear-quadratic game on this data, with several baselines in terms of predictive log-likelihood on test data, we split ${\mathcal{D}}_{l}$ into training data and test data with ratio $9 : 1$ . The results are shown in Figure 3. We observe that our approach is considerably better than LIG, particularly for smaller values of $T$ . In addition, our approach is competitive in predictive efficacy with all baselines except the Markov chain (which is considerably worse), including the Hawkes process, which is the state-of-the-art approach for modeling crime data of this kind [Mohler et al., 2011].
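The predictive log-likelihood on test data is a sum of Bernoulli log-likelihoods of the observed actions under each model's one-step-ahead probabilities; a minimal sketch:

```python
import math

def predictive_loglik(probs, X_test):
    """Sum of Bernoulli log-likelihoods of test actions X_test under the
    model's one-step-ahead investment probabilities probs (same shape)."""
    ll = 0.0
    for p_row, x_row in zip(probs, X_test):
        for p, x in zip(p_row, x_row):
            ll += math.log(p) if x == 1 else math.log(1.0 - p)
    return ll
```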
204
+
205
+ A visualization of the predicted total crimes on test data is shown in Figure 4; the shaded area represents two standard deviations of the prediction from b-MSGN. The predictions from Poisson and LIG are omitted as they are far from the ground-truth; both are almost horizontal lines that fail to capture any trends exhibited in the real data. We can observe that b-MSGN captures the overall trend with high confidence, i.e., the ground-truth lies within two standard deviations of the prediction.
206
+
207
+ ![01963934-8687-7a2c-9533-abca4c913c2f_6_950_326_588_214_0.jpg](images/01963934-8687-7a2c-9533-abca4c913c2f_6_950_326_588_214_0.jpg)
208
+
209
+ Figure 5: The estimates of ${b}_{i} - {c}_{i},{\beta }_{i}$ and ${\eta }_{i}$ . Top: the homicides data aggregated with $T = {60}$ . Bottom: the bilateral trading data.
210
+
211
+ The key advantage of the proposed approach comes from its interpretability as capturing strategic interactions; in linear-quadratic games in particular, the parameters we learn have a natural interpretation, which we now consider. Specifically, to analyze the game parameters we have learned, we set $T = {60}$ as an illustration (the results are quite robust to this choice), so that the resulting sequence ${\mathcal{D}}_{l}$ has $l = {213}$ time steps. As we do not have access to the ground-truth utility functions, the analysis serves to provide insights about the gangs' behavior. The learned parameters are shown in the top row of Figure 5. First, the estimated ${b}_{i} - {c}_{i}$ are shown on the left of the figure; the median is -0.77 . Note that the estimates are negative, that is, the perceived costs of homicides by gang members exceed the benefits. Overall, gang-related homicides are relatively rare; indeed, on average only ${4.7}\%$ of gangs commit a homicide in each time step. When increasing $T$ to 365, on average 19.7% of gangs commit a homicide in each time step and the median of ${b}_{i} - {c}_{i}$ becomes -0.56 .
212
+
213
+ The estimates of ${\beta }_{i}$ are shown in the middle of the figure. Most of the estimates take one of two extreme values, +1 or -1 . The mean is 0.18 , which indicates that gang members on average tend to commit more homicides as the number of homicides from other members of their gang increases. This may be explained by the self-excitation phenomenon observed by Mohler et al. [2011], whereby an incident involving rival gangs can lead to retaliatory acts of homicide. Finally, the estimated ${\eta }_{i}$ are shown on the right of the figure. Most estimates are positive (except for a few outliers), which suggests the intuitive observation that a greater overall level of violence in a gang's neighborhood tends to lead to a greater incidence of violence by the gang.
214
+
215
+ To see how the discretization affects the estimates, we plot the estimated parameters across the values of $T$ in Figure 6 . The estimates of ${b}_{i} - {c}_{i}$ increase slightly as $T$ gets larger, due simply to a greater number of homicides being aggregated in each step. However, the difference is not significant. Similarly, the estimates of ${\beta }_{i}$ and ${\eta }_{i}$ are not severely affected by the value of $T$ .
216
+
217
+ ![01963934-8687-7a2c-9533-abca4c913c2f_7_191_181_620_207_0.jpg](images/01963934-8687-7a2c-9533-abca4c913c2f_7_191_181_620_207_0.jpg)
218
+
219
+ Figure 6: From left to right, the estimations of ${b}_{i} - {c}_{i},{\beta }_{i}$ and ${\eta }_{i}$ across different values of $T$ ; the feasible region of each estimated parameter is restricted to $\left\lbrack {-1,1}\right\rbrack$ .
220
+
221
+ Bilateral Trading Data. The second dataset we consider is the bilateral trading data from the United Nations Comtrade Database (https://comtrade.un.org/). The data consists of statistics for international bilateral trading (e.g., imports and exports), including over 170 reporting economies and records from 1962 to 2018. We focus on annual exports data in terms of their value in US dollars and extract a subset consisting of 127 reporting economies with complete statistics since 1962; the reporting economies are partitioned into six groups according to the continents they are located on: Asia, Africa, Europe, South America, Australia and North America. We treat the reporting economies as agents in the game. The graph underlying the game is directed and weighted, where an edge from $i$ to $j$ means that $i$ has exported goods/services to $j$ , and the weight on the edge is the normalized total value of exports since 1962. As the graph is directed, we define the neighborhood of economy $i$ as its exporting destinations. The sequence ${\mathcal{D}}_{l}$ of action profiles consists of 57 time steps, each corresponding to a year. For every economy, we track a moving average of the value of exports over $k$ time steps. Let ${e}_{i}^{t}$ be the value of exports of economy $i$ at time step $t$ . For $t > k$ , if the value is greater than the moving average, i.e., ${e}_{i}^{t} > \left( {{e}_{i}^{t - 1} + \cdots + {e}_{i}^{t - k}}\right) /k$ , we set ${x}_{i}^{t} = 1$ ; otherwise ${x}_{i}^{t} = 0$ . For $t = 1,\ldots , k$ the actions ${x}_{i}^{t}$ are always set to zero. Intuitively, ${x}_{i}^{t} = 1$ encodes that economy $i$ has a higher value of exports compared with the average value over the last $k$ years, which signals economic growth [Michaely, 1977]. The group-level statistic is again ${y}_{i}^{t} = \mathop{\sum }\limits_{{j \in {\mathcal{G}}_{i}}}{x}_{j}^{t}$ . We experiment with five values of $k$ , ranging from 1 to 5 .
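The binarization of an economy's export series can be sketched as (where `e` is one economy's annual export values):

```python
def binarize_exports(e, k):
    """x^t = 1 iff exports at step t exceed the moving average of the
    previous k steps; the first k steps are set to 0."""
    x = [0] * len(e)
    for t in range(k, len(e)):
        x[t] = 1 if e[t] > sum(e[t - k:t]) / k else 0
    return x
```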
222
+
223
+ We first run the statistical test on ${\mathcal{D}}_{l}$ . The resulting p-values are nearly zero across all the values of $k$ , providing evidence for the importance of multi-scale structure.
224
+
225
+ Next, we compare the game with the baselines on test data (the last 5% of the entire sequence) in terms of predicted log-likelihoods. The results for $k = 5$ are as follows: 1) Markov Chain: -36.8413, 2) Poisson: -36.9614, 3) Hawkes: -29.9843, 4) LIG: -31.4041, and 5) b-MSGN: -24.7406; the results for other values of $k$ are similar. Note that in this case the proposed b-MSGN approach outperforms all baselines, including the Hawkes process.
226
+
227
+ Finally, the estimated parameters are shown in Figure 5 (second row). The estimated ${b}_{i} - {c}_{i}$ are mostly negative, indicating that for most economies it is difficult to maintain a steady growth in exports; some exceptions include Hong Kong, Japan, Ireland, Central African Republic, and Taiwan, which are the top five economies ranked by the estimated ${b}_{i} - {c}_{i}$ in descending order. Most estimated values of ${\beta }_{i}$ are positive, suggesting that an economy will have a growth in exports when its exporting destinations also have increasing exports. Similarly, we rank the economies in terms of the estimated ${\beta }_{i}$ in descending order, with Australia, New Zealand, Aruba, Venezuela, and Dominica the top five. For each of these, we identify the top three major export goods in terms of the share in the total value of exports since 1962. The major exporting goods of the five economies are raw materials, e.g., iron ore, meat, crude/refined petroleum, and fruits/nuts. The interpretation is that when the exports of other economies grow, the demand for raw materials also increases. Finally, most estimated values of ${\eta }_{i}$ are positive, which suggests that the relative growth of a group's exports (compared with other groups) is a good predictor of the participating economies' growth.
228
+
229
+ To study the sensitivity of the estimated parameters to $k$ , we plot the estimated parameters across the values of $k$ in Figure 7. There is a decreasing trend in the estimate of ${b}_{i} - {c}_{i}$ for smaller values of $k$ , suggesting that the interpretation of the estimate has to consider the specific value of $k$ . The estimates of ${\beta }_{i}$ and ${\eta }_{i}$ are not sensitive to $k$ .
230
+
231
+ ![01963934-8687-7a2c-9533-abca4c913c2f_7_934_1128_623_207_0.jpg](images/01963934-8687-7a2c-9533-abca4c913c2f_7_934_1128_623_207_0.jpg)
232
+
233
+ Figure 7: From left to right, the estimations of ${b}_{i} - {c}_{i},{\beta }_{i}$ and ${\eta }_{i}$ across different values of $k$ ; the feasible region of each estimated parameter is restricted to $\left\lbrack {-1,1}\right\rbrack$ .
234
+
235
+ ## 6 CONCLUSION
236
+
237
+ We propose a game-theoretic generative model of time-series behavior data by combining single-shot multi-scale network games with logit-response dynamics. We do not assume that the agents are fully rational, but rather that they make decisions according to logit-response dynamics. We then present a general learning framework based on maximum likelihood estimation (MLE) for inferring parameters of such games. In the special case of multi-scale linear-quadratic games we prove that the MLE is a convex optimization problem and thus admits efficient solution algorithms. We further develop a statistical test to determine whether the game exhibits multi-scale structure. We use extensive experiments on both synthetic and real datasets to show the efficacy of the proposed approach.
238
+
239
+ ## REFERENCES
240
+
241
+ Carlos Alós-Ferrer and Nick Netzer. The logit-response dynamics. Games and Economic Behavior, 68(2):413-427, 2010.
242
+
243
+ James Andreoni and John H Miller. Rational cooperation in the finitely repeated prisoner's dilemma: Experimental evidence. Economic Journal, 103(418):570-585, 1993.
244
+
245
+ Coralio Ballester, Antoni Calvó-Armengol, and Yves Zenou. Who's who in networks. wanted: The key player. Econometrica, 74(5):1403-1417, 2006.
246
+
247
+ Albert-László Barabási and Réka Albert. Emergence of scaling in random networks. Science, 286(5439):509-512, 1999.
248
+
249
+ Julian Besag. Spatial interaction and the statistical analysis of lattice systems. Journal of the Royal Statistical Society: Series B (Methodological), 36(2):192-225, 1974.
250
+
251
+ Avrim Blum, Jeffrey Jackson, Tuomas Sandholm, and Martin Zinkevich. Preference elicitation and query learning. Journal of Machine Learning Research (JMLR), 5(Jun): 649-667, 2004.
252
+
253
+ Lawrence E Blume. The statistical mechanics of strategic interaction. Games and Economic Behavior, 5(3):387-424, 1993.
254
+
255
+ Stephen Boyd and Lieven Vandenberghe. Convex optimization. Cambridge university press, 2004.
256
+
257
+ Yann Bramoullé and Rachel Kranton. Public goods in networks. Journal of Economic Theory, 135(1):478-494, 2007.
258
+
259
+ Colin Camerer. Behavioral Game Theory: Experiments in Strategic Interaction. Princeton University Press, 2003.
260
+
261
+ Urszula Chajewska, Daphne Koller, and Dirk Ormoneit. Learning an agent's utility function by observing behavior. In Proceedings of the 18th International Conference on Machine Learning, pages 35-42, 2001.
262
+
263
+ Fan R. K. Chung. Spectral graph theory. American Mathematical Society, 1997.
264
+
265
+ Aaron Clauset, Mark EJ Newman, and Cristopher Moore. Finding community structure in very large networks. Physical review E, 70(6):066111, 2004.
266
+
267
+ Quang Duong, Yevgeniy Vorobeychik, Satinder Singh, and Michael Wellman. Learning graphical game models. In Proceedings of the 21st International Joint Conference on Artificial Intelligence (IJCAI), 2009.
268
+
269
+ Quang Duong, Michael P Wellman, Satinder Singh, and Yevgeniy Vorobeychik. History-dependent graphical multiagent models. In Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), pages 1215-1222, 2010.
270
+
271
+ Baruch Fischhoff and Charles F Manski. Elicitation of preferences, volume 19. Springer Science & Business Media, 2000.
272
+
273
+ Drew Fudenberg and David K Levine. The theory of learning in games, volume 2. MIT press, 1998.
274
+
275
+ Andrea Galeotti, Sanjeev Goyal, Matthew O Jackson, Fernando Vega-Redondo, and Leeat Yariv. Network games. The Review of Economic Studies, 77(1):218-244, 2010.
276
+
277
+ Andrea Galeotti, Benjamin Golub, and Sanjeev Goyal. Targeting interventions in networks. Econometrica, 88(6): 2445-2471, 2020.
278
+
279
+ Vikas Garg and Tommi Jaakkola. Learning tree structured potential games. In Proceedings of the 29th International Conference on Neural Information Processing Systems (NeurIPS), volume 29, pages 1552-1560, 2016.
280
+
281
+ Michelle Girvan and Mark EJ Newman. Community structure in social and biological networks. Proceedings of the national academy of sciences, 99(12):7821-7826, 2002.
282
+
283
+ Jens Grossklags, Nicolas Christin, and John Chuang. Security and insurance management in networks with heterogeneous agents. In Proceedings of the 9th ACM Conference on Electronic Commerce (EC), pages 160-169, 2008.
284
+
285
+ Philip A Haile, Ali Hortaçsu, and Grigory Kosenok. On the empirical content of quantal response equilibrium. American Economic Review, 98(1):180-200, 2008.
286
+
287
+ Jean Honorio and Luis Ortiz. Learning the structure and parameters of large-population graphical games from behavioral data. Journal of Machine Learning Research (JMLR), 16:1157-1210, 2015.
288
+
289
+ Matthew O Jackson. Social and economic networks. Princeton university press, 2010.
290
+
291
+ Kun Jin, Yevgeniy Vorobeychik, and Mingyan Liu. Multi-scale games: Representing and solving games on networks with group structure. In Proceedings of the AAAI Conference on Artificial Intelligence, 2021.
292
+
293
+ Michael Kearns, Michael L Littman, and Satinder Singh. Graphical models for game theory. In Proceedings of the 17th conference on Uncertainty in artificial intelligence (UAI), pages 253-260, 2001.
294
+
295
+ Michael J Kearns and Jennifer Wortman. Learning from collective behavior. In Proceedings of the 21st Conference on Learning Theory (COLT), pages 99-110, 2008.
296
+
297
+ Yan Leng, Xiaowen Dong, Junfeng Wu, and Alex Pentland. Learning quadratic games on networks. In Proceedings of the 37th International Conference on Machine Learning (ICML), pages 5820-5830. PMLR, 2020.
298
+
299
+ Chun Kai Ling, Fei Fang, and J. Zico Kolter. What game are we playing? end-to-end learning in normal and extensive form games. In Proceedings of the 27th International Joint Conference on Artificial Intelligence (IJCAI), 2018.
300
+
301
+ Richard D McKelvey and Thomas R Palfrey. Quantal response equilibria for normal form games. Games and Economic Behavior, 10(1):6-38, 1995.
302
+
303
+ Michael Michaely. Exports and growth: an empirical investigation. Journal of development economics, 4(1):49-53, 1977.
304
+
305
+ G. O. Mohler, M. B. Short, P. J. Brantingham, F. P. Schoenberg, and G. E. Tita. Self-exciting point process modeling of crime. Journal of the American Statistical Association, 106(493):100-108, 2011.
306
+
307
+ Thomas D Nielsen and Finn V Jensen. Learning a decision maker's utility function from (possibly) inconsistent behavior. Artificial Intelligence, 160(1-2):53-78, 2004.
308
+
309
+ Comandur Seshadhri, Tamara G Kolda, and Ali Pinar. Community structure and scale-free collections of Erdős-Rényi graphs. Physical Review E, 85(5):056109, 2012.
310
+
311
+ Dale O Stahl II and Paul W Wilson. Experimental evidence on players' models of other players. Journal of economic behavior & organization, 25(3):309-327, 1994.
312
+
313
+ Matthew Valasik, Michael S. Barton, Shannon E. Reid, and George E. Tita. Barriocide: Investigating the temporal and spatial influence of neighborhood structural characteristics on gang and non-gang homicides in east los angeles. Homicide Studies, 21, 2017.
314
+
315
+ Yevgeniy Vorobeychik, Michael P Wellman, and Satinder Singh. Learning payoff functions in infinite games. Machine Learning, 67(1-2):145-168, 2007.
316
+
317
+ Larry Wasserman. All of statistics: a concise course in statistical inference. Springer Science & Business Media, 2013.
318
+
319
+ Duncan J Watts and Steven H Strogatz. Collective dynamics of small-world networks. Nature, 393(6684):440, 1998.
320
+
321
+ Kevin Waugh, Brian D Ziebart, and J Andrew Bagnell. Computational rationalization: The inverse equilibrium problem. In Proceedings of the 28th International Conference on Machine Learning (ICML), 2011.
322
+
323
+ Jacob Wildstrom. The convergence of Markov chain on directed graph. http://www.math.ucsd.edu/~fan/teach/262/notes/paul/10_24_notes.pdf, 2005.
324
+
325
+ ## SUPPLEMENTARY MATERIAL
326
+
327
+ ## 7 A VISUALIZATION OF THE GANG-RELATED HOMICIDES DATASET
328
+
329
+ The spread of the data is displayed in Figure 8 left, where each dot represents an incident and the color indicates the suspect's gang affiliation. The horizontal axes and the vertical axes represent latitude and longitude, respectively. A gang's location is approximated by the geometric center of its associated incidents. The spread of the gangs is shown in Figure 8 middle. We treat the 54 gangs as the agents of the game; they are partitioned into three groups according to their neighborhood information. The groups are shown in Figure 8 right, where gangs in the same group have the same color. The network $\mathbf{A}$ is weighted, undirected and complete, with the gangs as nodes. The weight on an edge is the inverse of the driving time between the two endpoints (gangs), obtained by querying the Google Maps API.
330
+
331
+ ![01963934-8687-7a2c-9533-abca4c913c2f_10_525_614_705_259_0.jpg](images/01963934-8687-7a2c-9533-abca4c913c2f_10_525_614_705_259_0.jpg)
332
+
333
+ Figure 8: A visualization of the gang-related homicides dataset. The horizontal axes and the vertical axes represent latitude and longitude, respectively. Left: each dot represents an incident and the color indicates the suspect's gang affiliation. Middle: each dot represents a gang, where the location is the geometric center of its associated incidents. Right: the partition of the gangs into three groups, with each group representing a neighborhood.
334
+
UAI/UAI 2022/UAI 2022 Conference/BGe6r8i9x5/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,225 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Learning Binary Multi-Scale Games on Networks
2
+
3
+ § ABSTRACT
4
+
5
+ Network games are a natural modeling framework for strategic interactions of agents whose actions have local impact on others. Recently, a multi-scale network game model has been proposed to capture local effects at multiple network scales, such as among both individuals and groups. We propose a framework to learn the utility functions of binary multi-scale games from agents' behavioral data. Departing from much prior work in this area, we model agent behavior as following logit-response dynamics, rather than acting according to a Nash equilibrium. This defines a generative time-series model of joint behavior of both agents and groups, which enables us to naturally cast the learning problem as maximum likelihood estimation (MLE). We show that in the important special case of multi-scale linear-quadratic games, this MLE problem is convex. Extensive experiments using both synthetic and real data demonstrate that our proposed modeling and learning approach is effective in both game parameter estimation as well as prediction of future behavior, even when we learn the game from only a single behavior time series. Furthermore, we show how to use our framework to develop a statistical test for the existence of multi-scale structure in the game, and use it to demonstrate that real time-series data indeed exhibits such structure.
6
+
7
+ § 1 INTRODUCTION
A broad class of scenarios involving strategic interaction among a large collection of agents can be modeled by network (graphical) games, including investment in a public good [Bramoullé and Kranton, 2007, Grossklags et al., 2008], information diffusion [Galeotti et al., 2010], peer effects in social networks [Ballester et al., 2006], and adoption of innovation [Jackson, 2010]. A prominent feature of network games is local effects, where an agent's utility depends only on the actions of its network neighbors [Kearns et al., 2001]. Many real networks, however, additionally exhibit group or community structure [Girvan and Newman, 2002], and Jin et al. [2021] recently proposed a multi-scale network game model that embeds such structure into the network game representation. However, a multi-scale game representation is often not given a priori; instead, what is available is time-series data of actual behavior, such as trade interactions among nations, or homicides arising from organized crime activities. Our goal is to develop a scalable framework for learning parametric models of multi-scale network games from such time-series data.

The general problem of learning utility functions in games from observed behavior has been extensively studied [Chajewska et al., 2001, Vorobeychik et al., 2007, Waugh et al., 2011, Honorio and Ortiz, 2015, Garg and Jaakkola, 2016, Leng et al., 2020]. A common assumption in this line of work is that agents are fully rational in that they act according to a Nash equilibrium. However, much experimental evidence suggests that this assumption is commonly violated [Andreoni and Miller, 1993, Camerer, 2003]. In addition, time-series behavior data often exhibits intertemporal dependence, such as the self-exciting nature of crime data [Mohler et al., 2011], a feature that is lost if behavior is modeled by a Nash equilibrium of a single-shot game.
We propose to use logit-response dynamics (LRD), a classic framework for capturing boundedly rational behavior in games [Blume, 1993, Alós-Ferrer and Netzer, 2010], as a solution concept for learning utility functions from time-series data representing behavior in repeated strategic interactions. In LRD, each player chooses an action with probability given by a logit (softmax) function of its utility, with the actions of the other players fixed to what was played in the previous time step. LRD has two advantages over Nash equilibrium. First, it explicitly captures intertemporal dependence in behavior, since agents are responding to previously observed choices by others; in contrast, Nash equilibrium behavior in a one-shot game exhibits no temporal dependence. Second, the LRD solution concept is more psychologically plausible than Nash equilibrium behavior [Haile et al., 2008, Fudenberg et al., 1998, Stahl II and Wilson, 1994]. While Duong et al. [2010] also explicitly modeled intertemporal dependence in behavior, their approach was limited to consensus games and required knowledge of the utilities associated with player actions. Finally, ours is the first approach to consider the multi-scale structure of strategic interactions on networks.
Armed with the game-theoretic generative model of time-series behavior data, we formulate the game learning problem as maximum likelihood estimation (MLE). In general, this problem can be (approximately) solved using gradient ascent; however, neither optimality nor consistency of estimation is guaranteed in our setting, where data is not generated i.i.d. To address this, we instantiate our framework in the context of parametric multi-scale linear-quadratic utility models. We prove that in this special case, the MLE problem is convex and can thus be solved efficiently. Our final technical contribution is a likelihood ratio test that enables us to statistically determine whether behavioral data generated by a multi-scale game model actually reflects multi-scale structure, where the null hypothesis is that only single-scale interactions significantly impact behavior.

We use extensive experiments on both synthetic and real datasets to demonstrate that the proposed approach effectively learns game parameters from time-series data. Furthermore, we show that our approach outperforms state-of-the-art baselines in predicting future agent behavior. Finally, we show that the game models we learn on real data offer interesting insights about behavior in the associated settings. For example, in the case of gang violence data, we show that the model we learn exhibits temporal self-excitation of homicides at multiple scales (that is, stemming from both individual gang member interaction, as well as interactions among gangs), generalizing insights from prior literature [Mohler et al., 2011].
Related Work Preference (or utility) elicitation, or inferring preferences of agents through active interaction, is a classic problem in decision theory [Fischhoff and Manski, 2000, Blum et al., 2004]. The passive counterpart of preference elicitation is preference or utility learning from observed time-series data of behavior [Chajewska et al., 2001, Nielsen and Jensen, 2004]. Of direct relevance to our work is the literature on learning utility functions of players in game-theoretic models of their behavior. In this literature there are two major strands: learning utilities from observations of behavior time series [Honorio and Ortiz, 2015, Garg and Jaakkola, 2016, Leng et al., 2020, Ling et al., 2018, Waugh et al., 2011], and learning utilities from observed payoffs [Duong et al., 2009, Vorobeychik et al., 2007]. The principal difference between our framework and the former set of approaches stems from our use of the LRD model of behavior, which considerably simplifies the learning problem and naturally allows us to capture temporal interdependence. Our approach draws some inspiration from the framework for learning from collective behavior by Kearns and Wortman [2008]. However, the key general result in Kearns and Wortman [2008] requires learning with reset (i.e., a large collection of independently generated sequences of behavior), whereas we learn from only a single observed behavior sequence. Duong et al. [2010], like us, explicitly modeled intertemporal dependence in behavior. However, their approach was limited to consensus games, and required knowledge of player utilities.
§ 3 MODEL

§ 3.1 BINARY MULTI-SCALE GAME ON NETWORKS

A binary multi-scale game is defined on a network, which we represent by its adjacency matrix $\mathbf{A}$. The network can be directed or undirected, weighted or unweighted; we only assume that there are no self-loops. For expository purposes, $\mathbf{A}$ is unweighted and undirected in the present paper. The agents in the game are situated on the vertices of $\mathbf{A}$, denoted by $\mathcal{V} = \{v_1, \ldots, v_n\}$, and are partitioned into $K$ groups, i.e., $\mathcal{V} = \cup_{i=1}^{K} \mathcal{G}_i$ with $\mathcal{G}_i \cap \mathcal{G}_j = \varnothing$ for any $i \neq j$. We use the set $\mathcal{J} = \{\mathcal{G}_i \mid i = 1, \ldots, K\}$ to represent the $K$ groups. Intuitively, each group $\mathcal{G}_i$ can represent a neighborhood when the underlying network is an urban network, or an interest group if the underlying network is a social network. The group membership of agent $i$ is encoded by a mapping $\alpha(i)$ from the agent's index to its group index, i.e., $\alpha(i) = j$ for $i \in \mathcal{G}_j$. Throughout, we assume that the network structure $\mathbf{A}$, the mapping $\alpha(i)$, and the group structure $\mathcal{J}$ are known.
We use $x_i \in \mathcal{S}_i$ to represent agent $i$'s action, where $\mathcal{S}_i = \{0, 1\}$. We use public goods investment as a running example, where $x_i = 1$ (resp. $x_i = 0$) means that agent $i$ invests (resp. does not invest) in the public good. Consequently, we will refer to the choice $x_i = 1$ as an agent's decision to invest, while $x_i = 0$ means that $i$ decides not to invest. The marginal cost of making an investment is captured by a constant $c_i \in \mathbb{R}_+$, e.g., monetary cost, time, and/or effort exerted. The action profile of all agents is represented by $\mathbf{x} \in \{0, 1\}^n$, whose $i$-th entry is $x_i$. We use the set $\mathcal{N}(i)$ to represent agent $i$'s neighbors. The action profile restricted to agent $i$'s neighbors is $\mathbf{x}_{\mathcal{N}(i)}$.

To capture the multi-scale (group) structure of the game, we define a vector $\mathbf{y} \in \mathbb{R}^K$ that represents some aggregate statistic at the group level. Typically, $y_i$ will be the total investment by group $i$, i.e., $y_i = \sum_{j \in \mathcal{G}_i} x_j$. We emphasize, however, that the definition of $\mathbf{y}$ is quite general; e.g., $y_i$ can also be the median investment in group $i$, or any other reasonable group-level statistic. The key idea behind the multi-scale representation is that while agents have concrete knowledge about the behavior of those they regularly interact with (network neighbors), they only have higher-level knowledge about other groups, as captured by the associated statistics for those groups. A concrete example is vaccination: an agent usually has specific knowledge about the vaccination status of her close friends, which is encoded by $\mathbf{x}_{\mathcal{N}(i)}$, but only aggregate vaccination information at the level of counties or states, which is captured by $\mathbf{y}$. The utility function of agent $i$ is defined as follows:
$$
u_i(x_i, \mathbf{x}_{-i}) = g_i(x_i, \mathbf{x}_{\mathcal{N}(i)}) + h_i(x_i, \mathbf{y}) - c_i x_i, \tag{1}
$$

where $\mathbf{y}$ is implicitly a function of the full action profile $\mathbf{x}$. The function $g_i$ models local effects between an agent and its direct neighbors, capturing the externality that agent $i$ experiences from its neighbors' (and its own) investment. The function $h_i$ generalizes local effects from the individual level to the group level, encoding the multi-scale structure of the game. The term $c_i x_i$ captures the cost of investment. Putting everything together, we define a binary multi-scale game on networks as a tuple $\mathrm{b\text{-}MSGN}(\mathbf{A}, \mathcal{J}, \{\mathcal{S}_i\}, \{u_i\}_{i=1}^{n})$, where $\mathbf{A}$ is the underlying network, $\mathcal{J}$ is the group structure, $\mathcal{S}_i$ are the pure strategy sets of the players, and $u_i$ are the player utilities defined in Equation (1).
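To make the aggregate statistic concrete, here is a minimal sketch (our own illustration; the function and variable names are not from the paper) of computing $\mathbf{y}$ as per-group total investment, given an action profile and the membership map $\alpha$:

```python
import numpy as np

def group_statistic(x, alpha, K):
    """Total investment y_g = sum of x_j over agents j in group g.

    x     : binary action profile, shape (n,)
    alpha : alpha[i] is the group index of agent i, values in {0, ..., K-1}
    K     : number of groups
    """
    y = np.zeros(K)
    np.add.at(y, alpha, x)  # scatter-add each agent's action into its group
    return y

# Toy example: 4 agents, 2 groups
x = np.array([1, 0, 1, 1])
alpha = np.array([0, 0, 1, 1])
print(group_statistic(x, alpha, K=2))  # [1. 2.]
```

Any other group-level statistic (e.g., the median) would simply replace the scatter-add with the corresponding per-group aggregation.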
§ 3.2 LOGIT-RESPONSE DYNAMICS

When modeling agents' strategic behavior, a common assumption is that agents are rational, i.e., they always choose the action with the highest utility. This is formally modeled by the best-response rule $x_i \in \arg\max_{x_i'} u_i(x_i', \mathbf{x}_{-i})$, where $\mathbf{x}_{-i}$ represents the actions of all agents other than agent $i$. In the conventional Nash equilibrium solution concept that has been common in prior literature on learning games from data [Honorio and Ortiz, 2015, Leng et al., 2020], all players are assumed to simultaneously choose a best response to each other. In reality, however, an agent may not make completely rational decisions, due to 1) limited resources or computational power needed to precisely solve the argmax problem, and 2) inability to perfectly assess small differences in its utility. Furthermore, a Nash equilibrium of a static game cannot capture intertemporal dependencies that may be present in time-series behavior data, and multiplicity of equilibria creates a further practical challenge in learning general-sum games from data. A common alternative to the Nash equilibrium solution concept is the quantal response equilibrium (QRE) [McKelvey and Palfrey, 1995], which was recently used in a framework for learning two-player zero-sum games from data [Ling et al., 2018]. However, multiplicity of equilibria (both Nash and QRE) in general-sum games has limited further progress.

Our key conceptual contribution is to combine bounded rationality in action choices with bounded rationality in dynamic agent behavior. While such a combination seems entirely natural, we are the first to explore it in the context of learning games from time-series data. Our experiments below vindicate this approach, which resolves the issue of equilibrium multiplicity and captures dynamic interdependencies in behavior. Specifically, we adopt a classic model of boundedly rational dynamic behavior: logit-response dynamics (LRD) [Blume, 1993, Alós-Ferrer and Netzer, 2010]. LRD presumes a repeated one-shot game in which, in every step, agents select actions with probabilities given by a logit function of their utilities (as in QRE), taking the choices made by others as given from the previous step (unlike QRE). In our context, the probability of agent $i$ choosing to invest ($x_i = 1$) in the next time step is
$$
\begin{aligned}
p(x_i^{t+1} = 1 \mid \mathbf{x}^t, \mathbf{y}^t) &= \frac{e^{\gamma \cdot u_i(1, \mathbf{x}_{-i}^t, \mathbf{y}^t)}}{e^{\gamma \cdot u_i(1, \mathbf{x}_{-i}^t, \mathbf{y}^t)} + e^{\gamma \cdot u_i(0, \mathbf{x}_{-i}^t, \mathbf{y}^t)}} \\
&= \frac{1}{1 + e^{\gamma \left( u_i(0, \mathbf{x}_{-i}^t, \mathbf{y}^t) - u_i(1, \mathbf{x}_{-i}^t, \mathbf{y}^t) \right)}}.
\end{aligned} \tag{2}
$$

The scalar $\gamma$ quantifies the noise level in the agent's decision-making. As $\gamma$ goes to infinity, the logit response converges to the best-response rule. For any $0 < \gamma < \infty$, the agent chooses a non-best response with positive probability, and actions yielding larger utility are chosen with higher probability. Throughout the paper, we assume that $\gamma$ is known (in practice, we set it to 1, as it simply scales the utility functions). We define the probability $p(x_i^{t+1} = 1 \mid \mathbf{x}^t, \mathbf{y}^t)$ as the investment probability at time step $t+1$. When the context is clear we write $p(x_i^{t+1})$ for the investment probability, omitting the dependence on $\mathbf{x}^t$ and $\mathbf{y}^t$.
In LRD, we assume that at each time step each agent updates its action independently according to the logit-response function (2). Consequently, given $\mathbf{x}^t$ and $\mathbf{y}^t$, the agents' investment probabilities at time step $t+1$ are conditionally independent, i.e., $p(x_i^{t+1})$ and $p(x_j^{t+1})$ are independent for $i \neq j$. The assumption of conditional independence conceptually follows the classic idea of maximum pseudo-likelihood [Besag, 1974], which simplifies the derivation of the data likelihood by avoiding the computation of the normalization constant. Additionally, this property implies convergence of agents' behavior to a stationary distribution. Specifically, let $\mathcal{M}$ be the discrete Markov chain induced by the logit-response dynamics, with state space $\mathcal{S} = \{0, 1\}^n$. The transition probability $p(\mathbf{x}^{t+1} \mid \mathbf{x}^t)$ equals $\prod_{i=1}^{n} p(x_i^{t+1} \mid \mathbf{x}^t, \mathbf{y}^t)$, which by definition is always positive, including the transition probability from a state to itself. Consequently, the state transition graph of $\mathcal{M}$ is strongly connected and aperiodic. This in turn implies that the stationary distribution $\pi$ of the Markov chain exists and is unique [Chung and Graham, 1997, Wildstrom, 2005].
${}^{1}$The state transition graph of a discrete Markov chain is aperiodic if the transition probability from a state to itself is positive.
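A single synchronous step of these dynamics can be sketched as follows (a hypothetical implementation with our own naming; the utility function is passed in as a callable):

```python
import numpy as np

rng = np.random.default_rng(0)

def lrd_step(x, y, utility, gamma=1.0, rng=rng):
    """One synchronous logit-response update.

    utility(i, a, x, y) returns u_i(a, x_{-i}, y) for action a in {0, 1}.
    Each agent draws its next action independently with investment probability
    p(x_i = 1) = 1 / (1 + exp(gamma * (u_i(0, ...) - u_i(1, ...)))),  # Eq. (2)
    taking the others' previous actions x and group statistics y as given.
    """
    n = len(x)
    x_next = np.empty(n, dtype=int)
    for i in range(n):
        diff = utility(i, 0, x, y) - utility(i, 1, x, y)
        p_invest = 1.0 / (1.0 + np.exp(gamma * diff))
        x_next[i] = rng.random() < p_invest
    return x_next
```

As `gamma` grows, `p_invest` approaches an indicator of the better action, recovering the best-response rule.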
§ 4 THE LEARNING FRAMEWORK

Since in practice we typically have only a single trace of past behavior to learn from, we consider the problem of learning the game model parameters from a single behavior sequence collected over $l$ time steps, i.e., $\mathcal{D}_l = \{(\mathbf{x}^1, \mathbf{y}^1), \ldots, (\mathbf{x}^l, \mathbf{y}^l)\}$, where $\mathbf{x}^k$ is the action profile of all agents at time step $k$ and $\mathbf{y}^k$ collects the group-level statistics that capture aggregate behavior by each group in the multi-scale game. We assume that the utility functions $u_i$ have parametric representations, with associated parameter vectors denoted by $\boldsymbol{\theta}_i \in \mathcal{F}_i := [-1, 1]^m$, where $m$ is the dimension of $\boldsymbol{\theta}_i$; these are concatenations of the parameters of $g_i$ and $h_i$ (and the cost $c_i$), the two main constituent functions in player utilities. We use $\Theta = \{\boldsymbol{\theta}_1, \ldots, \boldsymbol{\theta}_n\}$ to represent all learnable parameters of the game, where $\Theta \in \Pi = \times_{i=1}^{n} \mathcal{F}_i$. The utility function in (1) is a high-level description; we will instantiate $g_i$ and $h_i$ to specific parametric functions below. We present a general likelihood-based approach for learning multi-scale games from such data, and subsequently study an important special case that admits efficient learning.
§ 4.1 THE GENERAL CASE

The binary multi-scale game together with the logit-response dynamics defines a generative time-series model of the joint behavior of both agents and groups. We assume that $\mathbf{y}^t$ is a deterministic function of the individual-level action profile $\mathbf{x}^t$, which simplifies the derivation of the data likelihood, as the joint probability of $\mathbf{x}^{t+1}$ and $\mathbf{y}^{t+1}$ reduces to the marginal probability of $\mathbf{x}^{t+1}$. The generative model is a discrete Markov chain over action profiles. Omitting the dependence of the investment probability on $\mathbf{x}^t$ and $\mathbf{y}^t$, the data likelihood $\mathcal{L}(\mathcal{D}_l; \Theta)$ is formulated as follows:
$$
\begin{aligned}
\mathcal{L}(\mathcal{D}_l; \Theta) &= p(\mathbf{x}^1) \prod_{t=1}^{l-1} p(\mathbf{x}^{t+1} \mid \mathbf{x}^t, \mathbf{y}^t) \\
&= \prod_{t=1}^{l-1} \prod_{i=1}^{n} \left[ p(x_i^{t+1} = 1) \right]^{x_i^{t+1}} \left[ 1 - p(x_i^{t+1} = 1) \right]^{1 - x_i^{t+1}},
\end{aligned} \tag{3}
$$

where the last equality utilizes the conditional independence of the investment probabilities and the fact that $p(\mathbf{x}^1) = 1$. We learn the parameters $\Theta$ via maximum likelihood estimation (MLE). In general, we can leverage gradient-based methods and automatic differentiation tools to maximize the likelihood, as long as the utility functions are differentiable.
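For concreteness, the log of the likelihood in Eq. (3) can be accumulated over a sequence as sketched below (our own illustration; `invest_prob` stands for Eq. (2) evaluated under a fixed candidate $\Theta$):

```python
import numpy as np

def log_likelihood(seq, invest_prob):
    """log L(D_l; Theta) for a sequence of binary action profiles.

    seq         : list of arrays x^1, ..., x^l, each of shape (n,)
    invest_prob : invest_prob(x_t) -> array of p(x_i^{t+1} = 1 | x^t, y^t)
    """
    ll = 0.0
    for x_t, x_next in zip(seq[:-1], seq[1:]):
        p = invest_prob(x_t)
        # Bernoulli log-likelihood of each agent's realized next action
        ll += np.sum(x_next * np.log(p) + (1 - x_next) * np.log(1 - p))
    return ll
```

Maximizing this quantity over $\Theta$ (e.g., by gradient ascent with an automatic-differentiation library) implements the general MLE procedure described above.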
With a slight abuse of notation, we use $\mathrm{b\text{-}MSGN}(\Theta)$ to represent the generative model (consisting of the game together with the logit-response dynamics solution concept), with the utility functions parameterized by $\Theta$. We now instantiate the utility function to a specific parametric form. In particular, we consider games with linear-quadratic utility functions, augmented with the group-level term $h_i$ to account for the multi-scale structure. The resulting MLE problem is convex, and can thus be (near-)optimally solved using interior point methods. We also develop a statistical test for the existence of multi-scale structure in this game based on the classic likelihood ratio test.

§ 4.2 LEARNING MULTI-SCALE LINEAR-QUADRATIC GAMES

Linear-quadratic games have been used in much prior literature on network game modeling, both in economics and in machine learning [Ballester et al., 2006, Bramoullé and Kranton, 2007, Galeotti et al., 2020, Leng et al., 2020], with Leng et al. [2020] specifically considering the problem of learning network structure in such models from Nash equilibrium behavior by the agents. The standard utility function in linear-quadratic network games is defined as
$$
u_i(x_i, \mathbf{x}_{-i}) = b_i x_i + \beta_i x_i \sum_{j \in \mathcal{V}} A_{i,j} x_j - c_i x_i^2, \tag{4}
$$

where $b_i \geq 0$ is the marginal benefit of investing, $c_i \geq 0$ is the cost to invest, and $\beta_i \in \mathbb{R}$ captures peer effects from the neighbors' investment. When $\beta_i > 0$ (resp. $\beta_i < 0$), higher investment from the neighbors encourages agent $i$ to make more (resp. less) investment.
To model the multi-scale structure of the game, we consider the following group-level aggregate function $h_i$:

$$
h_i(x_i, \mathbf{y}) = \eta_i x_i \left( y_{\alpha(i)} - \frac{\sum_{g \in \mathcal{J} \smallsetminus \{\mathcal{G}_{\alpha(i)}\}} y_g}{|\mathcal{J}| - 1} \right), \tag{5}
$$

where $y_{\alpha(i)}$ is the group-level statistic of agent $i$'s group and the second term in the parentheses is the average of the statistics of the other groups. The difference models the relative magnitude of agent $i$'s group statistic compared to the other groups. When $\eta_i > 0$ (resp. $\eta_i < 0$), higher relative investment by agent $i$'s group compared to other groups encourages (resp. discourages) $i$'s own investment.
We augment the linear-quadratic payoff with the function $h_i$, leading to the multi-scale linear-quadratic utility:

$$
u_i(x_i, \mathbf{x}_{-i}) = (b_i - c_i) x_i + \beta_i x_i \sum_{j \in \mathcal{V}} A_{i,j} x_j + h_i(x_i, \mathbf{y}). \tag{6}
$$

The set $\boldsymbol{\theta}_i = \{b_i, \beta_i, \eta_i, c_i\}$ consists of the parameters we aim to learn from data. Note that, as the action space in our setting is binary, the term $b_i x_i - c_i x_i^2$ becomes $(b_i - c_i) x_i$. As a result, accurately estimating the two parameters separately is not feasible, as both can be shifted by the same amount without changing their difference. Therefore, we treat $b_i - c_i$ as a single marginal benefit that we estimate from data. As we now show, the key property of this multi-scale linear-quadratic game model is that the resulting MLE problem is convex.
${}^{2}$This problem is not specific to our model: in prior literature, the cost constant $c_i$ is usually set to $\frac{1}{2}$ in order to avoid this invariance of $b_i - c_i$ under shifting.
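Putting Eqs. (5) and (6) together, the multi-scale linear-quadratic utility of a single agent can be sketched as follows (a minimal illustration; all names are our own, and parameters are passed as per-agent arrays):

```python
import numpy as np

def ms_lq_utility(i, x_i, x, y, A, alpha, b, beta, eta, c):
    """u_i per Eq. (6): marginal benefit, peer effects, and the group term h_i."""
    K = len(y)
    # h_i (Eq. (5)): own group's statistic minus the average over other groups
    others_avg = (y.sum() - y[alpha[i]]) / (K - 1)
    h = eta[i] * x_i * (y[alpha[i]] - others_avg)
    # peer effects; A has no self-loops, so x may include agent i's own action
    peer = beta[i] * x_i * (A[i] @ x)
    return (b[i] - c[i]) * x_i + peer + h
```

Plugging this utility into the logit response (2) yields the per-agent investment probabilities used throughout the MLE.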
Proposition 4.1. Consider a $\mathrm{b\text{-}MSGN}(\mathbf{A}, \mathcal{J}, \{\mathcal{S}_i\}, \{u_i\}_{i=1}^{n})$. If $\{u_i\}_{i=1}^{n}$ are instantiated as the multi-scale linear-quadratic utilities, the resulting MLE optimization problem is convex.

Proof. Recall that $\Theta \in \Pi = \times_{i=1}^{n} \mathcal{F}_i$, a Cartesian product of convex sets. Thus, the feasible region $\Pi$ of the MLE is convex. In what follows, we show that the log-likelihood function $\log \mathcal{L}(\mathcal{D}_l; \Theta)$ is concave w.r.t. $\Theta$.
Note that $\log \mathcal{L}(\mathcal{D}_l; \Theta) = \sum_{t=1}^{l-1} \log p(\mathbf{x}^{t+1} \mid \mathbf{x}^t)$; it is thus sufficient to show that $\log p(\mathbf{x}^{t+1} \mid \mathbf{x}^t)$ is concave w.r.t. $\Theta$ for any $1 \leq t \leq l-1$. We expand $\log p(\mathbf{x}^{t+1} \mid \mathbf{x}^t)$ as follows:

$$
\log p(\mathbf{x}^{t+1} \mid \mathbf{x}^t) = \sum_{i=1}^{n} \Big[ x_i^{t+1} \log p(x_i^{t+1} = 1) + \left( 1 - x_i^{t+1} \right) \log \big[ 1 - p(x_i^{t+1} = 1) \big] \Big].
$$

The logarithm of the investment probability is as follows:
$$
\log p(x_i^{t+1} = 1) = \log \left[ \frac{1}{1 + e^{-\gamma \cdot u_i(1 \mid \mathbf{x}^t, \mathbf{y}^t, \boldsymbol{\theta}_i)}} \right].
$$

It is immediate that $u_i(1 \mid \mathbf{x}^t, \mathbf{y}^t, \boldsymbol{\theta}_i)$ is a linear function of $\boldsymbol{\theta}_i$. In addition, $\log p(x_i^{t+1} = 1)$ is concave w.r.t. $u_i(1 \mid \mathbf{x}^t, \mathbf{y}^t, \boldsymbol{\theta}_i)$, as its second derivative is negative over the domain, i.e.,

$$
\frac{\partial^2 \log p(x_i^{t+1} = 1)}{\partial^2 u_i(1 \mid \mathbf{x}^t, \mathbf{y}^t, \boldsymbol{\theta}_i)} = -\frac{e^{\gamma \cdot u_i(1 \mid \mathbf{x}^t, \mathbf{y}^t, \boldsymbol{\theta}_i)} \cdot \gamma^2}{\left( 1 + e^{\gamma \cdot u_i(1 \mid \mathbf{x}^t, \mathbf{y}^t, \boldsymbol{\theta}_i)} \right)^2} < 0.
$$

The composition of a concave function with a linear function is concave (Section 3.2.2 of Boyd and Vandenberghe [2004]); thus, $\log p(x_i^{t+1} = 1)$ is concave w.r.t. $\boldsymbol{\theta}_i$, and so is $x_i^{t+1} \log p(x_i^{t+1} = 1)$. We can similarly show that $\log \big[ 1 - p(x_i^{t+1} = 1) \big]$ is concave w.r.t. $\boldsymbol{\theta}_i$, which implies that $\left( 1 - x_i^{t+1} \right) \log \big[ 1 - p(x_i^{t+1} = 1) \big]$ is concave w.r.t. $\boldsymbol{\theta}_i$. A sum of concave functions with nonnegative weights is still concave, so $\log p(\mathbf{x}^{t+1} \mid \mathbf{x}^t)$ is concave w.r.t. $\Theta$.
A Statistical Test for Multi-Scale Structure We now further leverage the proposed framework to develop a statistical test of whether the game exhibits multi-scale structure. This test is based on the classic likelihood ratio test [Wasserman, 2013]. Specifically, let $\widehat{\Theta} = \{\widehat{\mathbf{b}}, \widehat{\mathbf{c}}, \widehat{\boldsymbol{\beta}}, \widehat{\boldsymbol{\eta}}\}$ be the MLE estimator. The feasible region of $\widehat{\Theta}$ is $\mathcal{F} = \{\widehat{\Theta} \mid \widehat{\mathbf{b}} \geq 0, \widehat{\mathbf{c}} \geq 0, \widehat{\boldsymbol{\beta}} \in [-\mathbf{1}, \mathbf{1}], \widehat{\boldsymbol{\eta}} \in [-\mathbf{1}, \mathbf{1}]\}$. The null hypothesis set is $\mathcal{F}_0 = \{\widehat{\Theta} \in \mathcal{F} \mid \widehat{\boldsymbol{\eta}} = \mathbf{0}\}$, encoding the hypothesis that group-level statistics have no impact on agents' utilities. The test statistic is as follows:

$$
\lambda = 2 \log \left( \frac{\max_{\Theta \in \mathcal{F}} \mathcal{L}(\mathcal{D}_l; \Theta)}{\max_{\Theta \in \mathcal{F}_0} \mathcal{L}(\mathcal{D}_l; \Theta)} \right). \tag{7}
$$

Intuitively, $\lambda$ is large if there is some estimator $\widehat{\Theta}$ in the feasible region $\mathcal{F}$ under which the data $\mathcal{D}_l$ is much more likely than under any estimator in the null hypothesis set $\mathcal{F}_0$. The p-value is equal to $p(\chi_n^2 > \lambda)$, where $\chi_n^2$ follows a chi-square distribution with $n$ degrees of freedom. In the Experiments section, we present experiments on synthetic data showing that the test is indeed effective at identifying multi-scale structure in games. We then use it on real data to demonstrate that such data also exhibits statistically significant multi-scale behavior dependence.
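Given the two maximized log-likelihoods, the statistic in Eq. (7) and its p-value follow directly; a sketch using scipy's chi-squared survival function (function and argument names are our own):

```python
from scipy.stats import chi2

def lr_test(loglik_full, loglik_null, n):
    """Likelihood ratio test of Eq. (7).

    loglik_full : max log-likelihood over the full feasible region F
    loglik_null : max log-likelihood over the null set F_0 (eta = 0)
    n           : degrees of freedom (number of eta_i parameters tested)
    """
    lam = 2.0 * (loglik_full - loglik_null)
    p_value = chi2.sf(lam, df=n)  # p(chi^2_n > lambda)
    return lam, p_value
```

The null hypothesis of no multi-scale structure is rejected at level 0.05 when `p_value < 0.05`.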
§ 5 EXPERIMENTS

We focus our experimental study on learning a multi-scale linear-quadratic game $\mathrm{b\text{-}MSGN}(\Theta^*)$. In all cases, we learn the game from a single sequence $\mathcal{D}_l$, and experiment on both synthetic and real-world data. We use synthetic data to demonstrate the effectiveness of our approach at recovering the ground-truth parameters of the linear-quadratic games, and additionally show that the statistical test successfully identifies multi-scale game structure.

In addition, we evaluate the efficacy of the proposed approach in predicting future time-series behavior. For both synthetic and real data, we first compare the predictive efficacy of the proposed game learning approach with three conventional generative baselines commonly applied in similar settings with the primary purpose of time-series prediction: a discrete Markov chain, a homogeneous Poisson process, and the Hawkes process [Mohler et al., 2011]. Specifically, our experiments use a discrete-time Hawkes process with an exponential decay kernel, whose intensity at time step $t$ is $\lambda(t) = \lambda_0 + \alpha \sum_{t_i < t} z_{t_i} e^{-\beta(t - t_i)}$, where $z_{t_i} = \sum_{j=1}^{n} x_j^{t_i}$ is the total investment at step $t_i$; $\lambda_0$ and $\alpha$ are estimated through MLE, and $\beta$ is selected by cross-validation. We show that the proposed approach outperforms these baselines in terms of predictive efficacy.
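The baseline's discrete-time Hawkes intensity can be computed as sketched below (our own code, not the paper's implementation; `z` holds the total investment at each past step):

```python
import numpy as np

def hawkes_intensity(t, z, lam0, alpha, beta):
    """lambda(t) = lam0 + alpha * sum_{t_i < t} z[t_i] * exp(-beta * (t - t_i))."""
    past = np.arange(t)                 # all steps t_i strictly before t
    return lam0 + alpha * np.sum(z[past] * np.exp(-beta * (t - past)))
```

The self-exciting term makes the intensity spike after steps with high total investment and decay geometrically thereafter.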
Additionally, we compare our approach with a method for learning Linear Influence Games (LIGs) [Honorio and Ortiz, 2015], a state-of-the-art game-theoretic baseline for learning utility functions from time-series behavior in network games. LIG is a generative model that assumes that behavior in each step of a time series is generated according to a mixture of two distributions: a uniform distribution over the set of all pure-strategy Nash equilibria, and a uniform distribution over the set of all non-equilibrium strategy profiles.${}^{3}$ The learnable parameters of an LIG include the parameters of the players' utility functions as well as a parameter determining which distribution an action profile comes from. The parameters are learned by maximizing the proportion of equilibria observed in the training data.

${}^{3}$Note that the LIG approach assumes that the set of all pure-strategy Nash equilibria can be efficiently sampled. Another advantage of the proposed approach over LIG is that we do not need this assumption.
§ 5.1 SYNTHETIC DATA

We generate a synthetic sequence $\mathcal{D}_l$ by simulating $\mathrm{b\text{-}MSGN}(\Theta^*)$ for $l-1$ iterations, with the initial action profile set to all zeros. In each time step, every agent makes a decision according to a Bernoulli distribution with success rate equal to the investment probability (i.e., Equation (2)). The ground-truth parameters $\Theta^* = \{\mathbf{b}^*, \mathbf{c}^*, \boldsymbol{\beta}^*, \boldsymbol{\eta}^*\}$ are specified as follows: $b_i^* \sim \mathcal{N}(0.3, 0.01^2)$, $c_i^* \sim \mathcal{N}(1.3, 0.1^2)$, $\beta_i^* \sim \mathcal{N}(-1, 0.01^2)$, and $\eta_i^* \sim \mathcal{N}(0.1, 0.01^2)$. The parameter $\gamma$ is set to 5. We consider three classes of synthetic networks: Barabási-Albert (BA) [Barabási and Albert, 1999], Watts-Strogatz (WS) [Watts and Strogatz, 1998], and Block Two-level Erdős-Rényi (BTER) [Seshadhri et al., 2012] networks. For each class, we randomly generate 20 networks with 100 nodes each. For each randomly generated network, we run the community detection algorithm proposed by Clauset et al. [2004] and use the resulting communities as groups.
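The generation procedure above can be sketched end-to-end (a simplified illustration under the multi-scale linear-quadratic utility, with group totals as the statistic; all names are our own):

```python
import numpy as np

def simulate_sequence(A, alpha, K, theta, l, gamma=5.0, seed=0):
    """Simulate a behavior sequence D_l from b-MSGN(Theta) under LRD.

    theta = (b, c, beta, eta), each an array of shape (n,).
    Returns a list of l binary action profiles, starting from all zeros.
    """
    b, c, beta, eta = theta
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    x = np.zeros(n, dtype=int)
    seq = [x.copy()]
    for _ in range(l - 1):
        y = np.zeros(K)
        np.add.at(y, alpha, x)                      # group totals y_g
        rel = y[alpha] - (y.sum() - y[alpha]) / (K - 1)
        # utility difference u_i(1, .) - u_i(0, .) for all agents at once
        du = (b - c) + beta * (A @ x) + eta * rel
        p = 1.0 / (1.0 + np.exp(-gamma * du))       # investment probs, Eq. (2)
        x = (rng.random(n) < p).astype(int)         # independent Bernoulli draws
        seq.append(x.copy())
    return seq
```

In the experiments, the per-agent parameters would be drawn from the Gaussian distributions stated above rather than fixed by hand.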
162
+
163
+ Figure 1 shows the effectiveness of learning the game parameters from synthetic data. As the length $l$ increases, the Root Mean Squared Error (RMSE) between the estimated parameters and the true parameters consistently decreases, converging to near-zero; this indicates that the MLE estimator approximates the ground-truth ${\Theta }^{ * }$ reasonably well.
164
+
165
+ <graphics>
166
+
167
+ Figure 1: The RMSE between the estimated parameters and the true parameters across various lengths $l$ . Left: BA (avg. degree=5.82, avg. clustering coeff.=0.1067); Middle: WS (avg. degree=9.1064, avg. clustering coeff.=0.3542); Right: BTER (avg. degree=9.3200, avg. clustering coeff.=0.1299).
168
+
169
+ Next, we show that the statistical test successfully determines the existence of the multi-scale structure in the game. We simulate two sets of data, one called "with groups" and the other "without groups". The "with groups" data is simulated as usual, so that the agents' utilities are influenced by the multi-scale structure. The "without groups" data is simulated with ${\eta }_{i}^{ * }$ set to zero, which implies that the multi-scale structure has no direct impact on the agents' utilities. The p-values for the two sets of data are shown in Figure 2. The red horizontal lines mark $p\left( {{\chi }_{n}^{2} > \lambda }\right) = {0.05}$ : we reject the null hypothesis when the p-value falls below the red line. The blue lines represent the p-values for the "with groups" data. As $l$ (the number of observations) increases, the p-values consistently decrease. In particular, for BA (resp. WS) networks, when $l > {1500}$ (resp. $l > {750}$ ) we correctly reject the null hypothesis. One exception is the BTER network, where the null hypothesis is not rejected even with 2000 steps; the average p-value is ${0.18}\left( {\pm {0.074}}\right)$ . This may be due to the greater structural complexity of BTER networks compared to BA and WS. The dashed orange lines represent the p-values for the "without groups" data. Note that the orange lines stay above 0.05 by a large margin, which means that we never incorrectly reject the null hypothesis (i.e., never falsely claim the existence of multi-scale structure).
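The test itself is a standard likelihood-ratio test: the statistic $\lambda$ is twice the gap between the maximized log-likelihoods of the full model and the null model (all ${\eta }_{i} = 0$), compared against a chi-square distribution. The sketch below is ours and, to stay dependency-free, implements the chi-square survival function only for even degrees of freedom.

```python
import math

def chi2_sf_even_df(lam, df):
    """Survival function P(chi^2_df > lam); closed form valid for even df."""
    assert df % 2 == 0 and df > 0
    half = lam / 2.0
    return math.exp(-half) * sum(half ** i / math.factorial(i)
                                 for i in range(df // 2))

def lr_test(loglik_full, loglik_null, df, alpha=0.05):
    """Likelihood-ratio test: reject the null (no multi-scale effect,
    all eta_i = 0) when the p-value falls below alpha. `df` is the
    number of constrained parameters (here, the eta_i)."""
    lam = 2.0 * (loglik_full - loglik_null)   # LR statistic
    p = chi2_sf_even_df(lam, df)
    return p, p < alpha
```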
170
+
171
+ <graphics>
172
+
173
+ Figure 2: Experimental results for the statistical test. The blue solid lines (resp. orange dashed lines) represent the p-values evaluated on the data with (resp. without) the multi-scale structure. Left: BA; Middle: WS; Right: BTER.
174
+
175
+ § 5.2 REAL-WORLD DATA
176
+
177
+ Gang-Related Homicides. We learn the game on gang-related homicide data from Los Angeles [Valasik et al., 2017]. The data includes 1425 incidents from 1978 to 2012. Each incident consists of several attributes, including date, address, coordinates ($X$ and $Y$ correspond to latitude and longitude, respectively), and demographic information of the victim and the suspect. Each incident includes a label indicating whether the homicide is gang-related, and if so, an attribute of the suspect's gang affiliation. All sensitive attributes in the experimental results are anonymized with numerical values. The data is preprocessed as follows. First, we keep only the incidents that are gang-related, and discard the incidents with missing attributes. Second, to correct errors in incident coordinates, we compute the geometric center of the incidents' coordinates, fit a Gaussian distribution to their distances from the center, and finally discard any incidents that are more than three standard deviations away from the center. After preprocessing, the data contains 606 incidents committed by suspects from 54 gangs. A gang's location is approximated by the geometric center of its associated incidents. We treat the 54 gangs as the agents in the game; they are partitioned into three groups according to their neighborhood information. The network $\mathbf{A}$ is weighted, undirected, and complete, with the gangs as nodes. The weight on an edge is the inverse of the driving time between the two endpoints (gangs) obtained by querying the Google Maps API. A visualization of the processed data is provided in the Supplement.
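The coordinate-cleaning step can be sketched as follows (function and variable names are ours; Euclidean distance on the raw coordinates is used for illustration):

```python
import math

def filter_outliers(coords, k=3.0):
    """Compute the geometric center of incident coordinates, fit a Gaussian
    to the distances from the center, and drop incidents more than k
    standard deviations away (the preprocessing rule described above)."""
    cx = sum(x for x, _ in coords) / len(coords)
    cy = sum(y for _, y in coords) / len(coords)
    d = [math.hypot(x - cx, y - cy) for x, y in coords]
    mu = sum(d) / len(d)
    sigma = math.sqrt(sum((v - mu) ** 2 for v in d) / len(d))
    return [c for c, dist in zip(coords, d) if dist <= mu + k * sigma]
```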
178
+
179
+ Next, we construct a sequence ${\mathcal{D}}_{l}$ of action profiles from the processed data by discretizing time and grouping the incidents that occur in each time interval, where the hyperparameter $T$ is the length of the interval in days (i.e., how finely the data is discretized). We experiment with different values of $T$ , e.g., $T = {30},{60},{90}$ . We set ${x}_{j}^{t} = 1$ if there is at least one incident associated with the $j$ -th gang at time step $t$ , and set ${x}_{j}^{t} = 0$ otherwise. The aggregate statistic is ${y}_{i}^{t} = \mathop{\sum }\limits_{{j \in {\mathcal{G}}_{i}}}{x}_{j}^{t}$ , which measures the overall level of violence in group ${\mathcal{G}}_{i}$ .
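A minimal sketch of this discretization, assuming incidents are given as `(day, gang_id)` pairs (names and input format are ours):

```python
def build_action_profiles(incidents, num_gangs, T, horizon):
    """Binary action profiles: x[t][j] = 1 iff gang j has at least one
    incident in the t-th window of T days. `incidents` is a list of
    (day, gang_id) pairs; `horizon` is the total span in days."""
    steps = horizon // T
    x = [[0] * num_gangs for _ in range(steps)]
    for day, gang in incidents:
        t = day // T
        if t < steps:
            x[t][gang] = 1
    return x

def group_statistics(x_t, groups):
    """Group-level statistic y_i^t = sum of actions of gangs in group G_i."""
    return [sum(x_t[j] for j in g) for g in groups]
```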
180
+
181
+ We first apply the statistical test to data aggregated with different values of $T$ . The p-values are below 0.05 for all values of $T$ except $T = {30}$ and $T = {120}$ . Overall, the data consistently exhibits statistically significant multi-scale behavior dependence, and this effect is relatively robust to the time discretization.
182
+
183
+ <graphics>
184
+
185
+ Figure 3: Comparison of our approach with the game-theoretic baseline LIG and three conventional generative approaches in terms of predictive log-likelihood on test data.
186
+
187
+ <graphics>
188
+
189
+ Figure 4: A visualization of the predicted total crimes on test data with $T = {30}$ (i.e., each time step represents 30 days). We omit Poisson and LIG as their predictions are far from the ground-truth. The shaded area represents two standard deviations of the prediction from b-MSGN.
190
+
191
+ To compare the proposed approach, in which we learn the linear-quadratic game on this data, with several baselines in terms of predictive log-likelihood on test data, we split ${\mathcal{D}}_{l}$ into training data and test data with ratio $9 : 1$ . The results are shown in Figure 3. We observe that our approach is considerably better than LIG, particularly for smaller values of $T$ . In addition, our approach is competitive in predictive efficacy with all but the Markov chain baseline (which is considerably worse), including the Hawkes process, the state-of-the-art approach for modeling crime data of this kind [Mohler et al., 2011].
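The predictive log-likelihood used in this comparison can be computed as a Bernoulli log-likelihood of the observed test actions under each model's one-step-ahead probabilities; a sketch (the clipping constant and function name are ours):

```python
import math

def predictive_loglik(probs, actions):
    """Bernoulli log-likelihood of observed binary test actions under a
    model's one-step-ahead probabilities."""
    ll = 0.0
    for p, x in zip(probs, actions):
        p = min(max(p, 1e-12), 1 - 1e-12)   # guard against log(0)
        ll += x * math.log(p) + (1 - x) * math.log(1 - p)
    return ll
```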
192
+
193
+ A visualization of the predicted total crimes on test data is shown in Figure 4; the shaded area represents two standard deviations of the prediction from b-MSGN. The predictions from Poisson and LIG are omitted as they are far from the ground truth; both are almost horizontal lines that fail to capture any trends exhibited in the real data. We can observe that b-MSGN captures the overall trend with high confidence, i.e., the ground truth lies within two standard deviations of the prediction.
194
+
195
+ <graphics>
196
+
197
+ Figure 5: The estimates of ${b}_{i} - {c}_{i},{\beta }_{i}$ and ${\eta }_{i}$ . Top: the homicides data aggregated with $T = {60}$ . Bottom: the bilateral trading data.
198
+
199
+ The key advantage of the proposed approach comes from its interpretability as capturing strategic interactions; in linear-quadratic games in particular, the parameters we learn have a natural interpretation, which we now consider. Specifically, to analyze the learned game parameters, we set $T = {60}$ as an illustration (the results are quite robust to this choice), so that the resulting sequence ${\mathcal{D}}_{l}$ has $l = {213}$ time steps. As we do not have access to the ground-truth utility functions, the analysis serves to provide insights about the gangs' behavior. The learned parameters are shown in the top row of Figure 5. First, the estimated ${b}_{i} - {c}_{i}$ are shown on the left of the figure; the median is -0.77. Note that the estimates are negative, that is, the perceived costs of homicides to gang members exceed the benefits. Overall, gang-related homicides are relatively rare; indeed, on average only ${4.7}\%$ of gangs committed a homicide at each time step. When $T$ is increased to 365, on average 19.7% of gangs committed homicides at each time step, and the median of ${b}_{i} - {c}_{i}$ becomes -0.56.
200
+
201
+ The estimates of ${\beta }_{i}$ are shown in the middle of the figure. Most of the estimates take one of the two extreme values, +1 or -1. The mean is 0.18, which indicates that gang members on average tend to commit more homicides as the number of homicides by other members of their gang increases. This may be explained by the self-excitation phenomenon observed by Mohler et al. [2011], whereby an incident involving rival gangs can lead to retaliatory acts of homicide. Finally, the estimated ${\eta }_{i}$ are shown on the right of the figure. Most estimates are positive (except for a few outliers), which supports the intuitive observation that a greater overall level of violence in a gang's neighborhood tends to lead to a greater incidence of violence by the gang.
202
+
203
+ To see how the discretization affects the estimates, we plot the estimated parameters across the values of $T$ in Figure 6. The estimates of ${b}_{i} - {c}_{i}$ increase slightly as $T$ gets larger, simply because a greater number of homicides is aggregated in each step. However, the difference is not significant. Similarly, the estimates of ${\beta }_{i}$ and ${\eta }_{i}$ are not severely affected by the value of $T$ .
204
+
205
+ <graphics>
206
+
207
+ Figure 6: From left to right, the estimates of ${b}_{i} - {c}_{i},{\beta }_{i}$ and ${\eta }_{i}$ across different values of $T$ ; the feasible region of each estimated parameter is restricted to $\left\lbrack {-1,1}\right\rbrack$ .
208
+
209
+ Bilateral Trading Data. The second dataset we consider is the bilateral trading data from the United Nations Comtrade Database (https://comtrade.un.org/). The data consists of statistics for international bilateral trading (e.g., imports and exports), covering over 170 reporting economies with records from 1962 to 2018. We focus on annual exports data in terms of their value in US dollars and extract a subset consisting of 127 reporting economies with complete statistics since 1962; the reporting economies are partitioned into six groups according to the continents they are located on: Asia, Africa, Europe, South America, Australia, and North America. We treat the reporting economies as agents in the game. The graph underlying the game is directed and weighted, where an edge from $i$ to $j$ means that $i$ has exported goods/services to $j$ , and the weight on the edge is the normalized total value of exports since 1962. As the graph is directed, we define the neighborhood of economy $i$ as its exporting destinations. The sequence ${\mathcal{D}}_{l}$ of action profiles consists of 57 time steps, each corresponding to a year. For every economy, we track a moving average of the value of exports over $k$ time steps. Let ${e}_{i}^{t}$ be the value of exports of economy $i$ at time step $t$ . For $t > k$ , if the value is greater than the moving average, i.e., ${e}_{i}^{t} > \left( {{e}_{i}^{t - 1} + \cdots + {e}_{i}^{t - k}}\right) /k$ , we set ${x}_{i}^{t} = 1$ ; otherwise ${x}_{i}^{t} = 0$ . For $t = 1,\ldots ,k$ the actions ${x}_{i}^{t}$ are always set to zero. Intuitively, ${x}_{i}^{t} = 1$ encodes that economy $i$ has a higher value of exports compared with the average value over the previous $k$ years, which signals economic growth [Michaely, 1977]. The group-level statistic is again ${y}_{i}^{t} = \mathop{\sum }\limits_{{j \in {\mathcal{G}}_{i}}}{x}_{j}^{t}$ . We experiment with five values of $k$ , ranging from 1 to 5.
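The moving-average binarization described above can be sketched as follows (a 0-indexed version of the rule; the function name is ours):

```python
def binarize_exports(e, k):
    """x[t] = 1 if the export value e[t] exceeds the moving average of the
    previous k steps; the first k steps are set to zero."""
    x = [0] * len(e)
    for t in range(k, len(e)):
        if e[t] > sum(e[t - k:t]) / k:
            x[t] = 1
    return x
```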
210
+
211
+ We first run the statistical test on ${\mathcal{D}}_{l}$ . The resulting p-values are nearly zero across all the values of $k$ , providing evidence for the importance of multi-scale structure.
212
+
213
+ Next, we compare the game with the baselines on test data (the last 5% of the entire sequence) in terms of predictive log-likelihood. The results for $k = 5$ are as follows: 1) Markov Chain: -36.8413, 2) Poisson: -36.9614, 3) Hawkes: -29.9843, 4) LIG: -31.4041, and 5) b-MSGN: -24.7406; the results for other values of $k$ are similar. Note that in this case the proposed b-MSGN approach outperforms all baselines, including the Hawkes process.
214
+
215
+ Finally, the estimated parameters are shown in Figure 5 (second row). The estimated ${b}_{i} - {c}_{i}$ are mostly negative, indicating that for most economies it is difficult to maintain steady growth in exports; some exceptions include Hong Kong, Japan, Ireland, the Central African Republic, and Taiwan, which are the top five economies ranked by the estimated ${b}_{i} - {c}_{i}$ in descending order. Most estimated values of ${\beta }_{i}$ are positive, suggesting that an economy tends to see growth in exports when its exporting destinations also have increasing exports. Similarly, we rank the economies by the estimated ${\beta }_{i}$ in descending order, with Australia, New Zealand, Aruba, Venezuela, and Dominica as the top five. For each of these, we identify the top three major export goods in terms of their share in the total value of exports since 1962. The major export goods of these five economies are raw materials, e.g., iron ore, meat, crude/refined petroleum, and fruits/nuts. The interpretation is that when the exports of other economies grow, the demand for raw materials also increases. Finally, most estimated values of ${\eta }_{i}$ are positive, which suggests that the relative growth of a group's exports (compared with other groups) is a good predictor of the participating economies' growth.
216
+
217
+ To study the sensitivity of the estimated parameters to $k$ , we plot the estimated parameters across the values of $k$ in Figure 7. There is a decreasing trend in the estimate of ${b}_{i} - {c}_{i}$ for smaller values of $k$ , suggesting that the interpretation of the estimate has to consider the specific value of $k$ . The estimates of ${\beta }_{i}$ and ${\eta }_{i}$ are not sensitive to $k$ .
218
+
219
+ <graphics>
220
+
221
+ Figure 7: From left to right, the estimates of ${b}_{i} - {c}_{i},{\beta }_{i}$ and ${\eta }_{i}$ across different values of $k$ ; the feasible region of each estimated parameter is restricted to $\left\lbrack {-1,1}\right\rbrack$ .
222
+
223
+ § 6 CONCLUSION
224
+
225
+ We propose a game-theoretic generative model of time-series behavior data by combining single-shot multi-scale network games with logit-response dynamics. We do not assume that the agents are fully rational, but rather that they make decisions according to logit-response dynamics. We then present a general learning framework based on maximum likelihood estimation (MLE) for inferring the parameters of such games. In the special case of multi-scale linear-quadratic games, we prove that the MLE is a convex optimization problem and thus admits efficient solution algorithms. We further develop a statistical test to determine whether the game exhibits multi-scale structure. Extensive experiments on both synthetic and real-world datasets demonstrate the efficacy of the proposed approach.
UAI/UAI 2022/UAI 2022 Conference/BGfLS_8j5eq/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,245 @@
1
+ If You've Trained One You've Trained Them All:
2
+
3
+ Inter-Architecture Similarity Increases With Robustness
4
+
5
+ ## Abstract
6
+
7
+ Previous work has shown that commonly-used metrics for comparing representations between neural networks overestimate similarity due to correlations between data points. We show that intra-example feature correlations also cause significant overestimation of network similarity, and we propose an image inversion technique to analyze only the features used by a network. With this technique, we find that similarity across architectures is significantly lower than commonly understood, yet we surprisingly find that similarity between models with different architectures increases as the adversarial robustness of the models increases. Our findings indicate that robust networks tend towards a universal set of representations, regardless of architecture, and that the robust training criterion is a strong prior constraint on the functions that can be learned by diverse modern architectures. We also find that the representations learned by a robust network of any architecture have an asymmetric overlap with non-robust networks of many architectures, indicating that the representations used by robust neural networks are highly entangled with the representations used by non-robust networks.
8
+
9
+ ## 1 INTRODUCTION
10
+
11
+ There is evidence that neural networks, across architectures and weight initializations, rely on similar features for classification. Previous literature has proposed the universality hypothesis, which posits that neural networks learn essentially the same representations when trained on the same data, regardless of exact architecture or training algorithm [Olah et al., 2020]. However, we know that different architectures and random initializations often make different predictions [Lakshminarayanan et al., 2017]. This paper reexamines the similarity question from a novel viewpoint by considering the effect of an adversarial robustness constraint during training.
12
+
13
+ ![0196399c-1c92-7acf-9402-162c574d4b77_0_906_734_687_590_0.jpg](images/0196399c-1c92-7acf-9402-162c574d4b77_0_906_734_687_590_0.jpg)
14
+
15
+ Figure 1: Representation-layer similarity of neural networks increases with robustness. Our proposed metric, based on image inversions, is averaged across every pair of architectures. The representations used by robust neural networks are extremely similar across architectures and random initializations, and show high similarity with non-robust networks.
16
+
17
+ Robust training is a well-established procedure for decreasing the sensitivity of a network's outputs to small changes in inputs [Madry et al., 2018]. It is well known that increasing the robustness of neural networks against adversarial examples comes with a cost to accuracy [Tsipras et al., 2019]. However, little attention has been paid to the effect that robust training has on agreement between models. We empirically show, through multiple methods of similarity analysis, that the representations, and consequently the functions, learned by networks of different architectures become significantly more similar as robustness increases. This finding indicates that robustness serves as a strong prior on the functions that can be learned, and in fact may be a strong enough constraint to matter more than the specific network architecture. While surprising, this hypothesis is consistent with recent theoretical results showing that modern networks are actually under-parameterized to represent smooth functions in high dimensions [Bubeck and Sellke, 2021].
18
+
19
+ Many methods have been proposed to measure similarity between neural networks, including centered kernel alignment (CKA) [Kornblith et al., 2019], Canonical Correlation Analysis (CCA) [Hardoon et al., 2004], singular vector canonical correlation analysis (SVCCA) [Raghu et al., 2017], subspace match [Wang et al., 2018] and more. In this paper, we show that existing methods for evaluating representation similarity tend to over-estimate similarity due to feature correlations. We develop a novel method for measuring network representation similarity that deconfounds the effect of correlated features by constructing model-specific datasets where each data point has been transformed to contain only the features used by the model. Using this metric, we present a comprehensive study examining the similarity between neural networks as a function of the robustness level used in adversarial training.
20
+
21
+ Figure 1 summarizes our results using the novel similarity metric we propose. The similarity between networks with different architectures is extremely high amongst robust networks, and robust networks indeed show high similarity with non-robust networks, indicating that there is significant entanglement between robust and non-robust representations. Overall, we find two novel and surprising results:
22
+
23
+ 1. Similarity between networks trained with empirical risk minimization is limited, and this similarity is overestimated by existing correlation-based similarity metrics due to feature correlations present in the dataset itself. We present experimental evidence that non-robust neural networks rely on features that are highly correlated in the data yet measure distinct patterns.
24
+
25
+ 2. Adversarial robustness is a strong constraint on the function that is learned by a network, regardless of architecture or random initialization. We find that similarity between robust networks is extremely high, and that robust networks also show asymmetric similarity with non-robust neural networks.
26
+
27
+ ## 2 RELATED WORK
28
+
29
+ Cui et al. [2022] investigate the confounding effect of input data on representational similarity across deep neural networks. They highlight that inter-example input similarity can cause representation similarity to be spuriously high even between networks with Gaussian noise added to their parameters. They propose to regress out the input similarity structure from the representation similarity structure and find that doing so corrects for the failure of CKA to distinguish between random neural networks, among other benefits. Ding et al. [2021] further investigate issues surrounding similarity indices, finding that CKA and CCA both fail to satisfy at least one of their proposed criteria expected of similarity metrics. We identify an additional limitation of these similarity metrics by showing that correlated features in the input data can likewise cause the similarity between neural networks to be overestimated.
30
+
31
+ Bai et al. [2021] propose a method to study the representations learned by neural networks by removing all non-linear components of a network and integrating all linear components into linear subnetworks $W$ . By analyzing the weight vectors of $W$ , the authors find that adversarially-trained networks cluster along class hierarchies while standard networks do not. Likewise, Salman et al. [2020] find that the representations learned by robust neural networks provide a better starting point for transfer learning than the representations learned by non-robust networks. These findings may indicate that adversarially-trained networks extract more semantic and generalizable representations than standard networks.
32
+
33
+ Springer et al. [2021b] investigate the ability of adversarially trained neural networks to generate targeted adversarial examples. They find that classifiers that have been adversarially trained, even those only robust to small-magnitude perturbations, are much more effective than standard classifiers at generating targeted adversarial examples. Based on their findings, they argue that the representations used by slightly robust neural networks are shared widely across non-robust networks. We present further evidence for this hypothesis by evaluating the representational similarity of robust and non-robust neural networks.
34
+
35
+ ## 3 METHODS
36
+
37
+ We consider a labelled classification dataset $\mathcal{D} = \left( {X, y}\right)$ of data points and ground-truth labels. Given two neural networks ${f}_{1}$ and ${f}_{2}$ , each composed of layers
38
+
39
+ $$
40
+ {f}_{1} = {f}_{1}^{\left( {L}_{1}\right) } \circ {f}_{1}^{\left( {L}_{1} - 1\right) } \circ \ldots {f}_{1}^{\left( 1\right) }
41
+ $$
42
+
43
+ $$
44
+ {f}_{2} = {f}_{2}^{\left( {L}_{2}\right) } \circ {f}_{2}^{\left( {L}_{2} - 1\right) } \circ \ldots {f}_{2}^{\left( 1\right) }
45
+ $$
46
+
47
+ we are interested in the activations at the representation layers (i.e. the second-to-last, or penultimate, layer) of ${f}_{1}$ and ${f}_{2}$ , denoted $A$ and $B$
48
+
49
+ $$
50
+ {g}_{1} = {f}_{1}^{\left( {L}_{1} - 1\right) } \circ \ldots \circ {f}_{1}^{\left( 1\right) }\;A = {g}_{1}\left( X\right)
51
+ $$
52
+
53
+ $$
54
+ {g}_{2} = {f}_{2}^{\left( {L}_{2} - 1\right) } \circ \ldots \circ {f}_{2}^{\left( 1\right) }\;B = {g}_{2}\left( X\right)
55
+ $$
56
+
57
+ ### 3.1 CENTERED KERNEL ALIGNMENT
58
+
59
+ We use centered kernel alignment to compare representations between neural networks. Given two mean-centered matrices of activations $A \in {\mathbb{R}}^{n \times {p}_{1}}$ and $B \in {\mathbb{R}}^{n \times {p}_{2}}$ of ${p}_{1}$ and ${p}_{2}$ neurons on a set of $n$ examples, CKA computes a value in the range $\left\lbrack {0,1}\right\rbrack$ , with values closer to 1 indicating higher similarity. Kornblith et al. [2019] show that CKA has a number of desirable properties, including the ability to calculate similarity between layers with differing numbers of neurons, invariance to isotropic scaling, and the ability to identify correspondences between the layers of identical architectures trained from different initializations, something many widely used metrics lack.
60
+
61
+ Since Kornblith et al. [2019] show that linear and radial basis function kernels compute similar similarity indices, we use a linear kernel for simplicity. The linear CKA between $A$ and $B$ is given by:
62
+
63
+ $$
64
+ \operatorname{CKA}\left( {A, B}\right) = \frac{{\begin{Vmatrix}{B}^{T}A\end{Vmatrix}}_{F}^{2}}{{\begin{Vmatrix}{A}^{T}A\end{Vmatrix}}_{F}{\begin{Vmatrix}{B}^{T}B\end{Vmatrix}}_{F}} \tag{1}
65
+ $$
66
+
67
+ $$
68
+ = \frac{{\begin{Vmatrix}\operatorname{cov}\left( {A}^{T},{B}^{T}\right) \end{Vmatrix}}_{F}^{2}}{{\begin{Vmatrix}\operatorname{cov}\left( {A}^{T},{A}^{T}\right) \end{Vmatrix}}_{F}{\begin{Vmatrix}\operatorname{cov}\left( {B}^{T},{B}^{T}\right) \end{Vmatrix}}_{F}}
69
+ $$
70
+
71
+ Previous studies have evaluated CKA between each pair of layers in the networks to be compared. In this study, we restrict our attention to the penultimate layer of each network, since this layer effectively captures the summary statistics used by each network for classification.
72
+
73
+ In this work, we empirically demonstrate a shortcoming of CKA as traditionally applied. Suppose that two features are perfectly correlated in a dataset: ${X}_{\cdot , i} = c{X}_{\cdot , j}$ for some $\left| c\right| > 0$ and $i \neq j$ . Also suppose that each network computes its representations ${A}_{\cdot , l}$ and ${B}_{\cdot , m}$ using only one of these correlated features each, $i$ for ${f}_{1}$ and $j$ for ${f}_{2}$ respectively. Then, CKA evaluated on test data will report that representations $l$ and $m$ are perfectly correlated, even though the two networks do not use the same feature. In large, high-dimensional datasets with many correlated features, such as natural images, this shortcoming can lead to a dramatic overestimation of network similarity.
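The following sketch implements linear CKA as in Equation (1) and reproduces the failure mode just described: two toy "networks" that each read a different, but perfectly correlated, input feature receive a CKA of 1. All variable and function names here are ours, for illustration.

```python
import numpy as np

def linear_cka(A, B):
    """Linear CKA between activation matrices (Equation (1))."""
    A = A - A.mean(axis=0)   # mean-center each neuron
    B = B - B.mean(axis=0)
    num = np.linalg.norm(B.T @ A, "fro") ** 2
    den = np.linalg.norm(A.T @ A, "fro") * np.linalg.norm(B.T @ B, "fro")
    return num / den

# Two toy "networks" that each use a different input feature: when the
# two features are perfectly correlated in the data, CKA is 1 even though
# the networks share no feature usage.
rng = np.random.default_rng(0)
x_i = rng.normal(size=(500, 1))
X = np.hstack([x_i, 2.0 * x_i])   # X[:, 1] = c * X[:, 0] with c = 2
A = X[:, [0]]                     # f1's representation uses feature i only
B = X[:, [1]]                     # f2's representation uses feature j only
```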
74
+
75
+ ### 3.2 REPRESENTATION-LAYER INVERSION
76
+
77
+ We use an inversion technique from the adversarial robustness literature to eliminate the effect of correlated features in test data. Ilyas et al. [2019] propose a technique known as representation inversion to investigate the features used by robust and non-robust classifiers. Representation inversion constructs a model-specific inverted dataset ${\mathcal{I}}_{f}\left( \mathcal{D}\right)$ for a given classifier $f$ where all features not used by $f$ are randomized.
78
+
79
+ Given a labeled dataset $\mathcal{D}$ , we choose a pair of inputs that have different labels. The first of each pair will be the seed image $s$ and the second the target image $t$ . Using the seed image as a starting point, we perform gradient descent to find an image that induces the same activations at the representation layer as the target image, under a specific neural network $f$ . We construct this image through gradient descent in input space (with the constraint that the resulting image has pixel values in the range $\left\lbrack {0,1}\right\rbrack$ ) by optimizing the following objective:
80
+
81
+ $$
82
+ \widetilde{s} = \mathop{\arg \min }\limits_{s}\parallel g\left( s\right) - g\left( t\right) {\parallel }_{2} \tag{2}
83
+ $$
84
+
85
+ We perform this process on a subset of 10,000 images drawn from the ImageNet validation set [Deng et al., 2009], producing a unique inverted dataset ${\mathcal{I}}_{f}\left( \mathcal{D}\right)$ for each model. There is no constraint to limit the difference between the seed image $s$ and the inverse image $\widetilde{s}$ , but extensive experiments show that gradient descent consistently finds inverse images close to the seeds with representations very closely matching the target images, regardless of the seed/target pair chosen. The difference between $s$ and $\widetilde{s}$ tends to increase as the robustness of the network $f$ increases.
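A toy sketch of the inversion loop in Equation (2), with an explicit linear map standing in for the representation layer $g$ and its gradient (in practice the gradient comes from autodiff, e.g. PyTorch; all names here are ours):

```python
import numpy as np

def invert_representation(g_forward, g_grad, seed_img, target_img,
                          lr=0.1, steps=500):
    """Gradient descent in input space from the seed image until the
    representation matches the target's, clipping pixels to [0, 1]."""
    s = seed_img.astype(float).copy()
    gt = g_forward(target_img)
    for _ in range(steps):
        s -= lr * g_grad(s, gt)
        np.clip(s, 0.0, 1.0, out=s)   # keep a valid image
    return s

# Toy stand-in for a representation layer g: a fixed linear map.
W = np.array([[1.0, 0.5, 0.0, 0.0],
              [0.0, 1.0, 0.5, 0.0],
              [0.0, 0.0, 1.0, 0.5]])
g = lambda x: W @ x
g_grad = lambda x, gt: 2.0 * W.T @ (W @ x - gt)   # grad of ||g(x) - gt||^2
```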
86
+
87
+ By sampling pairs of seed and target images that have distinct labels, we eliminate features correlated with the target class that are not used by the model for classification of the target class. For example, suppose we have a seed and target pair of "microwave" and "cat" respectively, and a given neural network uses the feature "cat ear" to classify cat images but not "cat fur". By starting from an image of a microwave, we can produce an image that induces the model's representation of "cat ear" without containing the correlated feature "cat fur". If done successfully, the similarity between a neural network utilizing "cat fur" and a neural network utilizing "cat ear" will not be inflated by the co-occurrence of these features.
88
+
89
+ ### 3.3 REPRESENTATION STITCHING
90
+
91
+ Csiszárik et al. [2021] approaches neural network similarity from the functional perspective, asking the question "can network ${f}_{1}$ achieve its task using the representations of network ${f}_{2}$ ?". Their simple and elegant method stitches together the activations $A$ from a body network ${f}_{1}$ with the last layer of a head network ${f}_{2}^{\left( {L}_{2}\right) }$ by fitting an affine transformation $B \approx {AW} + b$ . This procedure creates a stitched network:
92
+
93
+ $$
94
+ {f}_{2 \circ 1} = {f}_{2}^{\left( {L}_{2}\right) } \circ \left( {{g}_{1}W + b}\right)
95
+ $$
96
+
97
+ Stitching is performed via linear regression using only the activations on training data, and we perform no task-specific fine-tuning of the stitching parameters $W, b$ . If there exists an identifiable linear transformation between the networks at the penultimate layer, then the stitched network will achieve high performance. Importantly, since the last layer of each network is a dense layer followed by a softmax, when the representation stitching procedure causes the stitched model to agree with the head model, it shows that the representations computed by the body network are compatible and useful with respect to the head model.
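The stitching fit reduces to ordinary least squares on training activations; a sketch with synthetic activations (names are ours, and the affine relation is planted by construction so the fit is exact):

```python
import numpy as np

def fit_stitching(A, B):
    """Fit an affine map B ~ A W + b by ordinary least squares. The
    stitched network then applies the head's final layer to g1(x) W + b."""
    A1 = np.hstack([A, np.ones((A.shape[0], 1))])   # append a bias column
    Wb, *_ = np.linalg.lstsq(A1, B, rcond=None)
    return Wb[:-1], Wb[-1]                          # W, b

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 16))        # body-network activations
W_true = rng.normal(size=(16, 8))
b_true = rng.normal(size=8)
B = A @ W_true + b_true               # head-network activations
W, b = fit_stitching(A, B)
```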
98
+
99
+ <table><tr><td>$\varepsilon$</td><td>0.0</td><td>0.01</td><td>0.03</td><td>0.05</td><td>0.1</td><td>0.25</td><td>0.5</td><td>1.0</td><td>3.0</td><td>5.0</td></tr><tr><td>ResNet18</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td></tr><tr><td>ResNet50</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td></tr><tr><td>WRN50-2</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td></tr><tr><td>WRN50-4</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td></tr><tr><td>ResNeXt50</td><td>✓</td><td>✘</td><td>✘</td><td>✘</td><td>✘</td><td>✘</td><td>✘</td><td>✘</td><td>✓</td><td>✘</td></tr><tr><td>VGG16-bn</td><td>✓</td><td>✘</td><td>✘</td><td>✘</td><td>✘</td><td>✘</td><td>✘</td><td>✘</td><td>✓</td><td>✘</td></tr><tr><td>DenseNet</td><td>✓</td><td>✘</td><td>✘</td><td>✘</td><td>✘</td><td>✘</td><td>✘</td><td>✘</td><td>✓</td><td>✘</td></tr><tr><td>ShuffleNet</td><td>✓</td><td>✘</td><td>✘</td><td>✘</td><td>✘</td><td>✘</td><td>✘</td><td>✘</td><td>✓</td><td>✘</td></tr><tr><td>MobileNet</td><td>✓</td><td>✘</td><td>✘</td><td>✘</td><td>✘</td><td>✘</td><td>✘</td><td>✘</td><td>✓</td><td>✘</td></tr></table>
100
+
101
+ Table 1: Pretrained ImageNet models used in experiments with available ${\ell }_{2}$ robustnesses, provided by Salman et al. [2020] (github.com/microsoft/robust-models-transfer). WRN50-$N$ is a WideResNet50-$N$ [Zagoruyko and Komodakis, 2016].
102
+
103
+ ### 3.4 ADVERSARIAL TRAINING
104
+
105
+ Adversarial training has been shown to be effective for constructing neural networks that are robust to adversarial examples [Madry et al., 2018]. In addition, adversarial training yields neural networks with a number of desirable qualities, including interpretable gradients [Tsipras et al., 2019], high-quality representations that are useful for transfer learning [Salman et al., 2020], and the ability to generate transferable adversarial examples [Springer et al., 2021c]. Despite the extensive research into this training paradigm, to our knowledge no comprehensive study has explored the relationship between network similarity and robustness.
106
+
107
+ During adversarial training, the empirical risk minimization regime is augmented so it produces a model that is robust to adversarial perturbations within a bounded region $S\left( x\right)$ around each input point $x$ . We use the common choice of an $\varepsilon \in \mathbb{R}$ sized ${\ell }_{2}$ -ball and refer to $\varepsilon$ as the robustness level of the model. This set of perturbations, $S$ , is incorporated into the risk, reformulating it as the following min-max optimization:
108
+
109
+ $$
110
+ \mathop{\min }\limits_{\theta }{\mathbb{E}}_{\left( {x, y}\right) \sim \mathcal{D}}\left\lbrack {\mathop{\max }\limits_{{\delta \in S\left( x\right) }}\mathcal{L}\left( {x + \delta , y;\theta }\right) }\right\rbrack \tag{3}
111
+ $$
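In practice, the inner maximization of Equation 3 is approximated with projected gradient ascent inside the $\varepsilon$-sized $\ell_2$-ball [Madry et al., 2018]. The following is a minimal numpy sketch for a linear classifier with logistic loss, where the analytic gradient stands in for backpropagation; the function and parameter names are illustrative, not from the paper.

```python
import numpy as np

def l2_pgd(x, y, w, eps, steps=10, lr=0.5):
    """Approximate max over ||delta||_2 <= eps of the logistic loss
    L = log(1 + exp(-y * w . (x + delta))) for a linear classifier w."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        margin = y * np.dot(w, x + delta)
        grad = -y * w / (1.0 + np.exp(margin))     # dL/d(delta)
        step = grad / (np.linalg.norm(grad) + 1e-12)
        delta = delta + lr * eps * step            # normalized ascent step
        norm = np.linalg.norm(delta)
        if norm > eps:                             # project back onto S(x)
            delta = delta * (eps / norm)
    return delta
```

For a linear model the worst-case perturbation is known in closed form, $-\varepsilon y w / \lVert w\rVert$, which the iteration recovers; adversarial training then takes the outer minimization step on the loss at $x + \delta$ instead of $x$.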
112
+
113
+ Due to computational requirements, we use the pretrained ${\ell }_{2}$ -robust ImageNet models released by Salman et al. [2020]. The specific architectures and robustnesses $\left( \varepsilon \right)$ studied are outlined in Table 1.
114
+
115
+ ## 4 EXPERIMENTS
116
+
117
+ We extensively evaluate our proposed method for estimating representation similarity, and show that it leads to conclusions consistent with other accepted methods for network similarity evaluation. Section 4.1 discusses the overestimation of neural network similarity and proposes a novel method for similarity estimation based on image inversions. Sections 4.2 to 4.4 discuss our findings on the convergence of representation similarity across robust neural networks, demonstrating that disparate methods for similarity estimation lead to the consistent conclusion that similarity increases significantly across architectures and random initializations as a function of robustness.
118
+
119
+ ### 4.1 OVERESTIMATION OF NEURAL NETWORK SIMILARITY
120
+
121
+ Many previous studies have examined the similarity between neural networks with different architectures or weight initializations [Kornblith et al., 2019, Nguyen et al., 2021, Hermann and Lampinen, 2020]. Here, we argue that similarity between networks may be substantially overestimated by these studies. While these studies suggest that neural network activations are often highly correlated across different networks, none of them have taken into account the substantial confounder that neuron responses may appear to be correlated despite responding to distinct patterns, due to frequent co-occurrence of the patterns in the dataset.
122
+
123
+ We first present experimental results demonstrating that this confounder is present for standard, non-robust neural networks, shown in Figure 2. The left heatmap presents the CKA similarity at the representation layer between all non-robust architectures on a subset of the ImageNet validation set. As is commonly reported, the similarity between all architectures is relatively high, with an average of 0.67 between the penultimate layers of distinct architectures. In the right heatmap, at each row-column entry, we present the CKA similarity between the row and column architecture using the inverted dataset ${\mathcal{I}}_{f}\left( \mathcal{D}\right)$ generated by the row’s architecture. We perform these image inversions using 2,000 iterations of stochastic gradient descent with a step size of $1/8$. In contrast to the high similarities found in the left heatmap, all similarities are significantly lower, with an average between distinct architectures of 0.09.
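The inversion itself is plain gradient descent on a representation-matching loss. As a minimal sketch with an analytic gradient, here is the same loop for a linear feature map $g(x) = Mx$, a toy stand-in for the network's deep feature extractor; the step size mirrors the $1/8$ used above, and all names are illustrative.

```python
import numpy as np

def invert(M, target_rep, x0, steps=2000, lr=0.125):
    """Gradient descent on 0.5 * ||M x - target_rep||^2, the linear analogue
    of matching a network's representation during image inversion."""
    x = x0.copy()
    for _ in range(steps):
        residual = M @ x - target_rep
        x = x - lr * (M.T @ residual)   # exact gradient for the linear map
    return x

# A well-conditioned toy feature map and a "natural image" to invert.
M = 0.7 * np.vstack([np.eye(4), np.eye(4)])
x_nat = np.array([1.0, -2.0, 0.5, 3.0])
x_inv = invert(M, M @ x_nat, np.zeros(4))
```

For a deep, non-linear $g$ the gradient comes from backpropagation, and the recovered image contains only the features the model actually uses rather than reproducing the original image exactly.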
124
+
125
+ ![0196399c-1c92-7acf-9402-162c574d4b77_4_218_170_1312_645_0.jpg](images/0196399c-1c92-7acf-9402-162c574d4b77_4_218_170_1312_645_0.jpg)
126
+
127
+ Figure 2: Representation layer similarity of non-robust neural networks on the test set $\mathcal{D}$ and the inverted datasets ${\mathcal{I}}_{f}\left( \mathcal{D}\right)$ . High similarity is observed on the natural image dataset and low similarity on the datasets generated by non-robust models. These results indicate that similarity is systematically overestimated when based on responses to natural images but can be more reliably estimated using responses to inverted images.
128
+
129
+ ![0196399c-1c92-7acf-9402-162c574d4b77_4_155_1027_699_615_0.jpg](images/0196399c-1c92-7acf-9402-162c574d4b77_4_155_1027_699_615_0.jpg)
130
+
131
+ Figure 3: Representation layer similarity of robust neural networks $\left( {\varepsilon = 3}\right)$ on the inverted datasets ${\mathcal{I}}_{f}\left( \mathcal{D}\right)$ . Similarity is much higher than observed in Figure 2.
132
+
133
+ The trends in Figure 2 show that current similarity metrics significantly overestimate network similarity due to correlations between distinct features in the data. When images containing only the relevant features for one of the models are used in the similarity calculation, we see that models are far more dissimilar than standard metrics indicate. We therefore propose evaluating CKA on inverted image datasets in order to best measure the manner in which each network computes sufficient and necessary features for classification.
134
+
135
+ ### 4.2 ADVERSARIAL TRAINING INCREASES REPRESENTATION SIMILARITY
136
+
137
+ We apply our new method for network similarity estimation to investigate how robustness affects the similarity of neural networks. In Figure 3 we calculate similarity between networks in a similar fashion to the rightmost plot in Figure 2; however, this time all architectures being compared were adversarially trained with an ${\ell }_{2}$ robustness of $\varepsilon = 3$ . In this plot we find that robust networks are significantly more similar to each other on the inverted datasets ${\mathcal{I}}_{f}\left( \mathcal{D}\right)$ than non-robust networks are (average similarity of 0.80, compared to 0.09 for standard models as seen in Figure 2); this may indicate that robust neural networks tend towards a similar set of representations, regardless of architecture.
138
+
139
+ In Figure 5 we plot the similarity between neural networks across architecture and robustness on the inverted datasets ${\mathcal{I}}_{f}\left( \mathcal{D}\right)$, restricted to a comparison of three architectures for readability. In each heatmap we present the similarity between the outer-row and outer-column architecture, varied across $\varepsilon$ for each. Each inner-row and inner-column tick corresponds to the robustness of their respective outer-row and outer-column architecture, with the row architecture being the source of the inverted dataset ${\mathcal{I}}_{f}\left( \mathcal{D}\right)$. In these plots we see a strong and consistent trend: when similarity is calculated between architectures using an inverted dataset produced by a non-robust or slightly-robust model (low-value column ticks), low similarity is observed across all column-architecture robustnesses. However, when an inverted dataset produced by a robust architecture is used, we see notably higher similarity across all column-architecture robustnesses. Average comparisons between all architectures (Figure 1) demonstrate that this trend holds for all architectures evaluated.
140
+
141
+ Our results show a strong asymmetric relationship between robust and non-robust models. Inversions produced for a robust model show high similarity with all other models, but inversions produced for a non-robust model show low similarity with all other models. This finding supports the idea that features used by non-robust neural networks are highly entangled with the features used by robust neural networks [Springer et al., 2021a]. As seen in Figure 5, the features present in the inverted datasets ${\mathcal{I}}_{f}\left( \mathcal{D}\right)$ generated by robust models cause the representations used by non-robust classifiers to activate, yet the opposite does not hold.
142
+
143
+ ### 4.3 ADVERSARIAL TRAINING INCREASES GRADIENT SIMILARITY
144
+
145
+ We further show the increase in similarity among robust neural networks with an intriguing result: as the pairwise robustness of models increases, the cosine similarity of their saliency maps [Simonyan et al., 2014] with respect to the ground truth labels increases as well. For each pair of models at identical robustness, we compute the cosine similarity between their saliency maps, and present the results in Figure 4. In a similar fashion to the results in Section 4.1 and Section 4.2, we find that the similarity between standard models is quite low; however, as the pairwise robustness between models increases, so does the gradient similarity.
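The gradient-similarity measurement can be sketched in a few lines. This toy version uses a central-difference approximation of the input gradient as a stand-in for the backpropagated saliency maps of Simonyan et al. [1994-style score gradients]; the function names and scalar score functions are illustrative, not the paper's code.

```python
import numpy as np

def saliency(score_fn, x, eps=1e-5):
    """Numerical input-gradient of a scalar class score, a stand-in for a
    backpropagated saliency map."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (score_fn(x + e) - score_fn(x - e)) / (2 * eps)
    return g

def gradient_similarity(score_f1, score_f2, x):
    """Cosine similarity between two models' saliency maps at x."""
    g1, g2 = saliency(score_f1, x), saliency(score_f2, x)
    return float(g1 @ g2 / (np.linalg.norm(g1) * np.linalg.norm(g2)))
```

Averaging this quantity over inputs, for model pairs at matched robustness levels, yields the curves in Figure 4.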
146
+
147
+ ![0196399c-1c92-7acf-9402-162c574d4b77_5_148_1589_709_465_0.jpg](images/0196399c-1c92-7acf-9402-162c574d4b77_5_148_1589_709_465_0.jpg)
148
+
149
+ Figure 4: Cosine similarity of model gradients in input space across robustness levels. Error bars indicate 1 std. deviation.
150
+
151
+ ![0196399c-1c92-7acf-9402-162c574d4b77_5_893_178_704_635_0.jpg](images/0196399c-1c92-7acf-9402-162c574d4b77_5_893_178_704_635_0.jpg)
152
+
153
+ Figure 5: Representation-layer similarity of neural networks on the natural images and the inverted datasets ${\mathcal{I}}_{f}\left( \mathcal{D}\right)$ , pairwise across different robustness levels.
154
+
155
+ ### 4.4 ADVERSARIAL TRAINING INCREASES FUNCTIONAL SIMILARITY
156
+
157
+ It has been argued that assessments of network similarity must examine functional similarity, since evaluation on known ground-truth labels gives a concrete, performance-based test [Ding et al., 2021]. Here, we apply the representation stitching methodology (Section 3.3) to "stitch" together networks of two different architectures using a single affine transformation. Within each robustness level, we evaluate every pair of models, using the first model as a body and the second model as a head. Figure 6 shows that stitching representations across architectures is effective; regardless of robustness level, when the head and body networks agree on a label, the stitched model also predicts the same label (93.9% for correct labels and 83.2% for incorrect labels). Interestingly, when the head and body networks predict different labels, we find that agreement between the stitched model and the head model increases significantly as a function of robustness (Figure 7). In other words, it appears that there is a stronger linear correspondence between representations of the body and head network, from a functional perspective, at higher robustness levels.
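The conditional agreement rates reported in Figures 6 and 7 reduce to a small computation over the three models' predicted labels. A minimal sketch, with illustrative names:

```python
import numpy as np

def conditional_agreement(head_pred, body_pred, stitched_pred):
    """Agreement of the stitched model with the head model, split by whether
    the head and body networks agree on the label."""
    head_pred, body_pred, stitched_pred = map(
        np.asarray, (head_pred, body_pred, stitched_pred))
    agree = head_pred == body_pred
    def rate(mask):
        return float(np.mean(stitched_pred[mask] == head_pred[mask])) if mask.any() else float('nan')
    return {'when_agree': rate(agree), 'when_disagree': rate(~agree)}
```

Averaging these two rates over all body/head pairs at a given robustness level gives one point on each curve.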
158
+
159
+ ## 5 DISCUSSION
160
+
161
+ We find that while existing correlation-based similarity metrics overestimate the similarity between non-robust neural networks due to co-occurrence of features in the evaluation dataset, robust neural networks exhibit substantial similarity. We find that as the adversarial robustness of a neural network increases, its similarity to other networks increases, even across differences in architecture and random initialization. This trend of similarity is also present in the gradients, where we find that the Jacobians with respect to inputs are more similar for robust networks than for their non-robust counterparts. These results suggest a modified universality hypothesis: neural networks, regardless of exact training condition (i.e., architecture, random initialization, learning parameters), will learn similar representations under mild constraints, such as adversarial robustness. We find empirically that robust neural networks satisfy this hypothesis. Furthermore, we find that the representations of non-robust neural networks overlap substantially with the representations of robust neural networks despite less overlap with the representations of other non-robust neural networks. This suggests that non-robust representations can be thought of as "components" of robust representations, much as a feature that represents the ear of a cat can be thought of as a component of the feature that represents an entire cat.
162
+
163
+ ![0196399c-1c92-7acf-9402-162c574d4b77_6_159_184_688_551_0.jpg](images/0196399c-1c92-7acf-9402-162c574d4b77_6_159_184_688_551_0.jpg)
164
+
165
+ Figure 6: Average agreement between stitched models and their corresponding head models on data instances where the body and head networks agree. Robustness has no effect on agreement. Confidence bands indicate 1 std. error.
166
+
167
+ Our results provide an important step towards understanding the representations learned by neural networks. Our framework explains the previously observed exceptional transferability of adversarial examples constructed using robust neural networks by demonstrating that non-robust and robust representations exhibit substantial overlap [Springer et al., 2021b]. In addition, the observation that a single robust neural network shows some degree of similarity to all non-robust neural networks may explain why robust neural networks are often better at learning representations that transfer to new tasks [Salman et al., 2020].
168
+
169
+ Neural network architectures and optimization procedures have often been viewed from a Bayesian perspective as a strong prior on the functions that these networks can learn [Wilson and Izmailov, 2020, Kleinberg et al., 2018]. Our results indicate that robust training, likewise, is a very strong prior which can constrain both the representations extracted from a dataset and the functions learned, independent of architecture. Viewing adversarial robustness as an inductive bias may lead to understanding the limitations of our current models for adversarial robustness, and may help us develop better notions of robustness and improve the accuracy of robust models.
170
+
171
+ ![0196399c-1c92-7acf-9402-162c574d4b77_6_902_186_684_551_0.jpg](images/0196399c-1c92-7acf-9402-162c574d4b77_6_902_186_684_551_0.jpg)
172
+
173
+ Figure 7: Average agreement between stitched models and their corresponding head models on data instances where the body and head networks disagree. Agreement increases as robustness increases. Confidence bands indicate 1 std. error.
174
+
175
+ ## 6 CONCLUSION
176
+
177
+ Increased similarity between robust neural networks could mean that empirical analysis of a single robust neural network will reveal insight into the representations learned by every other robust neural network, which may lead us to understand the nature of adversarial robustness itself. If neural networks learn a solution that is largely dependent on the data itself rather than the learning algorithm, random initialization, or architecture, then we may be able to derive insight into the innate structure of data by using the representations learned by neural networks.
178
+
179
+ While we find that non-robust neural networks do not strongly support the universality hypothesis, there is a convergence between both the representations used and the functions encoded by robust neural networks of different architectures. If true, even in the limited case of robust models, the universality hypothesis has substantial implications for the field of machine learning, and more broadly artificial intelligence and neuroscience. First, identifying and understanding the representations used by any individual neural network may allow us to understand the representations learned by every neural network that has been trained on the same dataset. This can have applications in mitigating transferable adversarial examples [Moosavi-Dezfooli et al., 2017] as well as building representations that are more useful for transfer learning [Salman et al., 2020]. Second, if architecture matters less given a robustness constraint, robust representations may give us insight into patterns learned by biological brains [Conwell et al., 2021a, b, Zhuang et al., 2021, Yamins et al., 2014, Güçlü and van Gerven, 2015, Eickenberg et al., 2017].
180
+
181
+ ## References
182
+
183
+ Yang Bai, Xin Yan, Yong Jiang, Shutao Xia, and Yisen Wang. Clustering effect of (linearized) adversarial robust models. ArXiv, abs/2111.12922, 2021.
184
+
185
+ Sébastien Bubeck and Mark Sellke. A universal law of robustness via isoperimetry. CoRR, abs/2105.12806, 2021. URL https://arxiv.org/abs/2105.12806.
186
+
187
+ Colin Conwell, David Mayo, Michael A. Buice, Boris Katz, George A. Alvarez, and Andrei Barbu. Neural regression, representational similarity, model zoology & neural taskonomy at scale in rodent visual cortex. bioRxiv, 2021a. doi: 10.1101/2021.06.18.448431. URL https://www.biorxiv.org/content/early/2021/06/18/2021.06.18.448431.
188
+
189
+ Colin Conwell, Jacob S. Prince, George A. Alvarez, and Talia Konkle. What can 5.17 billion regression fits tell us about artificial models of the human visual system? In SVRHM 2021 Workshop @ NeurIPS, 2021b. URL https://openreview.net/forum?id=i_xiyGq6FNT.
190
+
191
+ Adrián Csiszárik, Péter Korösi-Szabó, Ákos K. Matszangosz, Gergely Papp, and Dániel Varga. Similarity and matching of neural network representations. ArXiv, abs/2110.14633, 2021.
192
+
193
+ Tianyu Cui, Yogesh Kumar, Pekka Marttinen, and Samuel Kaski. Deconfounded representation similarity for comparison of neural networks. ArXiv, abs/2202.00095, 2022.
194
+
195
+ Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, K. Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248-255, 2009.
196
+
197
+ Frances Ding, Jean-Stanislas Denain, and Jacob Steinhardt. Grounding representation similarity with statistical testing. ArXiv, abs/2108.01661, 2021.
198
+
199
+ Michael Eickenberg, Alexandre Gramfort, Gaël Varoquaux, and Bertrand Thirion. Seeing it all: Convolutional network layers map the function of the human visual system. NeuroImage, 152:184-194, 2017.
200
+
201
+ Umut Güçlü and Marcel AJ van Gerven. Deep neural networks reveal a gradient in the complexity of neural representations across the ventral stream. Journal of Neuroscience, 35(27):10005-10014, 2015.
202
+
203
+ David Roi Hardoon, Sándor Szedmák, and John Shawe-Taylor. Canonical correlation analysis: An overview with application to learning methods. Neural Computation, 16:2639-2664, 2004.
204
+
205
+ Katherine L. Hermann and Andrew Kyle Lampinen. What shapes feature representations? exploring datasets, architectures, and training. ArXiv, abs/2006.12433, 2020.
206
+
207
+ Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry. Adversarial examples are not bugs, they are features, 2019.
208
+
209
+ Robert Kleinberg, Yuanzhi Li, and Yang Yuan. An alternative view: When does SGD escape local minima? In Jennifer G. Dy and Andreas Krause, editors, Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, volume 80 of Proceedings of Machine Learning Research, pages 2703-2712. PMLR, 2018. URL http://proceedings.mlr.press/v80/kleinberg18a.html.
210
+
211
+ Simon Kornblith, Mohammad Norouzi, Honglak Lee, and Geoffrey E. Hinton. Similarity of neural network representations revisited. ArXiv, abs/1905.00414, 2019.
212
+
213
+ Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett, editors, Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 6402-6413, 2017. URL https://proceedings.neurips.cc/paper/2017/hash/9ef2ed4b7fd2c810847ffa5fa85bce38-Abstract.html.
214
+
215
+ Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. ArXiv, abs/1706.06083, 2018.
216
+
217
+ Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. Universal adversarial perturbations. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 86-94, 2017.
218
+
219
+ Thao Nguyen, Maithra Raghu, and Simon Kornblith. Do wide and deep networks learn the same things? uncovering how neural network representations vary with width and depth. ArXiv, abs/2010.15327, 2021.
220
+
221
+ Chris Olah, Nick Cammarata, Ludwig Schubert, Gabriel Goh, Michael Petrov, and Shan Carter. Zoom in: An introduction to circuits. Distill, 5(3):e00024-001, 2020.
222
+
223
+ Maithra Raghu, Justin Gilmer, Jason Yosinski, and Jascha Sohl-Dickstein. Svcca: Singular vector canonical correlation analysis for deep learning dynamics and interpretability. In NIPS, 2017.
224
+
225
+ Hadi Salman, Andrew Ilyas, Logan Engstrom, Ashish Kapoor, and Aleksander Madry. Do adversarially robust imagenet models transfer better? ArXiv, abs/2007.08489, 2020.
226
+
227
+ Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. CoRR, abs/1312.6034, 2014.
228
+
229
+ Jacob M. Springer, Melanie Mitchell, and Garrett T. Kenyon. Adversarial perturbations are not so weird: Entanglement of robust and non-robust features in neural network classifiers, 2021a.
230
+
231
+ Jacob M. Springer, Melanie Mitchell, and Garrett T. Kenyon. A little robustness goes a long way: Leveraging universal features for targeted transfer attacks, 2021b.
232
+
233
+ Jacob M. Springer, Melanie Mitchell, and Garrett T. Kenyon. Uncovering universal features: How adversarial training improves adversarial transferability. In ICML 2021 Workshop on Adversarial Machine Learning, 2021c. URL https://openreview.net/forum?id=tzModtpOW71.
234
+
235
+ Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry. Robustness may be at odds with accuracy, 2019.
236
+
237
+ Liwei Wang, Lunjia Hu, Jia-Yuan Gu, Yue Kris Wu, Zhiqiang Hu, Kun He, and John E. Hopcroft. Towards understanding learning representations: To what extent do different neural networks learn the same representation. In NeurIPS, 2018.
238
+
239
+ Andrew Gordon Wilson and Pavel Izmailov. Bayesian deep learning and a probabilistic perspective of generalization. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/322f62469c5e3c7dc3e58f5a4d1ea399-Abstract.html.
+
+ Daniel LK Yamins, Ha Hong, Charles F Cadieu, Ethan A Solomon, Darren Seibert, and James J DiCarlo.
240
+
241
+ Performance-optimized hierarchical models predict neural responses in higher visual cortex. Proceedings of the National Academy of Sciences, 111(23):8619-8624, 2014.
242
+
243
+ Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. ArXiv, abs/1605.07146, 2016.
244
+
245
+ Chengxu Zhuang, Siming Yan, Aran Nayebi, Martin Schrimpf, Michael C. Frank, James J. DiCarlo, and Daniel L. K. Yamins. Unsupervised neural network models of the ventral visual stream. Proceedings of the National Academy of Sciences, 118(3), 2021. ISSN 0027-8424. doi: 10.1073/pnas.2014196118. URL https://www.pnas.org/content/118/3/e2014196118.
UAI/UAI 2022/UAI 2022 Conference/BGfLS_8j5eq/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,210 @@
1
+ If You've Trained One You've Trained Them All:
2
+
3
+ Inter-Architecture Similarity Increases With Robustness
4
+
5
+ § ABSTRACT
6
+
7
+ Previous work has shown that commonly-used metrics for comparing representations between neural networks overestimate similarity due to correlations between data points. We show that intra-example feature correlations also cause significant overestimation of network similarity and propose an image inversion technique to analyze only the features used by a network. With this technique, we find that similarity across architectures is significantly lower than commonly understood, but we surprisingly find that similarity between models with different architectures increases as the adversarial robustness of the models increases. Our findings indicate that robust networks tend towards a universal set of representations, regardless of architecture, and that the robust training criterion is a strong prior constraint on the functions that can be learned by diverse modern architectures. We also find that the representations learned by a robust network of any architecture have an asymmetric overlap with non-robust networks of many architectures, indicating that the representations used by robust neural networks are highly entangled with the representations used by non-robust networks.
8
+
9
+ § 1 INTRODUCTION
10
+
11
+ There is evidence that neural networks, across architectures and weight initializations, rely on similar features for classification. Previous literature has proposed the universality hypothesis, which posits that neural networks learn essentially the same representations when trained on the same data, regardless of exact architecture or training algorithm [Olah et al., 2020]. However, we know that different architectures and random initializations often make different predictions [Lakshminarayanan et al., 2017]. This paper reexamines the similarity question from a novel viewpoint by considering the effect of an adversarial robustness constraint during training.
12
+
13
14
+
15
+ Figure 1: Representation-layer similarity of neural networks increases with robustness. Our proposed metric, based on image inversions, is averaged across every pair of architectures. The representations used by robust neural networks are extremely similar across architectures and random initializations, and show high similarity with non-robust networks.
16
+
17
+ Robust training is a well-established procedure to decrease the sensitivity of a network's outputs to small changes in inputs [Madry et al., 2018]. It is well-known that increasing the robustness of neural networks against adversarial examples comes with a cost to accuracy [Tsipras et al., 2019]. However, little attention has been paid to the effect that robust training has on agreement between models. We empirically show through multiple methods of similarity analysis that the representations, and consequently the functions, learned by networks of different architectures become significantly more similar as robustness increases. This finding indicates that robustness serves as a strong prior on the functions that can be learned, and in fact may be a strong enough constraint to matter more than the specific network architecture. While surprising, this hypothesis is consistent with recent theoretical results showing that modern networks are actually under-parameterized to represent smooth functions in high dimensions [Bubeck and Sellke, 2021].
18
+
19
+ Many methods have been proposed to measure similarity between neural networks, including centered kernel alignment (CKA) [Kornblith et al., 2019], canonical correlation analysis (CCA) [Hardoon et al., 2004], singular vector canonical correlation analysis (SVCCA) [Raghu et al., 2017], subspace match [Wang et al., 2018], and more. In this paper, we show that existing methods for evaluating representation similarity tend to overestimate similarity due to feature correlations. We develop a novel method for measuring network representation similarity that deconfounds the effect of correlated features by constructing model-specific datasets where each data point has been transformed to contain only the features used by the model. Using this metric, we present a comprehensive study examining the similarity between neural networks as a function of the robustness level used in adversarial training.
20
+
21
+ Figure 1 summarizes our results using the novel similarity metric we propose. The similarity between networks with different architectures is extremely high amongst robust networks, and robust networks indeed show high similarity with non-robust networks, indicating that there is significant entanglement between robust and non-robust representations. Overall, we find two novel and surprising results:
22
+
23
+ 1. Similarity between networks trained with empirical risk minimization is limited, and this similarity is overestimated by existing correlation-based similarity metrics due to feature correlations present in the dataset itself. We present experimental evidence that non-robust neural networks rely on features that are highly correlated in the data yet measure distinct patterns.
24
+
25
+ 2. Adversarial robustness is a strong constraint on the function that is learned by a network, regardless of architecture or random initialization. We find that similarity between robust networks is extremely high, and that robust networks also show asymmetric similarity with non-robust neural networks.
26
+
27
+ § 2 RELATED WORK
28
+
29
+ Cui et al. [2022] investigate the confounding effect of input data on representational similarity across deep neural networks. They highlight that inter-example input similarity can cause representation similarity to be spuriously high, even between networks with Gaussian noise added to their parameters. They propose to regress out the input similarity structure from the representation similarity structure and find that doing so corrects for the failure of CKA to distinguish between random neural networks, among other benefits. Ding et al. [2021] further investigate issues surrounding similarity indices, finding that CKA and CCA both fail to satisfy at least one of their proposed criteria expected of similarity metrics. We identify an additional limitation of these similarity metrics by showing that correlated features in the input data can likewise cause the similarity between neural networks to be overestimated.
30
+
31
+ Bai et al. [2021] propose a method to study the representations learned by neural networks by removing all non-linear components of a network and integrating all linear components into linear subnetworks $W$ . By analyzing the weight vectors of $W$ , the authors find that adversarially-trained networks cluster along class hierarchies while standard networks do not. Likewise, Salman et al. [2020] find that the representations learned by robust neural networks provide a better starting point for transfer learning than the representations learned by non-robust networks. These findings may indicate that adversarially-trained networks extract more semantic and generalizable representations than standard networks.
32
+
33
+ Springer et al. [2021b] investigate the ability of adversarially trained neural networks to generate targeted adversarial examples. They find that classifiers that have been adversarially trained, even those only robust to small-magnitude perturbations, are much more effective than standard classifiers at generating targeted adversarial examples. Based on their findings, they argue that the representations used by slightly robust neural networks are shared widely across non-robust networks. We present further evidence for this hypothesis by evaluating the representational similarity of robust and non-robust neural networks.
34
+
35
+ § 3 METHODS
36
+
37
+ We consider a labelled classification dataset $\mathcal{D} = \left( {X,y}\right)$ of data points and ground truth labels. Given two neural networks ${f}_{1}$ and ${f}_{2}$ , composed of layers
38
+
39
+ $$
40
+ {f}_{1} = {f}_{1}^{\left( {L}_{1}\right) } \circ {f}_{1}^{\left( {L}_{1} - 1\right) } \circ \ldots {f}_{1}^{\left( 1\right) }
41
+ $$
42
+
43
+ $$
44
+ {f}_{2} = {f}_{2}^{\left( {L}_{2}\right) } \circ {f}_{2}^{\left( {L}_{2} - 1\right) } \circ \ldots {f}_{2}^{\left( 1\right) }
45
+ $$
46
+
47
+ we are interested in the activations at the representation layers (i.e. the second-to-last, or penultimate, layer) of ${f}_{1}$ and ${f}_{2}$ , denoted $A$ and $B$
48
+
49
+ $$
50
+ {g}_{1} = {f}_{1}^{\left( {L}_{1} - 1\right) } \circ \ldots \circ {f}_{1}^{\left( 1\right) }\;A = {g}_{1}\left( X\right)
51
+ $$
52
+
53
+ $$
54
+ {g}_{2} = {f}_{2}^{\left( {L}_{2} - 1\right) } \circ \ldots \circ {f}_{2}^{\left( 1\right) }\;B = {g}_{2}\left( X\right)
55
+ $$
56
+
57
+ § 3.1 CENTERED KERNEL ALIGNMENT
58
+
59
+ We use centered kernel alignment to compare representations between neural networks. Given two mean-centered matrices of activations $A \in {\mathbb{R}}^{n \times {p}_{1}}$ and $B \in {\mathbb{R}}^{n \times {p}_{2}}$ of ${p}_{1}$ and ${p}_{2}$ neurons on a set of $n$ examples, CKA computes a value in the range $\left\lbrack {0,1}\right\rbrack$ with values closer to 1 indicating higher similarity. Kornblith et al. [2019] show that CKA has a number of desirable properties, including the ability to calculate similarity between layers with a differing number of neurons, invariance to isotropic scaling, and the ability to identify correspondences between the layers of identical architectures trained from different initializations, something many widely used metrics lack.
60
+
61
+ Since Kornblith et al. [2019] show that linear and radial basis function kernels compute similar similarity indices, we use a linear kernel for simplicity. The linear CKA between $A$ and $B$ is given by:
62
+
63
+ $$
64
+ \operatorname{CKA}\left( {A,B}\right) = \frac{{\begin{Vmatrix}{B}^{T}A\end{Vmatrix}}_{F}^{2}}{{\begin{Vmatrix}{A}^{T}A\end{Vmatrix}}_{F}{\begin{Vmatrix}{B}^{T}B\end{Vmatrix}}_{F}} \tag{1}
65
+ $$
66
+
67
+ $$
68
+ = \frac{{\begin{Vmatrix}\operatorname{cov}\left( {A}^{T},{B}^{T}\right) \end{Vmatrix}}_{F}^{2}}{{\begin{Vmatrix}\operatorname{cov}\left( {A}^{T},{A}^{T}\right) \end{Vmatrix}}_{F}{\begin{Vmatrix}\operatorname{cov}\left( {B}^{T},{B}^{T}\right) \end{Vmatrix}}_{F}}
69
+ $$
70
+
71
+ Previous studies have evaluated CKA between each pair of layers in the networks to be compared. In this study, we restrict our attention to the penultimate layer of each network, since this layer effectively captures the summary statistics used by each network for classification.
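As a concrete illustration, linear CKA from Eq. (1) can be computed directly from the two activation matrices. The following NumPy sketch is our own illustration (not the paper's code): it mean-centers each neuron's activations and evaluates the ratio of Frobenius norms.

```python
import numpy as np

def linear_cka(A, B):
    """Linear CKA between activation matrices A (n x p1) and B (n x p2).

    Columns (neurons) are mean-centered over the n examples, then
    CKA(A, B) = ||B^T A||_F^2 / (||A^T A||_F * ||B^T B||_F), as in Eq. (1).
    """
    A = A - A.mean(axis=0, keepdims=True)
    B = B - B.mean(axis=0, keepdims=True)
    numerator = np.linalg.norm(B.T @ A, ord="fro") ** 2
    denominator = np.linalg.norm(A.T @ A, ord="fro") * np.linalg.norm(B.T @ B, ord="fro")
    return numerator / denominator
```

Note that the value is invariant to isotropic scaling (`linear_cka(A, 3 * A)` equals 1) and that the two inputs may have different numbers of columns, matching the properties listed above.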
72
+
73
+ In this work, we empirically demonstrate a shortcoming of CKA, as traditionally applied. Suppose that two features are perfectly correlated in a dataset: ${X}_{\cdot ,i} = c{X}_{\cdot ,j}$ for some constant $c \neq 0$ and $i \neq j$ . Also suppose that each network computes its representations ${A}_{\cdot ,l}$ and ${B}_{\cdot ,m}$ using only one of these correlated features each, $i$ for ${f}_{1}$ and $j$ for ${f}_{2}$ respectively. Then, CKA evaluated on test data will show that representations $l$ and $m$ are perfectly correlated, even though this does not indicate similar feature usage between the networks. In large, high-dimensional datasets with many correlated features, such as natural images, this shortcoming can lead to a dramatic overestimation of network similarity.
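This failure mode is easy to reproduce numerically. In the toy construction below (our own, with hypothetical one-layer "networks"), each network reads a different one of two perfectly correlated input features, yet linear CKA between their representations is exactly 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x_i = rng.normal(size=(n, 1))      # feature i
x_j = 2.0 * x_i                    # feature j = c * feature i, with c = 2

# f1's representation depends only on feature i; f2's only on feature j.
A = x_i @ rng.normal(size=(1, 4))
B = x_j @ rng.normal(size=(1, 6))

def linear_cka(A, B):
    A = A - A.mean(axis=0)
    B = B - B.mean(axis=0)
    return np.linalg.norm(B.T @ A, "fro") ** 2 / (
        np.linalg.norm(A.T @ A, "fro") * np.linalg.norm(B.T @ B, "fro"))

print(linear_cka(A, B))  # 1.0 up to float error, despite disjoint feature usage
```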
74
+
75
+ § 3.2 REPRESENTATION-LAYER INVERSION
76
+
77
+ We use an inversion technique from the adversarial robustness literature to eliminate the effect of correlated features in test data. Ilyas et al. [2019] propose a technique known as representation inversion to investigate the features used by robust and non-robust classifiers. Representation inversion constructs a model-specific inverted dataset ${\mathcal{I}}_{f}\left( \mathcal{D}\right)$ for a given classifier $f$ where all features not used by $f$ are randomized.
78
+
79
+ Given a labeled dataset $\mathcal{D}$ , we choose a pair of inputs that have different labels. The first of each pair will be the seed image $s$ and the second the target image $t$ . Using the seed image as a starting point, we perform gradient descent to find an image that induces the same activations at the representation layer as the target image, under a specific neural network $f$ . We construct this image through gradient descent in input space (with the constraint that the resulting image has pixel values in the range $\left\lbrack {0,1}\right\rbrack$ ) by optimizing the following objective:
80
+
81
+ $$
82
+ \widetilde{s} = \mathop{\arg \min }\limits_{s}\parallel g\left( s\right) - g\left( t\right) {\parallel }_{2} \tag{2}
83
+ $$
84
+
85
+ We perform this process on a subset of 10,000 images drawn from the ImageNet validation set [Deng et al., 2009], producing a unique inverted dataset ${\mathcal{I}}_{f}\left( \mathcal{D}\right)$ for each model. There is no constraint to limit the difference between the seed image $s$ and the inverse image $\widetilde{s}$ , but extensive experiments show that gradient descent consistently finds inverse images close to the seeds with representations very closely matching the target images, regardless of the seed/target pair chosen. The difference between $s$ and $\widetilde{s}$ tends to increase as the robustness of the network $f$ increases.
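The optimization is ordinary projected gradient descent in input space. As a self-contained stand-in for a real network, the sketch below inverts a toy linear representation $g(x) = Wx$ (our simplification; the paper's experiments use deep networks and SGD with step size 1/8 on images):

```python
import numpy as np

def invert_representation(W, seed, target, steps=2000, lr=0.125):
    """Gradient descent on ||g(s) - g(t)||^2 with g(x) = W x (toy model).

    Starts from the seed image and projects pixel values back into
    [0, 1] after every step, matching the constraint of Eq. (2).
    """
    s = seed.copy()
    g_target = W @ target
    for _ in range(steps):
        grad = 2.0 * W.T @ (W @ s - g_target)
        s = np.clip(s - lr * grad, 0.0, 1.0)
    return s

rng = np.random.default_rng(0)
W = 0.1 * rng.normal(size=(8, 32))   # hypothetical 8-dim representation of a 32-pixel "image"
seed = rng.uniform(size=32)          # e.g. the "microwave" seed
target = rng.uniform(size=32)        # e.g. the "cat" target
s_inv = invert_representation(W, seed, target)
```

After optimization, `s_inv` stays inside the pixel box while its representation closely matches that of the target.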
86
+
87
+ By sampling pairs of seed and target images that have distinct labels we eliminate features correlated with the target class that are not used by the model for classification of the target class. For example, if we have a seed and target pair of "microwave" and "cat" respectively, and a given neural network is using the feature of "cat ear" to classify cat images but not "cat fur", by starting from an image of a microwave we can produce an image with the model's representation of "cat ear" that does not contain the correlated feature of "cat fur". If done successfully, the similarity between a neural network utilizing "cat fur" and a neural network utilizing "cat ear" will not be inflated by the co-occurrence of these features.
88
+
89
+ § 3.3 REPRESENTATION STITCHING
90
+
91
+ Csiszárik et al. [2021] approach neural network similarity from the functional perspective, asking the question "can network ${f}_{1}$ achieve its task using the representations of network ${f}_{2}$ ?". Their simple and elegant method stitches together the activations $A$ from a body network ${f}_{1}$ with the last layer of a head network ${f}_{2}^{\left( {L}_{2}\right) }$ by fitting an affine transformation $B \approx {AW} + b$ . This procedure creates a stitched network:
92
+
93
+ $$
94
+ {f}_{2 \circ 1} = {f}_{2}^{\left( {L}_{2}\right) } \circ \left( {{g}_{1}W + b}\right)
95
+ $$
96
+
97
+ Stitching is performed via linear regression using only the activations on training data, and we perform no task-specific fine-tuning on the stitching parameters $W,b$ . If there exists an identifiable linear transformation between the networks at the penultimate layer, then the stitched network will achieve high performance. Importantly, since the last layer of each network is a dense layer followed by a softmax, when the representation stitching procedure causes the stitched model to agree with the head model, it shows that the representations computed by the body network are compatible and useful with respect to the head model.
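Under our reading, the stitching fit is plain least squares from body activations to head activations. A minimal sketch with hypothetical data (the real procedure uses the two networks' activations on training images):

```python
import numpy as np

def fit_stitching(A, B):
    """Fit the affine map B ~ A W + b by least squares (no fine-tuning)."""
    A1 = np.hstack([A, np.ones((A.shape[0], 1))])  # append a bias column
    coef, *_ = np.linalg.lstsq(A1, B, rcond=None)
    return coef[:-1], coef[-1]                     # W, b

rng = np.random.default_rng(0)
A = rng.normal(size=(500, 16))        # body activations g1(X)
W_true = rng.normal(size=(16, 12))
B = A @ W_true + 0.5                  # head activations, here exactly affine in A
W, b = fit_stitching(A, B)
# The stitched network would then compute f2^{(L2)}(g1(x) W + b).
```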
98
+
99
+ | Model | $\varepsilon$ = 0.0 | 0.01 | 0.03 | 0.05 | 0.1 | 0.25 | 0.5 | 1.0 | 3.0 | 5.0 |
+ | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+ | ResNet18 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
+ | ResNet50 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
+ | WRN50-2 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
+ | WRN50-4 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
+ | ResNeXt50 | ✓ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✓ | ✘ |
+ | VGG16-bn | ✓ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✓ | ✘ |
+ | DenseNet | ✓ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✓ | ✘ |
+ | ShuffleNet | ✓ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✓ | ✘ |
+ | MobileNet | ✓ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✓ | ✘ |
+
+ Table 1: Pretrained ImageNet models used in experiments with available ${\ell }_{2}$ robustness levels $\varepsilon$ , provided by Salman et al. [2020] (github.com/microsoft/robust-models-transfer). WRN50-$N$ is a WideResNet50-$N$ [Zagoruyko and Komodakis, 2016].
133
+
134
+ § 3.4 ADVERSARIAL TRAINING
135
+
136
+ Adversarial training has been shown to be effective for constructing neural networks that are robust to adversarial examples [Madry et al., 2018]. In addition, adversarial training yields neural networks with a number of desirable qualities, including interpretable gradients [Tsipras et al., 2019], high-quality representations that are useful for transfer learning [Salman et al., 2020], and the ability to generate transferable adversarial examples [Springer et al., 2021c]. Despite the extensive research into this training paradigm, to our knowledge no comprehensive study has explored the relationship between network similarity and robustness.
137
+
138
+ During adversarial training, the empirical risk minimization regime is augmented so it produces a model that is robust to adversarial perturbations within a bounded region $S\left( x\right)$ around each input point $x$ . We use the common choice of an $\varepsilon \in \mathbb{R}$ sized ${\ell }_{2}$ -ball and refer to $\varepsilon$ as the robustness level of the model. This set of perturbations, $S$ , is incorporated into the risk, reformulating it as the following min-max optimization:
139
+
140
+ $$
141
+ \mathop{\min }\limits_{\theta }{\mathbb{E}}_{\left( {x,y}\right) \sim \mathcal{D}}\left\lbrack {\mathop{\max }\limits_{{\delta \in S\left( x\right) }}\mathcal{L}\left( {x + \delta ,y;\theta }\right) }\right\rbrack \tag{3}
142
+ $$
143
+
144
+ Due to computational requirements, we use the pretrained ${\ell }_{2}$ -robust ImageNet models released by Salman et al. [2020]. The specific architectures and robustness levels $\left( \varepsilon \right)$ studied are outlined in Table 1.
145
+
146
+ § 4 EXPERIMENTS
147
+
148
+ We extensively evaluate our proposed method for estimating representation similarity, and show that it leads to conclusions consistent with other accepted methods for network similarity evaluation. Section 4.1 discusses the overestimation of neural network similarity and proposes a novel method for similarity estimation based on image inversions. Sections 4.2 to 4.4 discuss our findings on the convergence of representation similarity across robust neural networks, demonstrating that disparate methods for similarity estimation lead to the consistent conclusion that similarity increases significantly across architectures and random initializations as a function of robustness.
149
+
150
+ § 4.1 OVERESTIMATION OF NEURAL NETWORK SIMILARITY
151
+
152
+ Many previous studies have examined the similarity between neural networks with different architectures or weight initializations [Kornblith et al., 2019, Nguyen et al., 2021, Hermann and Lampinen, 2020]. Here, we argue that similarity between networks may be substantially overestimated by these studies. While these studies suggest that neural network activations are often highly correlated across different networks, none of them have taken into account the substantial confounder that neuron responses may appear to be correlated despite responding to distinct patterns, due to frequent co-occurrence of the patterns in the dataset.
153
+
154
+ We first present experimental results demonstrating that this confounder is present for standard, non-robust neural networks, shown in Figure 2. The left heatmap presents the CKA similarity at the representation layer between all non-robust architectures on a subset of the ImageNet validation set. As is commonly reported, the similarity between all architectures is relatively high with an average of 0.67 between the penultimate layers of distinct architectures. In the right heatmap, at each row-column entry, we present the CKA similarity between the row and column architecture using the inverted dataset ${\mathcal{I}}_{f}\left( \mathcal{D}\right)$ generated by the row’s architecture. We perform these image inversions using 2,000 iterations of stochastic gradient descent with a step size of $1/8$ . In contrast to the high similarities found in the left heatmap, all similarities are significantly lower with an average between distinct architectures of 0.09.
155
+
156
157
+
158
+ Figure 2: Representation layer similarity of non-robust neural networks on the test set $\mathcal{D}$ and the inverted datasets ${\mathcal{I}}_{f}\left( \mathcal{D}\right)$ . High similarity is observed under the natural image dataset and low similarity when on datasets generated by non-robust models. These results indicate that similarity is systematically overestimated when based on responses to natural images but can be more reliably estimated using responses to inverse images.
159
+
160
161
+
162
+ Figure 3: Representation layer similarity of robust neural networks $\left( {\varepsilon = 3}\right)$ on the inverted datasets ${\mathcal{I}}_{f}\left( \mathcal{D}\right)$ . Similarity is much higher than observed in Figure 2.
163
+
164
+ The trends shown in Figure 2 clearly show that current similarity metrics are overestimating network similarity to a significant degree due to correlations between distinct features in the data. When images containing only the relevant features for one of the models are used in the similarity calculation, we see that models are far more dissimilar than standard metrics indicate. We therefore propose evaluating CKA on inverted image datasets in order to best measure the manner in which each network computes sufficient and necessary features for classification.
165
+
166
+ § 4.2 ADVERSARIAL TRAINING INCREASES REPRESENTATION SIMILARITY
167
+
168
+ We apply our new method for network similarity estimation to investigate how robustness affects the similarity of neural networks. In Figure 3 we calculate similarity between networks in a similar fashion to the rightmost plot in Figure 2; however, this time all architectures being compared were adversarially trained with an ${\ell }_{2}$ robustness of $\varepsilon = 3$ . In this plot we find that robust networks are significantly more similar to each other on the inverted datasets ${\mathcal{I}}_{f}\left( \mathcal{D}\right)$ than non-robust networks are (average similarity of 0.80, compared to 0.09 for standard models as seen in Figure 2); this may indicate that robust neural networks tend towards a similar set of representations, regardless of architecture.
169
+
170
+ In Figure 5 we plot the similarity between neural networks across architecture and robustness on the inverted datasets ${\mathcal{I}}_{f}\left( \mathcal{D}\right)$ , restricted to a comparison of three architectures for readability. In each heatmap we present the similarity between the outer-row and outer-column architecture, varied across $\varepsilon$ for each. Each inner-row and inner-column tick corresponds to the robustness of their respective outer-row and outer-column architecture, with the row architecture being the source of the inverted dataset ${\mathcal{I}}_{f}\left( \mathcal{D}\right)$ . In these plots we see a strong and consistent trend: when similarity is calculated between architectures using an inverted dataset produced by a non-robust or slightly-robust model (low value column ticks), low similarity is observed across all column-architecture robustnesses. However, when an inverted dataset produced by a robust architecture is used, we see notably higher similarity across all column-architecture robustnesses. Average comparisons between all architectures are shown in Figure 1, showing that this trend holds for all architectures evaluated.
171
+
172
+ Our results show a strong asymmetric relationship between robust and non-robust models. Inversions produced for a robust model show high similarity with all other models, but inversions produced for a non-robust model show low similarity with all other models. This finding supports the idea that features used by non-robust neural networks are highly entangled with the features used by robust neural networks [Springer et al., 2021a]. As is seen in Figure 5, the features present in the inverted datasets ${\mathcal{I}}_{f}\left( \mathcal{D}\right)$ generated by robust models are causing the representations used by non-robust classifiers to activate, yet the opposite does not hold.
173
+
174
+ § 4.3 ADVERSARIAL TRAINING INCREASES GRADIENT SIMILARITY
175
+
176
+ We further show the increase in similarity among robust neural networks with an intriguing result: as the pairwise robustness of models increases, the cosine similarity of their saliency maps [Simonyan et al., 2014] with respect to the ground truth labels increases as well. For each pair of models at identical robustness, we compute the cosine similarity between their saliency maps, and present the results in Figure 4. In a similar fashion to the results in Section 4.1 and Section 4.2, we find that the similarity between standard models is quite low; however, as the pairwise robustness between models increases, so does the gradient similarity.
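The gradient-similarity measurement itself is straightforward: flatten each model's input-space gradient (saliency map) for the same input and label, and take the cosine of the angle between them. A minimal helper of our own, with random arrays standing in for real gradients:

```python
import numpy as np

def saliency_cosine(grad_a, grad_b):
    """Cosine similarity between two flattened input-space gradients."""
    ga, gb = grad_a.ravel(), grad_b.ravel()
    return float(ga @ gb / (np.linalg.norm(ga) * np.linalg.norm(gb)))

rng = np.random.default_rng(0)
g1 = rng.normal(size=(3, 224, 224))              # stand-in for model 1's saliency map
g2 = g1 + 0.5 * rng.normal(size=(3, 224, 224))   # a correlated stand-in for model 2
print(saliency_cosine(g1, g2))
```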
177
+
178
179
+
180
+ Figure 4: Cosine similarity of model gradients in input space across robustness levels. Error bars indicate 1 std. deviation.
181
+
182
183
+
184
+ Figure 5: Representation-layer similarity of neural networks on the natural images and the inverted datasets ${\mathcal{I}}_{f}\left( \mathcal{D}\right)$ , pairwise across different robustness levels.
185
+
186
+ § 4.4 ADVERSARIAL TRAINING INCREASES FUNCTIONAL SIMILARITY
187
+
188
+ It has been argued that assessments of network similarity must examine functional similarity, since evaluation on known ground-truth labels gives a concrete, performance-based test [Ding et al., 2021]. Here, we apply the representation stitching methodology (Section 3.3) to "stitch" together networks of two different architectures using a single affine transformation. Within each robustness level, we evaluate every pair of models, using the first model as a body and the second model as a head. Figure 6 shows that stitching representations across architectures is effective; regardless of robustness level, when the head and body networks agree on a label, the stitched model also predicts the same label (93.9% for correct labels and 83.2% for incorrect labels). Interestingly, when the head and body networks predict different labels, we find that agreement between the stitched model and the head model increases significantly as a function of robustness (Figure 7). In other words, it appears that there is a stronger linear correspondence between representations of the body and head network, from a functional perspective, at higher robustness levels.
189
+
190
+ § 5 DISCUSSION
191
+
192
+ We find that while existing correlation-based similarity metrics overestimate the similarity between non-robust neural networks due to co-occurrence of features in the evaluation dataset, robust neural networks exhibit substantial similarity. We find that as the adversarial robustness of a neural network increases, its similarity to other networks increases, even across differences in architecture and random initialization. This trend of similarity is also present in the gradients, where we find that the Jacobians with respect to inputs are more similar for robust networks than for their non-robust counterparts. These results suggest a modified universality hypothesis: neural networks, regardless of exact training conditions (i.e., architecture, random initialization, learning parameters), will learn similar representations under mild constraints, such as adversarial robustness. We find empirically that robust neural networks satisfy this hypothesis. Furthermore, we find that the representations of non-robust neural networks overlap substantially with the representations of robust neural networks despite less overlap with the representations of other non-robust neural networks. This suggests that non-robust representations can be thought of as "components" of robust representations, much as a feature that represents the ear of a cat can be thought of as a component of the feature that represents an entire cat.
193
+
194
195
+
196
+ Figure 6: Average agreement between stitched models and their corresponding head models on data instances where the body and head networks agree. Robustness has no effect on agreement. Confidence bands indicate 1 std. error.
197
+
198
+ Our results provide an important step towards understanding the representations learned by neural networks. Our framework justifies the previously observed exceptional transferability of adversarial examples constructed using robust neural networks by demonstrating that non-robust and robust representations exhibit substantial overlap [Springer et al., 2021b]. In addition, we suspect that the fact that a single robust neural network has some degree of similarity to all non-robust neural networks can explain why robust neural networks are often better at learning representations that transfer to new tasks [Salman et al., 2020].
199
+
200
+ Neural network architectures and optimization procedures have often been viewed from a Bayesian perspective as a strong prior on the functions that these networks can learn [Wilson and Izmailov, 2020, Kleinberg et al., 2018]. Our results indicate that robust training, likewise, is a very strong prior which can constrain both the representations extracted from a dataset and the functions learned, independent of architecture. Viewing adversarial robustness as an inductive bias may lead to understanding the limitations of our current models for adversarial robustness, and may help us develop better notions of robustness and improve the accuracy of robust models.
201
+
202
203
+
204
+ Figure 7: Average agreement between stitched models and their corresponding head models on data instances where the body and head networks disagree. Agreement increases as robustness increases. Confidence bands indicate 1 std. error.
205
+
206
+ § 6 CONCLUSION
207
+
208
+ Increased similarity between robust neural networks could mean that empirical analysis of a single robust neural network will reveal insight into the representations learned by every other robust neural network, which may lead us to understand the nature of adversarial robustness itself. If neural networks learn a solution that is largely dependent on the data itself rather than the learning algorithm, random initialization, or architecture, then we may be able to derive insight into the innate structure of data by using the representations learned by neural networks.
209
+
210
+ While we find that non-robust neural networks do not strongly support the universality hypothesis, there is a convergence between both the representations used and the functions encoded by robust neural networks of different architectures. If true, even in the limited case of robust models, the universality hypothesis has substantial implications for the field of machine learning, and more broadly artificial intelligence and neuroscience. First, identifying and understanding the representations used by any individual neural network may allow us to understand the representations learned by every neural network that has been trained on the same dataset. This can have applications in mitigating transferable adversarial examples [Moosavi-Dezfooli et al., 2017] as well as building representations that are more useful for transfer learning [Salman et al., 2020]. Second, if architecture matters less given a robustness constraint, robust representations may give us insight into patterns learned by biological brains [Conwell et al., 2021a, b, Zhuang et al., 2021, Yamins et al., 2014, Güçlü and van Gerven, 2015, Eickenberg et al., 2017].
UAI/UAI 2022/UAI 2022 Conference/BKZCKwIjcl5/Initial_manuscript_md/Initial_manuscript.md ADDED
The diff for this file is too large to render. See raw diff
 
UAI/UAI 2022/UAI 2022 Conference/BKZCKwIjcl5/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,483 @@
1
+ § EFFICIENT RESOURCE ALLOCATION WITH FAIRNESS CONSTRAINTS IN RESTLESS MULTI-ARMED BANDITS
2
+
3
+ § ABSTRACT
4
+
5
+ Restless Multi-Armed Bandits (RMAB) is an apt model to represent decision-making problems in public health interventions (e.g., tuberculosis, maternal, and child care), anti-poaching planning, sensor monitoring, personalized recommendations and many more. Existing research in RMAB has contributed mechanisms and theoretical results to a wide variety of settings, where the focus is on maximizing expected value. In this paper, we are interested in ensuring that RMAB decision making is also fair to different arms while maximizing expected value. In the context of public health settings, this would ensure that different people and/or communities are fairly represented while making public health intervention decisions. To achieve this goal, we formally define the fairness constraints in RMAB and provide planning and learning methods to solve RMAB in a fair manner. We demonstrate key theoretical properties of fair RMAB and experimentally demonstrate that our proposed methods handle fairness constraints without sacrificing significantly on solution quality.
6
+
7
+ § 1 INTRODUCTION
8
+
9
+ Picking the right time and manner of limited interventions is a problem of great practical importance in tuberculosis [Mate et al., 2020], maternal and child care [Biswas et al., 2021, Mate et al., 2021b], anti-poaching operations [Qian et al., 2016], cancer detection [Lee et al., 2019], and many others. All these problems are characterized by multiple arms (i.e., patients, pregnant mothers, regions of a forest) whose state evolves in an uncertain manner (e.g., medication usage in the case of tuberculosis, engagement patterns of mothers on calls related to good practices in pregnancy), and arms moving to "bad" states have to be steered to "good" outcomes through interventions. The key challenge is that the number of interventions is limited due to a limited set of resources (e.g., public health workers, patrol officers in anti-poaching operations). Restless Multi-Armed Bandits (RMAB), a generalization of Multi-Armed Bandits (MAB) that allows non-active bandits to also undergo Markovian state transitions, has become an ideal model to represent the aforementioned problems of interest as it models uncertainty in arm transitions (to capture uncertain state evolution), actions (to represent interventions) and a budget constraint (to represent limited resources).
+ "good" outcomes through interventions. The key challenge is that the number of interventions is limited due to a limited set of resources (e.g., public health workers, patrol officers in anti-poaching operations). Restless Multi-Armed Bandits (RMAB), a generalization of Multi-Armed Bandits (MAB) that allows non-active bandits to also undergo the Markovian state transition, has become an ideal model to represent the aforementioned problems of interest as it models uncertainty in arm transitions (to capture uncertain state evolution), actions (to represent interventions) and budget constraint (to represent limited resources).
12
+
13
+ Existing work [Mate et al., 2020, Biswas et al., 2021, Mate et al., 2021a] has focused on developing theoretical insights and practically efficient methods to solve RMAB. At each decision epoch, RMAB methods identify arms that provide the biggest improvement with an intervention. Such an approach, though technically optimal, can result in certain arms (or types of arms) getting starved of interventions.
14
+
15
+ In the case of interventions with regards to public health, RMAB algorithms focus interventions on the top beneficiaries who will improve the objective (public health outcomes) the most. This can result in certain beneficiaries never talking to public health workers and thereby moving to bad states (and potentially also impacting other beneficiaries in the same community) from where improvements can be minor even with intervention, and hence never getting picked by RMAB algorithms. As shown in Fig. 1, when using the Threshold Whittle index approach proposed by Mate et al. [2020], the arm activation probability is lopsided, with 30% of arms getting activated more than 50 times while 50% of the arms are never activated. Such starvation of interventions can result in arms moving to a bad state from where interventions cannot provide big improvements, and therefore there is further starvation of interventions for those arms. Such starvation can happen to entire regions or communities, resulting in a lack of fair support for beneficiaries in those regions/communities. To avoid such cycles of bad outcomes, there is a need for RMAB algorithms to consider fairness in addition to maximizing expected reward when picking arms. Risk-sensitive RMAB [Mate et al., 2021b] considers an objective that aims to reduce such starvation; however, it does not guarantee that arms (or types of arms) are picked a minimum number of times.
16
+
17
+ Recent work in Multi-Armed Bandits (MAB) has presented different notions of fairness. For example, Li et al. [2019] study a Combinatorial Sleeping MAB model with Fairness constraints, called CSMAB-F. The fairness constraints ensure a minimum selection fraction for each arm. Patil et al. [2020] introduce similar fairness constraints in the stochastic multi-armed bandit problem, where they use a pre-specified vector to denote the guaranteed number of pulls. Joseph et al. [2016] define fairness as requiring that a worse arm should not be picked over a better arm, despite the uncertainty on payoffs. Chen et al. [2020] define the fairness constraint as a minimum rate that is required when allocating a task or resource to a user. The above fairness definitions are relevant, and we generalize from these to propose a fairness notion for RMAB. Unfortunately, approaches developed for fair MAB cannot be utilized for RMAB, due to uncertain state transitions with passive actions as well.
18
+
19
+ Contributions: To the best of our knowledge, this is the first paper to consider fairness constraints in RMAB. Here are the key contributions:
20
+
21
+ * We propose a fairness constraint wherein for any arm (or more generally, for a type of arm), we require that the number of decision epochs since the arm (or the type of arm) was activated last time is upper bounded. This will ensure that every arm (or type of arm) gets activated a minimum number of times, thus generalizing on the fairness notions in MAB described earlier.
22
+
23
+ * We provide a modification to the Whittle index algorithm that is scalable and optimal while being able to handle both finite and infinite horizon cases. We also provide a model-free learning method to solve the problem when the transition probabilities are not known beforehand.
24
+
25
+ * Experimental results on generated datasets show that our proposed approaches achieve good performance while satisfying the fairness constraints.
26
+
27
+ § 2 PROBLEM DESCRIPTION
28
+
29
+ In this section, we formally introduce the RMAB problem. There are $N$ independent arms, each of which evolves according to an associated Markov Decision Process (MDP). An MDP is characterized by a tuple $\{ \mathcal{S},\mathcal{A},\mathcal{P},r\}$ , where $\mathcal{S}$ represents the state space, $\mathcal{A}$ represents the action space, $\mathcal{P}$ represents the transition function, and $r$ is the state-dependent reward function. Specifically, each arm has a binary-state space: 1 ("good") and 0 ("bad"), with action-dependent transition matrix $\mathcal{P}$ that is potentially different for each arm. Let ${a}_{t}^{i} \in \{ 0,1\}$ denote the action taken at time step $t$ for arm $i$ , and ${a}_{t}^{i} = 1\left( {{a}_{t}^{i} = 0}\right)$ indicates an active (passive) action for arm $i$ . Due to limited resources, at each decision epoch, the decision-maker can activate (or intervene on) at most $k$ out of $N$ arms and receive reward accrued from all arms determined by their states. $\mathop{\sum }\limits_{{i = 1}}^{N}{a}_{t}^{i} = k$ describes this limited resource constraint. Figure 2 provides an example of an arm in RMAB.
30
+
31
32
+
33
+ Figure 1: The x-axis is the number of times activated, and the $y$ -axis is the percentage of each frequency range. We consider the RMAB given in Section 2, with $k = {10},N =$ ${100},T = {1000}$ and $L = {50},\eta = 2$ . Left: the result of using the Whittle index algorithm without considering fairness constraints. Right: the result of when considering fairness constraints. As can be noted, without fairness constraints in place, almost 50% of the arms never get activated.
34
+
35
36
+
37
+ Figure 2: $a$ and $p$ denote the active and passive actions on arm $i$ respectively. ${P}_{s,{s}^{\prime }}^{i,a}$ and ${P}_{s,{s}^{\prime }}^{i,p}$ are the transition probabilities from state $s$ to state ${s}^{\prime }$ under action $a$ and $p$ respectively for arm $i$ .
38
+
39
+ The state of arm $i$ evolves according to the transition matrix ${P}_{s,{s}^{\prime }}^{a,i}$ for the active action and ${P}_{s,{s}^{\prime }}^{p,i}$ for the passive action. We follow the setting in Mate et al. [2020]: when arm $i$ is activated, its latent state is fully observed by the decision-maker, while the states of passive arms remain unobserved. For such a partially observable problem, it is sufficient to let the MDP state be the belief state: the probability that the arm is in the "good" state. We need to keep track of the belief about the current state of each unobserved arm. This can be derived from the decision-maker's partial information, which consists of the last observed state and the number of decision time steps since the last activation of the arm. Let ${\omega }_{s}^{i}\left( u\right)$ denote the belief state, i.e., the probability that the state of arm $i$ is 1 when it was last activated $u$ time steps ago with observed state $s$ . The belief state at the next time step is obtained from the following recursive equations:
40
+
41
+ $$
42
+ {\omega }_{s}^{i}\left( {u + 1}\right) = \left\{ \begin{array}{ll} {\omega }_{s}^{i}\left( u\right) {P}_{1,1}^{p,i} + \left( {1 - {\omega }_{s}^{i}\left( u\right) }\right) {P}_{0,1}^{p,i} & \text{ passive } \\ {P}_{{s}^{\prime },1}^{i,a} & \text{ active } \end{array}\right.
43
+ $$
44
+
45
+ (1)
46
+
47
+ where ${s}^{\prime }$ is the new state observed for arm $i$ when the active action is taken. The belief state can be calculated in closed form from the given transition probabilities. We let $\omega = {\omega }_{s}^{i}\left( {u + 1}\right)$ for ease of explanation when there is no ambiguity.
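+ The recursion in Eq. 1 can be sketched in code as follows; the transition values are illustrative assumptions, not taken from any dataset in the paper:

```python
# Sketch of the belief update in Eq. 1 for a single arm. The transition
# values P_{s,1} below are illustrative assumptions.
P_passive = {(0, 1): 0.1, (1, 1): 0.8}  # P^{p}_{s,1}
P_active = {(0, 1): 0.6, (1, 1): 0.9}   # P^{a}_{s,1}

def belief_update(omega, action, observed_state=None):
    """One-step update of the probability that the arm is in the good state."""
    if action == 0:
        # Passive: propagate the belief through the passive dynamics
        return omega * P_passive[(1, 1)] + (1 - omega) * P_passive[(0, 1)]
    # Active: the state s' is observed, so the belief resets deterministically
    return P_active[(observed_state, 1)]

# Belief of an arm last observed in state 1 and then left passive
omega = P_active[(1, 1)]
for u in range(5):
    omega = belief_update(omega, action=0)
```

+ Left passive, the belief drifts toward the stationary value ${\omega }^{ * } = {P}_{0,1}^{p}/\left( {1 + {P}_{0,1}^{p} - {P}_{1,1}^{p}}\right)$ discussed in Section 4.1.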
48
+
49
+ A policy $\pi$ maps the belief state vector ${\Omega }_{t} = \left\{ {{\omega }_{t}^{1},\cdots ,{\omega }_{t}^{N}}\right\}$ at each time step $t$ to an action vector ${a}_{t} \in \{ 0,1{\} }^{N}$ . Here ${\omega }_{t}^{i}$ is the belief state for arm $i$ at time step $t$ . We want to design an optimal policy that maximizes the cumulative long-term reward over all the arms. One widely used performance measure is the expected discounted reward over the horizon $T$ :
50
+
51
+ $$
52
+ {\mathbb{E}}_{\pi }\left\lbrack {\mathop{\sum }\limits_{{t = 1}}^{T}{\beta }^{t - 1}{R}_{t}\left( {{\Omega }_{t},\pi \left( {\Omega }_{t}\right) }\right) \mid {\Omega }_{0}}\right\rbrack
53
+ $$
54
+
55
+ Here ${R}_{t}\left( {{\Omega }_{t},\pi \left( {\Omega }_{t}\right) }\right)$ is the reward obtained in slot $t$ under action ${a}_{t} = \pi \left( {\Omega }_{t}\right)$ determined by policy $\pi$ , and $\beta$ is the discount factor. As discussed in the introduction, in addition to maximizing the cumulative reward, ensuring fairness among the arms is a key design concern for many real-world applications. In order to model the fairness requirement, we introduce constraints that ensure that any arm (or kind of arm) is activated at least $\eta$ times during any decision interval of length $L$ . The overall optimization problem is thus given by:
56
+
57
+ $$
58
+ \mathop{\operatorname{maximize}}\limits_{\pi }{\mathbb{E}}_{\pi }\left\lbrack {\mathop{\sum }\limits_{{t = 1}}^{T}{\beta }^{t - 1}{R}_{t}\left( {{\Omega }_{t},\pi \left( {\Omega }_{t}\right) }\right) \mid {\Omega }_{0}}\right\rbrack
59
+ $$
60
+
61
+ $$
62
+ \text{ subject to }\mathop{\sum }\limits_{{i = 1}}^{N}{a}_{t}^{i} = k,\forall t \in \{ 1,\ldots ,T\}
63
+ $$
64
+
65
+ $$
66
+ \mathop{\sum }\limits_{{t = u}}^{{u + L}}{a}_{t}^{i} \geq \eta \;\forall u \in \{ 1,\ldots ,T - L\} ,\forall i \in \{ 1,\ldots ,N\} .
67
+ $$
68
+
69
+ (2)
70
+
71
+ $\eta$ is the minimum number of times an arm should be activated in a decision period of length $L$ . The strength of the fairness constraint is thus governed by the combination of $L$ and $\eta$ . This requires $k \times L > N \times \left( {\eta - 1}\right)$ , since the fairness constraint must be compatible with the resource constraint. The fairness problem can also be formulated at the level of regions/communities by summing over all arms $i$ in a region in the second constraint, i.e.,
72
+
73
+ $$
74
+ \mathop{\sum }\limits_{{i \in r}}\mathop{\sum }\limits_{{t = u}}^{{u + L}}{a}_{t}^{i} \geq \eta
75
+ $$
76
+
77
+ Our approaches with a simple modification are also applicable to this fairness constraint at the level of regions/communities.
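+ As a quick sanity aid, the constraints in (2) can be checked programmatically; the following is a minimal sketch in which the window convention (each window of $L + 1$ consecutive steps) follows the summation limits in (2), and the round-robin schedule and all parameter values are illustrative assumptions:

```python
# Sketch: verify a candidate activation schedule against the budget and
# fairness constraints in (2). The round-robin schedule and the parameter
# values below are illustrative assumptions.

def is_feasible(k, L, N, eta):
    """Necessary compatibility of the fairness and budget constraints."""
    return k * L > N * (eta - 1)

def satisfies_constraints(schedule, k, L, eta):
    """schedule[t][i] = 1 iff arm i is activated at step t (0-indexed)."""
    T, N = len(schedule), len(schedule[0])
    if any(sum(row) != k for row in schedule):  # budget: exactly k per step
        return False
    for u in range(T - L):                      # sliding fairness windows
        for i in range(N):
            if sum(schedule[t][i] for t in range(u, u + L + 1)) < eta:
                return False
    return True

# Round-robin over N = 4 arms with k = 2: each arm is pulled every 2 steps
N, k, L, eta, T = 4, 2, 3, 1, 8
schedule = [[1 if i in {(2 * t) % N, (2 * t + 1) % N} else 0
             for i in range(N)] for t in range(T)]
```

+ For these values, the round-robin schedule satisfies both constraints, and $k \times L = 6 > N \times \left( {\eta - 1}\right) = 0$ holds.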
78
+
79
+ § 3 BACKGROUND: WHITTLE INDEX
80
+
81
+ In this section, we describe the Whittle index algorithm [Whittle, 1988] for solving RMABs. At every time step, this algorithm computes index values (Whittle index values) for every arm and then activates the arms with the top " $k$ " index values. The Whittle index quantifies how appealing it is to activate a certain arm. This algorithm provides optimal solutions if the underlying RMAB satisfies the indexability property, defined in Definition 1.
82
+
83
+ Formally ${}^{1}$ , the Whittle index of an arm in a belief state $\omega$ (i.e., the probability of the good state 1) is the minimum subsidy $\lambda$ such that it is optimal to make the arm passive in that belief state. Let ${V}_{\lambda ,T}\left( \omega \right)$ denote the value function for the belief state $\omega$ over a horizon $T$ . Then it can be written as
84
+
85
+ $$
86
+ {V}_{\lambda ,T}\left( \omega \right) = \max \left\{ {{V}_{\lambda ,T}\left( {\omega ;a = 0}\right) ,{V}_{\lambda ,T}\left( {\omega ;a = 1}\right) }\right\} , \tag{3}
87
+ $$
88
+
89
+ where ${V}_{\lambda ,T}\left( {\omega ;a = 0}\right)$ and ${V}_{\lambda ,T}\left( {\omega ;a = 1}\right)$ denote the value function when taking the passive and active action respectively at the first decision epoch, followed by the optimal policy in future time steps. Because the expected immediate reward is $\omega$ and the subsidy for a passive action is $\lambda$ , the value function for the passive action is:
90
+
91
+ $$
92
+ {V}_{\lambda ,T}\left( {\omega ,a = 0}\right) = \lambda + \omega + \beta {V}_{\lambda ,T - 1}\left( {{\tau }^{1}\left( \omega \right) }\right) , \tag{4}
93
+ $$
94
+
95
+ where ${\tau }^{1}\left( \omega \right)$ is the 1-step belief update of $\omega$ when the passive arm remains unobserved for one more consecutive slot (see the update rule in Eq. 1). Note that $\omega$ is also the expected reward associated with that belief state. For an active action, the immediate reward is $\omega$ and there is no subsidy; however, the actual state becomes known and then evolves according to the transition matrix at the next step:
96
+
97
+ $$
+ {V}_{\lambda ,T}\left( {\omega ,a = 1}\right) = \omega + \beta \left( {\omega {V}_{\lambda ,T - 1}\left( {P}_{1,1}^{a}\right) + \left( {1 - \omega }\right) {V}_{\lambda ,T - 1}\left( {P}_{0,1}^{a}\right) }\right) \tag{5}
+ $$
104
+
105
+ Definition 1 An arm is indexable if the passive set under the subsidy $\lambda$ , given as ${\mathcal{P}}_{\lambda } = \left\{ {\omega : {V}_{\lambda ,T}\left( {\omega ,a = 0}\right) \geq {V}_{\lambda ,T}\left( {\omega ,a = 1}\right) }\right\}$ , monotonically increases from $\varnothing$ to the entire state space as $\lambda$ increases from $- \infty$ to $\infty$ . The RMAB is indexable if every arm is indexable.
106
+
107
+ Intuitively, this means that if an arm takes the passive action with subsidy $\lambda$ , it will also take the passive action for any ${\lambda }^{\prime } > \lambda$ . Given indexability, ${W}_{T}\left( \omega \right)$ is the least subsidy $\lambda$ that makes it equally desirable to take the active and passive actions:
108
+
109
+ $$
110
+ {W}_{T}\left( \omega \right) = \mathop{\inf }\limits_{\lambda }\left\{ {\lambda : {V}_{\lambda ,T}\left( {\omega ;a = 1}\right) \leq {V}_{\lambda ,T}\left( {\omega ;a = 0}\right) }\right\} \tag{6}
111
+ $$
112
+
113
+ Definition 2 A policy is a threshold policy if there exists a threshold ${\lambda }_{th}$ such that the action is passive ( $a = 0$ ) if $\lambda > {\lambda }_{th}$ and active ( $a = 1$ ) otherwise.
114
+
115
+ Existing efficient methods for solving RMABs derive these threshold policies.
+
+ ${}^{1}$ Since we will only be talking about one arm at a time step, we abuse notation by not indexing the belief, action, and value function with the arm id or time index.
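+ Given indexability, the index in Eq. 6 can be approximated numerically by bisection on the subsidy. The following is a minimal sketch, not the Forward/Reverse Threshold method used later in the paper; the transition probabilities, discount factor, and truncated horizon are illustrative assumptions:

```python
# Sketch: approximate the Whittle index of Eq. 6 by bisection on the
# subsidy lambda, evaluating Eqs. 3-5 with a truncated-horizon recursion.
# Transition values, beta, and the horizon are illustrative assumptions.
from functools import lru_cache

beta = 0.95
P_p = {0: 0.1, 1: 0.8}  # passive P_{s,1}
P_a = {0: 0.6, 1: 0.9}  # active  P_{s,1}

def tau1(omega):
    """One-step passive belief update (Eq. 1)."""
    return omega * P_p[1] + (1 - omega) * P_p[0]

@lru_cache(maxsize=None)
def V(lam, T, omega):
    """Value of one arm with subsidy lam over residual horizon T (Eq. 3)."""
    if T == 0:
        return 0.0
    passive = lam + omega + beta * V(lam, T - 1, tau1(omega))          # Eq. 4
    active = omega + beta * (omega * V(lam, T - 1, P_a[1])
                             + (1 - omega) * V(lam, T - 1, P_a[0]))    # Eq. 5
    return max(passive, active)

def whittle_index(omega, T=20, lo=-1.0, hi=1.0, iters=40):
    """Smallest subsidy making the passive action optimal (Eq. 6)."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        v_passive = mid + omega + beta * V(mid, T - 1, tau1(omega))
        v_active = omega + beta * (omega * V(mid, T - 1, P_a[1])
                                   + (1 - omega) * V(mid, T - 1, P_a[0]))
        if v_active <= v_passive:
            hi = mid  # passive already optimal: try a smaller subsidy
        else:
            lo = mid
    return (lo + hi) / 2
```

+ Memoization keeps the recursion tractable because only the belief values reachable by repeated passive updates from the observed states ever occur.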
120
+
121
+ § 4 FAIRNESS IN RMAB
122
+
123
+ The key advantage of a Whittle index based approach is scalability without sacrificing solution quality. In this section, we provide Whittle index based approaches to handle fairness constraints under known and unknown transition models, with both infinite and finite horizon settings. We specifically consider partially observable settings ${}^{2}$ .
124
+
125
+ § 4.1 INFINITE HORIZON
126
+
127
+ When the states of the RMAB are partially observable, it is sufficient to let the MDP state be the belief state: the probability that the arm is in the "good" state [Kaelbling et al., 1998]. As a result, the partially observable RMAB has a large number of belief states [Mate et al., 2020].
128
+
129
+ Recall that the Whittle index ${W}_{T}\left( \omega \right)$ of belief state $\omega$ is the smallest $\lambda$ s.t. it is optimal to make the arm passive in the current state. We can compute the Whittle index value for each arm, rank the index values of all $N$ arms, and select the top $k$ arms to activate at each time step. With fairness constraints, the change to this approach is minimal and intuitive: the policy chooses the arms with the top " $k$ " index values until an arm's fairness constraint is about to be violated. In that time step, we replace the last arm in the top- $k$ with the arm whose fairness constraint would otherwise be violated. We show that this simple change works across the board for the infinite and finite horizon, fully and partially observable settings. We provide the detailed algorithm in Algorithm 1 and also provide sufficient conditions under which Algorithm 1 is optimal.
130
+
131
+ We now provide the expression for $\lambda$ . ${V}_{\lambda ,\infty }\left( \omega \right)$ denotes the value that can be accrued from a single-armed bandit process with subsidy $\lambda$ over an infinite time horizon $\left( {T \rightarrow \infty }\right)$ if the belief state is $\omega$ . Therefore, we have:
132
+
133
+ $$
134
+ {V}_{\lambda ,\infty }\left( \omega \right) = \max \left\{ \begin{array}{ll} \lambda + \omega + \beta {V}_{\lambda ,\infty }\left( {{\tau }^{1}\left( \omega \right) }\right) & \text{ passive } \\ \omega + \beta \left( {\omega {V}_{\lambda ,\infty }\left( {P}_{1,1}^{a}\right) + \left( {1 - \omega }\right) {V}_{\lambda ,\infty }\left( {P}_{0,1}^{a}\right) }\right) & \text{ active } \end{array}\right.
135
+ $$
136
+
137
+ (7)
138
+
139
+ For any belief state $\omega$ , the $u$ -steps belief update ${\tau }^{u}\left( \omega \right)$ will converge to ${\omega }^{ * }$ as $u \rightarrow \infty$ , where ${\omega }^{ * } = \frac{{P}_{0,1}^{p}}{1 + {P}_{0,1}^{p} - {P}_{1,1}^{p}}$ . It should be noted that this convergence can happen in two ways depending on the state transition patterns:
140
+
141
+ * Case 1: Positively correlated channel $\left( {{P}_{1,1}^{p} \geq {P}_{0,1}^{p}}\right)$ .
142
+
143
+ The belief update process is shown in Figure 3. For the positively correlated case, the belief update process is monotonic.
144
+
145
146
+
147
+ Figure 3: The $u$ -step belief update of an unobserved arm $\left( {{P}_{1,1}^{p} \geq {P}_{0,1}^{p}}\right)$
148
+
149
+ We first consider the non-increasing belief process indicated in the right graph. Formally, for all $u \in {\mathbb{N}}^{ + }$ , we have $\omega \left( u\right) \geq \omega \left( {u + 1}\right)$ if the initial belief state $\omega$ is above the convergence value. Similarly, for the increasing belief process shown in the left graph, the initial belief state satisfies $\omega < {\omega }^{ * }$ .
150
+
151
+ * Case 2: Negatively correlated channel $\left( {{P}_{1,1}^{p} < {P}_{0,1}^{p}}\right)$ .
152
+
153
154
+
155
+ Figure 4: The $u$ -step belief update of an unobserved arm $\left( {{P}_{1,1}^{p} < {P}_{0,1}^{p}}\right)$
156
+
157
+ The belief state converges to ${\omega }^{ * }$ from the opposite direction, as shown in Figure 4. This case has similar properties and is less common in the real world, since an arm is typically more likely to remain in a good state than to move from a bad state to a good one. We therefore omit the lengthy discussion.
158
+
159
+ The belief state transition patterns are of particular importance because in proving optimality of Algorithm 1, the belief evolution pattern for the arm (whose fairness constraint will be violated) plays a crucial role.
160
+
161
+ Theorem 1 For infinite time horizon $\left( {T \rightarrow \infty }\right)$ RMAB with Fairness Constraints governed by parameters $\eta$ and $L$ , Algorithm 1 (i.e., activating arm $i$ at the end of the time period when its fairness constraint is violated) is optimal:
162
+
163
+ 1. For ${\omega }^{i} \leq {\omega }^{ * }$ (increasing belief process), if
164
+
165
+ $$
+ \left( {{P}_{1,1}^{i,p} - {P}_{0,1}^{i,p}}\right) \left( {1 + \frac{{\Delta }_{3}}{1 - \beta }}\right) \left( {1 - \beta \left( {{P}_{1,1}^{i,a} - {P}_{0,1}^{i,a}}\right) }\right) \leq \left( {{P}_{1,1}^{i,a} - {P}_{0,1}^{i,a}}\right) \tag{8}
+ $$
172
+
173
+ $$
174
+ {\Delta }_{3} = \min \left\{ {\left( {{P}_{1,1}^{i,p} - {P}_{0,1}^{i,p}}\right) ,\left( {{P}_{1,1}^{i,a} - {P}_{0,1}^{i,a}}\right) }\right\} .
175
+ $$
176
+
177
+ 2. For ${\omega }^{i} \geq {\omega }^{ * }$ (non-increasing belief process), if:
178
+
179
+ $$
+ \left( {{P}_{1,1}^{i,p} - {P}_{0,1}^{i,p}}\right) \left( {1 - \beta }\right) {\Delta }_{1} \geq \left( {{P}_{1,1}^{i,a} - {P}_{0,1}^{i,a}}\right) \left( {1 - \beta \left( {{P}_{1,1}^{i,a} - {P}_{0,1}^{i,a}}\right) }\right) \tag{9}
+ $$
186
+
187
+ $$
188
+ {\Delta }_{1} = \min \left\{ {1,1 + \beta \left( {{P}_{1,1}^{i,p} - {P}_{0,1}^{i,p}}\right) - \beta \left( {{P}_{1,1}^{i,a} - {P}_{0,1}^{i,a}}\right) }\right\}
189
+ $$
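+ The sufficient conditions (8) and (9) are straightforward to check numerically for a given arm; the following is a small sketch in which all transition values are illustrative assumptions:

```python
# Sketch: numerically check the sufficient conditions of Theorem 1 for one
# arm. The transition values used in the examples are illustrative
# assumptions, not from any dataset in the paper.

def condition_increasing(P11p, P01p, P11a, P01a, beta):
    """Condition (8), for an increasing belief process (omega <= omega*)."""
    d3 = min(P11p - P01p, P11a - P01a)
    lhs = (P11p - P01p) * (1 + d3 / (1 - beta)) * (1 - beta * (P11a - P01a))
    return lhs <= (P11a - P01a)

def condition_nonincreasing(P11p, P01p, P11a, P01a, beta):
    """Condition (9), for a non-increasing belief process (omega >= omega*)."""
    d1 = min(1, 1 + beta * (P11p - P01p) - beta * (P11a - P01a))
    lhs = (P11p - P01p) * (1 - beta) * d1
    rhs = (P11a - P01a) * (1 - beta * (P11a - P01a))
    return lhs >= rhs
```

+ Intuitively, condition (8) holds when the active action improves the transition to the good state much more strongly than the passive action.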
190
+
191
+ ${}^{2}$ We also provide a discussion of the fully observable setting in the appendix.
192
+
193
+ Algorithm 1: Fair Whittle Thresholding (FaWT)
+
+ Input: Transition matrix $\mathcal{P}$ ; fairness constraint parameters $\eta$ and $L$ ; set of belief states $\left\{ {{\omega }^{1},\ldots ,{\omega }^{N}}\right\}$ ; budget $k$
+
+ 1: for each arm $i$ in 1 to $N$ do
+ 2:     Compute the corresponding Whittle index ${TW}\left( {\omega }^{i}\right)$ under the infinite horizon using the Forward and Reverse Threshold policy;
+ 3:     if the activation frequency $\eta$ for arm $i$ will not be satisfied at the end of the period of length $L$ then
+ 4:         Add arm $i$ to the action set $\phi$ ; set $k = k - 1$ ;
+ 5:     if finite horizon then
+ 6:         Compute the index value ${W}_{1}\left( {\omega }^{i}\right)$ ;
+ 7:         Compute the Whittle index ${W}_{T}\left( {\omega }^{i}\right)$ using Equation 10;
+ 8: Add the arms with the top $k$ highest ${TW}\left( \cdot \right)$ (infinite horizon) or ${W}_{T}\left( \cdot \right)$ (finite horizon) values to the action set $\phi$ ;
+ 9: Decrease the residual time horizon: $T = T - 1$ ;
+
+ Output: Action set $\phi$
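+ A single decision step of Algorithm 1's selection rule can be sketched as follows; the index values, pull histories, and the simplification to $\eta = 1$ pull per window are illustrative assumptions:

```python
# Sketch of one decision step of FaWT (Algorithm 1): arms whose fairness
# constraint would otherwise be violated are activated first, and the
# remaining budget is filled greedily by Whittle index. The index values
# and pull histories below are illustrative assumptions.

def fawt_step(indices, steps_since_pull, k, L):
    """indices[i]: Whittle index of arm i; steps_since_pull[i]: slots since
    arm i was last activated. Assumes eta = 1 pull per window for clarity."""
    N = len(indices)
    # Arms that must be pulled now to keep sum_{t=u}^{u+L} a_t^i >= eta
    forced = [i for i in range(N) if steps_since_pull[i] >= L]
    action_set = set(forced[:k])
    budget = k - len(action_set)
    # Fill the rest of the budget with the highest-index remaining arms
    rest = sorted((i for i in range(N) if i not in action_set),
                  key=lambda i: indices[i], reverse=True)
    action_set.update(rest[:budget])
    return sorted(action_set)

chosen = fawt_step(indices=[0.9, 0.1, 0.5, 0.7],
                   steps_since_pull=[0, 6, 2, 1], k=2, L=5)
# Arm 1 is forced in (starved for 6 >= L slots); arm 0 has the top index.
```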
232
+
233
+ Proof Sketch. Consider an arm $i$ that has not been activated for $L - 1$ time slots. In such a case, Algorithm 1 will select arm $i$ to activate in the next time step $t = L$ . Define the intervention effect of activating arm $i$ as
234
+
235
+ $$
236
+ {V}_{\lambda ,\infty }\left( {\omega ,a = 1}\right) - {V}_{\lambda ,\infty }\left( {\omega ,a = 0}\right)
237
+ $$
238
+
239
+ Following standard practice and for notational convenience, we do not index the intervention effect and value functions with $i$ . Due to the independent evolution of arms, moving the active action of arm $i$ does not result in a greater value function for other arms under the Whittle index algorithm; thus it suffices to consider only arm $i$ . The proof proceeds as follows:
240
+
241
+ (1) Optimality of Algorithm 1 requires that the intervention effect at time step $t = L - 1$ is smaller than the intervention effect at $t = L$ . Optimality can be established by requiring that the partial derivative of the intervention effect w.r.t. the time step $t$ is greater than 0.
242
+
243
+ (2) However, computing the partial derivative $\frac{\partial \left( {{V}_{\lambda ,\infty }\left( {\omega ,a = 1}\right) - {V}_{\lambda ,\infty }\left( {\omega ,a = 0}\right) }\right) }{\partial t}$ directly is difficult because the value function expression is complex. We use the chain rule to get:
244
+
245
+ $$
246
+ \frac{\partial \left( {{V}_{\lambda ,\infty }\left( {\omega ,a = 1}\right) - {V}_{\lambda ,\infty }\left( {\omega ,a = 0}\right) }\right) }{\partial \omega } \cdot \frac{\partial \omega }{\partial t}
247
+ $$
248
+
249
+ (3) The sign of the second term, $\frac{\partial \omega }{\partial t}$ , is determined by the belief state transition pattern described before this theorem. We then need to consider the sign of the first term, $\frac{\partial \left( {{V}_{\lambda ,\infty }\left( {\omega ,a = 1}\right) - {V}_{\lambda ,\infty }\left( {\omega ,a = 0}\right) }\right) }{\partial \omega }.$
250
+
251
+ (4) We can compute this by deriving a bound on ${V}_{\lambda ,\infty }\left( {\omega }_{1}\right) - {V}_{\lambda ,\infty }\left( {\omega }_{2}\right) ,\forall {\omega }_{1},{\omega }_{2}$ , as well as bounds on $\frac{\partial {V}_{\lambda ,\infty }\left( \omega \right) }{\partial \omega }$ . The detailed proof is in the appendix.
+
+ Figure 5: The action vector for RMAB is ${a}_{t}$ at time step $t$ . We move the action ${a}^{i}$ that satisfies the fairness constraint to an earlier slot, replacing the $k$ -th ranked action ${a}^{j}$ . Action ${a}^{l}$ is then added according to the index value at the end.
258
+
259
+ § 4.2 FINITE HORIZON
260
+
261
+ In this part, we demonstrate that the mechanism developed for handling fairness in the infinite horizon setting also applies to the finite horizon setting. In showing this, we address two key challenges:
262
+
263
+ 1. Computing the Whittle index under the finite horizon setting in a scalable manner.
264
+
265
+ 2. Showing that the Whittle index value decreases as the residual horizon decreases. This helps show that it is optimal to activate the fairness-violating arm at the last step at which a violation would occur, and not earlier.
266
+
267
+ It is costly to compute the index under the finite horizon setting: $O\left( {{\left| \mathcal{S}\right| }^{k}T}\right)$ time and space complexity [Hu and Frazier, 2017]. Therefore, we take advantage of the fact that the index value has an upper and a lower bound, and that it converges to the upper bound as the time horizon $T \rightarrow \infty$ . Specifically, we use an appropriate functional form to approximate the index value. To do this, we first show gradual index decay $\left( {{\lambda }_{T} > {\lambda }_{T - 1} > {\lambda }_{0}}\right)$ , improving on the index decay $\left( {{\lambda }_{T} > {\lambda }_{0}}\right)$ introduced in Mate et al. [2021a].
268
+
269
+ Theorem 2 For a finite horizon $T$ , the Whittle index ${\lambda }_{T}$ is the value that satisfies the equation ${V}_{{\lambda }_{T},T}\left( {\omega ,a = 0}\right) = {V}_{{\lambda }_{T},T}\left( {\omega ,a = 1}\right)$ for the belief state $\omega$ . Assuming indexability holds, the Whittle index decays as the horizon $T$ decreases: $\forall T > 1 : {\lambda }_{T + 1} > {\lambda }_{T} > {\lambda }_{0} = 0$ .
270
+
271
+ Proof Sketch. We can calculate ${\lambda }_{0}$ and ${\lambda }_{1}$ by solving the equations ${V}_{{\lambda }_{0},0}\left( {\omega ,a = 0}\right) = {V}_{{\lambda }_{0},0}\left( {\omega ,a = 1}\right)$ and ${V}_{{\lambda }_{1},1}\left( {\omega ,a = 0}\right) = {V}_{{\lambda }_{1},1}\left( {\omega ,a = 1}\right)$ according to Eq. 4 and Eq. 5. We can then derive ${\lambda }_{t} > {\lambda }_{t - 1}$ by showing $\frac{\partial {\lambda }_{t}}{\partial t} > 0$ for $\forall t > 1$ by induction. The detailed proof can be found in the appendix.
272
+
273
+ We can easily compute ${\lambda }_{0}$ and ${\lambda }_{1}$ ; we have $\forall T > 1$ : ${\lambda }_{T + 1} > {\lambda }_{T} > {\lambda }_{0} = 0$ according to Theorem 2, and $\mathop{\lim }\limits_{{T \rightarrow \infty }}{\lambda }_{T} = {TW}\left( \omega \right)$ , where ${TW}\left( \omega \right)$ is the Whittle index value for state $\omega$ under the infinite horizon. Hence, we can use a sigmoid curve to approximate the index value. One common example of a sigmoid function is the logistic function; this form is also used by Mate et al. [2021a]. Specifically, we let
274
+
275
+ $$
276
+ {W}_{T}\left( \omega \right) = \frac{A}{1 + {e}^{-{kT}}} + C, \tag{10}
277
+ $$
278
+
279
+ where $\frac{A}{2} + C$ and $A + C$ are the curve’s values at $T = 0$ and $T \rightarrow \infty$ respectively, and $k$ is the logistic growth rate or steepness of the curve. Recall that the Whittle index ${W}_{T}\left( \omega \right)$ of belief state $\omega$ is the smallest $\lambda$ s.t. it is optimal to make the arm passive in the current state. We have ${W}_{0}\left( \omega \right) = 0$ and ${W}_{1}\left( \omega \right) = \beta \left( {\omega \left( {{P}_{1,1}^{a} - {P}_{1,1}^{p}}\right) + \left( {1 - \omega }\right) \left( {{P}_{0,1}^{a} - {P}_{0,1}^{p}}\right) }\right)$ , and ${W}_{\infty }\left( \omega \right) = {TW}\left( \omega \right)$ . Solving these three constraints gives the three unknown parameters,
280
+
281
+ $$
282
+ C = - {TW}\left( \omega \right) ,A = {2TW}\left( \omega \right) ,
283
+ $$
284
+
285
+ $$
286
+ k = - \log \left( {\frac{{2TW}\left( \omega \right) }{\beta \left( {\omega \left( {{P}_{1,1}^{a} - {P}_{1,1}^{p}}\right) + \left( {1 - \omega }\right) \left( {{P}_{0,1}^{a} - {P}_{0,1}^{p}}\right) }\right) + {TW}\left( \omega \right) } - 1}\right)
287
+ $$
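+ The fitted approximation of Eq. 10 can be sketched as follows; the values of ${TW}\left( \omega \right)$ and ${W}_{1}\left( \omega \right)$ below are illustrative assumptions (monotone decay requires $0 < {W}_{1} < {TW}$ ):

```python
import math

# Sketch: the finite-horizon index approximation of Eq. 10, with A, C, k
# fitted to W_0 = 0, W_1, and W_inf = TW(omega). The values of TW and W_1
# are illustrative assumptions, not computed from a real model.

def fit_index_curve(TW, W1):
    C = -TW
    A = 2 * TW
    k = -math.log(2 * TW / (W1 + TW) - 1)
    return A, C, k

def W(T, A, C, k):
    return A / (1 + math.exp(-k * T)) + C  # Eq. 10

A, C, k = fit_index_curve(TW=0.4, W1=0.05)
```

+ By construction, $W\left( 0\right) = 0$ , $W\left( 1\right) = {W}_{1}$ , and $W\left( T\right) \rightarrow {TW}\left( \omega \right)$ as $T$ grows, matching the three constraints above.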
288
+
289
+ Algorithm 1 shows how to use ${W}_{T}\left( \omega \right)$ when considering the fairness constraint under the finite horizon setting. Next, we show that, as in the infinite horizon case, the value function and the Whittle index decay over time in the finite horizon case.
290
+
291
+ Theorem 3 Consider the finite horizon RMAB problem with fairness constraint. Algorithm 1 (activating arm $i$ at the end of the time period when its fairness constraint is violated) is optimal:
292
+
293
+ 1. When ${\omega }^{i} \leq {\omega }^{ * }$ (increasing belief process), if
294
+
295
+ $$
+ \left( {{P}_{1,1}^{i,p} - {P}_{0,1}^{i,p}}\right) \left( {{\Delta }_{4}\beta \mathop{\sum }\limits_{{t = 0}}^{{T - 2}}\left\lbrack {\beta }^{t}\right\rbrack + 1}\right) \leq \left( {{P}_{1,1}^{i,a} - {P}_{0,1}^{i,a}}\right) \mathop{\sum }\limits_{{t = 0}}^{{T - 2}}\left\lbrack {{\beta }^{t}{\left( {P}_{1,1}^{i,a} - {P}_{0,1}^{i,a}\right) }^{t}}\right\rbrack \tag{11}
+ $$
302
+
303
+ ${\Delta }_{4} = \min \left\{ {\left( {{P}_{1,1}^{i,p} - {P}_{0,1}^{i,p}}\right) ,\left( {{P}_{1,1}^{i,a} - {P}_{0,1}^{i,a}}\right) }\right\}$ , and $T$ is the residual horizon length.
304
+
305
+ 2. When ${\omega }^{i} \geq {\omega }^{ * }$ (non-increasing belief process), if
306
+
307
+ $$
+ \left( {{P}_{1,1}^{i,p} - {P}_{0,1}^{i,p}}\right) \left( {{\Delta }_{2}\beta \mathop{\sum }\limits_{{t = 0}}^{{T - 2}}\left\lbrack {{\beta }^{t}{\left( {P}_{1,1}^{i,a} - {P}_{0,1}^{i,a}\right) }^{t}}\right\rbrack + 1}\right) \geq \left( {{P}_{1,1}^{i,a} - {P}_{0,1}^{i,a}}\right) \mathop{\sum }\limits_{{t = 0}}^{{T - 2}}{\beta }^{t} \tag{12}
+ $$
314
+
315
+ $$
316
+ {\Delta }_{2} = \min \left\{ {\left( {{P}_{1,1}^{i,p} - {P}_{0,1}^{i,p}}\right) ,\left( {{P}_{1,1}^{i,a} - {P}_{0,1}^{i,a}}\right) }\right\} .
317
+ $$
318
+
319
+ Proof Sketch. The proof is similar to the infinite horizon case (detailed in Appendix).
320
+
321
+ § 4.3 UNCERTAINTY IN TRANSITION MATRIX
322
+
323
+ In most real-world applications [Biswas et al., 2021], there may not be adequate information about the state transitions. In such cases, the transition probabilities are unknown, so we cannot use the Whittle index approach directly. We provide a mechanism to apply Thompson sampling based learning for solving RMAB problems without prior knowledge, where it is feasible to gather learning experience. Thompson sampling [Thompson, 1933] is an algorithm for online decision problems and can be applied to MDPs [Gopalan and Mannor, 2015] as well as Partially Observable MDPs [Meshram et al., 2016]. In Thompson sampling, we initially place a Beta prior on each transition probability according to prior knowledge (if no prior knowledge is available, we use a $\operatorname{Beta}\left( {1,1}\right)$ prior, the uniform distribution on $\left( {0,1}\right)$ ). We choose the Beta distribution because it is a convenient and useful prior for Bernoulli observations [Agrawal and Goyal, 2012].
324
+
325
+ In our algorithm, referred to as FaWT-U and provided in Algorithm 2, at each time step we sample the transition parameters from the posterior distribution and then use the Whittle index algorithm to select the arms with the highest index values to play, provided the fairness constraint is not violated. Since playing the selected arms reveals their current states, we can use these observations to update the posterior distribution. The algorithm then samples from the updated posterior and repeats the procedure.
326
+
327
+ Algorithm 2: Fair Whittle Thresholding with Uncertainty in transition matrix (FaWT-U)
+
+ Input: Posterior Beta distribution over the transition matrix $\mathcal{P}$ ; fairness constraint parameters $\eta$ and $L$ ; set of belief states $\left\{ {{\omega }^{1},\ldots ,{\omega }^{N}}\right\}$ ; budget $k$
+
+ 1: for each arm $i$ in 1 to $N$ do
+ 2:     Sample the transition probability parameters independently from the posterior;
+ 3:     Compute the Whittle indices based on the sampled transition matrix and the belief state;
+ 4:     if the activation frequency $\eta$ for arm $i$ will not be satisfied at the end of the period of length $L$ then
+ 5:         Add arm $i$ to the action set $\phi$ ; set $k = k - 1$ ;
+ 6: Add the arms with the top $k$ index values into $\phi$ ;
+ 7: Play the selected arms and receive the observations;
+ 8: Update the posterior distribution;
+
+ Output: Action set $\phi$ and the updated posterior distribution over parameters
362
+
363
+ We employ the sampled transition probabilities and belief states $\left\{ {{\omega }^{1},\ldots ,{\omega }^{N}}\right\}$ , as well as the residual time horizon $T$ as the input to the Whittle index computation (Line 3 in Algorithm 2).
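+ The posterior maintenance underlying FaWT-U can be sketched as follows; the true transition probability and the number of observations are illustrative assumptions:

```python
import random

# Sketch of the posterior maintenance in FaWT-U: each unknown transition
# probability P_{s,1} keeps a Beta posterior that is sampled every round
# (Thompson sampling) and updated from observed transitions. The true
# probability and number of observations are illustrative assumptions.
random.seed(0)

class BetaPosterior:
    def __init__(self, alpha=1.0, beta=1.0):
        # Beta(1, 1) is the uniform prior on (0, 1)
        self.alpha, self.beta = alpha, beta

    def sample(self):
        return random.betavariate(self.alpha, self.beta)

    def update(self, went_to_good):
        if went_to_good:
            self.alpha += 1  # one more observed s -> 1 transition
        else:
            self.beta += 1   # one more observed s -> 0 transition

# Learn P^{a}_{1,1} for one arm whose true value is 0.8
post = BetaPosterior()
true_p = 0.8
for _ in range(500):
    post.update(random.random() < true_p)
estimate = post.alpha / (post.alpha + post.beta)  # posterior mean
```

+ Because the Beta distribution is conjugate to Bernoulli observations, each update is a constant-time increment of one of the two counts.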
364
+
365
+ § 4.4 UNKNOWN TRANSITION MATRIX
366
+
367
+ We now tackle the second challenge mentioned, in which the transition matrix is completely unknown. In this case, we can take advantage of model-free learning methods and avoid relying directly on the Whittle index policy.
368
+
369
+ Q-Learning, first introduced by Watkins [1989] as an early breakthrough in reinforcement learning, is commonly used to solve sequential decision-making problems. It has also been extensively used in RMAB problems [Fu et al., 2019, Avrachenkov and Borkar, 2020, Biswas et al., 2021] to estimate the expected Q-value, ${Q}^{ * }\left( {s,a,l}\right)$ , of taking action $a \in \{ 0,1\}$ after $l \in \{ 1,\ldots ,L\}$ time slots since the last observation $s \in \{ 0,1\}$ . The off-policy TD control update is defined as
370
+
371
+ $$
+ {Q}^{t + 1}\left( {{s}_{t},{a}_{t},{l}_{t}}\right) \leftarrow {Q}^{t}\left( {{s}_{t},{a}_{t},{l}_{t}}\right) + {\alpha }_{t}\left( {{s}_{t},{a}_{t},{l}_{t}}\right) \left\lbrack {{R}_{t + 1} + \gamma \mathop{\max }\limits_{a}{Q}^{t}\left( {{s}_{t + 1},a,{l}_{t + 1}}\right) - {Q}^{t}\left( {{s}_{t},{a}_{t},{l}_{t}}\right) }\right\rbrack
+ $$
378
+
379
+ (13)
380
+
381
+ where $\gamma$ is the discount rate and ${\alpha }_{t}\left( {{s}_{t},{a}_{t},{l}_{t}}\right) \in \left\lbrack {0,1}\right\rbrack$ is the learning rate parameter: a small ${\alpha }_{t}\left( {{s}_{t},{a}_{t},{l}_{t}}\right)$ results in slow learning, with no update when ${\alpha }_{t}\left( {{s}_{t},{a}_{t},{l}_{t}}\right) = 0$ , while a large ${\alpha }_{t}\left( {{s}_{t},{a}_{t},{l}_{t}}\right)$ makes the estimated Q-value rely heavily on the most recent return; when ${\alpha }_{t}\left( {{s}_{t},{a}_{t},{l}_{t}}\right) = 1$ , the Q-value is always the most recent return.
382
+
383
+ We now describe how to use the Whittle index-based Q-Learning mechanism to solve the RMAB problem with fairness constraints. We build on the work by Biswas et al. [2021] for fully observable settings. In addition to considering fairness constraints, our model can be viewed as an extension to the partially observable setting. Due to the fairness constraints, $l$ can be at most $L$ time steps, so the belief space is also bounded. We can therefore use a Q-Learning based approach to effectively compute the Whittle index values; this approach is summarized in Algorithm 3.
384
+
385
+ One typical form of ${\alpha }_{t}\left( {{s}_{t},{a}_{t},{l}_{t}}\right)$ is $1/z\left( {{s}_{t},{a}_{t},{l}_{t}}\right)$ , where $z\left( {{s}_{t},{a}_{t},{l}_{t}}\right) = \left( {\mathop{\sum }\limits_{{u = 0}}^{t}\mathbb{I}\left\{ {{s}_{u} = s,{a}_{u} = a,{l}_{u} = l}\right\} }\right) + 1$ counts the visits to each initially observed state $s \in \{ 0,1\}$ , action $a \in \{ 0,1\}$ , and time length since last activation $l \in \{ 1,\ldots ,L\}$ over the time slots $u$ from the beginning. With this mild form of ${\alpha }_{t}\left( {{s}_{t},{a}_{t},{l}_{t}}\right)$ , we are now able to build the theoretical support for the Q-Learning based Whittle index approach.
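A sketch of this visit-count based learning rate follows; taking $z$ as the running visit count plus one is one reading of the formula above.

```python
from collections import defaultdict

class VisitCountRate:
    """Learning rate alpha_t(s, a, l) = 1 / z(s, a, l), where z is the
    number of times (s, a, l) has been visited so far, plus one."""
    def __init__(self):
        self.visits = defaultdict(int)

    def __call__(self, s, a, l):
        self.visits[(s, a, l)] += 1          # count the current visit
        return 1.0 / (self.visits[(s, a, l)] + 1)

alpha = VisitCountRate()
print(alpha(1, 1, 2))  # first visit:  1 / (1 + 1) = 0.5
print(alpha(1, 1, 2))  # second visit: 1 / (2 + 1)
```

Note that this harmonic schedule satisfies the step-size conditions used in Theorem 5: the rates sum to infinity while their squares have a finite sum.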
386
+
387
+ Theorem 4 Selecting the highest-ranking arms according to ${Q}_{i}^{ * }\left( {s,a = 1,l}\right) - {Q}_{i}^{ * }\left( {s,a = 0,l}\right)$ until the budget constraint is met is equivalent to maximizing $\left\{ {\mathop{\sum }\limits_{{i = 1}}^{N}{Q}_{i}^{ * }\left( {s,a,l}\right) }\right\}$ over all possible action sets in $\{ 0,1{\} }^{N}$ such that $\mathop{\sum }\limits_{{i = 1}}^{N}{a}_{i} = k$ .
388
+
389
+ Proof Sketch. A proof based on work by Biswas et al. [2021] is given in Appendix D.3.
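The equivalence is easy to sanity-check by brute force on a small random instance. In this illustrative sketch, `q1[i]` and `q0[i]` stand for ${Q}_{i}^{ * }\left( {s,a = 1,l}\right)$ and ${Q}_{i}^{ * }\left( {s,a = 0,l}\right)$ at each arm's current $(s, l)$ (hypothetical values):

```python
import itertools
import random

def greedy_topk(q1, q0, k):
    """Rank arms by the index Q(s, 1, l) - Q(s, 0, l) and take the top k."""
    order = sorted(range(len(q1)), key=lambda i: q1[i] - q0[i], reverse=True)
    return set(order[:k])

def best_subset(q1, q0, k):
    """Exhaustive argmax of sum_i Q_i over all action sets with sum a_i = k."""
    n = len(q1)
    score = lambda sub: sum(q1[i] if i in sub else q0[i] for i in range(n))
    return set(max(itertools.combinations(range(n), k), key=score))

random.seed(0)
q1 = [random.random() for _ in range(6)]
q0 = [random.random() for _ in range(6)]
assert greedy_topk(q1, q0, 3) == best_subset(q1, q0, 3)
```

The check works because the total value decomposes as the sum of all passive values plus the gap $q1[i] - q0[i]$ of each selected arm, so picking the $k$ largest gaps is optimal.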
390
+
391
+ Algorithm 3: Fair Whittle Thresholding based Q-Learning (FaWT-Q)
+
+ Input: parameters $\epsilon$ and $k$ , learning rates ${\alpha }_{t}\left( {{s}_{t},{a}_{t},{l}_{t}}\right)$ , initial observed state set $\{ s{\} }^{N}$
+ for each arm $i$ in 1 to $N$ do
+     Initialize ${Q}_{i}\left( {s,a,l}\right) \leftarrow 0$ for each state $s \in \{ 0,1\}$ , each action $a \in \{ 0,1\}$ , and each time length $l \in \{ 1,\ldots ,L\}$ ;
+     For each $s \in \{ 0,1\}$ and $l \in \{ 1,\ldots ,L\}$ , initialize the Whittle index value set ${\lambda }_{i}\left( {s,l}\right) \leftarrow 0$ ;
+ for $t$ from 1 to $T$ do
+     for each arm $i$ in 1 to $N$ do
+         if the fairness constraint is violated then add arm $i$ to the action set $\phi$ and set $k = k - 1$ ;
+     With prob. $\epsilon$ add $k$ random arms to $\phi$ , and with prob. $1 - \epsilon$ add the arms with the top- $k$ ${\lambda }_{i}\left( {s,l}\right)$ values;
+     Activate the selected arms and receive rewards and observations;
+     for each arm $i$ in 1 to $N$ do
+         Update ${Q}_{i}^{t + 1}\left( {s,a,l}\right)$ according to Eq. 13;
+         if $i \in \phi$ then set $l = 1$ and update ${s}_{i}$ according to the received observation; else set $l = l + 1$ ;
+         Update the Q-Learning based Whittle index by ${\lambda }_{i}^{t + 1}\left( {s,l}\right) = {Q}_{i}\left( {s,a = 1,l}\right) - {Q}_{i}\left( {s,a = 0,l}\right)$
+
+ Output: Action set $\phi$
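A minimal sketch of one selection round of FaWT-Q is shown below. The representation of the fairness check via `last_played` is our own simplification: an arm whose last activation was $L$ or more slots ago is treated as violating the constraint.

```python
import random

def fawt_q_select(lam, last_played, t, L, k, eps, rng=random):
    """One round of epsilon-greedy arm selection with a fairness override.
    lam[i]: current Whittle index estimate of arm i at its observed (s, l)."""
    n = len(lam)
    # Fairness override: arms that must be played this round join phi first.
    phi = [i for i in range(n) if t - last_played[i] >= L]
    budget = k - len(phi)
    rest = [i for i in range(n) if i not in phi]
    if budget > 0:
        if rng.random() < eps:   # explore: random arms
            phi += rng.sample(rest, budget)
        else:                    # exploit: arms with the top Whittle indices
            phi += sorted(rest, key=lambda i: lam[i], reverse=True)[:budget]
    return phi

# eps = 0 makes the round deterministic: arm 3 violates fairness, and arm 1
# has the largest index among the remaining arms.
print(fawt_q_select([0.1, 0.9, 0.5, 0.3], [10, 10, 10, 2], t=10, L=5, k=2, eps=0.0))  # [3, 1]
```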
448
+
449
+ Theorem 5 (Stability and convergence) The proposed approach converges to the optimal Q-values with probability 1 under the following conditions:
+
+ 1. The state space and action space are finite;
+
+ 2. $\mathop{\sum }\limits_{{t = 1}}^{\infty }{\alpha }_{t}\left( {{s}_{t},{a}_{t},{l}_{t}}\right) = \infty$ and $\mathop{\sum }\limits_{{t = 1}}^{\infty }{\alpha }_{t}^{2}\left( {{s}_{t},{a}_{t},{l}_{t}}\right) < \infty$ .
454
+
455
+ Proof Sketch. Convergence is contingent on the particular sequence of episodes observed in the real process [Watkins and Dayan, 1992]. A detailed proof is given in Appendix D.4.
456
+
457
+ § 5 EXPERIMENT
458
+
459
+ To the best of our knowledge, we are the first to explore fairness constraints in RMAB; hence, the goal of this section is to evaluate the performance of our approach in comparison to the following baselines:
460
+
461
+ Random: At each round, the decision-maker randomly selects $k$ arms to activate.
462
+
463
+ Myopic: Select $k$ arms that maximize the expected reward at the immediate next round. A myopic policy ignores the impact of present actions on future rewards and instead focuses entirely on predicted immediate returns. Formally, this could be described as choosing the $k$ arms with the largest gap $\Delta {\omega }_{t} = \left( {{\omega }_{t + 1} \mid {a}_{t} = 1}\right) - \left( {{\omega }_{t + 1} \mid {a}_{t} = 0}\right)$ at time $t$ .
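A sketch of this selection rule under a hypothetical two-state model is given below, where `p_act[i]` and `p_pas[i]` are assumed per-arm probabilities $(P(1 \mid 0), P(1 \mid 1))$ of reaching the good state under the active and passive actions (these transition parameters are illustrative, not from the paper):

```python
def myopic_topk(beliefs, p_act, p_pas, k):
    """Pick the k arms with the largest one-step belief gain from acting."""
    def next_belief(w, p):
        # Belief of being in state 1 after one transition from belief w.
        return w * p[1] + (1 - w) * p[0]
    gaps = [next_belief(w, pa) - next_belief(w, pp)
            for w, pa, pp in zip(beliefs, p_act, p_pas)]
    return sorted(range(len(beliefs)), key=lambda i: gaps[i], reverse=True)[:k]

beliefs = [0.5, 0.5]
p_act = [(0.9, 0.9), (0.6, 0.6)]
p_pas = [(0.1, 0.1), (0.5, 0.5)]
print(myopic_topk(beliefs, p_act, p_pas, k=1))  # arm 0 has the larger gap
```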
464
+
465
466
+
467
+ Figure 7: Intervention benefit ratio of our approach and baseline approaches without penalty for the violation of the fairness constraint. We set $N = {100},k = {10},T = {1000},\eta = 2$ and $L = \{ {15},{30},{50}\}$ .
468
+
469
+ Constraint Myopic: Same as Myopic when there is no conflict with the fairness constraints; when a fairness constraint would be violated, it instead plays the arms that satisfy the constraint.
470
+
471
+ Oracle: The algorithm by Qian et al. [2016], which assumes that the states of all arms are fully observable and the transition probabilities are known, and does not consider fairness constraints.
472
+
473
+ To demonstrate the performance of our proposed methods, we test our algorithms on synthetic domains [Mate et al., 2020] and provide numerical results averaged over 50 runs.
474
+
475
+ Average reward value with penalty: In Figure 6, we show the average reward $\bar{R}$ per time step received by an arm over the time interval $T = {1000}$ for $N = {50},{100},{200},{500}$ and $k = {10}\% \times N$ , with the fairness constraint $L = {20}$ and $\eta = 2$ . We receive a reward of 1 if the state of an arm is $s = 1$ , and no reward otherwise. We impose a small penalty of -0.01 whenever the fairness constraint of an arm is not satisfied. The graph on the left shows the performance of the FaWT method when the transition matrix is assumed known. The middle graph shows the average reward obtained using the FaWT-U approach when the transition model is not fully available. The right graph illustrates the result of the FaWT-Q method when the transition model is unknown. As shown in the figure, our approaches consistently outperform the Random and Myopic baselines and, in addition to satisfying the fairness constraints, achieve near-optimal performance with only a small gap to the Oracle baseline. Note that the Myopic approach may fail in some cases (as shown in Mate et al. [2020]); here it performs worse than the Random approach.
476
+
477
+ No penalty for the violation of the fairness constraint: We also investigate the intervention benefit ratio, defined as $\frac{{\bar{R}}_{\text{ method }} - {\bar{R}}_{\text{ No intervention }}}{{\bar{R}}_{\text{ Oracle }} - {\bar{R}}_{\text{ No intervention }}} \times {100}\%$ , where ${\bar{R}}_{\text{ No intervention }}$ denotes the average reward without any intervention involved. Here, we do not apply penalties when the fairness constraint is not satisfied, as we want to evaluate the benefit provided by interventions under our fair policy and the policies of the other approaches. We provide the intervention benefit ratio for different values of $L$ for all approaches in Figure 7. Again, the left graph shows the result of the FaWT approach, the middle graph the result of the FaWT-U approach, and the right graph the result of the FaWT-Q method. Our proposed approaches achieve a better intervention benefit ratio than the baselines when $L$ is 30 and above. However, for $L = {15}$ , where the fairness constraint is strict (i.e., $\frac{k \times L}{\left( {\eta - 1}\right) \times N}$ is close to 1), there is a significant impact on solution quality. The performance of all our approaches improves as the fairness constraint's strength decreases (i.e., as $L$ increases). Overall, our proposed methods can handle various levels of fairness constraint strength without sacrificing significantly on solution quality.
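The ratio itself is straightforward to compute; the reward values below are hypothetical, for illustration only:

```python
def intervention_benefit_ratio(r_method, r_oracle, r_none):
    """Percentage of the Oracle's gain over no intervention that a method recovers."""
    return 100.0 * (r_method - r_none) / (r_oracle - r_none)

# E.g., a method earning 3 when Oracle earns 5 and no intervention earns 1
# recovers half of the achievable gain.
print(intervention_benefit_ratio(3, 5, 1))  # 50.0
```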
478
+
479
+ We also provide additional experimental results studying the influence of the intervention level and the fairness constraint's strength in the Appendix.
480
+
481
+ § 6 CONCLUSION
482
+
483
+ In this paper, we initiate the study of fairness constraints in Restless Multi-Arm Bandit problems. We define a fairness metric that encapsulates and generalizes existing fairness definitions employed for Multi-Arm Bandit problems. Contrary to expectations, only minor modifications to existing algorithms for RMAB problems are needed to handle fairness. We provide theoretical results showing that our methods handle fairness without sacrificing solution quality, and demonstrate this empirically on benchmark problems from the literature.
UAI/UAI 2022/UAI 2022 Conference/BKZIivLs9xc/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,381 @@
1
+ # Fast Inference and Transfer of Compositional Task Structures for Few-shot Task Generalization
2
+
3
+ ## Abstract
4
+
5
+ We tackle real-world problems with complex structures beyond pixel-based games or simulators. We formulate them as few-shot reinforcement learning problems where a task is characterized by a subtask graph that defines a set of subtasks and their dependencies, which are unknown to the agent. Different from previous meta-RL methods that try to directly infer an unstructured task embedding, our multi-task subtask graph inferencer (MTSGI) first infers the common high-level task structure in terms of the subtask graph from the training tasks, and uses it as a prior to improve the task inference in testing. Our experimental results on 2D grid-world and complex web navigation domains show that the proposed method can learn and leverage the common underlying structure of the tasks for faster adaptation to unseen tasks than various existing algorithms such as meta reinforcement learning, hierarchical reinforcement learning, and other heuristic agents.
6
+
7
+ ## 1 INTRODUCTION
8
+
9
+ Recently, deep reinforcement learning (RL) has shown outstanding performance on various domains such as video games [Mnih et al., 2015, Vinyals et al., 2019] and board games [Silver et al., 2017]. However, most of the successes of deep RL have been in single-task settings where the agent is allowed to interact with the environment for hundreds of millions of time steps. In numerous real-world scenarios, interacting with the environment is expensive or limited, and the agent is often presented with a novel task that was not seen during training. To overcome this limitation, many recent works have focused on scaling RL algorithms beyond the single-task setting. Recent works on multi-task RL aim to build a single, contextual policy that can solve multiple related tasks and generalize to unseen tasks. However, they require a certain form of task embedding as an extra input that often fully characterizes the given task [Oh et al., 2017, Andreas et al., 2017, Yu et al., 2017, Chaplot et al., 2018], or require a human demonstration [Huang et al., 2018], which are not readily available in practice. Meta RL [Finn et al., 2017, Duan et al., 2016] focuses on a more general setting where the agent should learn about the unseen task purely by interacting with the environment, without any additional information. However, such meta-RL algorithms either require a large amount of experience on a diverse set of tasks or are limited to a relatively small set of tasks with simple structure.
10
+
11
+ On the contrary, real-world problems require the agent to solve much more complex and compositional tasks without human supervision. Consider a web-navigating RL agent given the task of checking out the products from an online store as shown in Figure 1. The agent can complete the task by filling out the required web elements with the correct information such as shipping or payment information, navigating between the web pages, and placing the order. Note that the task consists of multiple subtasks and the subtasks have complex dependencies in the form of precondition; for instance, the agent may proceed to the payment web page (see Bottom, B) after all the required shipping information has been correctly filled in (see Bottom, A), or the credit_card_number field will appear after selecting the credit_card as a payment method (see Top, Middle in Figure 1). Learning to perform such a task can be quite challenging if the reward is given only after yielding meaningful outcomes (i.e., sparse reward task). This is the problem scope we focus on in this work: solving and generalizing to unseen compositional sparse-reward tasks with complex subtask dependencies without human supervision.
12
+
13
+ Recent works [Sohn et al., 2019, Xu et al., 2017, Huang et al., 2018, Liu et al., 2016, Ghazanfari and Taylor, 2017] tackled the compositional tasks by explicitly inferring the underlying task structure in a graph form. Specifically, the subtask graph inference (SGI) framework [Sohn et al., 2019] uses inductive logic programming (ILP) on the agent's own experience to infer the task structure in terms of subtask graph and learns a contextual policy to execute the inferred task in few-shot RL setting. However, it only meta-learned the adaptation policy that relates to the efficient exploration, while the task inference and execution policy learning were limited to a single task (i.e., both task inference and policy learning were done from scratch for each task), limiting its capability of handling large variance in the task structure. We claim that the inefficient task inference may hinder applying the SGI framework to a more complex domain such as web navigation [Shi et al., 2017, Liu et al., 2018] where a task may have a large number of subtasks and complex dependencies between them. We note that humans can navigate an unseen website by transferring the high-level process learned from previously seen websites.
14
+
15
+ ![01963936-f4dd-7ee7-bc23-e66bfe548b96_1_189_171_1422_492_0.jpg](images/01963936-f4dd-7ee7-bc23-e66bfe548b96_1_189_171_1422_492_0.jpg)
16
+
17
+ Figure 1: An illustration of the train (Top) and test task (Bottom) in our SymWoB domain. Some selected actionable web-elements (e.g., text fields and buttons) are magnified (dotted arrow and box) for readability. The agent's goal (green box) is to check out the products in the unseen test website by interacting with the web elements in a correct order. For example, in the train task, the agent should fill out all the text fields in (Top, A) before clicking the credit_card button to transition (gray arrow) to the next page. The high-level checkout processes in different websites have many commonalities, while certain details may differ. For example, in both train and test tasks, the agent should fill out the user information (Top and Bottom, A) before proceeding to the next page, and there exist similar elements (Top and Bottom, C). However, the details may differ; e.g., the train task (Top, A) has a single text field for the full name, while the test task (Bottom, A) has separate text fields for the first and last name, respectively. Also, only the test website (Bottom, B) requires shipping information, since the training website does not ship the product.
18
+
19
+ Inspired by this, we extend the SGI framework to a multitask subtask graph inferencer (MTSGI) that can generalize the previously learned task structure to the unseen task for faster adaptation and stronger generalization. Figure 2 outlines our method. MTSGI estimates the prior model of the subtask graphs from the training tasks. When an unseen task is presented, MTSGI samples the prior that best matches with the current task, and incorporates the sampled prior model to improve the latent subtask graph inference, which in turn improves the performance of the evaluation policy. We demonstrate results in the 2D grid-world domain and the web navigation domain that simulates the interaction with 15 actual websites. We compare our method with MSGI [Sohn et al., 2019] that learns the task hierarchy from scratch for each task, and two other baselines including hierarchical RL and a heuristic algorithm. We find that MTSGI significantly outperforms all other baselines, and the learned prior model enables more efficient task inference compared to MSGI.
20
+
21
+ ## 2 PRELIMINARIES
22
+
23
+ Few-shot Reinforcement Learning A task is defined by an MDP ${\mathcal{M}}_{G} = \left( {\mathcal{S},\mathcal{A},{\mathcal{P}}_{G},{\mathcal{R}}_{G}}\right)$ parameterized by a task parameter $G$ , with a set of states $\mathcal{S}$ , a set of actions $\mathcal{A}$ , transition dynamics ${\mathcal{P}}_{G}$ , and reward function ${\mathcal{R}}_{G}$ . The goal of $K$ -shot RL [Duan et al., 2016, Finn et al., 2017] is to efficiently solve a distribution of unseen test tasks ${\mathcal{M}}^{\text{test }}$ by learning and transferring the common knowledge from the training tasks ${\mathcal{M}}^{\text{train }}$ . It is assumed that the training and test tasks do not overlap (i.e., ${\mathcal{M}}^{\text{train }} \cap {\mathcal{M}}^{\text{test }} = \varnothing$ ) but share a certain commonality, such that the knowledge learned from the training tasks may be helpful for learning the test tasks. For each task ${\mathcal{M}}_{G}$ , the agent is given a budget of $K$ steps for interacting with the environment. During meta-training, the goal of the multi-task RL agent is to learn a prior (i.e., slow learning) over the training tasks ${\mathcal{M}}^{\text{train }}$ . The learned prior may then be exploited during meta-testing to enable faster adaptation on unseen test tasks ${\mathcal{M}}^{\text{test }}$ . For each task, the agent faces two phases: an adaptation phase, where the agent learns a task-specific behavior (i.e., fast learning) for $K$ environment steps, which often spans multiple episodes, and an evaluation phase, where the adapted behavior is evaluated. In the evaluation phase, the agent is not allowed to perform any form of learning, and the agent's performance on the task ${\mathcal{M}}_{G}$ is measured in terms of the return:
24
+
25
+ $$
26
+ {\mathcal{R}}_{{\mathcal{M}}_{G}}\left( {\pi }_{{\phi }_{K}}\right) = {\mathbb{E}}_{{\pi }_{{\phi }_{K}},{\mathcal{M}}_{G}}\left\lbrack {\mathop{\sum }\limits_{{t = 1}}^{H}{r}_{t}}\right\rbrack , \tag{1}
27
+ $$
28
+
29
+ ![01963936-f4dd-7ee7-bc23-e66bfe548b96_2_189_174_1421_446_0.jpg](images/01963936-f4dd-7ee7-bc23-e66bfe548b96_2_189_174_1421_446_0.jpg)
30
+
31
+ Figure 2: The overview of our algorithm and the example of agent's trajectory and the inferred subtask graph. In meta-train (Left), the adaptation policy ${\pi }^{\text{adapt }}$ interacts with the environment and collects the trajectory $\tau$ . The inductive logic programming (ILP) module takes as input the trajectory, and infers the task structure in terms of the subtask graph ${G}^{\tau }$ . The trajectory and the subtask graph are stored as a prior. In meta-testing (Right), the adaptation policy incorporates the prior trajectory ${\tau }^{\mathrm{p}}$ to efficiently explore the environment, and ILP module infers the subtask graph ${G}^{\tau }$ from the adaptation trajectory $\tau$ . Finally, the evaluation policy ${\pi }^{\text{eval }}$ takes as input the prior and inferred subtask graphs $\left( {{G}^{\mathrm{p}},{G}^{\tau }}\right)$ to solve the test task.
32
+
33
+ where ${\pi }_{{\phi }_{K}}$ is the policy after $K$ update steps of adaptation, $H$ is the horizon of evaluation phase, and ${r}_{t}$ is the reward at time $t$ in the evaluation phase.
34
+
35
+ ## 3 SUBTASK GRAPH INFERENCE PROBLEM
36
+
37
+ The subtask graph inference problem [Sohn et al., 2019] is a few-shot RL problem where a task is parameterized by a set of subtasks and their dependencies. Formally, a task consists of $N$ subtasks $\mathbf{\Phi } = \left\{ {{\Phi }^{1},\ldots ,{\Phi }^{N}}\right\}$ , and each subtask ${\Phi }^{i}$ is parameterized by a tuple $\left( {{\mathcal{S}}_{\text{comp }}{}^{i},{G}_{\mathbf{c}}^{i},{G}_{\mathbf{r}}^{i}}\right)$ . The goal state ${\mathcal{S}}_{\text{comp }}{}^{i} \subset \mathcal{S}$ and precondition ${G}_{\mathbf{c}}^{i} : \mathcal{S} \rightarrow \{ 0,1\}$ defines the condition that a subtask is completed: the current state should be contained in its goal states (i.e., ${\mathbf{s}}_{t} \in {\mathcal{S}}_{\text{comp }}^{i}$ ) and the precondition should be satisfied (i.e., ${G}_{\mathbf{c}}^{i}\left( {\mathbf{s}}_{t}\right) =$ 1). If the precondition is not satisfied (i.e., ${G}_{\mathbf{c}}^{i}\left( {\mathbf{s}}_{t}\right) = 0$ ), the subtask cannot be completed and the agent receives no reward even if the goal state is achieved. The subtask reward function ${G}_{\mathbf{r}}^{i}$ defines the amount of reward given to the agent when it completes the subtask $i : {r}_{t} \sim {G}_{\mathbf{r}}^{i}$ . We note that the subtasks $\left\{ {{\Phi }^{1},\ldots ,{\Phi }^{N}}\right\}$ are unknown to the agent. Thus, the agent should learn to infer the underlying task structure and complete the subtasks in an optimal order while satisfying the required preconditions.
38
+
39
+ State In the subtask graph inference problem, it is assumed that the state input provides the high-level status of the subtasks. Specifically, the state consists of the followings: ${\mathbf{s}}_{t} = \left( {{\mathrm{{obs}}}_{t},{\mathbf{x}}_{t},{\mathbf{e}}_{t},{\operatorname{step}}_{\mathrm{{epi}}, t},{\operatorname{step}}_{\mathrm{{phase}}, t}}\right)$ . The ${\mathrm{{obs}}}_{t} \in$ $\{ 0,1{\} }^{W \times H \times C}$ is a visual observation of the environment. The completion vector ${\mathbf{x}}_{t} \in \{ 0,1{\} }^{N}$ indicates whether each subtask is complete. The eligibility vector ${\mathbf{e}}_{t} \in \{ 0,1{\} }^{N}$ indicates whether each subtask is eligible (i.e., precondition is satisfied). Following the few-shot RL setting, the agent observes two scalar-valued time features: the remaining time steps until the episode termination ${\operatorname{step}}_{\mathrm{{epi}}, t} \in \mathbb{R}$ and the remaining time steps until the phase termination ${\operatorname{step}}_{\text{phase }, t} \in \mathbb{R}$ .
40
+
41
+ Options For each subtask ${\Phi }^{i}$ , the agent can learn an option ${\mathcal{O}}^{i}$ [Sutton et al., 1999] that reaches the goal state of the subtask. Following Sohn et al. [2019], such options are pre-learned individually by maximizing the goal-reaching reward: ${r}_{t} = \mathbb{I}\left( {{\mathbf{s}}_{t} \in {\mathcal{S}}_{\text{comp }}^{i}}\right)$ . At time step $t$ , we denote the option taken by the agent as ${\mathbf{o}}_{t}$ and the binary variable that indicates whether the episode has terminated as ${d}_{t}$ .
42
+
43
+ ## 4 METHOD
44
+
45
+ We propose a novel Multi-Task Subtask Graph Inference (MTSGI) framework that can perform an efficient inference of latent task embedding (i.e., subtask graph). The overall method is outlined in Figure 2. Specifically, in meta-training, MTSGI models the prior in terms of (1) adaptation trajectory $\tau$ and (2) subtask graph $G$ from the agent’s experience. In meta-testing, MTSGI samples (1) the prior trajectory ${\tau }^{\mathrm{p}}$ for more efficient exploration in adaptation and (2) the prior subtask graph ${G}^{\mathrm{p}}$ for more accurate task inference.
46
+
47
+ Algorithm 1 Meta-training: learning the prior
48
+
49
+ ---
50
+
51
+ Require: Adaptation policy ${\pi }^{\text{adapt }}$
52
+
53
+ Ensure: Prior set ${\mathcal{T}}^{\mathrm{p}}$
54
+
55
+ ${\mathcal{T}}^{\mathrm{p}} \leftarrow \varnothing$
56
+
57
+ for each task $\mathcal{M} \in {\mathcal{M}}^{\text{train }}$ do
58
+
59
+ Rollout adaptation policy:
60
+
61
+ $\tau = {\left\{ {\mathbf{s}}_{t},{\mathbf{o}}_{t},{r}_{t},{d}_{t}\right\} }_{t = 1}^{K} \sim {\pi }^{\text{adapt }}$ in task $\mathcal{M}$
62
+
63
+ Infer subtask graph ${G}^{\tau } = \arg \mathop{\max }\limits_{G}p\left( {\tau \mid G}\right)$
64
+
65
+ ${\pi }^{\text{eval }} = \operatorname{GRProp}\left( {G}^{\tau }\right)$
66
+
67
+ Evaluate the agent: ${\tau }^{\text{eval }} \sim {\pi }^{\text{eval }}$ in task $\mathcal{M}$
68
+
69
+ Update prior ${\mathcal{T}}^{\mathrm{p}} \leftarrow {\mathcal{T}}^{\mathrm{p}} \cup \left( {{G}^{\tau },\tau }\right)$
70
+
71
+ end for
72
+
73
+ ---
74
+
75
+ ### 4.1 MULTI-TASK ADAPTATION POLICY
76
+
77
+ The goal of the adaptation policy is to efficiently explore and gather information about the task. Intuitively, if the adaptation policy completes more diverse subtasks, it provides more data to the task inference module (ILP), which in turn can more accurately infer the task structure. To this end, we extend the upper confidence bound (UCB)-based adaptation policy proposed in Sohn et al. [2019] as follows:
78
+
79
+ $$
+ {\pi }^{\text{adapt }}\left( {o = {\mathcal{O}}^{i} \mid s}\right) \propto \exp \left( {{r}^{i} + \sqrt{\frac{2\log \left( {\mathop{\sum }\limits_{j}{n}^{j}}\right) }{{n}^{i}}}}\right) , \tag{2}
+ $$
84
+
85
+ where ${r}^{i}$ is the empirical mean of the reward received after executing subtask $i$ and ${n}^{i}$ is the number of times subtask $i$ has been executed within the current task. Note that the exploration parameters ${\left\{ {r}^{i},{n}^{i}\right\} }_{i = 1}^{N}$ can be computed from the agent's trajectory. In meta-train, the exploration parameters are initialized to zero when a new task is sampled. In meta-test, the exploration parameters are initialized with those of the sampled prior. Intuitively, this helps the agent visit novel states that were unseen during meta-training.
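A sketch of the resulting option distribution follows. We assume each subtask has been executed at least once so the counts are positive, and normalize the exponentiated UCB scores with a softmax, matching the proportionality in Eq. 2 (the standard UCB1 bonus form is our reading of the formula):

```python
import math

def adapt_policy_probs(r_mean, n):
    """Option probabilities of the UCB-based adaptation policy (Eq. 2 sketch).
    r_mean[i]: empirical mean reward after executing subtask i; n[i]: its count."""
    total = sum(n)
    scores = [r + math.sqrt(2 * math.log(total) / ni) for r, ni in zip(r_mean, n)]
    mx = max(scores)                      # subtract max for numerical stability
    weights = [math.exp(s - mx) for s in scores]
    z = sum(weights)
    return [w / z for w in weights]

# Two subtasks with equal reward: the rarely executed one gets more probability.
probs = adapt_policy_probs(r_mean=[0.5, 0.5], n=[1, 100])
print(probs[0] > probs[1])  # True
```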
86
+
87
+ ### 4.2 META-TRAIN: LEARNING THE PRIOR SUBTASK GRAPH
88
+
89
+ Let $\tau$ be an adaptation trajectory of the agent for $K$ steps. The goal is to infer the latent subtask graph $G$ for the given training task ${\mathcal{M}}_{G} \in {\mathcal{M}}^{\text{train }}$ , specified by preconditions ${G}_{\mathbf{c}}$ and subtask rewards ${G}_{\mathbf{r}}$ . We find the maximum-likelihood estimate (MLE) of $G = \left( {{G}_{\mathbf{c}},{G}_{\mathbf{r}}}\right)$ that maximizes the likelihood of the adaptation trajectory $\tau$ :
90
+
91
+ $$
92
+ {\widehat{G}}^{\mathrm{{MLE}}} = \underset{{G}_{\mathbf{c}},{G}_{\mathbf{r}}}{\arg \max }p\left( {\tau \mid {G}_{\mathbf{c}},{G}_{\mathbf{r}}}\right) . \tag{3}
93
+ $$
94
+
95
+ Following Sohn et al. [2019], we infer the precondition ${G}_{\mathbf{c}}$ and the subtask reward ${G}_{\mathbf{r}}$ as follows (See Appendix for the
96
+
97
+ Algorithm 2 Meta-testing: multi-task SGI
98
+
99
+ ---
100
+
101
+ Require: Adaptation policy ${\pi }^{\text{adapt }}$ , prior set ${\mathcal{T}}^{\mathrm{p}}$
102
+
103
+ for each task $\mathcal{M} \in {\mathcal{M}}^{\text{test }}$ do
104
+
105
+ Sample prior: $\left( {{G}^{\mathrm{p}},{\tau }^{\mathrm{p}}}\right) \sim p\left( {\mathcal{T}}^{\mathrm{p}}\right)$
106
+
107
+ Rollout adaptation policy:
108
+
109
+ $\tau = {\left\{ {\mathbf{s}}_{t},{\mathbf{o}}_{t},{r}_{t},{d}_{t}\right\} }_{t = 1}^{K} \sim {\pi }^{\text{adapt }}$ in task $\mathcal{M}$
110
+
111
+ Infer subtask graph ${G}^{\tau } = \arg \mathop{\max }\limits_{G}p\left( {\tau \mid G}\right)$
112
+
113
+ ${\pi }^{\text{eval }}\left( {\cdot \mid \tau ,{\tau }^{\mathrm{p}}}\right) \propto \operatorname{GRProp}{\left( \cdot \mid {G}^{\tau }\right) }^{\alpha }\operatorname{GRProp}{\left( \cdot \mid {G}^{\mathrm{p}}\right) }^{\left( 1 - \alpha \right) }$
114
+
115
+ Evaluate the agent: ${\tau }^{\text{eval }} \sim {\pi }^{\text{eval }}$ in task $\mathcal{M}$
116
+
117
+ end for
118
+
119
+ ---
120
+
121
+ detailed derivation):
122
+
123
+ $$
124
+ {\widehat{G}}_{\mathbf{c}}^{\mathrm{{MLE}}} = \underset{{G}_{\mathbf{c}}}{\arg \max }\mathop{\prod }\limits_{{t = 1}}^{H}p\left( {{\mathbf{e}}_{t} \mid {\mathbf{x}}_{t},{G}_{\mathbf{c}}}\right) , \tag{4}
125
+ $$
126
+
127
+ $$
128
+ {\widehat{G}}_{\mathbf{r}}^{\mathrm{{MLE}}} = \underset{{G}_{\mathbf{r}}}{\arg \max }\mathop{\prod }\limits_{{t = 1}}^{H}p\left( {{r}_{t} \mid {\mathbf{e}}_{t},{\mathbf{o}}_{t},{G}_{\mathbf{r}}}\right) . \tag{5}
129
+ $$
130
+
131
+ where ${\mathbf{e}}_{t}$ is the eligibility vector, ${\mathbf{x}}_{t}$ is the completion vector, ${\mathbf{o}}_{t}$ is the option taken, and ${r}_{t}$ is the reward at time step $t$ .
132
+
133
+ Precondition inference The problem in Equation (4) is known as the inductive logic programming (ILP) problem that finds a boolean function that satisfies all the indicator functions. Specifically, ${\left\{ {\mathbf{x}}_{t}\right\} }_{t = 1}^{H}$ forms binary vector inputs to programs, and ${\left\{ {e}_{t}^{i}\right\} }_{t = 1}^{H}$ forms Boolean-valued outputs of the $i$ -th program that predicts the eligibility of the $i$ -th subtask. We use the classification and regression tree (CART) to infer the precondition function ${f}_{{G}_{\mathbf{c}}} : \mathbf{x} \rightarrow \mathbf{e}$ for each subtask based on Gini impurity [Breiman, 1984]. Intuitively, the constructed decision tree is the simplest boolean function approximation for the given input-output pairs $\left\{ {{\mathbf{x}}_{t},{\mathbf{e}}_{t}}\right\}$ . The decision tree is converted to a logic expression (i.e., precondition) in sum-of-product (SOP) form to build the subtask graph.
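As a toy stand-in for the CART step, the following sketch recovers a precondition under the simplifying assumption that it is a pure conjunction of completions; the actual method handles general boolean expressions in SOP form:

```python
def infer_conjunction(pairs):
    """Infer which subtask completions a precondition requires from
    (completion vector x, eligibility bit e) observations, assuming the
    precondition is a conjunction x[j1] AND x[j2] AND ..."""
    positives = [x for x, e in pairs if e]
    n = len(pairs[0][0])
    # A subtask j can only be required if it is complete in every eligible state.
    required = [j for j in range(n) if all(x[j] for x in positives)]
    # Verify the hypothesis against all observations (including negatives).
    for x, e in pairs:
        if all(x[j] for j in required) != bool(e):
            raise ValueError("observations are not explained by a conjunction")
    return required

# Eligibility here requires subtasks 0 and 2 to be complete.
pairs = [((1, 0, 1), 1), ((1, 1, 1), 1), ((0, 1, 1), 0), ((1, 0, 0), 0)]
print(infer_conjunction(pairs))  # [0, 2]
```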
134
+
135
+ Subtask reward inference To infer the subtask reward ${\widehat{G}}_{\mathbf{r}}^{\text{MLE }}$ in Equation (5), we model the reward for $i$ -th subtask as a Gaussian distribution: ${G}_{\mathbf{r}}^{i} \sim \mathcal{N}\left( {{\widehat{\mu }}^{i},{\widehat{\sigma }}^{i}}\right)$ . Then, the MLE of subtask reward is given as the empirical mean and variance of the rewards received after taking the eligible option ${\mathcal{O}}^{i}$ in adaptation phase:
136
+
137
+ $$
138
+ {\widehat{\mu }}_{\mathrm{{MLE}}}^{i} = \mathbb{E}\left\lbrack {{r}_{t} \mid {\mathbf{o}}_{t} = {\mathcal{O}}^{i},{\mathbf{e}}_{t}^{i} = 1}\right\rbrack , \tag{6}
139
+ $$
140
+
141
+ $$
142
+ {\widehat{{\sigma }^{2}}}_{\mathrm{{MLE}}}^{i} = \mathbb{E}\left\lbrack {{\left( {r}_{t} - {\widehat{\mu }}_{\mathrm{{MLE}}}^{i}\right) }^{2} \mid {\mathbf{o}}_{t} = {\mathcal{O}}^{i},{\mathbf{e}}_{t}^{i} = 1}\right\rbrack , \tag{7}
143
+ $$
144
+
145
+ where ${\mathcal{O}}^{i}$ is the option corresponding to the $i$ -th subtask. Algorithm 1 outlines the meta-training process.
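These estimates are just the empirical mean and (biased) variance over the eligible executions of option ${\mathcal{O}}^{i}$ ; a sketch with a hypothetical trajectory encoding:

```python
def subtask_reward_mle(trajectory, i):
    """MLE of the i-th subtask reward (Eqs. 6-7 sketch).
    trajectory: list of (option_index, eligible_i, reward) tuples."""
    rewards = [r for o, eligible, r in trajectory if o == i and eligible]
    mu = sum(rewards) / len(rewards)
    var = sum((r - mu) ** 2 for r in rewards) / len(rewards)
    return mu, var

# Option 0 was eligible twice (rewards 1 and 3); the ineligible step is ignored.
traj = [(0, True, 1.0), (0, True, 3.0), (1, True, 5.0), (0, False, 9.0)]
print(subtask_reward_mle(traj, 0))  # (2.0, 1.0)
```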
146
+
147
+ ### 4.3 EVALUATION: GRAPH-REWARD PROPAGATION POLICY
148
+
149
+ In both meta-training and meta-testing, the agent's adapted behavior is evaluated during the test phase. Following Sohn et al. [2019], we adopted the graph reward propagation (GRProp) policy as an evaluation policy ${\pi }^{\text{eval }}$ that takes as input the inferred subtask graph $\widehat{G}$ and outputs the subtasks to execute to maximize the return. Intuitively, the GRProp policy approximates a subtask graph to a differentiable form such that we can compute the gradient of return with respect to the completion vector to measure how much each subtask is likely to increase the return. Due to space limitations, we give a detail of the GRProp policy in Appendix. The overall meta-training process is summarized in Appendix.
150
+
151
+ ### 4.4 META-TESTING: MULTI-TASK TASK INFERENCE
152
+
153
+ Prior sampling In meta-testing, MTSGI first chooses the prior task that most resembles the given evaluation task. Specifically, we define the pair-wise similarity between a prior task ${\mathcal{M}}_{G}^{\text{prior }}$ and the evaluation task ${\mathcal{M}}_{G}$ as follows:
154
+
155
+ $$
+ \operatorname{sim}\left( {{\mathcal{M}}_{G},{\mathcal{M}}_{G}^{\text{prior }}}\right) = {F}_{\beta }\left( {\mathbf{\Phi },{\mathbf{\Phi }}^{\text{prior }}}\right) + \kappa R\left( {\tau }^{\text{prior }}\right) , \tag{8}
+ $$
160
+
161
+ where ${F}_{\beta }$ is the F-score with weight parameter $\beta$, $\mathbf{\Phi }$ is the subtask set of ${\mathcal{M}}_{G}$, ${\mathbf{\Phi }}^{\text{prior }}$ is the subtask set of ${\mathcal{M}}_{G}^{\text{prior }}$, $R\left( {\tau }^{\text{prior }}\right)$ is the agent’s empirical performance on the prior task ${\mathcal{M}}_{G}^{\text{prior }}$, and $\kappa$ is a scalar-valued weight, for which we used $\kappa = {1.0}$ in our experiments. ${F}_{\beta }$ measures how many subtasks overlap between the current and prior tasks in terms of precision and recall as follows:
162
+
163
+ $$
164
+ {F}_{\beta } = \left( {1 + {\beta }^{2}}\right) \cdot \frac{\text{ precision } \cdot \text{ recall }}{\left( {{\beta }^{2} \cdot \text{ precision }}\right) + \text{ recall }}, \tag{9}
165
+ $$
166
+
167
+ $$
168
+ \text{Precision} = \left| {\mathbf{\Phi } \cap {\mathbf{\Phi }}^{\text{prior }}}\right| /\left| {\mathbf{\Phi }}^{\text{prior }}\right| \text{,} \tag{10}
169
+ $$
170
+
171
+ $$
172
+ \text{Recall} = \left| {\mathbf{\Phi } \cap {\mathbf{\Phi }}^{\text{prior }}}\right| /\left| \mathbf{\Phi }\right| \text{.} \tag{11}
173
+ $$
174
+
175
+ We used $\beta = {10}$ to assign a higher weight to the current task (i.e., recall) than the prior task (i.e., precision).
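The similarity computation in Equations (8)-(11) reduces to a set-overlap F-score plus a return bonus; a minimal sketch (function and argument names are ours):

```python
def task_similarity(phi, phi_prior, prior_return, beta=10.0, kappa=1.0):
    """Sketch of Eqs. (8)-(11): F_beta over the subtask-set overlap plus the
    agent's empirical return on the prior task, weighted by kappa."""
    phi, phi_prior = set(phi), set(phi_prior)
    overlap = len(phi & phi_prior)
    if overlap == 0:
        return kappa * prior_return  # no shared subtasks -> F-score is 0
    precision = overlap / len(phi_prior)   # Eq. (10)
    recall = overlap / len(phi)            # Eq. (11)
    f_beta = (1 + beta**2) * precision * recall / (beta**2 * precision + recall)
    return f_beta + kappa * prior_return   # Eq. (8)
```

With a large $\beta$ (e.g., $\beta = 10$), the harmonic mean is dominated by recall, so priors covering most of the current task's subtasks are preferred even if they contain extra subtasks.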
176
+
177
+ Multi-task subtask graph inference Let $\tau$ be the adaptation trajectory, and ${\tau }^{\mathrm{p}}$ be the sampled prior adaptation trajectory. Then, we model our evaluation policy as follows:
178
+
179
+ $$
180
+ \pi \left( {o \mid s,\tau ,{\tau }^{\mathrm{p}}}\right) \simeq \pi {\left( o \mid s,{G}^{\tau }\right) }^{\alpha }\pi {\left( o \mid s,{G}^{\mathrm{p}}\right) }^{\left( 1 - \alpha \right) }. \tag{12}
181
+ $$
182
+
183
+ Due to limited space, we include the detailed derivation of Equation (12) in the Appendix. Finally, we deploy the GRProp policy as a contextual policy:
184
+
185
+ $$
+ {\pi }^{\text{eval }}\left( {\cdot \mid \tau ,{\tau }^{\mathrm{p}}}\right) = \operatorname{GRProp}{\left( \cdot \mid {G}^{\tau }\right) }^{\alpha }\operatorname{GRProp}{\left( \cdot \mid {G}^{\mathrm{p}}\right) }^{\left( 1 - \alpha \right) }. \tag{13}
+ $$
+
+ Note that Equation (13) is the weighted sum of the logits of the two GRProp policies induced by the prior experience ${\tau }^{\mathrm{p}}$ and the current experience $\tau$ . We claim that this form of ensembling induces positive transfer in compositional tasks. Intuitively, ensembling GRProp policies amounts to taking a union of preconditions, since GRProp assigns a positive logit to task-relevant subtasks and a non-positive logit to the others. As motivated in the Introduction, related tasks often share the task-relevant preconditions; thus, taking the union of task-relevant preconditions is likely to yield positive transfer and improve generalization. The pseudo-code of the multi-task subtask graph inference process is summarized in Algorithm 2.
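Since the two GRProp policies are combined multiplicatively, Equation (13) amounts to a softmax over the $\alpha$-weighted sum of their logits. A minimal sketch of this ensembling step, assuming the two policies are given as logit vectors:

```python
import numpy as np

def ensemble_policy(logits_current, logits_prior, alpha=0.5):
    """Sketch of Eq. (13): pi proportional to
    pi_current^alpha * pi_prior^(1 - alpha), i.e., a softmax over the
    alpha-weighted sum of the two policies' logits."""
    mixed = (alpha * np.asarray(logits_current)
             + (1 - alpha) * np.asarray(logits_prior))
    exp = np.exp(mixed - mixed.max())  # numerically stable softmax
    return exp / exp.sum()
```

For instance, if the current-task policy favors subtask 0 and the prior policy favors subtask 1, $\alpha = 0.5$ yields a uniform mixture, while $\alpha = 1$ recovers the current-task policy.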
190
+
191
+ ## 5 RELATED WORK
192
+
193
+ Web navigating RL agent Previous work introduced MiniWoB [Shi et al., 2017] and MiniWoB++ [Liu et al., 2018] benchmarks that are manually curated sets of simulated toy environments for the web navigation problem. They formulated the problem as acting on a page represented as a Document Object Model (DOM), a hierarchy of objects in the page. The agent is trained with human demonstrations and online episodes in an RL loop. Jia et al. [2019] proposed a graph neural network based DOM encoder and a multi-task formulation of the problem similar to this work. Gur et al. [2018] introduced a manually-designed curriculum learning method and an LSTM based DOM encoder. DOM level representations of web pages pose a significant sim-to-real gap as simulated websites are considerably smaller (100s of nodes) compared to noisy real websites (1000s of nodes). As a result, these models are trained and evaluated on the same simulated environments which are difficult to deploy on real websites. Our work formulates the problem as abstract web navigation on real websites where the objective is to learn a latent subtask dependency graph similar to a sitemap of websites. We propose a multi-task training objective that generalizes from a fixed set of real websites to unseen websites without any demonstration, illustrating an agent capable of navigating real websites for the first time.
194
+
195
+ Meta-reinforcement learning To tackle the few-shot RL problem, researchers have proposed two broad categories of meta-RL approaches: RNN- and gradient-based methods. The RNN-based meta-RL methods [Duan et al., 2016, Wang et al., 2016, Hochreiter et al., 2001] encode the common knowledge of the task into the hidden states and the parameters of the RNN. The gradient-based meta-RL methods [Finn et al., 2017, Nichol et al., 2018, Gupta et al., 2018, Finn et al., 2018, Kim et al., 2018] encode the task embedding in terms of the initial policy parameter for fast adaptation through meta gradient. Existing meta-RL approaches, however, often require a large amount of environment interaction due to the long-horizon nature of the few-shot RL tasks. Our work instead explicitly infers the underlying task parameter in terms of subtask graph, which can be efficiently inferred using the inductive logic programming (ILP) method and be transferred across different, unseen tasks.
196
+
197
+ More Related Works Please refer to the Appendix for further discussions about other related works.
198
+
199
+ ## 6 EXPERIMENT
200
+
201
+ ### 6.1 DOMAINS
202
+
203
+ Mining Mining [Sohn et al., 2018] is a 2D grid-world domain inspired by the Minecraft game, where the agent receives a reward by picking up raw materials in the world or crafting items with the raw materials. The subtask dependencies in the Mining domain come from the crafting recipes implemented in the game. Following Sohn et al. [2018], we used the pre-generated training/testing task splits generated with four different random seeds. Each split consists of 3200 training tasks and 440 testing tasks for meta-training and meta-testing, respectively. We report the performance averaged over the four task splits.
204
+
205
+ SymWoB We implement a symbolic version of the checkout process of 15 real-world websites, such as Amazon, BestBuy, and Walmart.
206
+
207
+ Subtask and option policy. Each actionable web element (e.g., text field, button, drop-down list, and hyperlink) is considered a subtask. We assume the agent has pre-learned the option policies that correctly interact with each element (e.g., clicking the button or filling out the text field). Thus, the agent only needs to learn a policy over the options.
208
+
209
+ Completion and eligibility. For each subtask, the completion and eligibility are determined based on the status of the corresponding web element. For example, the subtask of a text field is completed if the text field is filled with the correct information, and the subtask of a confirm_credit_info button is eligible if all the required subtasks (i.e., filling out credit card information) on the webpage are completed. Executing an option will complete the corresponding subtask only if the subtask is eligible.
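The eligibility rule described above can be sketched as follows; this is a toy illustration with hypothetical subtask names, not the benchmark's actual implementation:

```python
def eligible(subtask, completion, preconditions):
    """A subtask (e.g., a confirm_credit_info button) is eligible only once
    all of its precondition subtasks have been completed.
    `preconditions[s]` is the set of subtasks required by `s`."""
    return all(completion[p] for p in preconditions[subtask])

def execute(subtask, completion, preconditions):
    # executing an option completes the subtask only if it is eligible
    if eligible(subtask, completion, preconditions):
        completion[subtask] = True
    return completion
```

For example, executing a confirm button before the required text fields are filled in leaves its completion unchanged; once all preconditions are complete, the same option succeeds.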
210
+
211
+ Reward function and episode termination. The agent may receive a non-zero reward only at the end of the episode (i.e., a sparse-reward task). When the episode terminates because the time budget is exhausted, the agent does not receive any reward. Otherwise, the following two types of subtasks terminate the episode and give a non-zero reward upon completion:
212
+
213
+ - Goal subtask refers to the button that completes the order (See the green boxes in Figure 1). Completing this subtask gives the agent a +5 reward, and the episode is terminated.
214
+
215
+ - Distractor subtask does not contribute to solving the given task but terminates the episode with a -1 reward. It models the web elements that lead to external web pages such as Contact_Us button in Figure 1.
216
+
217
+ Transition dynamics. The transition dynamics follow the dynamics of the actual website. Each website consists of multiple web pages. The agent may only execute the subtasks that are currently visible (i.e., on the current web page) and can navigate to the next web page only after filling out all the required fields and clicking the continue button. The goal subtask is present in the last web page; thus, the agent must learn to navigate through the web pages to solve the task.
218
+
219
+ For more details about each task, please refer to Appendix.
220
+
221
+ ### 6.2 AGENTS
222
+
223
+ We compared the following algorithms in the experiment.
224
+
225
+ - MTSGI (Ours): our multi-task SGI agent
226
+
227
+ - MSGI [Sohn et al., 2019]: SGI agent without multi-task learning
228
+
229
+ - HRL: an Option [Sutton et al., 1999]-based proximal policy optimization (PPO) [Schulman et al., 2017] agent with a gated recurrent unit (GRU)
230
+
231
+ - Random: a heuristic policy that uniformly randomly executes an eligible subtask
232
+
233
+ More details on the architectures and the hyperparameters can be found in Appendix.
234
+
235
+ Meta-training In SymWoB, for each task chosen for meta-testing, we randomly sampled ${N}_{\text{train }}$ tasks among the remaining 14 tasks and used them for meta-training. We used ${N}_{\text{train }} = 1$ in the experiment (see Figure 8 for the impact of the choice of ${N}_{\text{train }}$ ). For example, we meta-trained our MTSGI on Amazon and meta-tested on Expedia. For Mining, we used the train/test task split provided in the environment. The RL agents (e.g., HRL) were individually trained on each test task; the policy was initialized whenever a new task was sampled and trained during the adaptation phase. All the experiments were repeated with four random seeds, where different training tasks were sampled for each seed.
236
+
237
+ ### 6.3 RESULT: FEW-SHOT GENERALIZATION PERFORMANCE
238
+
239
+ Figure 3 and Figure 4 show the few-shot generalization performance of the compared methods on SymWoB and Mining. In Figure 3, MTSGI achieves more than a 75% zero-shot success rate (i.e., the success rate at x-axis $= 0$ ) on all five tasks, which is significantly higher than the zero-shot performance of MSGI. This indicates that the prior learned from the training tasks significantly improves the subtask graph inference, which in turn improves the multi-task evaluation policy. Moreover, MTSGI can learn a near-optimal policy on all the tasks after only 1,000 steps of environment interaction, demonstrating that the proposed multi-task learning scheme enables fast adaptation. Even though the MSGI agent learns each task from scratch, it still outperforms the HRL and Random agents. This shows that explicitly inferring the underlying task structure and executing the predicted subtask graph is significantly more effective than learning the policy from the reward signal (i.e., HRL) on complex compositional tasks. Given the pre-learned options, the HRL agent can slightly improve the success rate during adaptation via PPO updates. However, training the policy only from the sparse reward requires a large number of interactions, especially for tasks with many distractors (e.g., Expedia and Walgreens).
240
+
241
+ ![01963936-f4dd-7ee7-bc23-e66bfe548b96_6_195_179_1399_275_0.jpg](images/01963936-f4dd-7ee7-bc23-e66bfe548b96_6_195_179_1399_275_0.jpg)
242
+
243
+ Figure 3: The success rate (y-axis) of the compared methods in the test phase in terms of the environment step during the adaptation phase (x-axis) on SymWoB domain. See Appendix for the results on other tasks.
244
+
245
+ ![01963936-f4dd-7ee7-bc23-e66bfe548b96_6_183_563_685_250_0.jpg](images/01963936-f4dd-7ee7-bc23-e66bfe548b96_6_183_563_685_250_0.jpg)
246
+
247
+ Figure 4: The performance of the compared methods in terms of the adaptation steps averaged over all the tasks in SymWoB (Left) and Mining (Right) domains.
248
+
249
+ ![01963936-f4dd-7ee7-bc23-e66bfe548b96_6_175_934_692_270_0.jpg](images/01963936-f4dd-7ee7-bc23-e66bfe548b96_6_175_934_692_270_0.jpg)
250
+
251
+ Figure 5: The precision and recall of the subtask graphs inferred by MTSGI and MSGI on SymWoB and Mining.
252
+
253
+ ### 6.4 ANALYSIS ON THE INFERRED SUBTASK GRAPH
254
+
255
+ We compare the inferred subtask graph with the ground-truth subtask graph. Figure 6 shows the subtask graph inferred by MTSGI in Walmart. We can see that MTSGI can accurately infer the subtask graph; the inferred subtask graph is missing only two preconditions (shown in red) of the Click_Continue_Payment subtask. We note that such a small error in the subtask graph has a negligible effect, as shown in Figure 3: i.e., MTSGI achieves near-optimal performance on Walmart after 1,000 steps of adaptation. Figure 5 measures the precision and recall of the inferred preconditions (i.e., the edges of the graph). First, both MTSGI and MSGI achieve high precision and recall after only a few hundred steps of adaptation. Also, MTSGI outperforms MSGI in the early stage of adaptation. This clearly demonstrates that MTSGI can perform more accurate task inference due to the prior learned from the training tasks.
256
+
257
+ ### 6.5 ABLATION STUDY: EFFECT OF EXPLORATION STRATEGY
258
+
259
+ In this section, we investigate the effect of various exploration strategies on the performance of MTSGI. We compared the following three adaptation policies:
260
+
261
+ - Random: A policy that uniformly randomly executes any eligible subtask.
262
+
263
+ - UCB: The UCB policy defined in Equation (2) that aims to execute the novel subtask. The exploration parameters are initialized to zero when a new task is sampled.
264
+
265
+ - MTUCB (Ours): Our multi-task extension of UCB policy. When a new task is sampled, the exploration parameter is initialized with those of the sampled prior.
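The difference between UCB and MTUCB can be sketched as follows. This uses a simplified count-based bonus (the paper's Equation (2) may differ in its exact form); MTUCB simply warm-starts the visitation counts from the sampled prior task:

```python
import math

def ucb_choice(eligible_subtasks, counts, total, c=1.0):
    """Simplified count-based UCB rule: among the currently eligible
    subtasks, prefer the one that has been executed least often."""
    def bonus(s):
        return c * math.sqrt(math.log(total + 1) / (counts.get(s, 0) + 1))
    return max(eligible_subtasks, key=bonus)

def mtucb_init(prior_counts):
    # MTUCB: initialize exploration counts from the sampled prior task,
    # so subtasks under-explored during meta-training are explored first
    return dict(prior_counts)
```

In contrast, plain UCB resets all counts to zero for each new task, discarding what was learned about which subtasks still need exploration.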
266
+
267
+ Figure 7 summarizes the results on the SymWoB and Mining domains. Using a more sophisticated exploration policy such as MTSGI+UCB or MTSGI+MTUCB improved the performance over MTSGI+Random, which was also observed in Sohn et al. [2019]. This is because better exploration helps the adaptation policy collect more data for logic induction by executing more diverse subtasks. In turn, this results in more accurate subtask graph inference and better performance. Also, MTSGI+MTUCB outperforms MTSGI+UCB on both domains. This indicates that transferring the exploration parameters makes the agent's exploration more efficient in meta-testing. Intuitively, the transferred exploration counts inform the agent which subtasks were under-explored during meta-training, so that the agent can focus more on exploring those in meta-testing.
268
+
269
+ ![01963936-f4dd-7ee7-bc23-e66bfe548b96_7_251_172_1298_345_0.jpg](images/01963936-f4dd-7ee7-bc23-e66bfe548b96_7_251_172_1298_345_0.jpg)
270
+
271
+ Figure 6: The visualization of the subtask graph inferred by our MTSGI after 1,000 steps of environment interaction on Walmart domain. Compared to the ground-truth subtask graph (not available to the agent), there was no error in the nodes and only two missing edges (in red). See Appendix for the progression of the inferred subtask graph with varying adaptation steps.
272
+
273
+ ![01963936-f4dd-7ee7-bc23-e66bfe548b96_7_186_690_684_260_0.jpg](images/01963936-f4dd-7ee7-bc23-e66bfe548b96_7_186_690_684_260_0.jpg)
274
+
275
+ Figure 7: Comparison of different exploration strategies for MTSGI used in adaptation phase for SymWoB and Mining.
276
+
277
+ ![01963936-f4dd-7ee7-bc23-e66bfe548b96_7_183_1033_688_251_0.jpg](images/01963936-f4dd-7ee7-bc23-e66bfe548b96_7_183_1033_688_251_0.jpg)
278
+
279
+ Figure 8: Comparison of different number of priors for MTSGI on SymWoB and Mining.
280
+
281
+ ### 6.6 ABLATION STUDY: EFFECT OF THE PRIOR SET SIZE
282
+
283
+ MTSGI learns the prior from the training tasks. We investigated how many training tasks are required for MTSGI to learn a good prior for transfer learning. Figure 8 compares the performance of MTSGI with a varying number of training tasks: 1, 4, and 14 tasks for SymWoB and 10, 100, 500, and 3200 tasks for Mining. The training tasks are randomly subsampled from the entire training set. The result shows that training on a larger number of tasks generally improves the performance. Mining generally requires more training tasks than SymWoB because the agent is required to solve 440 different tasks in Mining, while SymWoB was evaluated on 15 tasks; the agent must capture a wider range of the task distribution in Mining than in SymWoB. Also, we note that MTSGI can still adapt much more efficiently than all other baseline methods even when only a small number of training tasks are available (e.g., one task for SymWoB and ten tasks for Mining).
284
+
285
+ ## 7 CONCLUSION
286
+
287
+ We introduce a multi-task RL extension of the subtask graph inference framework that can quickly adapt to unseen tasks by modeling the prior of the subtask graph from the training tasks and transferring it to the test tasks. The empirical results demonstrate that our MTSGI achieves strong zero-shot and few-shot generalization performance on 2D grid-world and complex web navigation domains by transferring the common knowledge learned in the training tasks, in the form of a subtask graph, to the unseen ones.
288
+
289
+ In this work, we have assumed that the subtasks and the corresponding options are pre-learned and that the environment provides a high-level status of each subtask (e.g., whether the web element is filled in with the correct information). In future work, our approach may be extended to a more general setting where the relevant subtask structure is fully learned from (visual) observations, and the corresponding options are autonomously discovered.
290
+
291
+ ## References
292
+
293
+ Jacob Andreas, Dan Klein, and Sergey Levine. Modular multitask reinforcement learning with policy sketches. In ICML, 2017.
294
+
295
+ Leo Breiman. Classification and regression trees. Routledge, 1984.
296
+
297
+ Devendra Singh Chaplot, Kanthashree Mysore Sathyendra, Rama Kumar Pasumarthi, Dheeraj Rajagopal, and Ruslan Salakhutdinov. Gated-attention architectures for task-oriented language grounding. In AAAI, 2018.
298
+
299
+ Zhao Chen, Vijay Badrinarayanan, Chen-Yu Lee, and Andrew Rabinovich. Gradnorm: Gradient normalization for adaptive loss balancing in deep multitask networks. In International Conference on Machine Learning, pages 794-803. PMLR, 2018.
300
+
301
+ Yan Duan, John Schulman, Xi Chen, Peter L Bartlett, Ilya Sutskever, and Pieter Abbeel. RL${}^{2}$: Fast reinforcement learning via slow reinforcement learning. arXiv preprint arXiv:1611.02779, 2016.
302
+
303
+ Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1126-1135. JMLR. org, 2017.
304
+
305
+ Chelsea Finn, Kelvin Xu, and Sergey Levine. Probabilistic model-agnostic meta-learning. In NeurIPS, pages 9516- 9527, 2018.
306
+
307
+ Behzad Ghazanfari and Matthew E Taylor. Autonomous extracting a hierarchical structure of tasks in reinforcement learning and multi-task reinforcement learning. arXiv preprint arXiv:1709.04579, 2017.
308
+
309
+ Abhishek Gupta, Russell Mendonca, YuXuan Liu, Pieter Abbeel, and Sergey Levine. Meta-reinforcement learning of structured exploration strategies. arXiv preprint arXiv:1802.07245, 2018.
310
+
311
+ Izzeddin Gur, Ulrich Rueckert, Aleksandra Faust, and Dilek Hakkani-Tur. Learning to navigate the web. arXiv preprint arXiv:1812.09195, 2018.
312
+
313
+ Karol Hausman, Jost Tobias Springenberg, Ziyu Wang, Nicolas Heess, and Martin Riedmiller. Learning an embedding space for transferable robot skills. In International Conference on Learning Representations, 2018.
314
+
315
+ Sepp Hochreiter, A Steven Younger, and Peter R Conwell. Learning to learn using gradient descent. In International Conference on Artificial Neural Networks, pages 87-94. Springer, 2001.
316
+
317
+ Matt Hoffman, Bobak Shahriari, John Aslanides, Gabriel Barth-Maron, Feryal Behbahani, Tamara Norman, Abbas Abdolmaleki, Albin Cassirer, Fan Yang, Kate Baumli, Sarah Henderson, Alex Novikov, Sergio Gómez Colmenarejo, Serkan Cabi, Caglar Gulcehre, Tom Le Paine, Andrew Cowie, Ziyu Wang, Bilal Piot, and Nando de Freitas. Acme: A research framework for distributed reinforcement learning. arXiv preprint arXiv:2006.00979, 2020. URL https://arxiv.org/abs/2006.00979.
318
+
319
+ De-An Huang, Suraj Nair, Danfei Xu, Yuke Zhu, Animesh Garg, Li Fei-Fei, Silvio Savarese, and Juan Carlos Niebles. Neural task graphs: Generalizing to unseen tasks from a single video demonstration. arXiv preprint arXiv:1807.03480, 2018.
320
+
321
+ Sheng Jia, Jamie Ryan Kiros, and Jimmy Ba. DOM-q-NET: Grounded RL on structured language. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=HJgd1nAqFX.
324
+
325
+ Taesup Kim, Jaesik Yoon, Ousmane Dia, Sungwoong Kim, Yoshua Bengio, and Sungjin Ahn. Bayesian model-agnostic meta-learning. arXiv preprint arXiv:1806.03836, 2018.
326
+
327
+ George Konidaris and Andrew Barto. Autonomous shaping: Knowledge transfer in reinforcement learning. In Proceedings of the 23rd international conference on Machine learning, pages 489-496, 2006.
328
+
329
+ Alessandro Lazaric. Transfer in reinforcement learning: a framework and a survey. In Reinforcement Learning, pages 143-173. Springer, 2012.
330
+
331
+ Alessandro Lazaric, Marcello Restelli, and Andrea Bonarini. Transfer of samples in batch reinforcement learning. In Proceedings of the 25th international conference on Machine learning, pages 544-551, 2008.
332
+
333
+ Xingyu Lin, Harjatin Singh Baweja, George Kantor, and David Held. Adaptive auxiliary task weighting for reinforcement learning. Advances in neural information processing systems, 32, 2019.
334
+
335
+ Changsong Liu, Shaohua Yang, Sari Saba-Sadiya, Nishant Shukla, Yunzhong He, Song-chun Zhu, and Joyce Chai. Jointly learning grounded task structures from language instruction and visual demonstration. In EMNLP, 2016.
336
+
337
+ Evan Zheran Liu, Kelvin Guu, Panupong Pasupat, Tianlin Shi, and Percy Liang. Reinforcement learning on web interfaces using workflow-guided exploration. arXiv preprint arXiv:1802.08802, 2018.
338
+
339
+ Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.
340
+
341
+ Stephen Muggleton. Inductive logic programming. New Gen. Comput., 8(4):295-318, February 1991. ISSN 0288- 3635. doi: 10.1007/BF03037089. URL http://dx.doi.org/10.1007/BF03037089.
342
+
343
+ Alex Nichol, Joshua Achiam, and John Schulman. On first-order meta-learning algorithms. arXiv preprint arXiv:1803.02999, 2018.
344
+
345
+ Junhyuk Oh, Satinder Singh, Honglak Lee, and Pushmeet Kohli. Zero-shot task generalization with multi-task deep reinforcement learning. In ICML, 2017.
346
+
347
+ Lerrel Pinto and Abhinav Gupta. Learning to push by grasping: Using multiple tasks for effective learning. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pages 2161-2168. IEEE, 2017.
350
+
351
+ Earl D Sacerdoti. The nonlinear nature of plans. Technical report, STANFORD RESEARCH INST MENLO PARK CA, 1975a.
352
+
353
+ Earl D Sacerdoti. A structure for plans and behavior. Technical report, SRI INTERNATIONAL MENLO PARK CA ARTIFICIAL INTELLIGENCE CENTER, 1975b.
354
+
355
+ John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
356
+
357
+ Tianlin Shi, Andrej Karpathy, Linxi Fan, Jonathan Hernandez, and Percy Liang. World of bits: An open-domain platform for web-based agents. In International Conference on Machine Learning, pages 3135-3144, 2017.
358
+
359
+ David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of Go without human knowledge. Nature, 550(7676):354-359, 2017.
360
+
361
+ Sungryull Sohn, Junhyuk Oh, and Honglak Lee. Hierarchical reinforcement learning for zero-shot generalization with subtask dependencies. In NeurIPS, pages 7156- 7166, 2018.
362
+
363
+ Sungryull Sohn, Hyunjae Woo, Jongwook Choi, and Honglak Lee. Meta reinforcement learning with autonomous inference of subtask dependencies. In International Conference on Learning Representations, 2019.
364
+
365
+ Richard S Sutton, Doina Precup, and Satinder Singh. Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning. Artificial intelligence, 112(1-2):181-211, 1999.
366
+
367
+ Austin Tate. Generating project networks. In Proceedings of the 5th international joint conference on Artificial intelligence-Volume 2, pages 888-893. Morgan Kaufmann Publishers Inc., 1977.
368
+
369
+ Matthew E Taylor and Peter Stone. Transfer learning for reinforcement learning domains: A survey. Journal of Machine Learning Research, 10(7), 2009.
370
+
371
+ Oriol Vinyals, Igor Babuschkin, Wojciech M Czarnecki, Michaël Mathieu, Andrew Dudzik, Junyoung Chung, David H Choi, Richard Powell, Timo Ewalds, Petko Georgiev, et al. Grandmaster level in StarCraft II using multi-agent reinforcement learning. Nature, 575(7782):350-354, 2019.
372
+
373
+ Jane X Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, Remi Munos, Charles Blundell, Dharshan Kumaran, and Matt Botvinick. Learning to reinforcement learn. arXiv preprint arXiv:1611.05763, 2016.
374
+
375
+ Aaron Wilson, Alan Fern, Soumya Ray, and Prasad Tadepalli. Multi-task reinforcement learning: a hierarchical bayesian approach. In Proceedings of the 24th international conference on Machine learning, pages 1015-1022, 2007.
376
+
377
+ Danfei Xu, Suraj Nair, Yuke Zhu, Julian Gao, Animesh Garg, Li Fei-Fei, and Silvio Savarese. Neural task programming: Learning to generalize across hierarchical tasks. arXiv preprint arXiv:1710.01813, 2017.
378
+
379
+ Haonan Yu, Haichao Zhang, and Wei Xu. A deep compositional framework for human-like language acquisition in virtual environment. arXiv preprint arXiv:1703.09831, 2017.
380
+
381
+ Yu Zhang and Dit-Yan Yeung. A regularization approach to learning task relationships in multitask learning. ACM Transactions on Knowledge Discovery from Data (TKDD), 8(3):1-31, 2014.
UAI/UAI 2022/UAI 2022 Conference/BKZIivLs9xc/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,281 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § FAST INFERENCE AND TRANSFER OF COMPOSITIONAL TASK STRUCTURES FOR FEW-SHOT TASK GENERALIZATION
2
+
3
+ § ABSTRACT
4
+
5
+ We tackle real-world problems with complex structures beyond pixel-based games or simulators. We formulate them as few-shot reinforcement learning problems where a task is characterized by a subtask graph that defines a set of subtasks and their dependencies, which are unknown to the agent. Different from previous meta-RL methods that try to directly infer an unstructured task embedding, our multi-task subtask graph inferencer (MTSGI) first infers the common high-level task structure in terms of the subtask graph from the training tasks, and uses it as a prior to improve the task inference in testing. Our experimental results on 2D grid-world and complex web navigation domains show that the proposed method can learn and leverage the common underlying structure of the tasks for faster adaptation to unseen tasks than various existing algorithms such as meta reinforcement learning, hierarchical reinforcement learning, and other heuristic agents.
6
+
7
+ § 1 INTRODUCTION
8
+
9
+ Recently, deep reinforcement learning (RL) has shown outstanding performance on various domains such as video games [Mnih et al., 2015, Vinyals et al., 2019] and board games [Silver et al., 2017]. However, most of the successes of deep RL were focused on a single-task setting where the agent is allowed to interact with the environment for hundreds of millions of time steps. In numerous real-world scenarios, interacting with the environment is expensive or limited, and the agent is often presented with a novel task that is not seen during its training time. To overcome this limitation, many recent works focused on scaling the RL algorithm beyond the single-task setting. Recent works on multi-task RL aim to build a single, contextual policy that can solve multiple related tasks and generalize to unseen tasks. However, they require a certain form of task embedding as an extra input that often fully characterizes the given task [Oh et al., 2017, Andreas et al., 2017, Yu et al., 2017, Chaplot et al., 2018], or require a human demonstration [Huang et al., 2018], which are not readily available in practice. Meta RL [Finn et al., 2017, Duan et al., 2016] focuses on a more general setting where the agent should learn about the unseen task purely via interacting with the environment without any additional information. However, such meta-RL algorithms either require a large amount of experience on a diverse set of tasks or are limited to a relatively small set of tasks with simple structure.
10
+
11
+ On the contrary, real-world problems require the agent to solve much more complex and compositional tasks without human supervision. Consider a web-navigating RL agent given the task of checking out the products from an online store as shown in Figure 1. The agent can complete the task by filling out the required web elements with the correct information such as shipping or payment information, navigating between the web pages, and placing the order. Note that the task consists of multiple subtasks and the subtasks have complex dependencies in the form of precondition; for instance, the agent may proceed to the payment web page (see Bottom, B) after all the required shipping information has been correctly filled in (see Bottom, A), or the credit_card_number field will appear after selecting the credit_card as a payment method (see Top, Middle in Figure 1). Learning to perform such a task can be quite challenging if the reward is given only after yielding meaningful outcomes (i.e., sparse reward task). This is the problem scope we focus on in this work: solving and generalizing to unseen compositional sparse-reward tasks with complex subtask dependencies without human supervision.
12
+
13
+ Recent works [Sohn et al., 2019, Xu et al., 2017, Huang et al., 2018, Liu et al., 2016, Ghazanfari and Taylor, 2017] tackled the compositional tasks by explicitly inferring the underlying task structure in a graph form. Specifically, the subtask graph inference (SGI) framework [Sohn et al., 2019] uses inductive logic programming (ILP) on the agent's own experience to infer the task structure in terms of subtask graph and learns a contextual policy to execute the inferred task in few-shot RL setting. However, it only meta-learned the adaptation policy that relates to the efficient exploration, while the task inference and execution policy learning were limited to a single task (i.e., both task inference and policy learning were done from scratch for each task), limiting its capability of handling large variance in the task structure. We claim that the inefficient task inference may hinder applying the SGI framework to a more complex domain such as web navigation [Shi et al., 2017, Liu et al., 2018] where a task may have a large number of subtasks and complex dependencies between them. We note that humans can navigate an unseen website by transferring the high-level process learned from previously seen websites.
14
+
15
16
+
17
+ Figure 1: An illustration of the train (Top) and test (Bottom) tasks in our SymWoB domain. Some selected actionable web elements (e.g., text fields and buttons) are magnified (dotted arrow and box) for readability. The agent's goal (green box) is to check out the products on the unseen test website by interacting with the web elements in the correct order. For example, in the train task, the agent should fill out all the text fields in (Top, A) before clicking the credit_card button to transition (gray arrow) to the next page. The high-level checkout processes of different websites have many commonalities, while certain details may differ. For example, in both train and test tasks, the agent should fill out the user information (Top and Bottom, A) before proceeding to the next page, and similar elements exist in both (Top and Bottom, C). However, the details may differ; e.g., the train task (Top, A) has a single text field for the full name, while the test task (Bottom, A) has separate text fields for the first and last name. Also, only the test website (Bottom, B) requires shipping information, since the training website does not ship the product.
18
+
19
+ Inspired by this, we extend the SGI framework to a multi-task subtask graph inferencer (MTSGI) that generalizes previously learned task structures to unseen tasks for faster adaptation and stronger generalization. Figure 2 outlines our method. MTSGI estimates a prior model of the subtask graphs from the training tasks. When an unseen task is presented, MTSGI samples the prior that best matches the current task and incorporates the sampled prior model to improve the latent subtask graph inference, which in turn improves the performance of the evaluation policy. We demonstrate results in a 2D grid-world domain and a web navigation domain that simulates the interaction with 15 actual websites. We compare our method with MSGI [Sohn et al., 2019], which learns the task hierarchy from scratch for each task, and two other baselines: hierarchical RL and a heuristic algorithm. We find that MTSGI significantly outperforms all other baselines, and the learned prior model enables more efficient task inference compared to MSGI.
20
+
21
+ § 2 PRELIMINARIES
22
+
23
+ Few-shot Reinforcement Learning A task is defined by an MDP ${\mathcal{M}}_{G} = \left( {\mathcal{S},\mathcal{A},{\mathcal{P}}_{G},{\mathcal{R}}_{G}}\right)$ parameterized by a task parameter $G$ , with a set of states $\mathcal{S}$ , a set of actions $\mathcal{A}$ , transition dynamics ${\mathcal{P}}_{G}$ , and a reward function ${\mathcal{R}}_{G}$ . The goal of $K$ -shot RL [Duan et al., 2016, Finn et al., 2017] is to efficiently solve a distribution of unseen test tasks ${\mathcal{M}}^{\text{ test }}$ by learning and transferring common knowledge from the training tasks ${\mathcal{M}}^{\text{ train }}$ . It is assumed that the training and test tasks do not overlap (i.e., ${\mathcal{M}}^{\text{ train }} \cap {\mathcal{M}}^{\text{ test }} = \varnothing$ ) but share enough commonality that the knowledge learned from the training tasks may be helpful for learning the test tasks. For each task ${\mathcal{M}}_{G}$ , the agent is given a budget of $K$ steps for interacting with the environment. During meta-training, the goal of the multi-task RL agent is to learn a prior (i.e., slow learning) over the training tasks ${\mathcal{M}}^{\text{ train }}$ . The learned prior may then be exploited during meta-testing to enable faster adaptation on unseen test tasks ${\mathcal{M}}^{\text{ test }}$ . For each task, the agent faces two phases: an adaptation phase, in which the agent learns a task-specific behavior (i.e., fast learning) for $K$ environment steps, often spanning multiple episodes, and an evaluation phase, in which the adapted behavior is evaluated. In the evaluation phase, the agent is not allowed to perform any form of learning, and the agent's performance on the task ${\mathcal{M}}_{G}$ is measured in terms of the return:
24
+
25
+ $$
+ {\mathcal{R}}_{{\mathcal{M}}_{G}}\left( {\pi }_{{\phi }_{K}}\right) = {\mathbb{E}}_{{\pi }_{{\phi }_{K}},{\mathcal{M}}_{G}}\left\lbrack {\mathop{\sum }\limits_{{t = 1}}^{H}{r}_{t}}\right\rbrack , \tag{1}
+ $$
28
+
29
30
+
31
+ Figure 2: The overview of our algorithm and an example of the agent's trajectory and the inferred subtask graph. In meta-training (Left), the adaptation policy ${\pi }^{\text{ adapt }}$ interacts with the environment and collects the trajectory $\tau$ . The inductive logic programming (ILP) module takes the trajectory as input and infers the task structure in terms of the subtask graph ${G}^{\tau }$ . The trajectory and the subtask graph are stored as a prior. In meta-testing (Right), the adaptation policy incorporates the prior trajectory ${\tau }^{\mathrm{p}}$ to efficiently explore the environment, and the ILP module infers the subtask graph ${G}^{\tau }$ from the adaptation trajectory $\tau$ . Finally, the evaluation policy ${\pi }^{\text{ eval }}$ takes as input the prior and inferred subtask graphs $\left( {{G}^{\mathrm{p}},{G}^{\tau }}\right)$ to solve the test task.
32
+
33
+ where ${\pi }_{{\phi }_{K}}$ is the policy after $K$ update steps of adaptation, $H$ is the horizon of evaluation phase, and ${r}_{t}$ is the reward at time $t$ in the evaluation phase.
34
+
35
+ § 3 SUBTASK GRAPH INFERENCE PROBLEM
36
+
37
+ The subtask graph inference problem [Sohn et al., 2019] is a few-shot RL problem where a task is parameterized by a set of subtasks and their dependencies. Formally, a task consists of $N$ subtasks $\mathbf{\Phi } = \left\{ {{\Phi }^{1},\ldots ,{\Phi }^{N}}\right\}$ , and each subtask ${\Phi }^{i}$ is parameterized by a tuple $\left( {{\mathcal{S}}_{\text{ comp }}{}^{i},{G}_{\mathbf{c}}^{i},{G}_{\mathbf{r}}^{i}}\right)$ . The goal states ${\mathcal{S}}_{\text{ comp }}{}^{i} \subset \mathcal{S}$ and the precondition ${G}_{\mathbf{c}}^{i} : \mathcal{S} \rightarrow \{ 0,1\}$ together define when a subtask is completed: the current state must be contained in the goal states (i.e., ${\mathbf{s}}_{t} \in {\mathcal{S}}_{\text{ comp }}^{i}$ ) and the precondition must be satisfied (i.e., ${G}_{\mathbf{c}}^{i}\left( {\mathbf{s}}_{t}\right) = 1$ ). If the precondition is not satisfied (i.e., ${G}_{\mathbf{c}}^{i}\left( {\mathbf{s}}_{t}\right) = 0$ ), the subtask cannot be completed and the agent receives no reward even if the goal state is reached. The subtask reward function ${G}_{\mathbf{r}}^{i}$ defines the reward given to the agent when it completes subtask $i$ : ${r}_{t} \sim {G}_{\mathbf{r}}^{i}$ . We note that the subtasks $\left\{ {{\Phi }^{1},\ldots ,{\Phi }^{N}}\right\}$ are unknown to the agent. Thus, the agent should learn to infer the underlying task structure and complete the subtasks in an optimal order while satisfying the required preconditions.
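To make the completion rule concrete, here is a minimal Python sketch; the function name, data layout, and the toy "checkout" example are ours, not the paper's implementation. A subtask yields its reward only when the agent reaches one of its goal states and its precondition over the completion vector holds.

```python
from typing import Callable, Set, Tuple

def try_complete_subtask(
    state: int,
    goal_states: Set[int],                            # S_comp^i
    precondition: Callable[[Tuple[int, ...]], bool],  # G_c^i
    completion: Tuple[int, ...],                      # completion vector x_t
    reward: float,                                    # a sample from G_r^i
) -> float:
    """Return the reward for attempting subtask i (0 if it cannot complete)."""
    eligible = precondition(completion)    # e_t^i = G_c^i(s_t)
    reached_goal = state in goal_states    # s_t in S_comp^i
    return reward if (eligible and reached_goal) else 0.0

# Toy example: a "checkout" subtask requires subtasks 0 and 1 to be done first.
checkout_precond = lambda x: x[0] == 1 and x[1] == 1
```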
38
+
39
+ State In the subtask graph inference problem, it is assumed that the state input provides the high-level status of the subtasks. Specifically, the state consists of the following: ${\mathbf{s}}_{t} = \left( {{\mathrm{{obs}}}_{t},{\mathbf{x}}_{t},{\mathbf{e}}_{t},{\operatorname{step}}_{\mathrm{{epi}},t},{\operatorname{step}}_{\mathrm{{phase}},t}}\right)$ . The ${\mathrm{{obs}}}_{t} \in$ $\{ 0,1{\} }^{W \times H \times C}$ is a visual observation of the environment. The completion vector ${\mathbf{x}}_{t} \in \{ 0,1{\} }^{N}$ indicates whether each subtask is complete. The eligibility vector ${\mathbf{e}}_{t} \in \{ 0,1{\} }^{N}$ indicates whether each subtask is eligible (i.e., its precondition is satisfied). Following the few-shot RL setting, the agent observes two scalar-valued time features: the remaining time steps until episode termination ${\operatorname{step}}_{\mathrm{{epi}},t} \in \mathbb{R}$ and the remaining time steps until phase termination ${\operatorname{step}}_{\text{ phase },t} \in \mathbb{R}$ .
40
+
41
+ Options For each subtask ${\Phi }^{i}$ , the agent can learn an option ${\mathcal{O}}^{i}$ [Sutton et al., 1999] that reaches the goal state of the subtask. Following Sohn et al. [2019], such options are pre-learned individually by maximizing the goal-reaching reward: ${r}_{t} = \mathbb{I}\left( {{\mathbf{s}}_{t} \in {\mathcal{S}}_{\text{ comp }}^{i}}\right)$ . At time step $t$ , we denote the option taken by the agent as ${\mathbf{o}}_{t}$ and the binary variable that indicates whether the episode has terminated as ${d}_{t}$ .
42
+
43
+ § 4 METHOD
44
+
45
+ We propose a novel Multi-Task Subtask Graph Inference (MTSGI) framework that can perform an efficient inference of latent task embedding (i.e., subtask graph). The overall method is outlined in Figure 2. Specifically, in meta-training, MTSGI models the prior in terms of (1) adaptation trajectory $\tau$ and (2) subtask graph $G$ from the agent’s experience. In meta-testing, MTSGI samples (1) the prior trajectory ${\tau }^{\mathrm{p}}$ for more efficient exploration in adaptation and (2) the prior subtask graph ${G}^{\mathrm{p}}$ for more accurate task inference.
46
+
47
+ Algorithm 1 Meta-training: learning the prior
+
+ Require: Adaptation policy ${\pi }^{\text{ adapt }}$
+
+ Ensure: Prior set ${\mathcal{T}}^{\mathrm{p}}$
+
+ 1. ${\mathcal{T}}^{\mathrm{p}} \leftarrow \varnothing$
+ 2. for each task $\mathcal{M} \in {\mathcal{M}}^{\text{ train }}$ do
+ 3. Rollout adaptation policy: $\tau = {\left\{ {\mathbf{s}}_{t},{\mathbf{o}}_{t},{r}_{t},{d}_{t}\right\} }_{t = 1}^{K} \sim {\pi }^{\text{ adapt }}$ in task $\mathcal{M}$
+ 4. Infer subtask graph ${G}^{\tau } = \arg \mathop{\max }\limits_{G}p\left( {\tau \mid G}\right)$
+ 5. ${\pi }^{\text{ eval }} = \operatorname{GRProp}\left( {G}^{\tau }\right)$
+ 6. Evaluate the agent: ${\tau }^{\text{ eval }} \sim {\pi }^{\text{ eval }}$ in task $\mathcal{M}$
+ 7. Update prior: ${\mathcal{T}}^{\mathrm{p}} \leftarrow {\mathcal{T}}^{\mathrm{p}} \cup \left( {{G}^{\tau },\tau }\right)$
+ 8. end for
70
+
71
+ § 4.1 MULTI-TASK ADAPTATION POLICY
72
+
73
+ The goal of the adaptation policy is to efficiently explore and gather information about the task. Intuitively, if the adaptation policy completes more diverse subtasks, it provides more data to the task inference module (ILP), which in turn can more accurately infer the task structure. To this end, we extend the upper confidence bound (UCB)-based adaptation policy proposed in Sohn et al. [2019] as follows:
74
+
75
+ $$
+ {\pi }^{\text{ adapt }}\left( {o = {\mathcal{O}}^{i} \mid s}\right) \propto \exp \left( {{r}^{i} + \sqrt{\frac{2\log \left( {\mathop{\sum }\limits_{j}{n}^{j}}\right) }{{n}^{i}}}}\right) , \tag{2}
+ $$
80
+
81
+ where ${r}^{i}$ is the empirical mean of the reward received after executing subtask $i$ and ${n}^{i}$ is the number of times subtask $i$ has been executed within the current task. Note that the exploration parameters ${\left\{ {r}^{i},{n}^{i}\right\} }_{i = 1}^{N}$ can be computed from the agent's trajectory. In meta-training, the exploration parameters are initialized to zero when a new task is sampled. In meta-testing, the exploration parameters are initialized with those of the sampled prior. Intuitively, this helps the agent visit novel states that were unseen during meta-training.
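The adaptation policy above can be sketched as follows. This is an illustrative reconstruction: the function name and the guard for unexecuted subtasks are ours, and the exploration bonus follows the standard UCB form of Equation (2).

```python
import numpy as np

def ucb_option_probs(r_mean, n):
    """Softmax over the UCB scores of Eq. (2) for N subtasks.

    r_mean: empirical mean reward of each subtask; n: execution counts.
    """
    n = np.asarray(n, dtype=float)
    total = max(n.sum(), 1.0)
    # Unexecuted subtasks (n = 0) receive a very large exploration bonus.
    bonus = np.sqrt(2.0 * np.log(total + 1.0) / np.maximum(n, 1e-8))
    logits = np.asarray(r_mean, dtype=float) + bonus
    logits -= logits.max()  # for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()
```

Given equal empirical rewards, the less-executed subtask receives the larger bonus and hence the larger selection probability.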
82
+
83
+ § 4.2 META-TRAIN: LEARNING THE PRIOR SUBTASK GRAPH
84
+
85
+ Let $\tau$ be an adaptation trajectory of the agent for $K$ steps. The goal is to infer the latent subtask graph $G$ for the given training task ${\mathcal{M}}_{G} \in {\mathcal{M}}^{\text{ train }}$ , specified by preconditions ${G}_{\mathbf{c}}$ and subtask rewards ${G}_{\mathbf{r}}$ . We find the maximum-likelihood estimate (MLE) of $G = \left( {{G}_{\mathbf{c}},{G}_{\mathbf{r}}}\right)$ that maximizes the likelihood of the adaptation trajectory $\tau$ :
86
+
87
+ $$
+ {\widehat{G}}^{\mathrm{{MLE}}} = \underset{{G}_{\mathbf{c}},{G}_{\mathbf{r}}}{\arg \max }p\left( {\tau \mid {G}_{\mathbf{c}},{G}_{\mathbf{r}}}\right) . \tag{3}
+ $$
90
+
91
+ Following Sohn et al. [2019], we infer the precondition ${G}_{\mathbf{c}}$ and the subtask reward ${G}_{\mathbf{r}}$ as follows (See Appendix for the
92
+
93
+ Algorithm 2 Meta-testing: multi-task SGI
+
+ Require: Adaptation policy ${\pi }^{\text{ adapt }}$ , prior set ${\mathcal{T}}^{\mathrm{p}}$
+
+ 1. for each task $\mathcal{M} \in {\mathcal{M}}^{\text{ test }}$ do
+ 2. Sample prior: $\left( {{G}^{\mathrm{p}},{\tau }^{\mathrm{p}}}\right) \sim p\left( {\mathcal{T}}^{\mathrm{p}}\right)$
+ 3. Rollout adaptation policy: $\tau = {\left\{ {\mathbf{s}}_{t},{\mathbf{o}}_{t},{r}_{t},{d}_{t}\right\} }_{t = 1}^{K} \sim {\pi }^{\text{ adapt }}$ in task $\mathcal{M}$
+ 4. Infer subtask graph ${G}^{\tau } = \arg \mathop{\max }\limits_{G}p\left( {\tau \mid G}\right)$
+ 5. ${\pi }^{\text{ eval }}\left( {\cdot \mid \tau ,{\tau }^{\mathrm{p}}}\right) \propto \operatorname{GRProp}{\left( \cdot \mid {G}^{\tau }\right) }^{\alpha }\operatorname{GRProp}{\left( \cdot \mid {G}^{\mathrm{p}}\right) }^{\left( 1 - \alpha \right) }$
+ 6. Evaluate the agent: ${\tau }^{\text{ eval }} \sim {\pi }^{\text{ eval }}$ in task $\mathcal{M}$
+ 7. end for
112
+
113
+ detailed derivation):
114
+
115
+ $$
+ {\widehat{G}}_{\mathbf{c}}^{\mathrm{{MLE}}} = \underset{{G}_{\mathbf{c}}}{\arg \max }\mathop{\prod }\limits_{{t = 1}}^{H}p\left( {{\mathbf{e}}_{t} \mid {\mathbf{x}}_{t},{G}_{\mathbf{c}}}\right) , \tag{4}
+ $$
+
+ $$
+ {\widehat{G}}_{\mathbf{r}}^{\mathrm{{MLE}}} = \underset{{G}_{\mathbf{r}}}{\arg \max }\mathop{\prod }\limits_{{t = 1}}^{H}p\left( {{r}_{t} \mid {\mathbf{e}}_{t},{\mathbf{o}}_{t},{G}_{\mathbf{r}}}\right) , \tag{5}
+ $$
122
+
123
+ where ${\mathbf{e}}_{t}$ is the eligibility vector, ${\mathbf{x}}_{t}$ is the completion vector, ${\mathbf{o}}_{t}$ is the option taken, and ${r}_{t}$ is the reward at time step $t$ .
124
+
125
+ Precondition inference The problem in Equation (4) is an inductive logic programming (ILP) problem: finding a Boolean function that is consistent with all the observed input-output pairs. Specifically, ${\left\{ {\mathbf{x}}_{t}\right\} }_{t = 1}^{H}$ forms the binary vector inputs to the programs, and ${\left\{ {e}_{t}^{i}\right\} }_{t = 1}^{H}$ forms the Boolean-valued outputs of the $i$ -th program that predicts the eligibility of the $i$ -th subtask. We use a classification and regression tree (CART), trained with the Gini impurity criterion [Breiman, 1984], to infer the precondition function ${f}_{{G}_{\mathbf{c}}} : \mathbf{x} \rightarrow \mathbf{e}$ for each subtask. Intuitively, the constructed decision tree is the simplest Boolean function approximation for the given input-output pairs $\left\{ {{\mathbf{x}}_{t},{\mathbf{e}}_{t}}\right\}$ . The decision tree is converted to a logic expression (i.e., precondition) in sum-of-products (SOP) form to build the subtask graph.
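The paper fits a CART tree and converts it to SOP form; as a self-contained stand-in, the sketch below brute-forces the simplest purely conjunctive precondition consistent with observed (completion, eligibility) pairs. It only illustrates the input-output contract of the inference step on toy data; the names and the data are ours.

```python
from itertools import combinations
from typing import Optional, Sequence, Tuple

def infer_conjunctive_precondition(
    xs: Sequence[Tuple[int, ...]],  # observed completion vectors x_t
    es: Sequence[int],              # observed eligibility bits e_t^i
    num_subtasks: int,
) -> Optional[Tuple[int, ...]]:
    """Smallest set of subtask indices whose AND reproduces every label."""
    for size in range(num_subtasks + 1):
        for idx in combinations(range(num_subtasks), size):
            if all(int(all(x[j] for j in idx)) == e for x, e in zip(xs, es)):
                return idx
    return None  # no purely conjunctive precondition fits the data
```

A real precondition may be an arbitrary SOP formula (disjunctions of such conjunctions), which is why the paper uses a decision tree rather than this exhaustive search.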
126
+
127
+ Subtask reward inference To infer the subtask reward ${\widehat{G}}_{\mathbf{r}}^{\text{ MLE }}$ in Equation (5), we model the reward for the $i$ -th subtask as a Gaussian distribution: ${G}_{\mathbf{r}}^{i} \sim \mathcal{N}\left( {{\widehat{\mu }}^{i},{\widehat{\sigma }}^{i}}\right)$ . Then, the MLE of the subtask reward is given by the empirical mean and variance of the rewards received after taking the eligible option ${\mathcal{O}}^{i}$ in the adaptation phase:
128
+
129
+ $$
+ {\widehat{\mu }}_{\mathrm{{MLE}}}^{i} = \mathbb{E}\left\lbrack {{r}_{t} \mid {\mathbf{o}}_{t} = {\mathcal{O}}^{i},{\mathbf{e}}_{t}^{i} = 1}\right\rbrack , \tag{6}
+ $$
+
+ $$
+ {\widehat{{\sigma }^{2}}}_{\mathrm{{MLE}}}^{i} = \mathbb{E}\left\lbrack {{\left( {r}_{t} - {\widehat{\mu }}_{\mathrm{{MLE}}}^{i}\right) }^{2} \mid {\mathbf{o}}_{t} = {\mathcal{O}}^{i},{\mathbf{e}}_{t}^{i} = 1}\right\rbrack , \tag{7}
+ $$
136
+
137
+ where ${\mathcal{O}}^{i}$ is the option corresponding to the $i$ -th subtask. Algorithm 1 outlines the meta-training process.
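Equations (6)-(7) amount to simple empirical statistics over the adaptation trajectory. A minimal sketch, where the trajectory format (option, eligible, reward) is our own simplification:

```python
from statistics import fmean

def subtask_reward_mle(trajectory, subtask):
    """trajectory: iterable of (option, eligible, reward) tuples.

    Returns the empirical mean and variance of the rewards observed when
    the eligible option for `subtask` was executed, as in Eqs. (6)-(7).
    """
    rewards = [r for (o, elig, r) in trajectory if o == subtask and elig]
    mu = fmean(rewards)
    var = fmean((r - mu) ** 2 for r in rewards)
    return mu, var
```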
138
+
139
+ § 4.3 EVALUATION: GRAPH-REWARD PROPAGATION POLICY
140
+
141
+ In both meta-training and meta-testing, the agent's adapted behavior is evaluated during the test phase. Following Sohn et al. [2019], we adopt the graph reward propagation (GRProp) policy as the evaluation policy ${\pi }^{\text{ eval }}$ , which takes as input the inferred subtask graph $\widehat{G}$ and outputs the subtasks to execute to maximize the return. Intuitively, the GRProp policy approximates the subtask graph with a differentiable form so that we can compute the gradient of the return with respect to the completion vector, measuring how much each subtask is likely to increase the return. Due to space limitations, we give the details of the GRProp policy in the Appendix. The overall meta-training process is summarized in the Appendix.
142
+
143
+ § 4.4 META-TESTING: MULTI-TASK TASK INFERENCE
144
+
145
+ Prior sampling In meta-testing, MTSGI first chooses the prior task that most resembles the given evaluation task. Specifically, we define the pair-wise similarity between a prior task ${\mathcal{M}}_{G}^{\text{ prior }}$ and the evaluation task ${\mathcal{M}}_{G}$ as follows:
146
+
147
+ $$
+ \operatorname{sim}\left( {{\mathcal{M}}_{G},{\mathcal{M}}_{G}^{\text{ prior }}}\right) = {F}_{\beta }\left( {\mathbf{\Phi },{\mathbf{\Phi }}^{\text{ prior }}}\right) + {\kappa R}\left( {\tau }^{\text{ prior }}\right) , \tag{8}
+ $$
152
+
153
+ where ${F}_{\beta }$ is the F-score with weight parameter $\beta$ , $\mathbf{\Phi }$ is the subtask set of ${\mathcal{M}}_{G}$ , ${\mathbf{\Phi }}^{\text{ prior }}$ is the subtask set of ${\mathcal{M}}_{G}^{\text{ prior }}$ , $R\left( {\tau }^{\text{ prior }}\right)$ is the agent's empirical performance on the prior task ${\mathcal{M}}_{G}^{\text{ prior }}$ , and $\kappa$ is a scalar-valued weight; we used $\kappa = {1.0}$ in our experiments. ${F}_{\beta }$ measures how many subtasks overlap between the current and prior tasks in terms of precision and recall as follows:
154
+
155
+ $$
+ {F}_{\beta } = \left( {1 + {\beta }^{2}}\right) \cdot \frac{\text{ precision } \cdot \text{ recall }}{\left( {{\beta }^{2} \cdot \text{ precision }}\right) + \text{ recall }}, \tag{9}
+ $$
+
+ $$
+ \text{ precision } = \left| {\mathbf{\Phi } \cap {\mathbf{\Phi }}^{\text{ prior }}}\right| /\left| {\mathbf{\Phi }}^{\text{ prior }}\right| , \tag{10}
+ $$
+
+ $$
+ \text{ recall } = \left| {\mathbf{\Phi } \cap {\mathbf{\Phi }}^{\text{ prior }}}\right| /\left| \mathbf{\Phi }\right| . \tag{11}
+ $$
166
+
167
+ We used $\beta = {10}$ to assign a higher weight to the current task (i.e., recall) than the prior task (i.e., precision).
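Putting Equations (8)-(11) together, the prior-selection score can be sketched as below; the zero-overlap fallback is our own guard, not specified in the paper.

```python
def task_similarity(subtasks, prior_subtasks, prior_return,
                    beta=10.0, kappa=1.0):
    """Eq. (8): F_beta overlap of subtask sets plus the prior's return."""
    cur, pri = set(subtasks), set(prior_subtasks)
    overlap = len(cur & pri)
    if overlap == 0:
        return kappa * prior_return  # no shared subtasks: F_beta term is zero
    precision = overlap / len(pri)   # Eq. (10)
    recall = overlap / len(cur)      # Eq. (11)
    f_beta = ((1 + beta ** 2) * precision * recall
              / (beta ** 2 * precision + recall))  # Eq. (9)
    return f_beta + kappa * prior_return
```

With $\beta = 10$, recall dominates: a prior whose subtasks cover the current task scores higher than a prior that the current task merely covers.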
168
+
169
+ Multi-task subtask graph inference Let $\tau$ be the adaptation trajectory, and ${\tau }^{\mathrm{p}}$ be the sampled prior adaptation trajectory. Then, we model our evaluation policy as follows:
170
+
171
+ $$
+ \pi \left( {o \mid s,\tau ,{\tau }^{\mathrm{p}}}\right) \simeq \pi {\left( o \mid s,{G}^{\tau }\right) }^{\alpha }\pi {\left( o \mid s,{G}^{\mathrm{p}}\right) }^{\left( 1 - \alpha \right) }. \tag{12}
+ $$
174
+
175
+ Due to the limited space, we include the detailed derivation of Equation (12) in Appendix. Finally, we deploy the GRProp policy as a contextual policy:
176
+
177
+ $$
+ {\pi }^{\text{ eval }}\left( {\cdot \mid \tau ,{\tau }^{\mathrm{p}}}\right) = \operatorname{GRProp}{\left( \cdot \mid {G}^{\tau }\right) }^{\alpha }\operatorname{GRProp}{\left( \cdot \mid {G}^{\mathrm{p}}\right) }^{\left( 1 - \alpha \right) }. \tag{13}
+ $$
+
+ Note that Equation (13) is the weighted sum of the logits of the two GRProp policies induced by the prior ${\tau }^{\mathrm{p}}$ and the current experience $\tau$ . We claim that this form of ensemble induces a positive transfer in compositional tasks. Intuitively, ensembling GRProp policies amounts to taking a union of preconditions, since GRProp assigns a positive logit to task-relevant subtasks and a non-positive logit to other subtasks. As motivated in the Introduction, related tasks often share task-relevant preconditions; thus, taking the union of task-relevant preconditions is likely to induce a positive transfer and improve generalization. The pseudo-code of the multi-task subtask graph inference process is summarized in Algorithm 2.
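Since both factors are softmax policies, the geometric mixture in Equation (13) reduces to a softmax over the $\alpha$-weighted sum of the two logit vectors. A minimal sketch, with arbitrary logits standing in for the two GRProp policies:

```python
import math

def ensemble_policy_probs(logits_tau, logits_prior, alpha=0.5):
    """Softmax of alpha * logits_tau + (1 - alpha) * logits_prior, Eq. (13)."""
    mixed = [alpha * a + (1 - alpha) * b
             for a, b in zip(logits_tau, logits_prior)]
    m = max(mixed)                       # for numerical stability
    exps = [math.exp(v - m) for v in mixed]
    z = sum(exps)
    return [v / z for v in exps]
```

Setting `alpha=1.0` recovers the single-task GRProp policy over the inferred graph, while `alpha=0.0` acts purely on the prior graph.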
182
+
183
+ § 5 RELATED WORK
184
+
185
+ Web navigating RL agent Previous work introduced MiniWoB [Shi et al., 2017] and MiniWoB++ [Liu et al., 2018] benchmarks that are manually curated sets of simulated toy environments for the web navigation problem. They formulated the problem as acting on a page represented as a Document Object Model (DOM), a hierarchy of objects in the page. The agent is trained with human demonstrations and online episodes in an RL loop. Jia et al. [2019] proposed a graph neural network based DOM encoder and a multi-task formulation of the problem similar to this work. Gur et al. [2018] introduced a manually-designed curriculum learning method and an LSTM based DOM encoder. DOM level representations of web pages pose a significant sim-to-real gap as simulated websites are considerably smaller (100s of nodes) compared to noisy real websites (1000s of nodes). As a result, these models are trained and evaluated on the same simulated environments which are difficult to deploy on real websites. Our work formulates the problem as abstract web navigation on real websites where the objective is to learn a latent subtask dependency graph similar to a sitemap of websites. We propose a multi-task training objective that generalizes from a fixed set of real websites to unseen websites without any demonstration, illustrating an agent capable of navigating real websites for the first time.
186
+
187
+ Meta-reinforcement learning To tackle the few-shot RL problem, researchers have proposed two broad categories of meta-RL approaches: RNN- and gradient-based methods. The RNN-based meta-RL methods [Duan et al., 2016, Wang et al., 2016, Hochreiter et al., 2001] encode the common knowledge of the task into the hidden states and the parameters of the RNN. The gradient-based meta-RL methods [Finn et al., 2017, Nichol et al., 2018, Gupta et al., 2018, Finn et al., 2018, Kim et al., 2018] encode the task embedding in terms of the initial policy parameter for fast adaptation through meta gradient. Existing meta-RL approaches, however, often require a large amount of environment interaction due to the long-horizon nature of the few-shot RL tasks. Our work instead explicitly infers the underlying task parameter in terms of subtask graph, which can be efficiently inferred using the inductive logic programming (ILP) method and be transferred across different, unseen tasks.
188
+
189
+ More Related Works Please refer to the Appendix for further discussions about other related works.
190
+
191
+ § 6 EXPERIMENT
192
+
193
+ § 6.1 DOMAINS
194
+
195
+ Mining Mining [Sohn et al., 2018] is a 2D grid-world domain inspired by Minecraft game where the agent receives a reward by picking up raw materials in the world or crafting items with raw materials. The subtask dependency in Mining domain comes from the crafting recipe implemented in the game. Following Sohn et al. [2018], we used the pre-generated training/testing task splits generated with four different random seeds. Each split set consists of 3200 training tasks and 440 testing tasks for meta-training and meta-testing, respectively. We report the performance averaged over the four task split sets.
196
+
197
+ SymWoB We implement a symbolic version of the checkout process of 15 real-world websites, such as Amazon, BestBuy, and Walmart.
198
+
199
+ Subtask and option policy. Each actionable web element (e.g., text field, button, drop-down list, and hyperlink) is considered a subtask. We assume the agent has pre-learned the option policies that correctly interact with each element (e.g., clicking the button or filling out the text field). Thus, the agent should learn a policy over the options.
200
+
201
+ Completion and eligibility. For each subtask, the completion and eligibility are determined based on the status of the corresponding web element. For example, the subtask of a text field is completed if the text field is filled with the correct information, and the subtask of a confirm_credit_info button is eligible if all the required subtasks (i.e., filling out credit card information) on the webpage are completed. Executing an option will complete the corresponding subtask only if the subtask is eligible.
202
+
203
+ Reward function and episode termination. The agent may receive a non-zero reward only at the end of the episode (i.e., a sparse-reward task). When the episode terminates due to the time budget, the agent receives no reward. Otherwise, the following two types of subtasks terminate the episode and give a non-zero reward upon completion:
204
+
205
+ * Goal subtask refers to the button that completes the order (See the green boxes in Figure 1). Completing this subtask gives the agent a +5 reward, and the episode is terminated.
206
+
207
+ * Distractor subtask does not contribute to solving the given task but terminates the episode with a -1 reward. It models the web elements that lead to external web pages such as Contact_Us button in Figure 1.
208
+
209
+ Transition dynamics. The transition dynamics follow the dynamics of the actual website. Each website consists of multiple web pages. The agent may only execute the subtasks that are currently visible (i.e., on the current web page) and can navigate to the next web page only after filling out all the required fields and clicking the continue button. The goal subtask is present in the last web page; thus, the agent must learn to navigate through the web pages to solve the task.
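The reward and termination rules above can be sketched as a toy step function; the +5/-1 rewards follow the text, while the subtask names and the data layout are hypothetical.

```python
def step(option, completion, preconditions, goal, distractors):
    """Execute one option; returns (reward, done).

    completion: dict subtask -> bool, mutated in place on success.
    preconditions: dict subtask -> callable on the completion dict.
    """
    if completion.get(option) or not preconditions[option](completion):
        return 0.0, False                 # already done or ineligible: no-op
    completion[option] = True
    if option == goal:
        return 5.0, True                  # order placed successfully
    if option in distractors:
        return -1.0, True                 # e.g., a Contact_Us link
    return 0.0, False
```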
210
+
211
+ For more details about each task, please refer to Appendix.
212
+
213
+ § 6.2 AGENTS
214
+
215
+ We compared the following algorithms in the experiment.
216
+
217
+ * MTSGI (Ours): our multi-task SGI agent
218
+
219
+ * MSGI [Sohn et al., 2019]: SGI agent without multi-task learning
220
+
221
+ * HRL: an Option [Sutton et al., 1999]-based proximal policy optimization (PPO) [Schulman et al., 2017] agent with a gated recurrent unit (GRU)
222
+
223
+ * Random: a heuristic policy that executes an eligible subtask uniformly at random
224
+
225
+ More details on the architectures and the hyperparameters can be found in Appendix.
226
+
227
+ Meta-training In SymWoB, for each task chosen for meta-testing, we randomly sampled ${N}_{\text{ train }}$ tasks among the remaining 14 tasks and used them for meta-training. We used ${N}_{\text{ train }} = 1$ in the experiment (see Figure 8 for the impact of the choice of ${N}_{\text{ train }}$ ). For example, we meta-trained our MTSGI on Amazon and meta-tested it on Expedia. For Mining, we used the train/test task split provided with the environment. The RL agents (e.g., HRL) were individually trained on each test task; the policy was initialized when a new task was sampled and trained during the adaptation phase. All the experiments were repeated with four random seeds, where different training tasks were sampled for different seeds.
228
+
229
+ § 6.3 RESULT: FEW-SHOT GENERALIZATION PERFORMANCE
230
+
231
+ Figure 3 and Figure 4 show the few-shot generalization performance of the compared methods on SymWoB and Mining. In Figure 3, MTSGI achieves more than 75% zero-shot success rate (i.e., the success rate at zero adaptation steps) on all five tasks, which is significantly higher than the zero-shot performance of MSGI. This indicates that the prior learned from the training task significantly improves the subtask graph inference, which in turn improves the multi-task evaluation policy. Moreover, our MTSGI can learn a near-optimal policy on all the tasks after only 1,000 steps of environment interaction, demonstrating that the proposed multi-task learning scheme enables fast adaptation. Even though the MSGI agent learns each task from scratch, it still outperforms the HRL and Random agents. This shows that explicitly inferring the underlying task structure and executing the predicted subtask graph is significantly more effective than learning the policy from the reward signal (i.e., HRL) on complex compositional tasks. Given the pre-learned options, the HRL agent can slightly improve its success rate during adaptation via PPO updates. However, training the policy only from the sparse reward requires a large number of interactions, especially for tasks with many distractors (e.g., Expedia and Walgreens).
232
+
233
234
+
235
+ Figure 3: The success rate (y-axis) of the compared methods in the test phase in terms of the environment step during the adaptation phase (x-axis) on SymWoB domain. See Appendix for the results on other tasks.
236
+
237
238
+
239
+ Figure 4: The performance of the compared methods in terms of the adaptation steps averaged over all the tasks in SymWoB (Left) and Mining (Right) domains.
240
+
241
242
+
243
+ Figure 5: The precision and recall of the subtask graphs inferred by MTSGI and MSGI on SymWoB and Mining.
244
+
245
+ § 6.4 ANALYSIS ON THE INFERRED SUBTASK GRAPH
246
+
247
+ We compare the inferred subtask graph with the ground-truth subtask graph. Figure 6 shows the subtask graph inferred by MTSGI on Walmart. We can see that MTSGI accurately infers the subtask graph; the inferred graph is missing only two preconditions (shown in red) of the Click_Continue_Payment subtask. We note that such a small error in the subtask graph has a negligible effect, as shown in Figure 3: i.e., MTSGI achieves near-optimal performance on Walmart after 1,000 steps of adaptation. Figure 5 measures the precision and recall of the inferred preconditions (i.e., the edges of the graph). First, both MTSGI and MSGI achieve high precision and recall after only a few hundred adaptation steps. Also, MTSGI outperforms MSGI in the early stage of adaptation. This clearly demonstrates that MTSGI performs more accurate task inference due to the prior learned from the training tasks.
248
+
249
+ § 6.5 ABLATION STUDY: EFFECT OF EXPLORATION STRATEGY
250
+
251
+ In this section, we investigate the effect of various exploration strategies on the performance of MTSGI. We compared the following three adaptation policies:
252
+
253
+ * Random: A policy that uniformly randomly executes any eligible subtask.
254
+
255
+ * UCB: The UCB policy defined in Equation (2) that aims to execute the novel subtask. The exploration parameters are initialized to zero when a new task is sampled.
256
+
257
+ * MTUCB (Ours): Our multi-task extension of UCB policy. When a new task is sampled, the exploration parameter is initialized with those of the sampled prior.
258
+
259
+ Figure 7 summarizes the results on the SymWoB and Mining domains. Using a more sophisticated exploration policy, MTSGI+UCB or MTSGI+MTUCB, improves the performance over MTSGI+Random, consistent with the observation in Sohn et al. [2019]. This is because better exploration helps the adaptation policy collect more data for logic induction by executing more diverse subtasks, which in turn results in more accurate subtask graph inference and better performance. Also, MTSGI+MTUCB outperforms MTSGI+UCB on both domains, indicating that transferring the exploration parameters makes the agent's exploration more efficient in meta-testing. Intuitively, the transferred exploration counts inform the agent which subtasks were under-explored during meta-training, so that the agent can focus on exploring those in meta-testing.
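The three strategies differ only in how the exploration counts are initialized. The sketch below illustrates the idea with a generic count-based UCB bonus; the exact bonus of MTSGI's Equation (2) is not reproduced here, and the subtask names are hypothetical:

```python
import math

def ucb_choose(eligible, counts, total_steps, c=1.0):
    """Pick the eligible subtask with the largest exploration bonus.

    A generic count-based UCB-style bonus, used only for illustration.
    """
    def bonus(s):
        return c * math.sqrt(math.log(total_steps + 1) / (counts.get(s, 0) + 1))
    return max(eligible, key=bonus)

eligible = ["Fill_Zip", "Click_Help"]    # hypothetical subtask names

# UCB: counts reset to zero on a new task (ties resolve to the first entry).
choice_ucb = ucb_choose(eligible, {}, total_steps=100)

# MTUCB: counts initialized from the sampled training-task prior, so the
# subtask that was under-explored in meta-training receives a larger bonus.
prior_counts = {"Fill_Zip": 50, "Click_Help": 2}
choice_mtucb = ucb_choose(eligible, prior_counts, total_steps=100)
```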
260
+
261
262
+
263
+ Figure 6: The visualization of the subtask graph inferred by our MTSGI after 1,000 steps of environment interaction on Walmart domain. Compared to the ground-truth subtask graph (not available to the agent), there was no error in the nodes and only two missing edges (in red). See Appendix for the progression of the inferred subtask graph with varying adaptation steps.
264
+
265
266
+
267
+ Figure 7: Comparison of different exploration strategies for MTSGI used in adaptation phase for SymWoB and Mining.
268
+
269
270
+
271
+ Figure 8: Comparison of different numbers of priors for MTSGI on SymWoB and Mining.
272
+
273
+ § 6.6 ABLATION STUDY: EFFECT OF THE PRIOR SET SIZE
274
+
275
+ MTSGI learns the prior from the training tasks. We investigated how many training tasks are required for MTSGI to learn a good prior for transfer learning. Figure 8 compares the performance of MTSGI with a varying number of training tasks: 1, 4, and 14 tasks for SymWoB and 10, 100, 500, and 3,200 tasks for Mining. The training tasks are randomly subsampled from the entire training set. The result shows that training on a larger number of tasks generally improves the performance. Mining generally requires more training tasks than SymWoB because the agent must solve 440 different tasks in Mining while SymWoB was evaluated on 15 tasks; the agent must capture a wider range of the task distribution in Mining than in SymWoB. Also, we note that MTSGI still adapts much more efficiently than all the baseline methods even when only a small number of training tasks is available (e.g., one task for SymWoB and ten tasks for Mining).
276
+
277
+ § 7 CONCLUSION
278
+
279
+ We introduce a multi-task RL extension of the subtask graph inference framework that can quickly adapt to unseen tasks by modeling the prior of the subtask graph from the training tasks and transferring it to the test tasks. The empirical results demonstrate that our MTSGI achieves strong zero-shot and few-shot generalization performance on 2D grid-world and complex web navigation domains by transferring the common knowledge learned in the training tasks, in the form of subtask graphs, to the unseen ones.
280
+
281
+ In this work, we have assumed that the subtasks and the corresponding options are pre-learned and that the environment provides a high-level status of each subtask (e.g., whether a web element is filled in with the correct information). In future work, our approach may be extended to a more general setting where the relevant subtask structure is fully learned from (visual) observations and the corresponding options are autonomously discovered.
UAI/UAI 2022/UAI 2022 Conference/BKbcdPUs9ec/Initial_manuscript_md/Initial_manuscript.md ADDED
The diff for this file is too large to render. See raw diff
 
UAI/UAI 2022/UAI 2022 Conference/BKbcdPUs9ec/Initial_manuscript_tex/Initial_manuscript.tex ADDED
The diff for this file is too large to render. See raw diff
 
UAI/UAI 2022/UAI 2022 Conference/BbGv8Lsql9/Initial_manuscript_md/Initial_manuscript.md ADDED
The diff for this file is too large to render. See raw diff
 
UAI/UAI 2022/UAI 2022 Conference/BbGv8Lsql9/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,465 @@
1
+ § FIXING THE BETHE APPROXIMATION: HOW STRUCTURAL MODIFICATIONS IN A GRAPH IMPROVE BELIEF PROPAGATION
2
+
3
+ § ABSTRACT
4
+
5
+ Belief propagation is an iterative method for reasoning in probabilistic graphical models. Its well-known relationship to a classical concept from statistical physics, the Bethe free energy, puts it on a solid theoretical foundation. If belief propagation fails to approximate the marginals, then this is often due to a failure of the Bethe approximation. In this work, we show how modifications in a graphical model can be a great remedy for fixing the Bethe approximation. Specifically, we analyze how the removal of edges influences and improves belief propagation, and demonstrate that this positive effect is particularly distinct for dense graphs.
6
+
7
+ § 1 INTRODUCTION
8
+
9
+ Message passing algorithms are an effective method for approximate inference in probabilistic graphical models (Koller and Friedman, 2009). Although they often perform well in practice, there are only a few guarantees about their theoretical behavior. A remarkable milestone was the discovery of a direct connection between message passing algorithms and concepts of statistical mechanics (Yedidia et al., 2001), perhaps the most famous being the relationship between belief propagation (BP) (Pearl, 1988) and the so-called Bethe free energy (Bethe, 1935; Peierls, 1936).
10
+
11
+ One favorable property of BP is its exactness on trees. On loopy graphs, however, it frequently suffers from two major issues: first, it may fail to converge to a fixed point and thus to find reasonable estimates of the marginals; second, the fixed points themselves may induce bad estimates of the marginals, in which case even convergence would not help.
12
+
13
+ To solve the first issue, various techniques have been developed that can improve the convergence behavior of BP; e.g., one can damp the message updates (Murphy et al., 1999) or utilize elaborate scheduling schemes (Elidan et al., 2006; Sutton and McCallum, 2007; Knoll et al., 2015; Aksenov et al., 2020). Moreover, whether and to which fixed point BP converges depends on the properties of the graphical model (Tatikonda and Jordan, 2002; Ihler et al., 2005; Mooij and Kappen, 2007) and on the initialization of the messages (Koehler, 2019; Knoll et al., 2021; Leisenberger et al., 2021).
14
+
15
+ The second issue might be even harder to overcome, as it is inherently linked to a failure of the Bethe approximation (Weller et al., 2014). The detrimental influence of loops can make the Bethe free energy non-convex and cause its local minima (and thus BP fixed points) to be far away from the exact marginals. Enhanced variants of free energy approximations (Yedidia et al., 2005) or loop corrections (Mooij et al., 2007) are prudent alternatives that improve the accuracy but also increase the complexity.
16
+
17
+ In this work, we follow a different path: we aim to improve the approximation quality of the Bethe free energy itself. To address this problem, we modify the structure of the graphical model and show how this transforms the Bethe free energy in a way that moves its local minima closer to the exact marginals. In particular, we analyze the effect of removing individual edges from the graph. This loop-breaking approach enforces a 'reconvexification' of the Bethe free energy and therefore not only improves the accuracy of fixed points, but also the convergence behavior of BP.
18
+
19
+ We make a series of interesting theoretical contributions that arise from analyzing the behavior of the Bethe free energy on a 'small scale'. More precisely, we introduce a measure for the discrepancy between two different representations of the Bethe free energy, each induced by a different graphical model, and then utilize this tool to relate variations in the Bethe free energy to the characteristics of the model and the behavior of BP. Theoretically and experimentally, we address the following questions: (i) How does edge removal influence the estimated marginals? (ii) How does edge removal influence the estimated partition function? (iii) Which and how many edges - if at all - should we remove? The structure of this paper is as follows: Sec. 2 summarizes all relevant background on graphical models, BP, and the Bethe approximation. Sec. 3 contains a detailed theoretical analysis and the main results. We experimentally validate our findings in Sec. 4 and conclude the paper in Sec. 5.
20
+
21
+ § 2 BACKGROUND
22
+
23
+ This introductory section provides a compact overview of the topics that we deal with: probabilistic graphical models, belief propagation, and the Bethe approximation. We further introduce the Ising model and discuss related work.
24
+
25
+ § 2.1 PROBABILISTIC GRAPHICAL MODELS
26
+
27
+ We consider an undirected graph $\mathcal{G} = \left( {\mathbf{X},\mathbf{E}}\right)$ , where $\mathbf{X} =$ $\{ 1,\ldots ,N\}$ is a set of nodes and $\mathbf{E} \subseteq \{ \left( {i,j}\right) : i,j \in \mathbf{X}\}$ is a set of edges. An edge connects two nodes if $\left( {i,j}\right) \in \mathbf{E}$ . Note that we assume all edges to be undirected and hence $\left( {i,j}\right) = \left( {j,i}\right)$ for all pairs of connected nodes; specifically we do not count edges twice. Furthermore, $\mathcal{N}\left( i\right)$ denotes the neighborhood of node $i$ (i.e., the set of nodes that are connected to $i$ ) and ${d}_{i} \mathrel{\text{ := }} \left| {\mathcal{N}\left( i\right) }\right|$ denotes the degree of $i$ .
28
+
29
+ Let ${X}_{1},\ldots ,{X}_{N}$ be random variables (RVs) with state spaces ${\mathcal{X}}_{1},\ldots ,{\mathcal{X}}_{N}$ . A probabilistic graphical model (PGM) represents a joint probability distribution ${P}_{\mathbf{X}}\left( \mathbf{x}\right)$ over the RVs, where each node represents a RV ${}^{1}$ and edges indicate statistical dependencies between RVs. Formally, a PGM is a pair $\left( {\mathcal{G},\mathbf{\Phi }}\right)$ that associates a set of potential functions (or potentials) $\mathbf{\Phi } = \left\{ {{\Phi }_{1}\left( {\mathbf{x}}_{\mathbf{1}}\right) ,\ldots ,{\Phi }_{K}\left( {\mathbf{x}}_{\mathbf{K}}\right) }\right\}$ with the graph $\mathcal{G}$ , defined over joint realizations of subsets ${\mathbf{X}}_{1},\ldots ,{\mathbf{X}}_{K} \subseteq \mathbf{X}$ . We shall focus on the class of binary pairwise models ${}^{2}$ that satisfy the following two assumptions: first, each RV has two states, i.e., ${\mathcal{X}}_{i} = \mathcal{X} = \{ + 1, - 1\}$ . Second, the potentials are defined over either one (singleton potentials ${\Phi }_{i}\left( {x}_{i}\right)$ ) or two (pairwise potentials ${\Phi }_{ij}\left( {{x}_{i},{x}_{j}}\right)$ ) RVs. Then the joint distribution factorizes as
30
+
31
+ $$
32
+ {P}_{\mathbf{X}}\left( \mathbf{x}\right) = \frac{1}{Z}\mathop{\prod }\limits_{{\left( {i,j}\right) \in \mathbf{E}}}{\Phi }_{ij}\left( {{x}_{i},{x}_{j}}\right) \mathop{\prod }\limits_{{i = 1}}^{N}{\Phi }_{i}\left( {x}_{i}\right) , \tag{1}
33
+ $$
34
+
35
+ where $Z$ is the normalization constant or partition function.
36
+
37
+ We consider the Ising model, whose potentials have the form ${\Phi }_{ij}\left( {{x}_{i},{x}_{j}}\right) = \exp \left( {{J}_{ij}{x}_{i}{x}_{j}}\right)$ and ${\Phi }_{i}\left( {x}_{i}\right) = \exp \left( {{\theta }_{i}{x}_{i}}\right)$ , with ${J}_{ij} \in \mathbb{R}$ being the coupling strength of edge $\left( {i,j}\right)$ and ${\theta }_{i} \in \mathbb{R}$ being the local field of node $i$ . We further call an edge $\left( {i,j}\right)$ attractive if ${J}_{ij} > 0$ , and repulsive if ${J}_{ij} < 0$ .
38
+
39
+ Then (1) takes the exponential form
40
+
41
+ $$
42
+ {P}_{\mathbf{X}}\left( \mathbf{x}\right) = \frac{1}{Z}\exp \left( {-E\left( \mathbf{x}\right) }\right) \tag{2}
43
+ $$
44
+
45
+ with $E\left( \mathbf{x}\right) \mathrel{\text{ := }} - \mathop{\sum }\limits_{{\left( {i,j}\right) \in \mathbf{E}}}{J}_{ij}{x}_{i}{x}_{j} - \mathop{\sum }\limits_{{i = 1}}^{N}{\theta }_{i}{x}_{i}$ being the state energy.
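For intuition, the partition function and an exact singleton marginal of a small Ising model can be computed by brute-force enumeration of the $2^N$ states (the couplings and fields below are arbitrary example values):

```python
import itertools
import math

# A tiny 3-node Ising model (arbitrary example parameters).
J = {(0, 1): 0.5, (1, 2): -0.3}        # one attractive, one repulsive edge
theta = {0: 0.2, 1: 0.0, 2: -0.1}

def unnorm(x):
    """Unnormalized probability: the product of all Ising potentials (1)."""
    return math.exp(sum(Jij * x[i] * x[j] for (i, j), Jij in J.items())
                    + sum(th * x[i] for i, th in theta.items()))

states = list(itertools.product((+1, -1), repeat=3))
Z = sum(unnorm(x) for x in states)       # partition function, 2^N terms

# Exact singleton marginal P_0(X_0 = +1), obtained by summing out X_1, X_2.
p0 = sum(unnorm(x) for x in states if x[0] == +1) / Z
```

Exact enumeration like this is only feasible for very small $N$, which is precisely why the approximations discussed below are needed.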
46
+
47
+ Finally, following the terminology in Knoll et al. (2021), we specify three types of models: unidirectional models contain only attractive edges, and all variables are biased towards the same state (i.e., either ${\theta }_{i} \leq 0$ for all $i$ or ${\theta }_{i} \geq 0$ for all $i$ ); attractive models contain only attractive edges, but the local fields may differ in sign; general models may contain both attractive and repulsive edges. Note that, by definition, attractive models include unidirectional models, while general models include both unidirectional and attractive models.
48
+
49
+ § 2.2 BELIEF PROPAGATION
50
+
51
+ In this work, we consider two problems: first, the computation of marginal distributions where our specific interest lies in singleton marginals ${P}_{{X}_{i}}\left( {x}_{i}\right)$ ; and second, the computation of the partition function. It is well known that an exact computation of these quantities is intractable (Valiant, 1979; Cooper, 1990) and even the approximation of the marginals to a certain precision is NP-hard (Dagum and Luby, 1993).
52
+
53
+ Belief propagation (BP) approximates the marginals by iteratively exchanging local statistical information between nodes in the form of 'messages'. This process is governed by the recursive message update equations
54
+
55
+ $$
56
+ {\mu }_{ij}^{\left( n + 1\right) }\left( {x}_{j}\right) \propto \mathop{\sum }\limits_{{{x}_{i} \in \mathcal{X}}}{\Phi }_{ij}\left( {{x}_{i},{x}_{j}}\right) {\Phi }_{i}\left( {x}_{i}\right) \mathop{\prod }\limits_{{k \in \mathcal{N}\left( i\right) \smallsetminus j}}{\mu }_{ki}^{\left( n\right) }\left( {x}_{i}\right) , \tag{3}
+ $$
60
+
61
+ where the superscript $\left( n\right)$ refers to the current iteration and the subscript ${ij}$ refers to the direction in which a message is sent (e.g., node $i$ sends ${\mu }_{ij}$ to node $j$ ). In principle, one can approximate the singleton marginals at any iteration by multiplying all incoming messages with the local potential:
62
+
63
+ $$
64
+ {\widetilde{P}}_{i}^{\left( n\right) }\left( {x}_{i}\right) = \frac{1}{{Z}_{i}}{\Phi }_{i}\left( {x}_{i}\right) \mathop{\prod }\limits_{{k \in \mathcal{N}\left( i\right) }}{\mu }_{ki}^{\left( n\right) }\left( {x}_{i}\right) \tag{4}
65
+ $$
66
+
67
+ Generally, however, marginal estimates are considered to be more accurate when they are obtained from a BP fixed point (Murphy et al., 1999; Knoll et al., 2017); more precisely, BP has converged to a fixed point ${\mu }_{ij}^{ \circ }$ whenever an update of the form (3) no longer alters the message values (that is, ${\mu }_{ij}^{\left( n + 1\right) } = {\mu }_{ij}^{\left( n\right) }$ for all $\left( {i,j}\right)$ ).
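A minimal sketch of these updates for the binary pairwise case, with synchronous message passing (the triangle model and its parameters are arbitrary examples; messages are normalized dicts over $x \in \{+1,-1\}$):

```python
import math

# Minimal synchronous BP for a binary pairwise (Ising) model, cf. (3)-(4).
J = {(0, 1): 0.5, (1, 2): -0.3, (0, 2): 0.4}   # a single loop (triangle)
theta = {0: 0.2, 1: 0.0, 2: -0.1}
nodes = [0, 1, 2]
edges = list(J)
nbrs = {i: [j for (a, b) in edges for j in (a, b)
            if i in (a, b) and j != i] for i in nodes}

def phi(i, j, xi, xj):
    """Pairwise Ising potential exp(J_ij * x_i * x_j)."""
    return math.exp(J.get((i, j), J.get((j, i))) * xi * xj)

msg = {(i, j): {+1: 0.5, -1: 0.5}
       for (a, b) in edges for (i, j) in ((a, b), (b, a))}
for _ in range(100):                      # synchronous updates of (3)
    new = {}
    for (i, j) in msg:
        m = {xj: sum(phi(i, j, xi, xj) * math.exp(theta[i] * xi)
                     * math.prod(msg[k, i][xi] for k in nbrs[i] if k != j)
                     for xi in (+1, -1))
             for xj in (+1, -1)}
        s = m[+1] + m[-1]
        new[(i, j)] = {x: v / s for x, v in m.items()}
    msg = new

def belief(i):
    """Singleton estimate from all incoming messages, cf. (4)."""
    b = {xi: math.exp(theta[i] * xi)
         * math.prod(msg[k, i][xi] for k in nbrs[i]) for xi in (+1, -1)}
    s = b[+1] + b[-1]
    return {x: v / s for x, v in b.items()}
```

On this weakly coupled triangle the beliefs land close to the exact marginals; on stronger or larger loops they can deviate substantially, which is the failure mode studied in this paper.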
68
+
69
+ ${}^{1}$ Due to the one-to-one correspondence between variables ${X}_{i}$ and nodes $i$ , we shall not rigorously distinguish between them; e.g., we often write ${P}_{i}\left( {x}_{i}\right)$ instead of ${P}_{{X}_{i}}\left( {x}_{i}\right)$ .
+
+ ${}^{2}$ A wide range of models can be equivalently transformed into binary pairwise models, although this may increase the state space considerably (Weiss, 2000; Eaton and Ghahramani, 2013).
+
+ ${}^{3}$ This parameterization often facilitates the theoretical analysis, as it associates a unique parameter vector, consisting of all ${J}_{ij}$ and ${\theta }_{i}$ , with each distribution (a so-called minimal representation, Wainwright et al. (2008)).
+
+ ${}^{4}$ Actually, these two problems are closely related, as marginals can be expressed as a ratio of sub-partition functions (Weller and Jebara, 2014b).
76
+
77
+ § 2.3 BETHE APPROXIMATION
78
+
79
+ It is often useful to formulate marginal inference in terms of a variational problem. For this purpose, we consider some trial distribution ${Q}_{\mathbf{X}}\left( \mathbf{x}\right)$ and define the Gibbs free energy as
80
+
81
+ $$
82
+ \mathcal{F}\left( {Q}_{\mathbf{X}}\right) = \mathcal{U}\left( {Q}_{\mathbf{X}}\right) - \mathcal{S}\left( {Q}_{\mathbf{X}}\right) \tag{5}
83
+ $$
84
+
85
+ with $\mathcal{U} = {\mathbb{E}}_{Q}\left\lbrack {E\left( \mathbf{X}\right) }\right\rbrack$ being the average energy of the model and $\mathcal{S} = - \mathop{\sum }\limits_{{\mathbf{x} \in {\mathcal{X}}^{N}}}{Q}_{\mathbf{X}}\left( \mathbf{x}\right) \log {Q}_{\mathbf{X}}\left( \mathbf{x}\right)$ being the entropy of ${Q}_{\mathbf{X}}\left( \mathbf{x}\right)$ . Let us further define the marginal polytope $\mathbb{M}$ as the set of all valid probability distributions over $\mathbf{X}$ (i.e., that satisfy all global and local marginalization and normalization constraints). If one minimizes $\mathcal{F}$ over $\mathbb{M}$ , then one recovers the true distribution ${P}_{\mathbf{X}}\left( \mathbf{x}\right)$ with the negative log-partition function $- \log \left( Z\right)$ as the functional value at the global minimum, i.e., $- \log \left( Z\right) = \mathop{\min }\limits_{\mathbb{M}}\mathcal{F}\left( {Q}_{\mathbf{X}}\right) = \mathcal{F}\left( {P}_{\mathbf{X}}\right)$ .
88
+
89
+ Two aspects, however, render the minimization of the Gibbs free energy intractable: first, the marginal polytope is defined by exponentially many constraints; second, evaluating the entropy requires summing over exponentially many terms. The Bethe free energy approximation addresses these two issues as follows: first, it relaxes the marginal polytope $\mathbb{M}$ to the local polytope $\mathbb{L}$ that involves only local marginalization and normalization constraints of the pairwise and singleton 'pseudo-marginals' ${\widetilde{P}}_{ij}$ and ${\widetilde{P}}_{i}$ :
90
+
91
+ $$
+ \mathbb{L} = \left\{ {{\widetilde{P}}_{ij},{\widetilde{P}}_{i} : \mathop{\sum }\limits_{{x}_{j}}{\widetilde{P}}_{ij}\left( {{x}_{i},{x}_{j}}\right) = {\widetilde{P}}_{i}\left( {x}_{i}\right) ,\;\mathop{\sum }\limits_{{{x}_{i},{x}_{j}}}{\widetilde{P}}_{ij}\left( {{x}_{i},{x}_{j}}\right) = 1,\;\mathop{\sum }\limits_{{x}_{i}}{\widetilde{P}}_{i}\left( {x}_{i}\right) = 1;\;\left( {i,j}\right) \in \mathbf{E},\, i \in \mathbf{X}}\right\} . \tag{6}
+ $$
102
+
103
+ Second, it replaces the entropy $\mathcal{S}$ by an accordingly weighted sum of entropy contributions from edges and nodes. More concretely, the Bethe free energy is defined as ${\mathcal{F}}_{B} = {\mathcal{U}}_{B} - {\mathcal{S}}_{B}$ where the Bethe average energy is
104
+
105
+ $$
+ {\mathcal{U}}_{B} = - \mathop{\sum }\limits_{{\left( {i,j}\right) \in \mathbf{E}}}\mathop{\sum }\limits_{{{x}_{i},{x}_{j} \in \mathcal{X}}}{\widetilde{P}}_{ij}\left( {{x}_{i},{x}_{j}}\right) \log {\Phi }_{ij}\left( {{x}_{i},{x}_{j}}\right) - \mathop{\sum }\limits_{{i = 1}}^{N}\mathop{\sum }\limits_{{{x}_{i} \in \mathcal{X}}}{\widetilde{P}}_{i}\left( {x}_{i}\right) \log {\Phi }_{i}\left( {x}_{i}\right) , \tag{7}
+ $$
112
+
113
+ and the Bethe entropy is
114
+
115
+ $$
+ {\mathcal{S}}_{B} = - \mathop{\sum }\limits_{{\left( {i,j}\right) \in \mathbf{E}}}\mathop{\sum }\limits_{{{x}_{i},{x}_{j} \in \mathcal{X}}}{\widetilde{P}}_{ij}\left( {{x}_{i},{x}_{j}}\right) \log {\widetilde{P}}_{ij}\left( {{x}_{i},{x}_{j}}\right) + \mathop{\sum }\limits_{{i = 1}}^{N}\left( {{d}_{i} - 1}\right) \mathop{\sum }\limits_{{{x}_{i} \in \mathcal{X}}}{\widetilde{P}}_{i}\left( {x}_{i}\right) \log {\widetilde{P}}_{i}\left( {x}_{i}\right) . \tag{8}
+ $$
122
+
123
+ While ${\mathcal{U}}_{B}$ equals the true average energy $\mathcal{U}$ in the exact marginals, ${\mathcal{S}}_{B}$ is only an approximation to the true entropy $\mathcal{S}$ , unless the graph is a tree (Yedidia et al., 2005). To obtain locally consistent approximations to the exact marginals, one usually aims to minimize ${\mathcal{F}}_{B}$ over $\mathbb{L}$ . Also, the so-called Bethe partition function ${Z}_{B}$ , implicitly defined by $- \log \left( {Z}_{B}\right) = \mathop{\min }\limits_{\mathbb{L}}{\mathcal{F}}_{B}$ , provides an estimate of $Z$ .
124
+
125
+ Binary variables allow for a particularly simple description of the local polytope. Following the notation of Welling and Teh (2001); Weller and Jebara (2013), we define the pseudo-marginal distribution of ${X}_{i}$ by ${\widetilde{P}}_{i}\left( {{X}_{i} = + 1}\right) = {q}_{i}$ (implying ${\widetilde{P}}_{i}\left( {{X}_{i} = - 1}\right) = 1 - {q}_{i}$ ) and, for any pair of connected nodes, we denote the joint pseudo-marginal probability ${\widetilde{P}}_{ij}\left( {{X}_{i} = + 1,{X}_{j} = + 1}\right)$ by ${\xi }_{ij}$ . Then the local marginalization and normalization constraints induce a full specification of the joint probability table between ${X}_{i}$ and ${X}_{j}$ in terms of the three parameters ${q}_{i},{q}_{j},{\xi }_{ij}$ (Tab. 1).
126
+
127
+ Table 1: Variational joint probability table for two binary variables ${X}_{i}$ and ${X}_{j}$ .
128
+
129
+ | | ${X}_{j} = + 1$ | ${X}_{j} = - 1$ | |
+ | --- | --- | --- | --- |
+ | ${X}_{i} = + 1$ | ${\xi }_{ij}$ | ${q}_{i} - {\xi }_{ij}$ | ${q}_{i}$ |
+ | ${X}_{i} = - 1$ | ${q}_{j} - {\xi }_{ij}$ | $1 + {\xi }_{ij} - {q}_{i} - {q}_{j}$ | $1 - {q}_{i}$ |
+ | | ${q}_{j}$ | $1 - {q}_{j}$ | |
140
+
141
+ If we assume that all probabilities are strictly positive, then ${\xi }_{ij}$ is bounded by
142
+
143
+ $$
144
+ \max \left( {0,{q}_{i} + {q}_{j} - 1}\right) < {\xi }_{ij} < \min \left( {{q}_{i},{q}_{j}}\right) . \tag{9}
145
+ $$
146
+
147
+ Inserting singleton and pairwise pseudo-marginals from Table 1 together with the Ising potentials from Sec. 2.1 into (7) and (8), the Bethe free energy becomes
148
+
149
+ $$
+ {\mathcal{F}}_{B} = - \mathop{\sum }\limits_{{\left( {i,j}\right) \in \mathbf{E}}}\left( {1 + 2\left( {2{\xi }_{ij} - {q}_{i} - {q}_{j}}\right) }\right) {J}_{ij} + \mathop{\sum }\limits_{{i = 1}}^{N}\left( {1 - 2{q}_{i}}\right) {\theta }_{i} - \mathop{\sum }\limits_{{\left( {i,j}\right) \in \mathbf{E}}}{S}_{ij} + \mathop{\sum }\limits_{{i = 1}}^{N}\left( {{d}_{i} - 1}\right) {S}_{i}, \tag{10}
+ $$
156
+
157
+ where the pairwise entropies are
158
+
159
+ $$
+ {S}_{ij} = - {\xi }_{ij}\log {\xi }_{ij} - \left( {{q}_{i} - {\xi }_{ij}}\right) \log \left( {{q}_{i} - {\xi }_{ij}}\right) - \left( {{q}_{j} - {\xi }_{ij}}\right) \log \left( {{q}_{j} - {\xi }_{ij}}\right) - \left( {1 + {\xi }_{ij} - {q}_{i} - {q}_{j}}\right) \log \left( {1 + {\xi }_{ij} - {q}_{i} - {q}_{j}}\right) \tag{11}
+ $$
170
+
171
+ and the local entropies are
172
+
173
+ $$
174
+ {S}_{i} = - {q}_{i}\log {q}_{i} - \left( {1 - {q}_{i}}\right) \log \left( {1 - {q}_{i}}\right) . \tag{12}
175
+ $$
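Equations (10)-(12) can be sanity-checked on a single-edge model: the graph is then a tree, so the Bethe free energy evaluated at the exact marginals must equal $-\log Z$ (the parameter values below are arbitrary):

```python
import math

# One-edge Ising model; with d_i = d_j = 1 the singleton-entropy terms
# of (10) drop out, and the Bethe approximation is exact on this tree.
Jij, ti, tj = 0.7, 0.3, -0.2             # arbitrary example parameters

def unnorm(xi, xj):
    return math.exp(Jij * xi * xj + ti * xi + tj * xj)

Z = sum(unnorm(xi, xj) for xi in (+1, -1) for xj in (+1, -1))
qi = sum(unnorm(+1, xj) for xj in (+1, -1)) / Z      # P(X_i = +1)
qj = sum(unnorm(xi, +1) for xi in (+1, -1)) / Z      # P(X_j = +1)
xi_ = unnorm(+1, +1) / Z                             # P(X_i = +1, X_j = +1)

# Pairwise entropy S_ij of the joint table (Tab. 1), Eq. (11).
table = (xi_, qi - xi_, qj - xi_, 1 + xi_ - qi - qj)
S_ij = -sum(p * math.log(p) for p in table)

# Energy terms of Eq. (10).
U_B = (-(1 + 2 * (2 * xi_ - qi - qj)) * Jij
       + (1 - 2 * qi) * ti + (1 - 2 * qj) * tj)
F_B = U_B - S_ij
assert abs(F_B + math.log(Z)) < 1e-9     # F_B = -log Z on a tree
```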
176
+
177
+ Then, with $\left( {\mathbf{q};\mathbf{\xi }}\right)$ being the vector that contains all ${q}_{i}$ and ${\xi }_{ij}$ , the local polytope now takes the much simpler form
178
+
179
+ $$
+ \mathbb{L} = \left\{ {\left( {\mathbf{q};\mathbf{\xi }}\right) \in {\mathbb{R}}^{\left| \mathbf{X}\right| + \left| \mathbf{E}\right| } : 0 < {q}_{i} < 1,i \in \mathbf{X};\;\max \left( {0,{q}_{i} + {q}_{j} - 1}\right) < {\xi }_{ij} < \min \left( {{q}_{i},{q}_{j}}\right) ,\left( {i,j}\right) \in \mathbf{E}}\right\} . \tag{13}
+ $$
+
+ To facilitate the task of minimizing ${\mathcal{F}}_{B}$ over $\mathbb{L}$ , Welling and Teh (2001) have derived necessary conditions for local minima of ${\mathcal{F}}_{B}$ . By setting the partial derivative $\frac{\partial }{\partial {\xi }_{ij}}{\mathcal{F}}_{B}$ to zero, they proved that the resulting quadratic equation
188
+
189
+ $$
190
+ {\alpha }_{ij}{\xi }_{ij}^{2} - \left( {1 + {\alpha }_{ij}\left( {{q}_{i} + {q}_{j}}\right) }\right) {\xi }_{ij} + \left( {1 + {\alpha }_{ij}}\right) {q}_{i}{q}_{j} = 0, \tag{14}
191
+ $$
192
+
193
+ ${}^{5}$ For more details on variational inference in graphical models and the marginal polytope, we refer the reader to Wainwright et al. (2008); Mezard and Montanari (2009).
194
+
195
+ where
+
+ $$
+ {\alpha }_{ij} = {e}^{4{J}_{ij}} - 1, \tag{15}
+ $$
196
+
197
+ has a unique valid (i.e., inside the bounds (9)) solution
198
+
199
+ $$
+ {\xi }_{ij}^{ * }\left( {{q}_{i},{q}_{j}}\right) = \frac{1}{2{\alpha }_{ij}}\left( {\left( {1 + {\alpha }_{ij}\left( {{q}_{i} + {q}_{j}}\right) }\right) - \sqrt{{\left( 1 + {\alpha }_{ij}\left( {{q}_{i} + {q}_{j}}\right) \right) }^{2} - 4{\alpha }_{ij}\left( {1 + {\alpha }_{ij}}\right) {q}_{i}{q}_{j}}}\right) . \tag{16}
+ $$
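A quick numerical check of (14)-(16): the closed-form root lies strictly inside the bounds (9) and satisfies the quadratic (the parameter values are arbitrary; the branch for $J \approx 0$ follows from setting $\alpha_{ij} = 0$ in (14), where the variables decouple):

```python
import math

def xi_star(J, qi, qj):
    """Valid root (16) of the stationarity condition (14) for one edge."""
    a = math.exp(4.0 * J) - 1.0                  # alpha_ij, Eq. (15)
    if abs(a) < 1e-12:                           # J = 0: variables decouple
        return qi * qj
    b = 1.0 + a * (qi + qj)
    disc = b * b - 4.0 * a * (1.0 + a) * qi * qj
    return (b - math.sqrt(disc)) / (2.0 * a)

J, qi, qj = 0.8, 0.6, 0.3                        # arbitrary example values
xi = xi_star(J, qi, qj)
a = math.exp(4.0 * J) - 1.0
residual = a * xi**2 - (1.0 + a * (qi + qj)) * xi + (1.0 + a) * qi * qj
assert abs(residual) < 1e-9                      # xi solves Eq. (14)
assert max(0.0, qi + qj - 1.0) < xi < min(qi, qj)  # inside the bounds (9)
```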
206
+
207
+ This means that for each edge $\left( {i,j}\right)$ , the only candidate ${\xi }_{ij}^{ * }\left( {{q}_{i},{q}_{j}}\right)$ that can be located at a stationary point of ${\mathcal{F}}_{B}$ depends directly on ${q}_{i}$ and ${q}_{j}$ and may therefore be inserted into the definition of ${\mathcal{F}}_{B}$ (10). This is advantageous for two reasons: first, it considerably reduces the number of independent variables involved in optimizing ${\mathcal{F}}_{B}$ (i.e., from $\left| \mathbf{X}\right| + \left| \mathbf{E}\right|$ to $\left| \mathbf{X}\right|$ ); second, it simplifies the shape of the domain, as ${\mathcal{F}}_{B}$ is now defined on a box-constrained domain, the Bethe box
208
+
209
+ $$
210
+ \mathbb{B} = \left\{ {\mathbf{q} \in {\mathbb{R}}^{\left| \mathbf{X}\right| } : 0 < {q}_{i} < 1,i \in \mathbf{X}}\right\} . \tag{17}
211
+ $$
212
+
213
+ In this work, we always refer to the Bethe free energy as ${\mathcal{F}}_{B}$ , whether it is defined over the local polytope or the Bethe box.
214
+
215
+ § 2.4 RELATED WORK
216
+
217
+ Since the seminal work of Yedidia et al. (2001), it is well known that fixed points of BP correspond one-to-one to stationary points of the Bethe free energy; moreover, stable fixed points of BP must always be associated with local minima of the Bethe free energy (Heskes, 2003). Consequently, one can try to overcome the convergence issue of BP by minimizing the Bethe free energy directly. To solve this problem, Welling and Teh (2001); Shin (2012) have derived gradient-based algorithms; Yuille (2002); Heskes (2006) have proposed provably convergent double-loop algorithms.
218
+
219
+ Yedidia et al. (2005) have also shown that BP is only a special case of a general class of message passing algorithms, the generalized belief propagation (GBP). Likewise, fixed points of these algorithms correspond to stationary points of the so-called Kikuchi free energies that try to approximate the true entropy by a sum over entropy contributions from larger node clusters (Kikuchi, 1951; Pelizzola, 2005). In practice, many of these methods can be prohibitively slow and may suffer in the same way as BP from non-convexity of the particular free energy approximation; i.e., they may converge to suboptimal minima, if at all. This inspired various researchers to design free energy approximations that are convex (Wainwright et al., 2005; Globerson and Jaakkola, 2007b), some of which are related to convergent message passing algorithms (Kolmogorov and Wainwright, 2006; Globerson and Jaakkola, 2007a; Hazan and Shashua, 2008; Meltzer et al., 2009; Jancsary and Matz, 2011).
222
+
223
+ The Bethe approximation often proves to be superior to other methods in terms of the tradeoff between efficiency and accuracy (Meshi et al., 2009). Its theoretical properties have therefore been intensely studied: Heskes (2004); Pakzad and Anantharam (2005) derived conditions for the convexity of the Bethe free energy. Chertkov and Chernyak (2006) formulated the so-called loop series expansion that directly relates the Bethe partition function to the true partition function. Others have found interesting connections between the Bethe approximation and classical graph theory (Watanabe and Fukumizu, 2009; Vontobel, 2013). Moreover, Weller and Jebara (2014a) derived an FPTAS ${}^{6}$ to approximate the Bethe partition function in attractive models.
224
+
225
+ Another line of research, in some sense complementary to variational inference, tries to approximate the graphical model itself. The classical Chow-Liu algorithm (Chow and Liu, 1968) finds a spanning tree such that the Kullback-Leibler (KL) divergence between the original distribution and the tree distribution is minimal. Furthermore, two different techniques are worth noting: first, the 'annihilation' of small probabilities that are below a certain threshold (Jensen and Andersen, 1990); second, the deletion of one or more edges from the model (not necessarily until a spanning tree is reached). Due to its empirical success, the second method deserves special attention: Kjaerulff (1994) carefully selected edges whose removal decreases the treewidth of a graph. van Engelen (1997) studied how the removal of edges in a directed graph influences the KL divergence. Choi and Darwiche (2006) showed that a particular class of GBP, the so-called join graph propagation (Dechter et al., 2002), can be equivalently cast in terms of a procedure that consecutively deletes and recovers edges. In the past, these methods were primarily applied to perform exact inference in the approximated model. For large graphs, this often remains a hard computational challenge.
226
+
227
+ § 3 THEORETICAL ANALYSIS
228
+
229
+ We shall now devote our attention to the central topic of this work: how removing edges from a graphical model influences the properties of BP. Since the accuracy of the exact marginals degrades if one approximates a model by a sparser one (van Engelen, 1997), one might expect a similar behavior for the marginals estimated by BP. We show that the opposite is the case: sparsifying the graph often significantly improves the marginal accuracy of BP. The quality of the estimated partition function, however, tends to degrade by deviating from the original model.
230
+
231
+ In this section, we explain these phenomena theoretically. We further analyze the role of the 'optimal' edge to be removed and relate this problem to the Bethe free energy. In particular, we prove an inherent relationship between global error measures on the Bethe free energy and the coupling strength of the edges. Our detailed analysis of the Bethe free energy on a small scale extends the work of Welling and Teh (2001); Weller and Jebara (2013); Weller et al. (2014) and leads to a better understanding of BP in general.
232
+
233
+ ${}^{6}$ Fully polynomial-time approximation scheme.
234
+
235
+ § 3.1 PROBLEM SPECIFICATION
236
+
237
+ We briefly clarify the problem to be considered. Let $\left( {\mathcal{G},\mathbf{\Phi }}\right)$ be a PGM and let $\left( {{\mathcal{G}}^{\prime },{\mathbf{\Phi }}^{\prime }}\right)$ be a second PGM that is obtained by removing a set of edges $\widetilde{\mathbf{E}}$ (and the associated pairwise potentials) from the original model. ${}^{7}$ Let $\mathcal{P} \mathrel{\text{ := }} \left\{ {{P}_{i} : i \in \mathbf{X}}\right\}$ be the set of exact (singleton) marginals on $\left( {\mathcal{G},\mathbf{\Phi }}\right)$ and $Z$ be the partition function. Assume that we run BP on both models and obtain pseudo-marginals $\widetilde{\mathcal{P}} \mathrel{\text{ := }} \left\{ {{\widetilde{P}}_{i} : i \in \mathbf{X}}\right\}$ on $\left( {\mathcal{G},\mathbf{\Phi }}\right)$ resp. ${\widetilde{\mathcal{P}}}^{\prime } \mathrel{\text{ := }} \left\{ {{\widetilde{P}}_{i}^{\prime } : i \in \mathbf{X}}\right\}$ on $\left( {{\mathcal{G}}^{\prime },{\mathbf{\Phi }}^{\prime }}\right)$, together with partition function estimates $\widetilde{Z}$ resp. ${\widetilde{Z}}^{\prime }$. ${}^{8}$ Then we are interested in comparing the following quantities: first, the ${l}^{1}$-errors ${\begin{Vmatrix}{\mathcal{P}}_{\mathbf{X}} - {\widetilde{\mathcal{P}}}_{\mathbf{X}}\end{Vmatrix}}_{{l}^{1}}$ and ${\begin{Vmatrix}{\mathcal{P}}_{\mathbf{X}} - {\widetilde{\mathcal{P}}}_{\mathbf{X}}^{\prime }\end{Vmatrix}}_{{l}^{1}}$; and second, the absolute errors $\left| {\log Z - \log \widetilde{Z}}\right|$ and $\left| {\log Z - \log {\widetilde{Z}}^{\prime }}\right|$.
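These two error measures can be written down directly. A minimal sketch, assuming binary variables with marginals passed as dicts mapping each variable $i$ to $P(X_i = 1)$; the function names and the one-term-per-variable convention for the $l^1$ distance are ours:

```python
import math

def marginal_l1_error(P, P_hat):
    """l1 distance between exact and BP-estimated singleton marginals.

    P, P_hat: dicts {i: P(X_i = 1)}.  We sum |P_i - P_hat_i| over all
    variables (one term per binary variable, a convention assumed here).
    """
    return sum(abs(P[i] - P_hat[i]) for i in P)

def log_partition_error(Z, Z_hat):
    """Absolute error |log Z - log Z_hat| between partition functions."""
    return abs(math.log(Z) - math.log(Z_hat))
```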
238
+
239
+ Ideally, we would like to remove a set of edges such that the induced errors ${\begin{Vmatrix}{\mathcal{P}}_{\mathbf{X}} - {\widetilde{\mathcal{P}}}_{\mathbf{X}}^{\prime }\end{Vmatrix}}_{{l}^{1}}$ and $\left| {\log Z - \log {\widetilde{Z}}^{\prime }}\right|$ become minimal over all subsets $\widetilde{\mathbf{E}} \subseteq \mathbf{E}$. That is, we want to find the model for which BP best approximates the marginals and the partition function of the original model. This is of course an intractable problem (in particular, as we do not have access to the exact quantities). Still, it remains a crucial question whether and to what extent the removal of edges has a positive impact on the estimates. To identify edges to be deleted, we need to define an objective that captures the discrepancy between different graphical models. Note that a global comparison via the KL divergence and its generalizations (Minka, 2005) is prohibitive, as this would involve a summation over exponentially many terms. Likewise, it is intractable to compare different representations of the Gibbs free energy (5). To relax the problem, we focus on the analysis of local discrepancies between two models. The Bethe free energy (10) provides an ideal tool to explicitly measure these local differences.
240
+
241
+ § 3.2 THE BETHE ENERGY DIFFERENCE
242
+
243
+ Our main idea to make model comparison tractable lies in comparing two different representations of the Bethe free energy. We formalize this concept as follows: assume for now that we remove a single edge $\left( {i,j}\right)$ from a model $\left( {\mathcal{G},\mathbf{\Phi }}\right)$ and let $\left( {{\mathcal{G}}^{\smallsetminus \left( {i,j}\right) },{\mathbf{\Phi }}^{\smallsetminus \left( {i,j}\right) }}\right)$ denote the resulting model. Let further ${\mathcal{F}}_{B}$ resp. ${\mathcal{F}}_{B}^{\smallsetminus \left( {i,j}\right) }$ be the representations of the Bethe free energy that are associated with $\left( {\mathcal{G},\mathbf{\Phi }}\right)$ resp. $\left( {{\mathcal{G}}^{\smallsetminus \left( {i,j}\right) },{\mathbf{\Phi }}^{\smallsetminus \left( {i,j}\right) }}\right)$. Specifically, ${\mathcal{F}}_{B}^{\smallsetminus \left( {i,j}\right) }$ does not contain the pairwise energy and entropy contributions from edge $\left( {i,j}\right)$, while the local entropy contributions from nodes $i$ and $j$ are counted one time less than in the definition of ${\mathcal{F}}_{B}$ (10). Then we define the Bethe free energy difference $\Delta {\mathcal{F}}_{B}^{\left( i,j\right) }$ as the difference between ${\mathcal{F}}_{B}$ and ${\mathcal{F}}_{B}^{\smallsetminus \left( {i,j}\right) }$, i.e.,
244
+
245
+ $$
246
+ \Delta {\mathcal{F}}_{B}^{\left( i,j\right) } \mathrel{\text{ := }} {\mathcal{F}}_{B} - {\mathcal{F}}_{B}^{\smallsetminus \left( {i,j}\right) }
247
+ $$
248
+
249
+ $$
250
+ = \overset{ \mathrel{\text{ := }} \Delta {\mathcal{U}}_{B}^{\left( i,j\right) }}{\overbrace{-\left( {1 + 2\left( {2{\xi }_{ij} - {q}_{i} - {q}_{j}}\right) }\right) {J}_{ij}}} + \overset{ \mathrel{\text{ := }} {I}_{B}^{\left( i,j\right) }}{\overbrace{{S}_{i} + {S}_{j} - {S}_{ij}}}, \tag{18}
251
+ $$
252
+
253
+ where $\Delta {\mathcal{U}}_{B}^{\left( i,j\right) }$ is the difference in the Bethe average energy and ${I}_{B}^{\left( i,j\right) }$ is the mutual information between ${X}_{i}$ and ${X}_{j}$ .
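Equation (18) can be evaluated directly from the pseudo-marginals. The sketch below assumes binary states with $q_i = P(X_i = 1)$ and $\xi_{ij} = P(X_i = 1, X_j = 1)$, natural logarithms, and a feasible point of the sliced local polytope (19); the function name is ours:

```python
import math

def delta_bethe_free_energy(q_i, q_j, xi, J_ij):
    """Bethe free energy difference (18) for removing edge (i, j).

    q_i, q_j : singleton pseudo-marginals P(X_i = 1), P(X_j = 1)
    xi       : pairwise pseudo-marginal P(X_i = 1, X_j = 1)
    J_ij     : coupling strength of the edge
    """
    def H(*ps):
        # Shannon entropy of a discrete distribution (natural log)
        return -sum(p * math.log(p) for p in ps if p > 0.0)

    # Difference in the Bethe average energy (first term of (18))
    dU = -(1.0 + 2.0 * (2.0 * xi - q_i - q_j)) * J_ij

    # Pairwise table implied by (q_i, q_j, xi)
    p11, p10 = xi, q_i - xi
    p01, p00 = q_j - xi, 1.0 + xi - q_i - q_j

    # Mutual information term of (18): I_B = S_i + S_j - S_ij
    I_B = H(q_i, 1.0 - q_i) + H(q_j, 1.0 - q_j) - H(p11, p10, p01, p00)
    return dU + I_B
```

At independent uniform marginals (`q_i = q_j = 0.5`, `xi = 0.25`) both terms of (18) vanish, so the difference is zero for any coupling.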
254
+
255
+ Depending on whether we consider ${\mathcal{F}}_{B}$ on the local polytope $\mathbb{L}$ (13) or the Bethe box $\mathbb{B}$ (17), $\Delta {\mathcal{F}}_{B}^{\left( i,j\right) }$ is defined on slices of these objects, that is, either on the sliced local polytope
256
+
257
+ $$
258
+ {\mathbb{L}}^{\left( i,j\right) } \mathrel{\text{ := }} \left\{ {\left( {{q}_{i},{q}_{j};{\xi }_{ij}}\right) \in {\mathbb{R}}^{3} : 0 < {q}_{i},{q}_{j} < 1;}\right. \tag{19}
259
+ $$
260
+
261
+ $$
262
+ \max \left( {0,{q}_{i} + {q}_{j} - 1}\right) < {\xi }_{ij} < \min \left( {{q}_{i},{q}_{j}}\right) \}
263
+ $$
264
+
265
+ or the sliced Bethe box
266
+
267
+ $$
268
+ {\mathbb{B}}^{\left( i,j\right) } \mathrel{\text{ := }} \left\{ {\left( {{q}_{i},{q}_{j}}\right) \in {\mathbb{R}}^{2} : 0 < {q}_{i},{q}_{j} < 1}\right\} . \tag{20}
269
+ $$
270
+
271
+ It depends on only three resp. two variables and may therefore be considered a function that captures variational information about local changes in a model when an edge is removed. Moreover, it yields an effective way of measuring the local discrepancy between two graphical models, e.g., by computing an arbitrary norm of $\Delta {\mathcal{F}}_{B}^{\left( i,j\right) }$ on ${\mathbb{L}}^{\left( i,j\right) }$ or ${\mathbb{B}}^{\left( i,j\right) }$. In this work, we consider ${L}^{p}$-norms as the most natural choice and analyze the special cases $p = \infty$ and $p = 2$ in Sec. 3.3 (Theorem 1, Corollary 1, and Theorem 2).
272
+
273
+ In principle, one can generalize the above idea to compare models that result from removing multiple edges $\widetilde{\mathbf{E}}$ in one step, as the associated Bethe free energy difference $\Delta {\mathcal{F}}_{B}^{\widetilde{\mathbf{E}}}$ is then simply the sum of the energy differences $\Delta {\mathcal{F}}_{B}^{\left( i,j\right) }$ over all $\left( {i,j}\right)$ in $\widetilde{\mathbf{E}}$. However, this increases both the number of variables to be integrated over and the number of edge sets to be considered for removal. To facilitate the theoretical and experimental analysis of edge removal, we shall therefore focus on removing edges one by one.
274
+
275
+ We now derive a series of auxiliary results on the mathematical properties of ${\xi }_{ij}^{ * }$ from (16) and of $\Delta {\mathcal{F}}_{B}^{\left( i,j\right) }$. These results, besides being interesting in themselves, will help us prove our main results in Sec. 3.3. All proofs for Sec. 3.2 and 3.3 are contained in Appendix B.
276
+
277
+ Lemma 1. Let $\left( {i,j}\right)$ be an edge. At the center point $\left( {{0.5},{0.5}}\right)$ of the sliced Bethe box ${\mathbb{B}}^{\left( i,j\right) }$, the unique ${\xi }_{ij}^{ * }$ that can be located at a stationary point of ${\mathcal{F}}_{B}$ has the form
278
+
279
+ $$
280
+ {\xi }_{ij}^{ * }\left( {{0.5},{0.5}}\right) = \frac{\sigma \left( {2{J}_{ij}}\right) }{2}. \tag{21}
281
+ $$
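Lemma 1 can be checked numerically. The sketch below assumes (following the energy term in (18)) that, at fixed $(q_i, q_j)$, the $\xi_{ij}$-dependent part of ${\mathcal{F}}_{B}$ is $-4 J_{ij}\xi_{ij}$ plus the negative joint entropy, and locates its minimizer on the sliced local polytope (19) by a simple grid search; function names are ours:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def xi_star_numeric(q_i, q_j, J_ij, steps=100000):
    """Grid-search the stationary xi_ij at fixed (q_i, q_j) by minimizing
    the xi-dependent part of the Bethe free energy: the energy
    contribution -4*J_ij*xi (cf. (18)) plus the negative joint entropy."""
    lo = max(0.0, q_i + q_j - 1.0)   # feasibility bounds from (19)
    hi = min(q_i, q_j)

    def objective(xi):
        probs = (xi, q_i - xi, q_j - xi, 1.0 + xi - q_i - q_j)
        neg_entropy = sum(p * math.log(p) for p in probs if p > 0.0)
        return -4.0 * J_ij * xi + neg_entropy

    grid = (lo + (hi - lo) * (k + 0.5) / steps for k in range(steps))
    return min(grid, key=objective)
```

At the center point, `xi_star_numeric(0.5, 0.5, 1.0)` agrees with `sigmoid(2.0) / 2` up to grid precision, reproducing (21); for `J_ij = 0` the minimizer is the independent value $q_i q_j$.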
282
+
283
+ ${}^{7}$ Without loss of generality, we assume that the removal of $\widetilde{\mathbf{E}}$ does not make the graph disconnected (otherwise, individual connected components can be treated separately).
284
+
285
+ ${}^{8}$ Note that ${\widetilde{P}}_{\mathbf{X}}^{\prime }$ and ${\widetilde{Z}}^{\prime }$ are approximations to the exact marginals and partition function in the new model $\left( {{\mathcal{G}}^{\prime },{\mathbf{\Phi }}^{\prime }}\right)$ .
286
+
287
+ We will also have to analyze the behavior of ${\xi }_{ij}^{ * }$ if ${q}_{i}$ and ${q}_{j}$ approach the boundary $\partial {\mathbb{B}}^{\left( i,j\right) }$ of the sliced Bethe box:
288
+
289
+ Lemma 2. Let $\left( {i,j}\right)$ be an edge and let $k \in \left\lbrack {0,1}\right\rbrack$. The limits of ${\xi }_{ij}^{ * }$ at the boundary $\partial {\mathbb{B}}^{\left( i,j\right) }$ of the sliced Bethe box are
290
+
291
+ $$
292
+ \mathop{\lim }\limits_{\substack{{{q}_{i} \rightarrow 0} \\ {{q}_{j} \rightarrow k} }}{\xi }_{ij}^{ * }\left( {{q}_{i},{q}_{j}}\right) = 0 = \mathop{\lim }\limits_{\substack{{{q}_{i} \rightarrow k} \\ {{q}_{j} \rightarrow 0} }}{\xi }_{ij}^{ * }\left( {{q}_{i},{q}_{j}}\right) , \tag{22}
293
+ $$
294
+
295
+ $$
296
+ \mathop{\lim }\limits_{\substack{{{q}_{i} \rightarrow 1} \\ {{q}_{j} \rightarrow k} }}{\xi }_{ij}^{ * }\left( {{q}_{i},{q}_{j}}\right) = k = \mathop{\lim }\limits_{\substack{{{q}_{i} \rightarrow k} \\ {{q}_{j} \rightarrow 1} }}{\xi }_{ij}^{ * }\left( {{q}_{i},{q}_{j}}\right) . \tag{23}
297
+ $$
298
+
299
+ Moreover, we derive bounds on the mutual information ${I}_{B}^{\left( i,j\right) }$ (18) and compute its limits at the boundary $\partial {\mathbb{B}}^{\left( i,j\right) }$:
300
+
301
+ Lemma 3. Let $\left( {i,j}\right)$ be an edge.
302
+
303
+ (a) In the interior of the sliced Bethe box ${\mathbb{B}}^{\left( i,j\right) }$ , the mutual information ${I}_{B}^{\left( i,j\right) }$ is bounded by
304
+
305
+ $$
+ 0 < 8{\left( {\xi }_{ij}^{ * } - {q}_{i}{q}_{j}\right) }^{2} \leq {I}_{B}^{\left( i,j\right) }\left( {{q}_{i},{q}_{j}}\right) \leq \frac{{\left( {\xi }_{ij}^{ * } - {q}_{i}{q}_{j}\right) }^{2}}{{q}_{i}\left( {1 - {q}_{i}}\right) {q}_{j}\left( {1 - {q}_{j}}\right) }. \tag{24}
+ $$
312
+
313
+ (b) The limit of ${I}_{B}^{\left( i,j\right) }$ at the boundary $\partial {\mathbb{B}}^{\left( i,j\right) }$ is
314
+
315
+ $$
316
+ \mathop{\lim }\limits_{{\left( {{q}_{i},{q}_{j}}\right) \rightarrow \partial {\mathbb{B}}^{\left( i,j\right) }}}{I}_{B}^{\left( i,j\right) }\left( {{q}_{i},{q}_{j}}\right) = 0. \tag{25}
317
+ $$
318
+
319
+ Next, we compute first-order and second-order derivatives of $\Delta {\mathcal{F}}_{B}^{\left( i,j\right) }$ on ${\mathbb{B}}^{\left( i,j\right) }$ . The proof utilizes results from Welling and Teh (2001); Weller and Jebara (2013) (Appendix C).
320
+
321
+ Lemma 4. Let $\left( {i,j}\right)$ be an edge.
322
+
323
+ (a) The first-order derivative of $\Delta {\mathcal{F}}_{B}^{\left( i,j\right) }$ with respect to ${q}_{i}$ on ${\mathbb{B}}^{\left( i,j\right) }$ is (the derivative with respect to ${q}_{j}$ follows by exchanging the roles of $i$ and $j$)
324
+
325
+ $$
326
+ \frac{\partial }{\partial {q}_{i}}\Delta {\mathcal{F}}_{B}^{\left( i,j\right) } = 2{J}_{ij} + \log \left( \frac{\left( {1 - {q}_{i}}\right) \left( {{q}_{i} - {\xi }_{ij}^{ * }}\right) }{{q}_{i}\left( {1 + {\xi }_{ij}^{ * } - {q}_{i} - {q}_{j}}\right) }\right) . \tag{26}
327
+ $$
328
+
329
+ (b) The second-order derivatives of $\Delta {\mathcal{F}}_{B}^{\left( i,j\right) }$ on ${\mathbb{B}}^{\left( i,j\right) }$ are
330
+
331
+ $$
332
+ \frac{{\partial }^{2}}{\partial {q}_{i}^{2}}\Delta {\mathcal{F}}_{B}^{\left( i,j\right) } = \frac{{q}_{j}\left( {1 - {q}_{j}}\right) }{{T}_{ij}} - \frac{1}{{q}_{i}\left( {1 - {q}_{i}}\right) }, \tag{27}
333
+ $$
334
+
335
+ $$
336
+ \frac{{\partial }^{2}}{\partial {q}_{i}\partial {q}_{j}}\Delta {\mathcal{F}}_{B}^{\left( i,j\right) } = \frac{{\partial }^{2}}{\partial {q}_{j}\partial {q}_{i}}\Delta {\mathcal{F}}_{B}^{\left( i,j\right) } = \frac{{q}_{i}{q}_{j} - {\xi }_{ij}^{ * }}{{T}_{ij}}, \tag{28}
337
+ $$
338
+
339
+ $$
340
+ \frac{{\partial }^{2}}{\partial {q}_{j}^{2}}\Delta {\mathcal{F}}_{B}^{\left( i,j\right) } = \frac{{q}_{i}\left( {1 - {q}_{i}}\right) }{{T}_{ij}} - \frac{1}{{q}_{j}\left( {1 - {q}_{j}}\right) }, \tag{29}
341
+ $$
342
+
343
+ $$
344
+ \text{ where }{T}_{ij} \mathrel{\text{ := }} {q}_{i}{q}_{j}\left( {1 - {q}_{i}}\right) \left( {1 - {q}_{j}}\right) - {\left( {\xi }_{ij}^{ * } - {q}_{i}{q}_{j}\right) }^{2}\text{ . }
345
+ $$
346
+
347
+ The following result formulates a useful property of the Bethe free energy difference on the sliced Bethe box:
348
+
349
+ Lemma 5. Let $\left( {i,j}\right)$ be an edge. $\Delta {\mathcal{F}}_{B}^{\left( i,j\right) }$ has precisely one stationary point on ${\mathbb{B}}^{\left( i,j\right) }$, namely $\left( {{\bar{q}}_{i},{\bar{q}}_{j}}\right) = \left( {{0.5},{0.5}}\right)$, which is neither a maximum nor a minimum (i.e., it is a saddle point).
350
+
351
+ Lemma 5 implies that $\Delta {\mathcal{F}}_{B}^{\left( i,j\right) }$ possesses neither a maximum nor a minimum in the interior of ${\mathbb{B}}^{\left( i,j\right) }$; its supremum and infimum must therefore lie at the boundary. Finally, we characterize regions of ${\mathbb{B}}^{\left( i,j\right) }$ on which $\Delta {\mathcal{F}}_{B}^{\left( i,j\right) }$ always contributes negatively to the Bethe free energy ${\mathcal{F}}_{B}$:
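The saddle-point claim of Lemma 5 can be illustrated numerically by plugging the stationary point $(0.5, 0.5)$, with $\xi_{ij}^{*} = \sigma(2 J_{ij})/2$ from Lemma 1, into the second-order derivatives (27)-(29): a negative Hessian determinant indicates an indefinite Hessian, i.e., a saddle. A small sketch, with naming ours:

```python
import math

def hessian_delta_F(q_i, q_j, xi_st):
    """Hessian entries of Delta F_B^{(i,j)} from (27)-(29) of Lemma 4."""
    T = q_i * q_j * (1.0 - q_i) * (1.0 - q_j) - (xi_st - q_i * q_j) ** 2
    d_ii = q_j * (1.0 - q_j) / T - 1.0 / (q_i * (1.0 - q_i))
    d_jj = q_i * (1.0 - q_i) / T - 1.0 / (q_j * (1.0 - q_j))
    d_ij = (q_i * q_j - xi_st) / T
    return d_ii, d_ij, d_jj

J = 1.0                                            # an example coupling
xi_st = (1.0 / (1.0 + math.exp(-2.0 * J))) / 2.0   # Lemma 1 at (0.5, 0.5)
d_ii, d_ij, d_jj = hessian_delta_F(0.5, 0.5, xi_st)
det = d_ii * d_jj - d_ij ** 2                      # < 0  =>  saddle point
```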
352
+
353
+ Lemma 6. Let $\left( {i,j}\right)$ be an edge.
354
+
355
+ (a) For an attractive edge, $\Delta {\mathcal{F}}_{B}^{\left( i,j\right) }\left( {{q}_{i},{q}_{j}}\right)$ is negative if either both ${q}_{i},{q}_{j} < {0.5}$ or both ${q}_{i},{q}_{j} > {0.5}$.
356
+
357
+ (b) For a repulsive edge, $\Delta {\mathcal{F}}_{B}^{\left( i,j\right) }\left( {{q}_{i},{q}_{j}}\right)$ is negative if either ${q}_{i} < {0.5}$ and ${q}_{j} > {0.5}$, or ${q}_{i} > {0.5}$ and ${q}_{j} < {0.5}$.
358
+
359
+ § 3.3 MAIN RESULTS
360
+
361
+ Having prepared the technical framework in Sec. 3.2, we now present our main results. First, we directly relate the Bethe free energy difference to the local properties of the graphical model (Theorem 1). Then, we address the problem of finding a 'Bethe-optimal' edge to delete (Corollary 1 and Theorem 2). Finally, we draw conclusions about the approximation quality of BP regarding the estimated partition function if edges are deleted (Theorems 3 and 4).
362
+
363
+ Theorem 1. Let $\left( {i,j}\right)$ be an arbitrary edge. Then the ${L}^{\infty }$-norm of the Bethe free energy difference is
364
+
365
+ $$
366
+ {\begin{Vmatrix}\Delta {\mathcal{F}}_{B}^{\left( i,j\right) }\end{Vmatrix}}_{{L}^{\infty }} = \left| {J}_{ij}\right| \tag{30}
367
+ $$
368
+
369
+ with
370
+
371
+ $$
372
+ - \left| {J}_{ij}\right| = \mathop{\inf }\limits_{{\left( {{q}_{i},{q}_{j}}\right) \in {\mathbb{B}}^{\left( i,j\right) }}}\Delta {\mathcal{F}}_{B}^{\left( i,j\right) }\left( {{q}_{i},{q}_{j}}\right) , \tag{31}
373
+ $$
374
+
375
+ $$
376
+ \left| {J}_{ij}\right| = \mathop{\sup }\limits_{{\left( {{q}_{i},{q}_{j}}\right) \in {\mathbb{B}}^{\left( i,j\right) }}}\Delta {\mathcal{F}}_{B}^{\left( i,j\right) }\left( {{q}_{i},{q}_{j}}\right) . \tag{32}
377
+ $$
378
+
379
+ Theorem 1 reveals a monotonic dependence between the strength of the couplings and absolute changes in the Bethe free energy that are caused by local modifications in the graphical structure. In terms of edge deletion, this implies that the 'Bethe-optimal' choice of an edge to be removed from the graph is the one with the weakest coupling strength:
380
+
381
+ Corollary 1. Suppose we aim to remove an edge from the graphical model such that the induced maximum error in the Bethe free energy is minimal. Then this ’ ${L}^{\infty }$ -Bethe-optimal’ edge is the one with the lowest absolute coupling strength:
382
+
383
+ $$
384
+ \mathop{\operatorname{argmin}}\limits_{{\left( {i,j}\right) \in \mathbf{E}}}{\begin{Vmatrix}\Delta {\mathcal{F}}_{B}^{\left( i,j\right) }\end{Vmatrix}}_{{L}^{\infty }} = \mathop{\operatorname{argmin}}\limits_{{\left( {i,j}\right) \in \mathbf{E}}}\left| {J}_{ij}\right| \tag{33}
385
+ $$
386
+
387
+ An analogous property holds for the ${L}^{2}$ -error of ${\mathcal{F}}_{B}$ on $\mathbb{L}$ :
388
+
389
+ ${}^{9}$ That is, the four line segments connecting the points $\left( {0,0}\right) - \left( {0,1}\right)$, $\left( {0,0}\right) - \left( {1,0}\right)$, $\left( {0,1}\right) - \left( {1,1}\right)$, and $\left( {1,0}\right) - \left( {1,1}\right)$.
390
+
391
+ Theorem 2. Suppose we aim to remove an edge from the graphical model such that the induced mean squared error in the Bethe free energy on the local polytope $\mathbb{L}$ is minimal. Then this ’ ${L}^{2}$ -Bethe-optimal’ edge is the one with the lowest absolute coupling strength:
392
+
393
+ $$
394
+ \mathop{\operatorname{argmin}}\limits_{{\left( {i,j}\right) \in \mathbf{E}}}{\begin{Vmatrix}\Delta {\mathcal{F}}_{B}^{\left( i,j\right) }\end{Vmatrix}}_{{L}^{2}} = \mathop{\operatorname{argmin}}\limits_{{\left( {i,j}\right) \in \mathbf{E}}}\left| {J}_{ij}\right| \tag{34}
395
+ $$
396
+
397
+ Next, we quantify the change in the Bethe partition function ${Z}_{\mathcal{B}}$ if an edge is removed:
398
+
399
+ Theorem 3. Let ${Z}_{\mathcal{B}}$ be the Bethe partition function associated with some graphical model, i.e., the quantity that satisfies $- \log \left( {Z}_{\mathcal{B}}\right) = \mathop{\min }\limits_{\mathbb{B}}{\mathcal{F}}_{B}$. Suppose we remove an (attractive or repulsive) edge $\left( {i,j}\right)$ from the graph. Let ${\mathcal{F}}_{B}^{\smallsetminus \left( {i,j}\right) }$ be the representation of the Bethe free energy associated with the new model, together with the new Bethe partition function ${Z}_{\mathcal{B}}^{\smallsetminus \left( {i,j}\right) }$ that is implicitly defined by $- \log \left( {Z}_{\mathcal{B}}^{\smallsetminus \left( {i,j}\right) }\right) = \mathop{\min }\limits_{\mathbb{B}}{\mathcal{F}}_{B}^{\smallsetminus \left( {i,j}\right) }$. Then the following error estimate holds:
400
+
401
+ $$
402
+ \left| {\log \left( \frac{{Z}_{\mathcal{B}}}{{{Z}_{\mathcal{B}}}^{\smallsetminus \left( {i,j}\right) }}\right) }\right| < \left| {J}_{ij}\right| \tag{35}
403
+ $$
404
+
405
+ Finally, we assess the quality of the estimated partition function if edges are removed. We consider unidirectional models (Sec. 2.1), which allow for a precise statement:
406
+
407
+ Theorem 4. Consider a unidirectional model, i.e., one in which all edges are attractive and all variables are biased towards the same state. Let $Z$ resp. ${Z}_{\mathcal{B}}$ be the associated partition resp. Bethe partition function. Suppose we remove an arbitrary edge $\left( {i,j}\right)$ from the graph and let ${Z}_{\mathcal{B}}^{\smallsetminus \left( {i,j}\right) }$ be the Bethe partition function associated with the new model. Then the quality of the estimated partition function degrades, i.e.,
408
+
409
+ $$
410
+ \left| {Z - {Z}_{\mathcal{B}}}\right| < \left| {Z - {Z}_{\mathcal{B}}^{\smallsetminus \left( {i,j}\right) }}\right| . \tag{36}
411
+ $$
412
+
413
+ Theorem 4 does not formally extend to models that contain both positive and negative local fields. Generally, however, the error between the true and the BP-estimated partition function tends to increase, the more edges we remove.
414
+
415
+ This negative result is contrasted by the positive effect of edge removal on the estimated marginals. While it is difficult, even for restricted model classes, to provide theoretical guarantees, we validate and explain this claim in Sec. 4.
416
+
417
+ § 4 EXPERIMENTS
418
+
419
+ We now demonstrate empirically how removing edges can have an astonishingly positive impact on the approximation accuracy of BP. We perform a range of experiments on a fully connected graph on 10 vertices. ${}^{10}$ Further experiments, including a $5 \times 5$ grid graph, are contained in Appendix C. We consider both attractive and general models (Sec. 2.1). In Sec. 4.1, we focus on attractive models and sample ${J}_{ij}$ uniformly from $\left\lbrack {0,\widehat{J}}\right\rbrack$ for $\widehat{J} \in \{ {0.1},{0.2},\ldots ,2\}$. In Sec. 4.2, we focus on general models and sample ${J}_{ij}$ uniformly from $\left\lbrack {-\widehat{J},\widehat{J}}\right\rbrack$ for $\widehat{J} \in \{ {0.1},{0.2},\ldots ,2\}$. For both settings, we create two scenarios: first, models with weak local fields (each ${\theta }_{i}$ is sampled uniformly from $\left\lbrack {-{0.2},{0.2}}\right\rbrack$); second, models with strong local fields (each ${\theta }_{i}$ is sampled uniformly from $\left\lbrack {-{0.5},{0.5}}\right\rbrack$). For each configuration, we create 200 models.
420
+
421
+ For each individual model, we remove edges one by one until we reach a spanning tree. We do not remove edges whose deletion would make the graph disconnected. ${}^{11}$ We compare two criteria for selecting the next edge to be removed: first, the Bethe-optimal criterion (Corollary 1, Theorem 2); second, we remove the edge that induces the lowest mutual information between two connected variables in the original model. More precisely: assume that we have already removed the edge set $\widetilde{\mathbf{E}}$ from a model; then the next edge $\left( {i,j}\right)$ to be removed is the one that minimizes either of the following criteria:
422
+
423
+ $$
+ \text{BETHE-OPT : }\mathop{\operatorname{argmin}}\limits_{\left( {i,j}\right) \in \mathbf{E} \smallsetminus \widetilde{\mathbf{E}}}\left| {J}_{ij}\right| \;\;\;\;\text{CHOW-LIU : }\mathop{\operatorname{argmin}}\limits_{\left( {i,j}\right) \in \mathbf{E} \smallsetminus \widetilde{\mathbf{E}}}I\left( {{X}_{i};{X}_{j}}\right)
+ $$
428
+
429
+ Note that by applying the second criterion, we end up in a Chow-Liu tree (Chow and Liu, 1968), i.e., the spanning tree with the lowest KL divergence from the original model. ${}^{12}$
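The edge-removal loop with the BETHE-OPT criterion can be sketched with the standard library alone; the CHOW-LIU variant would sort by mutual information instead. Function names are ours, and connectivity is checked by a simple graph search, mirroring the reverse-delete procedure:

```python
from collections import defaultdict

def is_connected(nodes, edges):
    """Check that the undirected graph (nodes, edges) is connected."""
    adj = defaultdict(set)
    for i, j in edges:
        adj[i].add(j)
        adj[j].add(i)
    nodes = list(nodes)
    seen, stack = {nodes[0]}, [nodes[0]]
    while stack:
        v = stack.pop()
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(nodes)

def bethe_opt_removal(nodes, J):
    """Remove edges with the weakest |J_ij| first (Corollary 1), skipping
    edges whose deletion would disconnect the graph, until a spanning
    tree (|nodes| - 1 edges) remains."""
    edges = set(J)
    removed = []
    for e in sorted(J, key=lambda e: abs(J[e])):
        if len(edges) == len(nodes) - 1:   # spanning tree reached
            break
        if is_connected(nodes, edges - {e}):
            edges.remove(e)
            removed.append(e)
    return removed, edges
```

On a triangle with couplings 0.1, 0.5, and 1.0, only the weakest edge is removed before the spanning-tree condition stops the loop.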
430
+
431
+ For each intermediate model during the edge deletion process, we run BP 100 times with random message initialization to approximate the marginals. For each run, we perform at most 1000 iterations. If BP has not converged, we estimate the marginals from the final iteration. We utilize a randomized message scheduling to achieve better convergence (Elidan et al., 2006). For the error evaluation, we compute the ${l}^{1}$ -distance between the exact and estimated marginals. The results for each model are averaged over the 100 runs. Finally, the results are averaged over all 200 models, each based on a different configuration of the potentials.
432
+
433
+ § 4.1 ATTRACTIVE MODELS
434
+
435
+ For weak couplings, BP finds accurate marginal estimates in the original model. If the strength of the couplings increases, this favorable property suddenly disappears at some critical threshold and BP fails to approximate the marginals for larger values of $\widehat{J}$ (Fig. 1). This behavior is not due to worse convergence properties of BP, but results from inaccurate BP fixed points and thus inaccurate minima of the Bethe free energy (Weller et al., 2014). While the Bethe free energy is convex for weaker couplings and possesses a unique global minimum, this minimum becomes an (unstable) maximum as $\widehat{J}$ increases and can no longer be reached by BP (Heskes, 2003; Mooij and Kappen, 2005; Knoll and Pernkopf, 2017). For even larger couplings, the landscape of the Bethe free energy becomes increasingly complex and the (possibly many ${}^{13}$) Bethe minima approach the boundary of the domain, thus moving away from the exact marginals.
436
+
437
+ ${}^{11}$ This procedure corresponds to the so-called reverse-delete algorithm (Kruskal, 1956) that constructs a maximum spanning tree with respect to a given criterion.
438
+
439
+ ${}^{12}$ We cannot generally apply the second criterion, as the computation of the mutual information between two variables requires knowledge of the related exact singleton and pairwise marginals.
440
+
441
+ ${}^{10}$ This allows for a computation of the exact marginals via the junction tree algorithm (Lauritzen and Spiegelhalter, 1988) and enables us to compare the approximated marginals to them.
442
+
443
+ If we remove edges from the graph, the marginal accuracy of BP in the new model is much better than in the original model. This can be explained by a 'reconvexification' of the Bethe free energy that turns unstable maxima back into stable minima and allows BP to converge to accurate fixed points. The question of how many edges we should actually remove is a difficult one. In Fig. 1, we observe that there appears to be a 'channel' that defines an optimal number of edges to be removed. The stronger the couplings become, the more preferable it is to rely on tree approximations. For stronger local potentials, the channel tends to become narrower and edge removal loses some of its benefit (although the results are still superior to those for the original model). Moreover, we observe that the BETHE-OPT criterion performs slightly better than the CHOW-LIU criterion, with an increasing advantage for stronger local potentials.
444
+
445
+ § 4.2 GENERAL MODELS
446
+
447
+ The situation for general models is similar to the attractive case, though we can observe certain differences (Fig. 2): first, the critical threshold of the couplings, beyond which the Bethe free energy becomes non-convex, is higher than for attractive models. Second, for models with strong local potentials, edge removal based on the BETHE-OPT criterion improves the marginal accuracy only slightly. Interestingly, the Chow-Liu tree induces strikingly accurate Bethe minima for all models. As in the attractive case, we observe that the problem becomes more difficult if both the pairwise and the local potentials become stronger at the same time.
448
+
449
+ § 5 CONCLUSION
450
+
451
+ We have proposed to approximate a graphical model as a 'preprocessing step' for approximate inference. We focused on the removal of single edges and showed that this can have a beneficial impact on the behavior of belief propagation.
452
+
453
+ We have exploited the relationship between belief propagation and the Bethe free energy to explain the success of such an approach. Subsequently, we have validated our findings in an experimental study. Most importantly, our analysis contributes to an improved understanding of belief propagation and the Bethe approximation in general.
454
+
455
+ We are convinced that our observations will inspire the development of further sophisticated methods that approximate a graphical model to improve the behavior of message passing algorithms. We believe that one logical extension lies in modifying the local potentials to compensate for the information lost through edge removal.
456
+
457
458
+
459
+ Figure 1: Attractive models. First row: ${\theta }_{i} \in \left\lbrack {-{0.2},{0.2}}\right\rbrack$ ; second row: ${\theta }_{i} \in \left\lbrack {-{0.5},{0.5}}\right\rbrack$ . (a) + (c): BETHE-OPT criterion; (b) + (d): CHOW LIU criterion.
460
+
461
462
+
463
+ Figure 2: General models. First row: ${\theta }_{i} \in \left\lbrack {-{0.2},{0.2}}\right\rbrack$ ; second row: ${\theta }_{i} \in \left\lbrack {-{0.5},{0.5}}\right\rbrack$ . (a) + (c): BETHE-OPT criterion; (b) + (d): CHOW LIU criterion.
464
+
465
+ ${}^{13}$ The Bethe free energy may theoretically possess exponentially many local minima (Watanabe and Fukumizu, 2009; Knoll and Pernkopf, 2019).
UAI/UAI 2022/UAI 2022 Conference/BcIfJuIscx5/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,721 @@
1
+ # Learning a Neural Pareto Manifold Extractor with Constraints
2
+
3
+ ## Abstract
4
+
5
+ Multi-objective optimization (MOO) problems require balancing competing objectives, often under constraints. The Pareto optimal solution set defines all possible optimal trade-offs over such objectives. In this work, we present a novel method for Pareto-front learning: inducing the full Pareto manifold at train-time so users can pick any desired optimal trade-off point at run-time. Our key insight is to exploit Fritz-John Conditions for a novel guided double gradient descent strategy. Evaluation on synthetic benchmark problems allows us to vary MOO problem difficulty in controlled fashion and measure accuracy vs. known analytic solutions. We further test scalability and generalization in learning optimal neural model parameterizations for Multi-Task Learning (MTL) on image classification. Results show consistent improvement in accuracy and efficiency over prior MTL methods as well as techniques from operations research.
6
+
7
+ ## 1 INTRODUCTION
8
+
9
+ Multi-Objective Optimization (MOO) problems require balancing multiple objectives, often competing with one another under further constraints [Van Rooyen et al., 1994, Ehrgott and Wiecek, 2005]. A Pareto optimal solution [Pareto, 1906] defines the set of all saddle points [Ehrgott and Wiecek, 2005] such that no objective can be further improved without penalizing at least one other objective.
10
+
11
+ As operational systems today increasingly seek to balance competing objectives, research on Pareto optimal learning has quickly grown across tasks such as fair classification [Balashankar et al., 2019, Martinez et al., 2020], diversified ranking [Liu et al., 2019, Sacharidis, 2019], and recommendation [Xiao et al., 2017b, Azadjalal et al., 2017]. Many practical classification and recommendation problems have been shown to be non-convex [Hsieh et al., 2015]. A general Pareto solver should thus support optimization for both non-convex objectives and constraints.
12
+
13
+ Because MOO problems typically lack a single global optimum, one must choose among optimal solutions by selecting a trade-off over competing objectives. Ideally this choice could be deferred to run-time, so that each user could choose whichever trade-off they prefer. Unfortunately, prior Pareto solvers have typically required training a separate model to find the Pareto solution point for each desired trade-off.
14
+
15
+ To address this, recent work has proposed Pareto front learning (PFL): inducing the full Pareto manifold in training so that users can quickly select any desired optimal trade-off point at run-time [Navon et al., 2021, Lin et al., 2021, Singh et al., 2021]. These works learn a neural model manifold to map any desired trade-off over objectives to a corresponding Pareto point. As with other supervised learning, inducing an accurate prediction model requires high quality training data. Pareto points used in model training should be accurate.
16
+
17
+ In this work, we devise a efficient Pareto search procedure for Singh et al. [2021]'s HNPF model, so that we may benefit from its correctness guarantees in identifying true Pareto points for PFL training. While HNPF supports non-convex MOO with constraints and bounded error, it suffers from a lack of scalability with increasing variable space. Our innovation is a novel, guided double gradient descent strategy, updating the candidate point set in the outer descent loop and the manifold estimators in the inner descent loop.
18
+
19
+ Our evaluation spans both synthetic benchmarks and multitask learning (MTL) problems. Benchmark problems allow us to conduct controlled experiments varying MOO problem complexity (e.g., the presence of constraints and/or convexity in variable or function domains). Analytic solutions to benchmark problems allow us to measure the true accuracy of model predictions, something which is often difficult or impossible on real-world problems. Additional evaluation on a set of MTL problems in image classification enables us to further test scalability and generalization in learning high dimensional, Pareto optimal neural models.
20
+
21
+ Results across synthetic benchmarks and MTL problems show clear, consistent advantages of SUHNPF in terms of capability (handling non-convexity and constraints), denser coverage and higher accuracy in recovering the true Pareto front, and greater efficiency (time and space). Beyond empirical findings, our conceptual framing and review of prior work also serve to further bridge complementary lines of MTL and operations research work. For reproducibility, we will share our source code and data upon publication.
22
+
23
+ ## 2 DEFINITIONS
24
+
25
+ We adopt Pareto definitions from Marler and Arora [2004]. A general MOO problem can be formulated as follows:
26
+
27
+ optimize $\;F\left( x\right) = \left( {{f}_{1}\left( x\right) ,\ldots ,{f}_{k}\left( x\right) }\right)$(1)
28
+
29
+ s.t. $x \in S = \left\{ {x \in {\mathbb{R}}^{n} \mid G\left( x\right) = \left( {{g}_{1}\left( x\right) ,\ldots ,{g}_{m}\left( x\right) }\right) \leq 0}\right\}$
30
+
31
+ with $n$ variables $\left( {{x}_{1},\ldots ,{x}_{n}}\right) , k$ objectives $\left( {{f}_{1},\ldots ,{f}_{k}}\right)$ , and $m$ constraints $\left( {{g}_{1},\ldots ,{g}_{m}}\right)$ . Here, $S$ is the feasible set, i.e., the set of input values $x$ that satisfy the constraints $G\left( x\right)$ . For a MOO problem optimizing $F\left( x\right)$ subject to $G\left( x\right)$ , the solution is usually a manifold as opposed to a single global optimum; therefore, one must find the set of all points that satisfy the chosen definition for an optimum.
32
+
33
+ Strong Pareto Optimal: A point ${x}^{ * } \in S$ is strong Pareto optimal if no point in the feasible set exists that improves an objective without detriment to at least one other objective.
34
+
35
+ $$
36
+ \nexists {x}_{j} : {f}_{p}\left( {x}_{j}\right) \leq {f}_{p}\left( {x}^{ * }\right) ,\;\text{ for }\;p = 1,2,\ldots , k
37
+ $$
38
+
39
+ $$
40
+ \exists l : {f}_{l}\left( {x}_{j}\right) < {f}_{l}\left( {x}^{ * }\right) \tag{2}
41
+ $$
42
+
43
+ Weak Pareto Optimal: A point ${\widetilde{x}}^{ * } \in S$ is weak Pareto optimal if no other point exists in the feasible set that improves all of the objectives simultaneously. This differs from strong Pareto optimality: at a weak Pareto point, there may exist points that improve at least one objective without detriment to the others.
44
+
45
+ $$
46
+ \nexists {x}_{j} : {f}_{p}\left( {x}_{j}\right) < {f}_{p}\left( {\widetilde{x}}^{ * }\right) ,\;\text{ for }p = 1,2,\ldots , k \tag{3}
47
+ $$
48
+
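The strong Pareto definition in Eq. (2) translates directly into a dominance filter over a discrete set of candidate objective vectors. The following minimal sketch (our own illustration, not from the paper) keeps exactly the non-dominated rows:

```python
import numpy as np

def pareto_filter(F):
    """Boolean mask of strong Pareto optimal rows of F.

    F is a (P, k) array of objective vectors, all objectives minimized.
    Row i is dropped if some other row is <= it in every objective and
    strictly < in at least one (the dominance test of Eq. 2).
    """
    P = F.shape[0]
    keep = np.ones(P, dtype=bool)
    for i in range(P):
        weak = np.all(F <= F[i], axis=1)    # weakly dominates row i
        strict = np.any(F < F[i], axis=1)   # strictly better somewhere
        if np.any(weak & strict):
            keep[i] = False
    return keep

F = np.array([[1.0, 4.0], [2.0, 3.0], [3.0, 3.0], [4.0, 1.0]])
print(pareto_filter(F))  # [ True  True False  True]
```

Here the third row (3, 3) is dominated by (2, 3); a point never dominates itself because the strict condition fails on equal rows.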
49
+ ## 3 RELATED WORK
50
+
51
+ Linear Scalarization (LS). A variety of work has adopted LS to find Pareto points [Xiao et al., 2017b, Lin et al., 2019, Milojkovic et al., 2019]. For example, the Weighted Sum Method (WSM) [Cohon, 2004] is an LS approach that converts an MOO problem into a single-objective optimization (SOO) problem using a convex combination of objective functions and constraints. However, because Karush-Kuhn-Tucker (KKT) conditions are known to hold true only for convex cases [Boyd et al., 2004], LS solutions are guaranteed to be Pareto optimal only under a fully convex setting of objectives and constraints, as shown in Gobbi et al. [2015].
52
+
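As a concrete illustration of WSM (on a toy convex problem of our own choosing, not from the paper), sweeping a weight $\alpha \in [0,1]$ and minimizing the scalarized objective $\alpha f_1 + (1-\alpha) f_2$ recovers one Pareto point per weight:

```python
import numpy as np

# Toy convex bi-objective problem: f1(x) = (x - 1)^2, f2(x) = (x + 1)^2.
# Setting the derivative of a*f1 + (1-a)*f2 to zero gives x* = 2a - 1,
# so the weight sweep traces the full Pareto set [-1, 1].
def wsm_minimizer(a, grid=np.linspace(-2.0, 2.0, 40001)):
    scalarized = a * (grid - 1.0) ** 2 + (1.0 - a) * (grid + 1.0) ** 2
    return grid[np.argmin(scalarized)]

for a in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(a, wsm_minimizer(a))  # matches 2a - 1 up to grid resolution
```

Under full convexity each weight yields a Pareto optimal point; on non-convex fronts, entire regions of the front are unreachable by any weight.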
53
+ Operations Research (OR). A variety of OR methods support MOO problems with non-convex objectives and constraints, guaranteeing correctness within a user-specified error tolerance. Correctness has also been further verified by evaluation on synthetic MOO benchmark problems with known, analytic solutions. However, a key limitation of these methods is lack of scalability: they suffer from significant computational and run-time limitations as the variable dimension increases. Hence, they cannot be applied to optimizing neural model parameters for MOO problems.
54
+
55
+ Table 1: SUHNPF vs. existing Operations Research (OR) and Multi-Task Learning (MTL) methods. OR methods account for both objectives and constraints, produce Pareto points only, and are known to find true Pareto points for non-convex MOO problems. However, these methods do not scale to high-dimensional neural MOO problems. In contrast, MTL methods scale well but typically do not support constraints and can struggle with non-convexity.
56
+
57
+ <table><tr><td>Type</td><td>$\mathbf{{Method}}$</td><td>Finds Only Pareto points</td><td>Handles Constraints</td><td>Scalable Neural MOO</td></tr><tr><td rowspan="4">Operations Research (OR)</td><td>NBI [1998]</td><td>✓</td><td>✓</td><td>✘</td></tr><tr><td>mCHIM [2015]</td><td>✓</td><td>✓</td><td>✘</td></tr><tr><td>PK [2016]</td><td>✓</td><td>✓</td><td>✘</td></tr><tr><td>HNPF [2021]</td><td>✓</td><td>✓</td><td>✘</td></tr><tr><td rowspan="5">Multi- Task Learning (MTL)</td><td>MOOMTL [2018]</td><td>✘</td><td>✘</td><td>✓</td></tr><tr><td>PMTL [2019]</td><td>✘</td><td>✘</td><td>✓</td></tr><tr><td>EPO [2020]</td><td>✘</td><td>✘</td><td>✓</td></tr><tr><td>EPSE [2020]</td><td>✘</td><td>✘</td><td>✓</td></tr><tr><td>PHN [2021]</td><td>✘</td><td>✘</td><td>✓</td></tr><tr><td>Ours</td><td>SUHNPF</td><td>✓</td><td>✓</td><td>✓</td></tr></table>
58
+
59
+ Examples include enhanced scalarization approaches such as NBI [Das and Dennis, 1998], mCHIM [Ghane-Kanafi and Khorram, 2015], and PK [Pirouz and Khorram, 2016]. NBI produces an evenly distributed set of Pareto points given an evenly distributed set of weights, using the concept of Convex Hull of Individual Minima (CHIM) to break down the boundary/hull into evenly spaced segments before tracing the weak Pareto points. mCHIM improves upon NBI via a quasi-normal procedure to update the aforementioned CHIM set iteratively, to obtain a strong Pareto set. PK uses a local $\epsilon$ -scalarization based strategy that searches for the Pareto front using controllable step-lengths in a restricted search region, thereby accounting for non-convexity.
60
+
61
+ Multi-Task Learning (MTL). Recent MTL works have devised Pareto solvers for estimating high-dimensional neural models. MOOMTL [Sener and Koltun, 2018] effectively scales via a multi-gradient descent approach, but does not guarantee an even spread of solution points found along the Pareto front. PMTL [Lin et al., 2019] addresses this spread issue by dividing the functional domain into equally spaced cones, but this increases computational complexity as the number of cones increases. EPO [Mahapatra and Rajan, 2020] extends preference rays along specified weights to find Pareto points evenly spread in the vicinity of the rays. EPSE [Ma et al., 2020] uses a combination of the Hessian of the functions and a Krylov subspace to find Pareto solutions.
62
+
63
+ MTL methods rely upon KKT conditions to check for optimality, which assumes convexity (see earlier LS discussion). While these methods seek an even distribution of Pareto points by dividing the functional space into evenly spaced cones or preference rays, our results on a non-convex benchmark problem clearly show an uneven point spread (Section 6.1). Moreover, most MTL methods are point-based solvers, meaning they must be run $P$ times to find $P$ points. This is too expensive for adjusting trade-off preferences at run-time.
64
+
65
+ Pareto front learning. PFL methods [Navon et al., 2021, Lin et al., 2021, Singh et al., 2021] induce the full Pareto manifold at train-time so that users can quickly select any desired optimal trade-off point at run-time. For example, a manifold model trained on $P$ Pareto points might then quickly produce any number of additional Pareto points via interpolation. Of course, quality training data is necessary to learn an accurate, supervised prediction model. The method used to obtain Pareto points for model training, and their resulting accuracy, is thus crucial to prediction accuracy.
66
+
67
+ Navon et al. [2021]'s PHN considers two ways to acquire Pareto training points: LS and EPO [2020]. Lin et al. [2021] use their PMTL [2019] method to identify Pareto points for training. Singh et al. [2021]'s HNPF uses the Fritz-John conditions (FJC) [Maruşciac, 1982] to identify Pareto points.
68
+
69
+ Like other OR methods, HNPF provides a theoretical guarantee of Pareto front accuracy within a user-specified error tolerance. In evaluation on canonical OR benchmark problems, HNPF was shown to recover known Pareto fronts across various non-convex MOO problems while also being more efficient in finding Pareto points than NBI [1998], mCHIM [2015], and PK [2016]. However, like other OR methods, HNPF cannot scale to learn optimal high-dimensional neural model weights for MOO problems.
70
+
71
+ Ha et al. [2016]'s hypernetworks proposed training one neural model to generate effective weights for a second, target model. Navon et al. [2021] and Lin et al. [2021] apply this approach to learn a manifold mapping MOO solutions to different target model weights, enabling the target model to achieve the desired Pareto trade-off for the MOO problem. However, HNPF cannot be similarly applied to MTL problems due to its lack of scalability.
72
+
73
+ ## 4 PRELIMINARIES
74
+
75
+ Fritz John Conditions (FJC). Let the objective and constraint functions in Eq. (1) be differentiable once at a decision vector ${x}^{ * } \in \mathcal{S}$ . The Fritz-John [Levi and Gobbi, 2006] necessary condition for ${x}^{ * }$ to be weak Pareto optimal is that there exist vectors $0 \leq \lambda \in {\mathbb{R}}^{k},0 \leq \mu \in {\mathbb{R}}^{m}$ with $\left( {\lambda ,\mu }\right) \neq \left( {0,0}\right)$ (not identically zero) s.t. the following holds:
76
+
77
+ $$
78
+ \mathop{\sum }\limits_{{i = 1}}^{k}{\lambda }_{i}\nabla {f}_{i}\left( {x}^{ * }\right) + \mathop{\sum }\limits_{{j = 1}}^{m}{\mu }_{j}\nabla {g}_{j}\left( {x}^{ * }\right) = 0 \tag{4}
79
+ $$
80
+
81
+ $$
82
+ {\mu }_{j}{g}_{j}\left( {x}^{ * }\right) = 0,\forall j = 1,\ldots , m
83
+ $$
84
+
85
+ Gobbi et al. [2015] present an $L$ matrix form of FJC:
86
+
87
+ $$
88
+ L = \left\lbrack \begin{matrix} \nabla F & \nabla G \\ \mathbf{0} & G \end{matrix}\right\rbrack \;\left\lbrack {\left( {n + m}\right) \times \left( {k + m}\right) }\right\rbrack \tag{5}
89
+ $$
90
+
91
+ $$
92
+ \nabla {F}_{n \times k} = \left\lbrack {\nabla {f}_{1},\ldots ,\nabla {f}_{k}}\right\rbrack
93
+ $$
94
+
95
+ $$
96
+ \nabla {G}_{n \times m} = \left\lbrack {\nabla {g}_{1},\ldots ,\nabla {g}_{m}}\right\rbrack
97
+ $$
98
+
99
+ $$
100
+ {G}_{m \times m} = \operatorname{diag}\left( {{g}_{1},\ldots ,{g}_{m}}\right)
101
+ $$
102
+
103
+ comprising the gradients of the functions and constraints. The matrix equivalent of FJC for ${x}^{ * }$ to be Pareto optimal is to show the existence of $\delta = \left( {\lambda ,\mu }\right) \in {\mathbb{R}}^{k + m}$ in Eq. (4), with $\delta \geq 0$ and $\delta$ not identically zero, such that:
104
+
105
+ $$
106
+ L \cdot \delta = 0\;\text{ s.t. }\;L = L\left( {x}^{ * }\right) ,\delta \geq 0,\delta \neq 0 \tag{6}
107
+ $$
108
+
109
+ Therefore, the condition for a non-trivial solution $\delta$ to Eq. (6) is:
110
+
111
+ $$
112
+ \det \left( {{L}^{T}L}\right) = 0 \tag{7}
113
+ $$
114
+
115
+ Remark. If the ${f}_{i}$ s and ${g}_{j}$ s are continuous and differentiable once, then the set of weak Pareto optimal points is ${x}^{ * } = \left\{ {x \mid \det \left( {L{\left( x\right) }^{T}L\left( x\right) }\right) = 0}\right\} ,\delta \geq 0$ for a non-square matrix $L\left( x\right)$ , which is equivalent to ${x}^{ * } = \{ x \mid \det \left( {L\left( x\right) }\right) = 0\} ,\delta \geq 0$ , for a square matrix $L\left( x\right)$ . See an illustration in Appendix E for the unconstrained setting.
116
+
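To make the discriminator concrete, consider a small unconstrained illustration of our own (not one of the paper's benchmarks) with $n = k = 2$: $f_1 = x_1$ and $f_2 = 1 + x_2^2 - x_1$. With no constraints, $L$ reduces to the $n \times k$ gradient matrix, and $\det(L^T L) = (2x_2)^2$ vanishes exactly on the manifold $x_2 = 0$:

```python
import numpy as np

# f1 = x1, f2 = 1 + x2^2 - x1; unconstrained, so L = [grad f1, grad f2].
def L(x1, x2):
    grad_f1 = np.array([1.0, 0.0])
    grad_f2 = np.array([-1.0, 2.0 * x2])
    return np.stack([grad_f1, grad_f2], axis=1)  # columns are gradients

def fjc_discriminant(x1, x2):
    M = L(x1, x2)
    return np.linalg.det(M.T @ M)  # equals (2 * x2)^2 for this toy problem

print(fjc_discriminant(0.3, 0.0))  # 0.0 -> weak Pareto candidate
print(fjc_discriminant(0.3, 0.5))  # 1.0 -> rejected
```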
117
+ Hybrid Neural Pareto Front (HNPF). Like other Pareto front learning (PFL) methods, HNPF [Singh et al., 2021] learns a neural Pareto manifold from training data. With HNPF, Pareto points for use as training data are acquired via the Fritz-John conditions. In particular, once a given data point from the input variable domain is mapped to the output function domain (via the objective functions), FJC are tested to determine Pareto optimality.
118
+
119
+ HNPF's neural network first identifies weak Pareto points via feed-forward layers to smoothly approximate the weak Pareto optimal solution manifold $M\left( {X}^{ * }\right)$ as $\widetilde{M}\left( {\widetilde{X},\Phi }\right)$ . The last layer of the network has two neurons with softmax activation for binary classification of Pareto vs. non-Pareto points, distinguishing sub-optimal points from the weak Pareto points. The network loss is representation driven, since the Fritz John discriminator (Eq. (7)), described by the objective functions and constraints, explicitly classifies each input data point ${X}_{i}$ as being weak Pareto or not. After identifying weak Pareto points, HNPF uses an efficient Pareto filter to find the subset of non-dominated points.
120
+
121
+ HNPF's scalability bottleneck lies in how it samples variable domain points to test for Pareto optimality in model training. If there are any direct constraints on variable values, this naturally restricts the feasible domain for sampling. However, lacking any prior distribution on where to find Pareto optima, HNPF performs uniform random sampling in the variable domain to ensure broad coverage for locating optima. For small benchmark problems with known variable domains, this suffices. However, it is infeasible to apply this to find optimal model parameters for a neural MOO model.
122
+
123
+ ## 5 SCALABLE UNIDIRECTIONAL HNPF
124
+
125
+ To address HNPF's scalability bottleneck, we introduce SUHNPF, a scalable variant of HNPF for finding weak Pareto points with an arbitrary density and distribution of initial data points. This is achieved via a scalable unidirectional FJC-guided double-gradient descent algorithm that encompasses HNPF's neural manifold estimator. Given continuous differentiable loss functions, SUHNPF's guided double gradient descent strategy efficiently searches the variable domain to find Pareto optimal points in the function domain. This enables SUHNPF to learn an $\epsilon$ -bounded approximation $\widetilde{M}\left( {\Theta }^{ * }\right)$ to the weak Pareto optimal manifold.
126
+
127
+ ### 5.1 FJC-GUIDED DOUBLE GRADIENT DESCENT
128
+
129
+ Constructing a classification manifold of Pareto vs. non-Pareto points requires a set of feasible points to represent both classes. Since the Pareto manifold is unknown a priori, feasible points are drawn from a random distribution (lacking an informed prior) to initialize both classes. We then refine the points in the Pareto class $\mathcal{P}1$ while holding the non-Pareto points $\mathcal{P}0$ constant.
130
+
131
+ We assume an equal-sized sample set of $P$ points for each class, which helps to address class imbalance for harsh cases. For benchmark problems where the feasible set over the variable domain is known, we randomly sample points over this feasible domain to initialize $\mathcal{P}1$ and $\mathcal{P}0$ . Given these input points $x$ , held constant for $\mathcal{P}0$ and used as initial seed values for $\mathcal{P}1$ , Alg. 1 specifies our FJC-guided double-gradient descent algorithm. The algorithm iteratively updates $\mathcal{P}1$ towards the Pareto manifold via FJC-guided descent. The training dataset $D$ is the union of $\mathcal{P}0 \cup \mathcal{P}1$ . The algorithm iterates over Steps 5-9 until the error (err) converges to the user-specified error tolerance $\left( {\epsilon }_{\text{outer }}\right)$ .
132
+
133
+ $$
134
+ {err} = \mathop{\sum }\limits_{{p \in \mathcal{P}1}}{\left( \det \left( {L}^{T}L\right) \right) }^{2} \tag{8}
135
+ $$
136
+
137
+ Algorithm 1 FJC-guided descent of variable domain
138
+
139
+ ---
+
+ 1: Input: Data $D = \mathcal{P}0 \cup \mathcal{P}1\;\vartriangleright$ Training Data
+
+ 2: Input: Functions $F$ and Constraints $G$
+
+ 3: Input: Error tolerances ${\epsilon }_{\text{outer }},{\epsilon }_{\text{inner }}$
+
+ 4: while err $> {\epsilon }_{\text{outer }}$ do $\;\vartriangleright$ Run until convergence
+
+ 5: $\quad$ Train network using $D$ as data for $e$ epochs
+
+ 6: $\quad$ Compute current error err (Eq. 8)
+
+ 7: $\quad$ Compute ${\nabla }_{p}\det = \frac{\partial \det \left( {{L}^{T}L}\right) }{\partial p},\forall p \in \mathcal{P}1$
+
+ 8: $\quad \mathcal{P}1 \leftarrow \mathcal{P}1 - \eta \nabla \det\;\vartriangleright$ Update points in $\mathcal{P}1$
+
+ 9: $\quad D = \mathcal{P}0 \cup \mathcal{P}1\;\vartriangleright$ Update Training Data
+
+ 10: Output: Weak Pareto manifold $\widetilde{M}$
+
+ ---
162
+
163
+ Eq. 8 in Alg. 1 ensures that all of the points in the Pareto set $\left( {p \in \mathcal{P}1}\right)$ are optimal once we converge to the desired error tolerance ${\epsilon }_{\text{outer }}$ . Hence, Step 7 computes gradients of the $\det \left( {{L}^{T}L}\right)$ matrix w.r.t. the variables at points $p \in \mathcal{P}1$ and creates an approximation of the $\nabla \det$ matrix. The training data $D$ is then updated with the new values of $\mathcal{P}1$ . The output is an approximation $\widetilde{M}$ of the true weak Pareto manifold $M$ on the discrete dataset $D \subset X$ . Note that in Step 8, we do not allow the point set $\mathcal{P}1$ to leave the feasible set $\mathcal{S}$ ; i.e., if the step crosses the boundary of the feasible set, we project the point back onto the boundary.
164
+
165
+ Alg. 1 includes two separate gradient descent steps. The outer descent loop (Step 4-9) updates the candidate point set $\mathcal{P}1$ using the error measurement of ${err}$ through a squared loss in Eq. 8. The inner descent (Step 5) updates the parameters $\left( \Phi \right)$ of the neural net to closely approximate the Pareto manifold $M\left( X\right)$ as $\widetilde{M}\left( {X,\Phi }\right)$ . This is done using the Binary Cross Entropy Loss on $\left( {\det \left( {L{\left( X\right) }^{T}L\left( X\right) }\right) ,\widetilde{M}\left( X\right) }\right)$ , and reaches convergence only when ${BCE} \leq {\epsilon }_{\text{inner }}$ . The unidirectional property of this double-gradient update lets the outer loop influence the inner loop but not vice-versa.
166
+
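The outer loop alone can be sketched on a small unconstrained bi-objective illustration of our own (not a paper benchmark): $f_1 = x_1$, $f_2 = 1 + x_2^2 - x_1$, for which $\det(L^T L) = 4x_2^2$ and the weak Pareto manifold is $x_2 = 0$. The inner neural-manifold update (Step 5) is omitted for brevity:

```python
import numpy as np

# FJC-guided outer descent (Steps 4-9 of Alg. 1, minus the network update).
rng = np.random.default_rng(0)
P1 = rng.uniform(-2.0, 2.0, size=(50, 2))  # random Pareto candidates
eta, eps_outer = 0.05, 1e-4

def det_LTL(points):
    # det(L^T L) = 4 * x2^2 for this toy problem (analytic form)
    return 4.0 * points[:, 1] ** 2

err = np.sum(det_LTL(P1) ** 2)       # Eq. (8)
while err > eps_outer:
    grad = np.zeros_like(P1)
    grad[:, 1] = 8.0 * P1[:, 1]      # d det(L^T L) / d x2
    P1 -= eta * grad                 # Step 8: candidate update
    err = np.sum(det_LTL(P1) ** 2)

print(np.max(np.abs(P1[:, 1])))  # near 0: candidates collapsed onto x2 = 0
```

Because $L^T L$ is positive semi-definite, $\det(L^T L) \geq 0$ everywhere, so descending the determinant itself drives candidates toward its zero set, i.e., the weak Pareto manifold.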
167
+ Space Complexity Analysis. Alg. 1 maintains $D$ of size $P \times n$ . The ${L}^{T}L$ and $\nabla \det$ matrices are of sizes ${\left( k + m\right) }^{2}$ and $n\left( {k + m}\right)$ , respectively. The total memory is $O\left( {n\left( {k + m + P}\right) + {\left( k + m\right) }^{2}}\right)$ , where $n$ is the dimension of the variable space, and the scale of $k, m, P$ varies w.r.t. the problem. Trade-off $\alpha$ 's are computed by solving a linear system as post-processing. SUHNPF achieves better memory and runtime efficiency since it does not rely upon solving the primal and dual problems used in MTL methods (Appendix B).
168
+
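As a rough illustration of this bound (our own arithmetic, with example sizes chosen arbitrarily), the dominant buffers can be counted directly:

```python
# Count of floats held by Alg. 1's dominant buffers for a problem with
# n variables, k objectives, m constraints, and P candidate points.
def suhnpf_floats(n, k, m, P):
    D = P * n                  # candidate data matrix
    LTL = (k + m) ** 2         # L^T L
    grad_det = n * (k + m)     # gradient of det(L^T L) w.r.t. variables
    return D + LTL + grad_det

print(suhnpf_floats(n=30, k=2, m=30, P=50))  # 3484 floats for Case II-like sizes
```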
169
+ ## 6 BENCHMARKING
170
+
171
+ Motivation. Lack of analytical solutions to real MOO problems makes it difficult to measure the true accuracy of any Pareto solver. Consequently, we follow the OR literature in advocating that the correctness of any proposed Pareto solver should first be tested on constructed benchmark problems with known analytic solutions. This is also consistent with broader ML community practice of first evaluating proposed methods across a range of simulated, controlled conditions to verify correctness, often yielding valuable insights into model behavior prior to evaluation on real data.
172
+
173
+ We consider three such benchmark problems (Cases I-III). These problems are non-convex in either the functional or variable domain, or due to constraints (Table 2). Note that whether or not the Pareto front itself is non-convex is not always the best indicator of benchmark difficulty. For example, even though both objectives are non-convex in Case II, the Pareto front is still convex. As we shall see, PHN [Navon et al., 2021] fails on Case II despite performing well on two benchmark problems in their own study having a non-convex front. In general, non-convexity can greatly challenge MTL approaches relying on KKT conditions in testing solutions for optimality (see Appendix G).
174
+
175
+ Table 2: Characterization of benchmark cases, including convexity (C) vs. non-convexity (NC) in variable and function domains.
176
+
177
+ <table><tr><td>Case</td><td>Dim</td><td>Variable Domain</td><td>Function Domain</td><td>Includes Constraints</td><td>$\mathbf{{OR}}$ Methods</td><td>$\mathbf{{MTL}}$ Methods</td><td>SUHNPF</td></tr><tr><td>I</td><td>2</td><td>Linear</td><td>NC</td><td>No</td><td>Sparse, Slow</td><td>Sparse, Fast</td><td>Dense, Fast</td></tr><tr><td>II</td><td>30</td><td>NC</td><td>C</td><td>No</td><td>Sparse, Slow</td><td>Fail</td><td>Dense, Fast</td></tr><tr><td>III</td><td>2</td><td>NC</td><td>NC</td><td>Yes</td><td>Sparse, Slow</td><td>Fail</td><td>Dense, Fast</td></tr></table>
178
+
179
+ Experimental Setup. For each Case I-III, each method is tasked with finding $P = {50}$ Pareto points. OR methods search until any $P$ Pareto points are found. MTL methods divide the functional search quadrant into cones/rays, seeking one Pareto point per split. Manifold methods (PHN, HNPF, and SUHNPF) search for $P$ Pareto points in order to learn the manifold. Ideally, each method should identify an even spread (i.e., broad coverage) of points across the true Pareto front (shown in grey in each figure) in order to faithfully approximate it. We report the total number of iterative steps (i.e., evaluations) taken by each solver to produce the desired number of Pareto points. SUHNPF starts with $P$ random candidates that are progressively refined via its guided, double gradient descent strategy. Following HNPF [Singh et al., 2021], we adopt the same error tolerance ${10}^{-4}$ for both ${\epsilon }_{\text{outer }}$ and ${\epsilon }_{\text{inner }}$ . Any point $x$ that satisfies $\left| {\det \left( {L{\left( x\right) }^{T}L\left( x\right) }\right) }\right| \leq {\epsilon }_{\text{inner }}$ is thus classified as Pareto (an exact zero is often impossible given finite machine precision). Source code for the LS, MOOMTL, PMTL and EPO solvers is taken from EPO's repository, while the authors' own source code is used for EPSE and PHN (see Appendix F). Based on Navon et al. [2021]'s findings, we evaluate the more accurate PHN variant, PHN-EPO, which we refer to simply as PHN.
180
+
181
+ Due to key differences between OR vs. MTL methods, results for each group are presented separately. First, OR methods not only support the full range of non-convex conditions across Cases I-III, but provide error tolerance parameters to guarantee correctness (and our experiments confirm this). Consequently, we report only the efficiency of OR methods in Table 3. In contrast, MTL methods produced variable accuracy on Case I and failed entirely on Cases II-III (as shall be discussed). Consequently, Table 4 reports accuracy and efficiency of MTL methods for Case I only.
182
+
183
+ Appendix F discusses the experimental setup, Appendix I has convergence details, and Appendix J has loss profiles.
184
+
185
+ ### 6.1 CASE I: Ghane-Kanafi and Khorram [2015]
186
+
187
+ $$
188
+ {f}_{1}\left( {{x}_{1},{x}_{2}}\right) = {x}_{1},{f}_{2}\left( {{x}_{1},{x}_{2}}\right) = 1 + {x}_{2}^{2} - {x}_{1} - {0.1}\sin {3\pi }{x}_{1}
189
+ $$
190
+
191
+ $$
192
+ \text{s.t.}{g}_{1} : 0 \leq {x}_{1} \leq 1,{g}_{2} : - 2 \leq {x}_{2} \leq 2
193
+ $$
194
+
195
+ The analytical Pareto solution to this joint minimization problem is $M : 0 \leq {x}_{1} \leq 1,{x}_{2} = 0$ . In Fig. 1 we observe that SUHNPF’s randomly generated point set $\mathcal{P}1$ (red dots) converges towards the true manifold $M$ as a discrete approximation $\widetilde{M}$ . Point set $\mathcal{P}0$ (blue dots) is held constant and serves as representative of the (background) non-Pareto class. Iteration 5 is the last because the error falls below the user-specified $\epsilon$ . The final weak Pareto set comprises all $P$ points of $\mathcal{P}1$ , plus any $\mathcal{P}0$ point that happens to fall within the ${\epsilon }_{\text{outer }}$ threshold. Hence Alg. 1 ensures ${100}\%$ Pareto point density in $\mathcal{P}1$ , a vast improvement over HNPF [Singh et al., 2021], where only $\approx 2\%$ density was achieved. Fig. 2 shows functional domain convergence. SUHNPF achieves an even spread of points in the non-convex portion of the front.
196
+
197
+ ![0196399d-7aa5-769b-a7a6-0ffeac477692_4_157_1734_681_281_0.jpg](images/0196399d-7aa5-769b-a7a6-0ffeac477692_4_157_1734_681_281_0.jpg)
198
+
199
+ Figure 1: Case I: Variable domain. The gray line show the true analytic solution $\left( {0 \leq {x}_{1} \leq 1}\right)$ . SUHNPF Pareto candidates $\mathcal{P}1$ (red dots) converge in 5 iterations. Non-Pareto candidates $\mathcal{P}0$ (blue dots) are held constant throughout the iterative sequence.
200
+
201
+ ![0196399d-7aa5-769b-a7a6-0ffeac477692_4_901_167_681_277_0.jpg](images/0196399d-7aa5-769b-a7a6-0ffeac477692_4_901_167_681_277_0.jpg)
202
+
203
+ Figure 2: Case I: Functional domain corresponding to Figure 1. SUHNPF Pareto candidates $\mathcal{P}1$ (red dots) converge in 5 iterations.
204
+
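Case I's analytic front can also be sanity-checked numerically: sampling the feasible region on a dense grid, no sample should dominate any point on the manifold $x_2 = 0$. A small verification sketch of ours:

```python
import numpy as np

# Case I objectives evaluated on a dense feasible grid.
x1, x2 = np.meshgrid(np.linspace(0, 1, 201), np.linspace(-2, 2, 201))
f1 = x1.ravel()
f2 = (1 + x2 ** 2 - x1 - 0.1 * np.sin(3 * np.pi * x1)).ravel()

# Analytic front: x2 = 0, 0 <= x1 <= 1.
front_x1 = np.linspace(0, 1, 51)
front_f2 = 1 - front_x1 - 0.1 * np.sin(3 * np.pi * front_x1)

eps = 1e-9  # tolerance against floating-point ties with grid samples
for a, b in zip(front_x1, front_f2):
    dominated = np.any((f1 <= a + eps) & (f2 <= b + eps) &
                       ((f1 < a - eps) | (f2 < b - eps)))
    assert not dominated
print("no grid sample dominates the analytic front")
```

The check passes because the front $f_2 = 1 - f_1 - 0.1\sin(3\pi f_1)$ is strictly decreasing in $f_1$ (its slope stays below $-0.05$), so every point on it is non-dominated.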
205
+ Fig. 3 presents results for Linear Scalarization (LS) and several MTL methods: MOOMTL, PMTL, EPO, EPSE, and PHN. LS successfully produces a number of points in the non-convex portions of the front, despite prior studies often asserting that LS cannot handle any non-convexity. Refer to Appendix G for analysis and justification, and to Appendix I for iterative convergence plots for Case I.
206
+
207
+ To check for optimality, MTL methods rely upon KKT conditions that implicitly assume convexity (see Section 3). The non-convex nature of ${f}_{2}$ is thus challenging for these KKT-based methods. For example, some methods seek an even distribution of Pareto points by breaking up the functional space into evenly spaced cones or preference rays for trade-off values $\alpha$ . However, the uneven point spread seen on this non-convex benchmark illustrates limitations of the cone-based approach in handling non-convexity. We also clearly see non-Pareto points produced by some methods.
208
+
209
+ ![0196399d-7aa5-769b-a7a6-0ffeac477692_4_897_1177_695_841_0.jpg](images/0196399d-7aa5-769b-a7a6-0ffeac477692_4_897_1177_695_841_0.jpg)
210
+
211
+ Figure 3: Case I: function domain for LS and MTL methods. No method produces all 50 of the requested Pareto points. PMTL, EPO and PHN also find non-Pareto points (circled in blue). Methods vary greatly in their coverage of points spanning the true front.
212
+
213
+ ### 6.2 CASE II: Zhang et al. [2008]
214
+
215
+ $$
216
+ {f}_{1}\left( x\right) = {x}_{1} + \frac{2}{\left| {J}_{1}\right| }\mathop{\sum }\limits_{{j \in {J}_{1}}}{y}_{j}^{2}\;,\;{f}_{2}\left( x\right) = 1 - \sqrt{{x}_{1}} + \frac{2}{\left| {J}_{2}\right| }\mathop{\sum }\limits_{{j \in {J}_{2}}}{y}_{j}^{2}
217
+ $$
218
+
219
+ $$
220
+ \text{s.t.}{g}_{1},\ldots ,{g}_{30} : 0 \leq {x}_{1} \leq 1, - 1 \leq {x}_{j} \leq 1, j = 2,\ldots , m
221
+ $$
222
+
223
+ $$
224
+ {J}_{1} = \{ j \mid j\text{ is odd,}2 \leq j \leq m\} ,{J}_{2} = \{ j \mid j\text{ is even,}2 \leq j \leq m\}
225
+ $$
226
+
227
+ $$
228
+ {y}_{j} = \left\{ \begin{array}{ll} {x}_{j} - \left\lbrack {{0.3}{x}_{1}^{2}\cos \left( {{24\pi }{x}_{1} + \frac{4j\pi }{m}}\right) + {0.6}{x}_{1}}\right\rbrack \cos \left( {{6\pi }{x}_{1} + \frac{j\pi }{m}}\right) & j \in {J}_{1} \\ {x}_{j} - \left\lbrack {{0.3}{x}_{1}^{2}\cos \left( {{24\pi }{x}_{1} + \frac{4j\pi }{m}}\right) + {0.6}{x}_{1}}\right\rbrack \sin \left( {{6\pi }{x}_{1} + \frac{j\pi }{m}}\right) & j \in {J}_{2} \end{array}\right.
229
+ $$
230
+
231
+ This joint minimization case operates in an $n = {30}$ dimensional variable space. Fig. 4 shows the true Pareto front and SUHNPF convergence in the variable domain. Note the non-convexity in the variable domain: ${x}_{1}$ varies uniformly between $\left\lbrack {0,1}\right\rbrack$ , while ${x}_{2},\ldots ,{x}_{30}$ are sinusoidal in nature, guided by ${x}_{1}$ . Thus, the Pareto manifold has a spiral trajectory along ${x}_{2},\ldots ,{x}_{30}$ with evolution along ${x}_{1}$ .
232
+
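Points on the analytic Pareto set satisfy ${y}_{j} = 0$, which pins every ${x}_{j}, j \geq 2$ to an oscillatory function of ${x}_{1}$; substituting back yields the front ${f}_{2} = 1 - \sqrt{{f}_{1}}$. A small verification sketch of ours (using the definition with the sin term in the ${J}_{2}$ branch):

```python
import numpy as np

# Case II in dimension m = 30.
m = 30
J1 = np.array([j for j in range(2, m + 1) if j % 2 == 1])  # odd indices
J2 = np.array([j for j in range(2, m + 1) if j % 2 == 0])  # even indices

def shift(x1, J, trig):
    # Oscillatory offset defining y_j = x_j - shift(x1, j).
    amp = 0.3 * x1 ** 2 * np.cos(24 * np.pi * x1 + 4 * J * np.pi / m) + 0.6 * x1
    return amp * trig(6 * np.pi * x1 + J * np.pi / m)

def pareto_set_point(x1):
    x = np.zeros(m)
    x[0] = x1
    x[J1 - 1] = shift(x1, J1, np.cos)  # forces y_j = 0 for odd j
    x[J2 - 1] = shift(x1, J2, np.sin)  # forces y_j = 0 for even j
    return x

def F(x):
    x1 = x[0]
    y1 = x[J1 - 1] - shift(x1, J1, np.cos)
    y2 = x[J2 - 1] - shift(x1, J2, np.sin)
    f1 = x1 + 2.0 / len(J1) * np.sum(y1 ** 2)
    f2 = 1.0 - np.sqrt(x1) + 2.0 / len(J2) * np.sum(y2 ** 2)
    return f1, f2

for x1 in (0.0, 0.25, 0.81):
    f1, f2 = F(pareto_set_point(x1))
    assert abs(f1 - x1) < 1e-12 and abs(f2 - (1.0 - np.sqrt(x1))) < 1e-12
print("y_j = 0 points lie on the front f2 = 1 - sqrt(f1)")
```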
233
+ Despite the Pareto front being convex, the objectives are non-convex. For MTL methods, the min_norm_solver [Sener and Koltun, 2018], which is integral to all MTL solvers, simply fails. Consequently, no MTL results are reported.
234
+
235
+ For SUHNPF, following random initialization (iteration 0) in Fig. 4 (a), we observe that the candidate set $\mathcal{P}1$ propagates towards increasing values of ${x}_{1}$ in Fig. 4, and approximates the expected Pareto manifold at iteration 5.
236
+
237
+ ![0196399d-7aa5-769b-a7a6-0ffeac477692_5_214_1035_575_283_0.jpg](images/0196399d-7aa5-769b-a7a6-0ffeac477692_5_214_1035_575_283_0.jpg)
238
+
239
+ Figure 4: Case II: variable domain (SUHNPF). We restrict the four plots to three dimensions $\left( {{x}_{1},{x}_{2}}\right.$ , and $\left. {x}_{3}\right)$ for visualization.
240
+
241
+ ![0196399d-7aa5-769b-a7a6-0ffeac477692_5_218_1390_565_252_0.jpg](images/0196399d-7aa5-769b-a7a6-0ffeac477692_5_218_1390_565_252_0.jpg)
242
+
243
+ Figure 5: Case II: functional domain (SUHNPF).
244
+
245
+ ### 6.3 CASE III: Tanaka et al. [1995]
246
+
247
+ $$
248
+ {f}_{1}\left( {{x}_{1},{x}_{2}}\right) = {x}_{1},{f}_{2}\left( {{x}_{1},{x}_{2}}\right) = {x}_{2}
249
+ $$
250
+
251
+ s.t. ${g}_{1}\left( {{x}_{1},{x}_{2}}\right) = {\left( {x}_{1} - {0.5}\right) }^{2} + {\left( {x}_{2} - {0.5}\right) }^{2} \leq {0.5}$
252
+
253
+ ${g}_{2}\left( {{x}_{1},{x}_{2}}\right) = {x}_{1}^{2} + {x}_{2}^{2} - 1 - {0.1}\cos \left( {{16}\arctan \left( {{x}_{1}/{x}_{2}}\right) }\right) \geq 0$
254
+
255
+ $$
256
+ {g}_{3},{g}_{4} : 0 \leq {x}_{1},{x}_{2} \leq \pi
257
+ $$
258
+
259
+ For this joint minimization problem, the Pareto front is dominated by the two constraints ${g}_{1}$ and ${g}_{2}$ , while linear functions ${f}_{1}$ and ${f}_{2}$ do not contribute to the Pareto optimal solution. Fig. 6 shows the convergence of SUHNPF Pareto candidates toward the known solution manifold.
260
+
261
+ Because MTL approaches do not support constraints, they are not capable of solving this benchmark problem. However, note that if we were to remove constraints ${g}_{1}$ and ${g}_{2}$ , then ${f}_{1}$ and ${f}_{2}$ would become independent of each other (and so not compete). The front then collapses to the point (0,0), corresponding to the minimum of both functions. For this unconstrained problem, MTL methods would be expected to find this correct Pareto optimal solution point.
262
+
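This collapse is easy to verify numerically (a sketch of ours): with only the box constraints, a dominance filter over a grid of the box keeps the single point (0,0):

```python
import numpy as np

# f1 = x1, f2 = x2 over the box [0, pi]^2, with g1 and g2 removed.
g = np.linspace(0, np.pi, 51)
x1, x2 = np.meshgrid(g, g)
pts = np.stack([x1.ravel(), x2.ravel()], axis=1)

# Keep only points not dominated by any other grid point.
nondominated = [p for p in pts
                if not np.any(np.all(pts <= p, axis=1) &
                              np.any(pts < p, axis=1))]

print(nondominated)  # single point: array([0., 0.])
```

Since (0,0) weakly dominates every other grid point with at least one strict improvement, the non-dominated set contains only the origin.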
263
+ ![0196399d-7aa5-769b-a7a6-0ffeac477692_5_899_152_684_281_0.jpg](images/0196399d-7aa5-769b-a7a6-0ffeac477692_5_899_152_684_281_0.jpg)
264
+
265
+ Figure 6: Case III: variable domain. The analytical solution for this problem is driven by constraints ${g}_{1},{g}_{2}$ . SUHNPF Pareto candidates $\mathcal{P}1$ (red dots) converge to the true front.
266
+
267
+ Case III highlights the need for any manifold-based extractor to support both explicit and implicit forms of the Pareto front. Cases I and II have an explicit form of the front in the functional and variable domains. However, Case III has an implicit Pareto front (Fig. 6) owing to constraints ${g}_{1},{g}_{2}$ , which induce an implicit relation between ${x}_{1},{x}_{2}$ and therefore between ${f}_{1},{f}_{2}$ . SUHNPF’s ability to construct a full rank diffusive indicator function of Pareto vs. non-Pareto points enables it to approximate the true manifold.
268
+
269
+ ### 6.4 SUHNPF VS. OR AND MTL METHODS
270
+
271
+ Table 3 reports the number of candidate evaluations by OR methods vs. SUHNPF to find $P = {50}$ Pareto points for Cases I-III. Because OR methods and SUHNPF all return $P$ true Pareto points, we compare methods on efficiency only.
272
+
273
+ Note that HNPF is not an iterative solver: given a grid of points in the feasible domain, it identifies those that are weak Pareto optimal. It thus requires fewer evaluations when variable dimensionality is low (Cases I and III, with 2 dimensions). In contrast, SUHNPF is a solver that starts with 50 random points and iteratively converges them onto the weak Pareto front, irrespective of the variable space dimensionality (Case II, with 30 dimensions), and hence is scalable to MTL problems. In general, the large number of evaluations required by OR methods is indicative of their lack of scalability to MTL problems.
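
To make the filter-vs.-solver contrast concrete, a grid-based extractor can be caricatured as a one-shot dominance filter over sampled points. This is only an illustration: HNPF's actual optimality test uses the Fritz-John conditions, not pairwise dominance.

```python
import numpy as np

def nondominated_filter(F):
    """Keep only non-dominated rows of F (candidates x objectives, minimization).

    Illustrative only: a plain pairwise-dominance filter, not HNPF's
    Fritz-John-based test.
    """
    F = np.asarray(F, dtype=float)
    keep = []
    for i, fi in enumerate(F):
        dominated = any(
            np.all(fj <= fi) and np.any(fj < fi)
            for j, fj in enumerate(F) if j != i
        )
        if not dominated:
            keep.append(i)
    return F[keep]
```

A filter of this kind must evaluate every grid point once, so its cost grows with the grid size, which in turn grows exponentially with variable dimensionality; this is why a fixed set of iteratively refined candidates scales better.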
274
+
275
+ Table 3: The number of evaluations performed by each method to find 50 Pareto points across Cases I-III. HNPF performs well with small variable space dimensionality (e.g., 2D in Case I & III) but scales poorly to higher dimensionality (Case II, 30D).
276
+
277
+ <table><tr><td>$\mathbf{{Method}}$</td><td>Case I</td><td>Case II</td><td>Case III</td></tr><tr><td>NBI</td><td>1,236,034</td><td>1,497,063,168</td><td>447,574</td></tr><tr><td>mCHIM</td><td>1,081,625</td><td>3,605,242,265</td><td>497,537</td></tr><tr><td>PK</td><td>138,224</td><td>169,223,715</td><td>377,410</td></tr><tr><td>HNPF</td><td>2,731</td><td>24,457</td><td>3,626</td></tr><tr><td>SUHNPF</td><td>4,219</td><td>4,682</td><td>4,578</td></tr></table>
278
+
279
+ Table 4 reports the accuracy, efficiency and run-time of SUHNPF vs. MTL methods for Case I. For Case II, the min_norm_solver [Sener and Koltun, 2018] used by MTL methods fails, and Case III's constraints are not supported by MTL methods. Note that for fair evaluation, we only consider candidates that are produced within the feasible functional bounds for the problem. Additional run-time evaluation and discussion can be found in Appendix C.
280
+
281
+ Table 4: SUHNPF vs. MTL methods on Case I in finding $P = {50}$ Pareto points. We report the $\%$ of feasible points each method finds and their avg/max error vs. the true front. Our error measure considers feasible points only; infeasible points are not penalized.
282
+
283
+ <table><tr><td>$\mathbf{{Method}}$</td><td>LS</td><td>MOOMTL</td><td>PMTL</td><td>EPO</td><td>EPSE</td><td>PHN</td><td>SUHNPF</td></tr><tr><td>Evaluations</td><td>5K</td><td>5K</td><td>5K</td><td>5K</td><td>5K</td><td>5K</td><td>4,219</td></tr><tr><td>Run-time (secs)</td><td>18.1</td><td>19.2</td><td>527</td><td>752</td><td>641</td><td>853</td><td>10.0</td></tr><tr><td>Points Found</td><td>54%</td><td>32%</td><td>70%</td><td>68%</td><td>30%</td><td>80%</td><td>100%</td></tr><tr><td>Avg Err $\left( {10}^{-4}\right)$</td><td>0.53</td><td>0.45</td><td>4.15</td><td>8.73</td><td>0.61</td><td>3.04</td><td>0.52</td></tr><tr><td>Max Err $\left( {10}^{-4}\right)$</td><td>1.12</td><td>0.98</td><td>126</td><td>106</td><td>0.94</td><td>73.8</td><td>0.82</td></tr></table>
284
+
285
+ Regarding Case I coverage and accuracy, SUHNPF returns all 50 Pareto points; no MTL method does. For all points that are found, we measure their error vs. the true Pareto front. SUHNPF is seen to achieve the lowest error, with maximum error bounded by the ${10}^{-4}$ error tolerance parameter set in our experiments. Specifically, the outer loop of Alg. 1 would not achieve convergence until all the points are within the prescribed error tolerance. In contrast, PMTL, EPO, and PHN yield maximum error two orders of magnitude larger. Note also that our error metric generously scores only the points found by each method, with no penalty for missing points. Visually, SUHNPF (Fig. 2) clearly provides better coverage of the Pareto front via a denser, more even spread of points vs. those found by MTL methods (Fig. 3).
286
+
287
+ Because MTL approaches assume convexity of objective functions to generate points with uniformity on the Pareto front, and Case I includes non-convex objectives, the MTL solvers fail to find points in certain regions (see Fig. 3). While EPO's solver has convergence criteria, it still produces points that do not converge (circled in blue). This stems from EPO's reliance on KKT conditions to achieve optimality, which fails on Case I's non-convex form of ${f}_{2}$. Correspondingly, PHN(-EPO), which uses EPO as its base solver, also fails to converge on certain points. In contrast, SUHNPF relies on the FJC to test optimality, which fully supports non-convexity in functions and constraints.
288
+
289
+ Regarding Case I efficiency, SUHNPF is also fastest: nearly twice as fast as LS and MOOMTL, more than ${50}\mathrm{x}$ faster than PMTL and EPSE, 75x faster than EPO, and 85x faster than PHN. (Because PHN-EPO calls EPO, it is necessarily slower than EPO). As Navon et al. [2021] note, LS is much faster than EPO, so one could expect PHN-LS to be faster than PHN-EPO and slower than LS.
290
+
291
+ ## 7 SUHNPF AS A HYPERNETWORK
292
+
293
+ Hypernetworks [Ha et al., 2016] train one neural model to generate effective weights for a second, target model. Navon et al. [2021] and Lin et al. [2021] learn a neural manifold mapping MOO solutions to different target model weights, enabling the target model to achieve the desired Pareto trade-off for the MOO problem.
294
+
295
+ Assume the target task maps from input $Y$ to output $Z$ . We seek to minimize objective functions ${f}_{1}$ and ${f}_{2}$ having loss functions ${\mathcal{L}}_{1}$ and ${\mathcal{L}}_{2}$ . Given correct output ${Z}^{ * }$ , we score $Z$ for each loss function ${\mathcal{L}}_{i}\left( {Z,{Z}^{ * }}\right)$ . A target model for this task ${C}_{\Theta } : Y \rightarrow Z$ with parameters $\Theta$ will yield losses ${\mathcal{L}}_{i}\left( {{C}_{\Theta }\left( Y\right) ,{Z}^{ * }}\right)$ for each $i$ . The MOO problem is to find Pareto optimal ${\Theta }^{ * }$ for the ${f}_{1} = {\mathcal{L}}_{1}$ vs. ${f}_{2} = {\mathcal{L}}_{2}$ trade-off.
296
+
297
+ The objectives ${\mathcal{L}}_{1}\left( \Theta \right) ,{\mathcal{L}}_{2}\left( \Theta \right)$ for SUHNPF are continuous differentiable functions of $\Theta$ . This enables SUHNPF's guided double gradient descent strategy to efficiently search the space of target model parameters $\Theta$ , mapping each to resulting loss values $\left( {{\mathcal{L}}_{1},{\mathcal{L}}_{2}}\right)$ . Training data resulting from this search allows SUHNPF to learn an $\epsilon$ -bounded approximation $\widetilde{M}\left( {\Theta }^{ * }\right)$ to the weak Pareto optimal manifold.
298
+
299
+ As in prior Pareto Front Learning (PFL) work [Navon et al., 2021, Lin et al., 2021], this enables rapid model personalization at run-time based on user preferences. The neural MOO classifier loss ${\text{Loss}}_{\text{classifier}}$ is a weighted linear combination of the user-prescribed objectives $\left( {{\mathcal{L}}_{1},{\mathcal{L}}_{2}}\right)$ . The classifier loss hyper-parameter $\alpha$ (trade-off value) is computed as a post-processing step corresponding to Pareto optimal classifier weights ${\Theta }^{ * }$ , enabling rapid traversal of arbitrary $\left( {\alpha ,{\Theta }^{ * }}\right)$ solutions. See Fig. 9 in Appendix A for additional details of the setup of SUHNPF as a hypernetwork to optimize a target model.
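
A minimal sketch of this combined loss and the run-time $\alpha$ sweep (names are ours, not from any released code):

```python
import numpy as np

def classifier_loss(l1, l2, alpha):
    # Weighted linear combination of the two task losses for trade-off alpha in [0, 1].
    return alpha * l1 + (1.0 - alpha) * l2

# The 11 user trade-off preferences used in the experiments: alpha in {1.0, 0.9, ..., 0.0}.
alphas = np.linspace(1.0, 0.0, 11)
```

At run-time, a user picks any $\alpha$ and reads off the corresponding Pareto optimal weights $\Theta^*$ from the learned manifold, with no retraining.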
300
+
301
+ ### 7.1 EVALUATION ON MULTI-TASK LEARNING
302
+
303
+ We evaluate on the same MTL image classification problems as in Navon et al. [2021]. Given two underlying source datasets, MNIST [LeCun et al., 1998] and Fashion-MNIST [Xiao et al., 2017a], Navon et al. [2021] report on three MTL tasks: MultiMNIST [Sabour et al., 2017], Multi-Fashion, and Multi-Fashion + MNIST. In each case, two images are sampled from the source datasets and overlaid, one at the top-left corner and one at the bottom-right, with each also shifted up to 4 pixels in each direction. The two competing tasks are to correctly classify each of the original images: Top-Left (Task 1 or ${f}_{1}$ ) and Bottom-Right (Task 2 or ${f}_{2}$ ). We use 120K training and 20K testing examples and directly apply existing single-task models, allocating 10% of each training set for constructing validation sets, as used in Lin et al. [2019]. Navon et al. [2021] found that PHN-EPO (henceforth PHN) was more accurate than other methods they compared, so we use PHN as our baseline.
304
+
305
+ We adopt the LeNet architecture [LeCun et al., 1998] as the target model to learn. Following prior MTL work [Sener and Koltun, 2018], we treat all layers other than the last as the shared representation function and put two fully-connected layers as task-specific functions. We use cross-entropy loss with softmax activation for both task-specific loss functions. Because cross-entropy loss functions are differentiable, we can use them directly as training objectives.
306
+
307
+ ![0196399d-7aa5-769b-a7a6-0ffeac477692_7_288_132_1158_317_0.jpg](images/0196399d-7aa5-769b-a7a6-0ffeac477692_7_288_132_1158_317_0.jpg)
308
+
309
+ Figure 7: Cross-entropy loss on the test split for all three MTL datasets for SUHNPF vs. PHN. The 11 points shown for each method correspond (from left to right) to varying trade-off preferences in minimizing the combined linear loss over objectives: $\alpha {f}_{1} + \left( {1 - \alpha }\right) {f}_{2}$ for $\alpha \in \{ 1,{0.9},\ldots ,0\}$ . The gray dashed lines show the best loss achieved by LeNet to classify a single image for each given task.
310
+
311
+ Results. We see SUHNPF vs. PHN results on dataset test splits in Fig. 7. Because SUHNPF defines a strict $\epsilon$ -bound on error, we can assert its correctness on this basis alone. Visual inspection also shows that PHN returns dominated points (e.g., top of MultiMNIST plot), whereas a Pareto front by definition includes only non-dominated points. Nonetheless, we cannot directly measure error vs. a known Pareto front because real MOO problems lack a simple analytical solution like synthetic benchmark problems. Of course, we can still compare relative performance of methods. We see that SUHNPF achieves strictly lower loss than PHN across all user trade-off settings of $\alpha$ on all three datasets.
312
+
313
+ Since the minimum loss $\min \left( {f}_{1}\right) = \min \left( {f}_{2}\right) = 0$ for both objectives, the ideal point [Marler and Arora, 2004] for joint minimization is $\left( {0,0}\right)$ . A simple error measure for each point found is thus its ${\ell }_{2}$ distance from $\left( {0,0}\right) : \sqrt{{f}_{1}^{2} + {f}_{2}^{2}}$ . Table 5 reports this distance for each Pareto point found at each $\alpha$ (across methods and datasets). We also report the average over the 11 settings of $\alpha$ . Overall, Table 5 quantifies what Fig. 7 depicts visually: SUHNPF performs strictly better for every Pareto point and thus also on average.
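
This error measure is a one-liner; a sketch with our own function names:

```python
import math

def ideal_point_error(f1, f2):
    # l2 distance of a Pareto point (f1, f2) from the ideal point (0, 0)
    return math.hypot(f1, f2)

def average_error(points):
    # Average over the Pareto points found across the settings of alpha
    return sum(ideal_point_error(f1, f2) for f1, f2 in points) / len(points)
```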
314
+
315
+ Table 5: SUHNPF vs. PHN on MTL tasks, measured by distance of each Pareto point found vs. the ideal loss point $\left( {{f}_{1},{f}_{2}}\right) = \left( {0,0}\right)$ .
316
+
317
+ <table><tr><td/><td colspan="12">Trade-off values $\alpha$</td></tr><tr><td>Method</td><td>0.0</td><td>0.1</td><td>0.2</td><td>0.3</td><td>0.4</td><td>0.5</td><td>0.6</td><td>0.7</td><td>0.8</td><td>0.9</td><td>1.0</td><td>$\mathbf{{Avg}}$</td></tr><tr><td colspan="13">MultiMNIST</td></tr><tr><td>PHN</td><td>.621</td><td>.585</td><td>.539</td><td>.504</td><td>.486</td><td>.478</td><td>.483</td><td>.494</td><td>.508</td><td>.521</td><td>.527</td><td>.522</td></tr><tr><td>SUHNPF</td><td>.500</td><td>.478</td><td>.464</td><td>.448</td><td>.441</td><td>.434</td><td>.441</td><td>.443</td><td>.452</td><td>.457</td><td>.465</td><td>.456</td></tr><tr><td colspan="13">MultiFashion</td></tr><tr><td>PHN</td><td>.877</td><td>.872</td><td>.853</td><td>.813</td><td>.784</td><td>.773</td><td>.779</td><td>.797</td><td>.816</td><td>.826</td><td>.829</td><td>.819</td></tr><tr><td>SUHNPF</td><td>.862</td><td>.819</td><td>.792</td><td>.773</td><td>.757</td><td>.746</td><td>.754</td><td>.758</td><td>.767</td><td>.793</td><td>.810</td><td>.784</td></tr><tr><td colspan="13">MultiFashion+MNIST</td></tr><tr><td>PHN</td><td>.690</td><td>.613</td><td>.581</td><td>.569</td><td>.571</td><td>.579</td><td>.598</td><td>.631</td><td>.682</td><td>.752</td><td>.797</td><td>.642</td></tr><tr><td>SUHNPF</td><td>.667</td><td>.617</td><td>.586</td><td>.552</td><td>.547</td><td>.543</td><td>.549</td><td>.553</td><td>.583</td><td>.629</td><td>.695</td><td>.593</td></tr></table>
318
+
319
+ ## 8 UNDERSTANDING SUHNPF VS. PHN
320
+
321
+ While both SUHNPF and PHN are manifold-based (Fig. 8), they differ in the type of manifold being learned. SUHNPF explicitly maintains point sets $\mathcal{P}0$ and $\mathcal{P}1$ to learn the classification boundary between Pareto vs. non-Pareto points as per the FJC. PHN fits a regression surface over the set of points returned by LS or EPO. Since neither LS nor EPO is guaranteed to operate under non-convex settings (Section 3), PHN in turn inherits those drawbacks in using them. Table 6 highlights the key differences. The distinction between a diffusive full-rank indicator vs. a low-rank regressor is further discussed in Appendix D.
322
+
323
+ ![0196399d-7aa5-769b-a7a6-0ffeac477692_7_919_575_642_103_0.jpg](images/0196399d-7aa5-769b-a7a6-0ffeac477692_7_919_575_642_103_0.jpg)
324
+
325
+ Figure 8: High level abstraction of SUHNPF and PHN solvers. While SUHNPF uses the Fritz-John criteria for optimality check, PHN uses the candidates deemed optimal by the EPO solver.
326
+
327
+ Table 6: SUHNPF vs. PHN for Pareto front learning.
328
+
329
+ <table><tr><td>Criteria</td><td>SUHNPF</td><td>PHN</td></tr><tr><td>Handle non-convexity</td><td>✓</td><td>✘</td></tr><tr><td>Supports constraints</td><td>✓</td><td>✘</td></tr><tr><td>Manifold Extractor</td><td>✓</td><td>✓</td></tr><tr><td>Nature of manifold</td><td>Diffusive full-rank indicator</td><td>Low-rank regressor</td></tr><tr><td>Optimality Criteria</td><td>Fritz-John Conditions</td><td>EPO solver</td></tr></table>
330
+
331
+ ## 9 CONCLUSION
332
+
333
+ Multi-objective optimization problems require balancing competing objectives, often under constraints. In this work, we described a novel method for Pareto-front learning (inducing the full Pareto manifold at train-time so users can pick any desired optimal trade-off point at run-time). Our SUHNPF Pareto solver is robust against non-convexity, with error bounded by a user-specified tolerance. Our key innovation over prior work's HNPF [Singh et al., 2021] is to exploit the Fritz-John Conditions for a novel guided double gradient descent strategy. The resulting scalability imparts significant improvements in memory and run-time vs. prior OR and Multi-Task Learning (MTL) approaches. Results across synthetic benchmarks and MTL problems in image classification show clear, consistent advantages of SUHNPF in capability (handling non-convexity and constraints), denser coverage and higher accuracy in recovering the true Pareto front, and efficiency (time and space). Beyond empirical results, our conceptual framing and review of prior work also further bridges disparate lines of OR and MTL research.
334
+
335
+ Both SUHNPF and MTL methods assume differentiable evaluation metrics as training loss so that optima can be found through gradient descent. However, loss can be a non-differentiable, probabilistic measure, such as in fairness-related tasks [Sacharidis, 2019, Valdivia et al., 2020]. This creates a risk of metric divergence between training loss vs. the evaluation measure of interest [Abou-Moustafa and Ferrie, 2012]. Continuing development of differentiable measures can help to address this [Swezey et al., 2021].
336
+
337
+ ## References
338
+
339
+ Karim T Abou-Moustafa and Frank P Ferrie. A note on metric properties for some divergence measures: The gaussian case. In Asian Conference on Machine Learning, pages 1-15. PMLR, 2012.
340
+
341
+ Mohammad Mahdi Azadjalal, Parham Moradi, Alireza Abdollahpouri, and Mahdi Jalili. A trust-aware recommendation method based on pareto dominance and confidence concepts. Knowledge-Based Systems, 116:130-143, 2017.
342
+
343
+ Ananth Balashankar, Alyssa Lees, Chris Welty, and Lakshminarayanan Subramanian. What is fair? exploring pareto-efficiency for fairness constrained classifiers. arXiv preprint arXiv:1910.14120, 2019.
344
+
345
+ Stephen Boyd, Stephen P Boyd, and Lieven Vandenberghe. Convex optimization. Cambridge university press, 2004.
346
+
347
+ François Chollet. keras. https://github.com/fchollet/keras, 2015.
348
+
349
+ Jared L Cohon. Multiobjective programming and planning, volume 140. 2004.
350
+
351
+ Indraneel Das and John E Dennis. Normal-boundary intersection: A new method for generating the pareto surface in nonlinear multicriteria optimization problems. SIAM journal on optimization, 8(3):631-657, 1998.
352
+
353
+ Matthias Ehrgott and Margaret M Wiecek. Saddle points and pareto points in multiple objective programming. Journal of Global Optimization, 32(1):11-33, 2005.
354
+
355
+ Yu G Evtushenko and Mikhail Anatol'evich Posypkin. Nonuniform covering method as applied to multicriteria optimization problems with guaranteed accuracy. Computational Mathematics and Mathematical Physics, 53 (2):144-157, 2013.
356
+
357
+ A Ghane-Kanafi and E Khorram. A new scalarization method for finding the efficient frontier in non-convex multi-objective problems. Applied Mathematical Modelling, 39(23-24):7483-7498, 2015.
358
+
359
+ Massimiliano Gobbi, F Levi, Gianpiero Mastinu, and Giorgio Previati. On the analytical derivation of the pareto-optimal set with applications to structural design. Structural and Multidisciplinary Optimization, 51(3):645-657, 2015.
360
+
361
+ David Ha, Andrew Dai, and Quoc V Le. Hypernetworks. arXiv preprint arXiv:1609.09106, 2016.
362
+
363
+ Cho-Jui Hsieh, Nagarajan Natarajan, and Inderjit Dhillon. Pu learning for matrix completion. In International Conference on Machine Learning, pages 2445-2453. PMLR, 2015.
364
+
365
+ Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
368
+
369
+ Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
370
+
371
+ Francesco Levi and Massimiliano Gobbi. An application of analytical multi-objective optimization to truss structures. In 11th AIAA/ISSMO multidisciplinary analysis and optimization conference, page 6975, 2006.
372
+
373
+ Xi Lin, Hui-Ling Zhen, Zhenhua Li, Qingfu Zhang, and Sam Kwong. Pareto multi-task learning. In Thirty-third Conference on Neural Information Processing Systems (NeurIPS 2019), 2019.
374
+
375
+ Xi Lin, Zhiyuan Yang, Qingfu Zhang, and Sam Kwong. Controllable pareto multi-task learning, 2021. URL https://openreview.net/forum?id=5mhViEOQxaV.
376
+
377
+ Jinfei Liu, Li Xiong, Jian Pei, Jun Luo, Haoyu Zhang, and Si Zhang. Skyrec: Finding pareto optimal groups. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, pages 2913-2916, 2019.
378
+
379
+ Pingchuan Ma, Tao Du, and Wojciech Matusik. Efficient continuous pareto exploration in multi-task learning. In International Conference on Machine Learning, pages 6522-6531. PMLR, 2020.
380
+
381
+ Debabrata Mahapatra and Vaibhav Rajan. Multi-task learning with user preferences: Gradient descent with controlled ascent in pareto optimization. In International Conference on Machine Learning, pages 6597-6607. PMLR, 2020.
382
+
383
+ R Timothy Marler and Jasbir S Arora. Survey of multi-objective optimization methods for engineering. Structural and multidisciplinary optimization, 26(6):369-395, 2004.
384
+
385
+ Natalia Martinez, Martin Bertran, and Guillermo Sapiro. Minimax pareto fairness: A multi objective perspective. In Proceedings of the 37th International Conference on Machine Learning, 2020.
386
+
387
+ I Marusciac. On fritz john type optimality criterion in multi-objective optimization. Mathematica-Rev. Anal. Numér. Théor. Approx., pages 109-114, 1982.
388
+
389
+ Nikola Milojkovic, Diego Antognini, Giancarlo Bergamin, Boi Faltings, and Claudiu Musat. Multi-gradient descent for multi-objective recommender systems. arXiv preprint arXiv:2001.00846, 2019.
390
+
391
+ Aviv Navon, Aviv Shamsian, Gal Chechik, and Ethan Fetaya. Learning the pareto front with hypernetworks. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=NjF772F4ZZR.
392
+
393
+ Vilfredo Pareto. Manuale di economica politica. Società Editrice Libraria, Milan, 1906. English translation as Manual of Political Economy, Kelley, New York.
394
+
395
+ Behzad Pirouz and Esmaile Khorram. A computational approach based on the $\varepsilon$ -constraint method in multi-objective optimization problems. Advances and Applications in Statistics, 49(6):453, 2016.
396
+
397
+ Sara Sabour, Nicholas Frosst, and Geoffrey E Hinton. Dynamic routing between capsules. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pages 3859-3869, 2017.
398
+
399
+ Dimitris Sacharidis. Top-n group recommendations with fairness. In Proceedings of the 34th ACM/SIGAPP symposium on applied computing, pages 1663-1670, 2019.
400
+
401
+ Ozan Sener and Vladlen Koltun. Multi-task learning as multi-objective optimization. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, pages 525-536, 2018.
402
+
403
+ Gurpreet Singh, Soumyajit Gupta, Matthew Lease, and Clint Dawson. A hybrid 2-stage neural optimization for pareto front extraction. arXiv preprint arXiv:2101.11684, 2021.
404
+
405
+ Robin Swezey, Aditya Grover, Bruno Charron, and Stefano Ermon. Pirank: Scalable learning to rank via differentiable sorting. Advances in Neural Information Processing Systems, 34, 2021.
408
+
409
+ Masahiro Tanaka, Hikaru Watanabe, Yasuyuki Furukawa, and Tetsuzo Tanino. GA-based decision support system for multicriteria optimization. In 1995 IEEE International Conference on Systems, Man and Cybernetics. Intelligent Systems for the 21st Century, volume 2, pages 1556-1561. IEEE, 1995.
410
+
411
+ Ana Valdivia, Javier Sánchez-Monedero, and Jorge Casillas. How fair can we go in machine learning? assessing the boundaries of fairness in decision trees. arXiv preprint arXiv:2006.12399, 2020.
412
+
413
+ M Van Rooyen, X Zhou, and Sanjo Zlobec. A saddle-point characterization of pareto optima. Mathematical programming, 67(1):77-88, 1994.
414
+
415
+ Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747, 2017a.
416
+
417
+ Lin Xiao, Zhang Min, Zhang Yongfeng, Gu Zhaoquan, Liu Yiqun, and Ma Shaoping. Fairness-aware group recommendation with pareto-efficiency. In Proceedings of the Eleventh ACM Conference on Recommender Systems, pages 107-115, 2017b.
418
+
419
+ Qingfu Zhang, Aimin Zhou, Shizheng Zhao, Ponnuthurai Nagaratnam Suganthan, Wudong Liu, and Santosh Tiwari. Multiobjective optimization test instances for the CEC 2009 special session and competition. 2008.
420
+
421
+ ## A SUHNPF AS HYPERNETWORK
422
+
423
+ Fig. 9 shows the overview of SUHNPF as a hypernetwork tasked with optimizing the weights of the target neural classifier. The inputs to the neural classifier are data points $Y$ and the outputs are matched against labels ${Z}_{1,\text{true }},{Z}_{2,\text{true }}$ for two different tasks. The weights of the neural classifier are $\Theta$ , and SUHNPF as a hypernetwork approximates the weak Pareto manifold $\widetilde{M}\left( {\Theta }^{ * }\right)$ for optimal trade-off over different values $\alpha$ for the two MOO losses ${\mathcal{L}}_{1},{\mathcal{L}}_{2}$ .
424
+
425
+ ![0196399d-7aa5-769b-a7a6-0ffeac477692_10_216_517_572_263_0.jpg](images/0196399d-7aa5-769b-a7a6-0ffeac477692_10_216_517_572_263_0.jpg)
426
+
427
+ Figure 9: Framework for extracting the Pareto optimal front $\widetilde{M}\left( \Theta \right)$ of a given target model ${C}_{\Theta }$ (which could also be a nonneural model: Decision Tree, Logistic Regression, etc.)
428
+
429
+ ## B SPACE COMPLEXITY ANALYSIS
430
+
431
+ MTL methods solve problems in both primal and dual spaces, i.e., the gradients of the objectives in the primal and the trade-off $\alpha$ 's in the dual. SUHNPF, however, works only in the primal space, w.r.t. the gradients of the functions needed to construct the Fritz-John matrix, since the FJC ensure $\alpha$ -free stationary point identification. Thus, the additional dual optimization space is not required. To compare fairly w.r.t. MTL methods, we consider the general cost of both such systems under an unconstrained setting, i.e., only objectives and no additional constraints. Here $k$ and $n$ denote the number of objectives and the dimension of the variable space, respectively.
432
+
433
+ SUHNPF. To find $P$ Pareto candidates, SUHNPF updates $P$ points, requiring size ${Pn}$ . The $\nabla {F}^{T}\nabla F$ and $\nabla \det$ matrices are of size ${k}^{2}$ and ${nk}$ respectively. The total memory cost is thus of order $O\left( {n\left( {P + k}\right) + {k}^{2}}\right)$ .
434
+
435
+ MTL. To find $P$ Pareto candidates, MTL methods use $P$ cones or rays requiring size ${Pn}$ . The gradient matrix of the objective function $\nabla F$ takes ${nk}$ , constructing the simplex takes ${k}^{2}$ , solving for trade-off $\alpha$ takes ${k}^{2}$ , and the iterative update requires an additional ${nk}$ memory. The total memory cost is thus of order $O\left( {n\left( {P + {2k}}\right) + 2{k}^{2}}\right)$ .
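
These two counts can be compared directly; the sketch below drops the big-O constants and uses the raw term sums as illustrative sizes:

```python
def suhnpf_memory(P, n, k):
    # O(n(P + k) + k^2): P candidate points (Pn), plus the grad-F^T grad-F (k^2)
    # and gradient-of-determinant (nk) matrices
    return n * (P + k) + k ** 2

def mtl_memory(P, n, k):
    # O(n(P + 2k) + 2k^2): cones/rays (Pn), gradient matrix (nk), simplex (k^2),
    # alpha solve (k^2), and iterative update (nk)
    return n * (P + 2 * k) + 2 * k ** 2
```

For example, at the Case II sizes ($P = 50$, $n = 30$, $k = 2$), the MTL estimate exceeds the SUHNPF estimate, reflecting the extra dual-space terms.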
436
+
437
+ ## C RUN-TIME ANALYSIS
438
+
439
+ While correctness and point density in finding the true Pareto optimal solution should be our top priority in comparing methods, we also report run-time of SUHNPF vs. other MTL approaches on the studied cases. As in Table 4, we explicitly request that each method generate 50 Pareto candidates, within the feasible functional domain. Table 4 reports the overall execution time, averaged over 10 runs each, given our experimental setup in Appendix F.
440
+
441
+ PHN uses either EPO or LS as its base solver, hence we report the total time that includes (a) the run-time of the base solver; and (b) the neural network run-time to learn the regression manifold. Cases I and III have a 2D variable domain, where SUHNPF takes 1s per epoch, with 2 epochs for training in Step 7 of Alg. 1. Both cases took 5 epochs to converge, resulting in a total run-time of 10s. Case II has a 30D variable domain where SUHNPF takes 2s per epoch, resulting in a total run-time of 20s. While LS and MOOMTL run at a similar time scale to SUHNPF, they fail to generate an even spread of points (Fig. 3(a, b)).
442
+
443
+ ## D SUHNPF VS. OTHER MTL METHODS
444
+
445
+ Point based solvers. Most MTL methods, including MOOMTL, PMTL, EPSE, and EPO, are point based solvers. Being point-based, they return one solution per run, relying upon specialized local initialization to generate an even spread of Pareto points, using cones, rays, or other domain partitioning strategies, across the feasible set of saddle points. Thus, asked for $P$ Pareto candidates, these solvers have to run for $P$ instances. Later, if the user demands ${2P}$ points, they have to run for ${2P}$ instances from scratch, without utilizing the results from the previous run.
446
+
447
+ Manifold based solvers. A manifold-based solution strategy should separate Pareto vs. non-Pareto points without requiring any special initialization. It would also be able to extract ${2P}$ Pareto candidates even when trained to generate only $P$ candidates, by interpolating from the learned boundary. This is highly advantageous over point-based schemes when deploying practical systems, where the expected user trade-off preference is not known a priori, so having the full approximated front is valuable. Notably, PHN [Navon et al., 2021] and SUHNPF (see Fig. 8) are the only manifold based Pareto solvers that we are aware of to date, where both are scalable to optimize large neural models. Another advantage is that both the SUHNPF and EPO (used by PHN in the backend) solvers have a user-specified error tolerance criterion built in, while other MTL solvers lack one and therefore run a specified number of iterations before declaring a candidate Pareto, without actually checking for optimality.
448
+
449
+ Full rank indicator vs. low rank regressor. A manifold based solver should also generalize to cases where the manifold is an implicit function, as opposed to its easier counterpart of being an explicit function. SUHNPF has an added advantage in extracting the weak Pareto manifold as a $k$ -dimensional diffusive indicator function, as opposed to the $(k-1)$ -dimensional manifold itself, where the regressed manifold is guided not only by the weak Pareto points (indicator value 1) but also by the sub-optimal points (indicator value 0) for a more robust and accurate extraction. Thus it can approximate the manifold in general, irrespective of the manifold being an explicit or implicit function. In comparison, PHN learns a $(k-1)$ -dimensional regression manifold, given solution points obtained from EPO or LS. Therefore, PHN's default assumption is that the Pareto manifold is always an explicit function, i.e., for $k$ objectives, the Pareto manifold is of dimension $k - 1$ .
450
+
451
+ ## E DISCUSSION ON REMARK 1
452
+
453
+ Remark: If the ${f}_{i}$ are continuous and differentiable once, in an unconstrained setting, then the set of weak Pareto optimal points is ${x}^{ * } = \left\{ {x \mid \det \left( {L{\left( x\right) }^{T}L\left( x\right) }\right) = 0}\right\}$ for a non-square matrix $L\left( x\right)$ , and is equivalent to ${x}^{ * } = \{ x \mid \det \left( {L\left( x\right) }\right) = 0\}$ for a square matrix $L\left( x\right)$ .
454
+
455
+ ### E.1 ILLUSTRATION
456
+
457
+ We begin by considering two multi-variable functions (for ease of description). Let us consider the following two quadratic functions, convex in both variables as:
458
+
459
+ $$
460
+ {f}_{1}\left( \mathbf{x}\right) = {\left( {x}_{1} - 1\right) }^{2} + {\left( {x}_{2} - 1\right) }^{2}
461
+ $$
462
+
463
+ $$
464
+ {f}_{2}\left( \mathbf{x}\right) = {\left( {x}_{1} + 1\right) }^{2} + {\left( {x}_{2} + 1\right) }^{2}
465
+ $$
466
+
467
+ The task is to find the Pareto front between the two objectives ${f}_{1}\left( \mathbf{x}\right)$ and ${f}_{2}\left( \mathbf{x}\right)$ . Since this is a trivial problem, the Pareto front is known a priori as the straight line ${x}_{1} = {x}_{2}$ for ${x}_{1} \in \left\lbrack {-1,1}\right\rbrack$ in the variable domain. Let us first plot ${f}_{1}$ vs. ${f}_{2}$ for visual assessment in Fig. 10. Note that independent of each other:
468
+
469
+ $$
470
+ \nabla {f}_{1}\left( \mathbf{x}\right) = {\left\lbrack \begin{array}{ll} \frac{\partial {f}_{1}}{\partial {x}_{1}} & \frac{\partial {f}_{1}}{\partial {x}_{2}} \end{array}\right\rbrack }^{T} = \mathbf{0}\text{ at }\left( {{x}_{1},{x}_{2}}\right) = \left( {1,1}\right)
471
+ $$
472
+
473
+ $$
474
+ \nabla {f}_{2}\left( \mathbf{x}\right) = {\left\lbrack \begin{array}{ll} \frac{\partial {f}_{2}}{\partial {x}_{1}} & \frac{\partial {f}_{2}}{\partial {x}_{2}} \end{array}\right\rbrack }^{T} = \mathbf{0}\text{ at }\left( {{x}_{1},{x}_{2}}\right) = \left( {-1, - 1}\right)
475
+ $$
476
+
477
+ ![0196399d-7aa5-769b-a7a6-0ffeac477692_11_238_867_524_395_0.jpg](images/0196399d-7aa5-769b-a7a6-0ffeac477692_11_238_867_524_395_0.jpg)
478
+
479
+ Figure 10: Functional Domain plot for two competing objectives.
480
+
481
+ One can easily confirm that the gradient matrix $L$ below cannot be identically zero for any value of $x \in {\mathbb{R}}^{2}$ :
482
+
483
+ $$
484
+ L = \left\lbrack \begin{array}{ll} \nabla {f}_{1}\left( x\right) & \nabla {f}_{2}\left( x\right) \end{array}\right\rbrack
485
+ $$
486
+
487
+ Note that in the above Fig. 10 we plotted the analytical solution explicitly. However for a Pareto solver we need to find the red curve for any two functions ${f}_{1}$ and ${f}_{2}$ .
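As a quick numerical sanity check (a minimal sketch of ours, not part of the original solver), the determinant criterion can be evaluated directly for these two quadratics; here $\det(L) = 8(x_1 - x_2)$, which vanishes exactly on the line $x_1 = x_2$:

```python
import numpy as np

def grad_f1(x):
    # Gradient of f1(x) = (x1 - 1)^2 + (x2 - 1)^2
    return np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] - 1.0)])

def grad_f2(x):
    # Gradient of f2(x) = (x1 + 1)^2 + (x2 + 1)^2
    return np.array([2.0 * (x[0] + 1.0), 2.0 * (x[1] + 1.0)])

def det_L(x):
    # L stacks the objective gradients as columns
    L = np.column_stack([grad_f1(x), grad_f2(x)])
    return np.linalg.det(L)

# On the Pareto front (x1 == x2) the determinant vanishes:
assert abs(det_L(np.array([0.3, 0.3]))) < 1e-9
# Off the front it does not (det(L) = 8 * (x1 - x2)):
assert abs(det_L(np.array([0.5, -0.2])) - 5.6) < 1e-9
```

Sweeping this check over a grid of $(x_1, x_2)$ values recovers the red analytical curve in Fig. 10 without any knowledge of trade-off weights.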
488
+
489
+ Target: Visually, we want an algorithm that, given an arbitrary initialization of a point $\left( {{x}_{1},{x}_{2}}\right)$ such that $\left( {{f}_{1},{f}_{2}}\right)$ lies in the feasible region (shaded green above), converges (or terminates) at the Pareto front (red curve) satisfying ${x}_{1} = {x}_{2}$ within a user-prescribed tolerance.
490
+
491
+ Let us say we have an initialization $\mathbf{x} = \left\lbrack \begin{array}{ll} {x}_{1} & {x}_{2} \end{array}\right\rbrack$ at iterate $i = 0$ . The iterative update can then be written as:
492
+
493
+ $$
494
+ {\mathbf{x}}^{i + 1} = {\mathbf{x}}^{i} \pm {\eta Q}\left( \mathbf{x}\right)
495
+ $$
496
+
497
+ where $\pm$ is used to indicate that a choice of minimization or maximization is not yet decided.
498
+
499
+ Requirement 1: Here, $\eta > 0$ is the step size (user-prescribed) and the unknown vector function $Q\left( \mathbf{x}\right)$ should be such that $Q\left( \mathbf{x}\right) = \mathbf{0}$ (termination) when ${x}_{1} = {x}_{2}$ (Pareto front). Otherwise, the iteration will not terminate and can over/undershoot the Pareto front, which we already know analytically for the above problem.
500
+
501
+ Requirement 2: The vector function $Q\left( \mathbf{x}\right)$ should be somehow related to the original objectives ${f}_{1}$ and ${f}_{2}$ .
502
+
503
+ ### E.2 CHOICE OF $Q\left( \mathbf{x}\right)$ AND CONVERGENCE
504
+
505
+ The most common (and easy to prescribe) choice of $Q\left( \mathbf{x}\right)$ is given by a linear scalarization of the original objectives ${f}_{1}$ and ${f}_{2}$ that linearly combines these two (or more) objectives to create a single scalar objective function:
506
+
507
+ $$
508
+ S\left( \mathbf{x}\right) = {\alpha }_{1}{f}_{1}\left( x\right) + {\alpha }_{2}{f}_{2}\left( x\right) \tag{9}
509
+ $$
510
+
511
+ The vector update function $Q\left( \mathbf{x}\right)$ is now given by the gradient of $S\left( \mathbf{x}\right)$ with respect to the variables ${x}_{1}$ and ${x}_{2}$ (given the ${\alpha }_{i}$'s):
512
+
513
+ $$
514
+ Q\left( \mathbf{x}\right) = {L\alpha } = \left\lbrack \begin{array}{ll} \nabla {f}_{1}\left( x\right) & \nabla {f}_{2}\left( x\right) \end{array}\right\rbrack {\left\lbrack \begin{array}{ll} {\alpha }_{1} & {\alpha }_{2} \end{array}\right\rbrack }^{T}
515
+ $$
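For a fixed trade-off $\alpha$, the update above amounts to plain gradient descent on $S(\mathbf{x})$. A minimal illustrative sketch for the two quadratics (variable names are ours):

```python
import numpy as np

def Q(x, alpha):
    # Q(x) = L @ alpha, with columns of L being
    # grad f1 = 2(x - 1) and grad f2 = 2(x + 1).
    L = np.column_stack([2.0 * (x - 1.0), 2.0 * (x + 1.0)])
    return L @ alpha

x = np.array([0.8, -0.4])       # arbitrary feasible initialization
alpha = np.array([0.5, 0.5])    # fixed trade-off on the simplex
eta = 0.1                       # user-prescribed step size
for _ in range(200):
    x = x - eta * Q(x, alpha)   # minimization: descend on S(x)

# For this alpha the iterate converges to (0, 0),
# a point on the Pareto front x1 == x2.
assert abs(x[0] - x[1]) < 1e-6 and abs(x[0]) < 1e-6
```

Each distinct $\alpha$ steers the descent toward a different point of the front, which is why point-based solvers must be re-run per trade-off.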
516
+
517
+ ### E.3 DERIVATION OF FRITZ-JOHN CONDITIONS
518
+
519
+ We already know that the gradient matrix $L$ cannot be identically zero, as discussed before. Furthermore, to avoid a trivial solution, the vector $\left\lbrack \begin{array}{ll} {\alpha }_{1} & {\alpha }_{2} \end{array}\right\rbrack$ must also not be identically zero; this is clear from the definition of the scalarized function $S\left( \mathbf{x}\right)$, which vanishes identically for $\alpha = \mathbf{0}$.
520
+
521
+ The only remaining possibility is that ${L\alpha }$ should approach zero for some $\mathbf{x} = \left\lbrack \begin{array}{ll} {x}_{1} & {x}_{2} \end{array}\right\rbrack$ as we iteratively update $\mathbf{x}$ using $Q\left( \mathbf{x}\right)$ . This gives us our termination/convergence criterion. Let us now look at what ${L\alpha } = 0$ implies, to understand the role of $\alpha$ .
522
+
523
+ $$
524
+ {L\alpha } = \left\lbrack \begin{array}{ll} \nabla {f}_{1}\left( x\right) & \nabla {f}_{2}\left( x\right) \end{array}\right\rbrack {\left\lbrack \begin{array}{ll} {\alpha }_{1} & {\alpha }_{2} \end{array}\right\rbrack }^{T}
525
+ $$
526
+
527
+ $$
528
+ = \left\lbrack \begin{array}{ll} \frac{\partial {f}_{1}}{\partial {x}_{1}} & \frac{\partial {f}_{2}}{\partial {x}_{1}} \\ \frac{\partial {f}_{1}}{\partial {x}_{2}} & \frac{\partial {f}_{2}}{\partial {x}_{2}} \end{array}\right\rbrack {\left\lbrack \begin{array}{ll} {\alpha }_{1} & {\alpha }_{2} \end{array}\right\rbrack }^{T} = {\left\lbrack \begin{array}{ll} 0 & 0 \end{array}\right\rbrack }^{T}
529
+ $$
530
+
531
+ Consider any point $\left( {{x}_{1},{x}_{2}}\right)$ in the feasible domain. What $\alpha$ values can achieve the above termination criterion? We now have two equations in two unknowns (the ${\alpha }_{i}$'s):
532
+
533
+ $$
534
+ {\alpha }_{1}\frac{\partial {f}_{1}}{\partial {x}_{1}} + {\alpha }_{2}\frac{\partial {f}_{2}}{\partial {x}_{1}} = 0
535
+ $$
536
+
537
+ $$
538
+ {\alpha }_{1}\frac{\partial {f}_{1}}{\partial {x}_{2}} + {\alpha }_{2}\frac{\partial {f}_{2}}{\partial {x}_{2}} = 0
539
+ $$
540
+
541
+ Eliminating ${\alpha }_{1}$ using the first equation and substituting in the second equation:
542
+
543
+ $$
544
+ \left\lbrack {-\left( {\frac{\partial {f}_{1}}{\partial {x}_{2}}\frac{\partial {f}_{2}}{\partial {x}_{1}}}\right) /\left( \frac{\partial {f}_{1}}{\partial {x}_{1}}\right) + \frac{\partial {f}_{2}}{\partial {x}_{2}}}\right\rbrack {\alpha }_{2} = 0
545
+ $$
546
+
547
+ For any ${\alpha }_{2} > 0$ , this implies:
548
+
549
+ $$
550
+ \left\lbrack {-\left( {\frac{\partial {f}_{1}}{\partial {x}_{2}}\frac{\partial {f}_{2}}{\partial {x}_{1}}}\right) /\left( \frac{\partial {f}_{1}}{\partial {x}_{1}}\right) + \frac{\partial {f}_{2}}{\partial {x}_{2}}}\right\rbrack = 0
551
+ $$
552
+
553
+ $$
554
+ \Rightarrow \left\lbrack {\frac{\partial {f}_{1}}{\partial {x}_{1}}\frac{\partial {f}_{2}}{\partial {x}_{2}} - \frac{\partial {f}_{1}}{\partial {x}_{2}}\frac{\partial {f}_{2}}{\partial {x}_{1}}}\right\rbrack = 0
555
+ $$
556
+
557
+ Alternatively,
558
+
559
+ $$
560
+ \det \left( \left\lbrack \begin{array}{ll} \frac{\partial {f}_{1}}{\partial {x}_{1}} & \frac{\partial {f}_{2}}{\partial {x}_{1}} \\ \frac{\partial {f}_{1}}{\partial {x}_{2}} & \frac{\partial {f}_{2}}{\partial {x}_{2}} \end{array}\right\rbrack \right) = \det \left( L\right) = 0
561
+ $$
562
+
563
+ Note that for any matrix $A \neq \mathbf{0}$, the system ${Ax} = 0$ admits a non-trivial solution $x \neq 0$ if and only if $A$ has a non-trivial null space, i.e., $A$ is low rank; if $A$ is square, this is equivalent to its determinant being zero.
564
+
565
+ For the two convex objectives ${f}_{1}$ and ${f}_{2}$ described in the beginning, the user can trivially evaluate $\det \left( L\right) = 0$ to arrive at the analytical solution for the Pareto front $\left( {{x}_{1} = {x}_{2}}\right)$, independent of the choice of the trade-off values ${\alpha }_{i}$.
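A small sketch of this check: at any point on the front, the null space of $L$ recovers a valid trade-off vector $\alpha$ on the simplex (the SVD-based recovery and variable names are our illustrative choices):

```python
import numpy as np

x = np.array([0.3, 0.3])  # a point on the Pareto front x1 == x2
# Columns of L: grad f1 = 2(x - 1), grad f2 = 2(x + 1)
L = np.column_stack([2.0 * (x - 1.0), 2.0 * (x + 1.0)])

# alpha spans the null space of L; recover it from the SVD
# (the right-singular vector for the near-zero singular value).
_, s, Vt = np.linalg.svd(L)
alpha = Vt[-1]
alpha = alpha / alpha.sum()   # rescale onto the simplex

assert np.allclose(L @ alpha, 0.0, atol=1e-9)
assert np.all(alpha > 0)      # valid (positive) trade-off weights
```

Off the front, $L$ is full rank and only the trivial $\alpha = \mathbf{0}$ satisfies $L\alpha = 0$, consistent with the derivation above.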
566
+
567
+ In effect, this implies that any stationary point at which the above iteration stops corresponds to an $\left( {{x}_{1},{x}_{2}}\right)$ such that the matrix $L$ becomes low rank. This is exactly what the Fritz-John criterion indicates. Note that the Fritz-John conditions generalize to any form of objectives, convex or non-convex. As long as the system $L\left( x\right)$ is rank-deficient, i.e., $\det \left( {L\left( x\right) }\right) = 0$, we can conclude that the point $x$ in the feasible set is indeed a weak Pareto point of the system.
568
+
569
+ ### E.4 EXTENSION TO NON-SQUARE SYSTEMS
570
+
571
+ The matrix $L$ defined in Eq. 6 is given by:
572
+
573
+ $$
574
+ L = \left\lbrack \begin{matrix} \nabla F & \nabla G \\ \mathbf{0} & G \end{matrix}\right\rbrack
575
+ $$
576
+
577
+ To achieve $\det \left( L\right) = 0$ requires that either:
578
+
579
+ 1. $\nabla F\left( x\right) = 0$ : at least one objective function has reached its optimum (local/global minima/maxima under a min/max setting); and/or
580
+
581
+ 2. $G\left( x\right) = 0$ : at least one constraint is active.
582
+
583
+ This criterion applies only to square systems. However, for practical problems the system might become non-square, in which case we need to satisfy $\det \left( {{L}^{T}L}\right) = 0$ following Eq. 7. One might think this is a different optimization problem; however, satisfying $\det \left( {{L}^{T}L}\right) = 0$ provides the same mathematical justification, as the following derivation shows.
584
+
585
+ $$
586
+ {L}^{T}L = \left\lbrack \begin{matrix} \nabla {F}^{T} & \mathbf{0} \\ \nabla {G}^{T} & {G}^{T} \end{matrix}\right\rbrack \left\lbrack \begin{matrix} \nabla F & \nabla G \\ \mathbf{0} & G \end{matrix}\right\rbrack
587
+ $$
588
+
589
+ $$
590
+ = \left\lbrack \begin{matrix} \nabla {F}^{T}\nabla F & \nabla {F}^{T}\nabla G \\ \nabla {G}^{T}\nabla F & \nabla {G}^{T}\nabla G + {G}^{T}G \end{matrix}\right\rbrack \tag{10}
591
+ $$
592
+
593
+ We now observe Eq. 10 for the two cases prescribed above and see if $\det \left( {{L}^{T}L}\right)$ evaluates to zero or not. For Case 1, where $\nabla F = 0$ , Eq. 10 reduces to:
594
+
595
+ $$
596
+ {L}^{T}L = \left\lbrack \begin{matrix} \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \nabla {G}^{T}\nabla G + {G}^{T}G \end{matrix}\right\rbrack
597
+ $$
598
+
599
+ which is low-rank since its first row block is zero. For Case 2, where $G = 0$ , Eq. 10 reduces to:
600
+
601
+ $$
602
+ {L}^{T}L = \left\lbrack \begin{matrix} \nabla {F}^{T}\nabla F & \nabla {F}^{T}\nabla G \\ \nabla {G}^{T}\nabla F & \nabla {G}^{T}\nabla G \end{matrix}\right\rbrack
603
+ $$
604
+
605
+ $$
606
+ = \left\lbrack \begin{matrix} \nabla {F}^{T} \\ \nabla {G}^{T} \end{matrix}\right\rbrack \left\lbrack \begin{array}{ll} \nabla F & \nabla G \end{array}\right\rbrack
607
+ $$
608
+
609
+ which is low-rank again, being the Gram matrix of $\left\lbrack \begin{array}{ll} \nabla F & \nabla G \end{array}\right\rbrack$, whose rank equals that of the rank-deficient $L$ (the bottom block rows of $L$ vanish when $G = 0$). Hence satisfying $\det \left( L\right) = 0$ is equivalent to satisfying $\det \left( {{L}^{T}L}\right) = 0$ .
610
+
611
+ ### E.5 EFFECT OF TRADE-OFF ON OPTIMALITY
612
+
613
+ Let us consider a general gradient matrix $L$ now.
614
+
615
+ Case 1: If the gradient matrix $L$ is full rank, then $\alpha = \mathbf{0}$ (the vector is identically zero) is the only solution.
616
+
617
+ Case 2: If the gradient matrix $L$ has rank deficiency $q = 1$ (one rank deficient), then one of the ${\alpha }_{i}$'s can be chosen arbitrarily, i.e., exactly one equation is missing. A simplex criterion $\mathop{\sum }\limits_{i}{\alpha }_{i} = 1$ then supplies this arbitrary choice. Note that one could also choose $\mathop{\sum }\limits_{i}{\alpha }_{i}^{2} =$ constant, or any other such normalization, to make the ${\alpha }_{i}$ values uniquely determinable. For the convex objectives ${f}_{1}$ and ${f}_{2}$ described in the beginning, the reader can easily check that the $L$ matrix is exactly one rank deficient for all $\mathbf{x} \in {\mathbb{R}}^{2}$ .
618
+
619
+ Case 3: Here comes the trouble: if the matrix $L$ has rank deficiency $q > 1$ (more than one rank deficient), then more than one of the ${\alpha }_{i}$'s can be chosen arbitrarily. The simplex criterion is no longer sufficient to determine a unique $\alpha$ vector. This is the most general case for an arbitrary number of non-convex objective functions ${f}_{1},\cdots ,{f}_{k}$ . Finding the Pareto optimal front would first require us to resolve the rank of $L$; without this, an adaptive update on ${\alpha }_{i}$ has no bearing whatsoever.
620
+
621
+ Here the Fritz-John necessary condition, i.e., $\det \left( L\right) = 0$, is the most general way to state rank deficiency of $L$, irrespective of whether the matrix is one or more than one rank deficient.
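A toy numerical illustration of Case 3 (our own construction, not one of the paper's benchmarks): for a gradient matrix with rank deficiency $q = 2$, the null space of $L$ is two-dimensional, so a single scalar normalization such as the simplex constraint cannot pin down $\alpha$ uniquely.

```python
import numpy as np

# A 3-objective gradient matrix that is two ranks deficient:
# every row is a multiple of the first, so rank(L) = 1.
L = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [3.0, 6.0, 9.0]])

null_dim = L.shape[1] - np.linalg.matrix_rank(L)
assert null_dim == 2   # two alpha components remain free

# Any vector in the 2-D null space solves L @ alpha = 0, so the
# simplex constraint sum(alpha) = 1 leaves a 1-parameter family
# of admissible trade-offs rather than a unique one.
```

Detecting this via $\det(L) = 0$ (or $\det(L^{T}L) = 0$) covers both $q = 1$ and $q > 1$ uniformly, which is the point of the Fritz-John formulation.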
622
+
623
+ ## F EXPERIMENTAL SETUP DETAILS
624
+
625
+ Experimental Setup. We use an Nvidia 2060 RTX Super 8GB GPU, an Intel Core i7-9700F 3.0GHz 8-core CPU, and 16GB DDR4 memory for all experiments. Keras [Chollet, 2015] is used on a TensorFlow 2.0 backend with Python 3.7 to train the SUHNPF networks and evaluate the MTL solvers. For optimization, we use AdaMax [Kingma and Ba, 2014] with learning rate lr = 0.001.
626
+
627
+ SUHNPF Setup. Each training step runs for 2 epochs, with 50 steps per epoch. Thus, if the network takes $I$ iterations to converge, then the effective number of epochs is ${2I}$ . For computing the gradient of the Fritz-John matrix w.r.t. the input variables $x$, we use TensorFlow's GradientTape ${}^{1}$, which allows us to scale the computation of the gradient $\nabla \det$ to arbitrarily large dimensions of the variable $x$ . To compute the gradient update on $\mathcal{P}1$, we use a learning rate of $\eta = {0.01}$ .
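As a stand-in for the GradientTape autodiff computation (which we do not reproduce here), a finite-difference sketch of $\nabla \det(L(x))$ for the two quadratic objectives of Appendix E.1, where $\det(L) = 8(x_1 - x_2)$:

```python
import numpy as np

def det_L(x):
    # L has columns grad f1 = 2(x - 1) and grad f2 = 2(x + 1)
    L = np.column_stack([2.0 * (x - 1.0), 2.0 * (x + 1.0)])
    return np.linalg.det(L)

def grad_det(x, h=1e-6):
    # Central finite differences in place of autodiff
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (det_L(x + e) - det_L(x - e)) / (2.0 * h)
    return g

# det(L) = 8 * (x1 - x2) here, so the gradient is [8, -8] everywhere:
g = grad_det(np.array([0.5, -0.2]))
assert np.allclose(g, [8.0, -8.0], atol=1e-4)
```

In the actual implementation this gradient is obtained by automatic differentiation, which scales far better than finite differences as the dimension of $x$ grows.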
628
+
629
+ MTL Setup. Source code for the LS, MOOMTL, PMTL and EPO solvers uses EPO's repository ${}^{2}$, while the ${\mathrm{{EPSE}}}^{3}$ and ${\mathrm{{PHN}}}^{4}$ codes are taken from their individual repositories.
630
+
631
+ ---
632
+
633
+ https://www.tensorflow.org/api_docs/python/tf/GradientTape
634
+
635
+ https://github.com/dbmptr/EPOSearch
636
+
637
+ https://github.com/mit-gfx/ContinuousParetoMTL
638
+
639
+ https://github.com/AvivNavon/pareto-hypernetworks
640
+
641
+ ---
642
+
643
+ ## G GENERAL DISCUSSION
644
+
645
+ Handling Non-Convex forms: The Pareto optimal solution set is a collection of saddle points [Van Rooyen et al., 1994, Ehrgott and Wiecek, 2005] of an MOO problem, wherein no objective can be further improved without penalizing at least one of the other objectives. This entails min-max optimization: minimizing objectives (such as loss functions) while simultaneously maximizing trade-offs between them. Although prior works [Sener and Koltun, 2018, Lin et al., 2019, Mahapatra and Rajan, 2020] have asserted that Karush-Kuhn-Tucker (KKT) conditions [Boyd et al., 2004] in this min-max setting ensure that MTL methods find (correct) Pareto optimal solutions, it is known that KKT conditions hold true only for convex cases. Gobbi et al. [2015] further show that KKT-based criteria can give Pareto solutions only under a fully convex setting of objectives and constraints.
646
+
647
+ Remark. The number of trade-off values that can be specified arbitrarily (by the user) depends on the rank of the matrix $L$, and is often not known a priori for a given stationary point ${x}^{ * }$ . Furthermore, a stationary point ${x}^{ * }$ need not exist for given trade-off values ${\alpha }_{i}$ .
648
+
649
+ Remark. Current MTL methods with a simplex constraint on trade-off values assume that for $k$ objectives the matrix ${L}^{T}L$ is only one rank deficient, i.e., the Pareto manifold is $\left( {k - 1}\right)$-dimensional. These methods do not generalize to Pareto manifolds of dimension $\leq k - 2$, i.e., when the matrix ${L}^{T}L$ is more than one rank deficient.
650
+
651
+ As shown in Appendix E.5, there can be an MOO system with $k$ objectives where the Pareto manifold dimension is strictly less than $k - 1$ . Breaking up the $k$-dimensional functional space into rays and cones is then ineffective, because it looks for Pareto candidates in sectors where they do not exist. More precisely, when ${L\alpha } = 0$ and $L$ is more than one rank deficient, the trade-off values $\alpha$ have infinitely many solutions that satisfy the simplex constraints; hence, for a practical solver, it becomes hard to stabilize the dual problem used in MTL methods.
652
+
653
+ Evaluation on Benchmarks. Because the Pareto solution is often unknown for real MOO problems, OR works have advocated that any proposed Pareto solver should first be tested on synthetic MOO problems with known analytic solutions. This permits controlled experiments that vary MOO problem difficulty (e.g., non-convexity in the variable and function domains, presence of constraints, etc.) in order to assess capabilities and measure true accuracy against a known front. Ideally, studies should evaluate against synthetic benchmark problems that vary in difficulty; there is sometimes ambiguity in referring to an MOO problem as non-convex without clarifying the specific non-convex aspects, and difficulty can vary greatly depending on whether non-convexity occurs in the objectives, the constraints, or the front itself.
654
+
655
+ Remark. Non-convex objectives do not necessarily yield a non-convex Pareto front in the functional domain.
656
+
657
+ One typical example shown in MTL works is the double inverted Gaussian benchmark, where it is stated that although the functions are pseudo-convex, the Pareto front is non-convex. However, any gradient solver is trying to find a Pareto point (i.e., a stationary point w.r.t. the objectives) of $S\left( x\right)$ in Eq. 9 for a specific $\alpha$ . As a consequence, the form of $S\left( x\right)$ (convex or non-convex) decides where the gradient ascent/descent stabilizes. The Pareto front in the functional domain is just a post-hoc visualization of the collection of Pareto points.
658
+
659
+ Refer to Case I, where one of the objectives is non-convex and the Pareto front in the functional domain is non-convex. However, in Case II, although both functions are non-convex, the Pareto front in the functional domain is still convex. Although the literature has not characterized the relation between the (non-)convexity of the functions and the (non-)convexity of the Pareto functional front (excluding strictly convex cases), it is an interesting direction to pursue as part of our future work.
660
+
661
+ Termination of Solvers. An iterative solver should define termination criteria based on an error tolerance being satisfied and/or inability to improve further. It is also important that a solver reports inability to converge (i.e., to achieve the termination criteria/error tolerance) within the specified maximum iterations. While both HNPF (used by SUHNPF) and EPO (used by PHN) define such error-tolerance criteria for termination, inspection of the source code for the MOOMTL [Sener and Koltun, 2018], PMTL [Lin et al., 2019], and EPSE [Ma et al., 2020] iterative solvers (at the time of our submission) shows support only for running a fixed number of iterations, without other termination criteria. See the following source code links for MOOMTL ${}^{5}$, PMTL ${}^{6}$, and EPSE ${}^{7}$ .
662
+
663
+ ## H ADDITIONAL BENCHMARKS
664
+
665
+ We consider two additional synthetic benchmark cases considered by Navon et al. [2021]. We demonstrate that SUHNPF works well on these cases, since the considered functions are either convex or monotone within the feasible domain.
666
+
667
+ Case A:
668
+
669
+ $$
670
+ {f}_{1}\left( {{x}_{1},{x}_{2}}\right) = \left( {\left( {{x}_{1} - 1}\right) {x}_{2}^{2} + 1}\right) /3,{f}_{2}\left( {{x}_{1},{x}_{2}}\right) = {x}_{2}
671
+ $$
672
+
673
+ $$
674
+ \text{s.t.}{g}_{1},{g}_{2} : 0 \leq {x}_{1},{x}_{2} \leq 1 \tag{11}
675
+ $$
676
+
677
+ Case B:
678
+
679
+ $$
680
+ {f}_{1}\left( {{x}_{1},{x}_{2}}\right) = {x}_{1},{f}_{2}\left( {{x}_{1},{x}_{2}}\right) = 1 - {\left( {x}_{1}/\left( 1 + 9{x}_{2}\right) \right) }^{2}
681
+ $$
682
+
683
+ $$
684
+ \text{s.t.}{g}_{1},{g}_{2} : 0 \leq {x}_{1},{x}_{2} \leq 1 \tag{12}
685
+ $$
686
+
687
+ Please note that although in PHN [Navon et al., 2021] the form is ${f}_{2} = {x}_{1}$ for Eq. 11, we believe it is a typo w.r.t. the original work by Evtushenko and Posypkin [2013], where this case was proposed, as the reported Pareto front in their work is achieved only for ${f}_{2} = {x}_{2}$ . We therefore proceed with this updated form.
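For reference, a small sketch (ours, assuming both objectives are minimized) of the analytic weak Pareto front of Case B, which lies at $x_2 = 0$ and traces $f_2 = 1 - f_1^2$:

```python
import numpy as np

# Case B objectives: f1 = x1, f2 = 1 - (x1 / (1 + 9*x2))^2.
# Minimizing f2 pushes x2 to its lower bound 0, so the weak
# Pareto front is the curve x2 = 0, f1 = x1 in [0, 1].
x1 = np.linspace(0.0, 1.0, 101)
x2 = 0.0

f1 = x1
f2 = 1.0 - (x1 / (1.0 + 9.0 * x2)) ** 2

# On the front, f2 = 1 - f1^2 (a non-convex front shape):
assert np.allclose(f2, 1.0 - f1 ** 2)
```

This closed form is what the solver's recovered candidate set can be compared against when measuring accuracy on this benchmark.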
688
+
689
+ ![0196399d-7aa5-769b-a7a6-0ffeac477692_13_959_1544_567_247_0.jpg](images/0196399d-7aa5-769b-a7a6-0ffeac477692_13_959_1544_567_247_0.jpg)
690
+
691
+ Figure 11: Functional Domain of cases from PHN
692
+
693
+ ## I CONVERGENCE PLOTS OF SUHNPF
694
+
695
+ We show the convergence of Pareto candidates towards the weak Pareto front over iterations for both the functional and variable domains for the benchmark Case I considered in our work.
696
+
697
+ ---
698
+
699
+ https://github.com/dbmptr/EPOSearch/blob/master/toy_experiments/solvers/moo_mtl.py
700
+
701
+ ${}^{6}$ https://github.com/dbmptr/EPOSearch/blob/master/toy_experiments/solvers/pmtl.py
702
+
703
+ https://github.com/mit-gfx/ContinuousParetoMTL/blob/master/pareto/optim/hvp_solver.py
704
+
705
+ ---
706
+
707
+ ![0196399d-7aa5-769b-a7a6-0ffeac477692_14_159_179_681_833_0.jpg](images/0196399d-7aa5-769b-a7a6-0ffeac477692_14_159_179_681_833_0.jpg)
708
+
709
+ Figure 13: Case I: Functional domain corresponding to Figure 1. SUHNPF Pareto candidates $\mathcal{P}1$ (red dots) converge in 5 iterations.
710
+
711
+ ![0196399d-7aa5-769b-a7a6-0ffeac477692_14_288_1079_429_891_0.jpg](images/0196399d-7aa5-769b-a7a6-0ffeac477692_14_288_1079_429_891_0.jpg)
712
+
713
+ Figure 14: Loss profile for cases. Note that since the error threshold $\epsilon$ was set to ${10}^{-4}$ for the benchmark cases, the algorithm terminates once the Binary Cross Entropy (blue) loss falls below the threshold (its value at epoch 10 is $\leq {10}^{-4}$). We also show the Mean Squared Error (dashed red) between the Pareto candidate set $\mathcal{P}1$ and the analytical solution at each iteration. Because each iteration takes two epochs, this leads to the "staircase" MSE shown.
714
+
715
+ ![0196399d-7aa5-769b-a7a6-0ffeac477692_14_897_180_695_842_0.jpg](images/0196399d-7aa5-769b-a7a6-0ffeac477692_14_897_180_695_842_0.jpg)
716
+
717
+ Figure 12: Case I: Variable domain. The gray line shows the true analytic solution $\left( {0 \leq {x}_{1} \leq 1,{x}_{2} = 0}\right)$ . SUHNPF Pareto candidates $\mathcal{P}1$ (red dots) converge in 5 iterations.
718
+
719
+ ## J LOSS PROFILES
720
+
721
+ Fig. 14 shows the loss profiles for the benchmark cases I - III. SUHNPF converged in 5 iterations, with each iteration running for 2 epochs, using an error tolerance of ${10}^{-4}$ for both the outer gradient descent loop ${\epsilon }_{\text{outer }}$ and the inner gradient descent loop ${\epsilon }_{\text{inner }}$ . Since the last layer of the SUHNPF network classifies points as being weak Pareto or not, the loss enforced is Binary Cross Entropy (blue line). We also report the Mean Squared Error (MSE, dashed red line) between the current iterate of the point set $\mathcal{P}1$ and the true analytical solution manifold. Alg. 1 updates the Pareto candidate set in the outer descent loop; since the inner descent loop, which measures the training loss, runs for 2 epochs per update of $\mathcal{P}1$, the MSE is measured only once per iteration. This results in the staircase nature of the MSE loss.
UAI/UAI 2022/UAI 2022 Conference/BcIfJuIscx5/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,467 @@
1
+ § LEARNING A NEURAL PARETO MANIFOLD EXTRACTOR WITH CONSTRAINTS
2
+
3
+ § ABSTRACT
4
+
5
+ Multi-objective optimization (MOO) problems require balancing competing objectives, often under constraints. The Pareto optimal solution set defines all possible optimal trade-offs over such objectives. In this work, we present a novel method for Pareto-front learning: inducing the full Pareto manifold at train-time so users can pick any desired optimal trade-off point at run-time. Our key insight is to exploit the Fritz-John conditions for a novel guided double gradient descent strategy. Evaluation on synthetic benchmark problems allows us to vary MOO problem difficulty in controlled fashion and measure accuracy vs. known analytic solutions. We further test scalability and generalization in learning optimal neural model parameterizations for Multi-Task Learning (MTL) on image classification. Results show consistent improvement in accuracy and efficiency over prior MTL methods as well as techniques from operations research.
6
+
7
+ § 1 INTRODUCTION
8
+
9
+ Multi-Objective Optimization (MOO) problems require balancing multiple objectives, often competing with one another under further constraints [Van Rooyen et al., 1994, Ehrgott and Wiecek, 2005]. A Pareto optimal solution [Pareto, 1906] defines the set of all saddle points [Ehrgott and Wiecek, 2005] such that no objective can be further improved without penalizing at least one other objective.
10
+
11
+ As operational systems today increasingly seek to balance competing objectives, research on Pareto optimal learning has quickly grown across tasks such as fair classification [Balashankar et al., 2019, Martinez et al., 2020], diversified ranking [Liu et al., 2019, Sacharidis, 2019], and recommendation [Xiao et al., 2017b, Azadjalal et al., 2017]. Many practical classification and recommendation problems have been shown to be non-convex [Hsieh et al., 2015]. A general Pareto solver should thus support optimization for both non-convex objectives and constraints.
12
+
13
+ Because MOO problems typically lack a single global optimum, one must choose among optimal solutions by selecting a trade-off over competing objectives. Ideally this choice could be deferred to run-time, so that each user could choose whichever trade-off they prefer. Unfortunately, prior Pareto solvers have typically required training a separate model to find the Pareto solution point for each desired trade-off.
14
+
15
+ To address this, recent work has proposed Pareto front learning (PFL): inducing the full Pareto manifold in training so that users can quickly select any desired optimal trade-off point at run-time [Navon et al., 2021, Lin et al., 2021, Singh et al., 2021]. These works learn a neural model manifold to map any desired trade-off over objectives to a corresponding Pareto point. As with other supervised learning, inducing an accurate prediction model requires high quality training data. Pareto points used in model training should be accurate.
16
+
17
+ In this work, we devise an efficient Pareto search procedure for Singh et al. [2021]'s HNPF model, so that we may benefit from its correctness guarantees in identifying true Pareto points for PFL training. While HNPF supports non-convex MOO with constraints and bounded error, it suffers from a lack of scalability with increasing variable space. Our innovation is a novel, guided double gradient descent strategy, updating the candidate point set in the outer descent loop and the manifold estimators in the inner descent loop.
18
+
19
+ Our evaluation spans both synthetic benchmarks and multitask learning (MTL) problems. Benchmark problems allow us to conduct controlled experiments varying MOO problem complexity (e.g., the presence of constraints and/or convexity in variable or function domains). Analytic solutions to benchmark problems allow us to measure the true accuracy of model predictions, something which is often difficult or impossible on real-world problems. Additional evaluation on a set of MTL problems in image classification enable us to further test scalability and generalization in learning high dimensional, Pareto optimal neural models.
20
+
21
+ Results across synthetic benchmarks and MTL problems show clear, consistent advantages of SUHNPF in terms of capability (handling non-convexity and constraints), denser coverage and higher accuracy in recovering the true Pareto front, and greater efficiency (time and space). Beyond empirical findings, our conceptual framing and review of prior work also serves to further bridge complementary lines of MTL and operations research work. For reproducibility, we will share our sourcecode and data upon publication.
22
+
23
+ § 2 DEFINITIONS
24
+
25
+ We adopt Pareto definitions from Marler and Arora [2004]. A general MOO problem can be formulated as follows:
26
+
27
+ optimize $\;F\left( x\right) = \left( {{f}_{1}\left( x\right) ,\ldots ,{f}_{k}\left( x\right) }\right)$(1)
28
+
29
+ s.t. $x \in S = \left\{ {x \in {\mathbb{R}}^{n} \mid G\left( x\right) = \left( {{g}_{1}\left( x\right) ,\ldots ,{g}_{m}\left( x\right) }\right) \leq 0}\right\}$
30
+
31
+ with $n$ variables $\left( {{x}_{1},\ldots ,{x}_{n}}\right) ,k$ objectives $\left( {{f}_{1},\ldots ,{f}_{k}}\right)$ , and $m$ constraints $\left( {{g}_{1},\ldots ,{g}_{m}}\right)$ . Here, $S$ is the feasible set, i.e., the set of input values $x$ that satisfy the constraints $G\left( x\right)$ . For a MOO problem optimizing $F\left( x\right)$ subject to $G\left( x\right)$ , the solution is usually a manifold as opposed to a single global optimum, therefore one must find the set of all points that satisfy the chosen definition for an optimum.
32
+
33
+ Strong Pareto Optimal: A point ${x}^{ * } \in S$ is strong Pareto optimal if no point in the feasible set exists that improves an objective without detriment to at least one other objective.
34
+
35
+ $$
36
+ \nexists {x}_{j} : {f}_{p}\left( {x}_{j}\right) \leq {f}_{p}\left( {x}^{ * }\right) ,\;\text{ for }\;p = 1,2,\ldots ,k
37
+ $$
38
+
39
+ $$
40
+ \exists l : {f}_{l}\left( {x}_{j}\right) < {f}_{l}\left( {x}^{ * }\right) \tag{2}
41
+ $$
42
+
43
+ Weak Pareto Optimal: A point ${\widetilde{x}}^{ * } \in S$ is weak Pareto optimal if no other point exists in the feasible set that improves all of the objectives simultaneously. This differs from strong Pareto optimality, where points might exist that improve at least one objective without detriment to the others.
44
+
45
+ $$
46
+ \nexists {x}_{j} : {f}_{p}\left( {x}_{j}\right) < {f}_{p}\left( {\widetilde{x}}^{ * }\right) ,\;\text{ for }p = 1,2,\ldots ,k \tag{3}
47
+ $$
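The two definitions above reduce to simple vectorized dominance checks; a minimal sketch (function names are ours):

```python
import numpy as np

def strongly_dominates(fa, fb):
    """fa improves on fb per Eq. (2): no worse in any objective,
    strictly better in at least one."""
    return bool(np.all(fa <= fb) and np.any(fa < fb))

def weakly_dominates(fa, fb):
    """fa improves on fb per Eq. (3): strictly better in every
    objective simultaneously."""
    return bool(np.all(fa < fb))

a = np.array([1.0, 2.0])
b = np.array([1.0, 3.0])
assert strongly_dominates(a, b)       # better in f2, equal in f1
assert not weakly_dominates(a, b)     # not strictly better in f1
```

A point is strong (resp. weak) Pareto optimal when no feasible point strongly (resp. weakly) dominates it.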
48
+
49
+ § 3 RELATED WORK
50
+
51
+ Linear Scalarization (LS). A variety of work has adopted LS to find Pareto points [Xiao et al., 2017b, Lin et al., 2019, Milojkovic et al., 2019]. For example, the Weighted Sum Method (WSM) [Cohon, 2004] is an LS approach that converts an MOO problem into a single-objective optimization (SOO) problem using a convex combination of objective functions and constraints. However, because Karush-Kuhn-Tucker (KKT) conditions are known to hold true only for convex cases [Boyd et al., 2004], LS solutions are guaranteed to be Pareto optimal only under a fully convex setting of objectives and constraints, as shown in Gobbi et al. [2015].
52
+
53
+ Operations Research (OR). A variety of OR methods support MOO problems with non-convex objectives and constraints, guaranteeing correctness within a user-specified error tolerance. Correctness has also been further verified by evaluation on synthetic MOO benchmark problems with known, analytic solutions. However, a key limitation of these methods is lack of scalability: they suffer from significant computational and run-time limitations as the variable dimension increases. Hence, they cannot be applied to optimizing neural model parameters for MOO problems.
54
+
55
+ Table 1: SUHNPF vs. existing Operations Research (OR) and Multi-Task Learning (MTL) methods. OR methods account for both objectives and constraints, produce Pareto points only, and are known to find true Pareto points for non-convex MOO problems. However, these methods do not scale to high-dimensional neural MOO problems. In contrast, MTL methods scale well but typically do not support constraints and can struggle with non-convexity.
56
+
57
+ | Type | Method | Finds Only Pareto Points | Handles Constraints | Scalable Neural MOO |
+ |---|---|---|---|---|
+ | Operations Research (OR) | NBI [1998] | ✓ | ✓ | ✘ |
+ | | mCHIM [2015] | ✓ | ✓ | ✘ |
+ | | PK [2016] | ✓ | ✓ | ✘ |
+ | | HNPF [2021] | ✓ | ✓ | ✘ |
+ | Multi-Task Learning (MTL) | MOOMTL [2018] | ✘ | ✘ | ✓ |
+ | | PMTL [2019] | ✘ | ✘ | ✓ |
+ | | EPO [2020] | ✘ | ✘ | ✓ |
+ | | EPSE [2020] | ✘ | ✘ | ✓ |
+ | | PHN [2021] | ✘ | ✘ | ✓ |
+ | Ours | SUHNPF | ✓ | ✓ | ✓ |
92
+
93
Examples include enhanced scalarization approaches such as NBI [Das and Dennis, 1998], mCHIM [Ghane-Kanafi and Khorram, 2015], and PK [Pirouz and Khorram, 2016]. NBI produces an evenly distributed set of Pareto points given an evenly distributed set of weights, using the concept of Convex Hull of Individual Minima (CHIM) to break down the boundary/hull into evenly spaced segments before tracing the weak Pareto points. mCHIM improves upon NBI via a quasi-normal procedure to update the aforementioned CHIM set iteratively, to obtain a strong Pareto set. PK uses a local $\epsilon$-scalarization based strategy that searches for the Pareto front using controllable step-lengths in a restricted search region, thereby accounting for non-convexity.

Multi-Task Learning (MTL). Recent MTL works have devised Pareto solvers for estimating high-dimensional neural models. MOOMTL [Sener and Koltun, 2018] effectively scales via a multi-gradient descent approach, but does not guarantee an even spread of solution points along the Pareto front. PMTL [Lin et al., 2019] addresses this spread issue by dividing the functional domain into equally spaced cones, but this increases computational complexity as the number of cones increases. EPO [Mahapatra and Rajan, 2020] extends preference rays along specified weights to find Pareto points evenly spread in the vicinity of the rays. EPSE [Ma et al., 2020] uses a combination of the Hessians of the functions and a Krylov subspace method to find Pareto solutions.

MTL methods rely upon KKT conditions to check for optimality, which assumes convexity (see the earlier LS discussion). While these methods seek an even distribution of Pareto points by dividing the functional space into evenly spaced cones or preference rays, our results on a non-convex benchmark problem clearly show an uneven point spread (Section 6.1). Moreover, most MTL methods are point-based solvers, meaning they must be run $P$ times to find $P$ points. This is too expensive for adjusting trade-off preferences at run-time.

Pareto front learning. PFL methods [Navon et al., 2021, Lin et al., 2021, Singh et al., 2021] induce the full Pareto manifold at train-time so that users can quickly select any desired optimal trade-off point at run-time. For example, a manifold model trained on $P$ Pareto points might then quickly produce any number of additional Pareto points via interpolation. Of course, quality training data is necessary to learn an accurate, supervised prediction model. The method used to generate the Pareto points for model training, and the accuracy of those points, are thus crucial to prediction accuracy.

Navon et al. [2021]'s PHN considers two ways to acquire Pareto training points: LS and EPO [2020]. Lin et al. [2021] use their PMTL [2019] method to identify Pareto points for training. Singh et al. [2021]'s HNPF uses the Fritz-John conditions (FJC) [Maruşciac, 1982] to identify Pareto points.

Like other OR methods, HNPF provides a theoretical guarantee of Pareto front accuracy within a user-specified error tolerance. In evaluation on canonical OR benchmark problems, HNPF was shown to recover known Pareto fronts across various non-convex MOO problems while also being more efficient in finding Pareto points than NBI [1998], mCHIM [2015], and PK [2016]. However, like other OR methods, HNPF cannot scale to learn optimal high-dimensional neural model weights for MOO problems.

Ha et al. [2016] proposed hypernetworks, which train one neural model to generate effective weights for a second, target model. Navon et al. [2021] and Lin et al. [2021] apply this approach to learn a manifold mapping MOO solutions to different target model weights, enabling the target model to achieve the desired Pareto trade-off for the MOO problem. However, HNPF cannot be similarly applied to MTL problems due to its lack of scalability.

§ 4 PRELIMINARIES

Fritz John Conditions (FJC). Let the objective and constraint functions in Eq. (1) be once differentiable at a decision vector $x^* \in \mathcal{S}$. The Fritz-John [Levi and Gobbi, 2006] necessary condition for $x^*$ to be weak Pareto optimal is that there exist vectors $0 \leq \lambda \in \mathbb{R}^k$, $0 \leq \mu \in \mathbb{R}^m$ with $(\lambda, \mu) \neq (0, 0)$ (not identically zero) s.t. the following holds:

$$
\sum_{i=1}^{k} \lambda_i \nabla f_i(x^*) + \sum_{j=1}^{m} \mu_j \nabla g_j(x^*) = 0 \tag{4}
$$

$$
\mu_j g_j(x^*) = 0, \quad \forall j = 1, \ldots, m
$$

Gobbi et al. [2015] present an $L$ matrix form of FJC:

$$
L = \begin{bmatrix} \nabla F & \nabla G \\ \mathbf{0} & G \end{bmatrix} \quad \left[ (n + m) \times (k + m) \right] \tag{5}
$$

$$
\nabla F_{n \times k} = \left[ \nabla f_1, \ldots, \nabla f_k \right]
$$

$$
\nabla G_{n \times m} = \left[ \nabla g_1, \ldots, \nabla g_m \right]
$$

$$
G_{m \times m} = \operatorname{diag}(g_1, \ldots, g_m)
$$

comprising the gradients of the functions and constraints. The matrix equivalent of FJC for $x^*$ to be Pareto optimal is to show the existence of $\delta = (\lambda, \mu) \in \mathbb{R}^{k+m}$ (with $\delta$ not identically zero) in Eq. (4) such that:

$$
L \cdot \delta = 0 \quad \text{s.t.} \quad L = L(x^*), \; \delta \geq 0, \; \delta \neq 0 \tag{6}
$$

Therefore, the condition for a non-trivial solution of Eq. (6) is that $L$ be rank-deficient, i.e., that $L^T L$ be singular:

$$
\det(L^T L) = 0 \tag{7}
$$

Remark. If the $f_i$s and $g_j$s are continuous and once differentiable, then the set of weak Pareto optimal points is $x^* = \{ x \mid \det(L(x)^T L(x)) = 0 \}$, $\delta \geq 0$, for a non-square matrix $L(x)$, which is equivalent to $x^* = \{ x \mid \det(L(x)) = 0 \}$, $\delta \geq 0$, for a square matrix $L(x)$. See an illustration in Appendix E for the unconstrained setting.

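The Remark can be checked numerically on a small unconstrained example (our illustration, not taken from the paper): for $f_1 = x_1^2 + x_2^2$ and $f_2 = (x_1 - 1)^2 + x_2^2$, the weak Pareto candidates lie on the line $x_2 = 0$, exactly where $\det(L^T L)$ vanishes.

```python
import numpy as np

# Two unconstrained objectives; the columns of L are their gradients.
def L(x1, x2):
    return np.array([[2 * x1, 2 * (x1 - 1)],
                     [2 * x2, 2 * x2]])

def fjc_det(x1, x2):
    M = L(x1, x2)
    return np.linalg.det(M.T @ M)  # vanishes on weak Pareto candidates

# det(L^T L) is zero on the x2 = 0 line containing the Pareto set,
# and strictly positive away from it.
print(fjc_det(0.3, 0.0))  # ~0
print(fjc_det(0.3, 0.5))  # > 0
```

Here $\det(L^T L) = \det(L)^2 = 16 x_2^2$, so the discriminator isolates the analytic solution directly.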
Hybrid Neural Pareto Front (HNPF). Like other Pareto front learning (PFL) methods, HNPF [Singh et al., 2021] learns a neural Pareto manifold from training data. With HNPF, Pareto points for use as training data are acquired via the Fritz-John conditions. In particular, once a given data point from the input variable domain is mapped to the output function domain (via the objective functions), the FJC are tested to determine Pareto optimality.

HNPF's neural network first identifies weak Pareto points via feed-forward layers to smoothly approximate the weak Pareto optimal solution manifold $M\left( {X}^{ * }\right)$ as $\widetilde{M}\left( {\widetilde{X},\Phi }\right)$. The last layer of the network has two neurons with softmax activation for binary classification of Pareto vs. non-Pareto points, distinguishing sub-optimal points from the weak Pareto points. The network loss is representation driven, since the Fritz John discriminator (Eq. (7)), described by the objective functions and constraints, explicitly classifies each input data point ${X}_{i}$ as being weak Pareto or not. After identifying weak Pareto points, HNPF uses an efficient Pareto filter to find the subset of non-dominated points.

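The final filtering step can be illustrated with a minimal non-dominated filter (a sketch of ours; HNPF's actual filter implementation is described in Singh et al. [2021]):

```python
import numpy as np

def pareto_filter(F):
    """Return indices of non-dominated rows of F (joint minimization)."""
    F = np.asarray(F, dtype=float)
    keep = []
    for i, f in enumerate(F):
        # f is dominated if some other point is <= in every objective
        # and strictly < in at least one.
        dominated = np.any(np.all(F <= f, axis=1) & np.any(F < f, axis=1))
        if not dominated:
            keep.append(i)
    return keep

print(pareto_filter([[1, 2], [2, 1], [2, 2], [3, 3]]))  # [0, 1]
```

Points (2, 2) and (3, 3) are dominated by (1, 2), so only the first two points survive.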
HNPF's scalability bottleneck lies in how it samples variable domain points to test for Pareto optimality in model training. If there are any direct constraints on variable values, this naturally restricts the feasible domain for sampling. However, lacking any prior distribution on where to find Pareto optima, HNPF performs uniform random sampling in the variable domain to ensure broad coverage for locating optima. For small benchmark problems with known variable domains, this suffices. However, it is infeasible to apply this to find optimal model parameters for a neural MOO model.

§ 5 SCALABLE UNIDIRECTIONAL HNPF

To address HNPF's scalability bottleneck, we introduce SUHNPF, a scalable variant of HNPF for finding weak Pareto points with an arbitrary density and distribution of initial data points. This is achieved via a scalable unidirectional FJC-guided double-gradient descent algorithm that encompasses HNPF's neural manifold estimator. Given continuous differentiable loss functions, SUHNPF's guided double gradient descent strategy efficiently searches the variable domain to find Pareto optimal points in the function domain. This enables SUHNPF to learn an $\epsilon$-bounded approximation $\widetilde{M}\left( {\Theta }^{ * }\right)$ to the weak Pareto optimal manifold.

§ 5.1 FJC-GUIDED DOUBLE GRADIENT DESCENT

Constructing a classification manifold of Pareto vs. non-Pareto points requires a set of feasible points to represent both classes. Since the Pareto manifold is unknown a priori, feasible points are drawn from a random distribution (lacking an informed prior) to initialize both classes. We then refine the points in the Pareto class $\mathcal{P}1$ while holding the non-Pareto points $\mathcal{P}0$ constant.

We assume an equal-sized sample set of $P$ points for each class, which helps to address class imbalance in harsh cases. For benchmark problems where the feasible set over the variable domain is known, we randomly sample points over this feasible domain to initialize $\mathcal{P}1$ and $\mathcal{P}0$. Given these input points $x$, held constant for $\mathcal{P}0$ and used as initial seed values for $\mathcal{P}1$, Alg. 1 specifies our FJC-guided double-gradient descent algorithm. The algorithm iteratively updates $\mathcal{P}1$ towards the Pareto manifold via FJC-guided descent. The training dataset $D$ is the union $\mathcal{P}0 \cup \mathcal{P}1$. The algorithm iterates over Steps 5-9 until the error (err) falls below the user-specified error tolerance $({\epsilon}_{\text{outer}})$.

$$
err = \sum_{p \in \mathcal{P}1} \left( \det(L^T L) \right)^2 \tag{8}
$$

Algorithm 1 FJC-guided descent of variable domain

1: Input: Data $D = \mathcal{P}0 \cup \mathcal{P}1$ $\vartriangleright$ Training Data
2: Input: Functions $F$ and Constraints $G$
3: Input: Error tolerances ${\epsilon}_{\text{outer}}, {\epsilon}_{\text{inner}}$
4: while err $> {\epsilon}_{\text{outer}}$ do $\vartriangleright$ Run until convergence
5: &emsp; Train network using $D$ as data for $e$ epochs
6: &emsp; Compute current error err
7: &emsp; Compute ${\nabla}_{p} \det = \frac{\partial \det(L^T L)}{\partial p}, \forall p \in \mathcal{P}1$
8: &emsp; $\mathcal{P}1 \leftarrow \mathcal{P}1 - \eta \nabla \det$ $\vartriangleright$ Update points in $\mathcal{P}1$
9: &emsp; $D = \mathcal{P}0 \cup \mathcal{P}1$ $\vartriangleright$ Update Training Data
10: Output: Weak Pareto manifold $\widetilde{M}$

Eq. 8 in Alg. 1 ensures that all of the points in the Pareto set $(p \in \mathcal{P}1)$ are optimal once we converge to the desired error tolerance $\epsilon$. Hence, Step 7 computes gradients of $\det(L^T L)$ w.r.t. the variables at points $p \in \mathcal{P}1$ and creates an approximation of the $\nabla \det$ matrix. The training data $D$ is then updated with the new values of $\mathcal{P}1$. The output is an approximation $\widetilde{M}$ of the true weak Pareto manifold $M$ on the discrete dataset $D \subset X$. Note that in Step 8, we do not allow the point set $\mathcal{P}1$ to leave the feasible set $\mathcal{S}$: if an update step crosses the boundary of the feasible set, we project the point back onto the boundary.

Alg. 1 includes two separate gradient descent steps. The outer descent loop (Step 4-9) updates the candidate point set $\mathcal{P}1$ using the error measurement of ${err}$ through a squared loss in Eq. 8. The inner descent (Step 5) updates the parameters $\left( \Phi \right)$ of the neural net to closely approximate the Pareto manifold $M\left( X\right)$ as $\widetilde{M}\left( {X,\Phi }\right)$. This is done using the Binary Cross Entropy Loss on $\left( {\det \left( {L{\left( X\right) }^{T}L\left( X\right) }\right) ,\widetilde{M}\left( X\right) }\right)$, and reaches convergence only when ${BCE} \leq {\epsilon }_{\text{inner}}$. The unidirectional property of this double-gradient update lets the outer loop influence the inner loop but not vice-versa.

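The outer FJC-guided descent can be sketched on a small unconstrained instance (our illustration: the toy objectives $f_1 = x_1^2 + x_2^2$, $f_2 = (x_1 - 1)^2 + x_2^2$ have weak Pareto set $x_2 = 0$; the neural manifold fit of Step 5 is omitted here):

```python
import numpy as np

# Toy bi-objective: f1 = x1^2 + x2^2, f2 = (x1 - 1)^2 + x2^2.
def det_LTL(p):
    x1, x2 = p
    L = np.array([[2 * x1, 2 * (x1 - 1)],
                  [2 * x2, 2 * x2]])
    return np.linalg.det(L.T @ L)

def fjc_descent(P1, eta=0.02, eps_outer=1e-4, max_iter=100):
    for _ in range(max_iter):
        err = sum(det_LTL(p) ** 2 for p in P1)  # Eq. (8)
        if err <= eps_outer:                    # outer convergence test
            break
        for p in P1:
            g = np.zeros_like(p)                # Step 7: grad of det(L^T L)
            h = 1e-6
            for i in range(len(p)):
                e = np.zeros_like(p)
                e[i] = h
                g[i] = (det_LTL(p + e) - det_LTL(p - e)) / (2 * h)
            p -= eta * g                        # Step 8: update P1
    return P1

rng = np.random.default_rng(0)
P1 = [rng.uniform([-1.0, -1.0], [2.0, 1.0]) for _ in range(10)]
P1 = fjc_descent(P1)
# converged candidates satisfy x2 ~ 0, the analytic weak Pareto set
```

Since $\det(L^T L) = 16 x_2^2$ here, the descent contracts each candidate's $x_2$ toward zero until Eq. (8) falls below the tolerance.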
Space Complexity Analysis. Alg. 1 maintains $D$ of size $P \times n$. The $L^T L$ and $\nabla \det$ matrices are of sizes $(k + m)^2$ and $n(k + m)$, respectively. The total memory is $O(n(k + m + P) + (k + m)^2)$, where $n$ is the dimension of the variable space, and the scale of $k, m, P$ varies w.r.t. the problem. Trade-off $\alpha$'s are computed by solving a linear system as post-processing. SUHNPF achieves better memory and runtime efficiency since it does not rely upon solving the primal and dual problems used in MTL methods (Appendix B).

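The trade-off post-processing step can be sketched as a null-space computation (our illustration for the unconstrained two-objective case, where $\delta$ reduces to $\lambda$):

```python
import numpy as np

def tradeoff_weights(L):
    """Recover the FJC multiplier delta (up to scale) as the null
    direction of L, then normalize into convex-combination weights."""
    _, _, Vt = np.linalg.svd(L)
    delta = Vt[-1]            # right singular vector of smallest sigma
    if delta.sum() < 0:       # orient the ray to be nonnegative
        delta = -delta
    return delta / delta.sum()

# At the Pareto point x = (0.3, 0) of f1 = x1^2 + x2^2, f2 = (x1-1)^2 + x2^2,
# the gradient matrix is rank-deficient:
L = np.array([[0.6, -1.4],
              [0.0, 0.0]])
print(tradeoff_weights(L))  # ~ [0.7, 0.3]
```

The null vector satisfies $L \delta = 0$ (Eq. (6)); normalizing it yields the trade-off $\alpha = (0.7, 0.3)$ at this point.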
§ 6 BENCHMARKING

Motivation. Lack of analytical solutions to real MOO problems makes it difficult to measure the true accuracy of any Pareto solver. Consequently, we follow the OR literature in advocating that the correctness of any proposed Pareto solver should first be tested on constructed benchmark problems with known analytic solutions. This is also consistent with broader ML community practice of first evaluating proposed methods across a range of simulated, controlled conditions to verify correctness, often yielding valuable insights into model behavior prior to evaluation on real data.

We consider three such benchmark problems (Cases I-III). These problems are non-convex in either the functional or variable domain, or due to constraints (Table 2). Note that whether or not the Pareto front itself is non-convex is not always the best indicator of benchmark difficulty. For example, even though both objectives are non-convex in Case II, the Pareto front is still convex. As we shall see, PHN [Navon et al., 2021] fails on Case II despite performing well on two benchmark problems in their own study having a non-convex front. In general, non-convexity can greatly challenge MTL approaches relying on KKT conditions in testing solutions for optimality (see Appendix G).

Table 2: Characterization of benchmark cases, including convexity (C) vs. non-convexity (NC) in variable and function domains.

| Case | Dim | Variable Domain | Function Domain | Includes Constraints | OR Methods | MTL Methods | SUHNPF |
| --- | --- | --- | --- | --- | --- | --- | --- |
| I | 2 | Linear | NC | No | Sparse, Slow | Sparse, Fast | Dense, Fast |
| II | 30 | NC | C | No | Sparse, Slow | Fail | Dense, Fast |
| III | 2 | NC | NC | Yes | Sparse, Slow | Fail | Dense, Fast |

Experimental Setup. For each of Cases I-III, each method is tasked with finding $P = 50$ Pareto points. OR methods search until any $P$ Pareto points are found. MTL methods divide the functional search quadrant into cones/rays, seeking one Pareto point per split. Manifold methods (PHN, HNPF, and SUHNPF) search for $P$ Pareto points in order to learn the manifold. Ideally, each method should identify an even spread (i.e., broad coverage) of points across the true Pareto front (shown in grey in each figure) in order to faithfully approximate it. We report the total number of iterative steps (i.e., evaluations) taken by each solver to produce the desired number of Pareto points. SUHNPF starts with $P$ random candidates that are progressively refined via its guided, double gradient descent strategy. Following HNPF [Singh et al., 2021], we adopt the same error tolerance $10^{-4}$ for both ${\epsilon}_{\text{outer}}$ and ${\epsilon}_{\text{inner}}$. Any point $x$ that satisfies $\left| \det(L(x)^T L(x)) \right| \leq {\epsilon}_{\text{inner}}$ is thus classified as being Pareto (exact zero is often impossible given finite machine precision). Source code for the LS, MOOMTL, PMTL, and EPO solvers is taken from EPO's repository, while the original source code released for EPSE and PHN is used for each of them (see Appendix F). Based on Navon et al. [2021]'s findings, we evaluate the more accurate PHN variant, PHN-EPO, which we refer to simply as PHN.

Due to key differences between OR and MTL methods, results for each group are presented separately. First, OR methods not only support the full range of non-convex conditions across Cases I-III, but provide error tolerance parameters to guarantee correctness (and our experiments confirm this). Consequently, we report only the efficiency of OR methods in Table 3. In contrast, MTL methods produced variable accuracy on Case I and failed entirely on Cases II-III (as shall be discussed). Consequently, Table 4 reports accuracy and efficiency of MTL methods for Case I only.

Appendix F discusses the experimental setup, Appendix I has convergence details, and Appendix J has loss profiles.

§ 6.1 CASE I: GHANE-KANAFI AND KHORRAM [2015]

$$
f_1(x_1, x_2) = x_1, \quad f_2(x_1, x_2) = 1 + x_2^2 - x_1 - 0.1 \sin(3\pi x_1)
$$

$$
\text{s.t. } g_1 : 0 \leq x_1 \leq 1, \quad g_2 : -2 \leq x_2 \leq 2
$$

The analytical Pareto solution to this joint minimization problem is $M : 0 \leq x_1 \leq 1, x_2 = 0$. In Fig. 1 we observe SUHNPF's randomly generated point set $\mathcal{P}1$ (red dots) converges towards the true manifold $M$ as a discrete approximation $\widetilde{M}$. Point set $\mathcal{P}0$ (blue dots) is held constant and serves as representatives for the (background) non-Pareto class. Iteration 5 is the last because the error falls below the user-specified $\epsilon$. The final weak Pareto set comprises all $|\mathcal{P}1| = P$ points, plus any $\mathcal{P}0$ points that happen to fall within the ${\epsilon}_{\text{outer}}$ threshold. Hence Alg. 1 ensures 100% Pareto point density in $\mathcal{P}1$, a vast improvement over HNPF [Singh et al., 2021], where only $\approx 2\%$ density was achieved. Fig. 2 shows functional domain convergence. SUHNPF achieves an even spread of points in the non-convex portion of the front.

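The analytic solution can be verified directly (our sketch): on the interior of the box constraints, $L$ reduces to $\nabla F$, whose determinant is $2 x_2$, so the FJC condition selects exactly $x_2 = 0$, along which the front traces $f_2 = 1 - f_1 - 0.1 \sin(3\pi f_1)$.

```python
import numpy as np

def det_L(x1, x2):
    # Columns of L are the gradients of f1 and f2 (interior of the box).
    grad_f1 = np.array([1.0, 0.0])
    grad_f2 = np.array([-1.0 - 0.3 * np.pi * np.cos(3 * np.pi * x1), 2.0 * x2])
    return np.linalg.det(np.column_stack([grad_f1, grad_f2]))

x1 = np.linspace(0.0, 1.0, 50)
front_f1 = x1
front_f2 = 1.0 - x1 - 0.1 * np.sin(3 * np.pi * x1)
# det(L) = 2*x2 vanishes identically along the analytic solution x2 = 0
print(max(abs(det_L(a, 0.0)) for a in x1))
```

The non-convex dips of the front come from the $\sin(3\pi f_1)$ term in `front_f2`, which is where KKT-based methods struggle below.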
[Figure 1 panels: (a) Iteration 0 (Start); (b) Iteration 5 (Converged)]

Figure 1: Case I: Variable domain. The gray line shows the true analytic solution $(0 \leq x_1 \leq 1)$. SUHNPF Pareto candidates $\mathcal{P}1$ (red dots) converge in 5 iterations. Non-Pareto candidates $\mathcal{P}0$ (blue dots) are held constant throughout the iterative sequence.

[Figure 2 panels: (a) Iteration 0 (Start); (b) Iteration 5 (Converged)]

Figure 2: Case I: Functional domain corresponding to Figure 1. SUHNPF Pareto candidates $\mathcal{P}1$ (red dots) converge in 5 iterations.

Fig. 3 presents results for Linear Scalarization (LS) and several MTL methods: MOOMTL, PMTL, EPO, EPSE, and PHN. LS successfully produces a number of points in the non-convex portions of the front, despite prior studies often asserting that LS cannot handle any non-convexity. Refer to Appendix G for analysis and justification, and to Appendix I for iterative convergence plots for Case I.

To check for optimality, MTL methods rely upon KKT conditions that implicitly assume convexity (see Section 3). The non-convex nature of ${f}_{2}$ is thus challenging for these KKT-based methods. For example, some methods seek an even distribution of Pareto points by breaking up the functional space into evenly spaced cones or preference rays for trade-off values $\alpha$. However, the uneven point spread seen on this non-convex benchmark illustrates limitations of the cone-based approach in handling non-convexity. We also clearly see non-Pareto points produced by some methods.

[Figure 3 panels: (a) Linear Scalarization (LS); (b) MOOMTL; (c) PMTL; (d) EPO; (e) EPSE; (f) PHN]

Figure 3: Case I: function domain for LS and MTL methods. No method produces all 50 of the requested Pareto points. PMTL, EPO and PHN also find non-Pareto points (circled in blue). Methods vary greatly in their coverage of points spanning the true front.

§ 6.2 CASE II: ZHANG ET AL. [2008]

$$
f_1(x) = x_1 + \frac{2}{|J_1|} \sum_{j \in J_1} y_j^2 \;, \quad f_2(x) = 1 - \sqrt{x_1} + \frac{2}{|J_2|} \sum_{j \in J_2} y_j^2
$$

$$
\text{s.t. } g_1, \ldots, g_{30} : 0 \leq x_1 \leq 1, \; -1 \leq x_j \leq 1, \; j = 2, \ldots, m
$$

$$
J_1 = \{ j \mid j \text{ is odd, } 2 \leq j \leq m \}, \quad J_2 = \{ j \mid j \text{ is even, } 2 \leq j \leq m \}
$$

$$
y_j = \begin{cases} x_j - \left[ 0.3 x_1^2 \cos\left( 24\pi x_1 + \frac{4j\pi}{m} \right) + 0.6 x_1 \right] \cos\left( 6\pi x_1 + \frac{j\pi}{m} \right) & j \in J_1 \\ x_j - \left[ 0.3 x_1^2 \cos\left( 24\pi x_1 + \frac{4j\pi}{m} \right) + 0.6 x_1 \right] \sin\left( 6\pi x_1 + \frac{j\pi}{m} \right) & j \in J_2 \end{cases}
$$

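A point on the Pareto set can be constructed directly (our sketch; we assume the benchmark's standard definition, in which the even-index $J_2$ branch uses $\sin$ where the odd-index $J_1$ branch uses $\cos$): setting every $y_j = 0$ yields $f_1 = x_1$ and $f_2 = 1 - \sqrt{x_1}$.

```python
import numpy as np

m = 30

def y_offset(j, x1):
    # Bracketed amplitude term shared by both branches of y_j.
    a = 0.3 * x1**2 * np.cos(24 * np.pi * x1 + 4 * j * np.pi / m) + 0.6 * x1
    trig = np.cos if j % 2 == 1 else np.sin  # J1: odd j, J2: even j
    return a * trig(6 * np.pi * x1 + j * np.pi / m)

def pareto_point(x1):
    # Choose x_j so that every y_j = 0 (the spiral trajectory in x_2..x_30).
    x = np.empty(m)
    x[0] = x1
    for j in range(2, m + 1):
        x[j - 1] = y_offset(j, x1)
    return x

def objectives(x):
    x1 = x[0]
    J1 = [j for j in range(2, m + 1) if j % 2 == 1]
    J2 = [j for j in range(2, m + 1) if j % 2 == 0]
    y = {j: x[j - 1] - y_offset(j, x1) for j in range(2, m + 1)}
    f1 = x1 + 2.0 / len(J1) * sum(y[j] ** 2 for j in J1)
    f2 = 1.0 - np.sqrt(x1) + 2.0 / len(J2) * sum(y[j] ** 2 for j in J2)
    return f1, f2

f1, f2 = objectives(pareto_point(0.49))
print(f1, f2)  # 0.49 and 1 - sqrt(0.49) = 0.3
```

Sweeping `x1` over $[0, 1]$ traces the convex front $f_2 = 1 - \sqrt{f_1}$ despite the highly non-convex variable-domain manifold.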
This joint minimization case operates in an $n = 30$ dimensional variable space. Fig. 4 shows the true Pareto front and SUHNPF convergence in the variable domain. Note the non-convexity in the variable domain, where $x_1$ varies uniformly in $[0, 1]$, while $x_2, \ldots, x_{30}$ are sinusoidal in nature, guided by $x_1$. Thus, the Pareto manifold has a spiral trajectory along $x_2, \ldots, x_{30}$ with evolution along $x_1$.

Despite the Pareto front being convex, the objectives are non-convex. For MTL methods, the min_norm_solver [Sener and Koltun, 2018], which is integral to all MTL solvers, simply fails. Consequently, no MTL results are reported.

For SUHNPF, following random initialization (iteration 0) in Fig. 4 (a), we observe that the candidate set $\mathcal{P}1$ propagates towards increasing values of $x_1$ in Fig. 4, approximating the expected Pareto manifold at iteration 5.

[Figure 4 panels: (a) Iteration 0 (Start); (b) Iteration 5 (Converged)]

Figure 4: Case II: variable domain (SUHNPF). We restrict the plots to three dimensions ($x_1, x_2$, and $x_3$) for visualization.

[Figure 5 panels: (a) Iteration 0 (Start); (b) Iteration 5 (Converged)]

Figure 5: Case II: functional domain (SUHNPF).

§ 6.3 CASE III: TANAKA ET AL. [1995]

$$
f_1(x_1, x_2) = x_1, \quad f_2(x_1, x_2) = x_2
$$

$$
\text{s.t. } g_1(x_1, x_2) = (x_1 - 0.5)^2 + (x_2 - 0.5)^2 \leq 0.5
$$

$$
g_2(x_1, x_2) = x_1^2 + x_2^2 - 1 - 0.1 \cos\left( 16 \arctan(x_1 / x_2) \right) \geq 0
$$

$$
g_3, g_4 : 0 \leq x_1, x_2 \leq \pi
$$

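The constraint-driven front can be traced in closed form (our sketch): writing $x_1 = r \sin\theta$, $x_2 = r \cos\theta$ makes $\arctan(x_1 / x_2) = \theta$, so $g_2 = 0$ reduces to $r^2 = 1 + 0.1 \cos(16\theta)$.

```python
import numpy as np

def g2(x1, x2):
    return x1**2 + x2**2 - 1 - 0.1 * np.cos(16 * np.arctan(x1 / x2))

# Sample the implicit boundary g2 = 0 radially over the first quadrant.
theta = np.linspace(0.05, np.pi / 2 - 0.05, 100)
r = np.sqrt(1 + 0.1 * np.cos(16 * theta))  # radial solution of g2 = 0
x1, x2 = r * np.sin(theta), r * np.cos(theta)
print(np.abs(g2(x1, x2)).max())  # ~0: points lie on the g2 boundary
```

Intersecting this wavy boundary with the disk constraint $g_1$ yields the disconnected implicit Pareto front of Fig. 6.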
For this joint minimization problem, the Pareto front is dominated by the two constraints ${g}_{1}$ and ${g}_{2}$, while linear functions ${f}_{1}$ and ${f}_{2}$ do not contribute to the Pareto optimal solution. Fig. 6 shows the convergence of SUHNPF Pareto candidates toward the known solution manifold.

Because MTL approaches do not support constraints, they are not capable of solving this benchmark problem. However, note that if we were to remove constraints $g_1$ and $g_2$, then $f_1$ and $f_2$ would become independent of each other (and so not compete). The front then collapses to the point $(0, 0)$, corresponding to the minimum of both functions. For this unconstrained problem, MTL methods would be expected to find this correct Pareto optimal solution point.

[Figure 6 panels: (a) Iteration 0 (Start); (b) Iteration 5 (Converged)]

Figure 6: Case III: variable domain. The analytical solution for this problem is driven by constraints $g_1, g_2$. SUHNPF Pareto candidates $\mathcal{P}1$ (red dots) converge to the true front.

Case III highlights the need for any manifold-based extractor to support both explicit and implicit forms of the Pareto front. Cases I and II have explicit forms of the front in the functional and variable domains. However, Case III has an implicit Pareto front (Fig. 6) owing to constraints $g_1, g_2$, which induce an implicit relation between $x_1, x_2$ and therefore $f_1, f_2$. SUHNPF's ability to construct a full-rank diffusive indicator function of Pareto vs. non-Pareto points enables it to approximate the true manifold.

§ 6.4 SUHNPF VS. OR AND MTL METHODS

Table 3 reports the number of candidate evaluations by OR methods vs. SUHNPF to find $P = {50}$ Pareto points for Cases I-III. Because OR methods and SUHNPF all return $P$ true Pareto points, we compare methods on efficiency only.

Note that HNPF is not an iterative solver: given a grid of points in the feasible domain, it identifies those that are weak Pareto optimal. It thus requires fewer evaluations when the variable dimensionality is low (Cases I and III, with 2 dimensions). In contrast, SUHNPF is a solver that starts with 50 random points and iteratively converges them onto the weak Pareto front, irrespective of the variable-space dimension (Case II, with 30 dimensions), and hence is scalable to MTL problems. In general, the large number of evaluations required by OR methods is indicative of their lack of scalability to MTL problems.

Table 3: The number of evaluations performed by each method to find 50 Pareto points across Cases I-III. HNPF performs well with small variable space dimensionality (e.g., 2D in Case I & III) but scales poorly to higher dimensionality (Case II, 30D).

| Method | Case I | Case II | Case III |
| --- | --- | --- | --- |
| NBI | 1,236,034 | 1,497,063,168 | 447,574 |
| mCHIM | 1,081,625 | 3,605,242,265 | 497,537 |
| PK | 138,224 | 169,223,715 | 377,410 |
| HNPF | 2,731 | 24,457 | 3,626 |
| SUHNPF | 4,219 | 4,682 | 4,578 |

Table 4 reports the accuracy, efficiency and run-time of SUHNPF vs. MTL methods for Case I. For Case II, the min_norm_solver [Sener and Koltun, 2018] used by MTL methods fails, and Case III's constraints are not supported by MTL methods. Note that for fair evaluation, we only consider candidates that are produced within the feasible functional bounds for the problem. Additional run-time evaluation and discussion can be found in Appendix C.

Table 4: SUHNPF vs. MTL methods on Case I in finding $P = 50$ Pareto points. We report the % of feasible points each method finds and their avg/max error vs. the true front. Our error measure considers feasible points only; infeasible points are not penalized.

| Method | LS | MOOMTL | PMTL | EPO | EPSE | PHN | SUHNPF |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Evaluations | 5K | 5K | 5K | 5K | 5K | 5K | 4,219 |
| Run-time (secs) | 18.1 | 19.2 | 527 | 752 | 641 | 853 | 10.0 |
| Points Found | 54% | 32% | 70% | 68% | 30% | 80% | 100% |
| Avg Err $(10^{-4})$ | 0.53 | 0.45 | 4.15 | 8.73 | 0.61 | 3.04 | 0.52 |
| Max Err $(10^{-4})$ | 1.12 | 0.98 | 126 | 106 | 0.94 | 73.8 | 0.82 |

Regarding Case I coverage and accuracy, SUHNPF returns all 50 Pareto points; no MTL method does. For all points that are found, we measure their error vs. the true Pareto front. SUHNPF achieves the lowest error, with maximum error bounded by the $10^{-4}$ error tolerance parameter set in our experiments. Specifically, the outer loop of Alg. 1 does not achieve convergence until all the points are within the prescribed error tolerance. In contrast, PMTL, EPO, and PHN yield maximum error two orders of magnitude larger. Note also that our error metric generously scores only the points found by each method, with no penalty for missing points. Visually, SUHNPF (Fig. 2) clearly provides better coverage of the Pareto front via a denser, more even spread of points vs. those found by MTL methods (Fig. 3).

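The error measure can be read as a point-to-front distance (a plausible sketch of ours; the paper does not spell out its exact implementation): each feasible point found is scored by its distance to a dense sampling of the true front, and missing points contribute nothing.

```python
import numpy as np

def front_error(points, front):
    """Mean and max distance from each found point to its nearest
    neighbor on a dense sampling of the true Pareto front."""
    points, front = np.asarray(points), np.asarray(front)
    d = np.linalg.norm(points[:, None, :] - front[None, :, :], axis=-1)
    nearest = d.min(axis=1)  # distance to the closest front sample
    return nearest.mean(), nearest.max()

# Toy front f2 = 1 - f1, sampled densely; two found points to score.
f1 = np.linspace(0.0, 1.0, 1000)
front = np.stack([f1, 1.0 - f1], axis=1)
avg, mx = front_error([[0.5, 0.5], [0.2, 0.81]], front)
```

The first point lies on the front (near-zero error), while the second sits slightly above it, so only the off-front point contributes meaningfully.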
Because MTL approaches assume convexity of objective functions to generate points with uniformity on the Pareto front, and Case I includes non-convex objectives, the MTL solvers fail to find points in certain regions (see Fig. 3). While EPO's solver has convergence criteria, it still produces points that did not converge (circled in blue). This stems from EPO's reliance on KKT conditions to achieve optimality, which fails on Case I's non-convex form of $f_2$. Correspondingly, PHN(-EPO), which uses EPO as its base solver, also fails to converge on certain points. In contrast, SUHNPF relies on the FJC to test optimality, which fully supports non-convexity in functions and constraints.

Regarding Case I efficiency, SUHNPF is also fastest: nearly twice as fast as LS and MOOMTL, more than 50x faster than PMTL and EPSE, 75x faster than EPO, and 85x faster than PHN. (Because PHN-EPO calls EPO, it is necessarily slower than EPO.) As Navon et al. [2021] note, LS is much faster than EPO, so one could expect PHN-LS to be faster than PHN-EPO and slower than LS.

§ 7 SUHNPF AS A HYPERNETWORK

Hypernetworks [Ha et al., 2016] train one neural model to generate effective weights for a second, target model. Navon et al. [2021] and Lin et al. [2021] learn a neural manifold mapping MOO solutions to different target model weights, enabling the target model to achieve the desired Pareto trade-off for the MOO problem.

Assume the target task maps from input $Y$ to output $Z$. We seek to minimize objective functions $f_1$ and $f_2$ having loss functions $\mathcal{L}_1$ and $\mathcal{L}_2$. Given correct output $Z^*$, we score $Z$ under each loss function $\mathcal{L}_i(Z, Z^*)$. A target model for this task $C_\Theta : Y \rightarrow Z$ with parameters $\Theta$ will yield losses $\mathcal{L}_i(C_\Theta(Y), Z^*)$ for each $i$. The MOO problem is to find Pareto optimal $\Theta^*$ for the $f_1 = \mathcal{L}_1$ vs. $f_2 = \mathcal{L}_2$ trade-off.

The objectives $\mathcal{L}_1(\Theta), \mathcal{L}_2(\Theta)$ for SUHNPF are continuous differentiable functions of $\Theta$. This enables SUHNPF's guided double gradient descent strategy to efficiently search the space of target model parameters $\Theta$, mapping each to resulting loss values $(\mathcal{L}_1, \mathcal{L}_2)$. Training data resulting from this search allows SUHNPF to learn an $\epsilon$-bounded approximation $\widetilde{M}(\Theta^*)$ to the weak Pareto optimal manifold.

+ As in prior Pareto Front Learning (PFL) work [Navon et al., 2021, Lin et al., 2021], this enables rapid model personalization at run-time based on user preferences. The neural MOO ${\text{Loss}}_{\text{classifier}}$ is a weighted linear combination of the user-prescribed objectives $\left( {{\mathcal{L}}_{1},{\mathcal{L}}_{2}}\right)$. The classifier loss hyper-parameter $\alpha$ (trade-off value) is computed as a post-processing step corresponding to Pareto optimal classifier weights ${\Theta }^{ * }$, enabling rapid traversal of arbitrary $\left( {\alpha ,{\Theta }^{ * }}\right)$ solutions. See Fig. 9 in Appendix A for additional details of the setup of SUHNPF as a hypernetwork to optimize a target model.
379
+
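The linear scalarization described above can be sketched in a few lines (the loss values below are placeholders standing in for the two task losses):

```python
# Minimal sketch of the weighted linear combination of objectives:
# Loss(alpha) = alpha * L1 + (1 - alpha) * L2, swept over the 11 trade-off
# values alpha in {1.0, 0.9, ..., 0.0} used later in Fig. 7 and Table 5.

def combined_loss(l1, l2, alpha):
    """Weighted linear combination of two task losses."""
    assert 0.0 <= alpha <= 1.0
    return alpha * l1 + (1.0 - alpha) * l2

alphas = [round(k / 10, 1) for k in range(10, -1, -1)]   # 1.0 down to 0.0
losses = [combined_loss(0.4, 0.7, a) for a in alphas]    # placeholder L1, L2
```

At $\alpha = 1$ only the first objective matters, at $\alpha = 0$ only the second; intermediate values trace out candidate trade-off points.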
380
+ § 7.1 EVALUATION ON MULTI-TASK LEARNING
381
+
382
+ We evaluate on the same MTL image classification problems as in Navon et al. [2021]. Given two underlying source datasets, MNIST [LeCun et al., 1998] and Fashion-MNIST [Xiao et al., 2017a], Navon et al. [2021] report on three MTL tasks: MultiMNIST [Sabour et al., 2017], Multi-Fashion, and Multi-Fashion + MNIST. In each case, two images are sampled from the source datasets and overlaid, one at the top-left corner and one at the bottom-right, with each also shifted up to 4 pixels in each direction. The two competing tasks are to correctly classify each of the original images: Top-Left (Task 1 or ${f}_{1}$) and Bottom-Right (Task 2 or ${f}_{2}$). We use 120K training and 20K testing examples and directly apply existing single-task models, allocating 10% of each training set for constructing validation sets, as used in Lin et al. [2019]. Navon et al. [2021] found that PHN-EPO (henceforth PHN) was more accurate than other methods they compared, so we use PHN as our baseline.
383
+
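The sample construction described above can be sketched as follows (a hedged illustration: the 32x32 canvas size and the pixel-wise maximum merge rule are assumptions, not taken from the benchmark's reference implementation):

```python
import numpy as np

# Illustrative MultiMNIST-style sample construction: two 28x28 source
# images are placed at the top-left and bottom-right of a larger canvas,
# each shifted by up to 4 pixels, then merged pixel-wise.

rng = np.random.default_rng(0)

def overlay(img_a, img_b, max_shift=4):
    h, w = img_a.shape
    canvas_a = np.zeros((h + max_shift, w + max_shift), dtype=img_a.dtype)
    canvas_b = np.zeros_like(canvas_a)
    dy, dx = rng.integers(0, max_shift + 1, size=2)   # shift from top-left
    sy, sx = rng.integers(0, max_shift + 1, size=2)   # shift from bottom-right
    canvas_a[dy:dy + h, dx:dx + w] = img_a
    canvas_b[max_shift - sy:max_shift - sy + h,
             max_shift - sx:max_shift - sx + w] = img_b
    return np.maximum(canvas_a, canvas_b)   # Task 1 label: img_a, Task 2: img_b

sample = overlay(rng.random((28, 28)), rng.random((28, 28)))
```

Each merged sample keeps two labels, one per source image, which is what makes the two classification tasks compete.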
384
+ We adopt the LeNet architecture [LeCun et al., 1998] as the target model to learn. Following prior MTL work [Sener and Koltun, 2018], we treat all layers other than the last as the shared representation function and put two fully-connected layers as task-specific functions. We use cross-entropy loss with softmax activation for both task-specific loss functions. Because cross-entropy loss functions are differentiable, we can use them directly as training objectives.
385
+
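The shared-trunk-plus-two-heads structure can be sketched numerically (a minimal stand-in: the linear "trunk" below replaces LeNet's convolutional layers, and all sizes are illustrative):

```python
import numpy as np

# Sketch of the shared representation feeding two task-specific heads,
# each scored with softmax cross-entropy. The linear trunk stands in for
# LeNet's shared layers; dimensions are assumptions for illustration.

rng = np.random.default_rng(0)
D_IN, D_FEAT, N_CLASSES = 784, 64, 10

W_shared = rng.normal(scale=0.01, size=(D_FEAT, D_IN))
W_task1 = rng.normal(scale=0.01, size=(N_CLASSES, D_FEAT))
W_task2 = rng.normal(scale=0.01, size=(N_CLASSES, D_FEAT))

def softmax_xent(logits, label):
    z = logits - logits.max()                 # numerically stable softmax
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]                  # differentiable in the logits

def task_losses(x, y1, y2):
    feat = np.maximum(W_shared @ x, 0.0)      # shared representation (ReLU)
    return (softmax_xent(W_task1 @ feat, y1),
            softmax_xent(W_task2 @ feat, y2))

l1, l2 = task_losses(rng.random(D_IN), y1=3, y2=7)
```

Because both losses flow through the shared trunk, improving one task can degrade the other, which is exactly the trade-off the Pareto front captures.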
386
+ [Figure 7 panels: (a) MultiMNIST, (b) MultiFashion, (c) MultiFashion+MNIST, each plotting Task 1 Loss (x-axis) vs. Task 2 Loss (y-axis) for SUHNPF and PHN.]
387
+
388
+ Figure 7: Cross-entropy loss on the test split for all three MTL datasets for SUHNPF vs. PHN. The 11 points shown for each method correspond (from left to right) to varying trade-off preferences in minimizing the combined linear loss over objectives: $\alpha {f}_{1} + \left( {1 - \alpha }\right) {f}_{2}$ for $\alpha \in \{ 1,{0.9},\ldots ,0\}$. The gray dashed lines show the best loss achieved by LeNet when classifying a single image for each given task.
389
+
390
+ Results. We see SUHNPF vs. PHN results on dataset test splits in Fig. 7. Because SUHNPF defines a strict $\epsilon$ -bound on error, we can assert its correctness on this basis alone. Visual inspection also shows that PHN returns dominated points (e.g., top of the MultiMNIST plot), whereas a Pareto front by definition includes only non-dominated points. Nonetheless, we cannot directly measure error vs. a known Pareto front because real MOO problems lack the simple analytical solutions of synthetic benchmark problems. Of course, we can still compare the relative performance of methods. We see that SUHNPF achieves strictly lower loss than PHN across all user trade-off settings of $\alpha$ on all three datasets.
391
+
392
+ Since the minimum loss $\min \left( {f}_{1}\right) = \min \left( {f}_{2}\right) = 0$ for both objectives, the ideal point [Marler and Arora, 2004] for joint minimization is $(0,0)$. A simple error measure for each point found is thus its $\ell_2$ distance from $\left( {0,0}\right)$: $\sqrt{{f}_{1}^{2} + {f}_{2}^{2}}$. Table 5 reports this distance for each Pareto point found at each $\alpha$ (across methods and datasets). We also report the average over the 11 settings of $\alpha$. Overall, Table 5 quantifies what Fig. 7 depicts visually: SUHNPF performs strictly better for every Pareto point and thus also on average.
393
+
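The error measure above is easy to check by hand; using the SUHNPF MultiMNIST distances reported in Table 5, the row average reproduces the table's Avg column:

```python
import math

# Worked check of the error measure: a point's l2 distance from the ideal
# point (0, 0) is sqrt(f1^2 + f2^2), and the "Avg" column averages the 11
# per-alpha distances.

def dist_to_ideal(f1, f2):
    return math.sqrt(f1 ** 2 + f2 ** 2)

# SUHNPF MultiMNIST distances at alpha = 0.0, 0.1, ..., 1.0 (from Table 5):
suhnpf_multimnist = [.500, .478, .464, .448, .441, .434,
                     .441, .443, .452, .457, .465]
avg = sum(suhnpf_multimnist) / len(suhnpf_multimnist)   # close to the reported .456
```

For instance, a point with losses $(0.3, 0.4)$ sits at distance $0.5$ from the ideal point.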
394
+ Table 5: SUHNPF vs. PHN on MTL tasks, measured by the distance of each Pareto point found from the ideal loss point $\left( {{f}_{1},{f}_{2}}\right) = \left( {0,0}\right)$, at each trade-off value $\alpha$.
395
+
396
+ | Method / $\alpha$ | 0.0 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1.0 | Avg |
+ | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+ | **MultiMNIST** | | | | | | | | | | | | |
+ | PHN | .621 | .585 | .539 | .504 | .486 | .478 | .483 | .494 | .508 | .521 | .527 | .522 |
+ | SUHNPF | .500 | .478 | .464 | .448 | .441 | .434 | .441 | .443 | .452 | .457 | .465 | .456 |
+ | **MultiFashion** | | | | | | | | | | | | |
+ | PHN | .877 | .872 | .853 | .813 | .784 | .773 | .779 | .797 | .816 | .826 | .829 | .819 |
+ | SUHNPF | .862 | .819 | .792 | .773 | .757 | .746 | .754 | .758 | .767 | .793 | .810 | .784 |
+ | **MultiFashion+MNIST** | | | | | | | | | | | | |
+ | PHN | .690 | .613 | .581 | .569 | .571 | .579 | .598 | .631 | .682 | .752 | .797 | .642 |
+ | SUHNPF | .667 | .617 | .586 | .552 | .547 | .543 | .549 | .553 | .583 | .629 | .695 | .593 |
431
+
432
+ § 8 UNDERSTANDING SUHNPF VS. PHN
433
+
434
+ While both SUHNPF and PHN are manifold-based (Fig. 8), they differ in the type of manifold being learned. SUHNPF explicitly maintains point sets $\mathcal{P}_0$ and $\mathcal{P}_1$ to learn the classification boundary between Pareto vs. non-Pareto points as per the FJC. PHN fits a regression surface over the set of points returned by LS or EPO. Since neither LS nor EPO is guaranteed to operate under non-convex settings (Section 3), those drawbacks are in turn inherited by PHN in using them. Table 6 highlights the key differences. The distinction between a diffusive full-rank indicator vs. a low-rank regressor is further discussed in Appendix D.
435
+
436
+ [Figure 8 panels: (a) SUHNPF: neural network with Fritz-John criteria; (b) PHN: neural network with EPO solver.]
437
+
438
+ Figure 8: High-level abstraction of the SUHNPF and PHN solvers. SUHNPF uses the Fritz-John criteria as its optimality check, while PHN uses the candidates deemed optimal by the EPO solver.
439
+
440
+ Table 6: SUHNPF vs. PHN for Pareto front learning.
441
+
442
+ | Criteria | SUHNPF | PHN |
+ | --- | --- | --- |
+ | Handles non-convexity | ✓ | ✘ |
+ | Supports constraints | ✓ | ✘ |
+ | Manifold extractor | ✓ | ✓ |
+ | Nature of manifold | Diffusive full-rank indicator | Low-rank regressor |
+ | Optimality criteria | Fritz-John conditions | EPO solver |
462
+
463
+ § 9 CONCLUSION
464
+
465
+ Multi-objective optimization problems require balancing competing objectives, often under constraints. In this work, we described a novel method for Pareto-front learning (inducing the full Pareto manifold at train-time so users can pick any desired optimal trade-off point at run-time). Our SUHNPF Pareto solver is robust against non-convexity, with error bounded by a user-specified tolerance. Our key innovation over the prior HNPF [Singh et al., 2021] is to exploit the Fritz-John Conditions in a novel guided double gradient descent strategy. The scaling property imparts significant improvements in memory and run-time vs. prior OR and Multi-Task Learning (MTL) approaches. Results across synthetic benchmarks and MTL problems in image classification show clear, consistent advantages of SUHNPF in capability (handling non-convexity and constraints), denser coverage and higher accuracy in recovering the true Pareto front, and efficiency (time and space). Beyond empirical results, our conceptual framing and review of prior work also further bridge disparate lines of OR and MTL research.
466
+
467
+ Both SUHNPF and MTL methods assume differentiable evaluation metrics as training losses so that optima can be found through gradient descent. However, the loss can be a non-differentiable, probabilistic measure, such as in fairness-related tasks [Sacharidis, 2019, Valdivia et al., 2020]. This creates a risk of metric divergence between the training loss and the evaluation measure of interest [Abou-Moustafa and Ferrie, 2012]. Continued development of differentiable measures can help to address this [Swezey et al., 2021].
UAI/UAI 2022/UAI 2022 Conference/BcLqJUIs5x5/Initial_manuscript_md/Initial_manuscript.md ADDED
 
UAI/UAI 2022/UAI 2022 Conference/BcLqJUIs5x5/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,263 @@
1
+ § SOLVING STRUCTURED HIERARCHICAL GAMES USING DIFFERENTIAL BACKWARD INDUCTION
2
+
3
+ § ABSTRACT
4
+
5
+ From large-scale organizations to decentralized political systems, hierarchical strategic decision making is commonplace. We introduce a novel class of structured hierarchical games (SHGs) that formally capture such hierarchical strategic interactions. In an SHG, each player is a node in a tree, and strategic choices of players are sequenced from root to leaves, with root moving first, followed by its children, then followed by their children, and so on until the leaves. A player's utility in an SHG depends on its own decision, and on the choices of its parent and all the tree leaves. SHGs thus generalize simultaneous-move games, as well as Stackelberg games with many followers. We leverage the structure of both the sequence of player moves as well as payoff dependence to develop a novel gradient-based back propagation-style algorithm, which we call Differential Backward Induction (DBI), for approximating equilibria of SHGs. We provide a sufficient condition for convergence of DBI and demonstrate its efficacy in finding approximate equilibrium solutions to several SHG models of hierarchical policy-making problems.
6
+
7
+ § 1 INTRODUCTION
8
+
9
+ The COVID-19 pandemic has revealed considerable strategic tension among the many parties involved in decentralized hierarchical policy-making. For example, recommendations by the World Health Organization are sometimes heeded, and other times discarded by nations, while subnational units, such as provinces and urban areas, may in turn take a policy stance (such as on lockdowns, mask mandates, or vaccination priorities) that is not congruent with national policies. Similarly, in the US, policy recommendations at the federal level can be implemented in a variety of ways
10
+
11
+ by the states, while counties and cities, in turn, may comply with state-level policies, or not, potentially triggering litigation [15]. Central to all these cases is that, besides this strategic drama, what ultimately determines infection spread is how policies are implemented at the lowest level, such as by cities and towns, or even individuals. Similar strategic encounters routinely play out in large-scale organizations, where actions throughout the management hierarchy are ultimately reflected in the decisions made at the lowest level (e.g., by the employees who are ultimately involved in production), and these lowest-level decisions play a decisive role in the organizational welfare.
12
+
13
+ We propose a novel model of hierarchical decision making which is a natural stylized representation of strategic interactions of this kind. Our model, which we term structured hierarchical games (SHGs), represents each player by a node in a tree hierarchy. The tree plays two roles in SHGs. First, it captures the sequence of moves by the players: the root (the lone member of level 1 of the hierarchy) makes the first strategic choice, its children (i.e., all nodes in level 2) observe the root's choice and follow, their children then follow in turn, and so on, until we reach the leaf node players who move upon observing their predecessors' choices. Second, the tree partially captures strategic dependence: a player's utility depends on its own strategy, that of its parent, and the strategies of all of the leaf nodes. The sequence of moves in our model naturally captures the typical sequence of decisions in hierarchical policy-making settings, as well as in large organizations, while the utility structure captures the decisive role of leaf nodes (e.g., individual compliance with vaccination policies), as well as hierarchical dependence (e.g., employee dependence on a manager's approval of their performance, or state dependence on federal funding). Significantly, the SHG model generalizes a number of well-established models of strategic encounters, including (a) simultaneous-move games (captured by a 2-level SHG with the root having a single dummy action), (b) Stackelberg (leader-follower) games (a 2-level game with a single leaf node) [11, 37], and (c) single-leader multi-follower Stackelberg games (e.g., a Stackelberg security game with a single defender and many attackers) [5, 8].
14
+
15
+ Our second contribution is a novel gradient-based algorithm for approximately computing subgame-perfect equilibria of SHGs. Specifically, we propose Differential Backward Induction (DBI), which is a backpropagation-style gradient ascent algorithm that leverages both the sequential structure of the game, as well as the utility structure of the players. As DBI involves simultaneous gradient updates of players in the same level (particularly at the leaves), convergence is not guaranteed in general (as is also the case for best-response dynamics [12]). Viewing DBI as a dynamical system, we provide a sufficient condition for its convergence to a stable point. Our results also imply that in the special case of two-player zero-sum Stackelberg games, DBI converges to a local Stackelberg equilibrium [11, 39].
16
+
17
+ Finally, we demonstrate the efficacy of DBI in finding approximate equilibrium solutions to several classes of SHGs. First, we use a highly stylized class of SHGs with polynomial utility functions to compare DBI with five baseline gradient-based approaches from prior literature. Second, we use DBI to solve a recently proposed game-theoretic model of 3-level hierarchical epidemic policy making. Third, we apply DBI to solve a hierarchical variant of a public goods game, which naturally captures the decentralization of decision making in public good investment decisions, such as investments in sustainable energy. Fourth, we evaluate DBI in the context of a hierarchical security investment game, where hierarchical decentralization (e.g., involving federal government, industry sectors, and particular organizations) can also play a crucial role. In all of these, we show that DBI significantly outperforms the state of the art approaches that can be applied to solve games with hierarchical structure.
18
+
19
+ Related Work SHGs generalize both simultaneous-move games and Stackelberg games with multiple followers [5, 23]. They are also related to graphical games [20] in capturing utility dependence structure, although SHGs also capture the sequential structure of decisions. Several prior approaches use gradient-based methods for solving games with particular structure. A prominent example is generative adversarial networks (GANs), though these are zero-sum games [9, 14, 19, 29, 30, 31]. Ideas from learning GANs have been adopted in gradient-based approaches to solve multi-player general-sum games [4, 7, 17, 22, 25, 28, 29]. However, all of these approaches assume a simultaneous-move game. A closely-related thread to our work considers gradient-based methods for bi-level optimization [24, 35]. Several related efforts consider gradient-based learning in Stackelberg games, and also use the implicit function theorem to derive gradient updates [2, 11, 32, 38, 39]. We significantly generalize these ideas by considering an arbitrary hierarchical game structure.
20
+
21
+ Jia et al. [18] recently considered a stylized 3-level SHG for pandemic policy making, and proposed several non-gradient-based algorithms for this problem. We compare with their approach in Section 4.
22
+
23
+ § 2 STRUCTURED HIERARCHICAL GAMES
24
+
25
+ Notation We use bold lower-case letters to denote vectors. Let $f$ be a function of the form $f\left( {\mathbf{x},\mathbf{y}}\right) : {\mathbb{R}}^{d} \times {\mathbb{R}}^{{d}^{\prime }} \rightarrow {\mathbb{R}}^{{d}^{\prime \prime }}$ . We use ${\nabla }_{\mathbf{x}}f$ to denote the partial derivative of $f$ with respect to $\mathbf{x}$ . When there is functional dependency between $\mathbf{x}$ and $\mathbf{y}$ , we use ${D}_{\mathbf{x}}f$ to denote the total derivative of $f\left( {\mathbf{x},\mathbf{y}\left( \mathbf{x}\right) }\right)$ with respect to $\mathbf{x}$ . We use ${\nabla }_{\mathbf{x},\mathbf{x}}^{2}f$ and ${\nabla }_{\mathbf{x},\mathbf{y}}^{2}f$ to denote the second-order partial derivatives and ${D}_{\mathbf{x},\mathbf{x}}^{2}f$ to denote the second-order total derivative of $f$ . For a mapping $f : {\mathbb{R}}^{d} \rightarrow$ ${\mathbb{R}}^{d}$ , we use ${f}^{t}\left( \mathbf{x}\right)$ to denote $t$ iterative applications of $f$ on $\mathbf{x}$ . For mappings ${f}_{1} : {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{d}$ and ${f}_{2} : {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{d}$ , we define $\left( {{f}_{1} \circ {f}_{2}}\right) \left( \mathbf{x}\right) \triangleq {f}_{1}\left( {{f}_{2}\left( \mathbf{x}\right) }\right)$ and $\left( {{f}_{1} + {f}_{2}}\right) \left( \mathbf{x}\right) \triangleq$ ${f}_{1}\left( \mathbf{x}\right) + {f}_{2}\left( \mathbf{x}\right)$ . Moreover, for a given $\epsilon \in {\mathbb{R}}^{ \geq 0}$ and $\mathbf{x} \in {\mathbb{R}}^{d}$ , we define the $\epsilon$ -ball around $\mathbf{x}$ as ${\mathbb{B}}_{\epsilon }\left( \mathbf{x}\right) = \left\{ {{\mathbf{x}}^{\prime } \in {\mathbb{R}}^{d} \mid }\right.$ $\left. {{\begin{Vmatrix}\mathbf{x} - {\mathbf{x}}^{\prime }\end{Vmatrix}}_{2} < \epsilon }\right\}$ . Finally, $\mathbf{I}$ denotes an identity matrix.
26
+
27
+ Formal Model A structured hierarchical game (SHG) $\mathcal{G}$ consists of the set $\mathcal{N}$ of $n$ players. Each player $i$ is associated with a set of actions ${\mathcal{X}}_{i} \subseteq {\mathbb{R}}^{{d}_{i}}$ . The players are partitioned across $L$ levels, where ${\mathcal{N}}_{l}$ is the set of ${n}_{l}$ players occupying level $l$ . Let ${l}_{i}$ denote the level occupied by player i. This hierarchical structure of the game is illustrated in Figure 1 where players correspond to nodes and levels are marked by dashed boundaries. The hierarchy plays two crucial roles: 1) it determines the order of moves, and 2) it partly determines utility dependence among players. Specifically, the temporal pattern of actions is as follows: level 1 has a single player, the root, who chooses an action first, followed by all players in level 2 making simultaneous choices, followed in turn by players in level 3, and so on until the leaves in the final level $L$ . Players of level $l$ only observe the actions chosen by all players of levels $1,2,\ldots ,l - 1$ , but not their peers in the same level. So, for example, pandemic social distancing and vaccination policies in the US are initiated by the federal government (including the Centers for Disease Control and Prevention), with states subsequently instituting their own policies, counties reacting to these by determining their own, and behavior of people ultimately influenced, but not determined, by the guidelines and enforcement policies by the local county/city.
28
+
29
30
+
31
+ Figure 1: Schematic representation of an SHG. The utility of player $i$ can have direct functional dependence only on the joint action of all shaded players.
32
+
33
+ Next, we describe the utility structure of the game as entailed by the SHG hierarchy. Each player $i$ in level ${l}_{i} > 1$ (i.e., any node other than the root) has a unique parent in level ${l}_{i} - 1$; we denote the parent of node $i$ by $\mathrm{{PA}}\left( i\right)$. A player's utility function is determined by 1) its own action, 2) the action of its parent, and 3) the actions of all players in level $L$ (i.e., all leaf players). To formalize, let ${\mathbf{x}}_{l}$ denote the joint action profile of all players in level $l$. Player $i$'s utility function then has the form ${u}_{i}\left( {{x}_{i},{\mathbf{x}}_{L}}\right)$ if ${l}_{i} = 1$; ${u}_{i}\left( {{x}_{i},{x}_{\mathrm{{PA}}\left( i\right) },{\mathbf{x}}_{L}}\right)$ if $1 < {l}_{i} < L$; and ${u}_{i}\left( {{x}_{i},{x}_{\mathrm{{PA}}\left( i\right) },{\mathbf{x}}_{L, - i}}\right)$ if ${l}_{i} = L$, where ${\mathbf{x}}_{L, - i}$ is the action profile of all players in level $L$ other than $i$. For example, in our running pandemic policy example, the utility of a county depends on both the policy and enforcement strategy of its state (its parent) and on the ultimate pandemic spread and economic impact within it, both determined largely by the behavior of the county residents (leaf nodes). Note the considerable generality of the SHG model. For example, an arbitrary simultaneous-move game is an SHG with 2 levels and a "dummy" root node (utilities of all leaves depend on one another's actions), and an arbitrary Stackelberg game (e.g., a Stackelberg security game), even with many followers, can be modeled as a 2-level SHG with the leader as root and followers as leaves. Furthermore, while we have defined SHGs with respect to real-vector player action sets, it is straightforward to represent mixed strategies of finite-action games in this way by simply using a softmax function to map an arbitrary real vector into a valid mixed strategy.
34
+
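The tree structure and utility dependence above can be sketched as a small data structure (names and the 3-level example tree are illustrative, not from the paper):

```python
# Minimal data-structure sketch of an SHG: players are tree nodes, and a
# player's utility may read only its own action, its parent's action, and
# the actions of all leaves (the shaded players in Fig. 1).

class Player:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.children = name, parent, []
        if parent is not None:
            parent.children.append(self)

    def is_leaf(self):
        return not self.children

def leaves(node):
    if node.is_leaf():
        return [node]
    return [v for c in node.children for v in leaves(c)]

def utility_scope(player, root):
    """Players whose actions player's utility may directly depend on."""
    scope = {player} | set(leaves(root))
    if player.parent is not None:
        scope.add(player.parent)
    return scope

# 3-level example: root -> {a, b}; a -> {a1, a2}; b -> {b1}
root = Player("root")
a, b = Player("a", root), Player("b", root)
a1, a2, b1 = Player("a1", a), Player("a2", a), Player("b1", b)
```

Here `utility_scope(a, root)` contains `a` itself, its parent `root`, and all three leaves, matching the dependence structure in Fig. 1.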
35
+ Solution Concept Since an SHG has important sequential structure, it is natural to consider the subgame perfect equilibrium (SPE) as the solution concept [33]. Here, we focus on pure-strategy equilibria. To begin, we note that in SHGs, the strategies of players in any level $l > 1$ are, in general, functions of the complete history of play in levels $1,\ldots ,l - 1$ , which we denote by ${h}_{ < l} = \left( {{\mathbf{x}}_{1},{\mathbf{x}}_{2},\ldots ,{\mathbf{x}}_{l - 1}}\right)$ . Formally, a (pure) strategy of a player $i$ is denoted by ${s}_{i}\left( {h}_{ < l}\right)$ , which deterministically maps an arbitrary history ${h}_{ < l}$ into an action ${x}_{i} \in {\mathcal{X}}_{i}$ . A Nash equilibrium of an SHG is then a strategy profile $\mathbf{s} = \left( {{s}_{1},\ldots ,{s}_{i},\ldots ,{s}_{n}}\right)$ such that for all $i \in \mathcal{N},{u}_{i}\left( {{s}_{i},{\mathbf{s}}_{-\mathbf{i}}}\right) \geq {u}_{i}\left( {{s}_{i}^{\prime },{\mathbf{s}}_{-\mathbf{i}}}\right)$ for all possible alternative strategies for $i,{s}_{i}^{\prime }$ . Here, we denote the realized payoff of $i$ from profile $s$ by ${u}_{i}\left( {{s}_{i},{s}_{-i}}\right)$ . Next, we define a level- $l$ -subgame given ${h}_{ < l}$ as an SHG that includes only players at levels $\geq l$ , with actions chosen in levels $< l$ fixed to ${h}_{ < l}$ . A strategy profile $s$ is a subgame perfect equilibrium of SHG if it is a Nash equilibrium of every level- $l$ -subgame of SHG for every $l$ and history ${h}_{ < l}$ . We prove in appendix A that our definition of SPE is equivalent to the standard SPE in an extensive-form representation of SHG.
36
+
37
+ While in principle we can compute an SPE of an SHG using backward induction, this cannot be done directly (i.e., by complete enumeration of actions of all players) as actions are real vectors. Moreover, even discretizing actions is of little help, as the hierarchical nature of the game leads to exponential explosion of the search space. We now present a gradient-based approach for approximating SPE along the equilibrium path in an SHG that leverages the game structure to derive backpropagation-style gradient updates.
38
+
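The idea of backward induction on a discretized action space can be made concrete with a toy 2-level SHG (one root, one leaf; the utility functions and the 5-point grid are arbitrary choices for illustration):

```python
# Backward-induction sketch on a discretized 2-level SHG: the leaf
# best-responds to each root action, and the root picks the action whose
# induced leaf reply maximizes the root's own utility. Utilities below
# are illustrative, not from the paper.

grid = [i / 4 for i in range(5)]            # coarse discretization of [0, 1]

def u_leaf(x_leaf, x_root):
    return -(x_leaf - x_root) ** 2          # leaf wants to match the root

def u_root(x_root, x_leaf):
    return -(x_root - 0.5) ** 2 - x_leaf    # root trades off both actions

def best_response(x_root):
    return max(grid, key=lambda x: u_leaf(x, x_root))

# Root anticipates the leaf's reply (subgame perfection):
x_root_star = max(grid, key=lambda x: u_root(x, best_response(x)))
x_leaf_star = best_response(x_root_star)
```

Even here the root must evaluate the leaf's reply for every candidate action; with many levels and players, this enumeration grows exponentially, which motivates the gradient-based approach below.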
39
+ § 3 DIFFERENTIAL BACKWARD INDUCTION
40
+
41
+ In this section, we describe our gradient-based algorithm, Differential Backward Induction (DBI), for approximating an SPE, and then analyze its convergence. Just as gradient ascent does not, in general, identify a globally optimal solution to a non-convex optimization problem, DBI in general yields a solution which only satisfies first-order conditions (see Section 3.2 for further details). Moreover, we leverage the structure of the utility functions to focus computation on an SPE in which strategies of players are only a function of their immediate parents.${}^{1}$ In this spirit, we define local best response functions ${\phi }_{i} : {\mathbb{R}}^{{d}_{\mathrm{{PA}}\left( i\right) }} \rightarrow {\mathbb{R}}^{{d}_{i}}$ mapping a player $i$'s parent's action ${x}_{\mathrm{{PA}}\left( i\right) }$ to $i$'s action ${x}_{i}$; note that the notation ${\phi }_{i}$ is distinct from ${s}_{i}$ above for $i$'s strategy, to emphasize the fact that ${\phi }_{i}$ is only locally optimal. Now, suppose that a player $i$ is in the last level $L$. Local optimality of ${\phi }_{i}$ implies that if ${x}_{i} = {\phi }_{i}\left( {x}_{\mathrm{{PA}}\left( i\right) }\right)$, then ${\nabla }_{{x}_{i}}{u}_{i}\left( {{\mathbf{x}}_{i},{\mathbf{x}}_{\mathrm{{PA}}\left( i\right) },{\mathbf{x}}_{L, - i}}\right) = 0$ and ${\nabla }_{{x}_{i},{x}_{i}}^{2}{u}_{i}\left( {{\mathbf{x}}_{i},{\mathbf{x}}_{\mathrm{{PA}}\left( i\right) },{\mathbf{x}}_{L, - i}}\right) \prec 0$.${}^{2}$
42
+
43
+ Let ${\phi }_{l}$ denote the local best response for all the players in level $l$ given the actions of all players in level $l - 1$. We can compose these local best response functions to define the function ${\Phi }_{l} \mathrel{\text{:=}} {\phi }_{L} \circ {\phi }_{L - 1} \circ \ldots \circ {\phi }_{l + 1} : {\mathbb{R}}^{{d}_{{n}_{l}}} \rightarrow {\mathbb{R}}^{{d}_{{n}_{L}}}$, i.e., the local best response of players in the last level $L$ given the actions of the players in level $l$.${}^{3}$ Then for any player $i$ in level ${l}_{i} < L$, ${D}_{{x}_{i}}{u}_{i}\left( {{x}_{i},{x}_{\mathrm{{PA}}\left( i\right) },{\Phi }_{l}\left( \left\langle {{x}_{i},{\mathbf{x}}_{l, - i}}\right\rangle \right) }\right) = 0$ and ${D}_{{x}_{i},{x}_{i}}^{2}{u}_{i}\left( {{x}_{i},{x}_{\mathrm{{PA}}\left( i\right) },{\Phi }_{l}\left( \left\langle {{x}_{i},{\mathbf{x}}_{l, - i}}\right\rangle \right) }\right) \prec 0$, where ${D}_{{x}_{i}}$ is the total derivative with respect to ${x}_{i}$ (as ${\Phi }_{l}\left( \left\langle {{x}_{i},{\mathbf{x}}_{l, - i}}\right\rangle \right)$ is also a function of ${x}_{i}$). Note that the functions $\phi$ and $\Phi$ are implicit, capturing the functional dependencies between actions of players in different levels at the local equilibrium.
44
+
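The composition ${\Phi}_{l} = {\phi}_{L} \circ \ldots \circ {\phi}_{l+1}$ can be sketched directly (the closed-form maps below are toy stand-ins; in the actual algorithm each $\phi$ is implicit):

```python
from functools import reduce

# Sketch of composing per-level local best responses phi_{l+1}, ..., phi_L
# into Phi_l, the leaves' response to level-l actions. Each phi here is a
# toy closed-form map standing in for an implicit best response.

def compose(*fns):
    """compose(f, g, h)(x) == f(g(h(x)))."""
    return reduce(lambda f, g: lambda x: f(g(x)), fns)

phi_2 = lambda x1: 0.5 * x1 + 1.0   # level-2 reply to the root's action
phi_3 = lambda x2: 2.0 * x2 - 1.0   # leaf reply to level-2 actions

Phi_1 = compose(phi_3, phi_2)       # leaves as a function of the root
```

For a root action of $2.0$, the leaves end up at $\phi_3(\phi_2(2.0)) = \phi_3(2.0) = 3.0$.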
45
+ Throughout, we make the following standard assumption on the utility functions [10, 39].
46
+
47
+ ${}^{1}$ Note that while we cannot guarantee that an SPE exists in SHGs in general, let alone those possessing the assumed structure, we find experimentally that our approach often yields good SPE approximations.
48
+
49
+ ${}^{2}$ For simplicity, we omit degenerate cases where ${\nabla }_{{\mathbf{x}}_{i},{\mathbf{x}}_{i}}^{2}{u}_{i} = 0$ and assume all local maxima are strict.
50
+
51
+ ${}^{3}$ Note that in particular ${\Phi }_{L} = {\phi }_{L}$ .
52
+
53
+ Assumption 1. For any ${x}_{i} \in {\mathcal{X}}_{i}$ , the second-order partial derivatives of the form ${\nabla }_{{x}_{i},{x}_{i}}^{2}{u}_{i}$ are non-singular.
54
+
55
+ § 3.1 ALGORITHM
56
+
57
+ The DBI algorithm works in a bottom-up manner, akin to back-propagation: for each level $l$ , we compute the total derivatives (gradients) of the utility functions and local best response maps $\left( {\phi ,\Phi }\right)$ based on analytical expressions that we derive below. We then propagate this information up to level $l - 1$ , as it is used to compute gradients for that level, and so on until level 1. Algorithm 1 gives the full DBI algorithm. In this algorithm, $\operatorname{CHD}\left( i\right)$ denotes the set of children of player $i$ (i.e., nodes in level ${l}_{i} + 1$ for whom $i$ is the parent). DBI works in a backward message-passing manner, comparable to back-propagation: after each player has computed its total derivative, it passes (back-propagates) ${D}_{{x}_{i}}{\Phi }_{l}$ to its direct parent; this information is, in turn, used by the parent to compute its own total derivative, which is passed to its own parent, and so on.
58
+
59
+ Algorithm 1 Differential Backward Induction (DBI)
+
+ Input: An SHG instance $\mathcal{G}$
+ Parameters: Learning rate $\alpha$, maximum number of iterations $T$ for gradient update
+ Output: A strategy profile
+
+ Randomly initialize ${\mathbf{x}}^{0} = \left\langle {{\mathbf{x}}_{1}^{0},\ldots ,{\mathbf{x}}_{L}^{0}}\right\rangle$
+ for $t = 1,2,\ldots ,T$ do
+     for $l = L,L - 1,\ldots ,1$ do
+         for $i = 1,2,\ldots ,{n}_{l}$ do
+             if $l = L$ then
+                 Back-propagate ${D}_{{x}_{i}}{\Phi }_{i} = \mathbf{I}$ to $\mathrm{{PA}}\left( i\right)$
+                 Set ${x}_{i}^{t} \leftarrow {x}_{i}^{t - 1} + \alpha {\nabla }_{{x}_{i}}{u}_{i}$
+             else
+                 Compute ${\nabla }_{{x}_{i}}{u}_{i},{\nabla }_{{\mathbf{x}}_{L}}{u}_{i}$ at ${\mathbf{x}}^{t - 1}$
+                 Compute ${D}_{{x}_{i}}{\phi }_{j},\forall j \in \mathrm{{CHD}}\left( i\right)$ (Eqn. (5))
+                 Compute ${D}_{{x}_{i}}{\Phi }_{l}$ (Eqn. (4))
+                 Back-propagate ${D}_{{x}_{i}}{\Phi }_{l}$ to $\mathrm{{PA}}\left( i\right)$
+                 Compute ${D}_{{x}_{i}}{u}_{i} = {\nabla }_{{x}_{i}}{u}_{i} + {\nabla }_{{\mathbf{x}}_{L}}{u}_{i}{D}_{{x}_{i}}{\Phi }_{l}$
+                 Set ${x}_{i}^{t} \leftarrow {x}_{i}^{t - 1} + \alpha {D}_{{x}_{i}}{u}_{i}$
+ Return ${\mathbf{x}}^{T}$
100
+
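A hedged toy run of DBI-style updates on a two-level scalar quadratic game (not the full tree-structured algorithm; the game, step size, iteration count, and the hand-derived implicit-gradient term are all illustrative assumptions):

```python
# Toy DBI-style updates on a two-level quadratic SHG:
#   leaf:  u_leaf(y, x) = -(y - x)^2          (best response y = x)
#   root:  u_root(x, y) = -(x - 1)^2 - y^2
# The leaf ascends its partial gradient; the root ascends its *total*
# derivative, using the implicit-function-theorem term
#   dphi/dx = -(d2 u_leaf/dy2)^{-1} (d2 u_leaf/dy dx) = -(-2)^{-1}(2) = 1.
# Anticipating y = x, the root's stationary point is x* = y* = 0.5.

def dbi(x, y, alpha=0.1, T=500):
    for _ in range(T):
        grad_y = -2.0 * (y - x)                 # leaf partial gradient
        dphi_dx = 1.0                           # back-propagated IFT term
        grad_x = -2.0 * (x - 1.0) + (-2.0 * y) * dphi_dx
        y += alpha * grad_y                     # level-2 (leaf) update
        x += alpha * grad_x                     # level-1 (root) update
    return x, y

x_star, y_star = dbi(x=0.0, y=0.3)              # converges near (0.5, 0.5)
```

Without the IFT term the root would ignore how its action shifts the leaf's reply and would instead drift toward $x = 1$.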
101
+ Algorithm 1 takes the total derivatives as given. We now derive closed-form expressions for these. We start from the last level $L$ . Given the actions of players in level $L - 1$ , the total derivative of a player $i \in {\mathcal{N}}_{L}$ with respect to ${x}_{i}$ is
102
+
103
+ $$
104
+ {D}_{{x}_{i}}{u}_{i}\left( {{x}_{i},{x}_{\mathrm{{PA}}\left( i\right) },{\mathbf{x}}_{L, - i}}\right) = {\nabla }_{{x}_{i}}{u}_{i}. \tag{1}
105
+ $$
106
+
107
+ For a player $i$ in level $L - 1$ , the total derivative (at a local best response) is
108
+
109
+ $$
+ {D}_{{x}_{i}}{u}_{i}\left( {{x}_{i},{x}_{\mathrm{{PA}}\left( i\right) },{\phi }_{L}\left( \left\langle {{x}_{i},{\mathbf{x}}_{L - 1, - i}}\right\rangle \right) }\right) = {\nabla }_{{x}_{i}}{u}_{i} + \left( {{\nabla }_{{\mathbf{x}}_{L}}{u}_{i}}\right) \left( {{D}_{{x}_{i}}{\phi }_{L}}\right) , \tag{2}
+ $$
116
+
117
+ where ${\nabla }_{{\mathbf{x}}_{L}}{u}_{i}$ is a $1 \times {d}_{{n}_{L}}$ vector and ${D}_{{x}_{i}}{\phi }_{L}$ is a ${d}_{{n}_{L}} \times d$ matrix. The technical challenge here is to derive the term ${D}_{{x}_{i}}{\phi }_{L}$ for $i \in {\mathcal{N}}_{L - 1}$ . Recall that ${\phi }_{L}$ is the vectorized concatenation of the ${\phi }_{j}$ functions for $j \in {\mathcal{N}}_{L}$ . Since the local best response strategy of a player in level $L$ only depends on its parent in level $L - 1$ , the only terms in ${\phi }_{L}$ that depend on ${x}_{i}$ are the actions of $\operatorname{CHD}\left( i\right)$ in level $L$ . Consequently, it suffices to derive ${D}_{{x}_{i}}{\phi }_{j}$ for $j \in \mathrm{{CHD}}\left( i\right)$ . Note that for these players $j,{\nabla }_{{x}_{j}}{u}_{j} = 0$ (by local optimality of ${\phi }_{L}$ ). We will use this first-order condition to derive the expression for the total derivative using the implicit function theorem.
118
+
119
+ Theorem 1 (Implicit Function Theorem (IFT) [10, Theorem 1B.1]). Let $f\left( {{\mathbf{x}}_{1},{\mathbf{x}}_{2}}\right) : {\mathbb{R}}^{d} \times {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{d}$ be a continuously differentiable function in a neighborhood of $\left( {{\mathbf{x}}_{1}^{ * },{\mathbf{x}}_{2}^{ * }}\right)$ such that $f\left( {{\mathbf{x}}_{1}^{ * },{\mathbf{x}}_{2}^{ * }}\right) = 0$ . Also suppose ${\nabla }_{{\mathbf{x}}_{2}}f$ , the Jacobian of $f$ with respect to ${\mathbf{x}}_{2}$ , is non-singular at $\left( {{\mathbf{x}}_{1}^{ * },{\mathbf{x}}_{2}^{ * }}\right)$ . Then around a neighborhood of ${\mathbf{x}}_{1}^{ * }$ , we have a local diffeomorphism ${\mathbf{x}}_{2}^{ * }\left( {\mathbf{x}}_{1}\right) : {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{d}$ such that ${D}_{{\mathbf{x}}_{1}}{\mathbf{x}}_{2} = - {\left( {\nabla }_{{\mathbf{x}}_{2}}f\right) }^{-1}{\nabla }_{{\mathbf{x}}_{1}}f.$
120
+
121
+ To use Theorem 1, we set $f = {\nabla }_{{x}_{j}}{u}_{j}$ (which satisfies the conditions of Theorem 1 by Assumption 1), ${\mathbf{x}}_{1} = {x}_{i}$ and ${x}_{2} = {\mathbf{x}}_{j}$ (recall that $j \in \operatorname{CHD}\left( i\right)$ ). By IFT, there exists ${\phi }_{j}\left( {x}_{i}\right)$ such that ${D}_{{x}_{i}}{\phi }_{j} = - {\left( {\nabla }_{{x}_{j},{x}_{j}}^{2}{u}_{j}\right) }^{-1}{\nabla }_{{x}_{j},{x}_{i}}^{2}{u}_{j}$ . Define ${\nabla }_{j}^{2} \mathrel{\text{ := }} {\nabla }_{{x}_{j},{x}_{i}}^{2}{u}_{j}$ . Then
122
+
123
+ $$
124
+ \left( {{\nabla }_{{\mathbf{x}}_{L}}{u}_{i}}\right) \left( {{D}_{{x}_{i}}{\phi }_{L}}\right) = \mathop{\sum }\limits_{{j \in \operatorname{CHD}\left( i\right) }}\left( {{\nabla }_{{x}_{j}}{u}_{i}}\right) {D}_{{x}_{i}}{\phi }_{j}
125
+ $$
126
+
127
+ $$
128
+ = - \mathop{\sum }\limits_{{j \in \operatorname{CHD}\left( i\right) }}\left( {{\nabla }_{{x}_{j}}{u}_{i}}\right) {\left( {\nabla }_{{x}_{j},{x}_{j}}^{2}{u}_{j}\right) }^{-1}{\nabla }_{j}^{2}.
129
+ $$
130
+
131
+ Plugging this into Equation (2), we obtain
132
+
133
+ $$
134
+ {D}_{{x}_{i}}{u}_{i}\left( {{x}_{i},{x}_{\mathrm{{PA}}\left( i\right) },{\phi }_{L}\left( {\mathbf{x}}_{L - 1}\right) }\right)
135
+ $$
136
+
137
+ $$
138
+ = {\nabla }_{{x}_{i}}{u}_{i} - \mathop{\sum }\limits_{{j \in \operatorname{CHD}\left( i\right) }}\left( {{\nabla }_{{x}_{j}}{u}_{i}}\right) {\left( {\nabla }_{{x}_{j},{x}_{j}}^{2}{u}_{j}\right) }^{-1}{\nabla }_{j}^{2}. \tag{3}
139
+ $$
140
+
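To make the IFT step in Equations (3) and (5) concrete, the following sketch (ours, not from the paper; the quadratic utility and the matrices `A`, `B` are made-up assumptions chosen so the best response has a closed form) computes $D_{x_i}\phi_j = -(\nabla^2_{x_j,x_j}u_j)^{-1}\nabla^2_{x_j,x_i}u_j$ and checks it against the analytic best-response Jacobian:

```python
import numpy as np

# Hypothetical quadratic utility for a child j given its parent's action x_i:
#   u_j(x_j, x_i) = -0.5 * x_j^T A x_j + x_j^T B x_i,  A symmetric positive definite.
# The first-order condition grad_{x_j} u_j = -A x_j + B x_i = 0 yields the local
# best response phi_j(x_i) = A^{-1} B x_i, whose Jacobian is A^{-1} B.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[1.0, 0.0], [0.3, 1.0]])

hess_jj = -A   # nabla^2_{x_j, x_j} u_j
hess_ji = B    # nabla^2_{x_j, x_i} u_j

# Implicit function theorem: D_{x_i} phi_j = -(hess_jj)^{-1} hess_ji.
D_phi = -np.linalg.solve(hess_jj, hess_ji)

assert np.allclose(D_phi, np.linalg.solve(A, B))  # matches the analytic Jacobian
```

Solving the linear system rather than forming the explicit inverse is both cheaper and numerically safer, which matters when these terms are re-evaluated at every iteration.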
141
+ For a level $l < L - 1$ , the total derivative of player $i \in {\mathcal{N}}_{l}$ in a local best response is ${D}_{{x}_{i}}{u}_{i} = {\nabla }_{{x}_{i}}{u}_{i} + \left( {{\nabla }_{{x}_{L}}{u}_{i}}\right) \left( {{D}_{{x}_{i}}{\Phi }_{l}}\right)$ , where
142
+
143
+ $$
144
+ {D}_{{x}_{i}}{\Phi }_{l} = \left( {{D}_{{\mathbf{x}}_{l + 1}}{\Phi }_{l + 1}}\right) \left( {{D}_{{x}_{i}}{\mathbf{x}}_{l + 1}}\right)
145
+ $$
146
+
147
+ $$
148
+ = \mathop{\sum }\limits_{{j \in \operatorname{CHD}\left( i\right) }}\left( {{D}_{{x}_{j}}{\Phi }_{l + 1}}\right) \left( {{D}_{{x}_{i}}{\phi }_{j}}\right) . \tag{4}
149
+ $$
150
+
151
+ Applying IFT, we get
152
+
153
+ $$
154
+ {D}_{{x}_{i}}{\phi }_{j} = - {\left( {\nabla }_{{x}_{j},{x}_{j}}^{2}{u}_{j}\right) }^{-1}{\nabla }_{{x}_{j},{x}_{i}}^{2}{u}_{j}, \tag{5}
155
+ $$
156
+
157
+ for $j \in \operatorname{CHD}\left( i\right)$ . We can apply the above procedure recursively for ${D}_{{\mathbf{x}}_{l + 1}}{\Phi }_{l + 1}$ to derive the total derivative for players $i \in {\mathcal{N}}_{l}$ for $l < L - 1$ :
158
+
159
+ $$
160
+ {D}_{{x}_{i}}{u}_{i} = {\nabla }_{{x}_{i}}{u}_{i} + \left( {\mathop{\sum }\limits_{{j \in \operatorname{LEAF}\left( i\right) }}{\left( -1\right) }^{L - l}{\nabla }_{{x}_{j}}{u}_{i}}\right.
161
+ $$
162
+
163
+ $$
+ \left. {\mathop{\prod }\limits_{{\eta \in \operatorname{PATH}\left( {j \rightarrow i}\right) }}{\left( {\nabla }_{{x}_{\eta },{x}_{\eta }}^{2}{u}_{\eta }\right) }^{-1}{\nabla }_{{x}_{\eta },{\mathbf{x}}_{\operatorname{PA}\left( \eta \right) }}^{2}{u}_{\eta }}\right) , \tag{6}
+ $$
+
+ where $\operatorname{PATH}\left( {j \rightarrow i}\right)$ is an ordered list of nodes (players) lying on the unique path from $j$ to $i$ , excluding $i$ . Note that Equation (6) is a generalization of Equation (3) where the PATH only consists of the leaf player.
168
+
169
+ While the above derivation assumes the $\phi$ and $\Phi$ functions are local best responses, our algorithm evaluates these functional expressions for the total derivatives at the current joint action profile in each iteration. This significantly reduces the computational complexity and ensures that Algorithm 1 satisfies the first-order conditions upon convergence.
170
+
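As an end-to-end illustration of Algorithm 1 with the total derivative in Equation (3), here is a minimal sketch (ours; the quadratic payoffs and all constants are made-up assumptions) for a two-level chain with one leader and one follower:

```python
# Assumed payoffs (illustrative only):
#   follower: u2(x1, x2) = -(x2 - a*x1)^2   -> local best response x2 = a*x1
#   leader:   u1(x1, x2) = -(x1 - 1)^2 - (x2 - b)^2
# Equation (3) for the leader:
#   D_x1 u1 = du1/dx1 - (du1/dx2) * (d2u2/dx2^2)^{-1} * (d2u2/dx2 dx1)
a, b, alpha = 0.5, 1.0, 0.05
x1, x2 = 0.0, 0.0
for _ in range(500):
    g2 = -2.0 * (x2 - a * x1)                        # follower: partial gradient
    correction = (-2.0 * (x2 - b)) * (1.0 / -2.0) * (2.0 * a)
    g1 = -2.0 * (x1 - 1.0) - correction              # leader: total derivative
    x1, x2 = x1 + alpha * g1, x2 + alpha * g2        # simultaneous DBI-style update
# Analytic local SPE: x1* = (1 + a*b) / (1 + a^2) = 1.2, x2* = a * x1* = 0.6
print(round(x1, 3), round(x2, 3))  # → 1.2 0.6
```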
171
+ § 3.2 CONVERGENCE ANALYSIS
172
+
173
+ Our next task is to provide sufficient conditions for the DBI algorithm to converge to a stable point. A few notes are in order. First, as we remarked earlier, stable points of DBI are not guaranteed to be SPE, just as stable points of gradient ascent are not guaranteed to be globally optimal for general non-convex objective functions. Furthermore, the DBI algorithm entails what are effectively iterative better-response updates by players, and it is well known that best-response dynamics in games can in general lead to cycles [12].
174
+
175
+ To begin, we observe that the gradient updates in DBI can be interpreted as a discrete dynamical system, ${\mathbf{x}}^{t + 1} = F\left( {\mathbf{x}}^{t}\right)$ , with $F\left( {\mathbf{x}}^{t}\right) = \left( {\mathbf{I} + {\alpha G}}\right) \left( {\mathbf{x}}^{t}\right)$ where $G$ is an update gradient vector. This discrete system can be viewed as an approximation of a continuous limit dynamical system $\dot{\mathbf{x}} = G\left( \mathbf{x}\right)$ (i.e., letting $\alpha \rightarrow 0$ ). A standard solution concept for such dynamical systems is a locally asymptotic stable point (LASP).
176
+
177
+ Definition 1 ([13]). A continuous (or discrete) dynamical system $\dot{\mathbf{x}} = G\left( \mathbf{x}\right)$ (or ${\mathbf{x}}^{t + 1} = F\left( {\mathbf{x}}^{t}\right)$ ) has a locally asymptotic stable point (LASP) ${\mathbf{x}}^{ * }$ if $\exists \epsilon > 0$ such that $\mathop{\lim }\limits_{{t \rightarrow \infty }}{\mathbf{x}}^{t} = {\mathbf{x}}^{ * }$ for all ${\mathbf{x}}^{0} \in {\mathbb{B}}_{\epsilon }\left( {\mathbf{x}}^{ * }\right)$ .
178
+
179
+ There are well-known necessary and sufficient conditions for the existence of an LASP.
180
+
181
+ Proposition 1 (Characterization of LASP [40, Theorem 1.2.5, Theorem 3.2.1]). A point ${\mathbf{x}}^{ * }$ is an LASP for the continuous dynamical system $\dot{\mathbf{x}} = G\left( \mathbf{x}\right)$ if $G\left( {\mathbf{x}}^{ * }\right) = 0$ and all eigenvalues of Jacobian matrix ${\nabla }_{\mathbf{x}}G$ at ${\mathbf{x}}^{ * }$ have negative real parts. Furthermore, for any ${\mathbf{x}}^{ * }$ such that $G\left( {\mathbf{x}}^{ * }\right) = 0$ , if ${\nabla }_{\mathbf{x}}G$ has eigenvalues with positive real parts at ${\mathbf{x}}^{ * }$ , then ${\mathbf{x}}^{ * }$ cannot be an LASP.
182
+
183
+ Note that an LASP of DBI is an action profile of all players that satisfies the first-order conditions, i.e., it has the property that no player can improve their utility through a local gradient update. While the existence of an LASP depends on the game structure, we show that under Assumption 1, and as long as the sufficient conditions for LASP existence in Proposition 1 are satisfied, DBI converges to an LASP. We defer all the omitted proofs to Appendix C.
184
+
185
+ Proposition 2. Let ${\lambda }_{1},\ldots ,{\lambda }_{d}$ denote the eigenvalues of the updating Jacobian ${\nabla }_{\mathbf{x}}G$ at an LASP ${\mathbf{x}}^{ * }$ and define ${\lambda }^{ * } = \arg \mathop{\max }\limits_{{i \in \left\lbrack d\right\rbrack }}\operatorname{Re}\left( {\lambda }_{i}\right) /{\left| {\lambda }_{i}\right| }^{2}$ , where $\operatorname{Re}$ denotes the real part. Then with a learning rate $\alpha < - 2\operatorname{Re}\left( {\lambda }^{ * }\right) /{\left| {\lambda }^{ * }\right| }^{2}$ and an initial point ${\mathbf{x}}^{0} \in {\mathbb{B}}_{\epsilon }\left( {\mathbf{x}}^{ * }\right)$ for some $\epsilon > 0$ around ${\mathbf{x}}^{ * }$ , DBI converges to an LASP. Specifically, if the learning rate ${\alpha }^{ * }$ is such that the spectral radius $\rho \left( {\mathbf{I} + {\alpha }^{ * }{\nabla }_{\mathbf{x}}G}\right) = 1 - \kappa < 1$ , then the dynamics converge to ${\mathbf{x}}^{ * }$ at a rate of $O\left( {\left( 1 - \kappa /2\right) }^{t}\right)$ .
186
+
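The learning-rate condition in Proposition 2 is easy to probe numerically. The sketch below (ours; the Jacobian `J` is a made-up example with eigenvalues $-2 \pm i$) computes the bound $-2\operatorname{Re}(\lambda^*)/|\lambda^*|^2$ and verifies that the spectral radius of $\mathbf{I} + \alpha\nabla_{\mathbf{x}}G$ crosses 1 at that threshold:

```python
import numpy as np

J = np.array([[-2.0, -1.0], [1.0, -2.0]])  # example Jacobian of G at x*; eigenvalues -2 +/- i
eigs = np.linalg.eigvals(J)
lam_star = eigs[np.argmax(eigs.real / np.abs(eigs) ** 2)]
alpha_bound = -2.0 * lam_star.real / abs(lam_star) ** 2  # here: 2*2/5 = 0.8

def spectral_radius(alpha):
    return max(abs(np.linalg.eigvals(np.eye(2) + alpha * J)))

assert spectral_radius(0.5 * alpha_bound) < 1.0   # contraction below the bound
assert spectral_radius(1.05 * alpha_bound) > 1.0  # expansion above the bound
print(round(alpha_bound, 6))  # → 0.8
```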
187
+ Proposition 2 states that there exists a region such that, if the initial point is in that region, then DBI will converge to an LASP. We next show that if we assume first-order Lipschitzness for the update rule, then we can also characterize the region of initial points which converge to an LASP.
188
+
189
+ Proposition 3. Suppose $G$ is $L$ -Lipschitz. Then for all ${\mathbf{x}}^{0} \in {\mathbb{B}}_{\kappa /{2L}}\left( {\mathbf{x}}^{ * }\right)$ and $\epsilon > 0$ , after $T$ rounds of gradient updates DBI will output a point ${\mathbf{x}}^{T} \in {\mathbb{B}}_{\epsilon }\left( {\mathbf{x}}^{ * }\right)$ as long as $T \geq \left\lceil {\frac{2}{\kappa }\log \left( {\begin{Vmatrix}{{\mathbf{x}}^{0} - {\mathbf{x}}^{ * }}\end{Vmatrix}/\epsilon }\right) }\right\rceil$ , where $\kappa$ is as defined in Proposition 2.
190
+
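The iteration bound in Proposition 3 can be evaluated directly; a quick sketch with illustrative values of $\kappa$, the initial distance, and $\epsilon$ (all made up for the example):

```python
import math

def dbi_iteration_bound(kappa, dist0, eps):
    """T >= ceil((2 / kappa) * log(||x0 - x*|| / eps)), as in Proposition 3."""
    return math.ceil((2.0 / kappa) * math.log(dist0 / eps))

# E.g., kappa = 0.1, initial distance 1.0, target accuracy 1e-3:
print(dbi_iteration_bound(0.1, 1.0, 1e-3))  # → 139
```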
191
+ We further show that through random initialization, the probability of reaching a saddle point is 0, which means that with probability 1, DBI converges to an LASP in which players are playing local best responses.
192
+
193
+ Proposition 4. Suppose $G$ is $L$ -Lipschitz. Let $\alpha < 1/L$ and define the saddle points of the dynamics $G$ as ${\mathcal{X}}_{\text{sad}}^{ * } = \left\{ {{\mathbf{x}}^{ * } \in \mathcal{X} \mid {\mathbf{x}}^{ * } = \left( {\mathbf{I} + {\alpha G}}\right) \left( {\mathbf{x}}^{ * }\right) ,\rho \left( {\left( {\mathbf{I} + \alpha {\nabla }_{\mathbf{x}}G}\right) \left( {\mathbf{x}}^{ * }\right) }\right) > 1}\right\}$ . Also let ${\mathcal{X}}_{\text{sad}}^{0} = \left\{ {{\mathbf{x}}^{0} \in \mathcal{X} \mid \mathop{\lim }\limits_{{t \rightarrow \infty }}{\left( \mathbf{I} + \alpha G\right) }^{t}\left( {\mathbf{x}}^{0}\right) \in {\mathcal{X}}_{\text{sad}}^{ * }}\right\}$ denote the set of initial points that converge to a saddle point. Then $\mu \left( {\mathcal{X}}_{\text{sad}}^{0}\right) = 0$ , where $\mu$ is the Lebesgue measure.
194
+
195
+ While our convergence analysis does not guarantee convergence to an approximate SPE, our experiments show that DBI is in fact quite effective in doing so in practice.
196
+
197
+ § 4 EXPERIMENTS
198
+
199
+ In this section, we empirically investigate the following aspects of DBI: (1) its convergence rate, (2) its solution quality, and (3) its behavior in games where we can verify global stability. All our code is written in Python. We ran our experiments on an Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz to obtain the results in Sections 4.1 and C.3, and on an Intel(R) Core(TM) i9-9820X CPU @ 3.30GHz for the rest of the experiments.
200
+
201
+ We evaluate the performance in terms of quality of equilibrium approximation as a function of the number of iterations of a given algorithm, or its running time. Ideally, given a collection of actions $\mathbf{x}$ played by players along the (approximate) equilibrium path computed, we wish to find the largest utility gain any player can have by deviating from this path, which we denote by $\epsilon \left( \mathbf{x}\right)$ . However, this computation is impossible in our setting, as it would need to consider all possible histories as well, whereas our approach and alternatives only return $\mathbf{x}$ along the path of play (moreover, considering all possible histories is itself intractable).
202
+
203
+ ${}^{4}$ Formally, this means that $\exists L > 0$ such that $\forall \mathbf{x},{\mathbf{x}}^{\prime } \in$ $\mathcal{X},{\begin{Vmatrix}G\left( \mathbf{x}\right) - G\left( {\mathbf{x}}^{\prime }\right) \end{Vmatrix}}_{2} \leq L{\begin{Vmatrix}\mathbf{x} - {\mathbf{x}}^{\prime }\end{Vmatrix}}_{2}.$
204
+
205
+
206
+
207
+ Figure 2: Convergence behaviors on (a) a (1,1,1) game with 1-d actions, (b) a (1,1,2) game with 1-d actions, and (c) a (1,1,1) game with 3-d actions.
208
+
209
+
210
+
211
+ Figure 3: Solution qualities on (a) a (1,1,1) game with 1-d actions, (b) a (1,1,2) game with 1-d actions, and (c) a (1,1,1) game with 3-d actions.
212
+
213
+ Therefore, we consider two heuristic alternatives. The first, which we call local SPE regret, runs DBI for every player $i$ starting with $\mathbf{x}$ , and returns the greatest benefit that any player can thereby obtain; we use this in Section 4.1. In the rest of this section, we use the second alternative, which we call global SPE regret. It considers for each player $i$ in level $l$ a discrete grid of alternative actions, and uses best response dynamics to compute an approximate SPE of the level- $\left( {l + 1}\right)$ subgame to evaluate player $i$ ’s utility for each such deviation. This approach then returns the highest regret among all players computed in this way.
214
+
215
+ Our evaluation considers several SHG scenarios. We begin by comparing DBI to a number of baselines on simple, stylized SHG models, then move on to three complex hierarchical game models motivated by concrete applications.
216
+
217
+ § 4.1 POLYNOMIAL GAMES
218
+
219
+ We begin by considering instances of SHGs to which we can readily apply several state-of-the-art baselines, allowing us a direct comparison to previous work. Specifically, we consider 3 SHG instances with different game properties: (a) a three-level chain structure (or the (1,1,1) game) with 1-d actions, (b) a " $\lambda$ "-shaped tree (or the (1,1,2) game) with 1-d action spaces, and (c) a (1,1,1) game with 3-d actions. In all the games, the payoffs are polynomial functions of $\mathbf{x}$ with randomly generated coefficients (we can think of these as proxies for a Taylor series approximation of actual utility functions). Details are in Appendix C.
220
+
221
+ We compare DBI with the following five baselines: 1) simultaneous partial gradient ascent (SIM) [7, 28], 2) symplectic gradient dynamics with or 3) without alignment (SYM_ALN and SYM, respectively) [4], 4) consensus optimization (CO) [30], and 5) Hamiltonian gradient descent (HAM) [1, 26]. SIM, SYM_ALN, SYM, CO and HAM are all designed to compute a local Nash equilibrium [4, 7].
222
+
223
+ We start by comparing the convergence behavior of DBI to the baselines. We run all algorithms with the same initial point and learning rate. The results are in Figure 2, where we plot the ${L}_{2}$ norm of the total gradient for each of the algorithms (Y axis) against the number of iterations (X axis). In all cases, DBI converges to a critical point that meets the first-order conditions, while the baseline algorithms fail to do so in most cases. In Figures 2(a) and (c), all baselines have converged to a point with a finite norm of the total gradients. In (b), however, only CO and HAM converge to a stationary point, while SIM, SYM, and SYM_ALN all diverge. For scenario (b), DBI appears to be on an inward spiral to a critical point. We further check the second-order condition (see Appendix C) and verify that DBI has indeed converged to local maxima of individual payoffs in all three games.
224
+
225
+ Next, we investigate the solution quality of DBI in terms of local regret compared to the baselines. As shown in Figure 3, across all three game instances, DBI outputs a profile of actions (along the path of play) with near-zero local regret while the other algorithms fail to do so.
226
+
227
+ § 4.2 DECENTRALIZED EPIDEMIC POLICY GAME
228
+
229
+ Next, we consider DBI for solving a class of games inspired by hierarchical decentralized policy-making in the context of epidemics such as COVID-19 [18]. The hierarchy has levels corresponding to the (single) federal government, multiple states, and county administrations under each state. Each player’s action (policy) is a scalar in $\left\lbrack {0,1}\right\rbrack$ that represents, for example, the extent of social distancing recommended or mandated by a player (e.g., a state) for its administrative subordinates (e.g., counties). Crucially, these subordinates have considerable autonomy about setting their own policies, but incur a non-compliance cost for significantly deviating from recommendations made by the level immediately above (of course, non-compliance costs are not relevant for the root player). The full cost function of each player additionally includes infection prevalence within the geographic territory of interest to the associated entity (e.g., within the state), as well as the socio-economic cost of the policy itself (complete details are provided in Appendix C).
230
+
231
+ Since the actions are in a one-dimensional compact space and the depth of the hierarchy is at most 3, our baseline is the best response dynamics (BRD) algorithm proposed by Jia et al. [18] (detailed in Appendix C), and we use global regret as a measure of efficacy in comparing the proposed DBI algorithm with BRD. The results of this comparison are shown in Figure 4 for two-level (government and states) and three-level (government, states, counties) variants of this game. We consider two-level games with 20 and 50 leaves (states), and three-level games with 2 players in level 2 (states) and 4 and 10 leaves (counties).
232
+
233
+ As we can see from the top plots in Figure 4, BRD can have poor convergence behavior in terms of global regret, whereas DBI appears to converge quite reliably to a path of play with a considerably lower global regret. Notably, the improvement in solution quality becomes more substantial as we increase the game complexity either in terms of scale (number of leaves) or in terms of the level of hierarchy (moving from 2- to 3-level games).
234
+
235
+
236
+
237
+ Figure 4: Global regret (top) and running time (bottom) for the decentralized epidemic policy game. The left (resp., right) column corresponds to results for games with 2 (resp., 3) levels.
238
+
239
+ Running time (in seconds) demonstrates the relative efficacy of DBI even further (bottom plots in Figure 4). In particular, observe the significant increase in the running time of BRD as we increase the number of leaves. In contrast, DBI is far more scalable: indeed, even more than doubling the number of players appears to have little impact on its running time. Moreover, BRD is several orders of magnitude slower than DBI for the more complex games.
240
+
241
+ § 4.3 HIERARCHICAL PUBLIC GOODS GAMES
242
+
243
+ Next, we consider hierarchical public goods games. A conventional networked public goods game endows each player $i$ with a utility function ${u}_{i}\left( {{x}_{i},{x}_{-i}}\right) = {a}_{i} + {b}_{i}{x}_{i} +$ $\mathop{\sum }\limits_{j}{g}_{ji}{x}_{i}{x}_{j} - {c}_{i}\left( {x}_{i}\right)$ , where ${g}_{ji}$ is the impact of player $j$ on player $i$ (often represented as a weighted edge on a network), and ${x}_{i} \in \left\lbrack {0,1}\right\rbrack$ the level of investment in the public good by player $i$ [6]. We construct a 3-level hierarchical variant of such games by starting with the karate club network [43] which represents friendships among 34 individuals. Level-2 nodes are obtained by partitioning the network into two (sub)clubs, with leaves (level-3 nodes) representing all the individuals. The utility of level-2 nodes is the sum of utilities of individual members of associated clubs, with the utility of the root being the sum of the utilities of all individuals. Furthermore, we introduce non-compliance costs with investment policies in the level immediately above, as we did in the decentralized epidemic policy game (Section 4.2). Further details are provided in Appendix C.6.
244
+
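For concreteness, here is how one leaf player's payoff in the hierarchical variant might be evaluated (our own sketch; the quadratic investment cost and the quadratic non-compliance penalty are assumptions on our part, with the paper's exact model in Appendix C.6):

```python
def leaf_utility(i, x, a, b, g, c, kappa, recommendation):
    """u_i = a_i + b_i x_i + sum_{j != i} g_ji x_i x_j - c_i x_i^2,
    minus an assumed quadratic penalty for deviating from the parent's recommendation."""
    spillover = sum(g[j][i] * x[i] * x[j] for j in range(len(x)) if j != i)
    base = a[i] + b[i] * x[i] + spillover - c[i] * x[i] ** 2
    return base - kappa * (x[i] - recommendation) ** 2

# Two players with symmetric 0.5 spillovers; the parent recommends an investment of 0.5:
u0 = leaf_utility(0, x=[0.4, 0.6], a=[0.0, 0.0], b=[1.0, 1.0],
                  g=[[0.0, 0.5], [0.5, 0.0]], c=[1.0, 1.0],
                  kappa=0.1, recommendation=0.5)
print(round(u0, 4))  # → 0.359
```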
245
+ Figure 5(a) presents the global regret as a function of running time for DBI (black line) and BRD with different levels of discretization (dots). We observe that DBI yields considerably lower regret in these games than BRD even as we discretize the latter finely. Moreover, DBI reaches smaller regret orders of magnitude faster than BRD.
246
+
247
+
248
+
249
+ Figure 5: (a) Performance $\left( \epsilon \right)$ in the Public Goods Game (Section 4.3); the scatter points show the results of BRD with discretization factors 0.5, 0.2, 0.1, 0.05, and best-response rounds 2, 3. (b), (c) Results on (1,3,6) hierarchical security games (Section 4.4); the legend is shared.
250
+
251
+ § 4.4 HIERARCHICAL SECURITY GAMES
252
+
253
+ In the final set of experiments, we evaluate DBI on a hierarchical extension of interdependent security games [3]. In these games, $n$ defenders can each invest ${x}_{i} \geq 0$ in security. If defender $i$ is attacked, the probability that the attack succeeds is $1/\left( {1 + {x}_{i}}\right)$ . Furthermore, defenders are interdependent, so that a successful attack on defender $i$ cascades to defender $j$ with probability ${q}_{ji}$ . In the variant we adopt, the attacker strategy is a uniform distribution over defenders (e.g., the "attacker" is just nature, with attacks representing stochastic exogenous failures). The utility of the defender is the probability of surviving the attack less the cost of security investment.
254
+
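The per-defender payoff just described can be sketched as follows (ours; we assume single-step cascades, unit loss, and a linear investment cost, with the paper's full model in Appendix C.7):

```python
def defender_utility(i, x, q, cost=0.1):
    """Probability of surviving a uniformly random attack, minus the investment cost.
    A successful attack on j (probability 1/(1 + x_j)) cascades to i with
    probability q[j][i]; only single-step cascades are modeled in this sketch."""
    n = len(x)
    p_compromised = 0.0
    for j in range(n):
        success = 1.0 / (1.0 + x[j])
        p_compromised += (success if j == i else success * q[j][i]) / n
    return (1.0 - p_compromised) - cost * x[i]

u0 = defender_utility(0, x=[1.0, 1.0], q=[[0.0, 0.5], [0.5, 0.0]])
print(round(u0, 4))  # → 0.525
```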
255
+ We extend this simultaneous-move game to a hierarchical structure consisting of one root player (e.g., government), three level-2 players (e.g., sectors), and six leaf players (e.g., organizations). The policy-makers in the first two levels of the game recommend an investment policy to the level below, and aim to maximize total welfare (sum of utilities) among the leaf players in their subtrees. Just as in both hierarchical epidemic and public goods games, whenever a player in level $l$ does not act according to the recommendation of their parent in level $l - 1$ , they incur a non-compliance cost. Complete model details are deferred to Appendix C.7. We conduct experiments with two weights $\kappa$ that determine the relative importance of non-compliance costs in the decisions of non-root players in the game: $\kappa \in \{ {0.1},{0.5}\}$ .
256
+
257
+ Figures 5(b) and 5(c) present the results of comparing DBI with BRD on this class of games, where BRD is again evaluated with different levels of action space discretization (note, moreover, that in this setting discretizing actions is not enough, since these are unbounded, and we also had to impose an upper bound). We can observe that for either value of $\kappa$ , DBI yields high-quality SPE approximation (in terms of global SPE regret) far more quickly than BRD. In particular, when we use relatively coarse discretization, BRD is approximately an order of magnitude slower, and yields significantly higher regret. In contrast, if we use finer discretization for BRD, global regret for BRD and DBI becomes comparable, but now BRD is several orders of magnitude slower. For example, DBI converges within several seconds, whereas if we discretize ${x}_{i}$ into multiples of 0.02, BRD takes nearly 2 hours, while discretization at the level of 0.01 results in BRD taking nearly 7 hours.
258
+
259
+ § 5 CONCLUSION
260
+
261
+ We introduced a novel class of hierarchical games, proposed a new game-theoretic solution concept and designed an algorithm to compute it. We assume a specific form of utility dependency between players and our solution concept only guarantees local stability. Improvement on each of these two fronts is an interesting direction for future work.
262
+
263
+ Given the generality of our framework, our approach can be used for many applications characterized by a hierarchy of strategic agents e.g., pandemic policy making. However, our modeling requires the full knowledge of the true utility functions of all players and our analysis assumes full rationality for all the players. Although the model we have addressed here is already challenging, these assumptions are unlikely to hold in many real-world applications. Therefore, further analysis is necessary to fully gauge the robustness of our approach before deployment.
UAI/UAI 2022/UAI 2022 Conference/BcLwrLLi5xq/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,444 @@
1
+ # Monotonicity Regularization: Improved Penalties and Novel Applications to Disentangled Representation Learning and Robust Classification
2
+
3
+ ## Abstract
4
+
5
+ We study settings where gradient penalties are used alongside risk minimization with the goal of obtaining predictors satisfying different notions of monotonicity. Specifically, we present two sets of contributions. In the first part of the paper, we show that different choices of penalties define the regions of the input space where the property is observed. As such, previous methods result in models that are monotonic only in a small volume of the input space. We thus propose an approach that uses mixtures of training instances and random points to populate the space and enforce the penalty in a much larger region. As a second set of contributions, we introduce regularization strategies that enforce other notions of monotonicity in different settings. In this case, we consider applications, such as image classification and generative modeling, where monotonicity is not a hard constraint but can help improve some aspects of the model. Namely, we show that inducing monotonicity can be beneficial in applications such as: (1) allowing for controllable data generation, (2) defining strategies to detect anomalous data, and (3) generating explanations for predictions. Our proposed approaches introduce no significant computational overhead, leading to efficient procedures that provide extra benefits over baseline models.
6
+
7
+ ## 1 INTRODUCTION
8
+
9
+ Highly expressive model classes such as neural networks have achieved impressive prediction performance across a broad range of supervised learning tasks and domains [Krizhevsky et al., 2012, Graves and Jaitly, 2014, Bahdanau et al., 2014]. However, finding predictors attaining low risk on unseen data is often not enough to enable the use of such models in practice. In fact, practical applications usually have requirements beyond prediction accuracy. Hence, devising approaches that search for risk minimizers satisfying practical needs has led to several research threads seeking to enable the use of neural networks in real-life scenarios. Examples of such requirements include: (1) Robustness, where low risk is expected even if the model is evaluated under distribution shifts, (2) Fairness, where the performance of the model is expected to not significantly change across data sub-populations, and (3) Explainability/Interpretability, where models are expected to indicate how the features of the data imply their predictions.
10
+
11
+ In addition to the requirements mentioned above, a property commonly expected of trained models in certain applications is monotonicity with respect to some subset of the input dimensions. I.e., an increase (or decrease) along some particular dimensions implies that the function value will not decrease (or will not increase), provided that all other dimensions are kept fixed. As a result, the behavior of monotonic models will be more aligned with the properties that the data under consideration is believed to satisfy. For example, in the case of models used to accept/reject job applications, we expect acceptance scores to be monotonically non-decreasing with respect to features such as the past years of experience of a candidate. Thus, given two applicants with exactly the same features except their years of experience, the more experienced candidate should be assigned an equal or higher chance of getting accepted. For applications where monotonicity is expected, having a predictor that fails to satisfy this requirement would damage the user's confidence. As such, different strategies have been devised in order to enable training monotonic predictors. These approaches can be divided into two main categories:
12
+
13
+ Monotonicity by construction: In this case, the focus lies on defining a model class that guarantees monotonicity for all of its elements [Bakst et al., 2021, Wehenkel and Louppe, 2019, Nguyen and Martínez, 2019, You et al., 2017, Garcia and Gupta, 2009, Archer and Wang, 1993]. However, this approach cannot be used with general architectures. Additionally, the model class can be constrained to the extent that it might affect the prediction performance.
14
+
15
+ Monotonicity via regularization: This approach is based on searching for monotonic candidates within a general class of models [Liu et al., 2020, Sivaraman et al., 2020, Gupta et al., 2019]. This group of methods is more generally applicable and can be used, for instance, with any neural network architecture. However, such methods are not guaranteed to yield monotonic predictors unless extra verification/certification steps are performed, which can be computationally costly.
16
+
17
+ In addition to being a requirement as in the examples discussed above, monotonicity has been also observed to be a useful feature in certain cases. For example, it can define an effective inductive bias and improve generalization in cases where prior knowledge indicates the data generating process satisfies such property [Dugas et al., 2001]. In such cases, however, it is not necessary to satisfy the property everywhere (i.e., in the bulk of the input space), since it is enforced simply as a desirable feature of trained models rather than a design specification.
18
+
19
+ This work comprises two complementary sets of contributions, and in both cases we tackle the problem of performing empirical risk minimization over rich classes of models such as neural networks, while simultaneously searching for monotonic predictors within the set of risk minimizing solutions. In further detail, our contributions are as follows:
20
+
21
+ 1. In Section 3, we identify a limitation in previous methods and show they only enforce monotonicity either near the training data or near the boundaries of the input space. Then, we propose an efficient algorithm that tackles this problem. In particular, we modify Mixup [Zhang et al., 2018] and use it to mix data with random noise. We show that doing so helps populate the interior of the input space. With extensive evaluation on synthetic data and benchmarks, we show that the proposed strategy enforces monotonicity in a larger volume relative to previous methods in the literature.
22
+
23
+ 2. In Section 4, we define different notions of monotonicity along with regularization penalties aimed at enforcing them. We show that doing so introduces useful properties in models used for applications such as generative modeling or object recognition, and does not compromise the original performance obtained without the penalties. Contrary to the discussion on the first part of the paper in Section 3, the monotonicity property is not required to be satisfied everywhere and, as such, constraints that focus only on the actual data points are proposed.
24
+
25
+ ## 2 BACKGROUND AND RELATED WORK
26
+
27
+ We start by defining the notion of partial monotonicity used throughout the paper. Consider the standard supervised learning setting where data instances are observed in pairs $x, y \sim \mathcal{X} \times \mathcal{Y}$, where $\mathcal{X} \subset {\mathbb{R}}^{d}$ and $\mathcal{Y} \subset \mathbb{R}$ correspond to the input and output spaces, respectively. Further, consider differentiable functions $f : \mathcal{X} \mapsto \mathcal{Y}$, and let $M$ indicate some subset of the input dimensions, i.e., $M \subset \{ 1,\ldots , d\}$, such that $x = \operatorname{concat}\left( {{x}_{M},{x}_{\bar{M}}}\right)$, where $\bar{M} = \{ 1,\ldots , d\} \smallsetminus M$.
28
+
29
+ Definition 1 Partially monotonic functions relative to $M$ : We say $f$ is monotonically non-decreasing relative to $M$ , denoted ${f}_{M}$ , if $\mathop{\min }\limits_{{i \in M}}\frac{\partial f\left( x\right) }{\partial {x}_{i}} \geq 0,\forall x \in \mathcal{X}$ .
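Definition 1 can be checked directly for closed-form functions. The sketch below (illustrative, not from the paper) uses the analytic partial derivatives of $f(x) = x_0 + x_1^2$ on $\mathbb{R}^2$: $f$ is monotonically non-decreasing relative to $M = \{0\}$ ($\partial f/\partial x_0 = 1$ everywhere) but not relative to $M = \{1\}$ ($\partial f/\partial x_1 = 2x_1$ is negative for $x_1 < 0$).

```python
# Sanity check of Definition 1 for a closed-form function, using its
# analytic partial derivatives evaluated at a finite set of points.
import numpy as np

def min_partial(grad_fn, X, M):
    """Smallest partial derivative over the dimensions in M, across points X."""
    G = np.stack([grad_fn(X)[:, i] for i in M], axis=1)
    return float(G.min())

# f(x) = x_0 + x_1**2, so grad f = (1, 2*x_1)
grad_f = lambda X: np.stack([np.ones(len(X)), 2.0 * X[:, 1]], axis=1)
X = np.array([[-1.0, -1.0], [0.5, 2.0]])
print(min_partial(grad_f, X, M=[0]))   # 1.0 -> non-decreasing in dim 0
print(min_partial(grad_f, X, M=[1]))   # -2.0 -> monotonicity violated in dim 1
```

For neural networks the partial derivatives are not available in closed form, which motivates the sampled penalties discussed next.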
30
+
31
+ This definition covers functions that do not decrease in value under increasing changes along a subset of the input dimensions, provided that all other dimensions are kept unchanged. Several approaches have been introduced for defining model classes with this property. The simplest restricts the weights of the network to be non-negative [Archer and Wang, 1993]; however, doing so hurts prediction performance. Another approach uses lattice models [Garcia and Gupta, 2009, You et al., 2017], where models are given by interpolations in a grid defined by the training data. Such a class of models can be made monotonic via the choice of interpolation strategy, and recently introduced variations [Bakst et al., 2021] scale efficiently with the dimension of the input space, but downstream applications might still require different classes of models to satisfy this type of property. For neural networks, approaches such as [Nguyen and Martínez, 2019] reparameterize fully connected layers such that the gradients with respect to parameters can only be non-negative. Wehenkel and Louppe [2019], on the other hand, consider the class of predictors $H : \mathcal{X} \mapsto \mathcal{Y}$ of the form $H\left( x\right) = {\int }_{0}^{x}h\left( t\right) {dt} + H\left( 0\right)$, where $h\left( t\right)$ is a strictly positive mapping parameterized by a neural network. While such approaches guarantee monotonicity by design, they can be too restrictive or yield overly complicated learning procedures; for example, the approach in [Wehenkel and Louppe, 2019] requires backpropagating through the integral. An alternative approach searches over general classes of models while assigning higher importance to predictors observed to be monotonic. Similarly to adversarial training [Goodfellow et al., 2014], Sivaraman et al. [2020] proposed an approach that finds counterexamples, i.e., pairs of points where the monotonicity constraint is violated, and includes them in the training data to enforce monotonicity in subsequent iterations of the model. However, this approach only supports fully connected ReLU networks, and the procedure for finding counterexamples is costly. Alternatively, Liu et al. [2020] and Gupta et al. [2019] introduced point-wise regularization penalties for enforcing monotonicity, where the penalties are estimated via sampling: while Liu et al. [2020] use uniform random draws, Gupta et al. [2019] apply the regularization penalty over the training instances. Both approaches have shortcomings that we seek to address.
32
+
33
+ ## 3 AN EFFICIENT FIX FOR MONOTONICITY PENALTIES
34
+
35
+ Given the standard supervised learning setting where $\ell : {\mathcal{Y}}^{2} \mapsto {\mathbb{R}}^{ + }$ is a loss function indicating the goodness of the predictions relative to ground-truth targets, the goal is to find a predictor $h \in \mathcal{H}$ such that its expected loss - or the so-called risk - over the input space is minimized. Such an approach yields the empirical risk minimization framework once a finite sample is used to estimate the risk. Given the extra monotonicity requirement, however, we consider an augmented framework where such a property is further enforced. We seek the optimal monotonic predictors relative to $M$, denoted ${h}_{M}^{ * }$:
36
+
37
+ $$
38
+ {h}_{M}^{ * } \in \underset{h \in \mathcal{H}}{\arg \min }{\mathbb{E}}_{x, y \sim \mathcal{X} \times \mathcal{Y}}\left\lbrack {\ell \left( {h\left( x\right) , y}\right) }\right\rbrack + {\gamma \Omega }\left( {h, M}\right) , \tag{1}
39
+ $$
40
+
41
+ where $\gamma$ is a hyperparameter weighing the importance of the penalty $\Omega \left( {h, M}\right)$ which, in turn, is a measure of how monotonic the predictor $h$ is relative to the dimensions indicated by $M$. $\Omega \left( {h, M}\right)$ can be defined by the following gradient penalty [Gupta et al., 2019, Liu et al., 2020]:
42
+
43
+ $$
44
+ \Omega \left( {h, M}\right) = {\mathbb{E}}_{x \sim \mathcal{D}}\left\lbrack {\mathop{\sum }\limits_{{i \in M}}\max {\left( 0, - \frac{\partial h\left( x\right) }{\partial {x}_{i}}\right) }^{2}}\right\rbrack , \tag{2}
45
+ $$
46
+
47
+ where $\frac{\partial h\left( x\right) }{\partial {x}_{i}}$ indicates the gradients of $h$ relative to the input dimensions $i \in M$, which are constrained to be non-negative, rendering $h$ monotonically non-decreasing relative to $M$. At this point, the only missing ingredient needed to define algorithms for estimating ${h}_{M}^{ * }$ is the choice of the distribution $\mathcal{D}$ over which the expectation in Eq. 2 is computed, which we discuss in the following sections.
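The penalty in Eq. 2 can be sketched as follows. This is a minimal, dependency-free illustration: it approximates $\partial h/\partial x_i$ with central finite differences, whereas in practice $h$ would be a neural network and the partial derivatives would come from the framework's automatic differentiation; the function names are ours, not the paper's.

```python
# Sketch of the gradient penalty in Eq. 2, estimated on a finite sample X
# drawn from the chosen distribution D.
import numpy as np

def monotonicity_penalty(h, X, M, eps=1e-5):
    """Mean squared hinge on negative partial derivatives of h over rows of X.

    h : callable mapping a batch of shape (n, d) to outputs of shape (n,)
    X : (n, d) sample from the distribution D of Eq. 2
    M : iterable of input dimensions that should be non-decreasing
    """
    X = np.asarray(X, dtype=float)
    penalty = 0.0
    for i in M:
        Xp, Xm = X.copy(), X.copy()
        Xp[:, i] += eps
        Xm[:, i] -= eps
        grad_i = (h(Xp) - h(Xm)) / (2 * eps)          # ~ dh/dx_i at each point
        penalty += np.mean(np.maximum(0.0, -grad_i) ** 2)  # hinge on violations
    return penalty
```

For a monotone predictor such as $h(x) = x_0 + x_1$ the penalty is zero, while $h(x) = -x_0$ with $M = \{0\}$ incurs a penalty of about 1.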
48
+
49
+ ### 3.1 CHOOSING DISTRIBUTIONS OVER WHICH TO COMPUTE THE PENALTY
50
+
51
+ In the following, we present and discuss two past choices for $\mathcal{D}$ :
52
+
53
+ 1) Define $\mathcal{D}$ as the empirical distribution of the training sample: In [Gupta et al., 2019], given a training dataset of size $N$ , in addition to using the observed data to estimate the risk, the same data is used to compute the monotonicity penalty so that:
54
+
55
+ $$
56
+ {\Omega }_{\text{train }}\left( {h, M}\right) = \frac{1}{N}\mathop{\sum }\limits_{{k = 1}}^{N}\mathop{\sum }\limits_{{i \in M}}\max {\left( 0, - \frac{\partial h\left( {x}^{k}\right) }{\partial {x}_{i}^{k}}\right) }^{2},
57
+ $$
58
+
59
+ where ${x}^{k}$ indicates the $k$-th instance within the training sample. While this choice seems natural and is easily implemented, it only enforces monotonicity in the region where the training samples lie, which can be problematic. For example, in the case of covariate shift, the test data might lie in parts of the space different from those covered by the training data, so monotonicity cannot be guaranteed there. We thus argue that one needs to enforce the monotonicity property in a region larger than that defined by the training data. In Appendix B, we conduct an evaluation under domain shift and show that the issue becomes increasingly relevant as the dimension $d$ of the input space $\mathcal{X}$ grows.
60
+
61
+ 2) Define $\mathcal{D} = \operatorname{Uniform}\left( \mathcal{X}\right)$ : In [Liu et al.,2020], a simple strategy is defined so that $\Omega$ is computed over the random points drawn uniformly across the entire input space $\mathcal{X}$ ; i.e.:
62
+
63
+ $$
64
+ {\Omega }_{\text{random }}\left( {h, M}\right) = {\mathbb{E}}_{x \sim \mathrm{U}\left( \mathcal{X}\right) }\left\lbrack {\mathop{\sum }\limits_{{i \in M}}\max {\left( 0, - \frac{\partial h\left( x\right) }{\partial {x}_{i}}\right) }^{2}}\right\rbrack .
65
+ $$
66
+
67
+ Despite its simplicity and ease of use, this approach has some flaws. In high-dimensional spaces, random draws from any distribution of bounded variance will likely lie near the boundaries of the space, hence far from the regions where data actually lie. Moreover, it is commonly observed that naturally occurring high-dimensional data is structured in lower-dimensional manifolds (cf. [Fefferman et al., 2016] for an in-depth discussion of the manifold hypothesis). It is thus likely that random draws from the uniform distribution will lie nowhere near regions of the space where training/testing data will be observed. We further illustrate the issue with examples in Appendix A, which can be summarized as follows: consider a uniform distribution over the solid unit ball in ${\mathbb{R}}^{n}$. In this case, the probability of a random draw lying closer to the ball's surface than to its center is $P\left( {\parallel x{\parallel }_{2} > \frac{1}{2}}\right) = \frac{{2}^{n} - 1}{{2}^{n}}$, as given by the volume ratio of the two regions of interest. Note that $P\left( {\parallel x{\parallel }_{2} > \frac{1}{2}}\right) \rightarrow 1$ as $n \rightarrow \infty$, which suggests the approach in [Liu et al., 2020] will only enforce monotonicity at the boundaries.
68
+
69
+ In summary, the previous approaches are either too focused on enforcing monotonicity where the training data lie, or too loose, in that the monotonicity property is enforced uniformly across a large space while the actual data manifold may be neglected. We thus propose an alternative approach that gives some control over the volume of the input space where the monotonicity property will be enforced. Our approach uses the idea of data mixup [Zhang et al., 2018, Verma et al., 2019, Chuang and Mroueh, 2021], where auxiliary data is created via interpolations of pairs of data points, to populate areas of the space that are otherwise disregarded. Mixup was introduced by Zhang et al. [2018] with the goal of training classifiers with smooth outputs across trajectories in the input space between instances of different classes. Given a pair of data points $\left( {{x}^{\prime },{y}^{\prime }}\right)$, $\left( {{x}^{\prime \prime },{y}^{\prime \prime }}\right)$, the method augments the training data with interpolations $\left( {\lambda {x}^{\prime } + \left( {1 - \lambda }\right) {x}^{\prime \prime },\lambda {y}^{\prime } + \left( {1 - \lambda }\right) {y}^{\prime \prime }}\right)$, where $\lambda \sim \operatorname{Uniform}\left( \left\lbrack {0,1}\right\rbrack \right)$. We propose a variation of this approach where data-data and noise-data pairs are mixed to define the points at which $\Omega$ is estimated. We highlight the following motivations for doing so: (1) interpolation of data points more densely populates the convex hull of the training data; (2) extrapolation, i.e., mixup between data points and instances obtained at random, results in points that lie anywhere between the data manifold and the boundaries of the space. We thus claim that performing mixup enables the computation of $\Omega$ on parts of the space that are disregarded if one focuses only on either observed data or random draws from uninformed choices of distributions such as the uniform.
70
+
71
+ ### 3.2 EVALUATION
72
+
73
+ In order to evaluate the effect of different choices of $\Omega$ , we report results on three commonly used datasets covering classification and regression settings with input spaces of different dimensions. Namely, we report results for the following datasets: Compas, Loan Lending Club, and Blog Feedback. Models are implemented using the same architecture as in [Liu et al., 2020]. Further details on the data, models, and training settings can be found in Appendix C. For all evaluation cases, we consider the baseline where training is carried out without any monotonicity enforcing penalty. For the regularized cases, the different approaches used for computing $\Omega$ are as follows:
74
+
75
+ (1) ${\Omega }_{\text{random }}$ [Liu et al.,2020] which uses random points drawn from $\operatorname{Uniform}\left( \mathcal{X}\right)$ . In this case, the sample observed at each training iteration is set to a size of 1024 throughout all experiments.
76
+
77
+ (2) ${\Omega }_{\text{train }}$ [Gupta et al.,2019] which uses the actual data observed at each training iteration; i.e., the observed mini-batch itself is used to compute $\Omega$ .
78
+
79
+ (3) ${\Omega }_{\text{mixup }}$ (ours), in which the penalty is computed on points generated by mixing up points from the training data and random points. In detail, for each mini-batch of size $N > 1$, we augment it with complementary random data to obtain a final mini-batch of size ${2N}$. Out of the $\frac{{2N}\left( {{2N} - 1}\right) }{2}$ possible pairs of points, we take a random subsample of 1024 pairs to compute mixtures of instances. In this case, $\lambda \sim \operatorname{Uniform}\left( \left\lbrack {0,1}\right\rbrack \right)$ and $\lambda$ is drawn independently for each pair of points.
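The point-generation step of ${\Omega }_{\text{mixup }}$ can be sketched as below. This is an illustrative simplification (names and shapes are ours): random index pairs are drawn with replacement rather than subsampling 1024 of the $\frac{2N(2N-1)}{2}$ distinct pairs, and the complementary random data is drawn uniformly over a box.

```python
# Sketch of generating the points on which Omega_mixup is estimated:
# augment a mini-batch with uniform random draws, pick random pairs from the
# combined batch of size 2N, and mix each pair with its own lambda ~ U[0, 1].
import numpy as np

def mixup_points(batch, num_pairs=1024, low=0.0, high=1.0, seed=0):
    rng = np.random.default_rng(seed)
    n, d = batch.shape
    noise = rng.uniform(low, high, size=(n, d))    # complementary random data
    pool = np.concatenate([batch, noise], axis=0)  # combined batch of size 2N
    idx_a = rng.integers(0, 2 * n, size=num_pairs)
    idx_b = rng.integers(0, 2 * n, size=num_pairs)
    lam = rng.random((num_pairs, 1))               # one lambda per pair
    return lam * pool[idx_a] + (1.0 - lam) * pool[idx_b]
```

The returned points would then be fed to the gradient penalty of Eq. 2 in place of the training batch; since each point is a convex combination, the set covers both the convex hull of the data and the region between the data and the random draws.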
80
+
81
+ Results are reported in terms of both prediction performance and level of monotonicity. The latter is assessed via the probability $\rho$ that a model does not satisfy Definition 1, which we estimate via the fraction $\widehat{\rho }$ of points within a sample where the monotonicity constraint is violated; i.e., given a set of $N$ data points, we compute:
82
+
83
+ $$
84
+ \widehat{\rho } = \frac{\mathop{\sum }\limits_{{k = 1}}^{N}\mathbb{1}\left\lbrack {\mathop{\min }\limits_{{i \in M}}\frac{\partial h\left( {x}^{k}\right) }{\partial {x}_{i}^{k}} < 0}\right\rbrack }{N}, \tag{3}
85
+ $$
86
+
87
+ such that $\widehat{\rho } = 0$ corresponds to a model that is monotonic over the considered points. Moreover, in order to quantify the degree of monotonicity in different parts of the space, we estimate $\rho$ on 3 different sets of points: (1) ${\widehat{\rho }}_{\text{random }}$, computed on a sample drawn according to $\operatorname{Uniform}\left( \mathcal{X}\right)$ (we used a sample of 10,000 points throughout the experiments); (2) ${\widehat{\rho }}_{\text{train }}$, computed on the training data; and (3) ${\widehat{\rho }}_{\text{test }}$, computed on the test data. Results are summarized in Table 1 in terms of prediction performance along with the metric $\widehat{\rho }$ indicating the degree of monotonicity of the predictor for each regularization strategy. Prediction performance is measured in terms of accuracy for classification tasks and RMSE for regression. Results reported in the tables represent ${95}\%$ confidence intervals corresponding to 20 independent training runs. Across evaluations, the different penalties do not result in significant variations in terms of prediction, but they do affect how monotonic the trained models are.
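The estimator of Eq. 3 can be sketched as follows, again with central finite differences standing in for autograd (our illustration; a real implementation would differentiate the network directly).

```python
# Empirical estimate of rho (Eq. 3): the fraction of points at which the
# smallest partial derivative over the constrained dimensions M is negative.
import numpy as np

def rho_hat(h, X, M, eps=1e-5):
    X = np.asarray(X, dtype=float)
    grads = []
    for i in M:
        Xp, Xm = X.copy(), X.copy()
        Xp[:, i] += eps
        Xm[:, i] -= eps
        grads.append((h(Xp) - h(Xm)) / (2 * eps))  # ~ dh/dx_i per point
    min_grad = np.min(np.stack(grads, axis=1), axis=1)
    return float(np.mean(min_grad < 0))            # fraction of violations
```

For instance, $h(x) = x_0^2$ with $M = \{0\}$ evaluated at $x_0 \in \{-1, 1\}$ yields $\widehat{\rho } = 0.5$, since the derivative $2x_0$ is negative at exactly one of the two points.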
88
+
89
+ This indicates that the class of predictors corresponding to the subset of $\mathcal{H}$ that is monotonic relative to $M$, denoted ${\mathcal{H}}_{M}$, has enough capacity to match the performance of the best candidates within $\mathcal{H}$. In terms of monotonicity, we observe a clear pattern leading to the following intuition: monotonicity is achieved in the regions where it is enforced. This is evidenced by the observation that ${\widehat{\rho }}_{\text{random }}$ is consistently lower for ${\Omega }_{\text{random }}$ relative to ${\Omega }_{\text{train }}$ and ${\Omega }_{\text{mixup }}$ while, on the other hand, ${\widehat{\rho }}_{\text{train }}$ and ${\widehat{\rho }}_{\text{test }}$ are consistently lower for ${\Omega }_{\text{train }}$ and ${\Omega }_{\text{mixup }}$ compared to ${\Omega }_{\text{random }}$. A comparison between ${\Omega }_{\text{train }}$ and ${\Omega }_{\text{mixup }}$ shows what we anticipated: enforcing monotonicity on points resulting from mixup yields predictors that are as monotonic on actual data as those given by ${\Omega }_{\text{train }}$, but significantly more monotonic at the boundaries of $\mathcal{X}$. Finally, the results demonstrate that our proposed approach ${\Omega }_{\text{mixup }}$ achieves the best results in terms of monotonicity for all the sets of points that we considered. Moreover, our approach introduces no significant computation overhead. Algorithm 1 in Appendix C presents details on how to compute ${\Omega }_{\text{mixup }}$.
90
+
91
+ ## 4 APPLICATIONS OF MONOTONICITY PENALTIES
92
+
93
+ In Section 3, we presented an efficient approach to enforce monotonicity when it is a requirement. We now consider a different perspective and show that adding monotonicity constraints during training can yield extra benefits to trained models. In these cases, monotonicity is not a requirement, and hence it does not need to be satisfied everywhere. As such, the penalties we discuss from now on are computed considering only data points, and no random draws are utilized. In the following sections, we introduce the notions of monotonicity that will be enforced in our models, and discuss advantages of using monotonicity for applications such as controllable generative modelling and the detection of anomalous data. In Appendix F, we consider a further application in which the goal is to obtain explanations for observed predictions.
94
+
95
+ <table><tr><td/><td>Non-mon.</td><td>${\Omega }_{random}$</td><td>${\Omega }_{train}$</td><td>${\Omega }_{mixup}$</td></tr><tr><td colspan="5">COMPAS</td></tr><tr><td>Validation accuracy</td><td>69.1%±0.2%</td><td>68.5% $\pm$ 0.1%</td><td>68.5%±0.1%</td><td>${68.4}\% \pm {0.1}\%$</td></tr><tr><td>Test accuracy</td><td>68.5%±0.2%</td><td>68.1% $\pm$ 0.2%</td><td>68.0%±0.2%</td><td>68.3%±0.2%</td></tr><tr><td>${\widehat{\rho }}_{random}$</td><td>55.45%±12.26%</td><td>0.01% $\pm$ 0.01%</td><td>6.41%±4.54%</td><td>${0.00}\% \pm {0.00}\%$</td></tr><tr><td>${\widehat{\rho }}_{train}$</td><td>92.98%±2.70%</td><td>2.08%±2.21%</td><td>${0.00}\% \pm {0.00}\%$</td><td>${0.00}\% \pm {0.00}\%$</td></tr><tr><td>${\widehat{\rho }}_{test}$</td><td>92.84%±2.75%</td><td>2.16%±2.35%</td><td>${0.00}\% \pm {0.00}\%$</td><td>${0.00}\% \pm {0.00}\%$</td></tr><tr><td colspan="5">Loan Lending Club</td></tr><tr><td>Validation RMSE</td><td>${0.213} \pm {0.000}$</td><td>${0.223} \pm {0.002}$</td><td>${0.222} \pm {0.002}$</td><td>${0.235} \pm {0.001}$</td></tr><tr><td>Test RMSE</td><td>${0.221} \pm {0.001}$</td><td>${0.230} \pm {0.001}$</td><td>${0.229} \pm {0.002}$</td><td>${0.228} \pm {0.001}$</td></tr><tr><td>${\widehat{\rho }}_{\text{random }}$</td><td>99.11%±1.70%</td><td>${0.00}\% \pm {0.00}\%$</td><td>14.47%±7.55%</td><td>${0.00}\% \pm {0.00}\%$</td></tr><tr><td>${\widehat{\rho }}_{train}$</td><td>100.00% $\pm$ 0.00%</td><td>7.23%±7.76%</td><td>0.01% $\pm$ 0.01%</td><td>0.00%±0.00%</td></tr><tr><td>${\widehat{\rho }}_{test}$</td><td>100.00% $\pm$ 0.00%</td><td>6.94% $\pm$ 7.43%</td><td>0.04% $\pm$ 0.03%</td><td>0.00%±0.00%</td></tr><tr><td colspan="5">Blog feedback</td></tr><tr><td>Validation RMSE</td><td>${0.174} \pm {0.000}$</td><td>${0.175} \pm {0.001}$</td><td>${0.177} \pm {0.000}$</td><td>${0.168} \pm {0.000}$</td></tr><tr><td>Test RMSE</td><td>${0.139} \pm {0.001}$</td><td>${0.139} \pm {0.001}$</td><td>${0.142} \pm {0.001}$</td><td>${0.143} \pm 
{0.001}$</td></tr><tr><td>${\widehat{\rho }}_{random}$</td><td>76.17%±12.37%</td><td>0.05%±0.08%</td><td>3.86%±4.19%</td><td>0.00%±0.01%</td></tr><tr><td>${\widehat{\rho }}_{train}$</td><td>78.67%±5.28%</td><td>78.59%±6.37%</td><td>0.01%±0.01%</td><td>0.01%±0.01%</td></tr><tr><td>${\widehat{\rho }}_{\text{test }}$</td><td>76.29%±6.47%</td><td>78.99%±7.20%</td><td>0.02% $\pm$ 0.02%</td><td>0.02% $\pm$ 0.02%</td></tr></table>
96
+
97
+ Table 1: Evaluation results in terms of 95% confidence intervals resulting from 20 independent training runs. Results correspond to the checkpoint that obtained the best prediction performance on validation data throughout training. The lower the values of $\widehat{\rho }$ the better.
98
+
99
+ ### 4.1 DISENTANGLED REPRESENTATION LEARNING UNDER MONOTONICITY
100
+
101
+ We first consider the case of disentangled representation learning. Here, generative approaches often assume that the latent variables are independent, so that control over generative factors can be achieved; e.g., one can modify a specific aspect of the data by changing the value of a specific latent variable. However, we argue that disentanglement is necessary but not sufficient to enable controllable data generation. That is, one needs latent variables that satisfy some notion of monotonicity in order to decide on values resulting in desired properties. For example, assume we are interested in generating images of simple geometric forms, and desire to control factors such as shape and size. In this example, even if a disentangled set of latent variables is available, we cannot decide how to change the value of the latent variable to get a bigger or a smaller object if there is no monotonic relationship between the size and the value of the corresponding latent variable. We address this issue and build upon the weakly supervised framework introduced by Locatello et al. [2020]. This work extends the popular $\beta$-VAE setting [Higgins et al., 2016] by introducing weak supervision such that training instances are presented to the model in pairs $\left( {{x}^{1},{x}^{2}}\right)$ where only one or a few generative factors change between the elements of each pair. Here, we propose to apply a notion of monotonicity over the activations of the corresponding latent variables to obtain more controllable factors. In the VAE setting, data is assumed to be generated according to $p\left( {x \mid z}\right) p\left( z\right)$ given the latent variables $z$. Approximation is then performed by introducing ${p}_{\theta }\left( {x \mid z}\right)$ and ${q}_{\phi }\left( {z \mid x}\right)$, both parameterized by neural networks.
Our goal is to have $z$ fully factorizable in its dimensions, i.e., $p\left( z\right) = \mathop{\prod }\limits_{{i = 1}}^{{\operatorname{Dim}\left\lbrack z\right\rbrack }}p\left( {z}_{i}\right)$ , which needs to be captured by the approximate posterior distribution ${q}_{\phi }\left( {z \mid x}\right)$ . Training is performed by maximization of the following lower-bound on the data likelihood:
102
+
103
+ $$
104
+ {\mathcal{L}}_{ELBO} = {\mathbb{E}}_{{x}^{1},{x}^{2}}\mathop{\sum }\limits_{{i \in \{ 1,2\} }}{\mathbb{E}}_{{\widetilde{q}}_{\phi }\left( {\widehat{z} \mid {x}^{i}}\right) }\log \left( {{p}_{\theta }\left( {{x}^{i} \mid \widehat{z}}\right) }\right) \tag{4}
105
+ $$
106
+
107
+ $$
108
+ - \beta {D}_{KL}\left( {{\widetilde{q}}_{\phi }\left( {\widehat{z} \mid {x}^{i}}\right) , p\left( \widehat{z}\right) }\right) ,
109
+ $$
110
+
111
+ where ${\widetilde{q}}_{\phi }\left( {{\widehat{z}}_{j} \mid {x}^{i}}\right) = {q}_{\phi }\left( {{z}_{j} \mid {x}^{i}}\right)$ for the latent dimensions ${z}_{j}$ that change across ${x}^{1}$ and ${x}^{2}$, and ${\widetilde{q}}_{\phi }\left( {{\widehat{z}}_{j} \mid {x}^{i}}\right) = \frac{1}{2}\left( {{q}_{\phi }\left( {{\widehat{z}}_{j} \mid {x}^{1}}\right) + {q}_{\phi }\left( {{\widehat{z}}_{j} \mid {x}^{2}}\right) }\right)$ for those that are shared (i.e., the approximate posteriors of the shared latent variables are forced to be the same for ${x}^{1}$ and ${x}^{2}$). The outer expectation is estimated by sampling pairs of data instances $\left( {{x}^{1},{x}^{2}}\right)$ for which only a small number of generative factors vary. In our experiments, we consider the case where exactly one generative factor changes across inputs. Moreover, we follow Locatello et al. [2020] and assign the changing factor, denoted by $y$, to the dimension $j$ of $z$ such that $y = \arg \mathop{\max }\limits_{{j \in \operatorname{Dim}\left\lbrack z\right\rbrack }}{D}_{KL}\left( {{z}_{j}^{1},{z}_{j}^{2}}\right)$.
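The dimension selection and shared-posterior averaging above can be sketched as follows. This is one simple instantiation under assumptions we make explicit: diagonal-Gaussian posteriors, and averaging of the Gaussian parameters as a stand-in for averaging the distributions; the exact aggregation of Locatello et al. [2020] may differ, and all names here are illustrative.

```python
# Sketch of the weakly supervised aggregation: per-dimension KL divergences
# between the two diagonal-Gaussian posteriors identify the "changing"
# dimension y; the remaining (shared) posteriors are averaged.
import numpy as np

def kl_gauss(mu1, var1, mu2, var2):
    # KL( N(mu1, var1) || N(mu2, var2) ), elementwise per latent dimension
    return 0.5 * (np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

def aggregate_posteriors(mu1, var1, mu2, var2):
    kl = kl_gauss(mu1, var1, mu2, var2)
    changing = int(np.argmax(kl))            # dimension assigned to factor y
    mu_tilde = 0.5 * (mu1 + mu2)             # shared dims: averaged parameters
    var_tilde = 0.5 * (var1 + var2)
    mu_tilde[changing] = mu1[changing]       # changing dim: keep x^1's posterior
    var_tilde[changing] = var1[changing]
    return changing, mu_tilde, var_tilde
```

With unit variances, the per-dimension KL reduces to $\frac{1}{2}(\mu_1 - \mu_2)^2$, so the dimension whose means differ most across the pair is selected as the changing factor.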
112
+
113
+ While the above objective enforces disentanglement, controllable generation requires some regularity in $z$ so that users can choose values of $z$ resulting in desired properties of the generated samples. We then introduce ${\Omega }_{VAE}$ to enforce such regularity. In this case, a monotonic relationship is enforced between the distance separating data pairs in which only a particular generative factor varies and the corresponding latent variable; in other words, increasing the value of a dimension of $z$ should yield an increasing change in the output along the corresponding generative factor. Formally, ${\Omega }_{VAE}$ is defined as the following symmetric cross-entropy estimate:
114
+
115
+ $$
116
+ {\Omega }_{VAE} = - \frac{1}{2m}\mathop{\sum }\limits_{{i = 1}}^{m}\log \frac{{e}^{\frac{L\left( {{x}^{i,1},{x}^{i,2},{y}^{i}}\right) }{\mu }}}{\mathop{\sum }\limits_{{k = 1}}^{K}{e}^{\frac{L\left( {{x}^{i,1},{x}^{i,2}, k}\right) }{\mu }}} \tag{5}
117
+ $$
118
+
119
+ $$
120
+ + \log \frac{{e}^{\frac{L\left( {{x}^{i,2},{x}^{i,1},{y}^{i}}\right) }{\mu }}}{\mathop{\sum }\limits_{{k = 1}}^{K}{e}^{\frac{L\left( {{x}^{i,2},{x}^{i,1}, k}\right) }{\mu }}},
121
+ $$
122
+
123
+ where $L$ is given by the gradient of the mean squared error (MSE) between images that are 1 factor apart, taken along the dimension $y$ of $z$ assigned to the changing factor; i.e., for the pair ${x}^{i}$ and ${x}^{j}$ varying only in factor $y$, we have:
124
+
125
+ $$
126
+ L\left( {{x}^{i},{x}^{j}, y}\right) = \frac{\partial \operatorname{MSE}\left( {{\widehat{x}}^{i},{x}^{j}}\right) }{\partial {\widetilde{z}}_{y}}. \tag{6}
127
+ $$
128
+
129
+ In this case, ${\widehat{x}}^{i}$ indicates the reconstruction of ${x}^{i}$ . We evaluate such an approach by training the same 4-layered convolutional VAEs described in [Higgins et al., 2016] using the 3d-shapes dataset ${}^{1}$ . The dataset is composed of images containing shapes generated from 6 independent generative factors: floor color, wall color, object color, scale, shape and orientation. All combinations of these factors are present exactly once, resulting in $m = {480000}$ . We compared VAEs trained with and without the inclusion of the monotonicity penalty given by ${\Omega }_{VAE}$ . We highlight that the goal of the proposed framework is not to improve over current approaches in terms of how disentangled the learned representations are. Rather, we seek to achieve similar results in that sense, but impose extra regularity and structure in the relationship between the generated images and the values of $z$ so that the generative process is more easily controllable. Qualitative analysis is performed and shown in Figure 1. The two panels on the left represent the data generated by a linear combination of the latent code corresponding to two images that only vary in the factor object color. The panels stacked on the right present a per-dimension traversal of the latent space starting from a common image. It can be observed that disentanglement is indeed achieved in both cases. The monotonic model presents much smoother transitions between colors while the base model gives long sequences of very close images followed by very sharp transitions where the colors sometimes repeat (e.g., green-yellow-green transitions in the fourth row). As for the results per factor, the monotonic model provides more structure in the latent space compared to the base model. This can be observed in the shape factor. The monotonic model provides a certain order: sphere, cylinder, and then cube. 
Visually inspecting many samples, we find that the monotonic model follows this order for the generated shapes. This pattern is even more pronounced in the color factors: we found that the colors generated by the monotonic model follow the order of the colors in the HUE cycle. Our model has thus ordered the latent space, and we know how to navigate it to generate a desired image. The baseline, on the other hand, imposes no clear order on the latent space; for example, it generates cubes at different ranges of $z$, and the colors it generates have no clear order either. To further support the claim that ${\Omega }_{VAE}$ induces regularity in the latent space, we introduce the analysis shown in Table 2. We started by increasing ${z}_{3}$ (associated with floor color in both models) and recorded the sequence of the generated colors. We observed that, for a large fraction of the data, the monotonic models yield sequences of images where the color of the floor is ordered according to its corresponding HUE angle. Further details are available in Appendix H, along with detailed plots of color transitions and a comparison with the HUE cycle.
130
+
131
+ <table><tr><td>Model</td><td>HUE structured rate</td></tr><tr><td>Base model</td><td>0.00%</td></tr><tr><td>Mon. model</td><td>89.44%</td></tr></table>
132
+
133
+ Table 2: Rate of examples where colors are sorted according to hue. A large fraction of the sequences generated by monotonic VAEs results in an interpretable ordering.
134
+
135
+ ### 4.2 GROUP MONOTONIC CLASSIFIERS
136
+
137
+ We now consider the case of $K$ -way classifiers realized through convolutional neural networks. In this case, data examples correspond to pairs $x, y \sim \mathcal{X} \times \mathcal{Y}$ , and $\mathcal{Y} =$ $\{ 1,2,3,\ldots , K\} , K \in \mathbb{N}$ . Models parameterize a data-conditional categorical distribution over $\mathcal{Y}$ , i.e., for a given model $h, h{\left( x\right) }_{\mathcal{Y}}$ will yield likelihoods for each class indexed in $\mathcal{Y}$ . Under this setting, we introduce the notion of Group Monotonicity: we aim to find the models $h$ such that the outputs corresponding to each class satisfy a monotonic relationship with a specific subset of high-level representations, given by some inner convolutional layer. Let the outputs of a specific layer within a convolutional model be represented by ${a}_{w}, w \in \left\lbrack {1,2,3,\ldots , W}\right\rbrack$ , where $W$ indicates the width of the chosen layer given by its number of output feature maps. For simplicity of exposition, we consider the rather common case of convolutional layers where each feature map ${a}_{w}$ is 2-dimensional. We then partition such a set of representations into disjoint subsets, or slices, of uniform sizes. Each subset is then paired with a particular output or class, and hence denoted by ${S}_{k}, k \in \mathcal{Y}$ . An illustration is provided in Figure 2, where a generic convolutional model has the outputs of a specific layer partitioned into slices ${S}_{k}$ , which are then used to define output units over $\mathcal{Y}$ .
138
+
139
+ Definition 2 Group monotonic classifiers: We say $h$ is group monotonic for input $x$ and class label $y$ if $h{\left( x\right) }_{y}$ is partially monotonic relative to all elements in ${S}_{y}$ .
140
+
141
+ ---
142
+
143
+ ${}^{1}$ https://github.com/deepmind/3d-shapes
144
+
145
+ ---
146
+
147
+ ![019638eb-31fa-7503-90a1-2ee65059544d_6_149_180_1447_576_0.jpg](images/019638eb-31fa-7503-90a1-2ee65059544d_6_149_180_1447_576_0.jpg)
148
+
149
+ Figure 1: Comparisons between data generated by standard and monotonic models. On the two panels on the left, we compare generations from a linear combination of the latent code of 2 images which only differs in the object color. On the two panels vertically stacked on the right, we start from the same image but change one latent dimension at a time.
150
+
151
+ We highlight that in this case, unlike in the discussion of Section 3, monotonicity is not an application requirement, and it does not need to be satisfied everywhere.
152
+
153
+ Intuitively, our goal is to "reserve" groups of high-level features that activate more intensely than the remainder depending on the underlying class. Imposing such a structure can benefit the learned models via, for instance, more accurate anomaly detection. For training, we perform monotonic risk minimization as described in Eq. 1, where the risk is given by the negative log-likelihood over training points. Moreover, we design a penalty $\Omega$ that focuses only on the data points observed during training and acts on the slices of the Jacobian corresponding to a given class; i.e., a cross-entropy criterion enforces larger gradients on the slice of the underlying class.
154
+
155
+ In order to formally introduce such a penalty, denoted by ${\Omega }_{\text{group }}$, we first define the total gradient ${O}_{k}, k \in \mathcal{Y}$, of a slice ${S}_{k}$ as follows: ${O}_{k}\left( x\right) = \mathop{\sum }\limits_{{{a}_{w} \in {S}_{k}}}\mathop{\sum }\limits_{{i, j}}\frac{\partial h{\left( x\right) }_{k}}{\partial {a}_{w, i, j}}$, where the inner sum accounts for the spatial dimensions of ${a}_{w}$. Given the set of total gradients, a batch of size $m$, and inverse temperature $\mu$, ${\Omega }_{\text{group }}$ is given by:
156
+
157
+ $$
158
+ {\Omega }_{\text{group }} = - \frac{1}{m}\mathop{\sum }\limits_{{i = 1}}^{m}\log \frac{{e}^{\frac{{O}_{{y}^{i}}^{i}\left( {x}^{i}\right) }{\mu }}}{\mathop{\sum }\limits_{{k = 1}}^{K}{e}^{\frac{{O}_{k}^{i}\left( {x}^{i}\right) }{\mu }}}. \tag{7}
159
+ $$
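Given the per-slice total gradients ${O}_{k}$ stacked into an $m \times K$ matrix, Eq. 7 is a softmax cross-entropy computed over total gradients rather than logits. A minimal NumPy sketch (the gradients themselves would come from automatic differentiation; array and function names here are illustrative):

```python
import numpy as np

def omega_group(O, y, mu=1.0):
    """Sketch of the group-monotonicity penalty in Eq. 7.

    O  : (m, K) array; O[i, k] is the total gradient O_k(x^i) of slice S_k.
    y  : (m,) integer array of class labels.
    mu : temperature parameter dividing the logits.
    """
    logits = O / mu
    # numerically stable log-softmax over the K slices
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # cross-entropy: pushes the true-class slice to have the largest total gradient
    return float(-log_probs[np.arange(len(y)), y].mean())
```

The penalty is small when each instance's labeled slice already dominates the total gradients, and large otherwise.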
160
+
161
+ #### 4.2.1 Assessing performance of group monotonic classifiers
162
+
163
+ We start our evaluation by verifying whether the group monotonicity property can be effectively enforced in classifiers trained on standard object recognition benchmarks. In order to do so, we verify the performance of the total activation classifier, as defined by: $\arg \mathop{\max }\limits_{{k \in \mathcal{Y}}}{T}_{k}\left( x\right)$ , where ${T}_{k}$ indicates the total activation on slice ${S}_{k}$ : ${T}_{k}\left( x\right) = \mathop{\sum }\limits_{{{a}_{w} \in {S}_{k}}}\mathop{\sum }\limits_{{i, j}}{a}_{w, i, j}\left( x\right)$ . A good prediction performance of such a classifier serves as evidence that the group monotonicity property is satisfied by the model over the test data under consideration, since it indicates the slice relative to the underlying class of test instances has the highest total activation. We thus run evaluations for both CIFAR-10 and ImageNet, and classifiers in each case correspond to WideResNets [Zagoruyko and Komodakis, 2016] and ResNet-50 [He et al., 2016], respectively. Training details are presented in Appendix D. Results are reported in Table 3 in terms of the top-1 prediction accuracy measured on the test data. We use standard classifiers, where no monotonicity penalty is applied, as baselines in order to isolate the effect of the penalty. On both datasets, the total activation classifiers for group monotonic models (indicated by the prefix Mono) are able to approximate the performance of the classifier defined at the output layer, $\arg \mathop{\max }\limits_{{k \in \mathcal{Y}}}h{\left( x\right) }_{k}$ . This suggests that the highest total activation generally matches the predicted class for group monotonic models, indicating the property is successfully enforced. Considering performances obtained at the output layer, there were small variations in accuracy when we included monotonicity penalties, which should be considered in practical uses of group monotonicity. Nonetheless, results suggest that one can perform closely to unconstrained models while focusing on the set of group monotonic candidates. Additional experiments are reported in Table 8 in Appendix E for cases with small sample sizes, where we show that the performance of the classifier defined at the output layer upper bounds that of the total activation classifier, i.e., the better the underlying classifier the more group monotonic it can be made.
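The total activation classifier is straightforward to compute once feature maps and the class-to-slice assignment are available. A minimal NumPy sketch, assuming the slices $S_k$ are given as index arrays over the feature-map axis (the data layout is illustrative):

```python
import numpy as np

def total_activations(a, slices):
    """T_k(x): sum of activations over slice S_k, including spatial dimensions.

    a      : (W, H, Wd) array holding W feature maps of size H x Wd.
    slices : list of K index arrays partitioning the W feature maps.
    """
    return np.array([a[idx].sum() for idx in slices])

def total_activation_predict(a, slices):
    """argmax_k T_k(x): predict the class whose slice activates most intensely."""
    return int(np.argmax(total_activations(a, slices)))
```

Agreement between this prediction and the output-layer prediction is precisely what Table 3 measures.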
164
+
165
+ <table><tr><td>Model</td><td>$\arg \mathop{\max }\limits_{{k \in \mathcal{Y}}}h{\left( x\right) }_{k}$</td><td>${\operatorname{argmax}}_{k \in \mathcal{Y}}{T}_{k}\left( x\right)$</td></tr><tr><td colspan="3">CIFAR-10</td></tr><tr><td>WideResNet</td><td>95.46%</td><td>16.35%</td></tr><tr><td>MonoWideResNet</td><td>95.64%</td><td>94.95%</td></tr><tr><td colspan="3">ImageNet</td></tr><tr><td>ResNet-50</td><td>75.85%</td><td>0.10%</td></tr><tr><td>MonoResNet-50</td><td>76.50%</td><td>72.52%</td></tr></table>
166
+
167
+ Table 3: Top-1 accuracy of standard and group monotonic models.
168
+
169
+ #### 4.2.2 Using group monotonicity to detect anomalies
170
+
171
+ After showing that group monotonicity can be enforced successfully without significantly affecting the prediction performance, we discuss approaches to leverage it and introduce applications of the models satisfying such a property. In particular, we consider the application of detecting anomalous data instances, i.e., those where the model may have made a mistake. For example, consider the case where a classifier is deployed to production and, due to some problem external to the model, it is queried to do prediction for an input consisting of white noise. Standard classifiers would provide a prediction even for such a clearly anomalous input. However, a more desirable behavior is to somehow indicate that the instance is problematic. We claim that imposing structure in the features, e.g., by enforcing group monotonicity, can help in deciding when not to predict. To evaluate the proposed method, we implement anomalous test instances using adversarial perturbations. Namely, we create ${L}_{\infty }$ PGD attackers [Madry et al., 2017] and detect anomalies based on simple statistics of the features. In detail, for a given input $x$ , we compute the normalized entropy ${H}^{ * }\left( x\right)$ of the categorical distribution defined by the application of the softmax operator over the set of total activations ${T}_{\mathcal{Y}}\left( x\right)$ : ${H}^{ * }\left( x\right) = -\frac{\mathop{\sum }\limits_{{k \in \mathcal{Y}}}{p}_{k}\left( x\right) \log {p}_{k}\left( x\right) }{\log K}$ , where $K = \left| \mathcal{Y}\right|$ and the set ${p}_{\mathcal{Y}}\left( x\right)$ corresponds to the parameters of a categorical distribution defined by ${p}_{\mathcal{Y}}\left( x\right) = \operatorname{softmax}\left( {{T}_{\mathcal{Y}}\left( x\right) }\right)$ . Decisions can then be made by comparing ${H}^{ * }\left( x\right)$ with a threshold $\tau \in \left\lbrack {0,1}\right\rbrack$ , defining the detector ${\mathbb{1}}_{\left\{ {H}^{ * } > \tau \right\} }$ .
We evaluate the detection performance of this approach on both MNIST and CIFAR-10. Training for the case of CIFAR-10 follows the same setup discussed in Section 4.2.1. For MNIST, on the other hand, we modify the standard LeNet architecture by increasing the width of the second convolutional layer from 64 to 150. This layer is then used to enforce the group monotonicity property. The resulting model is referred to as WideLeNet. Moreover, $\gamma$ and $\mu$ are set to 1e10 and 1, respectively. Adversarial attacks are created under the white-box setting, i.e., by exposing the full model to the attacker. The perturbation budget in terms of ${L}_{\infty }$ distance is set to 0.3 and $\frac{8}{255}$ for MNIST and CIFAR-10, respectively. Detection performance is reported in Table 4 for the considered cases in terms of the area under the receiver operating characteristic curve (AUC-ROC). The baselines are models for which the monotonicity penalty is not enforced, trained under the same conditions and the same computation budget as the penalized models. The results are as expected, i.e., for monotonic models, test examples for which the total activations are not structured very often correspond to anomalous inputs.
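The normalized-entropy statistic and the resulting thresholded detector can be sketched as follows, assuming the total activations ${T}_{k}(x)$ have already been computed; function names are illustrative:

```python
import numpy as np

def normalized_entropy(T):
    """H*(x): entropy of softmax(T) divided by log K, so it lies in [0, 1].

    T : (K,) array of total activations T_k(x).
    """
    z = T - T.max()                       # stable softmax
    p = np.exp(z) / np.exp(z).sum()
    K = len(T)
    # clip inside the log only; p*log(p) -> 0 as p -> 0
    return float(-(p * np.log(np.clip(p, 1e-12, None))).sum() / np.log(K))

def is_anomalous(T, tau):
    """Detector 1{H*(x) > tau}: flag inputs whose total activations are unstructured."""
    return normalized_entropy(T) > tau
```

Near-uniform total activations give ${H}^{*} \approx 1$ (flagged), while a single dominant slice gives ${H}^{*} \approx 0$.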
172
+
173
+ <table><tr><td>Model</td><td>AUC-ROC</td></tr><tr><td colspan="2">MNIST</td></tr><tr><td>WideLeNet</td><td>54.47%</td></tr><tr><td>MonoWideLeNet</td><td>100.00%</td></tr><tr><td colspan="2">CIFAR-10</td></tr><tr><td>WideResNet</td><td>67.35%</td></tr><tr><td>MonoWideResNet</td><td>79.33%</td></tr></table>
174
+
175
+ Table 4: AUC-ROC (the higher the better) for the detection of adversarially perturbed data instances.
176
+
177
+ ![019638eb-31fa-7503-90a1-2ee65059544d_7_927_582_637_264_0.jpg](images/019638eb-31fa-7503-90a1-2ee65059544d_7_927_582_637_264_0.jpg)
178
+
179
+ Figure 2: Group monotonic convolutional model splits representations into disjoint subsets.
180
+
181
+ Finally, due to space constraints, we discuss the application of group monotonicity to explainability in Appendix F.
182
+
183
+ ## 5 CONCLUSION
184
+
185
+ We proposed approaches that enable learning algorithms based on risk minimization to find solutions that satisfy some notion of monotonicity. First, we discussed the case where monotonicity is a design requirement that needs to be satisfied everywhere. In this case, we identified limitations in prior work that resulted in models satisfying the property only in very specific parts of the space. We then introduced an efficient procedure that was observed to significantly improve the solutions in terms of the volume of the space where the monotonicity requirement is achieved. In addition, we further argued that, even when not required, models satisfying monotonicity present useful properties. We studied the case of image classifiers and generative models and showed that imposing structure in learned representations via group monotonicity is beneficial and can be done efficiently. In particular, monotonic variational autoencoders were shown to yield latent spaces that are easier to navigate since those present more regular transitions when compared to the standard generative models under the same setting.
186
+
+ ## REFERENCES
+
187
+ Norman P Archer and Shouhong Wang. Application of the back propagation neural network algorithm with monotonicity constraints for two-group classification problems. Decision Sciences, 24(1):60-75, 1993.
188
+
189
+ Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
190
+
191
+ William Taylor Bakst, Nobuyuki Morioka, and Erez Louidor. Monotonic kronecker-factored lattice. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=0pxiMpCyBtr.
192
+
193
+ Aditya Chattopadhay, Anirban Sarkar, Prantik Howlader, and Vineeth N Balasubramanian. Grad-cam++: Generalized gradient-based visual explanations for deep convolutional networks. In 2018 IEEE winter conference on applications of computer vision (WACV), pages 839-847. IEEE, 2018.
194
+
195
+ Ching-Yao Chuang and Youssef Mroueh. Fair mixup: Fairness via interpolation. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=DNl5s5BXeBn.
196
+
197
+ Charles Dugas, Yoshua Bengio, François Bélisle, Claude Nadeau, and René Garcia. Incorporating second-order functional knowledge for better option pricing. Advances in neural information processing systems, pages 472-478, 2001.
198
+
199
+ Charles Fefferman, Sanjoy Mitter, and Hariharan Narayanan. Testing the manifold hypothesis. Journal of the American Mathematical Society, 29(4):983-1049, 2016.
200
+
201
+ Eric Garcia and Maya Gupta. Lattice regression. In Y. Bengio, D. Schuurmans, J. Lafferty, C. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems, volume 22. Curran Associates, Inc., 2009. URL https://proceedings.neurips.cc/paper/2009/file/4b0250793549726d5c1ea3906726ebfe-Paper.pdf.
202
+
203
+ Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
204
+
205
+ Alex Graves and Navdeep Jaitly. Towards end-to-end speech recognition with recurrent neural networks. In International conference on machine learning, pages 1764-1772. PMLR, 2014.
206
+
207
+ Akhil Gupta, Naman Shukla, Lavanya Marla, Arinbjörn Kolbeinsson, and Kartik Yellepeddi. How to incorporate monotonicity in deep networks while preserving flexibility? arXiv preprint arXiv:1909.10662, 2019.
208
+
209
+ Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016.
210
+
211
+ Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. beta-vae: Learning basic visual concepts with a constrained variational framework. 2016.
212
+
213
+ Andrew Howard, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang, Yukun Zhu, Ruoming Pang, Vijay Vasudevan, et al. Searching for mobilenetv3. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1314-1324, 2019.
214
+
215
+ Forrest N Iandola, Song Han, Matthew W Moskewicz, Khalid Ashraf, William J Dally, and Kurt Keutzer. Squeezenet: Alexnet-level accuracy with 50x fewer parameters and <0.5MB model size. arXiv preprint arXiv:1602.07360, 2016.
216
+
217
+ Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
218
+
219
+ Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems, 25:1097-1105, 2012.
220
+
221
+ Xingchao Liu, Xing Han, Na Zhang, and Qiang Liu. Certified monotonic neural networks. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 15427-15438. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/b139aeda1c2914e3b579aafd3ceeb1bd-Paper.pdf.
222
+
223
+ Francesco Locatello, Ben Poole, Gunnar Raetsch, Bernhard Schölkopf, Olivier Bachem, and Michael Tschannen. Weakly-supervised disentanglement without compromises. In Hal Daumé III and Aarti Singh, editors, Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 6348-6359. PMLR, 13-18 Jul 2020. URL http://proceedings.mlr.press/v119/locatello20a.html.
224
+
225
+ Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.
226
+
227
+ An-phi Nguyen and María Rodríguez Martínez. Mononet: towards interpretable models by learning monotonic features. arXiv preprint arXiv:1909.13611, 2019.
228
+
229
+ Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE international conference on computer vision, pages 618- 626, 2017.
230
+
231
+ Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
232
+
233
+ Aishwarya Sivaraman, Golnoosh Farnadi, Todd Millstein, and Guy Van den Broeck. Counterexample-guided learning of monotonic neural networks. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 11936-11948. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/8ab70731b1553f17c11a3bbc87e0b605-Paper.pdf.
234
+
235
+ Vikas Verma, Alex Lamb, Christopher Beckham, Amir Najafi, Ioannis Mitliagkas, David Lopez-Paz, and Yoshua Bengio. Manifold mixup: Better representations by interpolating hidden states. In International Conference on Machine Learning, pages 6438-6447. PMLR, 2019.
236
+
237
+ Antoine Wehenkel and Gilles Louppe. Unconstrained monotonic neural networks. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/file/2a084e55c87b1ebcdaad1f62fdbbac8e-Paper.pdf.
238
+
239
+ Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1492-1500, 2017.
240
+
241
+ Seungil You, David Ding, Kevin Canini, Jan Pfeifer, and Maya Gupta. Deep lattice networks and partial monotonic functions. arXiv preprint arXiv:1709.06680, 2017.
242
+
243
+ Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.
244
+
245
+ Matthew D Zeiler. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.
246
+
247
+ Hongyi Zhang, Moustapha Cissé, Yann N. Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. In ICLR (Poster), 2018. URL https://openreview.net/forum?id=r1Ddp1-Rb.
248
+
249
+ ![019638eb-31fa-7503-90a1-2ee65059544d_10_689_173_367_365_0.jpg](images/019638eb-31fa-7503-90a1-2ee65059544d_10_689_173_367_365_0.jpg)
250
+
251
+ Figure 3: Illustration of the unit spheres $\mathcal{B}$ and ${\mathcal{B}}_{r}$ on the plane.
252
+
253
+ ## A ILLUSTRATIVE EXAMPLES ON THE SPHERE: MIXUP HELPS TO POPULATE THE SMALL VOLUME INTERIOR REGION
254
+
255
+ To further illustrate the issue discussed in item 2 of Section 3.1, as well as the effect of our proposal, we discuss a simple example considering random draws from the unit $n$ -sphere, shown in Figure 3, i.e., the set of points $\mathcal{B} = \left\{ {x \in {\mathbb{R}}^{n} : \parallel x{\parallel }_{2} < 1}\right\}$ . We further consider a concentric sphere of radius $0 < r < 1$ given by ${\mathcal{B}}_{r} = \left\{ {x \in {\mathbb{R}}^{n} : \parallel x{\parallel }_{2} < r}\right\}$ . We are interested in the probability of a random draw from $\mathcal{B}$ lying outside of ${\mathcal{B}}_{r}$ , i.e., $P\left( {\parallel x{\parallel }_{2} > r}\right)$ , $x \sim \mathcal{D}\left( \mathcal{B}\right)$ , for some distribution $\mathcal{D}$ . We start by defining $\mathcal{D}$ as $\operatorname{Uniform}\left( \mathcal{B}\right)$ , which results in $P\left( {\parallel x{\parallel }_{2} > r}\right) = 1 - {r}^{n}$ . In Figure 4a, we can see that for growing $n$ , $P\left( {\parallel x{\parallel }_{2} > r}\right)$ is very large even if $r \approx 1$ , which suggests most random draws will lie close to $\mathcal{B}$ 's boundary.
256
+
257
+ We now evaluate the case where mixup is applied and random draws are taken in two steps: we first observe $y \sim \operatorname{Uniform}\left( \mathcal{B}\right)$ , and then we perform mixup between $y$ and the origin ${}^{2}$ , i.e., $x = {\lambda y}$ , $\lambda \sim \operatorname{Uniform}\left( \left\lbrack {0,1}\right\rbrack \right)$ . In this case, $P\left( {\parallel x{\parallel }_{2} > r}\right) = \left( {1 - {r}^{n}}\right) \left( {1 - r}\right)$ , which is shown in Figure 4b as a function of $r$ for increasing $n$ . We can then observe that even for large $n$ , $P\left( {\parallel x{\parallel }_{2} > r}\right)$ decays linearly with $r$ , i.e., we populate the interior of $\mathcal{B}$ , and $x$ in this case follows a non-uniform distribution such that the histogram of its norms is uniform.
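The two regimes above are easy to verify by Monte Carlo simulation. The sketch below samples uniformly from the unit ball in ${\mathbb{R}}^{100}$ (Gaussian direction, radius $U^{1/n}$) and then applies mixup with the origin; the sample size and seed are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 100, 20000, 0.9

# Uniform draws from the unit ball: normalized Gaussians scaled by U^(1/n).
g = rng.standard_normal((m, n))
directions = g / np.linalg.norm(g, axis=1, keepdims=True)
y = directions * rng.uniform(size=(m, 1)) ** (1.0 / n)

# Fraction of uniform draws outside B_r: close to 1 - r^n, i.e. almost 1.
frac_uniform = (np.linalg.norm(y, axis=1) > r).mean()

# Mixup with the origin: x = lambda * y with lambda ~ Uniform([0, 1]).
x = rng.uniform(size=(m, 1)) * y
# Fraction of mixed draws outside B_r: close to 1 - r, i.e. about 0.1.
frac_mixup = (np.linalg.norm(x, axis=1) > r).mean()
```

With $r = 0.9$ and $n = 100$, essentially all uniform draws fall outside ${\mathcal{B}}_{r}$, while only roughly a tenth of the mixed draws do.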
258
+
259
+ ![019638eb-31fa-7503-90a1-2ee65059544d_10_142_1253_1416_532_0.jpg](images/019638eb-31fa-7503-90a1-2ee65059544d_10_142_1253_1416_532_0.jpg)
260
+
261
+ Figure 4: Illustrative example showing that uniformly distributed draws on a unit sphere in ${\mathbb{R}}^{n}$ concentrate on its boundary for large $n$ . Applying mixup populates the interior of the space.
262
+
263
+ ## B PROOF-OF-CONCEPT EVALUATION
264
+
265
+ We start by describing the approach we employ to generate data containing the properties required by our evaluation. Denote a design matrix by ${X}_{N \times D}$ such that each of its $N$ rows corresponds to a feature vector within ${\mathbb{R}}^{D}$ . In order to ensure the data lies in some manifold, we first obtain a low-dimensional synthetic design matrix given by ${X}_{N \times d}^{\prime }$ , where each entry is sampled randomly from Uniform $\left( \left\lbrack {-{10},{10}}\right\rbrack \right)$ . We then expand it to ${\mathbb{R}}^{D}$ by applying the following transformation:
266
+
267
+ $$
268
+ X = {X}^{\prime }A \tag{8}
269
+ $$
270
+
271
+ ---
272
+
273
+ ${}^{2}$ Similar conclusions hold for any fixed point within $\mathcal{B}$ . The origin is chosen for convenience.
274
+
275
+ ---
276
+
277
+ where the expansion matrix ${A}_{d \times D}$ is such that each of its entries is independently drawn from Uniform $\left( \left\lbrack {0,1}\right\rbrack \right)$ . Throughout our experiments, $d = \lfloor {0.3D}\rfloor$ was employed.
278
+
279
+ Target values for the function $f$ to be approximated are defined as sums of functions of scalar arguments applied independently over each dimension. We thus select a set of dimensions $M \subseteq \left\lbrack D\right\rbrack$ with respect to which $f$ is to be monotonic, i.e.:
280
+
281
+ $$
282
+ f\left( x\right) = \mathop{\sum }\limits_{{i \in M}}{g}_{i}\left( {x}_{i}\right) + \mathop{\sum }\limits_{{j \in \bar{M}}}{h}_{j}\left( {x}_{j}\right) , \tag{9}
283
+ $$
284
+
285
+ and every ${g}_{i} : \mathbb{R} \mapsto \mathbb{R}$ is monotonically increasing, while every ${h}_{j} : \mathbb{R} \mapsto \mathbb{R}$ is not monotonic.
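As an illustration, one admissible instance of Eq. 9 takes ${g}_{i}\left( t\right) = t + {t}^{3}$ (strictly increasing) and ${h}_{j}\left( t\right) = \sin \left( t\right)$ (non-monotonic); these specific choices are our own example, not prescribed by the construction:

```python
import numpy as np

def make_target(M, D):
    """Build a target f from Eq. 9: monotonic in dimensions M, not elsewhere.

    Hypothetical choices: g_i(t) = t + t^3 (increasing), h_j(t) = sin(t).
    """
    M = set(M)

    def f(x):
        mono = sum(x[i] + x[i] ** 3 for i in M)          # monotonic terms
        rest = sum(np.sin(x[j]) for j in range(D) if j not in M)  # non-monotonic terms
        return mono + rest

    return f
```

By construction, increasing any coordinate in $M$ while holding the others fixed can only increase $f$.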
286
+
287
+ We then create two evaluation datasets. One of them, referred to as the validation set, is identically distributed with respect to $X$ , since it is obtained following the same procedure discussed above. In order to simulate covariate-shift, we create a test set by changing the expansion matrix $A$ to a different one:
288
+
289
+ $$
290
+ {X}_{\text{val }} = {X}_{\text{val }}^{\prime }A,\;{X}_{\text{test }} = {X}_{\text{test }}^{\prime }{A}_{\text{test }}, \tag{10}
291
+ $$
292
+
293
+ where ${A}_{\text{test }}$ is given by entry-wise linear interpolation between $A$ , used to generate the training data, and a newly sampled expansion matrix ${A}^{\prime } : {A}_{\text{test }} = \alpha {A}^{\prime } + \left( {1 - \alpha }\right) A$ . The parameter $\alpha \in \left\lbrack {0,1}\right\rbrack$ , set to 0.8 in the reported evaluation, controls the shift between $A$ and ${A}_{\text{test }}$ in terms of the Frobenius norm, which in turn enables control of how much the test set shifts relative to the training data.
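The generation procedure of Eqs. 8-10 can be sketched as follows; the sizes and random seed are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 1000, 100
d = int(np.floor(0.3 * D))   # low-dimensional factor size, d = floor(0.3 D)
alpha = 0.8                  # covariate-shift strength

# Low-dimensional factors expanded to R^D (Eq. 8): X = X' A.
X_low = rng.uniform(-10, 10, size=(N, d))
A = rng.uniform(0, 1, size=(d, D))
X_train = X_low @ A

# Validation uses the same expansion matrix; the test set interpolates
# towards a freshly sampled matrix A' (Eq. 10).
A_new = rng.uniform(0, 1, size=(d, D))
A_test = alpha * A_new + (1 - alpha) * A
X_val = rng.uniform(-10, 10, size=(N, d)) @ A
X_test = rng.uniform(-10, 10, size=(N, d)) @ A_test
```

Setting `alpha = 0` recovers the in-distribution case, while larger values increase the Frobenius distance between the two expansion matrices.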
294
+
295
+ We thus trained models to approximate $f$ for spaces of increasing dimensions as well as for an increasing number of dimensions with respect to which $f$ is monotonic. Results are reported in Table 5 in terms of RMSE on the two evaluation datasets, and in terms of monotonicity in Table 6 where $\widehat{\rho }$ is computed both on random points and on the shifted test set. Entries in the tables correspond to the centers of 95% confidence intervals resulting from 20 independent training runs.
296
+
297
+ We highlight the following two observations regarding the prediction performances shown in Table 5: different models present consistent performances across evaluations, which suggests the different monotonicity-enforcing penalties do not significantly affect prediction accuracy. Moreover, the proposed approach used to generate test data under covariate-shift is effective, given the gap in performance consistently observed between the validation and the test partitions. In terms of monotonicity, results in Table 6 suggest that ${\Omega }_{\text{random }}$ and ${\Omega }_{\text{train }}$ are only effective on either random or data points, an effect that worsens as the dimension $D$ grows. ${\Omega }_{\text{mixup }}$ , on the other hand, is effective on both sets of points, and continues to work well for growing $D$ . Furthermore, covariate-shift significantly affects ${\Omega }_{\text{train }}$ in higher-dimensional cases, while ${\Omega }_{\text{mixup }}$ performs well in such a case.
298
+
299
+ <table><tr><td rowspan="2">$\left| M\right| /D$</td><td colspan="2">20/100</td><td colspan="2">40/200</td><td colspan="2">80/400</td><td colspan="2">100/500</td></tr><tr><td>Valid. RMSE</td><td>Test RMSE</td><td>Valid. RMSE</td><td>Test RMSE</td><td>Valid. RMSE</td><td>Test RMSE</td><td>Valid. RMSE</td><td>Test RMSE</td></tr><tr><td>Non-mon.</td><td>0.007</td><td>0.107</td><td>0.006</td><td>0.082</td><td>0.007</td><td>0.087</td><td>0.011</td><td>0.146</td></tr><tr><td>${\Omega }_{random}$</td><td>0.008</td><td>0.117</td><td>0.006</td><td>0.081</td><td>0.007</td><td>0.093</td><td>0.012</td><td>0.125</td></tr><tr><td>${\Omega }_{\text{train }}$</td><td>0.008</td><td>0.115</td><td>0.006</td><td>0.086</td><td>0.007</td><td>0.089</td><td>0.012</td><td>0.134</td></tr><tr><td>${\Omega }_{mixup}$</td><td>0.008</td><td>0.114</td><td>0.007</td><td>0.084</td><td>0.008</td><td>0.088</td><td>0.012</td><td>0.134</td></tr></table>
300
+
301
+ Table 5: Prediction performance of models trained on generated data in spaces of growing dimension ($D$) and number of monotonic dimensions $\left( \left| M\right| \right)$ . Different regularization strategies do not affect prediction performance. The performance gap consistently observed across the evaluation sets highlights the shift between the two sets of points. The lower the values of RMSE the better.
302
+
303
+ ## C DATASETS, MODELS, AND TRAINING DETAILS FOR EXPERIMENTS REPORTED IN SECTION 3.2
304
+
305
+ Algorithm 1 describes a procedure used to compute the proposed regularization ${\Omega }_{\text{mixup }}$ .
306
+
307
+ <table><tr><td rowspan="2">$\left| M\right| /D$</td><td colspan="2">20/100</td><td colspan="2">40/200</td><td colspan="2">80/400</td><td colspan="2">100/500</td></tr><tr><td>${\widehat{\rho }}_{random}$</td><td>${\widehat{\rho }}_{test}$</td><td>${\widehat{\rho }}_{random}$</td><td>${\widehat{\rho }}_{\text{test }}$</td><td>${\widehat{\rho }}_{random}$</td><td>${\widehat{\rho }}_{\text{test }}$</td><td>${\widehat{\rho }}_{random}$</td><td>${\widehat{\rho }}_{test}$</td></tr><tr><td>Non-mon.</td><td>99.90%</td><td>99.99%</td><td>97.92%</td><td>94.96%</td><td>98.47%</td><td>96.56%</td><td>93.98%</td><td>90.01%</td></tr><tr><td>${\Omega }_{random}$</td><td>0.00%</td><td>3.49%</td><td>0.00%</td><td>4.62%</td><td>0.01%</td><td>11.36%</td><td>0.02%</td><td>19.90%</td></tr><tr><td>${\Omega }_{\text{train }}$</td><td>1.30%</td><td>0.36%</td><td>4.00%</td><td>0.58%</td><td>9.67%</td><td>0.25%</td><td>9.25%</td><td>5.57%</td></tr><tr><td>${\Omega }_{mixup}$</td><td>0.00%</td><td>0.35%</td><td>0.00%</td><td>0.44%</td><td>0.00%</td><td>0.26%</td><td>0.00%</td><td>0.42%</td></tr></table>
308
+
309
+ Table 6: Fraction of non-monotonic points $\widehat{\rho }$ for models trained on generated data in spaces of growing dimension ($D$) and number of monotonic dimensions $\left( \left| M\right| \right)$ . Each of ${\Omega }_{\text{random }}$ and ${\Omega }_{\text{train }}$ is effective on only one of ${\widehat{\rho }}_{\text{random }}$ or ${\widehat{\rho }}_{\text{test }}$ , while ${\Omega }_{\text{mixup }}$ is effective throughout conditions. The lower the values of $\widehat{\rho }$ the better.
310
+
311
+ Algorithm 1 Procedure to compute ${\Omega }_{\text{mixup }}$ .
312
+
313
+ ---
314
+
315
+ Input mini-batch ${X}_{\left\lbrack N \times d\right\rbrack }$ , model $h$ , monotonic dimensions $M$
316
+
317
+ ${X}_{\Omega } = \{ \}$ # Initialize set of points used to compute regularizer.
318
+
319
+ ${\widetilde{X}}_{\left\lbrack N \times d\right\rbrack } \sim \operatorname{Uniform}\left( {\mathcal{X}}^{N}\right)$ # Sample random mini-batch with size $N$ .
320
+
321
+ $\widehat{X} = \operatorname{concat}\left( {X,\widetilde{X}}\right)$ # Concatenate data and random batches.
322
+
323
+ repeat
324
+
325
+ $i, j \sim \operatorname{Uniform}\left( {\{ 1,2,\ldots ,{2N}{\} }^{2}}\right)$ # Sample random pair of points.
326
+
327
+ $\lambda \sim \operatorname{Uniform}\left( \left\lbrack {0,1}\right\rbrack \right)$
328
+
329
+ $x = \lambda {\widehat{X}}^{i} + \left( {1 - \lambda }\right) {\widehat{X}}^{j}$ # Mix random pair.
330
+
331
+ ${X}_{\Omega }$.add($x$) # Add $x$ to set of regularization points.
332
+
333
+ until Maximum number of pairs reached
334
+
335
+ ${\Omega }_{\text{mixup }}\left( {h, M}\right) = \frac{1}{\left| {X}_{\Omega }\right| }\mathop{\sum }\limits_{{x \in {X}_{\Omega }}}\mathop{\sum }\limits_{{i \in M}}\max {\left( 0, - \frac{\partial h\left( x\right) }{\partial {x}_{i}}\right) }^{2}$
336
+
337
+ return ${\Omega }_{\text{mixup }}$
338
+
339
+ ---
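Algorithm 1 can be sketched as follows. For simplicity, the sketch evaluates $\frac{\partial h\left( x\right) }{\partial {x}_{i}}$ by central finite differences, whereas in practice automatic differentiation would be used; the function and parameter names are illustrative:

```python
import numpy as np

def omega_mixup(h, X, M, domain=(0.0, 1.0), n_pairs=64, eps=1e-5, rng=None):
    """Sketch of Algorithm 1 for a scalar model h mapping (d,) arrays to floats.

    X : (N, d) mini-batch; M : indices of monotonic dimensions.
    """
    rng = np.random.default_rng(rng)
    N, d = X.shape
    # Random mini-batch of the same size, concatenated with the data batch.
    X_rand = rng.uniform(domain[0], domain[1], size=(N, d))
    X_hat = np.concatenate([X, X_rand], axis=0)
    penalty = 0.0
    for _ in range(n_pairs):
        i, j = rng.integers(0, 2 * N, size=2)        # sample random pair
        lam = rng.uniform()
        x = lam * X_hat[i] + (1 - lam) * X_hat[j]    # mix the pair
        for k in M:
            e = np.zeros(d)
            e[k] = eps
            grad_k = (h(x + e) - h(x - e)) / (2 * eps)  # ~ dh/dx_k
            penalty += max(0.0, -grad_k) ** 2           # hinge on negative slopes
    return penalty / n_pairs
```

A model that is increasing in all dimensions of $M$ over the sampled points incurs zero penalty; negative partial derivatives are penalized quadratically.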
340
+
341
+ In Table 7, we list details on the three datasets used to evaluate our proposals as reported in Section 3.2.
342
+
343
+ <table><tr><td>Dataset</td><td>$\operatorname{Dim}\left\lbrack \mathcal{X}\right\rbrack$</td><td>M</td><td>#Train</td><td>#Test</td><td>Task</td></tr><tr><td>Compass</td><td>13</td><td>4</td><td>4937</td><td>1235</td><td>Classification</td></tr><tr><td>Loan Lending Club ${}^{4}$</td><td>33</td><td>11</td><td>8500</td><td>1500</td><td>Regression</td></tr><tr><td>Blog Feedback5</td><td>280</td><td>8</td><td>47287</td><td>6904</td><td>Regression</td></tr></table>
344
+
345
+ Table 7: Description of datasets used for empirical evaluation.
346
+
347
+ Models follow the architecture in [Liu et al., 2020], using dense layers whose weights are kept separate in early layers for the input dimensions with respect to which monotonicity is to be enforced. We set the depth of all networks to 3, and use a bottleneck of size 10 for two datasets (Compas and Loan Lending Club), and of size 100 for the Blog Feedback dataset and the experiments on generated data. Training is carried out with the Adam optimizer [Kingma and Ba, 2014] with a global learning rate of 5e-3, and $\gamma$ is set to 1e4. The training batch size is set to 256 throughout experiments.
348
+
349
+ ## D MODELS AND TRAINING DETAILS FOR EXPERIMENTS REPORTED IN SECTION 4
350
+
351
+ For the case of CIFAR-10, WideResNets [Zagoruyko and Komodakis, 2016] are used. The models are initialized randomly and trained both with and without the monotonicity penalty. Standard stochastic gradient descent (SGD) implements the parameter update rule, with a learning rate starting at 0.1 and decreased by a factor of 10 at epochs 10, 150, 250, and 350. Training is carried out for a total of 600 epochs with a batch size of 64. For ImageNet, on the other hand, training consists of fine-tuning a pre-trained ResNet-50, where the fine-tuning phase includes the monotonicity penalty. We do so by training
352
+
353
+ ---
354
+
355
+ ${}^{3}$ https://www.kaggle.com/danofer/compass
356
+
357
+ ${}^{4}$ https://www.openintro.org/data/index.php?data=loans_full_schema
358
+
359
+ ${}^{5}$ https://archive.ics.uci.edu/ml/datasets/BlogFeedback
360
+
361
+ ---
362
+
363
+ <table><tr><td>Model</td><td>${\operatorname{argmax}}_{k \in \mathcal{Y}}h{\left( x\right) }_{k}$</td><td>${\operatorname{argmax}}_{k \in \mathcal{Y}}{T}_{k}\left( x\right)$</td></tr><tr><td colspan="3">10%</td></tr><tr><td>WideResNet</td><td>85.68%</td><td>16.35%</td></tr><tr><td>MonoWideResNet</td><td>85.77%</td><td>82.21%</td></tr><tr><td colspan="3">30%</td></tr><tr><td>WideResNet</td><td>92.12%</td><td>14.51%</td></tr><tr><td>MonoWideResNet</td><td>92.42%</td><td>88.88%</td></tr><tr><td colspan="3">60%</td></tr><tr><td>WideResNet</td><td>94.51%</td><td>10.08%</td></tr><tr><td>MonoWideResNet</td><td>94.86%</td><td>93.81%</td></tr></table>
364
+
365
+ Table 8: Top-1 accuracy obtained by both standard and group monotonic models on sub-samples of CIFAR-10. Prediction performance obtained by classifiers defined by the total activations is upper bounded by the performance obtained at the output layer for monotonic models.
366
+
367
+ the model for 30 epochs on the full ImageNet training partition. In this case, given that the label set $\mathcal{Y}$ is relatively large, using the standard ResNet-50 would result in small slices ${S}_{k}$ . To avoid that, we add an extra final convolutional layer with $W = {15K}$ . Training is once more carried out with SGD, using a learning rate set to 0.001 in this case and reduced by a factor of 5 at epoch 20. In both cases, the group monotonicity property is enforced at the last convolutional layer. Other hyperparameters such as the strength $\gamma$ of the monotonicity penalty and the inverse temperature $\mu$ used to compute ${\Omega }_{\text{group }}$ are set to 1 and 50 for CIFAR-10, and to 5 and 10 for ImageNet. Both momentum and weight decay are further employed, with their corresponding parameters set to 0.9 and 0.0001. For MNIST classifiers, training is performed for 20 epochs using a batch size of 64 and the Adadelta optimizer [Zeiler, 2012] with a learning rate of 1.0.
368
+
369
+ ## E ENFORCING GROUP MONOTONICITY UNDER SMALL SAMPLES
370
+
371
+ Using CIFAR-10, we further evaluate how the proposed group monotonicity penalty behaves in data-constrained settings, i.e., we check whether or not the property can be enforced under small sample regimes. We do so by sub-sampling the original training data, randomly selecting a fraction of the training images uniformly across classes. We then train the same WideResNet for the same computation budget, in terms of number of iterations, as the models trained on the complete set of images. The learning rate schedule also matches that of the training on the full dataset, in that the learning rate is reduced at exactly the same iterations across all training cases. Results are reported in Table 8 for sub-samples corresponding to 10%, 30%, and 60% of CIFAR-10. Results are consistent across the three settings in showing that predictions obtained from the total activation of feature slices approximate the prediction performance of the underlying model for group monotonic predictors, i.e., the extent to which the underlying model is able to accurately predict correct classes upper bounds the resulting "level of monotonicity". In simple terms, the better the classifier, the more group monotonic it can be made.
372
+
373
+ ## F SELECTING FEATURE MAPS TO COMPUTE VISUAL EXPLANATIONS
374
+
375
+ Approaches based on Class Activation Maps (CAM) such as Grad-CAM and its variations [Selvaraju et al., 2017, Chattopadhay et al., 2018] seek to extract explanations from convolutional models. By explanation we refer to indications of which properties of the data imply the predictions of a given model. Under such a framework, one can obtain so-called explanation heat-maps through the following steps: (1) Compute a weighted sum of the activations of feature maps in a chosen layer; (2) Upscale the result to match the dimensions of the input data; (3) Superimpose the result onto the input data. Specifically for the case of image data, following those steps highlights the patches of the input that were deemed relevant to yield the observed predictions. Different approaches have been introduced to define the weights used in the first step. A very common choice is to use the total gradient of the output corresponding to the prediction with respect to the activations of each feature map.
376
+
377
+ For the case of group monotonic classifiers, we are interested in verifying whether one can define useful explanation heat-maps by considering only the feature slices corresponding to the predicted class, i.e., for a given input pair $(x, y)$, we compute explanation heat-maps considering only its corresponding feature activation slice ${S}_{y}\left( x\right)$. We thus design an experiment to evaluate the effectiveness of such an approach by using external auxiliary classifiers to perform predictions from test data that was occluded using explanation heat-maps obtained with different models and sets of representations. In other words, we use the explanation maps to remove from the data the parts that were not indicated as relevant. We then assume that good explanation maps will be such that classifiers are able to correctly classify occluded data, since relevant patches are preserved. In more detail, occlusions are computed by first applying a CAM operator given a model $h$ and data $x$, which results in a heat-map with entries in $\left\lbrack {0,1}\right\rbrack$. We then use such a heat-map as a multiplicative mask to get an occluded version of $x$, denoted ${x}^{\prime }$, i.e.:
378
+
379
+ $$
380
+ {x}^{\prime } = \operatorname{CAM}\left( {x, h}\right) \circ x, \tag{11}
381
+ $$
382
+
383
+ ![019638eb-31fa-7503-90a1-2ee65059544d_14_293_177_1160_545_0.jpg](images/019638eb-31fa-7503-90a1-2ee65059544d_14_293_177_1160_545_0.jpg)
384
+
385
+ Figure 5: Example of explanation heat-map and corresponding occlusion obtained with Grad-CAM and a ResNet-50 trained on ImageNet. The example belongs to the validation set and corresponds to the class snowmobile.
386
+
387
+ <table><tr><td rowspan="2">Model(h)</td><td colspan="4">Aux. classifier</td></tr><tr><td>ResNext-50</td><td>MobileNet-v3</td><td>VGG-16</td><td>SqueezeNet</td></tr><tr><td>Reference perf.</td><td>77.62%</td><td>74.04%</td><td>71.59%</td><td>58.09%</td></tr><tr><td>ResNet-50</td><td>72.94%</td><td>68.31%</td><td>67.34%</td><td>49.95%</td></tr><tr><td>MonoResNet-50</td><td>72.88%</td><td>68.75%</td><td>66.99%</td><td>48.92%</td></tr><tr><td>MonoResNet-50 (Constrained)</td><td>72.44%</td><td>66.55%</td><td>66.92%</td><td>45.83%</td></tr></table>
388
+
389
+ Table 9: Top-1 accuracy of auxiliary classifiers evaluated on data created by occluding patches deemed irrelevant by explanation heat-maps given by different models. The performance of monotonic classifiers when constrained to consider only the feature maps within the slice corresponding to their prediction is further reported and shown to closely match the performance of cases where the full set of features is considered.
390
+
391
+ where the operator $\circ$ indicates element-wise multiplication. An example of such a procedure is shown in Figure 5. We apply the above procedure to all of the validation data, and use the resulting points to assess the prediction performance of auxiliary classifiers.
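The occlusion step of Eq. (11) amounts to broadcasting the heat-map over the image channels and multiplying element-wise; a minimal NumPy sketch (function name ours):

```python
import numpy as np

def occlude(x, heatmap):
    """Occlusion as in Eq. (11): the CAM heat-map, with entries in
    [0, 1], acts as a multiplicative mask on the input, keeping only
    the patches the explanation deems relevant."""
    heatmap = np.clip(heatmap, 0.0, 1.0)
    # Broadcast a (H, W) map over the channel axis of a (H, W, C) image.
    if x.ndim == 3 and heatmap.ndim == 2:
        heatmap = heatmap[..., None]
    return heatmap * x
```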
392
+
393
+ Explanation maps are computed using the same models discussed in Section 4.2.1 for ImageNet. The CAM operator corresponds to a variation of Grad-CAM++ [Chattopadhay et al., 2018] where the model activations are directly employed for weighting feature maps rather than the gradients. We consider 4 auxiliary pre-trained classifiers corresponding to ResNext-50 [Xie et al., 2017], MobileNet-v3 [Howard et al., 2019], VGG-16 [Simonyan and Zisserman, 2014], and SqueezeNet [Iandola et al., 2016]. Results are reported in Table 9, which also includes the reference performance of the auxiliary classifiers on the standard validation set in order to give an idea of the performance gap resulting from removing parts of test images via occlusion. We highlight the performance reported in the last row of the table. In that case, explanation maps for the group monotonic model are computed from only the features of the class slice, which is enough to match the performance of a standard ResNet-50 with full access to the features. This suggests that the representations learned by group monotonic models are such that all the information required to explain a given class is contained in the slice reserved for that class.
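An activation-weighted CAM of the kind described above can be sketched as follows. This is a hedged stand-in for the Grad-CAM++ variant used in the paper, not the authors' exact operator: each feature map is weighted by its own mean activation instead of a gradient term, the weighted sum is rectified, and the result is rescaled to [0, 1].

```python
import numpy as np

def activation_cam(feature_maps):
    """Sketch of an activation-weighted CAM.
    feature_maps: array of shape (K, H, W), the activations of the
    chosen layer (or of a single class slice for monotonic models)."""
    weights = feature_maps.mean(axis=(1, 2))             # one scalar per map
    cam = np.maximum(0.0, np.tensordot(weights, feature_maps, axes=1))
    cam -= cam.min()                                     # rescale to [0, 1]
    if cam.max() > 0:
        cam /= cam.max()
    return cam
```

For a group monotonic classifier, restricting `feature_maps` to the slice ${S}_{y}$ of the predicted class gives the constrained variant reported in the last row of Table 9.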
394
+
395
+ ![019638eb-31fa-7503-90a1-2ee65059544d_15_187_273_1374_1394_0.jpg](images/019638eb-31fa-7503-90a1-2ee65059544d_15_187_273_1374_1394_0.jpg)
396
+
397
+ Figure 6: Examples of explanation heat-maps superimposed onto images. From left to right we have the original image, results obtained from a ResNet-50, a monoResNet-50, and a monoResNet-50 where the CAM operator only accesses the slice corresponding to the underlying class. All are obtained with Grad-CAM.
398
+
399
+ ![019638eb-31fa-7503-90a1-2ee65059544d_16_189_398_1372_1393_0.jpg](images/019638eb-31fa-7503-90a1-2ee65059544d_16_189_398_1372_1393_0.jpg)
400
+
401
+ Figure 7: Examples of occluded data using explanation heat-maps. From left to right we have the original image, results obtained from a ResNet-50, a monoResNet-50, and a monoResNet-50 where the CAM operator only accesses the slice corresponding to the underlying class. All are obtained with Grad-CAM.
402
+
403
+ ![019638eb-31fa-7503-90a1-2ee65059544d_17_647_171_450_449_0.jpg](images/019638eb-31fa-7503-90a1-2ee65059544d_17_647_171_450_449_0.jpg)
404
+
405
+ Figure 8: HUE circle of RGB images. Original image from: https://en.wikipedia.org/wiki/Hue
406
+
407
+ ## H ANALYSIS OF COLOR SEQUENCES FOR GENERATED DATA
408
+
409
+ We performed a set of experiments in order to evaluate whether some kind of ordering could be observed once we generate data for increasing values of $z$ , specifically on dimensions that correspond to colors. To do that, we created an increasing sequence of values by defining a uniform grid in $\left\lbrack {0,1}\right\rbrack$ with 50 steps. We then encoded a particular image, but decoded latent vectors after substituting the $z$ value in the dimension corresponding to floor color by the values in the sequence.
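The traversal procedure above can be sketched as follows; `encode` and `decode` are hypothetical hooks into the trained model's encoder and decoder, and the function name is ours.

```python
import numpy as np

def traverse_dimension(encode, decode, x, dim, steps=50):
    """Encode an image once, then decode copies of its latent code
    with the chosen dimension replaced by values on a uniform grid
    over [0, 1], yielding a sequence of generated images."""
    z = encode(x)
    images = []
    for value in np.linspace(0.0, 1.0, steps):
        z_mod = z.copy()
        z_mod[dim] = value   # substitute only the targeted factor
        images.append(decode(z_mod))
    return images
```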
410
+
411
+ Generated sequences of images are shown in Figures 9 and 10 for the base and monotonic models, respectively. In each case, we plot the images on the left, and bottom-left patches of size ${10} \times {10}$ so as to highlight the color sequences that we observe with such an approach. Surprisingly, we observed that monotonic models tend to generate colors in a sequence that matches the HUE circle for RGB images, represented in Figure 8 for reference. Besides visually verifying that to be the case across a number of generated examples, in Table 2 in Section 4.1 we check the fraction of the dataset for which such sequences of patches are sorted in terms of their HUE angles.
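The sortedness check on HUE angles can be sketched with the standard library's `colorsys`; this simple version assumes each patch is summarized by its mean RGB color and ignores wrap-around of the hue circle.

```python
import colorsys

def hues_sorted(patch_means):
    """Check whether a sequence of mean-RGB patch colors (tuples in
    [0, 1]^3) has monotonically non-decreasing HSV hue angles, i.e.
    follows the hue circle in order."""
    hues = [colorsys.rgb_to_hsv(*rgb)[0] for rgb in patch_means]
    return all(a <= b for a, b in zip(hues, hues[1:]))
```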
412
+
413
+ ![019638eb-31fa-7503-90a1-2ee65059544d_17_238_1205_1267_650_0.jpg](images/019638eb-31fa-7503-90a1-2ee65059544d_17_238_1205_1267_650_0.jpg)
414
+
415
+ Figure 9: Data generated by standard model for traversals of $z$ on the dimension corresponding to floor color
416
+
417
+ ![019638eb-31fa-7503-90a1-2ee65059544d_18_229_796_1280_661_0.jpg](images/019638eb-31fa-7503-90a1-2ee65059544d_18_229_796_1280_661_0.jpg)
418
+
419
+ Figure 10: Data generated by monotonic model for traversals of $z$ on the dimension corresponding to floor color
420
+
421
+ ![019638eb-31fa-7503-90a1-2ee65059544d_19_242_266_1272_1084_0.jpg](images/019638eb-31fa-7503-90a1-2ee65059544d_19_242_266_1272_1084_0.jpg)
422
+
423
+ Figure 11: Generating data by moving along the line passing over latent representation for inputs for which a single factor is different. Generative factor changing: floor color.
424
+
425
+ ![019638eb-31fa-7503-90a1-2ee65059544d_20_243_563_1273_1093_0.jpg](images/019638eb-31fa-7503-90a1-2ee65059544d_20_243_563_1273_1093_0.jpg)
426
+
427
+ Figure 12: Generating data by moving along the line passing over latent representation for inputs for which a single factor is different. Generative factor changing: wall color.
428
+
429
+ ![019638eb-31fa-7503-90a1-2ee65059544d_21_242_564_1272_1092_0.jpg](images/019638eb-31fa-7503-90a1-2ee65059544d_21_242_564_1272_1092_0.jpg)
430
+
431
+ Figure 13: Generating data by moving along the line passing over latent representation for inputs for which a single factor is different. Generative factor changing: object color.
432
+
433
+ ![019638eb-31fa-7503-90a1-2ee65059544d_22_243_564_1273_1092_0.jpg](images/019638eb-31fa-7503-90a1-2ee65059544d_22_243_564_1273_1092_0.jpg)
434
+
435
+ Figure 14: Generating data by moving along the line passing over latent representation for inputs for which a single factor is different. Generative factor changing: scale.
436
+
437
+ ![019638eb-31fa-7503-90a1-2ee65059544d_23_243_564_1271_1092_0.jpg](images/019638eb-31fa-7503-90a1-2ee65059544d_23_243_564_1271_1092_0.jpg)
438
+
439
+ Figure 15: Generating data by moving along the line passing over latent representation for inputs for which a single factor is different. Generative factor changing: shape.
440
+
441
+ ![019638eb-31fa-7503-90a1-2ee65059544d_24_240_564_1273_1092_0.jpg](images/019638eb-31fa-7503-90a1-2ee65059544d_24_240_564_1273_1092_0.jpg)
442
+
443
+ Figure 16: Generating data by moving along the line passing over latent representation for inputs for which a single factor is different. Generative factor changing: orientation.
444
+
UAI/UAI 2022/UAI 2022 Conference/BcLwrLLi5xq/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,293 @@
1
+ § MONOTONICITY REGULARIZATION: IMPROVED PENALTIES AND NOVEL APPLICATIONS TO DISENTANGLED REPRESENTATION LEARNING AND ROBUST CLASSIFICATION
2
+
3
+ § ABSTRACT
4
+
5
+ We study settings where gradient penalties are used alongside risk minimization with the goal of obtaining predictors satisfying different notions of monotonicity. Specifically, we present two sets of contributions. In the first part of the paper, we show that different choices of penalties define the regions of the input space where the property is observed. As such, previous methods result in models that are monotonic only in a small volume of the input space. We thus propose an approach that uses mixtures of training instances and random points to populate the space and enforce the penalty in a much larger region. As a second set of contributions, we introduce regularization strategies that enforce other notions of monotonicity in different settings. In this case, we consider applications, such as image classification and generative modeling, where monotonicity is not a hard constraint but can help improve some aspects of the model. Namely, we show that inducing monotonicity can be beneficial in applications such as: (1) allowing for controllable data generation, (2) defining strategies to detect anomalous data, and (3) generating explanations for predictions. Our proposed approaches introduce no significant computational overhead and lead to efficient procedures that provide extra benefits over baseline models.
6
+
7
+ § 1 INTRODUCTION
8
+
9
+ Highly expressive model classes such as neural networks have achieved impressive prediction performance across a broad range of supervised learning tasks and domains [Krizhevsky et al., 2012, Graves and Jaitly, 2014, Bahdanau et al., 2014]. However, finding predictors attaining low risk on unseen data is often not enough to enable the use of such models in practice. In fact, practical applications usually have requirements beyond prediction accuracy. Hence, devising approaches that search for risk minimizers satisfying practical needs has led to several research threads seeking to enable the use of neural networks in real-life scenarios. Examples of such requirements include: (1) Robustness, where low risk is expected even if the model is evaluated under distribution shifts, (2) Fairness, where the performance of the model is expected to not significantly change across data sub-populations, and (3) Explainability/Interpretability, where models are expected to indicate how the features of the data imply their predictions.
10
+
11
+ In addition to the requirements mentioned above, a property commonly expected of trained models in certain applications is monotonicity with respect to some subset of the input dimensions. That is, an increase (or decrease) along some particular dimensions implies that the function value will not decrease (or will not increase), provided that all other dimensions are kept fixed. As a result, the behavior of monotonic models will be more aligned with the properties that the data under consideration is believed to satisfy. For example, in the case of models used to accept/reject job applications, we expect acceptance scores to be monotonically non-decreasing with respect to features such as a candidate's past years of experience. Thus, given two applicants with exactly the same features except their years of experience, the more experienced candidate should be assigned an equal or higher chance of getting accepted. For applications where monotonicity is expected, a predictor that fails to satisfy this requirement would damage the user's confidence. As such, different strategies have been devised in order to enable training monotonic predictors. These approaches can be divided into two main categories:
12
+
13
+ Monotonicity by construction: In this case, focus lies on defining a model class that guarantees monotonicity in all of its elements Bakst et al. [2021], Wehenkel and Louppe [2019], Nguyen and Martínez [2019], You et al. [2017], Garcia and Gupta [2009], Archer and Wang [1993]. However, this approach cannot be used with general architectures. Additionally, the model class can be constrained to the extent that it might affect the prediction performance.
14
+
15
+ Monotonicity via regularization: This approach is based on searching for monotonic candidates within a general class of models [Liu et al., 2020, Sivaraman et al., 2020, Gupta et al., 2019]. This group of methods is more generally applicable and can be used, for instance, with any neural network architecture. However, such methods are not guaranteed to yield monotonic predictors unless extra verification/certification steps are performed, which can be computationally costly.
16
+
17
+ In addition to being a requirement as in the examples discussed above, monotonicity has been also observed to be a useful feature in certain cases. For example, it can define an effective inductive bias and improve generalization in cases where prior knowledge indicates the data generating process satisfies such property [Dugas et al., 2001]. In such cases, however, it is not necessary to satisfy the property everywhere (i.e., in the bulk of the input space), since it is enforced simply as a desirable feature of trained models rather than a design specification.
18
+
19
+ This work comprises two complementary sets of contributions, and in both cases we tackle the problem of performing empirical risk minimization over rich classes of models such as neural networks, while simultaneously searching for monotonic predictors within the set of risk minimizing solutions. In further detail, our contributions are as follows:
20
+
21
+ 1. In Section 3, we identify a limitation in previous methods and show they only enforce monotonicity either near the training data or near the boundaries of the input space. Then, we propose an efficient algorithm that tackles this problem. In particular, we modify Mixup [Zhang et al., 2018] and use it to mix data with random noise. We show that doing so helps populate the interior of the input space. With extensive evaluation on synthetic data and benchmarks, we show that the proposed strategy enforces monotonicity in a larger volume relative to previous methods in the literature.
22
+
23
+ 2. In Section 4, we define different notions of monotonicity along with regularization penalties aimed at enforcing them. We show that doing so introduces useful properties in models used for applications such as generative modeling or object recognition, and does not compromise the original performance obtained without the penalties. Contrary to the discussion on the first part of the paper in Section 3, the monotonicity property is not required to be satisfied everywhere and, as such, constraints that focus only on the actual data points are proposed.
24
+
25
+ § 2 BACKGROUND AND RELATED WORK
26
+
27
+ We start by defining the notion of partial monotonicity used throughout the paper. Consider the standard supervised learning setting where data instances are observed in pairs $x,y \sim \mathcal{X} \times \mathcal{Y}$ , where $\mathcal{X} \subset {\mathbb{R}}^{d}$ and $\mathcal{Y} \subset \mathbb{R}$ correspond to the input and output spaces, respectively. Further, consider the differentiable functions $f : \mathcal{X} \mapsto \mathcal{Y}$ , and let $M$ indicate some subset of the input dimensions, i.e., $M \subset \{ 1,\ldots d\}$ , such that $x = \operatorname{concat}\left( {{x}_{M},{x}_{\bar{M}}}\right)$ , where $\bar{M} = \{ 1,\ldots ,d\} \smallsetminus M$ .
28
+
29
+ Definition 1 Partially monotonic functions relative to $M$ : We say $f$ is monotonically non-decreasing relative to $M$ , denoted ${f}_{M}$ , if $\mathop{\min }\limits_{{i \in M}}\frac{\partial f\left( x\right) }{\partial {x}_{i}} \geq 0,\forall x \in \mathcal{X}$ .
30
+
31
+ This definition covers functions that do not decrease in value given increasing changes along a subset of the input dimensions, provided that all other dimensions are kept unchanged. Several approaches have been introduced for defining model classes that have such a property. The simplest approach restricts the weights of the network to be non-negative [Archer and Wang, 1993]. However, doing so affects the prediction performance. Another approach corresponds to using lattice models [Garcia and Gupta, 2009, You et al., 2017]. In this case, models are given by interpolations in a grid defined by training data. Such a class of models can be made monotonic via the choice of the interpolation strategy, and recently introduced variations [Bakst et al., 2021] scale efficiently with the dimension of the input space, but downstream applications might still require different classes of models to satisfy this type of property. For neural networks, approaches such as [Nguyen and Martínez, 2019] reparameterize fully connected layers such that the gradients with respect to parameters can only be non-negative. Wehenkel and Louppe [2019], on the other hand, consider the class of predictors $H : \mathcal{X} \mapsto \mathcal{Y}$ of the form $H\left( x\right) = {\int }_{0}^{x}h\left( t\right) {dt} + H\left( 0\right)$, where $h\left( t\right)$ is a strictly positive mapping parameterized by a neural network. While such approaches guarantee monotonicity by design, they can be too restrictive or yield overly complicated learning procedures. For example, the approach in [Wehenkel and Louppe, 2019] requires backpropagating through the integral. An alternative approach is based on searching over general classes of models while assigning higher importance to predictors observed to be monotonic. Similar to the case of adversarial training [Goodfellow et al., 2014], Sivaraman et al. [2020] proposed an approach that finds counterexamples, i.e., pairs of points where the monotonicity constraint is violated, and includes them in the training data to enforce the monotonicity conditions in subsequent iterations of the model. However, this approach only supports fully-connected ReLU networks. Moreover, the procedure for finding the counterexamples is costly. Alternatively, Liu et al. [2020] and Gupta et al. [2019] introduced point-wise regularization penalties for enforcing monotonicity, where the penalties are estimated via sampling. While Liu et al. [2020] use uniform random draws, Gupta et al. [2019] apply the regularization penalty over the training instances. Both approaches have shortcomings that we seek to address.
32
+
33
+ § 3 AN EFFICIENT FIX FOR MONOTONICITY PENALTIES
34
+
35
+ Given the standard supervised learning setting where $\ell : {\mathcal{Y}}^{2} \mapsto {\mathbb{R}}^{ + }$ is a loss function indicating the goodness of the predictions relative to ground truth targets, the goal is to find a predictor $h \in \mathcal{H}$ such that its expected loss - or the so-called risk - over the input space is minimized. Such an approach yields the empirical risk minimization framework once a finite sample is used to estimate the risk. However, given the extra monotonicity requirement, we consider an augmented framework where such property is further enforced. We seek the optimal monotonic predictor relative to $M$, ${h}_{M}^{ * }$:
36
+
37
+ $$
38
+ {h}_{M}^{ * } \in \underset{h \in \mathcal{H}}{\arg \min }{\mathbb{E}}_{x,y \sim \mathcal{X} \times \mathcal{Y}}\left\lbrack {\ell \left( {h\left( x\right) ,y}\right) }\right\rbrack + {\gamma \Omega }\left( {h,M}\right) , \tag{1}
39
+ $$
40
+
41
+ where $\gamma$ is a hyperparameter weighing the importance of the penalty $\Omega \left( {h,M}\right)$ which, in turn, is a measure of how monotonic the predictor $h$ is relative to the dimensions indicated by $M$. $\Omega \left( {h,M}\right)$ can be defined by the following gradient penalty [Gupta et al., 2019, Liu et al., 2020]:
42
+
43
+ $$
44
+ \Omega \left( {h,M}\right) = {\mathbb{E}}_{x \sim \mathcal{D}}\left\lbrack {\mathop{\sum }\limits_{{i \in M}}\max {\left( 0, - \frac{\partial h\left( x\right) }{\partial {x}_{i}}\right) }^{2}}\right\rbrack , \tag{2}
45
+ $$
46
+
47
+ where $\frac{\partial h\left( x\right) }{\partial {x}_{i}}$ indicates the gradients of $h$ relative to the input dimensions $i \in M$, which are constrained to be non-negative, rendering $h$ monotonically non-decreasing relative to $M$. At this point, the only missing ingredient needed to define algorithms for estimating ${h}_{M}^{ * }$ is the choice of the distribution $\mathcal{D}$ over which the expectation in Eq. 2 is computed, which we discuss in the following sections.
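The penalty in Eq. 2 can be sketched numerically as follows. This is a hedged illustration using finite differences to approximate the partial derivatives; a real implementation would obtain the gradients via automatic differentiation, and the function and argument names are ours.

```python
import numpy as np

def monotonicity_penalty(h, X, monotone_dims, eps=1e-4):
    """Estimate Eq. (2) on a sample X: for each point and each
    dimension i in M, any negative partial derivative of h is squared;
    non-negative derivatives contribute zero. Derivatives are
    approximated with forward finite differences."""
    total = 0.0
    for x in X:
        for i in monotone_dims:
            x_plus = x.copy()
            x_plus[i] += eps
            grad_i = (h(x_plus) - h(x)) / eps   # ~ dh/dx_i
            total += max(0.0, -grad_i) ** 2
    return total / len(X)
```

For a function increasing in a dimension the penalty vanishes there; a decreasing direction contributes the squared (negative) slope.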
48
+
49
+ § 3.1 CHOOSING DISTRIBUTIONS OVER WHICH TO COMPUTE THE PENALTY
50
+
51
+ In the following, we present and discuss two past choices for $\mathcal{D}$ :
52
+
53
+ 1) Define $\mathcal{D}$ as the empirical distribution of the training sample: In [Gupta et al., 2019], given a training dataset of size $N$ , in addition to using the observed data to estimate the risk, the same data is used to compute the monotonicity penalty so that:
54
+
55
+ $$
56
+ {\Omega }_{\text{ train }}\left( {h,M}\right) = \frac{1}{N}\mathop{\sum }\limits_{{k = 1}}^{N}\mathop{\sum }\limits_{{i \in M}}\max {\left( 0, - \frac{\partial h\left( {x}^{k}\right) }{\partial {x}_{i}^{k}}\right) }^{2},
57
+ $$
58
+
59
+ where ${x}^{k}$ indicates the $k$ -th instance within the training sample. While this choice seems natural and can be easily implemented, it only enforces monotonicity in the region where the training samples lie, which can be problematic. For example, in case of covariate-shift, the test data might lie in parts of the space different from that of the training data so monotonicity cannot be guaranteed. We thus argue that one needs to enforce the monotonicity property in a region larger than what is defined by the training data. In Appendix B, we conduct an evaluation under domain shift and show the issue to become more and more relevant with the increase in the dimension $d$ of the input space $\mathcal{X}$ .
60
+
61
+ 2) Define $\mathcal{D} = \operatorname{Uniform}\left( \mathcal{X}\right)$ : In [Liu et al.,2020], a simple strategy is defined so that $\Omega$ is computed over the random points drawn uniformly across the entire input space $\mathcal{X}$ ; i.e.:
62
+
63
+ $$
64
+ {\Omega }_{\text{ random }}\left( {h,M}\right) = {\mathbb{E}}_{x \sim \mathrm{U}\left( \mathcal{X}\right) }\left\lbrack {\mathop{\sum }\limits_{{i \in M}}\max {\left( 0, - \frac{\partial h\left( x\right) }{\partial {x}_{i}}\right) }^{2}}\right\rbrack .
65
+ $$
66
+
67
+ Despite its simplicity and ease of use, this approach has some flaws. In high-dimensional spaces, random draws from any distribution of bounded variance will likely lie near the boundaries of the space, hence far from the regions where data actually lie. Moreover, it is commonly observed that naturally occurring high-dimensional data is structured in lower-dimensional manifolds (cf. [Fefferman et al., 2016] for an in-depth discussion of the manifold hypothesis). It is thus likely that random draws from the uniform distribution will lie nowhere near the regions of the space where training/testing data will be observed. We further illustrate the issue with examples in Appendix A, which can be summarized as follows: consider the case of the uniform distribution over the unit $n$-ball. In such a case, the probability of a random draw lying closer to the ball's surface than to its center is $P\left( {\parallel x{\parallel }_{2} > \frac{1}{2}}\right) = \frac{{2}^{n} - 1}{{2}^{n}}$, as given by the volume ratio of the two regions of interest. Note that $P\left( {\parallel x{\parallel }_{2} > \frac{1}{2}}\right) \rightarrow 1$ as $n \rightarrow \infty$, which suggests the approach in [Liu et al., 2020] will only enforce monotonicity at the boundaries.
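The concentration argument above is a one-line computation: the ball of radius $\frac{1}{2}$ has a $(1/2)^n$ fraction of the unit ball's volume, so the remaining mass $1 - (1/2)^n = \frac{2^n - 1}{2^n}$ lies closer to the surface. A tiny numeric check (function name ours):

```python
def boundary_mass(n):
    """Probability that a uniform draw from the unit n-ball lies
    closer to the surface than to the center: 1 - (1/2)^n, which
    tends to 1 as the dimension n grows."""
    return 1.0 - 0.5 ** n
```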
68
+
69
+ In summary, the previous approaches are either too focused on enforcing monotonicity where the training data lie, or too loose in that the monotonicity property is uniformly enforced across a large space while the actual data manifold may be neglected. We thus propose an alternative approach where we can have some control over the volume of the input space where the monotonicity property will be enforced. Our approach uses the idea of data mixup [Zhang et al., 2018, Verma et al., 2019, Chuang and Mroueh, 2021], where auxiliary data is created via interpolations of pairs of data points, to populate areas of the space that are otherwise disregarded. Mixup was introduced by Zhang et al. [2018] with the goal of training classifiers with smooth outputs across trajectories in the input space between instances of different classes. Given a pair of data points $\left( {{x}^{\prime },{y}^{\prime }}\right)$, $\left( {{x}^{\prime \prime },{y}^{\prime \prime }}\right)$, the method augments the training data using interpolations given by $\left( {\lambda {x}^{\prime } + \left( {1 - \lambda }\right) {x}^{\prime \prime },\lambda {y}^{\prime } + \left( {1 - \lambda }\right) {y}^{\prime \prime }}\right)$, where $\lambda \sim \operatorname{Uniform}\left( \left\lbrack {0,1}\right\rbrack \right)$. We propose a variation of this approach where data-data and noise-data pairs are mixed to define points where $\Omega$ can be estimated. We highlight the following motivations for doing so: (1) Interpolation of data points more densely populates the convex hull of the training data. (2) Mixup performed between data points and instances obtained at random (extrapolation) results in points that lie anywhere between the data manifold and the boundaries of the space. We thus claim that performing mixup enables the computation of $\Omega$ on parts of the space that are disregarded if one focuses only on either observed data or random draws from uninformed choices of distributions such as the uniform.
70
+
71
+ § 3.2 EVALUATION
72
+
73
+ In order to evaluate the effect of different choices of $\Omega$ , we report results on three commonly used datasets covering classification and regression settings with input spaces of different dimensions. Namely, we report results for the following datasets: Compas, Loan Lending Club, and Blog Feedback. Models are implemented using the same architecture as in [Liu et al., 2020]. Further details on the data, models, and training settings can be found in Appendix C. For all evaluation cases, we consider the baseline where training is carried out without any monotonicity enforcing penalty. For the regularized cases, the different approaches used for computing $\Omega$ are as follows:
74
+
75
+ (1) ${\Omega }_{\text{ random }}$ [Liu et al.,2020] which uses random points drawn from $\operatorname{Uniform}\left( \mathcal{X}\right)$ . In this case, the sample observed at each training iteration is set to a size of 1024 throughout all experiments.
76
+
77
+ (2) ${\Omega }_{\text{ train }}$ [Gupta et al.,2019] which uses the actual data observed at each training iteration; i.e., the observed mini-batch itself is used to compute $\Omega$ .
78
+
79
+ (3) ${\Omega }_{\text{ mixup }}$ (ours), in which case the penalty is computed on points generated by mixing up points from the training data and random points. In detail, for each mini-batch of size $N > 1$, we augment it with complementary random data and obtain a final mini-batch of size ${2N}$. Out of the $\frac{{2N}\left( {{2N} - 1}\right) }{2}$ possible pairs of points, we take a random subsample of 1024 pairs to compute mixtures of instances. In this case, we use $\lambda \sim \operatorname{Uniform}\left( \left\lbrack {0,1}\right\rbrack \right)$ and $\lambda$ is independently drawn for each pair of points.
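The construction of the points on which ${\Omega }_{\text{mixup}}$ is evaluated can be sketched as follows. This is a simplified illustration, not the authors' implementation: for brevity it samples pair indices with replacement rather than subsampling 1024 distinct pairs, and it assumes $\mathcal{X} = \left\lbrack 0,1\right\rbrack^{d}$ for the uniform noise.

```python
import numpy as np

def mixup_penalty_points(batch, n_pairs=1024, seed=0):
    """Double the mini-batch with uniform random points, draw random
    pairs from the pooled set, and mix each pair with its own
    lambda ~ Uniform([0, 1])."""
    rng = np.random.default_rng(seed)
    noise = rng.uniform(size=batch.shape)           # complementary random data
    pool = np.concatenate([batch, noise], axis=0)   # size 2N
    i = rng.integers(0, len(pool), size=n_pairs)
    j = rng.integers(0, len(pool), size=n_pairs)
    lam = rng.uniform(size=(n_pairs, 1))            # one lambda per pair
    return lam * pool[i] + (1.0 - lam) * pool[j]
```

The returned points are then fed to the gradient penalty in place of training data or pure random draws.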
80
+
81
+ Results are reported in terms of both prediction performance and level of monotonicity. The latter is assessed via the probability $\rho$ that a model does not satisfy Definition 1, which we estimate via the fraction $\widehat{\rho }$ of points within a sample where the monotonicity constraint is violated; i.e., given a set of $N$ data points, we compute:
82
+
83
+ $$
84
+ \widehat{\rho } = \frac{\mathop{\sum }\limits_{{k = 1}}^{N}\mathbb{1}\left\lbrack {\mathop{\min }\limits_{{i \in M}}\frac{\partial h\left( {x}^{k}\right) }{\partial {x}_{i}^{k}} < 0}\right\rbrack }{N}, \tag{3}
85
+ $$
86
+
87
+ such that $\widehat{\rho } = 0$ corresponds to monotonic models over the considered points. Moreover, in order to quantify the degree of monotonicity in different parts of the space, we estimate $\rho$ for 3 different sets of points: (1) ${\widehat{\rho }}_{\text{ random }}$, computed on a sample drawn according to $\operatorname{Uniform}\left( \mathcal{X}\right)$; we used a sample of 10,000 points throughout the experiments. (2) ${\widehat{\rho }}_{\text{ train }}$, computed on the training data. And (3) ${\widehat{\rho }}_{\text{ test }}$, computed on the test data. Results are summarized in Table 1 in terms of prediction performance along with the metric $\widehat{\rho }$ indicating the degree of monotonicity of the predictor for each regularization strategy. Prediction performance is measured in terms of accuracy for classification tasks, and RMSE for the case of regression. Results reported in the tables represent ${95}\%$ confidence intervals corresponding to 20 independent training runs. Across evaluations, different penalties do not result in significant variations in terms of prediction, but affect how monotonic trained models are.
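The estimator in Eq. 3 can be sketched as follows; `grad_h` is a hypothetical hook assumed to return the gradient of $h$ at a point (e.g., obtained via automatic differentiation).

```python
import numpy as np

def violation_fraction(grad_h, X, monotone_dims):
    """Eq. (3): the fraction of points at which the smallest partial
    derivative over the monotone dimensions M is negative; 0 means the
    model is monotonic over the considered points."""
    violations = 0
    for x in X:
        g = grad_h(x)
        if min(g[i] for i in monotone_dims) < 0:
            violations += 1
    return violations / len(X)
```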
88
+
89
+ This indicates that the class of predictors corresponding to the subset of $\mathcal{H}$ that is monotonic relative to $M$ , denoted ${\mathcal{H}}_{M}$ , has enough capacity so as to be able to match the performance of the best candidates within $\mathcal{H}$ . In terms of monotonicity, we observe a clear pattern leading to the following intuition: monotonicity is achieved in the regions where it is enforced. This is evidenced by the observation that ${\widehat{\rho }}_{\text{ random }}$ is consistently lower for ${\Omega }_{\text{ random }}$ relative to ${\Omega }_{\text{ train }}$ and ${\Omega }_{\text{ mixup }}$ while, on the other hand, ${\widehat{\rho }}_{\text{ train }}$ and ${\widehat{\rho }}_{\text{ test }}$ are consistently lower for ${\Omega }_{\text{ train }}$ and ${\Omega }_{\text{ mixup }}$ compared to ${\Omega }_{\text{ random }}$ . A comparison between ${\Omega }_{\text{ train }}$ and ${\Omega }_{\text{ mixup }}$ shows what we anticipated: enforcing monotonicity in points resulting from mixup yields predictors that are as monotonic as those given by the use of ${\Omega }_{\text{ train }}$ in actual data, but significantly better at the boundaries of $\mathcal{X}$ . Finally, the results demonstrate that our proposed approach ${\Omega }_{\text{ mixup }}$ achieves the best results in terms of monotonicity for all the sets of points that we considered. Moreover, our approach introduces no significant computational overhead. Algorithm 1 in Appendix C presents details on how to compute ${\Omega }_{\text{ mixup }}$ .
90
+
91
+ § 4 APPLICATIONS OF MONOTONICITY PENALTIES
92
+
93
+ In Section 3, we presented an efficient approach to enforce monotonicity when it is a requirement. We now consider a different perspective and show that adding monotonicity constraints during training can yield extra benefits to trained models. In these cases, monotonicity is not a requirement, and hence it is not necessary for it to be satisfied everywhere. As such, the penalties we discuss from now on are computed considering only data points, and no random draws are utilized. In the following sections, we introduce notions of monotonicity that will be enforced in our models, and discuss advantages of using monotonicity for different applications such as controllable generative modelling and the detection of anomalous data. In Appendix F, we consider a further application for cases where one's interest is to obtain explanations from observed predictions.
94
+
95
+ | | Non-mon. | ${\Omega }_{random}$ | ${\Omega }_{train}$ | ${\Omega }_{mixup}$ |
+ | --- | --- | --- | --- | --- |
+ | **COMPAS** | | | | |
+ | Validation accuracy | 69.1%±0.2% | 68.5%±0.1% | 68.5%±0.1% | 68.4%±0.1% |
+ | Test accuracy | 68.5%±0.2% | 68.1%±0.2% | 68.0%±0.2% | 68.3%±0.2% |
+ | ${\widehat{\rho }}_{random}$ | 55.45%±12.26% | 0.01%±0.01% | 6.41%±4.54% | 0.00%±0.00% |
+ | ${\widehat{\rho }}_{train}$ | 92.98%±2.70% | 2.08%±2.21% | 0.00%±0.00% | 0.00%±0.00% |
+ | ${\widehat{\rho }}_{test}$ | 92.84%±2.75% | 2.16%±2.35% | 0.00%±0.00% | 0.00%±0.00% |
+ | **Loan Lending Club** | | | | |
+ | Validation RMSE | 0.213±0.000 | 0.223±0.002 | 0.222±0.002 | 0.235±0.001 |
+ | Test RMSE | 0.221±0.001 | 0.230±0.001 | 0.229±0.002 | 0.228±0.001 |
+ | ${\widehat{\rho }}_{random}$ | 99.11%±1.70% | 0.00%±0.00% | 14.47%±7.55% | 0.00%±0.00% |
+ | ${\widehat{\rho }}_{train}$ | 100.00%±0.00% | 7.23%±7.76% | 0.01%±0.01% | 0.00%±0.00% |
+ | ${\widehat{\rho }}_{test}$ | 100.00%±0.00% | 6.94%±7.43% | 0.04%±0.03% | 0.00%±0.00% |
+ | **Blog feedback** | | | | |
+ | Validation RMSE | 0.174±0.000 | 0.175±0.001 | 0.177±0.000 | 0.168±0.000 |
+ | Test RMSE | 0.139±0.001 | 0.139±0.001 | 0.142±0.001 | 0.143±0.001 |
+ | ${\widehat{\rho }}_{random}$ | 76.17%±12.37% | 0.05%±0.08% | 3.86%±4.19% | 0.00%±0.01% |
+ | ${\widehat{\rho }}_{train}$ | 78.67%±5.28% | 78.59%±6.37% | 0.01%±0.01% | 0.01%±0.01% |
+ | ${\widehat{\rho }}_{test}$ | 76.29%±6.47% | 78.99%±7.20% | 0.02%±0.02% | 0.02%±0.02% |
154
+
155
+ Table 1: Evaluation results in terms of 95% confidence intervals resulting from 20 independent training runs. Results correspond to the checkpoint that obtained the best prediction performance on validation data throughout training. The lower the values of $\widehat{\rho }$ the better.
156
+
157
+ § 4.1 DISENTANGLED REPRESENTATION LEARNING UNDER MONOTONICITY
158
+
159
+ We first consider the case of disentangled representation learning. In this case, generative approaches often assume that the latent variables are independent, and hence control over generative factors can be achieved. E.g., one can modify a specific aspect of the data by modifying the value of a specific latent variable. However, we argue that disentanglement is necessary but not sufficient to enable controllable data generation. That is, one needs latent variables that satisfy some notion of monotonicity to be able to choose values for them that result in desired properties. For example, assume we are interested in generating images of simple geometric forms, and desire to control factors such as shape and size. In this example, even if a disentangled set of latent variables is available, we cannot decide how to change the value of the latent variable to get a bigger or a smaller object if there is no monotonic relationship between the size and the value of the corresponding latent variable. We address this issue and build upon the weakly supervised framework introduced by Locatello et al. [2020]. This work extends the popular $\beta$ -VAE setting [Higgins et al., 2016] by introducing weak supervision such that the training instances are presented to the model in pairs $\left( {{x}^{1},{x}^{2}}\right)$ where only one or a few generative factors are changing between each pair. Here, we propose to apply a notion of monotonicity over the activations of the corresponding latent variables to have more controllable factors. In the VAE setting, data is assumed to be generated according to $p\left( {x \mid z}\right) p\left( z\right)$ given the latent variables $z$ . Approximation is then performed by introducing ${p}_{\theta }\left( {x \mid z}\right)$ and ${q}_{\phi }\left( {z \mid x}\right)$ , both parameterized by neural networks. 
Our goal is to have $z$ fully factorizable in its dimensions, i.e., $p\left( z\right) = \mathop{\prod }\limits_{{i = 1}}^{{\operatorname{Dim}\left\lbrack z\right\rbrack }}p\left( {z}_{i}\right)$ , which needs to be captured by the approximate posterior distribution ${q}_{\phi }\left( {z \mid x}\right)$ . Training is performed by maximization of the following lower-bound on the data likelihood:
160
+
161
+ $$
+ {\mathcal{L}}_{ELBO} = {\mathbb{E}}_{{x}^{1},{x}^{2}}\mathop{\sum }\limits_{{i \in \{ 1,2\} }}\left\lbrack {\mathbb{E}}_{{\widetilde{q}}_{\phi }\left( {\widehat{z} \mid {x}^{i}}\right) }\log \left( {{p}_{\theta }\left( {{x}^{i} \mid \widehat{z}}\right) }\right) - \beta {D}_{KL}\left( {{\widetilde{q}}_{\phi }\left( {\widehat{z} \mid {x}^{i}}\right) ,p\left( \widehat{z}\right) }\right) \right\rbrack , \tag{4}
+ $$
168
+
169
+ where ${\widetilde{q}}_{\phi }\left( {{\widehat{z}}_{j} \mid {x}^{i}}\right) = {q}_{\phi }\left( {{z}_{j} \mid {x}^{i}}\right)$ for the latent dimensions ${z}_{j}$ that change across ${x}^{1}$ and ${x}^{2}$ , and ${\widetilde{q}}_{\phi }\left( {{\widehat{z}}_{j} \mid {x}^{i}}\right) =$ $\frac{1}{2}\left( {{q}_{\phi }\left( {{\widehat{z}}_{j} \mid {x}^{1}}\right) + {q}_{\phi }\left( {{\widehat{z}}_{j} \mid {x}^{2}}\right) }\right)$ for those that are common (i.e., the approximate posteriors of the shared latent variables are forced to be the same for ${x}^{1}$ and ${x}^{2}$ ). The outer expectation is estimated by sampling pairs of data instances $\left( {{x}^{1},{x}^{2}}\right)$ where only a few generative factors vary. In our experiments, we consider the case where exactly one generative factor changes across inputs. Moreover, we follow Locatello et al. [2020] and assign the changing factor, denoted by $y$ , to the dimension $j$ of $z$ such that $y = \arg \mathop{\max }\limits_{{j \in \operatorname{Dim}\left\lbrack z\right\rbrack }}{D}_{KL}\left( {{z}_{j}^{1},{z}_{j}^{2}}\right) .$
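For the diagonal Gaussian posteriors typical of VAEs, the per-dimension KL divergence used in this assignment has a closed form. The sketch below illustrates the selection step under that assumption; the function name and the use of posterior means/variances as inputs are ours, not from the paper's code.

```python
import math

def changing_dimension(mu1, var1, mu2, var2):
    """Return the index j maximizing D_KL(q(z_j | x^1) || q(z_j | x^2)) for
    univariate Gaussian posteriors with per-dimension means and variances."""
    def kl(m1, v1, m2, v2):
        # closed-form KL between N(m1, v1) and N(m2, v2)
        return 0.5 * (v1 / v2 + (m2 - m1) ** 2 / v2 - 1.0 + math.log(v2 / v1))
    scores = [kl(m1, v1, m2, v2) for m1, v1, m2, v2 in zip(mu1, var1, mu2, var2)]
    return max(range(len(scores)), key=lambda j: scores[j])
```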
170
+
171
+ While the above objective enforces disentanglement, controllable generation requires some regularity in $z$ so that users can decide values of $z$ resulting in desired properties in the generated samples. We then introduce ${\Omega }_{VAE}$ to enforce such a regularity. In this case, a monotonic relationship is enforced between the distance between data pairs where only a particular generative factor varies and the corresponding latent variable. In other words, an increasing trend in the value of each dimension of $z$ should yield a greater change in the output along a generative factor. Formally, ${\Omega }_{VAE}$ is defined as the following symmetric cross-entropy estimate:
172
+
173
+ $$
+ {\Omega }_{VAE} = - \frac{1}{2m}\mathop{\sum }\limits_{{i = 1}}^{m}\left\lbrack \log \frac{{e}^{L\left( {{x}^{i,1},{x}^{i,2},{y}^{i}}\right) /\mu }}{\mathop{\sum }\limits_{{k = 1}}^{K}{e}^{L\left( {{x}^{i,1},{x}^{i,2},k}\right) /\mu }} + \log \frac{{e}^{L\left( {{x}^{i,2},{x}^{i,1},{y}^{i}}\right) /\mu }}{\mathop{\sum }\limits_{{k = 1}}^{K}{e}^{L\left( {{x}^{i,2},{x}^{i,1},k}\right) /\mu }}\right\rbrack , \tag{5}
+ $$
180
+
181
+ where $L$ is given by the gradient of the mean squared error (MSE) between images that are 1-factor away along the dimension $y$ of $z$ , assigned to the changing factor, i.e., for the pair ${x}^{i}$ and ${x}^{j}$ varying only across factor $y$ , we have:
182
+
183
+ $$
184
+ L\left( {{x}^{i},{x}^{j},y}\right) = \frac{\partial \operatorname{MSE}\left( {{\widehat{x}}^{i},{x}^{j}}\right) }{\partial {\widetilde{z}}_{y}}. \tag{6}
185
+ $$
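Given precomputed values of $L$ for every candidate dimension $k$ (e.g., obtained by automatic differentiation of the reconstruction MSE as in Eq. (6)), the symmetric cross-entropy of Eq. (5) reduces to two softmax cross-entropy terms per pair. A minimal sketch under that assumption; the batch layout is ours:

```python
import math

def omega_vae(L12, L21, y, mu=1.0):
    """Symmetric cross-entropy of Eq. (5). L12[i][k] holds L(x^{i,1}, x^{i,2}, k)
    over the K latent dimensions (L21 likewise with arguments swapped), and
    y[i] is the dimension assigned to the changing factor."""
    def nll(scores, idx):
        logits = [s / mu for s in scores]
        top = max(logits)
        lse = top + math.log(sum(math.exp(l - top) for l in logits))  # stable log-sum-exp
        return lse - logits[idx]
    return sum(nll(a, yi) + nll(b, yi)
               for a, b, yi in zip(L12, L21, y)) / (2.0 * len(y))
```

The penalty is minimized when the gradient $L$ along the assigned dimension $y^i$ dominates all other dimensions, which is exactly the regularity the section argues for.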
186
+
187
+ In this case, ${\widehat{x}}^{i}$ indicates the reconstruction of ${x}^{i}$ . We evaluate such an approach by training the same 4-layered convolutional VAEs described in [Higgins et al., 2016] using the 3d-shapes dataset ${}^{1}$ . The dataset is composed of images containing shapes generated from 6 independent generative factors: floor color, wall color, object color, scale, shape and orientation. All combinations of these factors are present exactly once, resulting in $m = {480000}$ . We compared VAEs trained with and without the inclusion of the monotonicity penalty given by ${\Omega }_{VAE}$ . We highlight that the goal of the proposed framework is not to improve over current approaches in terms of how disentangled the learned representations are. Rather, we seek to achieve similar results in that sense, but impose extra regularity and structure in the relationship between the generated images and the values of $z$ so that the generative process is more easily controllable. Qualitative analysis is performed and shown in Figure 1. The two panels on the left represent the data generated by a linear combination of the latent code corresponding to two images that only vary in the factor object color. The panels stacked on the right present a per-dimension traversal of the latent space starting from a common image. It can be observed that disentanglement is indeed achieved in both cases. The monotonic model presents much smoother transitions between colors while the base model gives long sequences of very close images followed by very sharp transitions where the colors sometimes repeat (e.g., green-yellow-green transitions in the fourth row). As for the results per factor, the monotonic model provides more structure in the latent space compared to the base model. This can be observed in the shape factor. The monotonic model provides a certain order: sphere, cylinder, and then cube. 
Visually inspecting many samples, the monotonic model follows this order for the generated shapes. This pattern is even more pronounced in the color factors. We have found that the colors generated by the monotonic model follow the order of the colors in the HUE cycle. Thus, our model has ordered the latent space, and we know how to navigate it to generate a desired image. On the other hand, the baseline has no clear ordering of the latent space. For example, the baseline generates cubes at different ranges of $z$ . Similarly, the colors generated by the baseline model do not have a clear order. To further support the claim that ${\Omega }_{VAE}$ induces regularity in the latent space, we introduce the analysis shown in Table 2. We started by increasing ${z}_{3}$ (associated with the floor color for both models), and recorded the sequence of the generated colors. We observed that for a large fraction of the data, the monotonic models yield sequences of images where the color of the floor is ordered according to its corresponding HUE angle. Further details are available in Appendix H, along with detailed plots of color transitions and a comparison with the HUE cycle.
188
+
189
+ | Model | HUE structured rate |
+ | --- | --- |
+ | Base model | 0.00% |
+ | Mon. model | 89.44% |
200
+
201
+ Table 2: Rate of examples where colors are sorted according to HUE. A large fraction of the sequences generated by monotonic VAEs results in an interpretable ordering.
202
+
203
+ § 4.2 GROUP MONOTONIC CLASSIFIERS
204
+
205
+ We now consider the case of $K$ -way classifiers realized through convolutional neural networks. In this case, data examples correspond to pairs $\left( {x,y}\right) \sim \mathcal{X} \times \mathcal{Y}$ , and $\mathcal{Y} = \{ 1,2,3,\ldots ,K\}$ , $K \in \mathbb{N}$ . Models parameterize a data-conditional categorical distribution over $\mathcal{Y}$ , i.e., for a given model $h$ , $h{\left( x\right) }_{\mathcal{Y}}$ will yield likelihoods for each class indexed in $\mathcal{Y}$ . Under this setting, we introduce the notion of Group Monotonicity: we aim to find the models $h$ such that the outputs corresponding to each class satisfy a monotonic relationship with a specific subset of high-level representations, given by some inner convolutional layer. Let the outputs of a specific layer within a convolutional model be represented by ${a}_{w}$ , $w \in \left\lbrack {1,2,3,\ldots ,W}\right\rbrack$ , where $W$ indicates the width of the chosen layer given by its number of output feature maps. For simplicity of exposition, we consider the rather common case of convolutional layers where each feature map ${a}_{w}$ is 2-dimensional. We then partition such a set of representations into disjoint subsets, or slices, of uniform sizes. Each subset is then paired with a particular output or class, and hence denoted by ${S}_{k}$ , $k \in \mathcal{Y}$ . An illustration is provided in Figure 2, where a generic convolutional model has the outputs of a specific layer partitioned into slices ${S}_{k}$ , which are then used to define output units over $\mathcal{Y}$ .
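Partitioning the $W$ feature maps into $K$ disjoint, near-uniform slices can be done with a small indexing helper. This is an illustrative sketch; the helper name and the even spreading of any remainder are our assumptions.

```python
def slice_indices(width, num_classes):
    """Partition feature-map indices [0, W) into K disjoint slices of
    near-uniform size, one slice S_k per class k."""
    base, rem = divmod(width, num_classes)
    slices, start = [], 0
    for k in range(num_classes):
        size = base + (1 if k < rem else 0)  # spread the remainder over the first slices
        slices.append(list(range(start, start + size)))
        start += size
    return slices
```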
206
+
207
+ Definition 2 Group monotonic classifiers: We say $h$ is group monotonic for input $x$ and class label $y$ if $h{\left( x\right) }_{y}$ is partially monotonic relative to all elements in ${S}_{y}$ .
208
+
209
+ https://github.com/deepmind/3d-shapes
210
+
211
+ [graphics]
212
+
213
+ Figure 1: Comparisons between data generated by standard and monotonic models. On the two panels on the left, we compare generations from a linear combination of the latent code of 2 images which only differs in the object color. On the two panels vertically stacked on the right, we start from the same image but change one latent dimension at a time.
214
+
215
+ We highlight that in this case, unlike the discussion in Section 3, monotonicity is not an application requirement, and it does not need to be satisfied everywhere.
216
+
217
+ Intuitively, our goal is to "reserve" groups of high-level features to activate more intensely than the remainder depending on the underlying class. Imposing such a structure can benefit the learned models via, for instance, more accurate anomaly detection. For training, we perform monotonic risk minimization as described in Eq. 1, and the risk is given by the negative log-likelihood over training points. Moreover, we design a penalty $\Omega$ that focuses only on observed data points during training and penalizes the slices of the Jacobian corresponding to a given class, i.e., a cross-entropy criterion enforces larger gradients on the specific class slice.
218
+
219
+ In order to formally introduce such a penalty, denoted by ${\Omega }_{\text{ group }}$ , we first define the total gradient ${O}_{k}$ , $k \in \mathcal{Y}$ , of a slice ${S}_{k}$ as follows: ${O}_{y}\left( x\right) = \mathop{\sum }\limits_{{{a}_{w} \in {S}_{y}}}\mathop{\sum }\limits_{{i,j}}\frac{\partial h{\left( x\right) }_{y}}{\partial {a}_{w,i,j}}$ , where the inner sum accounts for the spatial dimensions of ${a}_{w}$ . Given the set of total gradients, a batch of size $m$ , and inverse temperature $\mu$ , ${\Omega }_{\text{ group }}$ is given by:
220
+
221
+ $$
222
+ {\Omega }_{\text{ group }} = - \frac{1}{m}\mathop{\sum }\limits_{{i = 1}}^{m}\log \frac{{e}^{\frac{{O}_{{y}^{i}}^{i}\left( {x}^{i}\right) }{\mu }}}{\mathop{\sum }\limits_{{k = 1}}^{K}{e}^{\frac{{O}_{k}^{i}\left( {x}^{i}\right) }{\mu }}}. \tag{7}
223
+ $$
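With the total gradients $O_k$ precomputed for a batch (e.g., via automatic differentiation), Eq. (7) is a standard softmax cross-entropy over slices. A minimal sketch; the input layout is an assumption on our part:

```python
import math

def omega_group(total_grads, labels, mu=1.0):
    """Cross-entropy of Eq. (7): total_grads[i][k] = O_k(x^i) over the K
    slices and labels[i] = y^i. Minimizing this rewards larger total
    gradients on the slice paired with the true class."""
    loss = 0.0
    for O, y in zip(total_grads, labels):
        logits = [o / mu for o in O]
        top = max(logits)
        lse = top + math.log(sum(math.exp(l - top) for l in logits))  # stable log-sum-exp
        loss += lse - logits[y]
    return loss / len(labels)
```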
224
+
225
+ § 4.2.1 ASSESSING PERFORMANCE OF GROUP MONOTONIC CLASSIFIERS
226
+
227
+ We start our evaluation by verifying whether the group monotonicity property can be effectively enforced into classifiers trained on standard object recognition benchmarks. In order to do so, we verify the performance of the total activation classifier, as defined by: $\arg \mathop{\max }\limits_{{k \in \mathcal{Y}}}{T}_{k}\left( x\right)$ , where ${T}_{k}$ indicates the total activation on slice ${S}_{k} : {T}_{k}\left( x\right) =$ $\mathop{\sum }\limits_{{{a}_{w} \in {S}_{k}}}\mathop{\sum }\limits_{{i,j}}{a}_{w,i,j}\left( x\right)$ . A good prediction performance of such a classifier serves as evidence that the group monotonicity property is satisfied by the model over the test data under consideration, since it indicates that the slice relative to the underlying class of test instances has the highest total activation. We thus run evaluations for both CIFAR-10 and ImageNet, and classifiers in each case correspond to WideResNets [Zagoruyko and Komodakis, 2016] and ResNet-50 [He et al., 2016], respectively. Training details are presented in Appendix D. Results are reported in Table 3 in terms of the top-1 prediction accuracy measured on the test data. We use standard classifiers as the baselines where no monotonicity penalty is applied in order to isolate the effect of the penalty. In both datasets, the total activation classifiers for group monotonic models (indicated by the prefix mono) are able to approximate the performance of the classifier defined at the output layer, $\arg \mathop{\max }\limits_{{k \in \mathcal{Y}}}h{\left( x\right) }_{k}$ . This suggests that the highest total activation generally matches the predicted class for group monotonic models, which indicates the property is successfully enforced. Considering performances obtained at the output layer, there were small variations in accuracy when we included monotonicity penalties, which should be considered in practical uses of group monotonicity. Nonetheless, results suggest that one can perform closely to unconstrained models while focusing on the set of group monotonic candidates. Additional experiments are reported in Table 8 in Appendix E for cases with small sample sizes, where we show that the performance of the classifier defined at the output layer upper bounds that of the total activation classifier, i.e., the better the underlying classifier the more group monotonic it can be made.
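The total activation classifier can be evaluated directly from the sliced feature maps. A sketch with plain nested lists standing in for the 2-D maps $a_w$ (an illustration, not the experimental code):

```python
def total_activation_predict(feature_maps, slices):
    """argmax_k T_k(x), where T_k sums all spatial activations of every
    feature map a_w in slice S_k. feature_maps[w] is a 2-D map (list of rows)."""
    totals = [sum(v for w in S for row in feature_maps[w] for v in row)
              for S in slices]
    return max(range(len(totals)), key=lambda k: totals[k])
```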
228
+
229
+ | Model | $\arg \mathop{\max }\limits_{{k \in \mathcal{Y}}}h{\left( x\right) }_{k}$ | $\arg \mathop{\max }\limits_{{k \in \mathcal{Y}}}{T}_{k}\left( x\right)$ |
+ | --- | --- | --- |
+ | **CIFAR-10** | | |
+ | WideResNet | 95.46% | 16.35% |
+ | MonoWideResNet | 95.64% | 94.95% |
+ | **ImageNet** | | |
+ | ResNet-50 | 75.85% | 0.10% |
+ | MonoResNet-50 | 76.50% | 72.52% |
252
+
253
+ Table 3: Top-1 accuracy of standard and group monotonic models.
254
+
255
+ § 4.2.2 USING GROUP MONOTONICITY TO DETECT ANOMALIES
256
+
257
+ After showing that group monotonicity can be enforced successfully without significantly affecting the prediction performance, we discuss approaches to leverage it and introduce applications of the models satisfying such a property. In particular, we consider the application of detecting anomalous data instances, i.e., those where the model may have made a mistake. For example, consider the case where a classifier is deployed to production and, due to some problem external to the model, it is queried to do prediction for an input consisting of white noise. Standard classifiers would provide a prediction even for such a clearly anomalous input. However, a more desirable behavior is to somehow indicate that the instance is problematic. We claim that imposing structure in the features, e.g., by enforcing group monotonicity, can help in deciding when not to predict. To evaluate the proposed method, we implement anomalous test instances using adversarial perturbations. Namely, we create ${L}_{\infty }$ PGD attackers [Madry et al., 2017] and detect anomalies based on simple statistics of the features. In detail, for a given input $x$ , we compute the normalized entropy ${H}^{ * }\left( x\right)$ of the categorical distribution defined by the application of the softmax operator over the set of total activations ${T}_{\mathcal{Y}}\left( x\right) : {H}^{ * }\left( x\right) = -\frac{\mathop{\sum }\limits_{{k \in \mathcal{Y}}}{p}_{k}\left( x\right) \log {p}_{k}\left( x\right) }{\log K}$ , where $K = \left| \mathcal{Y}\right|$ and the set ${p}_{\mathcal{Y}}\left( x\right)$ corresponds to the parameters of a categorical distribution defined by: ${p}_{\mathcal{Y}}\left( x\right) = \operatorname{softmax}\left( {{T}_{\mathcal{Y}}\left( x\right) }\right)$ . Decisions can then be made by comparing ${H}^{ * }\left( x\right)$ with a threshold $\tau \in \left\lbrack {0,1}\right\rbrack$ , defining the detector ${\mathbb{1}}_{\left\{ {H}^{ * } > \tau \right\} }$ . 
We evaluate the detection performance of this approach on both MNIST and CIFAR-10. Training for the case of CIFAR-10 follows the same setup discussed in Section 4.2.1. For MNIST, on the other hand, we modify the standard LeNet architecture by increasing the width of the second convolutional layer from 64 to 150. This layer is then used to enforce the group monotonicity property. The resulting model is referred to as WideLeNet. Moreover, $\gamma$ and $\mu$ are set to ${10}^{10}$ and 1, respectively. Adversarial attacks are created under the white-box setting, i.e., by exposing the full model to the attacker. The perturbation budget in terms of ${L}_{\infty }$ distance is set to 0.3 and $\frac{8}{255}$ for MNIST and CIFAR-10, respectively. Detection performance is reported in Table 4 for the considered cases in terms of the area under the receiver operating characteristic curve (AUC-ROC). The baselines are the models for which the monotonicity penalty is not enforced; they are trained under the same conditions and the same computation budget as the models where the penalty is enforced. The results are as expected, i.e., for monotonic models, test examples for which the total activations are not structured very often correspond to anomalous inputs.
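The normalized-entropy detector described above can be sketched in a few lines (a pure-Python illustration; the function names are ours). The detector flags an input when the slice activations are close to uniform, i.e., when $H^{*}(x) > \tau$:

```python
import math

def normalized_entropy(total_activations):
    """H*(x): entropy of softmax over the slice total activations T_Y(x),
    divided by log K so the score lies in [0, 1]."""
    K = len(total_activations)
    top = max(total_activations)
    exps = [math.exp(t - top) for t in total_activations]  # numerically stable softmax
    Z = sum(exps)
    probs = [e / Z for e in exps]
    return -sum(p * math.log(p) for p in probs if p > 0.0) / math.log(K)

def is_anomalous(total_activations, tau):
    """Detector 1{H* > tau}: flag inputs whose activations are unstructured."""
    return normalized_entropy(total_activations) > tau
```

Uniform activations give $H^{*} = 1$ (maximally unstructured), while a single dominant slice drives $H^{*}$ toward 0, which is the behavior group monotonic models exhibit on in-distribution data.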
258
+
259
+ | Model | AUC-ROC |
+ | --- | --- |
+ | **MNIST** | |
+ | WideLeNet | 54.47% |
+ | MonoWideLeNet | 100.00% |
+ | **CIFAR-10** | |
+ | WideResNet | 67.35% |
+ | MonoWideResNet | 79.33% |
282
+
283
+ Table 4: AUC-ROC (the higher the better) for the detection of adversarially perturbed data instances.
284
+
285
+ [graphics]
286
+
287
+ Figure 2: Group monotonic convolutional model splits representations into disjoint subsets.
288
+
289
+ Finally, due to space constraints, we discuss the application of group monotonicity to explainability in Appendix F.
290
+
291
+ § 5 CONCLUSION
292
+
293
+ We proposed approaches that enable learning algorithms based on risk minimization to find solutions that satisfy some notion of monotonicity. First, we discussed the case where monotonicity is a design requirement that needs to be satisfied everywhere. In this case, we identified limitations in prior work that resulted in models satisfying the property only in very specific parts of the space. We then introduced an efficient procedure that was observed to significantly improve the solutions in terms of the volume of the space where the monotonicity requirement is achieved. In addition, we further argued that, even when not required, models satisfying monotonicity present useful properties. We studied the case of image classifiers and generative models and showed that imposing structure in learned representations via group monotonicity is beneficial and can be done efficiently. In particular, monotonic variational autoencoders were shown to yield latent spaces that are easier to navigate since those present more regular transitions when compared to the standard generative models under the same setting.
UAI/UAI 2022/UAI 2022 Conference/BcU_UIIjqg9/Initial_manuscript_md/Initial_manuscript.md ADDED
The diff for this file is too large to render. See raw diff
 
UAI/UAI 2022/UAI 2022 Conference/BcU_UIIjqg9/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,384 @@
1
+ § ON THE EFFECTIVENESS OF ADVERSARIAL TRAINING AGAINST COMMON CORRUPTIONS
2
+
3
+ § ABSTRACT
4
+
5
+ The literature on robustness towards common corruptions shows no consensus on whether adversarial training can improve the performance in this setting. First, we show that, when used with an appropriately selected perturbation radius, ${\ell }_{p}$ adversarial training can serve as a strong baseline against common corruptions, improving both accuracy and calibration. Then we explain why adversarial training performs better than data augmentation with simple Gaussian noise, which has been observed to be a meaningful baseline on common corruptions. Related to this, we identify the $\sigma$ -overfitting phenomenon, in which Gaussian augmentation overfits to the particular standard deviation used for training, which has a significant detrimental effect on common corruption accuracy. We discuss how to alleviate this problem and then how to further enhance ${\ell }_{p}$ adversarial training by introducing an efficient relaxation of adversarial training with learned perceptual image patch similarity as the distance metric. Through experiments on CIFAR-10 and ImageNet-100, we show that our approach not only improves the ${\ell }_{p}$ adversarial training baseline but also has cumulative gains with data augmentation methods such as AugMix, DeepAugment, ANT, and SIN, leading to state-of-the-art performance on common corruptions.
6
+
7
+ § 1 INTRODUCTION
8
+
9
+ Despite achieving human-level performance on many computer vision tasks, deep neural networks are still not as robust as humans towards various distribution shifts [Szegedy et al., 2014, Taori et al., 2020] including common image corruptions [Hendrycks and Dietterich, 2019]. Attempts to understand the vulnerability towards such shifts include analysis of the network architecture [Azulay and Weiss, 2019], the features contained in the data [Ilyas et al., 2019], and frequency analysis of neural networks [Yin et al., 2019, Ortiz-Jimenez et al., 2020]. Many approaches have been suggested to improve their robustness to these shifts including approaches based on data augmentations [Cubuk et al., 2019, Hendrycks et al., 2019b], adversarial training [Madry et al., 2018, Laidlaw et al., 2021], and pretraining [Hendrycks et al., 2019a].
10
+
11
+ [graphics]
12
+
13
+ Figure 1: Accuracy on common corruptions from CIFAR-10-C for ResNet-18 models adversarially trained using different ${\ell }_{\infty }$ radii. We observe that the performance with $\varepsilon = 1/{255}$ is significantly higher than with the standardly used $\varepsilon = 8/{255}$ .
14
+
15
+ Although data augmentation methods tend to improve the performance under common synthetic corruptions [Hendrycks et al., 2019b], these augmentations are often ad hoc and may have substantial overlap with the corruptions evaluated at test time. At the same time, there is a large amount of literature on adversarial training with ${\ell }_{p}$ -bounded perturbations [Goodfellow et al., 2015, Madry et al., 2018]. Adversarial training emerged as a principled approach to improve the worst-case performance of the model against small ${\ell }_{p}$ perturbations. However, common image corruptions have a very high ${\ell }_{p}$ distance from clean samples, so the utility of using ${\ell }_{p}$ adversarial training for them is not obvious. This leads us to explore the following question:
16
+
17
+ How can we improve the performance on common image corruptions using adversarial training?
18
+
19
+ We make the following contributions in our paper:
20
+
21
+ * We show that ${\ell }_{p}$ adversarial training with an appropriately selected perturbation radius can serve as a strong baseline against common image corruptions improving both accuracy and calibration on corrupted images.
22
+
23
+ * We analyze the success of ${\ell }_{p}$ adversarial training via a comparison to other natural baselines such as Gaussian data augmentation. We observe that Gaussian augmentation can overfit to the perturbation size it has been trained with, which, however, does not happen for adversarial training.
24
+
25
+ * We introduce an efficient relaxation of adversarial training with learned perceptual image patch similarity (LPIPS) [Zhang et al., 2018b] based on layerwise adversarial perturbations. This new relaxation is at least as effective as previous approaches [Laidlaw et al., 2021] but significantly faster to train.
26
+
27
+ * We show that our relaxation approach has cumulative gains with existing data augmentation methods such as AugMix, DeepAugment, ANT, and SIN leading to state-of-the-art performance on common corruptions from CIFAR-10-C and ImageNet-100-C.
28
+
29
+ § 2 RELATED WORK
30
+
31
+ We provide here an overview of relevant works on common image corruptions, different data augmentation methods proposed to improve the performance on corruptions, and then we discuss papers on adversarial robustness with respect to both ${\ell }_{p}$ and non- ${\ell }_{p}$ perturbations.
32
+
33
+ Common image corruptions. Dodge and Karam [2017] first find that despite being on par with human vision on standard images, deep networks perform suboptimally on common corruptions such as noise and blur. Geirhos et al. [2018] measure the performance of deep networks on 12 different image corruption types but find that data augmentation on one type of corruption does not tend to improve the performance on others. However, these findings are reconsidered in Rusak et al. [2020], where Gaussian data augmentation is shown to help for a wide range of image corruptions. In a standardization effort, Hendrycks and Dietterich [2019] introduce a few image classification datasets, in particular CIFAR-10-C and ImageNet-C, with 15 different common corruptions from four categories: noise, blur, weather, and digital corruptions. Ovadia et al. [2019] show that not only accuracy but also calibration deteriorates under these common corruptions. Schneider et al. [2020] and Nandy et al. [2021] show that robustness to common corruptions can be improved by using test-time adaptation, e.g., via recomputing the batch normalization statistics. Radford et al. [2021] show that contrastive pretraining on a very large set of image-caption pairs can substantially improve robustness on various distribution shifts including common corruptions.
34
+
35
+ Data augmentations. Data augmentation is a widely used technique to improve the generalization. Besides classical image transformations like random flipping or cropping, many other approaches have been proposed such as linearly interpolating between images and their labels [Zhang et al., 2018a], replacing a part of the image with either a black-colored patch [DeVries and Taylor, 2017] or a part of another image [Yun et al., 2019]. One of the best-performing methods in terms of accuracy and calibration on common corruptions is AugMix [Hendrycks et al., 2019b], which combines carefully selected augmentations with a regularization term based on the Jensen-Shannon divergence. Taori et al. [2020] observe that improvements on synthetic distribution shifts (such as common corruptions) do not necessarily transfer to real distribution shifts. However, Hendrycks et al. [2021] show an example when improving robustness against synthetic blurs also helps against naturally obtained blurred images.
36
+
37
+ ${\ell }_{p}$ adversarial robustness. Adversarial training in deep learning has been first considered in Goodfellow et al. [2015] and later framed as a robust optimization problem by Madry et al. [2018]. The view that adversarial training damages or at least does not improve the performance on common corruptions has been prevalent in the literature [Hendrycks et al., 2019b, Rusak et al., 2020, Hendrycks et al., 2021]. However, previous works directly use publicly available robust models without adjusting the perturbation radius used for adversarial training. For example, Rusak et al. [2020] show that adversarially trained ImageNet models from Xie et al. [2019], Shafahi et al. [2019], and Shafahi et al. [2020] do not help on ImageNet-C compared to standardly trained models. However, Ford et al. [2019] report that ${\ell }_{\infty }$ adversarially trained models on CIFAR-10 from Madry et al. [2018] do lead to an improvement on CIFAR- 10-C compared to a standard model. The approach of Xie et al. [2020], AdvProp, relies on ${\ell }_{\infty }$ adversarial training to improve standard and corruption accuracy but they advocate the use of auxiliary batch normalization layers for standard and adversarial training examples. We find that similar performance can be achieved on common corruptions using vanilla adversarial training without a customized use of BatchNorm layers. Kang et al. [2019] study the robustness transfer between ${\ell }_{p}$ -robust models and adversarially optimized elastic and JPEG corruptions. They show that ${\ell }_{p}$ adversarial training can increase robustness against these two types of adversarial perturbations, but robustness does not transfer in all the cases and sometimes may even hurt robustness against other perturbation types.
38
+
39
+ Non- ${\ell }_{p}$ adversarial robustness. Volpi et al. [2018] propose Lagrangian-style adversarial training in the input space and in the last layer of the network. Stutz et al. [2019] propose on-manifold adversarial training which is performed in the latent space of a VAE-GAN generative model. However, its success crucially depends on the quality of the generative model which could not be scaled beyond simple image recognition datasets. Wei and Ma [2020] derive generalization bounds that motivate adversarial training with respect to all network layers which they use to improve ${\ell }_{p}$ robustness. Recently, Laidlaw et al. [2021] provided algorithms for approximate perceptual adversarial training based on the LPIPS distance [Zhang et al., 2018b] which is defined via activations of a neural network. They aim at improving robustness against new types of adversarial perturbations that were unseen during training.
40
+
41
§ 3 ${\ell }_{p}$ ADVERSARIAL TRAINING IMPROVES THE PERFORMANCE ON COMMON CORRUPTIONS

Here we formally introduce adversarial training and show that it can lead to non-trivial improvements in accuracy and calibration on common corruptions.

Background on adversarial training. Let $\ell \left( {x,y;\theta }\right)$ denote the loss of a classifier parametrized by $\theta \in {\mathbb{R}}^{m}$ on the sample $\left( {x,y}\right) \sim D$, where $D$ is the data distribution. Previous works [Shaham et al., 2018, Madry et al., 2018] formalized the goal of training adversarially robust models as the following optimization problem:

$$
\mathop{\min }\limits_{\theta }{\mathbb{E}}_{\left( {x,y}\right) \sim D}\left\lbrack {\mathop{\max }\limits_{{\delta \in \Delta }}\ell \left( {x + \delta ,y;\theta }\right) }\right\rbrack . \tag{1}
$$

In this section, we focus on the ${\ell }_{p}$ threat model, i.e., $\Delta = \left\{ {\delta \in {\mathbb{R}}^{d} : \parallel \delta {\parallel }_{p} \leq \varepsilon ,x + \delta \in {\left\lbrack 0,1\right\rbrack }^{d}}\right\}$, where the adversary can change each input $x$ in an $\varepsilon$-ball around it while making sure that the input $x + \delta$ does not exceed its natural range. A common way to solve the inner maximization problem is the projected gradient descent method (PGD), defined by the following recursion initialized at ${\delta }^{\left( 0\right) }$:

$$
{\delta }^{\left( t + 1\right) }\overset{\text{ def }}{ = }{\Pi }_{\Delta }\left\lbrack {{\delta }^{\left( t\right) } + \alpha {\nabla }_{{\delta }^{\left( t\right) }}\ell \left( {x + {\delta }^{\left( t\right) },y;\theta }\right) }\right\rbrack , \tag{2}
$$

where $\Pi$ is the projection operator onto the set $\Delta$, and $\alpha$ is the step size of PGD. Instead of the gradient, one often uses the gradient sign update for ${\ell }_{\infty }$ perturbations or the ${\ell }_{2}$-normalized update for ${\ell }_{2}$ perturbations. ${\delta }^{\left( 0\right) }$ can be initialized as any point inside $\Delta$, e.g., as zero or randomly [Madry et al., 2018].

The one-iteration variant of PGD is known as the fast gradient method (FGM) when the normalized ${\ell }_{2}$ update is used and as the fast gradient sign method (FGSM) when the ${\ell }_{\infty }$ sign update is used [Goodfellow et al., 2015]. Note that in both cases the step size is $\alpha = \varepsilon$, which leads to perturbations located on the boundary of the set $\Delta$. These methods are fast but sometimes prone to catastrophic overfitting, where the model overfits to FGM/FGSM but is not robust to iterative PGD attacks [Tramèr et al., 2018, Wong et al., 2020]. This problem can be alleviated by specific regularization methods like CURE [Moosavi-Dezfooli et al., 2019, Huang et al., 2020] or GradAlign [Andriushchenko and Flammarion, 2020]. However, for small enough $\varepsilon$, adversarial training with FGM/FGSM works as well as multi-step PGD [Andriushchenko and Flammarion, 2020].

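To make the updates above concrete, the following is a minimal NumPy sketch of the PGD recursion in Eq. (2) with the ${\ell }_{\infty }$ sign update, using a toy quadratic loss in place of a trained network; the function names and the toy loss are illustrative assumptions, not the implementation used in the paper. FGSM corresponds to the special case of a single step with $\alpha = \varepsilon$.

```python
import numpy as np

def project(delta, x, eps):
    # Projection onto Delta: the l_inf ball of radius eps, intersected
    # with the constraint that x + delta stays in [0, 1]^d.
    delta = np.clip(delta, -eps, eps)
    return np.clip(x + delta, 0.0, 1.0) - x

def pgd_linf(grad_fn, x, eps, alpha, steps):
    # PGD recursion of Eq. (2) with the gradient sign update;
    # FGSM is the special case steps=1, alpha=eps.
    delta = np.zeros_like(x)
    for _ in range(steps):
        delta = delta + alpha * np.sign(grad_fn(x + delta))
        delta = project(delta, x, eps)
    return delta

# Toy loss l(z) = ||z - t||_2^2 with gradient 2 (z - t): ascending this
# loss pushes x away from the target t while respecting the constraints.
t = np.full(4, 0.9)
x = np.full(4, 0.5)
grad_fn = lambda z: 2.0 * (z - t)
delta = pgd_linf(grad_fn, x, eps=0.1, alpha=0.03, steps=10)
```

On this toy problem the attack lands on the boundary of the $\varepsilon$-ball, and the loss at $x + \delta$ is strictly larger than at $x$.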
Table 1: Accuracy and calibration of ResNet-18 models trained on CIFAR-10 and ImageNet-100. ${\ell }_{\infty }$ and ${\ell }_{2}$ adversarial training substantially improves accuracy and calibration error (ECE) on corrupted samples.

| Training | Standard accuracy | Corruption accuracy | Corruption calibration error |
|---|---|---|---|
| **CIFAR-10** | | | |
| Standard | 95.1% | 74.6% | 16.6% |
| ${\ell }_{\infty }$ adversarial | 93.3% | 82.7% | 10.8% |
| ${\ell }_{2}$ adversarial | 93.6% | **83.4%** | 10.5% |
| **ImageNet-100** | | | |
| Standard | 86.6% | 47.5% | 10.0% |
| ${\ell }_{\infty }$ adversarial | 86.5% | 47.7% | 12.4% |
| ${\ell }_{2}$ adversarial | 86.3% | **48.4%** | 9.4% |

Experimental details. We run experiments on two common image classification datasets: CIFAR-10 [Krizhevsky and Hinton, 2009], which has ${32} \times {32}$ images, and ImageNet-100 [Russakovsky et al., 2015] with ${224} \times {224}$ images, where we take every tenth class following Laidlaw et al. [2021]. We choose ImageNet-100 since we always perform a grid search over the main hyperparameters, such as the perturbation radius for adversarial training, which would be too expensive on the full ImageNet. Unless mentioned otherwise, we use the PreAct ResNet-18 architecture [He et al., 2016]. We specify the exact hyperparameters in App. A. We evaluate the accuracy on common corruptions using the CIFAR-10-C and ImageNet-C datasets from Hendrycks and Dietterich [2019], which contain 15 different synthetic corruptions in four categories: blur, noise, digital, and weather corruptions. We report the accuracy averaged over all 5 severity levels.

Adversarial training improves accuracy and calibration. We start by showing in Fig. 1 the common corruption accuracy of ${\ell }_{\infty }$ adversarially trained models, as it is the most widely studied setting [Madry et al., 2018] and has been reported multiple times in the common corruption literature [Hendrycks et al., 2019b, Ford et al., 2019, Rusak et al., 2020]. Since we are interested primarily in small-$\varepsilon$ adversarial training, we rely throughout the paper on FGM/FGSM for ${\ell }_{2}/{\ell }_{\infty }$ norms respectively to solve the inner maximization problem (1), which only leads to a $2 \times$ computational overhead. Note, however, that we exceptionally use PGD with 10 steps for $\varepsilon \in \{ 8/{255},{10}/{255}\}$ to prevent catastrophic overfitting and allow a direct comparison with previous works.

Figure 2: Expected calibration error on CIFAR-10-C for ${\ell }_{\infty }$ adversarially trained models.

We observe that for the small-$\varepsilon$ regime around $\varepsilon = 1/{255}$, we get a significant improvement in corruption accuracy: 74.5% accuracy is achieved with standard training, 82.7% with adversarial training using $\varepsilon = 1/{255}$, and ${73.8}\%$ using the standardly reported threshold ${\varepsilon }_{\infty } = 8/{255}$. The reason is that the tradeoff between robustness and accuracy [Tsipras et al., 2019] has to be carefully balanced: if the standard accuracy drops for higher $\varepsilon$, the corruption accuracy also deteriorates. Thus, selecting the most robust ${\ell }_{p}$-model does not lead to the optimal performance on common corruptions. Alternatively, one can also balance this tradeoff by mixing clean and adversarial samples, but this leads to similar results overall (see App. C for details), so we focus on adversarial training with ${100}\%$ adversarial samples for the rest of the paper.

Additionally, we show that the predicted probabilities of adversarially trained models are significantly better calibrated on common corruptions. We believe that calibration is another important aspect of a model's trustworthiness, which is particularly important in the presence of out-of-distribution data such as corrupted images. In Fig. 2, we plot the expected calibration error (ECE) [Guo et al., 2017] on CIFAR-10-C for models trained with different ${\ell }_{\infty }$-radii. We observe that the ECE, both with and without temperature rescaling (see App. B for details), follows a decreasing trend over ${\ell }_{\infty }$-radii, which is expected since a classifier that predicts uniform probabilities over classes is perfectly calibrated. In particular, the most accurate model trained with ${\varepsilon }_{\infty } = 1/{255}$ has a much lower ECE than the standard model: ${10.8}\%$ instead of 16.6%, and with temperature rescaling 6.7% instead of 11.3%.

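For reference, the ECE bins predictions by confidence and averages the per-bin gap between accuracy and mean confidence, weighted by bin size. A minimal sketch follows; the function name and toy data are ours, while the equal-width binning matches the standard formulation of Guo et al. [2017]:

```python
import numpy as np

def expected_calibration_error(conf, correct, n_bins=15):
    # conf: predicted probability of the predicted class, in (0, 1].
    # correct: 1 if the prediction was right, else 0.
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, n = 0.0, len(conf)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            # Size-weighted |accuracy - confidence| gap for this bin.
            ece += mask.sum() / n * abs(correct[mask].mean() - conf[mask].mean())
    return ece

# A model that says 80% and is right 80% of the time is perfectly calibrated;
# one that says 90% and is always wrong has ECE 0.9.
ece_calibrated = expected_calibration_error(
    np.full(10, 0.8), np.array([1] * 8 + [0] * 2))
ece_overconfident = expected_calibration_error(
    np.full(10, 0.9), np.zeros(10))
```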
We further compare the performance in the ${\ell }_{2}$ perturbation model. In Table 1, we report the results of standard, ${\ell }_{\infty }$, and ${\ell }_{2}$ adversarial training on CIFAR-10 and ImageNet-100, where we perform a detailed grid search for each model over the perturbation radius $\varepsilon$. To the best of our knowledge, we show for the first time that adversarial training improves calibration (see also App. B) while increasing the accuracy, and that it helps on ImageNet-C, not only on CIFAR-10-C. We generally observe that ${\ell }_{2}$ adversarial training performs better than ${\ell }_{\infty }$, thus we focus on it in the next section.

Figure 3: Accuracy for different corruption types on CIFAR-10-C. Unlike other methods, adversarial training improves the performance on each corruption.

§ 4 UNDERSTANDING THE EFFECT OF ADVERSARIAL TRAINING ON IMAGE CORRUPTIONS

Here we compare ${\ell }_{2}$ adversarial training to other natural baselines and discuss the main conceptual differences.

Comparing natural baselines across corruption types. We compare ${\ell }_{2}$ adversarial training with a few simple baselines: standard training, gradient regularization [Drucker and LeCun, 1992], and standard Gaussian data augmentation. To ensure a fair comparison, we perform a grid search for each method over the perturbation radius $\varepsilon$, the regularization parameter $\lambda$, and the noise standard deviation $\sigma$, respectively. We choose to compare to gradient regularization since it is an established regularization method that may have a similar effect to adversarial training with small perturbations [Simon-Gabriel et al., 2019]. We aggregate the corruptions over each type (blurs, digital, noise, weather) and plot the results in Fig. 3; results for each individual corruption are reported in Fig. 12 in the Appendix.

First, we observe that adversarial training is the best-performing method and that, unlike the other methods, ${\ell }_{2}$ adversarial training helps for each corruption type. At the same time, Gaussian augmentation degrades the performance on digital and weather corruptions while very significantly improving the performance on noise corruptions, which is expected as the Gaussian noise used for training is also contained in the noise corruptions. Interestingly, for the fog and contrast corruptions, the performance degrades for all methods (see Table 10 in App. H), consistently with the observation made in Ford et al. [2019]. Our results also suggest that the impact of gradient regularization is limited and cannot explain the accuracy gains of both adversarial training and Gaussian augmentation, as one could expect from the fact that these methods are equivalent to gradient regularization when used with sufficiently small parameters $\sigma$ and $\varepsilon$ [Bishop, 1995].

${}^{1}$ The exact numbers differ from Ford et al. [2019] since we use ResNet-18 instead of WRN-28-10 and different hyperparameters.

Worst-case vs. average-case behavior. Ford et al. [2019] show that robustness to Gaussian noise and robustness to adversarial perturbations are closely related. More precisely, they show using concentration-of-measure arguments that a non-zero error rate under Gaussian perturbations implies the existence of small adversarial perturbations, and consequently that improving adversarial robustness leads to an improvement in robustness against Gaussian perturbations. This finding is consistent with what we observe here. What remains to be understood is why adversarial training performs better than Gaussian augmentation on common corruptions. The main difference between the two methods appears when analyzing the objectives that they minimize. For a single sample $x$, the loss function considered in Gaussian augmentation is:

$$
{\mathbb{E}}_{d \sim N\left( {0,I{\sigma }^{2}}\right) }\left\lbrack {\ell \left( {\theta ,x + d}\right) }\right\rbrack \approx {\mathbb{E}}_{\rho : \parallel \rho {\parallel }_{2} = \sigma \sqrt{d}}\left\lbrack {\ell \left( {\theta ,x + \rho }\right) }\right\rbrack ,
$$

since Gaussian vectors with covariance ${\sigma }^{2}I$ are highly concentrated on the sphere of radius $\sigma \sqrt{d}$ in high dimensions. Therefore, Gaussian augmentation amounts to minimizing an objective where perturbations are averaged over the sphere. In contrast, the objective behind adversarial training defined in Eq. (1) amounts to minimizing a worst-case loss based on the worst-case perturbation in the ball. The key difference is that minimizing the expected value of the loss does not guarantee any behavior inside the sphere.

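This concentration statement is easy to check numerically: for CIFAR-10-sized inputs ($d = 3 \cdot 32 \cdot 32 = 3072$), the norm of a Gaussian perturbation is already tightly clustered around $\sigma \sqrt{d}$. A quick sanity check, with the sample size and $\sigma$ chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
d, sigma = 3 * 32 * 32, 0.05     # CIFAR-10 input dimension, illustrative sigma
noise = rng.normal(0.0, sigma, size=(1000, d))
norms = np.linalg.norm(noise, axis=1)

# Norms concentrate around sigma * sqrt(d): the mean ratio is close to 1
# and the relative spread is of order 1 / sqrt(2 d).
mean_ratio = norms.mean() / (sigma * np.sqrt(d))
rel_spread = norms.std() / norms.mean()
print(mean_ratio, rel_spread)
```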
To investigate this behavior, we perform the following experiment in Fig. 4. For 1000 random test set images from CIFAR-10, we evaluate the loss with additive Gaussian noise of $\sigma \in \left\lbrack {0,{0.1}}\right\rbrack$ and average the loss function over both images and perturbations for (1) a standard model, (2) a model trained with Gaussian augmentation with $\sigma = {0.05}$ where all 100% of the training samples are augmented, (3) a model trained with Gaussian augmentation with $\sigma = {0.1}$ where only ${50}\%$ of the training samples are augmented, and (4) an ${\ell }_{2}$ adversarially trained model with $\varepsilon = {0.1}$. We notice that the loss function for ${100}\%$ Gaussian augmentation is minimal at a $\sigma$ which is only slightly less than the $\sigma = {0.05}$ used for its training. Hence, the model has overfitted not only to the type of noise but also to its magnitude: the loss function outside and inside of the sphere is larger than on its surface. However, there is a simple fix if we train with 50% Gaussian noise in each batch, as suggested, e.g., in Rusak et al. [2020] in contrast to Ford et al. [2019]. This scheme alleviates the $\sigma$-overfitting behavior and also achieves better accuracy on clean samples (93.2% instead of 92.5%) and, most importantly, significantly improves on common corruptions (85.0% instead of 80.5%). At the same time, ${\ell }_{2}$ adversarial training does not suffer from this problem, and both ${100}\%$ and ${50}\%$ schemes work nearly equally well (details can be found in App. C). We provide a further discussion of $\sigma$-overfitting in App. D, together with additional experiments on ImageNet-100 where $\sigma$-overfitting is even more noticeable.

Figure 4: Average cross-entropy loss under Gaussian noise for different training methods.

Figure 5: Average ${\ell }_{2}$ and LPIPS distance for different common corruptions from CIFAR-10-C.

Local vs. global ${\ell }_{p}$ behavior. Interestingly, adversarial training with worst-case perturbations bounded within a tiny ${\ell }_{2}$ ball leads to robustness significantly beyond this radius. Fig. 5 illustrates that common corruptions have an ${\ell }_{2}$ norm an order of magnitude larger than the $\varepsilon = {0.1}$ used for ${\ell }_{2}$ adversarial training. This is in contrast with adversarial robustness, which does not significantly extend beyond the radius used for training [Madry et al., 2018]. Relatedly, Ford et al. [2019] argue that for Gaussian noise, improving the minimum distance to the decision boundary (e.g., via adversarial training) also improves the average distance. A similar mechanism is at play for adversarial ${\ell }_{2}$ perturbations and common corruptions, which may explain the generalization of adversarial training to large average-case perturbations. However, our setting is more complex than that of Ford et al. [2019] since at training and test time we deal with different and diverse types of noise.

§ 5 IMPROVING ADVERSARIAL TRAINING BY RELAXING A PERCEPTUAL DISTANCE

As shown above, ${\ell }_{p}$ adversarial training already leads to encouraging results on common corruptions. Moreover, the ${\ell }_{2}$ distance appears to be more suitable for adversarial training than ${\ell }_{\infty }$ on both datasets, as implied by Table 1. This observation suggests that using more advanced distances, such as perceptual ones, can further improve corruption robustness.

From ${\ell }_{p}$ distances to LPIPS. One of the main disadvantages of ${\ell }_{p}$-norms is that they are very sensitive to simple transformations such as rotations or translations [Sharif et al., 2018]. One possible solution is to consider perceptual distances ${}^{2}$ which better capture these invariances, such as the learned perceptual image patch similarity (LPIPS) distance introduced in Zhang et al. [2018b], which is based on the activations of a convolutional network. The LPIPS distance is formally defined as

$$
{\mathrm{d}}_{\text{ LPIPS }}{\left( x,{x}^{\prime }\right) }^{2} = \mathop{\sum }\limits_{{l = 1}}^{L}{\alpha }_{l}{\begin{Vmatrix}{\phi }_{l}\left( x\right) - {\phi }_{l}\left( {x}^{\prime }\right) \end{Vmatrix}}_{2}^{2}, \tag{3}
$$

where $L$ is the depth of the network, ${\phi }_{l}$ is its feature map up to the $l$-th layer, and ${\left\{ {\alpha }_{l}\right\} }_{l = 1}^{L}$ are constants that weigh the contributions of the ${\ell }_{2}$ distances between activations. There are two crucial elements in LPIPS: the learned network and the learned coefficients ${\left\{ {\alpha }_{l}\right\} }_{l = 1}^{L}$. Zhang et al. [2018b] propose to take a network pre-trained on ImageNet and learn the coefficients on their collected dataset of human judgements about which images are closer to each other. Both Zhang et al. [2018b] and Laidlaw et al. [2021] argue that LPIPS is better suited to measure image similarity. In App. E we analyze the suitability of LPIPS over ${\ell }_{2}$ specifically on the images from CIFAR-10-C, with a detailed breakdown over corruption types. In particular, we show that the LPIPS distance is better correlated with the error rate of the network, and its increase over severity levels is more monotonic compared to ${\ell }_{2}$, as can also be seen in Fig. 5.

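Structurally, Eq. (3) is just a weighted sum of layerwise ${\ell }_{2}$ distances, which is cheap to evaluate once the activations are available. A toy sketch, with random linear maps and tanh nonlinearities standing in for the pretrained CNN; the weights and coefficients below are illustrative placeholders, not the actual LPIPS network or its learned ${\alpha }_{l}$:

```python
import numpy as np

def lpips_sq(x, xp, feature_maps, alphas):
    # Squared LPIPS-style distance of Eq. (3): a weighted sum of squared
    # l_2 distances between the layer activations phi_l(x) and phi_l(x').
    return sum(a * np.sum((phi(x) - phi(xp)) ** 2)
               for a, phi in zip(alphas, feature_maps))

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(6, 8))
phi1 = lambda z: np.tanh(W1 @ z)          # stand-in for the first feature map
phi2 = lambda z: np.tanh(W2 @ phi1(z))    # stand-in for the second feature map

x = rng.normal(size=4)
d_same = lpips_sq(x, x, [phi1, phi2], alphas=[0.5, 0.5])
d_diff = lpips_sq(x, x + 0.1, [phi1, phi2], alphas=[0.5, 0.5])
```

As expected, the distance is zero between identical inputs and positive once the input is perturbed.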
LPIPS adversarial training. In view of these positive features of LPIPS, adversarial training using LPIPS appears to be a promising approach to improve the performance on common corruptions. The worst-case loss problem considered in (1) using the LPIPS distance can be formulated as:

$$
\mathop{\max }\limits_{\delta }\ell \left( {x + \delta ,y;\theta }\right) \;\text{ s.t. }\;{\mathrm{d}}_{\mathrm{{LPIPS}}}\left( {x,x + \delta }\right) \leq \varepsilon . \tag{4}
$$

However, this optimization problem is challenging since ${\mathrm{d}}_{\text{LPIPS}}$ is itself defined by a neural network, and the projection onto the LPIPS ball, as required when using PGD to solve (4), does not admit a closed-form expression. This problem was considered in Laidlaw et al. [2021], who propose two approximate attacks: Perceptual Projected Gradient Descent (PPGD) and the Lagrangian Perceptual Attack (LPA). We discuss their approach in more detail in App. F but emphasize that they either need to perform an approximate projection, which is computationally expensive, or come up with a scheme for tuning the Lagrange multiplier $\lambda$ in the Lagrangian formulation. Furthermore, they suggest in both cases to use 10-step iterative attacks for approximate LPIPS adversarial training, which limits the scalability of the method to large datasets such as ImageNet.

Relaxed LPIPS adversarial training. We propose here a relaxation of the LPIPS adversarial objective (4). For simplicity of presentation, let us start by assuming that the LPIPS distance is defined using a single intermediate layer of the network, i.e., ${\mathrm{d}}_{\text{LPIPS}}\left( {x,{x}^{\prime }}\right) = {\begin{Vmatrix}\phi \left( x\right) - \phi \left( {x}^{\prime }\right) \end{Vmatrix}}_{2}$. Then we can write a neural network $f$ as the composition of the feature map $\phi$ and the remaining part of the network: $f\left( x\right) = h\left( {\phi \left( x\right) }\right)$. The LPIPS adversarial objective (4) in this notation becomes

$$
\mathop{\max }\limits_{\delta }\ell \left( {h\left( {\phi \left( {x + \delta }\right) }\right) }\right) \;\text{ s.t. }\;\parallel \phi \left( x\right) - \phi \left( {x + \delta }\right) {\parallel }_{2} \leq \varepsilon .
$$

We first introduce the slack variable $\widetilde{\delta } = \phi \left( {x + \delta }\right) - \phi \left( x\right)$, which allows us to rewrite the objective as

$$
\mathop{\max }\limits_{{\delta ,\widetilde{\delta }}}\ell \left( {h\left( {\phi \left( x\right) + \widetilde{\delta }}\right) }\right) \;\text{ s.t. }\;\parallel \widetilde{\delta }{\parallel }_{2} \leq \varepsilon ,\;\widetilde{\delta } = \phi \left( {x + \delta }\right) - \phi \left( x\right) .
$$

Then we perform the key step: we omit the equality constraint that couples the slack variable to an input-space perturbation and obtain the following relaxation

$$
\mathop{\max }\limits_{\widetilde{\delta }}\ell \left( {h\left( {\phi \left( x\right) + \widetilde{\delta }}\right) }\right) \;\text{ s.t. }\;\parallel \widetilde{\delta }{\parallel }_{2} \leq \varepsilon , \tag{5}
$$

i.e., we lift the requirement that there should exist a $\delta$ in the input space that corresponds to the layerwise perturbation $\widetilde{\delta }$.

A similar relaxation can be derived when the LPIPS distance is defined using multiple layers (see App. F):

$$
\mathop{\max }\limits_{{{\widetilde{\delta }}^{\left( 1\right) },\ldots ,{\widetilde{\delta }}^{\left( L\right) }}}\ell \left( {{g}_{L}\left( {\ldots {g}_{1}\left( {x + {\widetilde{\delta }}^{\left( 1\right) }}\right) \cdots + {\widetilde{\delta }}^{\left( L\right) }}\right) }\right) \tag{6}
$$

$$
\text{ s.t. }\;{\begin{Vmatrix}{\widetilde{\delta }}^{\left( l\right) }\end{Vmatrix}}_{2} \leq {\varepsilon }_{l}\;\forall l \in {\mathcal{L}}_{LPIPS},\quad {\widetilde{\delta }}^{\left( l\right) } = 0\;\forall l \notin {\mathcal{L}}_{LPIPS},
$$

where the network is written in its compositional form $f = {g}_{L} \circ \cdots \circ {g}_{1}$, ${\mathcal{L}}_{LPIPS}$ is the set of layer indices used in LPIPS, and ${\varepsilon }_{l}$ denotes the ${\ell }_{2}$ bound imposed at the $l$-th layer. We denote this relaxation as relaxed LPIPS adversarial training (RLAT) and solve it efficiently using a single-iteration adversarial attack similar to FGM. We emphasize that the projection of each ${\widetilde{\delta }}^{\left( l\right) }$ onto the corresponding ${\ell }_{2}$ ball is computationally cheap to perform, unlike the LPIPS projection.

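A minimal NumPy sketch of such a single-step attack for the relaxation, on a toy two-layer linear network with squared loss; the network, the loss, and the name `rlat_step` are our illustrative assumptions, while the paper's RLAT perturbs the layers of the actual classifier. Each layerwise perturbation is an FGM-style ${\ell }_{2}$-normalized gradient step of size ${\varepsilon }_{l}$, which by construction lands on the boundary of its ${\ell }_{2}$ ball, so no extra projection is needed:

```python
import numpy as np

def rlat_step(x, W1, W2, t, eps1, eps2):
    # Toy network f(x) = W2 (W1 x + d1) + d2 with loss ||f(x) - t||^2.
    z1 = W1 @ x
    z2 = W2 @ z1
    g2 = 2.0 * (z2 - t)   # gradient of the loss w.r.t. the layer-2 perturbation
    g1 = W2.T @ g2        # gradient w.r.t. the layer-1 perturbation (chain rule)
    # FGM-style normalized steps: ||d_l||_2 = eps_l by construction.
    d1 = eps1 * g1 / np.linalg.norm(g1)
    d2 = eps2 * g2 / np.linalg.norm(g2)
    return d1, d2

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(3, 8))
x, t = rng.normal(size=4), rng.normal(size=3)
d1, d2 = rlat_step(x, W1, W2, t, eps1=0.1, eps2=0.05)

clean_loss = np.sum((W2 @ (W1 @ x) - t) ** 2)
adv_loss = np.sum((W2 @ (W1 @ x + d1) + d2 - t) ** 2)
```

Since the loss is convex in the layerwise perturbations here, the ascent directions strictly increase it, illustrating the effect a single relaxed-attack step has during training.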
Since we perform the relaxation and train the network that is also used to compute LPIPS, the exact layerwise coefficients ${\alpha }_{l}$ from the original LPIPS [Zhang et al., 2018b] are no longer applicable and cannot be used to set the layerwise bounds ${\varepsilon }_{l}$. Therefore, we set our own values of ${\varepsilon }_{l}$, which we specify in App. F together with detailed derivations of RLAT, its precise algorithm, and other implementation details. Finally, we remark that related layerwise adversarial training methods have been proposed before [Stutz et al., 2019, Volpi et al., 2018, Wei and Ma, 2020]. However, viewing layerwise adversarial training as an efficient relaxation of LPIPS adversarial training is novel, as is applying these methods for general robustness such as to common corruptions.

${}^{2}$ Not necessarily distances in the strict mathematical sense that assumes a certain set of axioms to hold.

Figure 6: LPIPS adversarial robustness of different training schemes on CIFAR-10.

§ 6 EMPIRICAL EVALUATION OF RLAT

Here we first show that RLAT indeed substantially improves LPIPS robustness. Second, we compare RLAT to other established methods and show that it consistently leads to improved accuracy and calibration on common corruptions.

LPIPS robustness of RLAT. We use the Lagrangian Perceptual Attack developed in Laidlaw et al. [2021] to estimate the LPIPS adversarial accuracy under different LPIPS radii and plot the results for CIFAR-10 in Fig. 6. We use standard, ${\ell }_{2}$ adversarial training (AT), Fast PAT, and RLAT models with their main hyperparameters selected to perform best on common corruptions. ${}^{3}$ We observe that RLAT indeed substantially improves LPIPS robustness, even more than other approaches such as ${\ell }_{2}$ AT and Fast PAT. This gives further evidence that neither ${\ell }_{2}$ AT nor RLAT suffers from catastrophic overfitting, even though both are trained with one-step perturbations similar to FGSM. We provide a similar evaluation for ${\ell }_{2}$ robustness in App. F (Fig. 10).

Main experimental setup. We compare the results for RLAT with additional baselines: ${\ell }_{2}$ and ${\ell }_{\infty }$ adversarial training (with 100% adversarial samples per batch), Gaussian augmentation (with both ${50}\%$ and ${100}\%$ augmented samples per batch), AdvProp [Xie et al., 2020], Fast PAT [Laidlaw et al., 2021], and also four data augmentation approaches: DeepAugment [Hendrycks et al., 2021], AugMix [Hendrycks et al., 2019b], adversarial noise training (ANT) [Rusak et al., 2020], and Stylized ImageNet (SIN) [Geirhos et al., 2019]. We additionally use the AugMix method with the Jensen-Shannon regularization term, as proposed in Hendrycks et al. [2019b]. We train all methods from random initialization except ANT, where we follow the scheme of Rusak et al. [2020]. All comparisons between methods are performed with a grid search over their main hyperparameters (reported in App. A), such as $\sigma$ in Gaussian augmentation or $\varepsilon$ in adversarial training, which we perform on the main 15 corruptions from CIFAR-10-C / ImageNet-C. In App. H we further verify that selecting the main hyperparameters on validation corruptions leads to the same results. For Fast PAT on CIFAR-10, we do a grid search over their parameter $\varepsilon$, but on ImageNet-100 we report the results based on the models provided by the authors due to limited computational resources. To assess calibration, we report the expected calibration error (ECE) (see App. H for ECE with temperature rescaling [Guo et al., 2017]).

Since the main goal of the common corruption benchmark [Hendrycks and Dietterich, 2019] is to show the model's behavior on unseen corruptions, we do not use overlapping augmentations in training (see App. A). The only exception is Gaussian augmentation, which we mark in Table 2 following Rusak et al. [2020], since it belongs to the common corruptions. We note that removing only Gaussian noise from the evaluation is not sufficient, because the other noise corruptions can also be affected by training with Gaussian augmentation. Thus, the results of ${100}\%$ and ${50}\%$ Gaussian augmentation are shown only for illustrative purposes, suggesting that adversarial training with no prior knowledge about the corruptions can obtain almost the same results as direct augmentation.

Main experimental results. We show the main experimental results on CIFAR-10-C and ImageNet-100-C in Table 2. First of all, we observe that ${\ell }_{p}$ adversarial training is a strong baseline on common corruptions on both datasets, with a larger gain on CIFAR-10-C. Using our proposed relaxed LPIPS adversarial training further improves the corruption accuracy on both datasets compared to standard models: from 74.6% to 84.1% on CIFAR-10-C and from 47.5% to 48.8% on ImageNet-100-C. Moreover, RLAT also improves calibration compared to the standard model: from 16.6% to 9.9% ECE on CIFAR-10-C and from 10.0% to 9.1% ECE on ImageNet-100-C. We also observe that ${100}\%$ Gaussian augmentation even deteriorates the performance on ImageNet-100-C, while ${50}\%$ Gaussian augmentation significantly improves the average accuracy, which is consistent with Rusak et al. [2020].

We observe that RLAT can be successfully combined with existing data augmentations, leading to better accuracy and calibration. For example, adding RLAT on top of DeepAugment improves the CIFAR-10-C accuracy from 85.3% to 87.8%. Combining RLAT with the AugMix augmentation improves the corruption accuracy from 86.6% to 88.5% on CIFAR-10-C and from 52.3% to ${54.8}\%$ on ImageNet-100-C. Combining RLAT with SIN and with ${\mathrm{{ANT}}}^{3 \times 3}$ improves the accuracy on ImageNet-100-C from 53.7% to 54.3% and from ${57.7}\%$ to ${58.3}\%$, respectively. Moreover, we see that RLAT consistently improves the ECE in all settings, and we refer to App. H for ECE with temperature rescaling, which shows qualitatively the same behavior.

${}^{3}$ We note that Laidlaw et al. [2021] focus on robustness to unseen adversarial examples that involve a worst-case optimization process, while we focus on unseen average-case common corruptions. This is the reason why the optimal perturbation radii that we consider are noticeably smaller than in their paper.

+ Table 2: Accuracy and calibration of ResNet-18 models trained on CIFAR-10 and ImageNet-100. Gray-colored numbers correspond to methods partially trained with the corruptions from CIFAR-10-C and ImageNet-100-C.
216
+
217
+ | Training (CIFAR-10) | Standard accuracy | Corruption accuracy | Corruption calibr. error |
+ |---|---|---|---|
+ | Standard | 95.1% | 74.6% | 16.6% |
+ | 100% Gaussian | 92.5% | 80.5% | 13.2% |
+ | 50% Gaussian | 93.2% | 85.0% | 9.1% |
+ | Fast PAT | 93.4% | 80.6% | 12.0% |
+ | AdvProp | 94.7% | 82.9% | 10.1% |
+ | ${\ell }_{\infty }$ adversarial | 93.3% | 82.7% | 10.8% |
+ | ${\ell }_{2}$ adversarial | 93.6% | 83.4% | 10.5% |
+ | RLAT | 93.1% | 84.1% | 9.9% |
+ | DeepAugment | 94.1% | 85.3% | 8.7% |
+ | DeepAugment + RLAT | 93.6% | 87.8% | 6.1% |
+ | AugMix | 95.0% | 86.6% | 6.9% |
+ | AugMix + RLAT | 94.8% | 88.5% | 4.5% |
+ | AugMix + JSD | 95.0% | 88.6% | 6.5% |
+ | AugMix + JSD + RLAT | 94.8% | 89.6% | $\mathbf{{5.4}\% }$ |
+ 
+ | Training (ImageNet-100) | Standard accuracy | Corruption accuracy | Corruption calibr. error |
+ |---|---|---|---|
+ | Standard | 86.6% | 47.5% | 10.0% |
+ | 100% Gaussian | 86.4% | 46.7% | 11.7% |
+ | 50% Gaussian | 83.8% | 55.2% | 6.1% |
+ | Fast PAT | 71.5% | 45.2% | 8.0% |
+ | ${\ell }_{\infty }$ adversarial | 86.5% | 47.7% | 12.4% |
+ | ${\ell }_{2}$ adversarial | 86.3% | 48.4% | 9.4% |
+ | RLAT | 86.5% | 48.8% | 9.1% |
+ | AugMix | 86.7% | 52.3% | 7.5% |
+ | AugMix + RLAT | 86.8% | 54.8% | 4.7% |
+ | AugMix + JSD | 88.4% | 59.3% | 1.9% |
+ | AugMix + JSD + RLAT | 87.1% | 61.1% | $\mathbf{{1.8}\% }$ |
+ | SIN | 86.6% | 53.7% | 6.7% |
+ | SIN + RLAT | 86.5% | 54.3% | 6.0% |
+ | ${\mathrm{{ANT}}}^{3 \times 3}$ | 85.9% | 57.7% | 5.1% |
+ | ${\mathrm{{ANT}}}^{3 \times 3} + \mathrm{{RLAT}}$ | 85.3% | 58.3% | $\mathbf{{4.4}\% }$ |
318
+
319
+ Runtime of RLAT. We report a full runtime comparison between standard training, ${\ell }_{2}/{\ell }_{\infty }$ adversarial training, RLAT, and Fast PAT in Table 3. The main observation is that RLAT is significantly faster than Fast PAT (e.g., 1.8 hours vs. 9.4 hours on CIFAR-10) and leads only to a slight overhead compared to ${\ell }_{2}/{\ell }_{\infty }$ adversarial training (1.8 hours vs. 1.3 hours on CIFAR-10). These runtimes further show the advantage of the single-step adversarial training procedure of RLAT compared to the multi-step approach of Fast PAT. It would be interesting in future work to develop a single-step version of Fast-LPA, which is, however, not straightforward because of its Lagrangian formulation and the need to tune the parameter $\lambda$ over the iterations of Fast-LPA.
320
+
321
+ Table 3: Wall-clock time in hours for ResNet-18 trained with different methods on CIFAR-10 and ImageNet-100 using one Nvidia V100 GPU. ${}^{ * }$ denotes the time reported by Laidlaw et al. [2021] for a larger model (ResNet-50) using different hardware (4 Nvidia RTX 2080 Ti GPUs).
322
+
323
+ | Training | CIFAR-10 | ImageNet-100 |
+ |---|---|---|
+ | Standard | 0.8h | 3.9h |
+ | ${\ell }_{2}/{\ell }_{\infty }$ adversarial | 1.3h | 5.8h |
+ | RLAT | 1.8h | 6.2h |
+ | Fast PAT | 9.4h | ${}^{ * }$ 120h |
343
+
344
+ Additional experiments. We refer to the Appendix for further experimental results. In App. G, we evaluate the performance of the models from Table 2 on ImageNet-A, ImageNet-R, and Stylized ImageNet to better understand how well the improvements on common corruptions transfer to other distribution shifts. In App. H, we provide more detailed results such as those presented in Table 2 but with breakdowns over different corruptions and severities. We also present results for larger network architectures and for AugMix combined with ${\ell }_{p}$ adversarial training in App. A, as well as results of RLAT over multiple random seeds.
345
+
346
+ ## 7 CONCLUSIONS AND FUTURE WORK
347
+
348
+ Our findings suggest that adversarial training can be successfully used to improve accuracy and calibration on common image corruptions. Even simple ${\ell }_{p}$ adversarial training can serve as a strong baseline if the optimal perturbation radius is chosen for the given problem. More advanced adversarial training schemes involve perceptual distances such as LPIPS, and we provide a relaxation of LPIPS adversarial training with an efficient single-step procedure. We observe that the developed relaxation (RLAT) substantially improves the LPIPS robustness and can be successfully combined with existing data augmentations. We hope that RLAT will also be of interest for other domains such as natural language processing, where robustness to commonly occurring corruptions (e.g., typos) is an important task.
349
+
350
+ Maksym Andriushchenko and Nicolas Flammarion. Understanding and improving fast adversarial training. In NeurIPS, 2020.
351
+
352
+ Aharon Azulay and Yair Weiss. Why do deep convolutional networks generalize so poorly to small image transformations? JMLR, 20(184):1-25, 2019.
353
+
354
+ Chris M. Bishop. Training with noise is equivalent to Tikhonov regularization. Neural Computation, 7(1):108-116, January 1995.
355
+
356
+ Francesco Croce and Matthias Hein. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. In ICML, 2020.
357
+
358
+ Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Nicolas Flammarion, Mung Chiang, Prateek Mittal, and Matthias Hein. Robustbench: a standardized adversarial robustness benchmark. arXiv preprint arXiv:2010.09670, 2020.
359
+
360
+ Ekin D Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V Le. Autoaugment: Learning augmentation policies from data. In CVPR, 2019.
361
+
362
+ Terrance DeVries and Graham W Taylor. Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552, 2017.
363
+
364
+ Samuel Dodge and Lina Karam. A study and comparison of human and deep learning recognition performance under visual distortions. In ICCCN, 2017.
365
+
366
+ Harris Drucker and Yann LeCun. Improving generalization performance using double backpropagation. IEEE Transactions on Neural Networks, 1992.
367
+
368
+ Logan Engstrom, Andrew Ilyas, Hadi Salman, Shibani Santurkar, and Dimitris Tsipras. Robustness (python library), 2019. URL https://github.com/MadryLab/robustness.
369
+
370
+ Nic Ford, Justin Gilmer, Nicolas Carlini, and Dogus Cubuk. Adversarial examples are a natural consequence of test error in noise. In ICML, 2019.
371
+
372
+ Robert Geirhos, Carlos R Medina Temme, Jonas Rauber, Heiko H Schütt, Matthias Bethge, and Felix A Wichmann. Generalisation in humans and deep neural networks. In NeurIPS, 2018.
373
+
374
+ Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A. Wichmann, and Wieland Brendel. Imagenet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. ICLR, 2019.
375
+
376
+ Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. ICLR, 2015.
377
+
378
+ Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. On calibration of modern neural networks. In International Conference on Machine Learning, pages 1321-1330. PMLR, 2017.
379
+
380
+ Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. ECCV, 2016.
381
+
382
+ Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. In ICLR, 2019.
UAI/UAI 2022/UAI 2022 Conference/BfMuG_Iiqgc/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,892 @@
1
+ # Principle of Relevant Information for Graph Sparsification
2
+
3
+ ## Abstract
4
+
5
+ Graph sparsification aims to reduce the number of edges of a graph while maintaining its structural properties. In this paper, we propose the first general and effective information-theoretic formulation of graph sparsification, by taking inspiration from the Principle of Relevant Information (PRI). To this end, we extend the PRI from a standard scalar random variable setting to structured data (i.e., graphs). Our Graph-PRI objective is achieved by operating on the graph Laplacian, made possible by expressing the graph Laplacian of a subgraph in terms of a sparse edge selection vector w. We provide both theoretical and empirical justifications for the validity of our Graph-PRI. We also analyze its analytical solutions in a few special cases. We finally present three representative real-world applications, namely graph sparsification, graph regularized multi-task learning, and medical imaging-derived brain network classification, to demonstrate the effectiveness, the versatility and the enhanced interpretability of our approach over prevalent sparsification techniques.
6
+
7
+ ## 1 INTRODUCTION
8
+
9
+ Many complex structures and phenomena are naturally described as graphs and networks (e.g., social networks, brain functional connectivity [Zhou et al., 2020], climate causal effect networks [Nowack et al., 2020]). However, even a graph of moderate size is often prohibitive to visualize and analyze exactly, due to the quadratic growth in the number of possible edges. Therefore, techniques that sparsify graphs by pruning less informative edges have gained increasing attention in the last two decades [Spielman and Srivastava, 2011, Bravo Hermsdorff and Gunderson, 2019, Wu and Chen, 2020]. Apart from offering a much easier visualization, graph sparsification can be used in multiple ways. For example, it may reduce the storage space and accelerate the running time of machine learning algorithms involving graph regularization, with negligible accuracy loss [Sadhanala et al., 2016]. When differential privacy is a major concern, it can remove or hide edges for the purpose of information protection [Arora and Upadhyay, 2019].
10
+
11
+ On the other hand, there is a recent trend to leverage the ideas or principles in Information Theory to problems related to graph or graph neural networks. Let $\mathcal{X}$ denote graph input data which may encode both graph structure information (characterized by either adjacency matrix $A$ or graph Laplacian $L$ ) and node attributes, and $Y$ the desired response such as node labels or graph labels. A notable example is the famed Information Bottleneck (IB) approach [Tishby et al., 1999], which formulates the learning as:
12
+
13
+ $$
14
+ {\mathcal{L}}_{\mathrm{{IB}}} = \min I\left( {\mathcal{X};T}\right) - {\beta I}\left( {Y;T}\right) , \tag{1}
15
+ $$
16
+
17
+ in which $I\left( {\cdot ; \cdot }\right)$ denotes the mutual information. $T$ is the object we want to learn or infer from $\{ \mathcal{X}, Y\}$ , which can be used as a graph node representation [Wu et al., 2020] or as the most informative and interpretable subgraph for the label $Y$ [Yu et al., 2020]. $\beta$ is a Lagrange multiplier that controls the trade-off between sufficiency (the performance of $T$ on the downstream task, as quantified by $I\left( {Y;T}\right)$ ) and minimality (the complexity of the representation, as measured by $I\left( {\mathcal{X};T}\right)$ ).
18
+
19
+ Instead of using the IB approach, we explore the feasibility and potency of another less well-known information-theoretic principle - the Principle of Relevant Information (PRI) [Principe, 2010, Chapter 8] - in graph data, assuming only a single random variable $\mathcal{X}$ is given. Different from IB that requires an auxiliary relevant variable $Y$ and possibly the joint distribution of $\mathbb{P}\left( {\mathcal{X}, Y}\right)$ , the PRI is fully unsupervised and aims to obtain a reduced statistical representation $T$ by decomposing $\mathcal{X}$ with:
20
+
21
+ $$
22
+ {\mathcal{L}}_{\mathrm{{PRI}}} = \min H\left( T\right) + {\beta D}\left( {\mathbb{P}\left( \mathcal{X}\right) \parallel \mathbb{P}\left( T\right) }\right) , \tag{2}
23
+ $$
24
+
25
+ where $H\left( T\right)$ refers to the entropy of $T$ . The minimization of entropy can be viewed as a means of reducing uncertainty and finding the statistical regularity in $T$ . $D\left( {\mathbb{P}\left( \mathcal{X}\right) \parallel \mathbb{P}\left( T\right) }\right)$ is the divergence between the distributions of $\mathcal{X}$ (i.e., $\mathbb{P}\left( \mathcal{X}\right)$ ) and $T$ (i.e., $\mathbb{P}\left( T\right)$ ), which quantifies the descriptive power of $T$ about $\mathcal{X}$ .
26
+
27
+ So far, PRI has only been used in a standard scalar random variable setting. Recent applications of PRI include, but are not limited to, selecting the most relevant examples from the majority class in imbalanced classification [Hoyos-Osorio et al., 2021] and learning disentangled representations with variational autoencoders [Li et al., 2020]. Usually, one uses Rényi's entropy of order 2 [Rényi, 1961] to quantify $H\left( T\right)$ and the Cauchy-Schwarz (CS) divergence [Jenssen et al., 2006] to quantify $D\left( {\mathbb{P}\left( \mathcal{X}\right) \parallel \mathbb{P}\left( T\right) }\right)$ for ease of optimization.
28
+
29
+ In this paper, we extend PRI to graph data. This is not a trivial task, as Rényi's quadratic entropy and the CS divergence are defined over probability spaces and do not capture any structural information. We also exemplify our Graph-PRI with an application in graph sparsification. To summarize, our contributions are fourfold:
30
+
31
+ - Taking the graph Laplacian as the input variable, we propose a new information-theoretic formulation for graph sparsification, inspired by the PRI.
32
+
33
+ - We provide theoretical and empirical justifications for the objective of Graph-PRI on graph sparsification. We also analyze its analytical solutions for some special values of the hyperparameter $\beta$ .
34
+
35
+ - We demonstrate that the graph Laplacian of the resulting subgraph can be elegantly expressed in terms of a sparse edge selection vector $\mathbf{w}$ , which significantly simplifies the learning problem of Graph-PRI. We also show that the objective of Graph-PRI is differentiable, which further simplifies the optimization.
36
+
37
+ - Experimental results on graph sparsification, graph-regularized multi-task learning, and brain network classification demonstrate the versatility and compelling performance of Graph-PRI.
38
+
39
+ ## 2 PRELIMINARY KNOWLEDGE
40
+
41
+ ### 2.1 PROBLEM DEFINITION AND NOTATIONS
42
+
43
+ Consider an undirected graph $G = \left( {V, E}\right)$ with a set of nodes $V = \left\{ {{v}_{1},\cdots ,{v}_{N}}\right\}$ and a set of edges $E = \left\{ {{e}_{1},\cdots ,{e}_{M}}\right\}$ that encodes the connections between nodes. The objective of graph sparsification is to preferentially retain a small subset of edges of $G$ to obtain a sparsified surrogate graph ${G}_{s} = \left( {V,{E}_{s}}\right)$ with edge set ${E}_{s} \subseteq E$ such that $\left| {E}_{s}\right| \ll M$ .
44
+
45
+ The topology of $G$ is essentially determined by its graph Laplacian $L = D - A$ , where $A$ is the adjacency matrix and $D = \operatorname{diag}\left( \mathbf{d}\right)$ is the diagonal matrix formed by the vertex degrees ${d}_{i} = \mathop{\sum }\limits_{{j = 1}}^{N}{A}_{ij}$ . Considering an arbitrary orientation of the edges of $G$ , the incidence matrix $B = \left\lbrack {{\mathbf{b}}_{1},\cdots ,{\mathbf{b}}_{M}}\right\rbrack$ of $G$ is an $N \times M$ matrix whose entries are given by:
46
+
47
+ $$
48
+ {\left\lbrack {\mathbf{b}}_{m}\right\rbrack }_{i} = \left\{ {\begin{array}{ll} + 1 & \text{ if node }{v}_{i}\text{ is the head of edge }{e}_{m} \\ - 1 & \text{ if node }{v}_{i}\text{ is the tail of edge }{e}_{m} \\ 0 & \text{ otherwise } \end{array}.}\right. \tag{3}
49
+ $$
50
+
51
+ Mathematically, $L$ can be expressed in terms of $B$ as:
52
+
53
+ $$
54
+ L = B{B}^{T} = \mathop{\sum }\limits_{{m = 1}}^{M}{\mathbf{b}}_{m}{\mathbf{b}}_{m}^{T}. \tag{4}
55
+ $$
56
+
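As a small numerical illustration of Eqs. (3)-(4) (a numpy sketch; the 4-node toy graph is ours), the identity $L = B{B}^{T} = D - A$ can be checked directly:

```python
import numpy as np

# Toy undirected graph on 4 nodes with edges (0,1), (0,2), (1,2), (2,3).
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
N, M = 4, len(edges)

# Incidence matrix B (Eq. 3): +1 at the head and -1 at the tail of each
# edge, under an arbitrary orientation.
B = np.zeros((N, M))
for m, (u, v) in enumerate(edges):
    B[u, m], B[v, m] = +1.0, -1.0

# Adjacency and degree matrices.
A = np.zeros((N, N))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0
D = np.diag(A.sum(axis=1))

# Eq. (4): the Laplacian built from B coincides with D - A.
L = B @ B.T
assert np.allclose(L, D - A)
```

The choice of edge orientation does not matter, since each outer product ${\mathbf{b}}_{m}{\mathbf{b}}_{m}^{T}$ is invariant to flipping the sign of ${\mathbf{b}}_{m}$.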
57
+ Suppose the subgraph ${G}_{s}$ contains $K$ edges; one can obtain ${G}_{s}$ from $G$ through an edge selection vector $\mathbf{w} =$ ${\left\lbrack {w}_{1},\cdots ,{w}_{M}\right\rbrack }^{T} \in \{ 0,1{\} }^{M}$ such that $\parallel \mathbf{w}{\parallel }_{0} = K$ , where ${w}_{m} = 1$ if an edge belongs to the edge subset ${E}_{s}$ , and ${w}_{m} = 0$ otherwise. Finally, one can write the graph Laplacian ${L}_{s}$ of the $K$ -sparse graph as a function of $\mathbf{w}$ as:
58
+
59
+ $$
60
+ {L}_{s}\left( \mathbf{w}\right) = \mathop{\sum }\limits_{{m = 1}}^{M}{w}_{m}{\mathbf{b}}_{m}{\mathbf{b}}_{m}^{T} = B\operatorname{diag}\left( \mathbf{w}\right) {B}^{T}, \tag{5}
61
+ $$
62
+
63
+ in which $\operatorname{diag}\left( \mathbf{w}\right) \in {\mathbb{R}}^{M \times M}$ is a square diagonal matrix with $\mathbf{w}$ on the main diagonal.
64
+
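Eq. (5) can be verified on a toy graph (numpy sketch; the specific edge choice is illustrative) by comparing $B\operatorname{diag}\left( \mathbf{w}\right) {B}^{T}$ with the direct sum of ${\mathbf{b}}_{m}{\mathbf{b}}_{m}^{T}$ over the selected edges:

```python
import numpy as np

edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
N, M = 4, len(edges)
B = np.zeros((N, M))
for m, (u, v) in enumerate(edges):
    B[u, m], B[v, m] = +1.0, -1.0

# Binary edge-selection vector with ||w||_0 = K = 2: keep edges e_1 and e_4.
w = np.array([1.0, 0.0, 0.0, 1.0])

# Eq. (5): Laplacian of the K-sparse subgraph.
L_s = B @ np.diag(w) @ B.T

# Equivalent form: sum of b_m b_m^T over selected edges only.
L_direct = sum(np.outer(B[:, m], B[:, m]) for m in range(M) if w[m] == 1)
assert np.allclose(L_s, L_direct)
```

Because ${L}_{s}\left( \mathbf{w}\right)$ is linear in $\mathbf{w}$, relaxing $\mathbf{w}$ to $\left\lbrack 0,1\right\rbrack^{M}$ makes the subgraph Laplacian amenable to gradient-based optimization, which is what Graph-PRI exploits.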
65
+ Note that Eq. (5) also applies to a weighted graph $G = \left( {V, W}\right)$ by simply reformulating the incidence matrix $B$ as:
66
+
67
+ $$
+ {\left\lbrack {\mathbf{b}}_{m}\right\rbrack }_{i} = \left\{ {\begin{array}{ll} + \sqrt{{\mu }_{m}} & \text{ if node }{v}_{i}\text{ is the head of edge }{e}_{m} \\ - \sqrt{{\mu }_{m}} & \text{ if node }{v}_{i}\text{ is the tail of edge }{e}_{m} \\ 0 & \text{ otherwise } \end{array},}\right. \tag{6}
+ $$
72
+
73
+ in which ${\mu }_{m}$ is the weight of edge ${e}_{m}$ .
74
+
75
+ In what follows, we will design a learning-based approach to optimally obtain the edge selection vector $\mathbf{w}$ by making use of the general idea of PRI.
76
+
77
+ ### 2.2 GRAPH SPARSIFICATION
78
+
79
+ Substantial efforts have been made on graph sparsification in the last decades. In general, existing methods can be roughly divided into two categories [Wu and Chen, 2020]: 1) graph property preserving sparsifiers; and 2) application-oriented sparsifiers. The first category sparsifies a graph by preserving its main properties, such as shortest path distances, graph cuts [Benczúr and Karger, 2015], or graph Laplacian. The most notable example in this category is the spectrum-preserving approach [Spielman and Srivastava, 2011, Spielman and Teng,2011] that generates a $\gamma$ -spectral approximation to $G$ such that:
80
+
81
+ $$
82
+ \frac{1}{\gamma }{\overrightarrow{x}}^{T}{L}_{s}\overrightarrow{x} \leq {\overrightarrow{x}}^{T}L\overrightarrow{x} \leq \gamma {\overrightarrow{x}}^{T}{L}_{s}\overrightarrow{x}\;\text{ for all }\;\overrightarrow{x}. \tag{7}
83
+ $$
84
+
85
+ Remarkably, the authors also proved that every graph $G$ has a $\left( {1 + \epsilon }\right)$ -spectral approximation ${G}_{s}$ with nearly $\mathcal{O}\left( \frac{N}{{\epsilon }^{2}}\right)$ edges. By contrast, the second category targets specific downstream applications, such as community detection via Local Similarity (LS) [Satuluri et al., 2011] and link prediction [Chen et al., 2015]. Our method belongs to the first category.
86
+
87
+ Learning-based algorithms for graph sparsification are still under-investigated. GSGAN [Wu and Chen, 2020] is designed mainly for community detection, whereas SparRL [Wickman et al., 2021] uses deep reinforcement learning to sequentially prune edges while preserving subgraph modularity. Different from GSGAN and SparRL, we demonstrate in this work that a sparsified graph can be learned simply by a gradient-based method in a principled (information-theoretic) manner, avoiding the need for reinforcement learning or the tuning of a generative adversarial network (GAN) [Goodfellow et al., 2014].
88
+
89
+ ## 3 PRI FOR GRAPH SPARSIFICATION
90
+
91
+ ### 3.1 THE LEARNING OBJECTIVE
92
+
93
+ Suppose we are given a graph $G$ with a known but fixed topology that is characterized by its graph Laplacian $\rho$ , from which we want to obtain a surrogate subgraph ${G}_{s}$ with graph Laplacian $\sigma$ , by preferentially removing less informative (or redundant) edges in $G$ . Motivated by the objective of PRI in Eq. (2), we can cast this problem as a trade-off between the entropy $S\left( \sigma \right)$ of ${G}_{s}$ and its descriptive power about $G$ in terms of their divergence (or dissimilarity) $D\left( {\sigma \parallel \rho }\right)$ :
94
+
95
+ $$
96
+ {\mathcal{J}}_{\text{Graph-PRI }} = \arg \mathop{\min }\limits_{\sigma }S\left( \sigma \right) + {\beta D}\left( {\sigma \parallel \rho }\right) . \tag{8}
97
+ $$
98
+
99
+ In this paper, we choose the von Neumann entropy of the trace-normalized graph Laplacian (i.e., $\widetilde{\sigma } = \sigma /\operatorname{tr}\left( \sigma \right)$ ) to quantify the entropy of ${G}_{s}$ , which is defined on the cone of symmetric positive semi-definite (SPS) matrices with trace 1 as [Nielsen and Chuang, 2002]:
100
+
101
+ $$
102
+ {S}_{\mathrm{{vN}}}\left( \widetilde{\sigma }\right) = - \operatorname{tr}\left( {\widetilde{\sigma }\log \widetilde{\sigma }}\right) = - \mathop{\sum }\limits_{i}\left( {{\lambda }_{i}\log {\lambda }_{i}}\right) , \tag{9}
103
+ $$
104
+
105
+ in which $\log \left( \cdot \right)$ is the matrix logarithm, $\operatorname{tr}\left( \cdot \right)$ denotes the trace, $\left\{ {\lambda }_{i}\right\}$ are the eigenvalues of $\widetilde{\sigma }$ .
106
+
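A minimal numpy sketch of Eq. (9) (the helper name is ours; eigenvalues at machine precision are dropped under the convention $0\log 0 = 0$):

```python
import numpy as np

def von_neumann_entropy(L):
    """S_vN of the trace-normalized graph Laplacian (Eq. 9), in nats."""
    sigma = L / np.trace(L)
    lam = np.linalg.eigvalsh(sigma)
    lam = lam[lam > 1e-12]          # 0 log 0 := 0
    return float(-np.sum(lam * np.log(lam)))

# Laplacian of the path graph 0-1-2; its eigenvalues are 0, 1, 3,
# so the normalized spectrum is (0, 0.25, 0.75).
L = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])
print(von_neumann_entropy(L))  # -> -(0.25 ln 0.25 + 0.75 ln 0.75) ≈ 0.5623
```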
107
+ We then use the quantum Jenssen-Shannon (QJS) divergence between two trace normalized graph Laplacians $\widetilde{\sigma }$ and $\widetilde{\rho }$ to quantify the divergence between $G$ and ${G}_{s}$ [Lamberti et al., 2008]:
108
+
109
+ $$
110
+ {D}_{\mathrm{{QJS}}}\left( {\widetilde{\sigma }\parallel \widetilde{\rho }}\right) = {S}_{\mathrm{{vN}}}\left( \frac{\widetilde{\sigma } + \widetilde{\rho }}{2}\right) - \frac{1}{2}{S}_{\mathrm{{vN}}}\left( \widetilde{\sigma }\right) - \frac{1}{2}{S}_{\mathrm{{vN}}}\left( \widetilde{\rho }\right) . \tag{10}
111
+ $$
112
+
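Eq. (10) can likewise be sketched in numpy (helper names ours). The divergence vanishes for identical graphs, is symmetric in its arguments, and is upper-bounded by $\log 2$ in nats:

```python
import numpy as np

def S_vN(rho):
    """Von Neumann entropy of a density matrix (trace-1 SPS matrix)."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log(lam)))

def qjs(L_sigma, L_rho):
    """Quantum Jensen-Shannon divergence (Eq. 10) between two graphs,
    computed on their trace-normalized Laplacians."""
    s = L_sigma / np.trace(L_sigma)
    r = L_rho / np.trace(L_rho)
    return S_vN((s + r) / 2) - 0.5 * S_vN(s) - 0.5 * S_vN(r)

L1 = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])    # path 0-1-2
L2 = np.array([[2., -1., -1.], [-1., 2., -1.], [-1., -1., 2.]])  # triangle
print(qjs(L1, L1))  # identical graphs -> 0
print(qjs(L1, L2))  # positive and symmetric
```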
113
+ In this paper, we absorb a scaling constant 2 into the expression for ${D}_{\mathrm{{QJS}}}\left( {\widetilde{\sigma }\parallel \widetilde{\rho }}\right)$ . The resulting objective, combining Eqs. (8)-(10), is given by:
116
+
117
+ $$
118
+ {\mathcal{J}}_{\text{Graph-PRI }} = \arg \min {S}_{\mathrm{{vN}}}\left( \widetilde{\sigma }\right) + \beta {D}_{\mathrm{{QJS}}}\left( {\widetilde{\sigma }\parallel \widetilde{\rho }}\right) \tag{11}
119
+ $$
120
+
121
+ $$
122
+ \equiv \arg \min \left( {1 - \beta }\right) {S}_{\mathrm{{vN}}}\left( \widetilde{\sigma }\right) + {2\beta }{S}_{\mathrm{{vN}}}\left( \frac{\widetilde{\sigma } + \widetilde{\rho }}{2}\right) .
123
+ $$
124
+
125
+ The second equality holds because the extra term $\beta {S}_{\mathrm{{vN}}}\left( \widetilde{\rho }\right)$ is a constant with respect to $\widetilde{\sigma }$ .
126
+
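The equivalence of the two forms in Eq. (11) can be checked numerically (numpy sketch, helper names ours): the weighted form $\left( {1 - \beta }\right) {S}_{\mathrm{{vN}}}\left( \widetilde{\sigma }\right) + {2\beta }{S}_{\mathrm{{vN}}}\left( \frac{\widetilde{\sigma } + \widetilde{\rho }}{2}\right)$ equals ${S}_{\mathrm{{vN}}}\left( \widetilde{\sigma }\right) + \beta {D}_{\mathrm{{QJS}}}$ (with the factor 2 absorbed) up to the constant $\beta {S}_{\mathrm{{vN}}}\left( \widetilde{\rho }\right)$:

```python
import numpy as np

def S_vN(rho):
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log(lam)))

def graph_pri(L_s, L, beta):
    """Second form of Eq. (11): (1-beta) S(sigma~) + 2 beta S((sigma~+rho~)/2)."""
    s, r = L_s / np.trace(L_s), L / np.trace(L)
    return (1.0 - beta) * S_vN(s) + 2.0 * beta * S_vN((s + r) / 2.0)

L  = np.array([[2., -1., -1.], [-1., 2., -1.], [-1., -1., 2.]])  # triangle (rho)
Ls = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])    # path (sigma)
beta = 0.7
s, r = Ls / np.trace(Ls), L / np.trace(L)
d_qjs = 2.0 * (S_vN((s + r) / 2.0) - 0.5 * S_vN(s) - 0.5 * S_vN(r))
# First form of Eq. (11) plus the dropped constant beta * S_vN(rho~):
first_form = S_vN(s) + beta * d_qjs + beta * S_vN(r)
assert abs(graph_pri(Ls, L, beta) - first_form) < 1e-9
```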
127
+ ### 3.2 JUSTIFICATION OF THE OBJECTIVE OF GRAPH-PRI
128
+
129
+ One may ask why we choose the von Neumann entropy in ${\mathcal{J}}_{\text{graph-PRI }}$ . In fact, the Laplacian spectrum contains rich information about the multi-scale structure of graphs [Mohar, 1997]. For example, it is well-known that the second smallest eigenvalue ${\lambda }_{2}\left( L\right)$ , which is also called the algebraic connectivity, is always considered to be a measure of how well-connected a graph is [Ghosh and Boyd, 2006].
130
+
131
+ On the other hand, it is natural to use the QJS divergence to quantify the dissimilarity between the original graph and its sparsified version. The QJS divergence is symmetric, and its square root has recently been shown to satisfy the triangle inequality [Virosztek, 2021]. As a graph dissimilarity measure, QJS has also found applications in multilayer network compression [De Domenico et al., 2015] and anomaly detection in graph streams [Chen et al., 2019].
132
+
133
+ A few recent studies indicate the close connections between ${S}_{\mathrm{{vN}}}\left( L\right)$ with the structure regularity and sparsity of a graph [Passerini and Severini, 2008, Han et al., 2012, Liu et al., 2021, Simmons et al., 2018]. We shall now highlight two theorems therein and explain our justifications in Sections 3.2.1 and 3.2.2 in detail.
134
+
135
+ Theorem 1 ([Passerini and Severini, 2008]). Given an undirected graph $G = \{ V, E\}$ , let ${G}^{\prime } = G + \{ u, v\}$ , where $V\left( {G}^{\prime }\right) = V\left( G\right)$ and $E\left( {G}^{\prime }\right) = E\left( G\right) \cup \left\{ {\{ u, v\} }\right\}$ , we have:
136
+
137
+ $$
138
+ {S}_{vN}\left( {L}_{{G}^{\prime }}\right) \geq \frac{{d}_{{G}^{\prime }} - 2}{{d}_{{G}^{\prime }}}{S}_{vN}\left( {L}_{G}\right) , \tag{12}
139
+ $$
140
+
141
+ where ${d}_{{G}^{\prime }} = \mathop{\sum }\limits_{{v \in V\left( {G}^{\prime }\right) }}d\left( v\right)$ is the degree-sum of ${G}^{\prime }$ , and ${L}_{G}$ and ${L}_{{G}^{\prime }}$ refer respectively to the graph Laplacians of $G$ and ${G}^{\prime }$ .
142
+
143
+ Theorem 1 bounds the variation of ${S}_{\mathrm{{vN}}}\left( L\right)$ under edge addition. Although Eq. (12) does not establish a strictly monotonically increasing trend for ${S}_{\mathrm{{vN}}}\left( L\right)$ , it still indicates that minimizing ${S}_{\mathrm{{vN}}}\left( L\right)$ may lead to a sparse graph, especially when the degree-sum is large.
144
+
145
+ Theorem 2 ([Liu et al., 2021]). For any undirected graph $G = \{ V, E\}$ , we have:
146
+
147
+ $$
148
+ 0 \leq {\Delta H}\left( G\right) = H\left( G\right) - {S}_{vN}\left( {L}_{G}\right) \leq \frac{{\log }_{2}e}{\delta }\frac{\operatorname{tr}\left( {W}^{2}\right) }{{d}_{G}}, \tag{13}
+ $$
+ 
+ where $H\left( G\right) = - \mathop{\sum }\limits_{{i = 1}}^{N}\left( \frac{{d}_{i}}{{d}_{G}}\right) {\log }_{2}\left( \frac{{d}_{i}}{{d}_{G}}\right)$ is the Shannon entropy of the degree distribution, $\delta = \min \left\{ {{d}_{i} \mid {d}_{i} > 0}\right\}$ is the minimum positive node degree, ${d}_{G}$ is the degree-sum, and $W$ is the weight matrix.
152
+
153
+ Theorem 2 bounds the difference between ${S}_{\mathrm{{vN}}}\left( L\right)$ and $H\left( G\right)$ , the Shannon discrete entropy on degree of node.
154
+
155
+ #### 3.2.1 $\beta$ controls the sparsity of ${G}_{s}$
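Theorem 2 can be probed numerically (a numpy sketch on a random unweighted ER-style graph of our choosing; note that for an unweighted graph $\operatorname{tr}\left( {W}^{2}\right) = \operatorname{tr}\left( {A}^{2}\right) = {d}_{G}$, so the bound reduces to ${\log }_{2}e/\delta$):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random unweighted undirected graph (ER-style) on N nodes.
N = 30
A = np.triu((rng.random((N, N)) < 0.2).astype(float), 1)
A = A + A.T
d = A.sum(axis=1)
L = np.diag(d) - A
d_G = d.sum()

# Von Neumann entropy (in bits) of the trace-normalized Laplacian.
lam = np.linalg.eigvalsh(L / d_G)
lam = lam[lam > 1e-12]
S_vn = float(-np.sum(lam * np.log2(lam)))

# Shannon entropy (in bits) of the degree distribution.
p = d[d > 0] / d_G
H = float(-np.sum(p * np.log2(p)))

delta_H = H - S_vn   # Theorem 2: 0 <= delta_H <= (log2 e / delta) tr(W^2)/d_G
bound = (np.log2(np.e) / d[d > 0].min()) * np.trace(A @ A) / d_G
print(delta_H, bound)
```

The non-negativity of $\Delta H$ also follows from the Schur-Horn theorem: the eigenvalues of $L$ majorize its diagonal (the degrees), and Shannon entropy is Schur-concave.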
156
+
157
+ Different from the spectral sparsifiers [Spielman and Srivastava, 2011, Spielman and Teng, 2011], in which the sparsity of the subgraph is hard to control (i.e., there is no monotonic relationship between the hyperparameter $\epsilon$ and the degree of sparsity as measured by $\left| {E}_{s}\right|$ ), we argue that the sparsity of the subgraph obtained by Graph-PRI is mainly determined by the value of the hyperparameter $\beta$ .
158
+
159
+ Our argument is mainly based on Theorem 1. Here, we additionally claim that, under a mild condition (Assumption 1), the QJS divergence ${D}_{\mathrm{{QJS}}}\left( {L\parallel {L}_{s}}\right)$ is prone to decrease with edge addition (Corollary 1).
160
+
161
+ Assumption 1. Given an undirected graph $G = \{ V, E\}$ , let ${G}^{\prime } = G + \{ u, v\}$ , where $V\left( {G}^{\prime }\right) = V\left( G\right)$ and $E\left( {G}^{\prime }\right) = E\left( G\right) \cup \left\{ {\{ u, v\} }\right\}$ , we have ${S}_{vN}\left( {L}_{{G}^{\prime }}\right) \geq {S}_{vN}\left( {L}_{G}\right)$ , i.e., there exists a monotonically increasing relationship between the number of edges $\left| E\right|$ and the von Neumann entropy ${S}_{vN}\left( {L}_{G}\right)$ .
162
+
163
+ Corollary 1. Under Assumption 1, suppose ${G}_{s} = \left\{ {{V}_{s},{E}_{s}}\right\}$ is a sparse graph obtained from $G = \{ V, E\}$ (by removing edges), let ${G}_{s}^{\prime } = {G}_{s} + \{ u, v\}$ , where $V\left( {G}_{s}^{\prime }\right) = V\left( {G}_{s}\right)$ and $E\left( {G}_{s}^{\prime }\right) = E\left( {G}_{s}\right) \cup \left\{ {\{ u, v\} }\right\}$ , we have ${D}_{QJS}\left( {{L}_{{G}_{s}^{\prime }}\parallel {L}_{G}}\right) \leq$ ${D}_{QJS}\left( {{L}_{{G}_{s}}\parallel {L}_{G}}\right)$ , i.e., adding an edge tends to decrease the QJS divergence.
164
+
165
+ Combining Theorem 1 and Corollary 1, it is interesting to find that edge addition has opposite effects on ${S}_{\mathrm{{vN}}}$ and ${D}_{\mathrm{{QJS}}}$ : the former is likely to increase, whereas the latter tends to decrease. Therefore, when minimizing the weighted sum of ${S}_{\mathrm{{vN}}}$ and ${D}_{\mathrm{{QJS}}}$ as in Graph-PRI, one can expect that the number of edges in ${G}_{s}$ is mainly determined by the hyperparameter $\beta$ : a smaller $\beta$ gives more weight to ${S}_{\mathrm{{vN}}}$ and thus encourages a sparser graph.
166
+
167
+ To empirically justify our argument, we generate a set of graphs with 200 nodes using either the Erdös-Rényi (ER) model or the Barabási-Albert (BA) model [Barabási and Albert, 1999]. For both models, we generate an original dense graph $G$ whose average node degree $\bar{d}$ is approximately 10, 20, and 30, respectively. We then sparsify $G$ to obtain ${G}_{s}$ with the random sparsifier, which satisfies the spectral property while being computationally very cheap [Sadhanala et al., 2016].
168
+
169
+ We finally evaluate the von Neumann entropy of ${G}_{s}$ and the QJS divergence between $G$ and ${G}_{s}$ with respect to different percentages (pct.) of preserved edges. We repeat the procedure over 100 independent runs and plot the averaged results in Fig. 1, from which we can clearly observe the opposite effects mentioned above. We also sparsify the original graph $G$ with our Graph-PRI for different values of $\beta$ . The number of preserved edges in ${G}_{s}$ with respect to $\beta$ is illustrated in Fig. 2, which shows a clear monotonically increasing relationship.
170
+
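The trend in Fig. 1 can be reproduced in miniature (numpy sketch; the ER generator and random sparsifier below are simplified stand-ins for those used in the experiments): as the percentage of preserved edges grows, the entropy of ${G}_{s}$ rises while its QJS divergence to $G$ falls to zero.

```python
import numpy as np

rng = np.random.default_rng(0)

def entropy(rho):
    """Von Neumann entropy of a trace-1 SPS matrix, in nats."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log(lam)))

def laplacian(edges, N, keep=None):
    A = np.zeros((N, N))
    for m, (u, v) in enumerate(edges):
        if keep is None or keep[m]:
            A[u, v] = A[v, u] = 1.0
    return np.diag(A.sum(axis=1)) - A

# Erdos-Renyi graph G(N, p).
N, p = 50, 0.3
edges = [(i, j) for i in range(N) for j in range(i + 1, N) if rng.random() < p]
M = len(edges)
rho = laplacian(edges, N) / (2 * M)   # tr(L) = 2M for an unweighted graph

results = []
for pct in (0.2, 0.5, 1.0):
    keep = rng.random(M) < pct        # random sparsifier
    Ls = laplacian(edges, N, keep)
    sigma = Ls / np.trace(Ls)
    qjs = entropy((sigma + rho) / 2) - 0.5 * entropy(sigma) - 0.5 * entropy(rho)
    results.append((pct, entropy(sigma), qjs))
    print(results[-1])
```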
171
+ ![019639a2-e6fc-755b-9bc0-14b7be373237_3_893_453_696_322_0.jpg](images/019639a2-e6fc-755b-9bc0-14b7be373237_3_893_453_696_322_0.jpg)
172
+
173
+ Figure 1: The variations of entropy and divergence with respect to different percentages of preserved edges.
174
+
175
+ ![019639a2-e6fc-755b-9bc0-14b7be373237_3_892_898_695_287_0.jpg](images/019639a2-e6fc-755b-9bc0-14b7be373237_3_892_898_695_287_0.jpg)
176
+
177
+ Figure 2: The monotonically increasing relationship between the number of preserved edges and the hyperparameter $\beta$ in our Graph-PRI.
178
+
179
+ #### 3.2.2 Graph-PRI in special cases of $\beta$
180
+
181
+ Continuing our discussion in Sec. 3.2.1, it is interesting to infer what happens in some special cases of $\beta$ . Here, we restrict our discussion to $\beta = 0$ and $\beta \rightarrow \infty$ .
182
+
183
+ When $\beta = 0$ , our objective can be interpreted as $\min H\left( G\right)$ . $H\left( G\right)$ takes the mathematical form of the Shannon discrete entropy (i.e., $- \mathop{\sum }\limits_{{i = 1}}^{N}\mathbb{P}\left( {x}_{i}\right) \log \mathbb{P}\left( {x}_{i}\right)$ , in which $\mathbb{P}\left( {x}_{i}\right)$ is the probability of the $i$ -th state) on the node degrees. In this sense, $H\left( G\right)$ reaches its maximum for a uniform degree distribution (i.e., ${d}_{1} = \cdots = {d}_{N} = k$ , which corresponds to a $k$ -regular graph) and its minimum when the degree of one node dominates (i.e., the graph possesses a high level of centralization). Thus, ${S}_{\mathrm{{vN}}}\left( L\right)$ can also be interpreted as a measure of degree heterogeneity or graph centrality [Simmons et al., 2018]. This also indicates that minimizing ${S}_{\mathrm{{vN}}}\left( L\right)$ pushes Graph-PRI to learn a graph with higher graph centrality. When $\beta \rightarrow \infty$ , we expect to recover the original graph $G$ by Corollary 1; Fig. 3 corroborates our analysis.
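The contrast between the two extremes of $H(G)$ can be checked directly on degree sequences. This is a small illustrative sketch; the function name and graph sizes are our own.

```python
import numpy as np

def degree_entropy(degrees):
    """Shannon entropy (in bits) of the normalized node-degree distribution."""
    p = np.asarray(degrees, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

N = 20
ring = [2] * N                  # 2-regular graph: uniform degrees
star = [N - 1] + [1] * (N - 1)  # one hub dominates (high centralization)

H_ring = degree_entropy(ring)   # equals log2(N), the maximum
H_star = degree_entropy(star)   # strictly smaller
```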
184
+
185
+ Interestingly, similar properties also hold for the original PRI in scalar random variable setting (see Appendix B).
186
+
187
+ ![019639a2-e6fc-755b-9bc0-14b7be373237_4_151_174_1359_388_0.jpg](images/019639a2-e6fc-755b-9bc0-14b7be373237_4_151_174_1359_388_0.jpg)
188
+
189
+ Figure 3: Illustration of the sparsified graph structures revealed by our Graph-PRI for (a) Zachary's karate club [Zachary, 1977]. As the value of $\beta$ increases, the solution passes through (b) an approximately star graph to the extreme case of (d) $\beta \rightarrow \infty$ , in which we get back the original graph as the solution.
190
+
191
+ ### 3.3 OPTIMIZATION
192
+
193
+ We define a gradient descent algorithm to solve Eq. (11). As discussed in Section 2.1, we have $\rho = B{B}^{T}$ , ${\sigma }_{\mathbf{w}} = B\operatorname{diag}\left( \mathbf{w}\right) {B}^{T}$ , and $\mathbf{w}$ is the edge selection vector we aim to optimize. For simplicity, we assume that the selections of edges from $G$ are conditionally independent of each other [Luo et al., 2020], that is, ${\mathbb{P}}_{\mathbf{w}} = \mathop{\prod }\limits_{{i = 1}}^{M}{\mathbb{P}}_{{w}_{i}}$ . Due to the discrete nature of ${G}_{s}$ , we relax $\mathbf{w} = \left\lbrack {{w}_{1},{w}_{2},\ldots ,{w}_{M}}\right\rbrack$ from a binary vector in $\{ 0,1{\} }^{M}$ to a continuous real-valued vector in ${\left\lbrack 0,1\right\rbrack }^{M}$ . In this sense, the value of ${w}_{i}$ can be interpreted as the probability of selecting the $i$ -th edge.
194
+
195
+ In practice, we use the Gumbel-softmax [Maddison et al., 2017, Jang et al., 2016] to update ${w}_{i}$ . Particularly, suppose we want to approximate a categorical random variable represented as a one-hot vector in ${\mathbb{R}}^{K}$ with category probabilities ${p}_{1},{p}_{2},\cdots ,{p}_{K}$ (here, $K = 2$ ); the Gumbel-softmax gives a $K$ -dimensional sampled vector with the $i$ -th entry:
196
+
197
+ $$
198
+ {\widehat{p}}_{i} = \frac{\exp \left( {\left( {\log {p}_{i} + {g}_{i}}\right) /\tau }\right) }{\mathop{\sum }\limits_{{j = 1}}^{K}\exp \left( {\left( {\log {p}_{j} + {g}_{j}}\right) /\tau }\right) }, \tag{14}
199
+ $$
200
+
201
+ where $\tau$ is the temperature of the Concrete distribution and ${g}_{i}$ is generated from a Gumbel(0, 1) distribution:
202
+
203
+ $$
204
+ {g}_{i} = - \log \left( {-\log {u}_{i}}\right) ,\;{u}_{i} \sim \operatorname{Uniform}\left( {0,1}\right) . \tag{15}
205
+ $$
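Eqs. (14)-(15) can be sketched in a few lines of NumPy. This is a hedged illustration (the function name and parameter values are our own; production code would typically use a framework routine such as PyTorch's built-in Gumbel-softmax).

```python
import numpy as np

def gumbel_softmax(log_p, tau, rng):
    """One relaxed one-hot sample from category log-probabilities (Eqs. 14-15)."""
    u = rng.random(log_p.shape)
    g = -np.log(-np.log(u))          # Gumbel(0, 1) noise, Eq. (15)
    z = (log_p + g) / tau
    z = z - z.max()                  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()               # softmax, Eq. (14)

rng = np.random.default_rng(0)
log_p = np.log(np.array([0.7, 0.3]))             # K = 2: keep vs. drop one edge
soft = gumbel_softmax(log_p, tau=1.0, rng=rng)
hard = gumbel_softmax(log_p, tau=0.05, rng=rng)  # small tau: nearly one-hot
```

Lowering $\tau$ pushes the sample toward a discrete one-hot vector while keeping the whole operation differentiable in `log_p`.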
206
+
207
+ Note that, although we use the Gumbel-softmax to ease the optimization, Graph-PRI itself has an analytical gradient (Theorem 3). The detailed algorithm of Graph-PRI is elaborated in Appendix E. We also provide a PyTorch example therein.
208
+
209
+ Theorem 3. The gradient of Eq. (10) w.r.t. edge selection vector w is:
210
+
211
+ $$
212
+ {\nabla }_{w}{\mathcal{J}}_{\text{Graph-PRI }} = {Ug} \tag{16}
213
+ $$
214
+
215
+ where ${\bar{\sigma }}_{w} = \frac{1}{2}\left( {{\widetilde{\sigma }}_{w} + \widetilde{\rho }}\right) = \frac{1}{2}B\operatorname{diag}\left( {\widetilde{\mathbf{w}} + {\widetilde{\mathbf{1}}}_{M}}\right) {B}^{T}$ , $g = - \operatorname{diag}\left( {{B}^{T}\left\lbrack {\left( {1 - \beta }\right) \ln {\widetilde{\sigma }}_{w} + \beta \ln {\bar{\sigma }}_{w}}\right\rbrack B}\right)$ , and $U = \left\{ {u}_{ij}\right\}$ with ${u}_{ij} = - \frac{{\widetilde{w}}_{j}}{1 - {\widetilde{w}}_{i}}$ for all $i \neq j$ and ${u}_{ii} = 1$ .
216
+
217
+ ### 3.4 APPROXIMATION AND CONNECTIVITY CONSTRAINT
218
+
219
+ The computation of the von Neumann entropy requires the eigenvalue decomposition of a trace-normalized symmetric positive semi-definite matrix, which usually takes $\mathcal{O}\left( {N}^{3}\right)$ time. In this paper, based on the tight bound in Theorem 2, we simply approximate ${S}_{\mathrm{{vN}}}\left( {L}_{G}\right)$ with the Shannon discrete entropy on the normalized node degrees, $H\left( G\right)$ , which immediately reduces the computational complexity to $\mathcal{O}\left( N\right)$ .
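The $\mathcal{O}(N)$ surrogate versus the $\mathcal{O}(N^3)$ exact entropy can be sketched as follows. This is an illustrative comparison under our own naming; the precise relation between the two quantities is the subject of Theorem 2, so here we only check that both lie in the valid range.

```python
import numpy as np

def exact_svn(A):
    """Exact von Neumann entropy: O(N^3) eigendecomposition of L / tr(L)."""
    L = np.diag(A.sum(axis=1)) - A
    lam = np.linalg.eigvalsh(L / np.trace(L))
    lam = lam[lam > 1e-12]
    return float(-(lam * np.log2(lam)).sum())

def approx_svn(A):
    """O(N) surrogate H(G): Shannon entropy of the normalized node degrees."""
    p = A.sum(axis=1)
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
N = 50
A = np.triu((rng.random((N, N)) < 0.2).astype(float), 1)
A = A + A.T                       # a random undirected test graph
```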
220
+
221
+ On the other hand, when the connectivity of the subgraph is preferred, one can simply add another regularization on the degree of the nodes [Kalofolias, 2016]:
222
+
223
+ $$
224
+ \mathop{\min }\limits_{\mathbf{w}}S\left( {\widetilde{\sigma }}_{\mathbf{w}}\right) + {\beta D}\left( {{\widetilde{\sigma }}_{\mathbf{w}}\parallel \widetilde{\rho }}\right) - \alpha {\mathbf{1}}^{T}\log \left( {\operatorname{diag}\left( {\sigma }_{\mathbf{w}}\right) }\right) , \tag{17}
225
+ $$
226
+
227
+ where the hyper-parameter $\alpha > 0$ . This logarithmic barrier forces the degrees to be positive and improves the connectivity of the graph without compromising sparsity. Unless otherwise specified, we select $\alpha = {0.005}$ throughout this work.
228
+
229
+ ## 4 EXPERIMENTAL EVALUATION
230
+
231
+ In this section, we demonstrate the effectiveness and versatility of our Graph-PRI in multiple graph-related machine learning tasks. Our experimental study is guided by the following three questions:
232
+
233
+ Q1 What kind of structural property or information does our method preserve?
234
+
235
+ Q2 How well does our method compare against popular and competitive graph sparsification baselines?
236
+
237
+ Q3 How can Graph-PRI be used in practical machine learning problems, and what are the performance gains?
238
+
239
+ The selected competing methods include 1 baseline and 3 state-of-the-art (SOTA) ones: 1) the Random Sampling (RS) that randomly prunes a percentage of edges; 2) the Local Degree (LD) [Hamann et al., 2016] that only preserves the top $\lfloor \operatorname{degree}{\left( v\right) }^{\alpha }\rfloor \left( {0 \leq \alpha \leq 1}\right)$ neighbors (sorted by degree in descending order) for each node $v$ ; 3) the Local Similarity (LS) [Satuluri et al., 2011] that applies the Jaccard similarity function on the neighborhoods of nodes $u$ and $v$ to quantify the score of edge $\left( {u, v}\right)$ ; and 4) the Effective Resistance (ER) [Spielman and Srivastava, 2011]. We implement RS, LD, and LS by NetworKit ${}^{1}$ , and ER by PyGSP ${}^{2}$ .
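As a concrete illustration of the LS baseline's scoring step (our own minimal sketch, not the NetworKit implementation), the Jaccard score of an edge $(u, v)$ compares the neighborhoods of its endpoints:

```python
def jaccard_edge_scores(adj):
    """Score each edge (u, v) by the Jaccard similarity of its endpoints' neighborhoods."""
    nbrs = {u: set(vs) for u, vs in adj.items()}
    scores = {}
    for u in adj:
        for v in adj[u]:
            if u < v:                      # visit each undirected edge once
                inter = len(nbrs[u] & nbrs[v])
                union = len(nbrs[u] | nbrs[v])
                scores[(u, v)] = inter / union
    return scores

# a triangle 0-1-2 with a pendant node 3 attached to node 2
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
scores = jaccard_edge_scores(adj)
```

LS would then keep, per node, the highest-scoring incident edges; in this toy graph the pendant edge (2, 3) scores lowest because its endpoints share no neighbors.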
240
+
241
+ ### 4.1 GRAPH SPARSIFICATION
242
+
243
+ We use 2 synthetic datasets and 4 real-world network datasets from the KONECT network collection ${}^{3}$ for evaluation. They are, G1: a $k$ -NN $\left( {k = {10}}\right)$ graph with 20 nodes that constitute a global circle structure; G2: a stochastic block model (SBM) with four distinct communities (30 nodes in each community, and intra- and inter-community connection probabilities of ${2}^{-2}$ and ${2}^{-7}$ , respectively); G3: the widely used Zachary karate club network (34 nodes and 78 edges); G4: a network containing contacts between suspected terrorists involved in the train bombing of Madrid on March 11, 2004 (64 nodes and 243 edges); G5: a network of books about US politics published around the time of the 2004 presidential election and sold by the online bookseller Amazon.com (105 nodes and 441 edges); and G6: a collaboration network of Jazz musicians (198 nodes and 2,742 edges).
244
+
245
+ We expect Graph-PRI to preserve two essential properties of the original graph: 1) the spectral similarity (due to the divergence term); and 2) the graph centrality (due to the entropy term). We empirically justify our claims with two metrics. The first is the geodesic distance ${d}_{\overrightarrow{x}}\left( {\rho ,\sigma }\right)$ [Bravo Hermsdorff and Gunderson, 2019]:
246
+
247
+ $$
248
+ {d}_{\overrightarrow{x}}\left( {\rho ,\sigma }\right) = \operatorname{arccosh}\left( {1 + \frac{\parallel \left( {\rho - \sigma }\right) \overrightarrow{x}{\parallel }_{2}^{2}\parallel \overrightarrow{x}{\parallel }_{2}^{2}}{2\left( {{\overrightarrow{x}}^{T}\rho \overrightarrow{x}}\right) \left( {{\overrightarrow{x}}^{T}\sigma \overrightarrow{x}}\right) }}\right) , \tag{18}
249
+ $$
250
+
251
+ in which we select $\overrightarrow{x}$ to be the smallest non-trivial eigenvector of the original Laplacian $\rho$ , as it encodes the global structure of a graph. The second is the graph centralization measure ${C}_{D}$ [Freeman, 1978]:
252
+
253
+ $$
254
+ {C}_{D} = \frac{\mathop{\sum }\limits_{{i = 1}}^{N}\left( {\mathop{\max }\limits_{j}\left( {d}_{j}\right) - {d}_{i}}\right) }{{N}^{2} - {3N} + 2}, \tag{19}
255
+ $$
256
+
257
+ in which $\mathop{\max }\limits_{j}\left( {d}_{j}\right)$ refers to the maximum node degree.
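Both metrics are easy to compute directly from the (trace-normalized) Laplacians. The sketch below is illustrative (the names and the 8-node examples are ours); note that a star attains $C_D = 1$ while a regular graph attains $C_D = 0$.

```python
import numpy as np

def laplacian(A):
    return np.diag(A.sum(axis=1)) - A

def geodesic_distance(rho, sigma, x):
    """Eq. (18): distance between rho and sigma as seen through test vector x."""
    num = np.linalg.norm((rho - sigma) @ x) ** 2 * np.linalg.norm(x) ** 2
    den = 2.0 * (x @ rho @ x) * (x @ sigma @ x)
    return float(np.arccosh(1.0 + num / den))

def centralization(d):
    """Eq. (19): Freeman degree centralization of a degree sequence."""
    d = np.asarray(d, dtype=float)
    N = len(d)
    return float((d.max() - d).sum() / (N * N - 3 * N + 2))

N = 8
star = np.zeros((N, N)); star[0, 1:] = 1.0; star[1:, 0] = 1.0
complete = np.ones((N, N)) - np.eye(N)
rho_s = laplacian(star) / np.trace(laplacian(star))
rho_c = laplacian(complete) / np.trace(laplacian(complete))
x = np.linalg.eigh(laplacian(star))[1][:, 1]   # smallest non-trivial eigenvector
```

For these two graphs the distance works out analytically to $\operatorname{arccosh}(1.25) = \ln 2$, and the distance from a graph to itself is zero.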
258
+
259
+ We demonstrate in Fig. 4 and Fig. 5, respectively, the values of ${d}_{\overrightarrow{x}}\left( {\rho ,\sigma }\right)$ and ${C}_{D}$ with respect to different edge-preserving ratios (i.e., $\left| {E}_{s}\right| /\left| E\right|$ ) for different sparsification methods. As can be seen, our Graph-PRI always achieves at least the ${2}^{\text{nd }}$ best performance across different graphs. Although LD has an advantage in preserving spectral distance and graph centrality, it does not have compelling performance in practical applications, as will be demonstrated in the next subsection.
260
+
261
+ ### 4.2 GRAPH-REGULARIZED MULTI-TASK LEARNING
262
+
263
+ In traditional multi-task learning (MTL), we are given a group of $T$ related tasks. In each task we have access to a training set ${\mathcal{D}}_{t}$ with ${N}_{t}$ data instances $\left\{ {\left( {{\mathbf{x}}_{t}^{i},{y}_{t}^{i}}\right) : i = }\right.$ $\left. {1,\cdots ,{N}_{t}, t = 1,\cdots , T}\right\}$ . In this section, we focus on the regression setup in which ${\mathbf{x}}_{t}^{i} \in {\mathcal{X}}_{t} \subseteq {\mathbb{R}}^{d}$ and ${y}_{t}^{i} \in \mathbb{R}$ . Multi-task learning aims to learn from each training set ${\mathcal{D}}_{t}$ a prediction model ${f}_{t}\left( {{\mathbf{w}}_{t}, \cdot }\right) : {\mathcal{X}}_{t} \rightarrow \mathbb{R}$ with parameter ${\mathbf{w}}_{t}$ such that the task relatedness is taken into consideration and the overall generalization error is small.
264
+
265
+ In what follows, we assume a linear model in each task, i.e., ${f}_{t}\left( {{\mathbf{w}}_{t},\mathbf{x}}\right) = {\mathbf{w}}_{t}^{T}\mathbf{x}$ . The multi-task regression problem with a regularization $\Omega$ on the model parameters $W =$ $\left\lbrack {{\mathbf{w}}_{1},{\mathbf{w}}_{2},\cdots ,{\mathbf{w}}_{T}}\right\rbrack$ can thus be defined as:
266
+
267
+ $$
268
+ \mathop{\min }\limits_{W}\mathop{\sum }\limits_{{t = 1}}^{T}{\begin{Vmatrix}{\mathbf{w}}_{t}^{T}{\mathbf{x}}_{t} - {y}_{t}\end{Vmatrix}}_{2}^{2} + {\gamma \Omega }\left( W\right) . \tag{20}
269
+ $$
270
+
271
+ A graph is a natural way to establish the relationships over multiple tasks: each node refers to a single task, and if two tasks are strongly correlated with each other, an edge connects them. In this sense, the objective for multi-task regression learning regularized with a graph adjacency matrix $A$ can be formulated as [He et al., 2019]:
272
+
273
+ $$
274
+ \mathop{\min }\limits_{W}\mathop{\sum }\limits_{{t = 1}}^{T}{\begin{Vmatrix}{\mathbf{w}}_{t}^{T}{\mathbf{x}}_{t} - {y}_{t}\end{Vmatrix}}_{2}^{2} + \gamma \mathop{\sum }\limits_{{i = 1}}^{T}\mathop{\sum }\limits_{{j \in {\mathcal{N}}_{i}}}{A}_{ij}{\begin{Vmatrix}{\mathbf{w}}_{i} - {\mathbf{w}}_{j}\end{Vmatrix}}_{2}^{2}, \tag{21}
275
+ $$
276
+
277
+ where ${\mathcal{N}}_{i}$ is the set of neighbors of the $i$ -th task.
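The graph regularizer in Eq. (21) equals a Laplacian quadratic form, which a short sketch can verify numerically (illustrative names; $W$ stacks the task weight vectors as columns, and the double sum over ordered pairs counts each edge twice, giving $2\,\mathrm{tr}(W L W^T)$ with $L = D - A$):

```python
import numpy as np

def graph_regularizer(W, A):
    """sum_i sum_{j in N_i} A_ij * ||w_i - w_j||^2, with w_t = W[:, t]."""
    T = A.shape[0]
    total = 0.0
    for i in range(T):
        for j in range(T):
            if A[i, j] > 0:
                diff = W[:, i] - W[:, j]
                total += A[i, j] * float(diff @ diff)
    return total

rng = np.random.default_rng(0)
T, d = 4, 3
W = rng.standard_normal((d, T))          # columns: per-task weight vectors
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A           # Laplacian of the task graph
```

Writing the penalty as a trace makes the effect of sparsifying $A$ (and hence $L$) on the regularizer explicit.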
278
+
279
+ Usually, a dense graph $G$ is estimated first to fully characterize task relatedness [Chen et al., 2010, He et al., 2019]. Here, we are interested in: 1) sparsifying $G$ to reduce redundant or less-important connections (edges) between tasks; and 2) validating whether the sparsified graph can further reduce the generalization error.
280
+
281
+ To this end, we exemplify our motivation with the recently proposed Convex Clustering Multi-Task regression Learning (CCMTL) [He et al., 2019] that optimizes Eq. (21) with the Combinatorial Multigrid (CMG) solver [Koutis et al., 2011], and test its performance on two benchmark MTL datasets ${}^{4}$ : 1) a synthetic dataset [Gonçalves et al., 2016] with 20 tasks in which tasks 1-10 are mutually related and tasks 11-20 are mutually related; and 2) a real-world Parkinson's disease dataset ${}^{5}$ which contains biomedical voice measurements from 42 patients. We view each patient as a single task and aim to predict the motor Unified Parkinson's Disease Rating Scale (UPDRS) score based on 19-dimensional features such as age, gender, and jitter and shimmer voice measurements.
282
+
283
+ ---
284
+
285
+ ${}^{1}$ https://networkit.github.io/
286
+
287
+ ${}^{2}$ https://github.com/epfl-lts2/pygsp
288
+
289
+ ${}^{3}$ http://konect.cc/networks/
290
+
291
+ ${}^{4}$ See Appendix C for details of the datasets in Sections 4.2 and 4.3.
292
+
293
+ ${}^{5}$ https://archive.ics.uci.edu/ml/datasets/parkinsons+telemonitoring
294
+
295
+ ---
296
+
297
+ ![019639a2-e6fc-755b-9bc0-14b7be373237_6_147_176_1433_252_0.jpg](images/019639a2-e6fc-755b-9bc0-14b7be373237_6_147_176_1433_252_0.jpg)
298
+
299
+ Figure 4: Spectral distance ${d}_{\overrightarrow{x}}\left( {\rho ,\sigma }\right)$ (the smaller the better).
300
+
301
+ ![019639a2-e6fc-755b-9bc0-14b7be373237_6_148_506_1432_251_0.jpg](images/019639a2-e6fc-755b-9bc0-14b7be373237_6_148_506_1432_251_0.jpg)
302
+
303
+ Figure 5: Graph centrality ${C}_{D}$ (the larger the higher degree of centrality).
304
+
305
+ We evaluate the test performance with the root mean squared error (RMSE) and show the RMSE values with respect to different edge pruning ratios (i.e., $1 - \left| {E}_{s}\right| /\left| E\right|$ ) of different methods in Fig. 6. On the synthetic data, only Graph-PRI and LS are able to further reduce the test error. For Graph-PRI, this phenomenon occurs at the beginning of pruning, which indicates that our method begins to remove less-informative or spurious connections at an early stage. On the Parkinson's data, most methods obtain performance similar to "no edge pruning" (with Graph-PRI performing slightly better, as shown in the zoomed plot), which suggests the existence of redundant task relationships. One should note that the performance of Graph-PRI becomes worse if we remove a large number of edges. One possible reason is that when $\left| {E}_{s}\right|$ is small, our subgraph tends to have high graph centrality or a star shape, such that one task dominates. Note, however, that in MTL, keeping a very sparse relationship is usually not the goal, because it may lead to weak collaboration between tasks, which contradicts the motivation of MTL.
306
+
307
+ ![019639a2-e6fc-755b-9bc0-14b7be373237_6_162_1606_678_336_0.jpg](images/019639a2-e6fc-755b-9bc0-14b7be373237_6_162_1606_678_336_0.jpg)
308
+
309
+ Figure 6: The RMSE with respect to the degree of sparsity (defined as $1 - \left| {E}_{s}\right| /\left| E\right|$ ) of the resulting subgraph for all competing methods. Black dashed line indicates performance without any edge pruning. Our method is able to drop out redundant or less-important edges to further reduce generalization error.
310
+
311
+ ### 4.3 fMRI-DERIVED BRAIN NETWORK CLASSIFICATION AND INTERPRETABILITY
312
+
313
+ Brain networks are complex graphs with anatomic brain regions of interest (ROIs) represented as nodes and functional connectivity (FC) between brain ROIs as links. For resting-state functional magnetic resonance imaging (rs-fMRI), the Pearson's correlation coefficient between the blood-oxygen-level-dependent (BOLD) signals associated with each pair of ROIs is the most popular way to construct the FC network [Farahani et al., 2019].
314
+
315
+ In the problem of brain network classification, the identification of predictive subnetworks or edges is perhaps one of the most important tasks, as it offers a mechanistic understanding of neuroscience phenomena [Wang et al., 2021]. Traditionally, this is achieved by treating all the connections (i.e., the Pearson's correlation coefficients) of FC as a long feature vector, and applying feature selection techniques, such as LASSO [Tibshirani, 1996] and two-sample t-test, to determine if one edge connection is significantly different in different groups (e.g., patients with Alzheimer's disease with respect to normal control members).
316
+
317
+ In this section, we develop a new graph neural network (GNN) framework for interpretable brain network classification that can infer brain network categories and identify the most informative edge connections in a joint end-to-end learning framework. We follow the motivation of [Cui et al., 2021] and aim to learn a globally shared edge mask $M$ to highlight decision-specific prominent brain network connections. The final explanation for an input graph ${G}_{i}$ is generated by the element-wise product of ${A}_{i}$ and $\sigma \left( M\right)$ , i.e., ${A}_{i} \odot \sigma \left( M\right)$ , in which ${A}_{i}$ is the adjacency matrix of ${G}_{i}$ and $\sigma$ refers to the sigmoid activation function that maps $M$ to ${\left\lbrack 0,1\right\rbrack }^{N \times N}$ . Obviously, $\sigma \left( M\right)$ in our GNN plays a role similar to that of the edge selection vector $\mathbf{w}$ in Graph-PRI.
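The masking step $A_i \odot \sigma(M)$ can be sketched without any GNN machinery (a NumPy illustration with our own names; the actual model learns $M$ end-to-end through the loss in Eq. (22)):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def masked_adjacency(A, M):
    """Explanation subgraph A * sigmoid(M); M is symmetrized first."""
    Ms = 0.5 * (M + M.T)                 # keep the mask undirected
    return A * sigmoid(Ms)

rng = np.random.default_rng(0)
N = 5
A = np.triu((rng.random((N, N)) < 0.5).astype(float), 1)
A = A + A.T                              # undirected adjacency of one subject
M = rng.standard_normal((N, N))          # shared edge-mask logits
A_exp = masked_adjacency(A, M)
```

The result keeps every entry between 0 and the original edge weight, never introduces new edges, and stays symmetric.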
318
+
319
+ Problem definition. Given a weighted brain network $G =$ (V, E, W), where $V = {\left\{ {v}_{i}\right\} }_{i = 1}^{N}$ is the node set of size $N$ defined by the ROIs, $E$ is the edge set, and $W \in {\mathbb{R}}^{N \times N}$ is the weighted adjacency matrix describing FC strengths between ROIs, the model outputs a prediction label $y$ . In brain network analysis, $N$ remains the same across subjects.
320
+
321
+ Experimental data. We evaluate our method on two benchmark real-world brain network datasets. The first one is the eyes open and eyes closed (EOEC) dataset [Zhou et al., 2020], which includes 96 brain networks with the goal to predict either eyes open or eyes closed states. The second one is from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database ${}^{6}$ . We use the brain networks generated by [Kuang et al., 2019], with the task of distinguishing mild cognitive impairment (MCI) ${}^{7}$ group (38 patients) from normal control (NC) subjects (37 in total). Details on brain network construction are elaborated in Appendix C.
322
+
323
+ Methodology and objective. Following [Cui et al., 2021], we provide interpretability by learning an edge mask $M \in {\mathbb{R}}^{N \times N}$ that is shared across all subjects to highlight the disease-specific prominent ROI connections. Motivated by the ability of PRI to prune redundant or less informative edges as demonstrated in previous sections, we train $M$ such that the resulting subgraph ${G}^{\prime } = G \odot \sigma \left( M\right)$ and the original graph $G$ meet the PRI constraint, i.e., Eq. (8). Therefore, the final objective of our interpretable GNN can be formulated as:
324
+
325
+ $$
326
+ {\mathcal{L}}_{\mathrm{{CE}}} + \lambda {\mathbb{E}}_{G \sim p\left( G\right) }\left\{ {S\left( {G}^{\prime }\right) + {\beta D}\left( {{G}^{\prime }\parallel G}\right) }\right\} , \tag{22}
327
+ $$
328
+
329
+ in which ${\mathcal{L}}_{\mathrm{{CE}}}$ refers to the supervised cross-entropy loss for label prediction, and $\lambda$ is the hyperparameter that balances the trade-off between ${\mathcal{L}}_{\mathrm{{CE}}}$ and the PRI constraint.
330
+
331
+ Empirical results. We summarize the classification accuracy (%) of different methods over 10 independent runs in Table 1, in which Graph-PRI* refers to our objective implemented by approximating the von Neumann entropy with the Shannon discrete entropy on the normalized node degrees (see Section 3.4). As can be seen, our method achieves compelling or higher accuracy on both datasets.
332
+
333
+ To evaluate the interpretability of our method, we visualize the edges frequently selected for MCI patients and the NC group in Fig. 7. We observe that the interactions within the sensorimotor cortex (colored blue) for MCI patients are stronger than those of the NC group. This result is consistent with the findings in [Ferreri et al., 2016, Niskanen et al., 2011], which observed that motor cortex excitability is enhanced in AD and MCI from the early stages. We also observe that the interactions within the frontoparietal network (colored yellow) of patients are significantly fewer than those of the NC group, which is in line with previous studies [Neufang et al., 2011, Zanchi et al., 2017] stating that decreased activation in the FPN is associated with subtle cognitive deficits.
334
+
335
+ Table 1: Classification accuracy (%) and standard deviation with different methods over 10 independent runs. The best and second-best performances are in bold and underlined, respectively.
336
+
337
+ <table><tr><td>$\mathbf{{Method}}$</td><td>EOEC</td><td>ADNI</td></tr><tr><td>SVM + t-test</td><td>${71.79} \pm {7.80}$</td><td>${60.61} \pm {10.52}$</td></tr><tr><td>SVM + LASSO</td><td>${72.08} \pm {7.29}$</td><td>${54.67} \pm {12.88}$</td></tr><tr><td>GCN [Kipf and Welling, 2017]</td><td>${68.42} \pm {8.59}$</td><td>${66.67} \pm {2.48}$</td></tr><tr><td>GAT [Veličković et al., 2018]</td><td>${73.68} \pm {8.60}$</td><td>${66.67} \pm {9.43}$</td></tr><tr><td>Graph-PRI</td><td>${80.70} \pm {9.60}$</td><td>${66.67} \pm {6.67}$</td></tr><tr><td>Graph-PRI*</td><td>$\underline{{78.95} \pm {4.30}}$</td><td>$\underline{{64.44} \pm {3.14}}$</td></tr></table>
338
+
339
+ ![019639a2-e6fc-755b-9bc0-14b7be373237_7_906_641_676_515_0.jpg](images/019639a2-e6fc-755b-9bc0-14b7be373237_7_906_641_676_515_0.jpg)
340
+
341
+ Figure 7: The contributing functional connectivity links for (a) MCI patients; and (b) the normal control group. We visualize edges with a probability of more than ${50}\%$ of being selected by our learned edge mask. The neural systems are colored as follows: sensorimotor network (SMN), occipital network (ON), fronto-parietal network (FPN), default mode network (DMN), cingulo-opercular network (CON), and cerebellum network (CN). See Appendix F for a zoomed plot.
342
+
343
+ ## 5 CONCLUSIONS
344
+
345
+ We present a first study on extending the Principle of Relevant Information (PRI) - a less well-known but promising unsupervised information-theoretic principle - to network analysis and graph neural networks (GNNs). Our Graph-PRI preserves spectral similarity well, while also encouraging the resulting subgraph to have higher graph centrality. Moreover, our Graph-PRI is easy to optimize. It can be flexibly integrated with either multi-task learning or GNNs to improve not only the quantitative accuracy but also interpretability.
346
+
347
+ In the future, we will explore more unknown properties of Graph-PRI, including a full understanding of the physical meaning of the von Neumann entropy on graphs. We will also investigate more downstream applications of Graph-PRI in GNNs, such as node representation learning.
348
+
349
+ ---
350
+
351
+ 6 http://adni.loni.usc.edu/
352
+
353
+ ${}^{7}\mathrm{{MCI}}$ is a transitional stage between $\mathrm{{AD}}$ and $\mathrm{{NC}}$ .
354
+
355
+ ---
356
+
357
+ ## References
358
+
359
+ Raman Arora and Jalaj Upadhyay. On differentially private graph sparsification and applications. NeurIPS, 32:13399- 13410, 2019.
360
+
361
+ Albert-László Barabási and Réka Albert. Emergence of scaling in random networks. science, 286(5439):509- 512, 1999.
362
+
363
+ András A Benczúr and David R Karger. Randomized approximation schemes for cuts and flows in capacitated graphs. SIAM Journal on Computing, 44(2):290-319, 2015.
364
+
365
+ Gecia Bravo Hermsdorff and Lee Gunderson. A unifying framework for spectrum-preserving graph sparsification and coarsening. NeurIPS, 32:7736-7747, 2019.
366
+
367
+ Pin-Yu Chen, Lingfei Wu, Sijia Liu, and Indika Rajapakse. Fast incremental von neumann graph entropy computation: Theory, algorithm, and applications. In ICML, 2019.
368
+
369
+ Xi Chen, Seyoung Kim, Qihang Lin, Jaime G Carbonell, and Eric P Xing. Graph-structured multi-task regression and an efficient optimization method for general fused lasso. arXiv preprint arXiv:1005.3579, 2010.
370
+
371
+ Yi-Ling Chen, Ming-Syan Chen, and S Yu Philip. Ensemble of diverse sparsifications for link prediction in large-scale networks. In ICDM, pages 51-60. IEEE, 2015.
372
+
373
+ Hejie Cui, Wei Dai, Yanqiao Zhu, Xiaoxiao Li, Lifang He, and Carl Yang. Brainnnexplainer: An interpretable graph neural network framework for brain network based disease analysis. arXiv preprint arXiv:2107.05097, 2021.
374
+
375
+ Manlio De Domenico, Vincenzo Nicosia, Alexandre Arenas, and Vito Latora. Structural reducibility of multilayer networks. Nature communications, 6:6864, 2015.
376
+
377
+ Farzad V Farahani, Waldemar Karwowski, and Nichole R Lighthall. Application of graph theory for identifying connectivity patterns in human brain networks: a systematic review. Frontiers in Neuroscience, 13:585, 2019.
378
+
379
+ Florinda Ferreri et al. Sensorimotor cortex excitability and connectivity in alzheimer’s disease: A tms-eeg co-registration study. Human brain mapping, 37(6):2083- 2096, 2016.
380
+
381
+ Linton C Freeman. Centrality in social networks conceptual clarification. Social networks, 1(3):215-239, 1978.
382
+
383
+ Arpita Ghosh and Stephen Boyd. Growing well-connected graphs. In CDC, pages 6605-6611. IEEE, 2006.
384
+
385
+ André R Gonçalves, Fernando J Von Zuben, and Arindam Banerjee. Multi-task sparse structure learning with gaussian copula models. The Journal of Machine Learning Research, 17(1):1205-1234, 2016.
386
+
387
+ Ian Goodfellow et al. Generative adversarial nets. NeurIPS, 27, 2014.
388
+
389
+ Michael Hamann, Gerd Lindner, Henning Meyerhenke, Christian L Staudt, and Dorothea Wagner. Structure-preserving sparsification methods for social networks. Social Network Analysis and Mining, 6(1):22, 2016.
390
+
391
+ Lin Han, Francisco Escolano, Edwin R Hancock, and Richard C Wilson. Graph characterizations from von neumann entropy. Pattern Recognition Letters, 33(15): 1958-1967, 2012.
392
+
393
+ Xiao He, Francesco Alesiani, and Ammar Shaker. Efficient and scalable multi-task regression on massive number of tasks. In AAAI, volume 33, pages 3763-3770, 2019.
394
+
395
+ J Hoyos-Osorio et al. Relevant information undersampling to support imbalanced data classification. Neurocomputing, 436:136-146, 2021.
396
+
397
+ Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax. In ICLR, 2016.
398
+
399
+ Robert Jenssen, Jose C Principe, Deniz Erdogmus, and Tor-bjørn Eltoft. The cauchy-schwarz divergence and parzen windowing: Connections to graph theory and mercer kernels. Journal of the Franklin Institute, 343(6):614-629, 2006.
400
+
401
+ Vassilis Kalofolias. How to learn a graph from smooth signals. In AISTATS, pages 920-929. PMLR, 2016.
402
+
403
+ Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In ICLR, 2017.
404
+
405
+ Ioannis Koutis, Gary L Miller, and David Tolliver. Combinatorial preconditioners and multilevel solvers for problems in computer vision and image processing. CVIU, 115(12): 1638-1646, 2011.
406
+
407
+ Liqun Kuang et al. A concise and persistent feature to study brain resting-state network dynamics: Findings from the alzheimer's disease neuroimaging initiative. Human brain mapping, 40(4):1062-1081, 2019.
408
+
409
+ Pedro W Lamberti, Ana P Majtey, A Borras, Montserrat Casas, and A Plastino. Metric character of the quantum jensen-shannon divergence. Physical Review A, 77(5): 052311, 2008.
410
+
411
+ Yanjun Li, Shujian Yu, Jose C Principe, Xiaolin Li, and Dapeng Wu. Pri-vae: principle-of-relevant-information variational autoencoders. arXiv preprint arXiv:2007.06503, 2020.
412
+
413
+ Xuecheng Liu, Luoyi Fu, and Xinbing Wang. Bridging the gap between von neumann graph entropy and structural information: Theory and applications. In Proceedings of the Web Conference 2021, pages 3699-3710, 2021.
414
+
415
+ Dongsheng Luo, Wei Cheng, Dongkuan Xu, Wenchao Yu, Bo Zong, Haifeng Chen, and Xiang Zhang. Parameterized explainer for graph neural network. In NeurIPS, volume 33, pages 19620-19631, 2020.
418
+
419
+ C Maddison, A Mnih, and Y Teh. The concrete distribution: A continuous relaxation of discrete random variables. In ICLR, 2017.
420
+
421
+ Bojan Mohar. Some applications of laplace eigenvalues of graphs. In Graph symmetry, pages 225-275. Springer, 1997.
422
+
423
+ Susanne Neufang et al. Disconnection of frontal and parietal areas contributes to impaired attention in very early alzheimer's disease. Journal of Alzheimer's Disease, 25 (2):309-321, 2011.
424
+
425
Michael A. Nielsen and Isaac Chuang. Quantum computation and quantum information, 2002.

Eini Niskanen et al. New insights into Alzheimer's disease progression: a combined TMS and structural MRI study. PLoS One, 6(10):e26113, 2011.

Peer Nowack, Jakob Runge, Veronika Eyring, and Joanna D. Haigh. Causal networks for climate model evaluation and constrained projections. Nature Communications, 11(1):1-11, 2020.

Filippo Passerini and Simone Severini. The von Neumann entropy of networks. arXiv:0812.2597, 2008.

Jose C. Principe. Information theoretic learning: Renyi's entropy and kernel perspectives. Springer Science & Business Media, 2010.

Alfréd Rényi. On measures of entropy and information. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, volume 1, pages 547-561, 1961.

Mikail Rubinov and Olaf Sporns. Complex network measures of brain connectivity: uses and interpretations. NeuroImage, 52(3):1059-1069, 2010.

Veeru Sadhanala, Yu-Xiang Wang, and Ryan Tibshirani. Graph sparsification approaches for Laplacian smoothing. In AISTATS, pages 1250-1259, 2016.

Venu Satuluri, Srinivasan Parthasarathy, and Yiye Ruan. Local graph sparsification for scalable clustering. In ACM SIGMOD, pages 721-732, 2011.

David E. Simmons, Justin P. Coon, and Animesh Datta. The von Neumann Theil index: characterizing graph centralization using the von Neumann index. Journal of Complex Networks, 6(6):859-876, 2018.

Daniel A. Spielman and Nikhil Srivastava. Graph sparsification by effective resistances. SIAM Journal on Computing, 40(6):1913-1926, 2011.

Daniel A. Spielman and Shang-Hua Teng. Spectral sparsification of graphs. SIAM Journal on Computing, 40(4):981-1025, 2011.

Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B (Methodological), 58(1):267-288, 1996.

Naftali Tishby, Fernando C. Pereira, and William Bialek. The information bottleneck method. In Proc. of the 37th Annual Allerton Conference on Communication, Control and Computing, pages 368-377, 1999.

Petar Veličković et al. Graph attention networks. In ICLR, 2018.

Dániel Virosztek. The metric property of the quantum Jensen-Shannon divergence. Advances in Mathematics, 380:107595, 2021.

Lu Wang, Feng Vankee Lin, Martin Cole, and Zhengwu Zhang. Learning clique subgraphs in structural brain network classification with application to crystallized cognition. NeuroImage, 225:117493, 2021.

Ryan Wickman, Xiaofei Zhang, and Weizi Li. SparRL: Graph sparsification via deep reinforcement learning. arXiv preprint arXiv:2112.01565, 2021.

Hang-Yang Wu and Yi-Ling Chen. Graph sparsification with generative adversarial network. In ICDM, pages 1328-1333. IEEE, 2020.

Tailin Wu, Hongyu Ren, Pan Li, and Jure Leskovec. Graph information bottleneck. In NeurIPS, 2020.

Junchi Yu, Tingyang Xu, Yu Rong, Yatao Bian, Junzhou Huang, and Ran He. Graph information bottleneck for subgraph recognition. In ICLR, 2020.

Wayne W. Zachary. An information flow model for conflict and fission in small groups. Journal of Anthropological Research, 33(4):452-473, 1977.

Davide Zanchi et al. Decreased fronto-parietal and increased default mode network activation is associated with subtle cognitive deficits in elderly controls. Neurosignals, 25(1):127-138, 2017.

Zhen Zhou et al. A toolbox for brain network construction and classification (BrainNetClass). Human Brain Mapping, 41(10):2808-2826, 2020.
473
## A PROOFS

### A.1 PROOF OF COROLLARY 1

Before proving Corollary 1, we first present Lemma 1.
Lemma 1. Let $\lambda = \left\{ \lambda_i \right\}$ be the eigenvalues of the (trace-normalized) graph Laplacian $\widetilde{L}$. Suppose the $i$-th eigenvalue $\lambda_i$ undergoes a minor, negligible increase and the remaining eigenvalues decrease proportionally to their current values, so that $\sum_i \lambda_i = 1$ still holds. Then the total derivative of the von Neumann entropy $S\left( \boldsymbol{\lambda} \right) = -\sum_{i=1}^{N} \lambda_i \log_2 \lambda_i$ with respect to $\lambda_i$ is given by:

$$
\frac{dS}{d\lambda_i} = \frac{\partial S}{\partial \lambda_i} + \sum_{j \neq i} \frac{\partial S}{\partial \lambda_j}\frac{d\lambda_j}{d\lambda_i} = -\frac{S\left( \boldsymbol{\lambda} \right) + \log_2 \lambda_i}{1 - \lambda_i}. \tag{23}
$$
Proof. For simplicity, suppose we increase the element $\lambda_i$ to the value $\lambda_i + d\lambda_i$ for some infinitesimally small $d\lambda_i$. As we increase this element, we decrease all the other elements proportionally to their values so that the constraint holds. Thus, for some infinitesimal value $\delta$ we update $\boldsymbol{\lambda}$ as $\lambda_j \mapsto \lambda_j\left( 1 - \delta \right)$ for all $j \neq i$. The constraint $\sum_i \lambda_i = 1$ then requires:

$$
\begin{aligned}
\sum_{j \neq i} \lambda_j \left( 1 - \delta \right) + \lambda_i + d\lambda_i &= 1 \\
\left( 1 - \lambda_i \right)\left( 1 - \delta \right) + \lambda_i + d\lambda_i &= 1 \\
1 - \lambda_i - \left( 1 - \lambda_i \right)\delta + \lambda_i + d\lambda_i &= 1 \\
-\left( 1 - \lambda_i \right)\delta + d\lambda_i &= 0,
\end{aligned}
$$

thus we have:

$$
\delta = d\lambda_i / \left( 1 - \lambda_i \right). \tag{24}
$$
Then,

$$
\frac{d\lambda_j}{d\lambda_i} = \frac{-\lambda_j \delta}{d\lambda_i} = -\frac{\lambda_j}{1 - \lambda_i}, \quad j \neq i. \tag{25}
$$

Therefore, the total derivative of the von Neumann entropy $S\left( \boldsymbol{\lambda} \right) = -\sum_{i=1}^{N} \lambda_i \log_2 \lambda_i$ with respect to $\lambda_i$ is given by:
$$
\begin{aligned}
\frac{dS}{d\lambda_i} &= \frac{\partial S}{\partial \lambda_i} + \sum_{j \neq i} \frac{\partial S}{\partial \lambda_j}\frac{d\lambda_j}{d\lambda_i} \\
&= -\left( \frac{1}{\ln 2} + \log_2 \lambda_i \right) + \frac{1}{1 - \lambda_i}\sum_{j \neq i} \lambda_j \left( \frac{1}{\ln 2} + \log_2 \lambda_j \right) \\
&= -\frac{\left( 1 - \lambda_i \right) \log_2 \lambda_i}{1 - \lambda_i} + \frac{1}{1 - \lambda_i}\sum_{j \neq i} \lambda_j \log_2 \lambda_j \\
&= -\frac{1}{1 - \lambda_i}\left\{ \sum_{j=1}^{N} -\lambda_j \log_2 \lambda_j + \log_2 \lambda_i \right\} \\
&= -\frac{1}{1 - \lambda_i}\left\{ S\left( \boldsymbol{\lambda} \right) + \log_2 \lambda_i \right\}.
\end{aligned} \tag{26}
$$
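The closed form in Eq. (26) can be verified numerically by applying the proportional update from the proof and comparing against a finite difference of the entropy (our illustration, with an arbitrary normalized eigenvalue vector):

```python
import numpy as np

def S(lam):
    # von Neumann entropy in bits
    return -np.sum(lam * np.log2(lam))

rng = np.random.default_rng(0)
lam = rng.random(6)
lam /= lam.sum()                       # enforce sum(lam) = 1

i, d = 2, 1e-7
# increase lam_i by d, shrink the rest proportionally with delta from Eq. (24)
lam2 = lam * (1 - d / (1 - lam[i]))
lam2[i] = lam[i] + d
assert abs(lam2.sum() - 1) < 1e-12     # constraint still holds

fd = (S(lam2) - S(lam)) / d                          # finite difference
closed = -(S(lam) + np.log2(lam[i])) / (1 - lam[i])  # Eq. (26)
assert abs(fd - closed) < 1e-4
```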
We now present the proof of Corollary 1.

Proof. Suppose the addition of an edge causes the $i$-th eigenvalue $\lambda_i$ to undergo a minor change $\Delta\lambda_i$. By a first-order approximation, we have:
$$
S_{\mathrm{vN}}\left( \frac{L_G + L_{G_S^{\prime}}}{2}\right) - S_{\mathrm{vN}}\left( \frac{L_G + L_{G_S}}{2}\right) \approx \frac{1}{2}\frac{dS_{\mathrm{vN}}\left( L_{\bar{G}}\right)}{d\lambda_i}\Delta\lambda_i, \tag{27}
$$

and

$$
S_{\mathrm{vN}}\left( L_{G_S^{\prime}}\right) - S_{\mathrm{vN}}\left( L_{G_S}\right) \approx \frac{dS_{\mathrm{vN}}\left( L_{G_S}\right)}{d\lambda_i}\Delta\lambda_i, \tag{28}
$$
in which $\frac{dS}{d\lambda_i}$ is the total derivative of the von Neumann entropy $S\left( \boldsymbol{\lambda} \right) = -\sum_{i=1}^{N} \lambda_i \log_2 \lambda_i$ with respect to $\lambda_i$.
By Assumption 1, we have $S_{\mathrm{vN}}\left( L_{\bar{G}}\right) \geq S_{\mathrm{vN}}\left( L_{G_S}\right)$. Thus, according to Lemma 1, we obtain:

$$
\frac{dS_{\mathrm{vN}}\left( L_{\bar{G}}\right)}{d\lambda_i} \leq \frac{dS_{\mathrm{vN}}\left( L_{G_S}\right)}{d\lambda_i}. \tag{29}
$$

Combining Eqs. (27) to (29), we get:

$$
S_{\mathrm{vN}}\left( \frac{L_G + L_{G_S^{\prime}}}{2}\right) - S_{\mathrm{vN}}\left( \frac{L_G + L_{G_S}}{2}\right) \leq \frac{1}{2}\left( S_{\mathrm{vN}}\left( L_{G_S^{\prime}}\right) - S_{\mathrm{vN}}\left( L_{G_S}\right) \right). \tag{30}
$$

Thus,

$$
\begin{aligned}
& S_{\mathrm{vN}}\left( \frac{L_G + L_{G_S^{\prime}}}{2}\right) - \frac{1}{2}\left( S_{\mathrm{vN}}\left( L_{G_S^{\prime}}\right) + S_{\mathrm{vN}}\left( L_G\right) \right) \\
\leq \; & S_{\mathrm{vN}}\left( \frac{L_G + L_{G_S}}{2}\right) - \frac{1}{2}\left( S_{\mathrm{vN}}\left( L_{G_S}\right) + S_{\mathrm{vN}}\left( L_G\right) \right),
\end{aligned} \tag{31}
$$

which completes the proof.
### A.2 PROOF OF THEOREM 3

Proof of Theorem 3. Theorem 3 follows by substituting the definition in Eq. (10) and using the result of Theorem 4. The total derivative of the cost function with respect to the normalized selection vector $\widetilde{\mathbf{w}}$ is given by $\frac{dJ}{dw_i} = \frac{\partial J}{\partial w_i} + \sum_{j \neq i} \frac{\partial J}{\partial w_j}\frac{dw_j}{dw_i}$. With the normalized selection vector, we have $\sum_k w_k = 1$ both before and after the change. If we consider $w_i \rightarrow w_i + \delta$ and $w_j \rightarrow w_j\left( 1 - \gamma \right)$, $j \neq i$, then $\gamma = \frac{\delta}{1 - w_i}$ and $\frac{dw_j}{dw_i} = -\frac{\gamma w_j}{\delta} = -\frac{w_j}{1 - w_i}$.
Theorem 4. The gradient of the von Neumann entropy with respect to the edge selection vector $\mathbf{w}$ is

$$
\nabla_{\mathbf{w}} S\left( \sigma_{\mathbf{w}}\right) = -\operatorname{diag}\left( B^T \log\left( B \operatorname{diag}\left( \mathbf{w}\right) B^T \right) B \right) - \mathbf{1}_M, \tag{32}
$$

where $S\left( \sigma \right) = -\operatorname{tr}\left( \sigma \log \sigma \right)$ and $\sigma_{\mathbf{w}} = B \operatorname{diag}\left( \mathbf{w}\right) B^T$.

Proof of Theorem 4. Theorem 4 follows from $\nabla_{\sigma} S\left( \sigma \right) = -\log \sigma - I$ and the gradient of the trace of a function of a matrix. Here we use the un-normalized Laplacian matrix.
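The chain rule behind Theorem 4, $\partial S / \partial w_i = -\operatorname{tr}\left( \left( \log\sigma + I \right) b_i b_i^T \right)$ with $b_i$ the $i$-th column of $B$, can be sanity-checked numerically. The sketch below is our illustration, not the paper's code: it uses the natural logarithm and adds a small ridge $\epsilon I$ so that $\log\sigma$ stays finite despite the Laplacian's zero eigenvalue.

```python
import numpy as np

# Oriented incidence matrix of a 4-node path graph (3 edges).
B = np.array([[ 1.,  0.,  0.],
              [-1.,  1.,  0.],
              [ 0., -1.,  1.],
              [ 0.,  0., -1.]])
w = np.array([0.7, 0.5, 0.9])       # edge selection weights
eps = 1e-3                          # ridge: the Laplacian itself is singular

def entropy(w):
    sigma = B @ np.diag(w) @ B.T + eps * np.eye(4)
    lam = np.linalg.eigvalsh(sigma)
    return -np.sum(lam * np.log(lam))   # S(sigma) = -tr(sigma log sigma)

def logm_spd(M):
    # matrix logarithm of a symmetric positive-definite matrix
    lam, U = np.linalg.eigh(M)
    return U @ np.diag(np.log(lam)) @ U.T

sigma = B @ np.diag(w) @ B.T + eps * np.eye(4)
# chain-rule gradient: -diag(B^T (log sigma + I) B)
grad = -np.diag(B.T @ (logm_spd(sigma) + np.eye(4)) @ B)

# central finite differences in each coordinate of w
h = 1e-6
fd = np.array([(entropy(w + h * e) - entropy(w - h * e)) / (2 * h)
               for e in np.eye(3)])
assert np.allclose(grad, fd, atol=1e-4)
```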
## B PRINCIPLE OF RELEVANT INFORMATION (PRI) FOR SCALAR RANDOM VARIABLES

In information theory, a natural extension of the well-known Shannon entropy is Rényi's $\alpha$-entropy [Rényi, 1961]. For a random variable $\mathbf{X}$ with PDF $f\left( x\right)$ on a finite set $\mathcal{X}$, the $\alpha$-entropy $H_{\alpha}\left( \mathbf{X}\right)$ is defined as:

$$
H_{\alpha}\left( f\right) = \frac{1}{1 - \alpha}\log \int_{\mathcal{X}} f^{\alpha}\left( x\right) dx. \tag{33}
$$

On the other hand, motivated by the famed Cauchy-Schwarz (CS) inequality:

$$
\left| \int f\left( x\right) g\left( x\right) dx \right|^2 \leq \int \left| f\left( x\right) \right|^2 dx \int \left| g\left( x\right) \right|^2 dx, \tag{34}
$$

with equality if and only if $f\left( x\right)$ and $g\left( x\right)$ are linearly dependent (e.g., $f\left( x\right)$ is a scaled version of $g\left( x\right)$), a measure of the "distance" between the PDFs can be defined, named the CS divergence [Jenssen et al., 2006]:

$$
\begin{aligned}
D_{\mathrm{CS}}\left( f \parallel g\right) &= -\log \left( \int fg \right)^2 + \log \left( \int f^2 \right) + \log \left( \int g^2 \right) \\
&= 2 H_2\left( f;g\right) - H_2\left( f\right) - H_2\left( g\right),
\end{aligned} \tag{35}
$$

where the term $H_2\left( f;g\right) = -\log \int f\left( x\right) g\left( x\right) dx$ is also called the quadratic cross entropy [Principe, 2010].
Combining Eqs. (33) and (35), the PRI under the 2-order Rényi entropy can be formulated as:

$$
\begin{aligned}
f_{\text{opt}} &= \arg\min_f H_2\left( f\right) + \beta\left( 2 H_2\left( f;g\right) - H_2\left( f\right) - H_2\left( g\right) \right) \\
&\equiv \arg\min_f \left( 1 - \beta\right) H_2\left( f\right) + 2\beta H_2\left( f;g\right),
\end{aligned} \tag{36}
$$

where the second equation holds because the extra term $\beta H_2\left( g\right)$ is a constant with respect to $f$.
As can be seen, the objective of the naïve PRI for i.i.d. random variables (i.e., Eq. (36)) resembles its new counterpart on graph data (i.e., Eq. (11)). The main difference is that we replace $H_2\left( f\right)$ with $S_{\mathrm{vN}}\left( \widetilde{\sigma}\right)$ and $H_2\left( f;g\right)$ with $S_{\mathrm{vN}}\left( \frac{\widetilde{\sigma} + \widetilde{\rho}}{2}\right)$ to capture structural information.

We estimate $H_2\left( f\right)$ and $H_2\left( f;g\right)$ with the Parzen-window density estimator and optimize Eq. (36) by gradient descent. Fig. 8 demonstrates the structures learned from the original intersect data set for different values of $\beta$.
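For Gaussian Parzen windows, both $H_2\left( f\right)$ and $H_2\left( f;g\right)$ reduce to closed-form pairwise kernel sums, because the convolution of two width-$\sigma$ Gaussians is a Gaussian of width $\sigma\sqrt{2}$. A minimal sketch of the estimators (our illustration; sample sizes and kernel width are arbitrary choices):

```python
import numpy as np

def h2(x, y, sigma=0.5):
    # H2(f; g) = -log ∫ f g for 1-D Gaussian Parzen windows of width sigma:
    # the integral reduces to the mean pairwise kernel of width sigma*sqrt(2).
    d2 = (x[:, None] - y[None, :]) ** 2
    s = sigma * np.sqrt(2)
    k = np.exp(-d2 / (2 * s**2)) / (np.sqrt(2 * np.pi) * s)
    return -np.log(k.mean())

rng = np.random.default_rng(1)
f_samples = rng.normal(0.0, 1.0, 200)   # samples from f
g_samples = rng.normal(0.5, 1.0, 200)   # samples from g (shifted)

H2_f  = h2(f_samples, f_samples)        # quadratic entropy H2(f)
H2_fg = h2(f_samples, g_samples)        # quadratic cross entropy H2(f; g)
D_cs  = 2 * H2_fg - H2_f - h2(g_samples, g_samples)   # Eq. (35)
# CS divergence is non-negative for the plug-in Parzen densities
assert D_cs >= 0.0
```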
Interestingly, when $\beta = 0$, we obtain a single point, very similar to what happens for Graph-PRI, which learns a nearly star-shaped graph whose edges concentrate on one node. Similarly, when $\beta \rightarrow \infty$, both the naïve PRI and Graph-PRI return the original input as the solution.
## C DETAILS OF USED DATASETS IN SECTION 4.2 AND SECTION 4.3

### C.1 MULTI-TASK LEARNING

Synthetic data. This dataset consists of 20 regression tasks with 100 samples each. Each task is a 30-dimensional linear regression problem in which the last 10 variables are independent of the output variable $y$. The 20 tasks are related in a group-wise manner: the first 10 tasks form one group and the remaining 10 tasks form another. Task coefficients in the same group are fully related to each other, while being unrelated to tasks in the other group.
Task data are generated as follows: the weight vectors for tasks 1 to 10 are $\mathbf{w}_k = \mathbf{w}_a \odot \mathbf{b}_k + \xi$, where $\odot$ is the element-wise Hadamard product; those for tasks 11 to 20 are $\mathbf{w}_k = \mathbf{w}_b \odot \mathbf{b}_k + \xi$, where $\xi \sim \mathcal{N}\left( \mathbf{0}, 0.2\,\mathbf{I}_{20}\right)$. The vectors $\mathbf{w}_a$ and $\mathbf{w}_b$ are generated from $\mathcal{N}\left( \mathbf{0}, \mathbf{I}_{20}\right)$, while $\mathbf{b}_k \sim \mathcal{U}\left( 0,1\right)$ are uniformly distributed 20-dimensional random vectors.

Input and output variables for the $t$-th ($t = 1, \cdots, 20$) task, $X_t$ and $y_t$, are generated as $X_t^{\prime} \sim \mathcal{N}\left( \mathbf{0}, \mathbf{I}_{20}\right)$ and $y_t = X_t^{\prime}\mathbf{w}_t + \mathcal{N}\left( 0,1\right)$. 10-dimensional unrelated variables $X_t^{\prime\prime} \sim \mathcal{N}\left( \mathbf{0}, \mathbf{I}_{10}\right)$ are then concatenated to $X_t^{\prime}$ to form the final input data $X_t = \left\lbrack X_t^{\prime} \; X_t^{\prime\prime} \right\rbrack$.
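The generation procedure above can be sketched as follows (our illustration; we read $\mathcal{N}\left( \mathbf{0}, 0.2\,\mathbf{I}_{20}\right)$ as having variance 0.2, i.e., standard deviation $\sqrt{0.2}$, which is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 20, 100                          # tasks, samples per task

w_a = rng.normal(size=20)               # group-1 base coefficients
w_b = rng.normal(size=20)               # group-2 base coefficients

X, y = [], []
for t in range(T):
    base = w_a if t < 10 else w_b
    b_t = rng.uniform(0, 1, size=20)
    xi = rng.normal(0, np.sqrt(0.2), size=20)   # assumed: 0.2 is the variance
    w_t = base * b_t + xi                       # Hadamard product plus noise
    X_rel = rng.normal(size=(N, 20))            # 20 relevant inputs
    y_t = X_rel @ w_t + rng.normal(size=N)      # unit-variance output noise
    X_irr = rng.normal(size=(N, 10))            # 10 irrelevant inputs
    X.append(np.hstack([X_rel, X_irr]))
    y.append(y_t)

assert X[0].shape == (100, 30) and y[0].shape == (100,)
```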
Parkinson's disease dataset. This is a benchmark multi-task regression dataset comprising a range of biomedical voice measurements taken from 42 patients with early-stage Parkinson's disease. For each patient, the goal is to predict the motor Unified Parkinson's Disease Rating Scale (UPDRS) score based on an 18-dimensional record: age, gender, and 16 jitter and shimmer voice measurements. For the categorical variable "gender", we applied label encoding to convert genders into a numeric representation. We treat UPDRS prediction for each patient as a task, resulting in 42 tasks and 5,875 observations in total.
![019639a2-e6fc-755b-9bc0-14b7be373237_12_248_194_1257_742_0.jpg](images/019639a2-e6fc-755b-9bc0-14b7be373237_12_248_194_1257_742_0.jpg)

Figure 8: Illustration of the structures revealed by the naïve PRI for (a) the Intersect data set. As the value of $\beta$ increases, the solution passes through (b) a single point, (c) modes, (d) and (e) principal curves at different dimensions, and in the extreme case of (f) $\beta \rightarrow \infty$ we get back the data themselves as the solution.
### C.2 BRAIN NETWORK CLASSIFICATION

For both datasets, the Automated Anatomical Labeling (AAL) template was used to extract ROI-averaged time series from the 116 ROIs. To construct the initial brain network topology (i.e., the adjacency matrix $A$), we only keep an edge if its weight (i.e., the absolute correlation coefficient) is among the top $20\%$ of all absolute correlation coefficients in the network.

As for the node features, we only use the correlation coefficients for simplicity. That is, the node feature for node $i$ can be represented as $\mathbf{x}_i = \left\lbrack \rho_{i1}, \rho_{i2}, \cdots, \rho_{in} \right\rbrack^{T}$, in which $\rho_{ij}$ is the Pearson correlation coefficient between node $i$ and node $j$. One can expect performance gains by incorporating more discriminative network property features such as the local clustering coefficient [Rubinov and Sporns, 2010], although this is beyond the main scope of our work.
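This construction can be sketched as follows (our illustration, with random data standing in for the ROI-averaged time series):

```python
import numpy as np

rng = np.random.default_rng(0)
ts = rng.normal(size=(116, 200))      # stand-in ROI-averaged time series
C = np.corrcoef(ts)                   # Pearson correlation matrix

W = np.abs(C)
np.fill_diagonal(W, 0)                # no self-loops
iu = np.triu_indices_from(W, k=1)
thr = np.quantile(W[iu], 0.80)        # keep the top 20% of |correlations|
A = np.where(W >= thr, 1.0, 0.0)      # symmetric adjacency matrix

X = C.copy()                          # node i's feature: its correlation row

density = A[iu].mean()                # fraction of retained edges, ~0.20
assert 0.18 < density < 0.22
```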
The first one is the eyes open and eyes closed (EOEC) dataset [Zhou et al., 2020], which contains the rs-fMRI data of 48 college students (22 females, aged 19-31 years) in both eyes-open and eyes-closed states. The task is to predict the two states based on brain network FC.

The second one is from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database ${}^{8}$. We use the rs-fMRI data collected and preprocessed in [Kuang et al., 2019], which includes 31 AD patients aged 60-90 years, matched by age, gender, and education to mild cognitive impairment (MCI) ${}^{9}$ and normal control (NC, 37 subjects) groups; 106 participants were selected in total. In this work, we only focus on distinguishing the MCI group from the NC group.
EOEC is publicly available from https://github.com/zzstefan/BrainNetClass/

ADNI preprocessed by [Kuang et al., 2019] is publicly available from http://gsl.lab.asu.edu/software/ipf/.
## D NETWORK ARCHITECTURE AND HYPERPARAMETER TUNING

### D.1 FMRI-BASED BRAIN NETWORK CLASSIFICATION

The classification problem is solved using graph neural networks composed of two graph convolutional layers of size 32 with ReLU activation. We also use node feature dropout with probability $10^{-1}$. The node pooling is the sum of the node features, while the node classification minimizes the cross-entropy loss. Hyperparameter search is applied to all methods with a time budget of 3,000 seconds, over 3 runs. The learning rates, $\lambda$, $\beta$, and the softmax temperature are optimized using early pruning. Each graph neural network is fed with graphs generated from the full correlation matrix by selecting edges among the strongest $20\%$ absolute correlation values. For the Graph-PRI method, we used the GCN [Kipf and Welling, 2017] as the graph classification network.
---

${}^{8}$ http://adni.loni.usc.edu/

${}^{9}$ MCI is a transitional stage between AD and NC.

---
For SVM, we use the Gaussian kernel and set the kernel size to 1. For LASSO, we set the regularization hyperparameter to 0.1. For the t-test, we set the significance level to 0.05.
## E MINIMAL IMPLEMENTATION OF GRAPH-PRI IN PYTORCH

Algorithm 1 PRI for Graph Sparsification

---

Input: $\rho = BB^T$, $\beta$, learning rate $\eta$, number of samples $S$

Output: $\sigma_{\mathbf{w}}$

$B \leftarrow$ incidence matrix of $\rho$;

Initialize $\theta = \left\{ \theta_1, \theta_2, \cdots, \theta_M \right\}$;

while not converged do

$L = 0$;

for $i = 1, 2, \cdots, S$ do

$\mathbf{w}^i \leftarrow$ GumbelSoftmax$\left( \theta \right)$;

$\sigma_{\mathbf{w}^i} = B \operatorname{diag}\left( \mathbf{w}^i \right) B^T$;

$L = L + \mathcal{J}_{\beta}\left( \rho, \sigma_{\mathbf{w}^i} \right)$;

end for

$L = \frac{1}{S}L$;

$\theta \leftarrow \theta - \eta \nabla_{\theta} L$;

end while

$\mathbf{w} \leftarrow$ GumbelSoftmax$\left( \theta \right)$;

return $\sigma_{\mathbf{w}} = B \operatorname{diag}\left( \mathbf{w} \right) B^T$;

---
We additionally provide a PyTorch implementation of Graph-PRI.
```python
import numpy as np
import networkx as nx
import torch
import torch.nn.functional as F


def vn_entropy(k, eps=1e-20):
    k = k / torch.trace(k)
    # torch.linalg.eigvalsh replaces the deprecated torch.symeig
    eigv = torch.abs(torch.linalg.eigvalsh(k))
    return -torch.sum(eigv[eigv > 0] * torch.log(eigv[eigv > 0] + eps))


def entropy_loss(sigma, rho, beta):
    assert beta >= 0, "beta shall be >= 0"
    if beta > 0:
        return 0.5 * (1 - beta) / beta * vn_entropy(sigma) \
            + vn_entropy(0.5 * (sigma + rho))
    return vn_entropy(sigma)


def sparse(G, tau, n_samples, max_iteration, lr, beta):
    """
    Args:
        G: networkx Graph
        tau: Gumbel-softmax temperature
        n_samples: number of samples for the Gumbel softmax
    """
    E = nx.incidence_matrix(G, oriented=True)
    E = torch.from_numpy(np.asarray(E.todense(), dtype=np.double))
    rho = E @ E.T                              # full (un-normalized) Laplacian
    m, n = G.number_of_edges(), G.number_of_nodes()
    theta = torch.randn(m, 2, dtype=torch.double, requires_grad=True)
    optimizer = torch.optim.Adam([theta], lr=lr)
    for itr in range(max_iteration):
        cost = 0
        for sample in range(n_samples):
            # Sampling
            z = F.gumbel_softmax(theta, tau, hard=True)
            w = z[:, 1].squeeze()
            sigma = E @ torch.diag(w) @ E.T
            cost = cost + entropy_loss(sigma, rho, beta)
        cost = cost / n_samples
        cost.backward()
        optimizer.step()
        optimizer.zero_grad()
    z = F.gumbel_softmax(theta, tau, hard=True)
    w = z[:, 1].squeeze()
    sigma = E @ torch.diag(w) @ E.T            # sparse Laplacian
    return sigma, w
```

Listing 1: Graph-PRI in PyTorch
## F ZOOMED PLOTS OF FIG. 7

![019639a2-e6fc-755b-9bc0-14b7be373237_14_207_288_1328_1601_0.jpg](images/019639a2-e6fc-755b-9bc0-14b7be373237_14_207_288_1328_1601_0.jpg)

Figure 9: The contributing functional connectivity links for MCI patients. We visualize edges whose probability of being selected exceeds $50\%$. The neural systems are color-coded as: sensorimotor network (SMN), occipital network (ON), fronto-parietal network (FPN), default mode network (DMN), cingulo-opercular network (CON), and cerebellum network (CN), respectively.

![019639a2-e6fc-755b-9bc0-14b7be373237_15_203_287_1332_1602_0.jpg](images/019639a2-e6fc-755b-9bc0-14b7be373237_15_203_287_1332_1602_0.jpg)

Figure 10: The contributing functional connectivity links for the normal control group. We visualize edges whose probability of being selected exceeds $50\%$. The neural systems are color-coded as: sensorimotor network (SMN), occipital network (ON), fronto-parietal network (FPN), default mode network (DMN), cingulo-opercular network (CON), and cerebellum network (CN), respectively.