Title: Tighter Information-Theoretic Generalization Bounds from Supersamples
URL Source: https://arxiv.org/html/2302.02432
Markdown Content: Tighter Information-Theoretic Generalization Bounds from Supersamples Ziqiao Wang Yongyi Mao Abstract
In this work, we present a variety of novel information-theoretic generalization bounds for learning algorithms, from the supersample setting of Steinke & Zakynthinou (2020), the setting of the "conditional mutual information" framework. Our development exploits projecting the loss pair (obtained from a training instance and a testing instance) down to a single number and correlating loss values with a Rademacher sequence (and its shifted variants). The presented bounds include square-root bounds, fast-rate bounds (including those based on variance and sharpness), and bounds for interpolating algorithms. We show theoretically or empirically that these bounds are tighter than all information-theoretic bounds known to date in the same supersample setting.
Machine Learning, ICML
1 Introduction
Using information-theoretic bounds to analyze the generalization properties of a learning algorithm has attracted increasing attention since the seminal works of Russo & Zou (2016; 2019) and Xu & Raginsky (2017). One major advantage of such bounds is that the information-theoretic quantities, e.g., the mutual information (MI) between the training sample and the trained parameter weights, are both distribution-dependent and algorithm-dependent. This makes them an ideal tool to characterize the generalization properties of a learning algorithm, particularly when the traditional algorithm-independent learning-theoretic tools (e.g., VC-dimension (Vapnik, 1998) and Rademacher complexity (Bartlett & Mendelson, 2002)) appear inadequate. For example, Zhang et al. (2017, 2021) show that high-capacity deep neural networks can still generalize well, contradicting the traditional wisdom in statistical learning theory that complex models tend to overfit the training data and perform poorly on unseen data (Vapnik, 1998). In contrast, information-theoretic bounds have been demonstrated experimentally to be capable of tracking the generalization behaviour of modern neural networks (Negrea et al., 2019; Wang et al., 2021; Harutyunyan et al., 2021; Wang & Mao, 2022a, b; Hellström & Durisi, 2022a).
The original information-theoretic bound of Xu & Raginsky (2017) has been extended or improved in many different ways, such as the chaining method (Asadi et al., 2018; Hafez-Kolahi et al., 2020; Zhou et al., 2022b; Clerico et al., 2022), the random subset or individual technique (Negrea et al., 2019; Bu et al., 2019; Haghifam et al., 2020; Rodríguez-Gálvez et al., 2021; Zhou et al., 2022a) and so on. Remarkably, Steinke & Zakynthinou (2020) developed generalization bounds based on a conditional mutual information (CMI) measure obtained for a "supersample" setting. Specifically, the supersample is an $n \times 2$ matrix of data instances. In each row, one instance is selected at random for training and the other is masked out for testing. The authors then show that the CMI between the mask variables and the learned weights, conditioned on the supersample, can be used to upper-bound the generalization error. Although better behaved than the unconditional weight-based MI bounds (e.g., having a boundedness guarantee), the CMI bounds can be difficult to measure for high-dimensional weights, which limits their application. To overcome this difficulty, functional CMI ($f$-CMI) bounds were proposed by Harutyunyan et al. (2021), where the weight variable in the CMI is replaced by the predictions on the supersample. In this case, each prediction pair is a two-dimensional discrete random variable, making the CMI easier to measure and the bound tighter. More recently, Hellström & Durisi (2022a) use loss pairs to replace the predictions in $f$-CMI and obtain even tighter CMI bounds, known as evaluated CMI (e-CMI) bounds. In fact, the earliest version of the e-CMI bound appeared in Steinke & Zakynthinou (2020). The notion was also exploited in later works (Haghifam et al., 2021, 2022, 2023). Note that e-CMI still measures the dependence between a one-dimensional variable (mask) and a two-dimensional variable (loss pair).
In this work, we show that it is possible to further tighten the CMI bounds, using MI terms involving only two one-dimensional variables.
Our development is restricted to the supersample setting of Steinke & Zakynthinou (2020), on which we establish novel CMI/MI bounds that are all easy to measure and tighter than the existing bounds in the same setting. Specifically, 1) we first show that the loss pair used in e-CMI can be replaced by the loss difference, giving rise to a disintegrated CMI bound (Theorem 3.1) and an unconditional MI bound (Theorem 3.2). Both are tighter than the previous square-root CMI bounds, all within the context of the same supersample construction. In particular, the obtained unconditional MI term can be interpreted as the achievable rate over a memoryless channel in communications. We then show that in the interpolating regime (i.e., training error being zero) and under zero-one loss, the generalization error of the learning algorithm can be precisely expressed by the averaged communication rate (Theorem 3.3). In other words, we obtain the "tightest bound" of the generalization error in this setting. We also establish a novel chained MI bound (Theorem 3.4) that is particularly advantageous for continuous and unbounded losses. 2) Following a symmetrization argument for the Rademacher process, similar to Zhivotovskiy & Hanneke (2018), we explicitly exploit the symmetric structure of the expected generalization error by correlating losses with a Rademacher sequence and obtain a novel MI bound involving single losses (Theorem 4.1). Using the communication perspective, we show that the MI quantities in the bound are upper-bounded by the entropy function evaluated at half of the testing error (Theorem 4.2). 3) By correlating losses with a shifted Rademacher sequence, we give novel fast-rate MI bounds of the weighted generalization error (Theorem 4.3). 4) In order to enhance the fast-rate bound in the non-zero training error regime, we extend our analysis by deriving two additional bounds: a variance-based MI bound (Theorem 4.4) and a sharpness-based MI bound (Theorem 4.5).
These novel bounds also incorporate symmetrization arguments, as shown in Lemma 4.4 and Lemma 4.6, respectively. 5) Experimental results show that our bounds nicely track the generalization dynamics of both linear models and non-linear neural networks, and our fast-rate bounds are tighter than the binary KL bound proposed in Hellström & Durisi (2022a), the tightest information-theoretic bound known to date for small, non-zero training error. 6) As a by-product, we also develop a novel Wasserstein-distance-based bound (Theorem 3.5).
Proofs, additional analysis and experimental results are included in the Appendix.
2 Preliminaries

2.1 Probability and Information Theory Notation
Unless otherwise noted, a random variable will be denoted by a capitalized letter, and its realization by the corresponding lower-case letter. Let $P_X$ denote the distribution of a random variable $X$ and let $P_{X|Y}$ be the conditional distribution of $X$ conditioned on $Y$, which, upon conditioning on a specific realization, is denoted by $P_{X|Y=y}$ or simply $P_{X|y}$. Similarly, $\mathbb{E}_X$ is the expectation taken over $X \sim P_X$ and $\mathbb{E}_{X|Y=y}$ (or $\mathbb{E}_{X|y}$) is the expectation taken over $X \sim P_{X|Y=y}$. Let $H(\cdot)$ be the entropy and let $D_{\mathrm{KL}}(P\|Q)$ denote the KL divergence of $P$ with respect to $Q$. Let $I(X;Y)$ be the mutual information (MI) between $X$ and $Y$, and $I(X;Y|Z)$ the conditional mutual information between $X$ and $Y$ conditioned on $Z$. Following Negrea et al. (2019), we refer to $I_z(X;Y) \triangleq D_{\mathrm{KL}}(P_{X,Y|Z=z}\|P_{X|Z=z}P_{Y|Z=z})$ as the disintegrated mutual information, and note that $I(X;Y|Z) = \mathbb{E}_Z[I_Z(X;Y)]$. Also, we use $\mathbb{W}(\cdot,\cdot)$ to denote the Wasserstein distance (a formal definition is given in the Appendix). Throughout the paper, the logarithm takes base $e$, making the unit of mutual information the nat.
2.2 Generalization Error
We consider the supervised learning setting. Let $\mathcal{Z} = \mathcal{X} \times \mathcal{Y}$ be the domain of the instances, where $\mathcal{X}$ and $\mathcal{Y}$ are the input and label spaces, respectively. A model of interest prescribes a family $\mathcal{F}$ of predictors, $\mathcal{F} \subset \mathcal{Y}^{\mathcal{X}}$, where each $f \in \mathcal{F}$ is parameterized by a vector $w$ in some space $\mathcal{W}$. We may write $f$ as $f_w$ as needed. Let $\mu$ be the distribution of the instances and let $S = \{Z_i\}_{i=1}^n \overset{\mathrm{i.i.d.}}{\sim} \mu$ be the training sample. There is a learning algorithm $\mathcal{A}: \mathcal{Z}^n \to \mathcal{W}$, which takes the training sample $S$ as input and outputs a hypothesis $W \in \mathcal{W}$, giving rise to a predictor $f_W \in \mathcal{F}$ that predicts label $Y$ for input $X$. Note that the algorithm $\mathcal{A}$ is characterized by a conditional distribution $P_{W|S}$. Suppose that the quality of the output hypothesis $W$ is evaluated by a loss function $\ell: \mathcal{W} \times \mathcal{Z} \to \mathbb{R}^+_0$. Then for a given $w$, we define the population risk $L_\mu(w) \triangleq \mathbb{E}_{Z'}[\ell(w, Z')]$, where $Z' \sim \mu$ is a testing instance. The quantity $L_\mu = \mathbb{E}_W[L_\mu(W)]$ is then the expected population risk. In practice, we cannot access the data distribution $\mu$, so we usually use the empirical risk as a proxy for the population risk, which is defined as $L_S(w) \triangleq \frac{1}{n}\sum_{i=1}^n \ell(w, Z_i)$ for a fixed $w$. Similarly, $L_n = \mathbb{E}_{W,S}[L_S(W)]$ is the expected empirical risk, where the expectation is taken over $P_{W,S} = \mu^n \otimes P_{W|S}$. Thus, $\mathrm{Err} \triangleq L_\mu - L_n$ is the expected generalization error.
2.3 Supersample Setting
The CMI framework for bounding generalization error was first introduced in Steinke & Zakynthinou (2020). Let $\tilde{Z} \in \mathcal{Z}^{n\times 2}$ be an $n \times 2$ matrix, serving as the "supersample", where every entry is drawn i.i.d. from $\mu$. For notational convenience, we assume that the columns of $\tilde{Z}$ are indexed by $\{0,1\}$ instead of by $\{1,2\}$. We further denote the $i$th row of $\tilde{Z}$ as $\tilde{Z}_i$ with entries $(\tilde{Z}_{i,0}, \tilde{Z}_{i,1})$. Let $U = (U_1, U_2, \dots, U_n)^T \sim \mathrm{Unif}(\{0,1\}^n)$, independent of $\tilde{Z}$, be used to select a training set $S$ from $\tilde{Z}$: $U_i = 0$ dictates that $\tilde{Z}_{i,0}$ be included in the training set $S$ and $\tilde{Z}_{i,1}$ be used for testing; $U_i = 1$ dictates the opposite. Then the constructed training sample $S$ is equivalent to $\tilde{Z}_U = \{\tilde{Z}_{i,U_i}\}_{i=1}^n$. Let $\bar{U}_i = 1 - U_i$; then the testing sample is $\tilde{Z}_{\bar{U}} = \{\tilde{Z}_{i,\bar{U}_i}\}_{i=1}^n$. In addition, let $L_{i,0} \triangleq \ell(\mathcal{A}(\tilde{Z}_U), \tilde{Z}_{i,0})$ and let $L_{i,1}$ be defined similarly. We use $L_i = (L_{i,0}, L_{i,1})$ to denote the loss pair in the $i$th row and $\Delta L_i = L_{i,1} - L_{i,0}$ the difference within the pair. To avoid clutter, we may use the superscripts $+$ and $-$ to respectively replace the subscripts $0$ and $1$ in our notation, e.g., $\tilde{Z}_i^+ = \tilde{Z}_{i,0}$, $\tilde{Z}_i^- = \tilde{Z}_{i,1}$, $L_i^+ = L_{i,0}$ and $L_i^- = L_{i,1}$.
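To make the construction concrete, the following sketch instantiates the supersample setting end to end. The Gaussian data model and the nearest-centroid learner are illustrative assumptions, not part of the paper; any algorithm trained on $\tilde{Z}_U$ would do.

```python
import random

random.seed(0)
n = 100

# Toy data distribution mu (an assumption for illustration): label y ~ Unif{0,1},
# feature x ~ N(y, 1).
def draw_instance():
    y = random.randint(0, 1)
    return (random.gauss(float(y), 1.0), y)

# Supersample Z~: n rows (Z_{i,0}, Z_{i,1}), all entries i.i.d. from mu.
Z = [(draw_instance(), draw_instance()) for _ in range(n)]
U = [random.randint(0, 1) for _ in range(n)]   # masks U ~ Unif({0,1}^n)

S = [Z[i][U[i]] for i in range(n)]             # training sample Z~_U

# A hypothetical learning algorithm A: nearest class centroid.
def train(sample):
    centroids = {}
    for c in (0, 1):
        xs = [x for (x, y) in sample if y == c]
        centroids[c] = sum(xs) / len(xs) if xs else float(c)
    return centroids

def zero_one_loss(w, z):
    x, y = z
    pred = 0 if abs(x - w[0]) <= abs(x - w[1]) else 1
    return int(pred != y)

w = train(S)
L = [(zero_one_loss(w, Z[i][0]), zero_one_loss(w, Z[i][1])) for i in range(n)]
dL = [L[i][1] - L[i][0] for i in range(n)]      # loss differences Delta L_i

# (-1)^{U_i} Delta L_i is exactly the per-row test-minus-train loss gap.
gap = [(-1) ** U[i] * dL[i] for i in range(n)]
```

Note that, under zero-one loss, each $\Delta L_i$ lands in $\{-1, 0, 1\}$, which is what makes the loss-difference quantities of the next section cheap to estimate.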
3 Generalization Bounds via Loss Difference

3.1 Loss-Difference CMI Bound
Using the loss difference, we first present the following square-root CMI bound.
Theorem 3.1.
Assume that the loss is bounded in $[0,1]$. Then

$$|\mathrm{Err}| \le \frac{1}{n}\sum_{i=1}^n \mathbb{E}_{\tilde{Z}}\sqrt{2 I_{\tilde{Z}}(\Delta L_i; U_i)} \le \frac{1}{n}\sum_{i=1}^n \sqrt{2 I(\Delta L_i; U_i \,|\, \tilde{Z})}.$$
Noting the Markov chain $U - W - f_W(\tilde{Z}_i) - L_i - \Delta L_i$ (conditioned on $\tilde{Z}$) and applying the data-processing inequality (DPI), this "loss-difference CMI" (or "ld-CMI") bound in Theorem 3.1 (the second bound) is tighter than the bounds in previous works (Steinke & Zakynthinou, 2020; Haghifam et al., 2020; Harutyunyan et al., 2021; Hellström & Durisi, 2022a), namely,

$$\underbrace{I(\Delta L_i; U_i|\tilde{Z})}_{\text{ld-CMI}} \le \underbrace{I(L_i; U_i|\tilde{Z})}_{\text{e-CMI}} \le \underbrace{I(f_W(\tilde{Z}_i); U_i|\tilde{Z})}_{f\text{-CMI}} \le \underbrace{I(W; U_i|\tilde{Z})}_{\text{CMI}}.$$

It is remarkable that the ld-CMI bound can be significantly tighter. To see this, we re-express the loss function $\ell$ as a function on $\mathcal{Y}^2 = \mathcal{Y} \times \mathcal{Y}$, where $\ell(y, y')$ is the loss value of the predicted label $y$ with respect to the true label $y'$. We say that two elements $(y_1, y_1')$ and $(y_2, y_2')$ in $\mathcal{Y}^2$ are loss-equivalent, written $(y_1, y_1') \equiv_\ell (y_2, y_2')$, if $\ell(y_1, y_1') = \ell(y_2, y_2')$. It is straightforward to verify that $\equiv_\ell$ is an equivalence relation on $\mathcal{Y}^2$. Let $\mathcal{L}$ denote the image of $\mathcal{Y}^2$ under $\ell$. The quotient space $\mathcal{Y}^2/\!\equiv_\ell$, i.e., the set of equivalence classes modulo $\equiv_\ell$, has a one-to-one correspondence with $\mathcal{L}$, under which we may identify $\mathcal{Y}^2/\!\equiv_\ell$ with $\mathcal{L}$. Furthermore, we say that two loss pairs $(\ell_A, \ell_A')$ and $(\ell_B, \ell_B')$ in $\mathcal{L}^2 = \mathcal{L} \times \mathcal{L}$ are loss-difference-equivalent, written $(\ell_A, \ell_A') \equiv_\Delta (\ell_B, \ell_B')$, if $\ell_A - \ell_A' = \ell_B - \ell_B'$. Then $\equiv_\Delta$ is likewise an equivalence relation on $\mathcal{L}^2$, which induces the quotient space $\mathcal{L}^2/\!\equiv_\Delta$. Note that $f_W(\tilde{Z}_i)$ is a random variable on $\mathcal{Y}^4 = \mathcal{Y}^2 \times \mathcal{Y}^2$, whereas $\Delta L_i$ is essentially a random variable on $\mathcal{L}^2/\!\equiv_\Delta$, which can be identified with $(\mathcal{Y}^2/\!\equiv_\ell)^2/\!\equiv_\Delta$ under the aforementioned one-to-one correspondence. There can be a significant reduction in space size from $\mathcal{Y}^4$ to $(\mathcal{Y}^2/\!\equiv_\ell)^2/\!\equiv_\Delta$ when $\mathcal{Y}$ or $\mathcal{L}$ is large (assuming they are finite, to fix ideas). Thus, $\Delta L_i$ reveals much less information about $U_i$ than $f_W(\tilde{Z}_i)$ does, making the term $I(\Delta L_i; U_i|\tilde{Z})$ significantly smaller than $I(f_W(\tilde{Z}_i); U_i|\tilde{Z})$ and suggesting that the ld-CMI bound can be much tighter than the $f$-CMI bound. A similar argument can be made comparing the ld-CMI and the e-CMI bounds.
It is noteworthy that the loss-difference CMI bound is easier to compute than the 𝑓 -CMI and e-CMI bounds, since Δ 𝐿 𝑖 is a scalar. Interestingly, when regarding Δ 𝐿 𝑖 as a (scaled) one-dimensional projection of 𝐿 𝑖 on a particular direction, the term 𝐼 ( Δ 𝐿 𝑖 ; 𝑈 𝑖 | 𝑍 ~ ) shares some similarity with the notion of Sliced Mutual Information (SMI) recently proposed in (Goldfeld & Greenewald, 2021; Goldfeld et al., 2022); the difference is that SMI requires averaging over a random direction of projection.
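The DPI ordering above can be checked directly on any explicit joint law of $(U_i, L_i)$. The sketch below computes plug-in mutual information (in nats) for a discrete joint and confirms $I(\Delta L_i; U_i) \le I(L_i; U_i)$, with a strict gap for the probability table used here; the table itself is an arbitrary assumption for illustration.

```python
from collections import Counter
from math import log

def mutual_information(pairs):
    # Plug-in MI (nats) from a list of ((x, y), prob) entries; duplicate
    # keys are summed, so coarse-grained variables can reuse this directly.
    pxy, px, py = Counter(), Counter(), Counter()
    for (x, y), p in pairs:
        pxy[(x, y)] += p
        px[x] += p
        py[y] += p
    return sum(p * log(p / (px[x] * py[y]))
               for (x, y), p in pxy.items() if p > 0)

# Assumed conditional laws of the loss pair (L0, L1) given the mask U,
# with U uniform on {0, 1} (an arbitrary illustrative table).
cond = {
    0: {(0, 0): 0.5, (0, 1): 0.3, (1, 0): 0.1, (1, 1): 0.1},
    1: {(0, 0): 0.3, (0, 1): 0.1, (1, 0): 0.3, (1, 1): 0.3},
}
joint = [((u, pair), 0.5 * p) for u, table in cond.items() for pair, p in table.items()]

mi_pair = mutual_information(joint)                        # e-CMI-style term I(L_i; U_i)
mi_diff = mutual_information(
    [((u, l1 - l0), p) for (u, (l0, l1)), p in joint])     # ld term I(Delta L_i; U_i)
```

The gap is strict here because the cells $(0,0)$ and $(1,1)$, which merge into $\Delta L_i = 0$, carry different posteriors of $U_i$; when the merged cells happen to share the same posterior, the two MI terms coincide.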
Figure 1: Left: channel from $U_i$ to $\Delta L_i$. Right: channel from $U_i$ to $L_i^+$. Zero-one loss assumed.

3.2 Loss-Difference MI Bound
Under the supersample setting above, we can also obtain a generalization bound based on the loss-difference MI without conditioning on the supersample.
Theorem 3.2.
Assume that $\ell(\cdot,\cdot) \in [0,1]$. Then

$$|\mathrm{Err}| \le \frac{1}{n}\sum_{i=1}^n \sqrt{2 I(\Delta L_i; U_i)}.$$
By the independence of $U_i$ and $\tilde{Z}$, we have $I(\Delta L_i; U_i) \le I(\Delta L_i; U_i) + I(U_i; \tilde{Z} \,|\, \Delta L_i) = I(\Delta L_i; U_i \,|\, \tilde{Z})$. Hence the bound in Theorem 3.2 is tighter than the ld-CMI bound in Theorem 3.1, although it is not directly comparable to the first bound in Theorem 3.1.
It is interesting to relate the MI $I(\Delta L_i; U_i)$ to a communication setting in which $P_{\Delta L_i|U_i}$ specifies a memoryless channel with input $U_i$ and output $\Delta L_i$. Then $I(\Delta L_i; U_i)$ is the rate of reliable communication achievable over this channel with the input distribution $P_{U_i}$ (which is $\mathrm{Bern}(\frac{1}{2})$ by the construction of $U$) (Shannon, 1948). Consider the special case where $\ell(\cdot,\cdot)$ is the zero-one loss, i.e., $\ell(w,z) = \mathbb{1}\{f_w(x) \ne y\}$. In this case, $\Delta L_i \in \{-1, 0, 1\}$, and the channel is shown in Figure 1 (left), in which $\epsilon_i$ and $\alpha_i$ are transition probabilities as shown on the respective transition edges. In particular, recalling $\Delta L_i = L_i^- - L_i^+$, we see that $\alpha_i$ is the probability that in $\tilde{Z}_i$ the instance selected for training has the same loss value as the instance selected for testing, and that $\epsilon_i$ is the probability that the training instance in $\tilde{Z}_i$ has a higher loss value than the testing instance. It follows that any interpolating algorithm, namely one that achieves zero training error, must have $\epsilon_i = 0$ for each $i$. The following theorem can then be proved.
Theorem 3.3.
Under zero-one loss and for any interpolating algorithm $\mathcal{A}$, $I(\Delta L_i; U_i) = (1-\alpha_i)\ln 2$ nats for each $i$, and

$$|\mathrm{Err}| = L_\mu = \sum_{i=1}^n \frac{I(\Delta L_i; U_i)}{n \ln 2}.$$
In this case, the generalization error is exactly determined by the communication rate over the channel in Figure 1 (left), averaged over all such channels, making Theorem 3.3 obviously the "tightest bound" on the generalization error in the interpolating regime. It is of course also tighter than the interpolating bound in Hellström & Durisi (2022a), which may alternatively be seen from $I(\Delta L_i; U_i) \le I(L_i; U_i \,|\, \tilde{Z})$. Note that Haghifam et al. (2022) also give an MI quantity that can determine the generalization error in the interpolating case, although their leave-one-out MI is between an $(n+1)$-dimensional random variable and a one-dimensional random variable, and its corresponding bound is established without exploiting the communication perspective.
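The closed form $I(\Delta L_i; U_i) = (1-\alpha_i)\ln 2$ is easy to verify with a plug-in computation over the interpolating channel of Figure 1 (left), i.e., with $\epsilon_i = 0$; a minimal sketch:

```python
from math import log

def mi(joint):
    # Plug-in MI (nats) from a dict {(u, v): prob}.
    pu, pv = {}, {}
    for (u, v), p in joint.items():
        pu[u] = pu.get(u, 0.0) + p
        pv[v] = pv.get(v, 0.0) + p
    return sum(p * log(p / (pu[u] * pv[v])) for (u, v), p in joint.items() if p > 0)

def interpolating_channel(alpha):
    # U uniform on {0, 1}; eps_i = 0 (zero training error), so:
    #   U = 0: Delta L = 0 w.p. alpha, +1 w.p. 1 - alpha
    #   U = 1: Delta L = 0 w.p. alpha, -1 w.p. 1 - alpha
    return {
        (0, 0): 0.5 * alpha, (0, 1): 0.5 * (1 - alpha),
        (1, 0): 0.5 * alpha, (1, -1): 0.5 * (1 - alpha),
    }

checks = [(a, mi(interpolating_channel(a)), (1 - a) * log(2))
          for a in (0.1, 0.3, 0.5, 0.9)]
```

The computation matches the theorem because $H(\Delta L_i) = h(\alpha_i) + (1-\alpha_i)\ln 2$ while $H(\Delta L_i | U_i) = h(\alpha_i)$, with $h$ the binary entropy.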
Furthermore, it is possible to establish further tightened loss-difference MI bounds for loss functions more general than those allowed in Theorem 3.2. Specifically, the loss function can be unbounded and continuous, as presented in the next theorem, where we apply the chaining technique (Asadi et al., 2018; Hafez-Kolahi et al., 2020; Zhou et al., 2022b; Clerico et al., 2022) and the obtained bound consists of MI terms between $U_i$ and successively quantized versions of $\Delta L_i$. To that end, let $\mathrm{Err}_i(\Delta\ell_i) \triangleq (-1)^{U_i}\Delta\ell_i$ and let $\Gamma \subseteq \mathbb{R}$ be the range of $\Delta\ell$. Then $\{\mathrm{Err}_i(\Delta\ell_i)\}_{\Delta\ell_i \in \Gamma}$ is a random process (some prerequisite definitions for the chaining technique, such as stochastic chain, separable process and sub-Gaussian process, are given in Appendix A), and applying the stochastic chaining method (Zhou et al., 2022b) gives the following chained MI bound.
Theorem 3.4.
For each $i \in [n]$, assume $\{\Delta L_{i,k}\}_{k=k_0}^{\infty}$ is a stochastic chain of $(\{\mathrm{Err}_i(\Delta\ell_i)\}_{\Delta\ell_i \in \Gamma}, \Delta L_i)$. Then

$$\mathrm{Err} \le \frac{1}{n}\sum_{i=1}^n \sum_{k=k_0}^{\infty} \sqrt{2\,\mathbb{E}\big[|\Delta L_{i,k} - \Delta L_{i,k-1}|^2\big]\, I(\Delta L_{i,k}; U_i)},$$

where $\Delta L_{i,k}$ is the $k$th-level quantization of $\Delta L_i$, and the expectation on the RHS is taken over $(\Delta L_{i,k}, \Delta L_{i,k-1})$.
Notice that the bound is expressed through MI terms each involving $U_i$ and $\Delta L_{i,k}$, both being discrete random variables. This has not arisen in the previous chained weight-based MI bounds, which either contain the continuous random variable $S$ (Asadi et al., 2018; Zhou et al., 2022b; Clerico et al., 2022) or are conditioned on the continuous random variable $\tilde{Z}$ (Hafez-Kolahi et al., 2020). Additionally, by the master definition of MI (Cover & Thomas, 2006, Eq. (8.54)), we know that $I(\Delta L_i; U_i) = \sup_k I(\Delta L_{i,k}; U_i)$, and $I(\Delta L_{i,k}; U_i) \to I(\Delta L_i; U_i)$ as $k \to \infty$.
For a bounded loss, the diameter $\mathrm{diam}(\Gamma)$ is finite, so we can use hierarchical partitions as in Asadi et al. (2018) to construct a deterministic sequence $\{\Delta L_{i,k}\}_{k=k_0}^{\infty}$. This is deferred to Corollary B.1 in the Appendix.
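As a sanity check on the ingredients of the chained bound (a sketch with assumed dyadic quantizers and an assumed joint law, not the hierarchical partition of Corollary B.1), one can quantize a finely discretized $\Delta L_i$ at resolutions $2^{-k}$ and watch $I(\Delta L_{i,k}; U_i)$ increase toward $I(\Delta L_i; U_i)$, as the master definition of MI predicts:

```python
from math import log, floor

def mi(joint):
    # Plug-in MI (nats) from a dict {(u, v): prob}.
    pu, pv = {}, {}
    for (u, v), p in joint.items():
        pu[u] = pu.get(u, 0.0) + p
        pv[v] = pv.get(v, 0.0) + p
    return sum(p * log(p / (pu[u] * pv[v])) for (u, v), p in joint.items() if p > 0)

# Assumed joint: U uniform on {0, 1}; Delta L on a 17-point grid in [-1, 1],
# tilted toward +1 when U = 0 and toward -1 when U = 1.
grid = [i / 8 - 1 for i in range(17)]
weight = lambda u, v: 2.0 + (v if u == 0 else -v)   # unnormalized tilt
norm = {u: sum(weight(u, v) for v in grid) for u in (0, 1)}
joint = {(u, v): 0.5 * weight(u, v) / norm[u] for u in (0, 1) for v in grid}

def quantized(joint, k):
    # k-th level dyadic quantizer v -> floor(v * 2^k) / 2^k; these quantizers
    # are nested, so I(Delta L_k; U) is nondecreasing in k.
    q = {}
    for (u, v), p in joint.items():
        key = (u, floor(v * 2 ** k) / 2 ** k)
        q[key] = q.get(key, 0.0) + p
    return q

levels = [mi(quantized(joint, k)) for k in range(5)]
full = mi(joint)   # sup_k I(Delta L_k; U) recovers this value
```

Since the grid consists of multiples of $1/8$, the quantization becomes exact at $k = 3$, after which the chained MI terms equal the full MI.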
3.3 Loss-Difference Bound Beyond CMI and MI
It is possible to develop generalization bounds based on the loss differences in the supersample using distances or divergences beyond the information-theoretic measures. Here we present such a bound based on the Wasserstein distance. As investigated in the previous literature (Rodríguez Gálvez et al., 2021), the Wasserstein distance usually gives a tighter bound than the mutual information.
Theorem 3.5.
Assume that $\ell(\cdot,\cdot) \in [0,1]$. Then

$$|\mathrm{Err}| \le \frac{1}{n}\sum_{i=1}^n \mathbb{E}_{U_i}\big[\mathbb{W}(P_{\Delta L_i | U_i}, P_{\Delta L_i})\big].$$
Unlike the results in Rodríguez Gálvez et al. (2021), here we do not require the loss to be Lipschitz continuous.
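Under zero-one loss, $\Delta L_i$ is supported on $\{-1, 0, 1\}$, so the Wasserstein-1 distances in Theorem 3.5 reduce to sums of absolute CDF differences and the bound can be evaluated in a few lines. The transition probabilities below ($\alpha = 0.6$, $\epsilon = 0.1$, borrowed from the Figure 1 (left) channel) are assumed numbers for illustration.

```python
def w1(p, q, support):
    # Wasserstein-1 distance between two laws on a sorted 1-D support:
    # sum over cells of |CDF difference| times cell width.
    d, cp, cq = 0.0, 0.0, 0.0
    for a, b in zip(support, support[1:]):
        cp += p.get(a, 0.0)
        cq += q.get(a, 0.0)
        d += abs(cp - cq) * (b - a)
    return d

support = [-1, 0, 1]
alpha, eps = 0.6, 0.1
cond = {  # assumed conditionals P(Delta L | U)
    0: {-1: eps, 0: alpha, 1: 1 - alpha - eps},
    1: {-1: 1 - alpha - eps, 0: alpha, 1: eps},
}
marginal = {v: 0.5 * cond[0][v] + 0.5 * cond[1][v] for v in support}

# Per-row term of Theorem 3.5: E_{U_i}[ W(P_{Delta L_i | U_i}, P_{Delta L_i}) ].
bound_term = 0.5 * w1(cond[0], marginal, support) + 0.5 * w1(cond[1], marginal, support)

# Actual per-row generalization gap |E[(-1)^U Delta L]| for comparison.
err_term = abs(0.5 * sum(v * p for v, p in cond[0].items())
               - 0.5 * sum(v * p for v, p in cond[1].items()))
```

For this symmetric channel the Wasserstein term happens to match the per-row gap exactly, illustrating how tight the bound can be.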
4 Generalization Bounds via Correlating with Rademacher Sequence
We have so far obtained tighter square-root MI bounds based on information measures (and their variants) between the loss difference $\Delta L_i$ and the mask variable $U_i$. However, the loss difference may not be usable for obtaining fast-rate generalization bounds, in which the square-root function is removed (Grunwald et al., 2021; Hellström & Durisi, 2021, 2022a). This is because deriving a fast-rate bound usually relies on a weighted generalization error, for which one loses the center-symmetric structure of the standard generalization error. Specifically, knowing $\Delta L_i$ and $U_i$ is sufficient to determine the generalization error at the $i$th position via $(-1)^{U_i}\Delta L_i$. However, for the weighted generalization error at the $i$th row, defined by $E^i_{C_1} = L_{i,\bar{U}_i} - C_1 L_{i,U_i}$ (for some constant $C_1 > 0$), having $U_i$ and a weighted loss difference $\Delta_{C_1} L_i = L_i^- - C_1 L_i^+$ does not allow its recovery from $(-1)^{U_i}\Delta_{C_1} L_i$, since $L_i^+ - C_1 L_i^- \ne C_1 L_i^+ - L_i^-$ in general. Indeed, knowing both $L_i^- - C_1 L_i^+$ and $L_i^+ - C_1 L_i^-$ requires knowing $L_i$. Then, in order to obtain fast-rate bounds, we would need to give up the loss difference and return to the original e-CMI as in Hellström & Durisi (2022a).
Therefore, if we still want to use an MI between two one-dimensional random variables to bound the error, we need another technique. This motivates us to adopt a Rademacher viewpoint to derive the CMI bounds. Before handling the fast-rate CMI bound for the weighted generalization error, we again consider the standard generalization error.
4.1 Single-Loss MI Bounds
Although the CMI setting, particularly its construction of the "ghost sample", is conceptually related to the Rademacher complexity (Bartlett & Mendelson, 2002), the information-theoretic generalization bounds in the previous literature do not explicitly exploit this connection. Fortunately, both information-theoretic bounds (Negrea et al., 2019; Hellström & Durisi, 2020, 2021, 2022a) and the Rademacher viewpoint (Kakade et al., 2008; Yang et al., 2019) have been shown to connect to PAC-Bayes bounds; we thus derive a variant of the e-CMI bound by invoking a symmetrization argument similar to those of Zhivotovskiy & Hanneke (2018) and Yang et al. (2019).
We first note the following lemma.
Lemma 4.1.
The expected generalization error satisfies $\mathrm{Err} = \frac{2}{n}\sum_{i=1}^n \mathbb{E}_{L_i^+, \varepsilon_i}[\varepsilon_i L_i^+]$, where $\varepsilon_i = (-1)^{\bar{U}_i}$.
Note that $\varepsilon_{1:n} = \{\varepsilon_i\}_{i=1}^n$ is a sequence of Rademacher random variables, and the lemma suggests that $\mathrm{Err} = 2\,\mathbb{E}_{\tilde{Z}^+_{1:n}}\mathbb{E}_{\varepsilon_{1:n}}\mathbb{E}_{L^+_{1:n}|\varepsilon_{1:n},\tilde{Z}^+_{1:n}}\big[\frac{1}{n}\sum_{i=1}^n \varepsilon_i L_i^+\big]$, where $\tilde{Z}^+_{1:n} = \{\tilde{Z}_i^+\}_{i=1}^n$ and $L^+_{1:n} = \{L_i^+\}_{i=1}^n$. Then recall that the Rademacher complexity is defined as $\mathfrak{R}_n(\mathcal{W}) \triangleq \mathbb{E}_S\mathbb{E}_{\varepsilon_{1:n}}\big[\sup_{w\in\mathcal{W}}\frac{1}{n}\sum_{i=1}^n \varepsilon_i \ell(w, Z_i)\big]$ (Bartlett & Mendelson, 2002). Notably, the expected generalization error can be viewed, up to a scale factor of 2, as an "average" version of the Rademacher complexity. While $\mathrm{Err}$ considers the average correlation between the loss sequence and the Rademacher sequence, the Rademacher complexity measures the worst-case such correlation. Thus, $\mathrm{Err} \le 2\mathfrak{R}_n(\mathcal{W})$.
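Lemma 4.1 can be verified by brute force on a toy problem: fix a small supersample, average over all column swaps (the columns are i.i.d., so this reproduces the expectation over $\tilde{Z}$ for a fixed multiset of instances) and all $2^n$ masks, and compare the averaged test-minus-train error with $\frac{2}{n}\sum_i \mathbb{E}[\varepsilon_i L_i^+]$. The data points and the nearest-centroid `train` routine are toy assumptions.

```python
from itertools import product

# A tiny fixed supersample (n = 3) of labeled points (x, y); toy numbers.
Z = [((0.2, 0), (1.4, 1)), ((0.9, 1), (-0.3, 0)), ((0.1, 0), (0.8, 1))]
n = len(Z)
bits = list(product((0, 1), repeat=n))

def train(sample):
    # Deterministic toy algorithm: nearest class centroid, with a fallback
    # centroid at x = c when class c is absent from the training set.
    out = {}
    for c in (0, 1):
        xs = [x for x, y in sample if y == c]
        out[c] = sum(xs) / len(xs) if xs else float(c)
    return out

def loss(w, z):
    x, y = z
    pred = 0 if abs(x - w[0]) <= abs(x - w[1]) else 1
    return int(pred != y)

# Averaging over all column swaps `s` and all masks `U` reproduces the
# expectations in Lemma 4.1 exactly for this fixed multiset of instances.
err, rhs = 0.0, 0.0
for s in bits:
    Zs = [(Z[i][s[i]], Z[i][1 - s[i]]) for i in range(n)]
    for U in bits:
        w = train([Zs[i][U[i]] for i in range(n)])
        for i in range(n):
            err += loss(w, Zs[i][1 - U[i]]) - loss(w, Zs[i][U[i]])
            rhs += 2 * (-1) ** (1 - U[i]) * loss(w, Zs[i][0])  # eps_i * L_i^+
scale = n * len(bits) ** 2
err, rhs = err / scale, rhs / scale
```

The two averages agree exactly, which is the content of the lemma: conditioned on the training assignment, the test-minus-train gap at each row equals twice the correlation between $\varepsilon_i$ and the column-0 loss.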
Based on Lemma 4.1, we have the following bound.
Theorem 4.1.
Assume $\ell(\cdot,\cdot) \in [0,1]$. Then

$$|\mathrm{Err}| \le \frac{2}{n}\sum_{i=1}^n \sqrt{2 I(L_i^+; U_i)} \le \frac{2}{n}\sum_{i=1}^n \sqrt{2 I(f_W(X_i^+); U_i \,|\, \tilde{Z})}.$$
The variable 𝑈 𝑖 in the above MI/CMI terms can obviously be replaced by 𝜀 𝑖 . Thus the theorem can be interpreted as using a different notion of “average correlation”, namely mutual information, between losses (or predictions) and Rademacher noises to bound the original notion of average correlation (as stated in Lemma 4.1 and discussed earlier).
This bound may not be directly comparable to others due to the undesired constant of 2 outside the square-root function. We will soon see that the $I(L_i^+; U_i)$-based bound becomes more useful once the square root is removed.
For the zero-one loss, the dependence between $U_i$ and $L_i^+$ is characterized by the communication channel given in Figure 1 (right). In this case, $U_i = 0$ indicates that $\tilde{Z}_i^+$ is selected for training, and $p_i$ is then the error rate on this training instance. Similarly, when $U_i = 1$, $\tilde{Z}_i^+$ is used for testing, and $1 - q_i$ is then the error rate on this testing instance. In practice, we usually have $p_i < 1 - p_i$, since $L_i^+$ is more likely to be zero when $\tilde{Z}_i^+$ is a training instance, and we may also have $p_i < 1 - q_i$, since $L_i^+$ is more likely to be one when $\tilde{Z}_i^+$ is a testing instance than when it is used in training. When $p_i = 0$, this channel reduces to a $Z$-channel (Cover & Thomas, 2006). This corresponds to an interpolating algorithm, for which we have the following theorem.
Theorem 4.2.
For the zero-one loss and any interpolating algorithm, we have $\frac{1}{n}\sum_{i=1}^n I(L_i^+; U_i) \le H\!\left(\frac{L_\mu}{2}\right)$.
When the loss is not discrete, we can again obtain a chained MI bound by quantizing the continuous random variable 𝐿 𝑖 + , which is given in Theorem C.1 in Appendix C.4.
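For the $Z$-channel of the interpolating case ($p_i = 0$), $I(L_i^+; U_i)$ has the closed form $h\big(\frac{1-q_i}{2}\big) - \frac{1}{2}h(q_i)$ with $h$ the binary entropy in nats, so Theorem 4.2 can be checked numerically; the per-row probabilities $q_i$ below are assumptions for illustration.

```python
from math import log

def h(p):
    # Binary entropy in nats.
    return 0.0 if p in (0.0, 1.0) else -p * log(p) - (1 - p) * log(1 - p)

def z_channel_mi(q):
    # Interpolating case (p_i = 0): U = 0 -> L+ = 0; U = 1 -> L+ = 1 w.p. 1 - q.
    # I(L+; U) = H(L+) - H(L+ | U) = h((1 - q)/2) - h(q)/2.
    return h((1 - q) / 2) - h(q) / 2

# Assumed per-row probabilities q_i of a correct test prediction.
qs = [0.95, 0.8, 0.7, 0.99, 0.6]
avg_mi = sum(z_channel_mi(q) for q in qs) / len(qs)

L_mu = sum(1 - q for q in qs) / len(qs)   # the test (population) error
entropy_bound = h(L_mu / 2)               # Theorem 4.2's upper bound
```

The inequality follows from dropping the $\frac{1}{2}h(q_i)$ term and applying Jensen's inequality to the concave $h$, which is exactly what the numeric check reflects.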
4.2 Fast-Rate MI Bound
We are now in a position to discuss the weighted generalization error, Err 𝐶 1 ≜ 𝐿 𝜇 − ( 1 + 𝐶 1 ) 𝐿 𝑛 , where 𝐶 1 is a prescribed constant. This notion is important for obtaining the fast-rate PAC-Bayes bounds (Catoni, 2007).
To bound this weighted generalization error, similar to Lemma 4.1, we have the following symmetry argument.
Lemma 4.2.
The weighted generalization error can be rewritten as

$$\mathrm{Err}_{C_1} = \frac{2 + C_1}{n}\sum_{i=1}^n \mathbb{E}_{L_i^+, \tilde{\varepsilon}_i}[\tilde{\varepsilon}_i L_i^+],$$

where $\tilde{\varepsilon}_i = (-1)^{\bar{U}_i} - \frac{C_1}{C_1+2}$ is a shifted Rademacher variable with mean $-\frac{C_1}{C_1+2}$.
The relationship between $\mathrm{Err}$ and the Rademacher complexity likewise extends to that between $\mathrm{Err}_{C_1}$ and the "shifted Rademacher complexity" defined as $\tilde{\mathfrak{R}}_n(\mathcal{W}) \triangleq \mathbb{E}_S\mathbb{E}_{\tilde{\varepsilon}_{1:n}}\big[\sup_{w\in\mathcal{W}}\frac{1}{n}\sum_{i=1}^n \tilde{\varepsilon}_i \ell(w, Z_i)\big]$, namely $\mathrm{Err}_{C_1} \le 2\tilde{\mathfrak{R}}_n(\mathcal{W})$.
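Lemma 4.2 can be sanity-checked on a single-row ($n = 1$) joint distribution that has the column-swap symmetry $P(l_0, l_1 | u) = P(l_1, l_0 | 1 - u)$ of the supersample construction; the probability table and the choice $C_1 = 0.7$ below are assumptions for illustration.

```python
# Assumed conditional law of the loss pair (L0, L1) given U = 0; the law
# given U = 1 is obtained by swapping the pair, as the construction dictates.
p_given_u0 = {(0, 0): 0.5, (0, 1): 0.3, (1, 0): 0.1, (1, 1): 0.1}
p_given_u1 = {(l1, l0): p for (l0, l1), p in p_given_u0.items()}
joint = {(0, l): 0.5 * p for l, p in p_given_u0.items()}
joint.update({(1, l): 0.5 * p for l, p in p_given_u1.items()})

L_mu = sum(p * l[1 - u] for (u, l), p in joint.items())   # test loss L_{i, U-bar}
L_n = sum(p * l[u] for (u, l), p in joint.items())        # training loss L_{i, U}

C1 = 0.7
err_weighted = L_mu - (1 + C1) * L_n                      # Err_{C1} for n = 1

# RHS of Lemma 4.2 with the shifted variable eps~ = (-1)^{U-bar} - C1/(C1 + 2).
rhs = (2 + C1) * sum(p * ((-1) ** (1 - u) - C1 / (C1 + 2)) * l[0]
                     for (u, l), p in joint.items())
```

The two sides coincide because, under the swap symmetry, $\mathbb{E}[L_i^+] = \frac{L_\mu + L_n}{2}$, so the mean shift of $\tilde{\varepsilon}_i$ converts the standard gap into the weighted one.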
Then, we are ready to present the following bounds.
Theorem 4.3.
Let $\ell(\cdot,\cdot) \in [0,1]$. There exist $C_1, C_2 > 0$ such that

$$L_\mu \le (1+C_1)L_n + \sum_{i=1}^n \frac{I(L_i^+; U_i)}{C_2\, n}, \qquad (1)$$

$$L_\mu \le L_n + \sum_{i=1}^n \frac{4 I(L_i^+; U_i)}{n} + 4\sqrt{\sum_{i=1}^n \frac{L_n\, I(L_i^+; U_i)}{n}}. \qquad (2)$$
Furthermore, if 𝒜 is an interpolating algorithm, we have
$$L_\mu \le \sum_{i=1}^n \frac{2 I(L_i^+; U_i)}{n \ln 2}. \qquad (3)$$
Notice that Eq. (2) does not depend on $C_1, C_2$, as it is obtained by minimizing the bound in Eq. (1) over a region of $(C_1, C_2)$ in which Eq. (1) holds.
Comparing Eq. (3) with the interpolating bound in Hellström & Durisi (2022a, Eq. (12)), the main difference is that their bound is based on $I(L_i^+, L_i^-; U_i)$ instead of $2 I(L_i^+; U_i)$. (Note that Hellström & Durisi (2022a) use $I(L_i^+, L_i^-; U_i \,|\, \tilde{Z})$, but this CMI term can be strengthened to the unconditional MI by the same development as in this paper.) This difference can be characterized by the interaction information (Yeung, 1991), namely $I(L_i^+; U_i; L_i^-) = I(L_i^+; U_i) - I(L_i^+; U_i \,|\, L_i^-) = 2 I(L_i^+; U_i) - I(L_i; U_i)$ (where the second equality follows from the chain rule of MI), and the value $I(L_i^+; L_i^-; U_i)$ can be positive, negative, or zero. Hence, the interpolating bound can be further improved as follows:
$$L_\mu \le \sum_{i=1}^n \frac{\min\{2 I(L_i^+; U_i),\, I(L_i; U_i)\}}{n \ln 2}. \qquad (4)$$
This bound is strictly non-vacuous, since the RHS of Eq. (4) is upper-bounded by $\sum_{i=1}^n \frac{H(U_i)}{n \ln 2} = 1$. Note that the "tightest bound" for interpolating algorithms is already obtained in Theorem 3.3.
Previous works (Steinke & Zakynthinou, 2020; Hellström & Durisi, 2022a) suggest that fast-rate bounds for the weighted generalization error are typically useful when the empirical risk is small or even zero, which may restrict their applications. In the sequel, we introduce two new types of MI bounds that further extend Eq. (1) in Theorem 4.3.
4.3 Variance Based MI Bound
Inspired by the above Rademacher perspective, we first present a new bound that depends on the MI term and a notion of loss variance, defined below.
Definition 4.1 ( 𝛾 -Variance).
For any $\gamma \in (0,1)$, the $\gamma$-variance of a learning algorithm is defined as

$$V(\gamma) \triangleq \mathbb{E}_{W,S}\Big[\frac{1}{n}\sum_{i=1}^n \big(\ell(W, Z_i) - (1+\gamma)L_S(W)\big)^2\Big].$$
By definition, 𝛾 -variance also depends on the data distribution. In the zero-one loss case, it can be characterized by the following lemma.
Lemma 4.3.
Under the zero-one loss assumption, we have $V(\gamma) = L_n - (1-\gamma^2)\,\mathbb{E}_{W,S}\big[L_S^2(W)\big]$.
Loss variances, of any kind, have not appeared in the information-theoretic bounds developed to date. Such a notion however does arise in the PAC-Bayes literature, where such an idea traces back to (Seldin et al., 2012; Tolstikhin & Seldin, 2013). Different from these works, here we utilize an expected empirical variance, and the distribution of 𝑊 in this case is generated by the learning algorithm rather than the posterior distribution used for prediction in PAC-Bayes.
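Lemma 4.3 is easy to verify numerically: enumerate an assumed discrete joint over 0-1 training-loss vectors (standing in for the randomness of $(W, S)$) and compare both sides; the outcome table and $\gamma$ below are illustrative assumptions.

```python
# Each outcome is a 0-1 loss vector (l(W, Z_1), ..., l(W, Z_n)) with a probability.
outcomes = [((0, 0, 1, 0), 0.4), ((1, 1, 0, 0), 0.3),
            ((0, 0, 0, 0), 0.2), ((1, 1, 1, 1), 0.1)]
n = 4
gamma = 0.5

L_n = sum(p * sum(ls) / n for ls, p in outcomes)              # E[L_S(W)]
EL2 = sum(p * (sum(ls) / n) ** 2 for ls, p in outcomes)       # E[L_S^2(W)]

# Definition 4.1 evaluated directly.
V = sum(p * sum((l - (1 + gamma) * sum(ls) / n) ** 2 for l in ls) / n
        for ls, p in outcomes)

lemma_rhs = L_n - (1 - gamma ** 2) * EL2                      # Lemma 4.3
```

The identity holds because $\ell^2 = \ell$ for zero-one losses, which collapses the quadratic expansion of the definition.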
The gap between $\mathrm{Err}$ and $C_1 V(\gamma)$ also admits a "symmetry lemma" (similar to Lemma 4.2) involving the shifted Rademacher sequence.
Lemma 4.4.
For any $C_1 > 0$, we have

$$\mathrm{Err} - C_1 V(\gamma) \le \frac{2 + C_1\gamma^2}{n}\sum_{i=1}^n \mathbb{E}_{L_i^+, \tilde{\varepsilon}_i}[\tilde{\varepsilon}_i L_i^+],$$

where $\tilde{\varepsilon}_i = \varepsilon_i - \frac{C_1\gamma^2}{C_1\gamma^2 + 2}$ is the shifted Rademacher variable with mean $-\frac{C_1\gamma^2}{C_1\gamma^2 + 2}$.
Theorem 4.4.
Assume $\ell(\cdot,\cdot) \in \{0,1\}$ and $\gamma \in (0,1)$. Then there exist $C_1, C_2 > 0$ such that

$$\mathrm{Err} \le C_1 V(\gamma) + \sum_{i=1}^n \frac{I(L_i^+; U_i)}{C_2\, n}. \qquad (5)$$
Notably, the interpolating setting is a sufficient but not necessary condition for zero $\gamma$-variance: $L_n = 0$ implies $V(\gamma) = 0$, but $V(\gamma) = 0$ does not imply $L_n = 0$. In addition, by Lemma 4.3, Eq. (5) can be rewritten as $L_\mu \le (1+C_1)L_n - C_1(1-\gamma^2)\,\mathbb{E}_{W,S}\big[L_S^2(W)\big] + \sum_{i=1}^n \frac{I(L_i^+; U_i)}{C_2\, n}$, so for fixed $C_1$ and $C_2$, the bound in Eq. (5) is tighter than the bound in Eq. (1), with the gap being at least $C_1(1-\gamma^2)\,\mathbb{E}_{W,S}\big[L_S^2(W)\big]$.
4.4 Sharpness Based MI Bound
The nice generalization property of deep neural networks is often credited to the "flat minima" (Jastrzębski et al., 2017) of loss landscapes. Recently, Neu et al. (2021) and Wang & Mao (2022a) have proved that the generalization error can be upper-bounded by an MI-based term plus a sharpness (or flatness) related term. Following a development similar to that of the previous section, we are able to obtain a bound that also depends on an MI term and a sharpness term, using an analysis completely different from those of Neu et al. (2021) and Wang & Mao (2022a).
We first define a notion of sharpness.
Definition 4.2 ( 𝜆 -Sharpness).
For any $\lambda \in (0,1)$, the "$\lambda$-sharpness" at position $i$ of the training set is defined as

$$F_i(\lambda) \triangleq \mathbb{E}_{W, Z_i}\Big[\big(\ell(W, Z_i) - (1+\lambda)\,\mathbb{E}_{W|Z_i}\ell(W, Z_i)\big)^2\Big].$$
This 𝜆 -sharpness can be regarded as an expected version of the “flatness” used in Yang et al. (2019) with 𝑊 ∼ 𝑃 𝑊 | 𝑍 𝑖 instead of some posterior distribution of 𝑊 .
Lemma 4.5.
Assume $\ell(\cdot,\cdot)\in\{0,1\}$. Then
$$F_i(\lambda) = \mathbb{E}_{W,Z_i}[\ell(W,Z_i)] - (1-\lambda^2)\,\mathbb{E}_{Z_i}\big[\mathbb{E}^2_{W|Z_i}[\ell(W,Z_i)]\big].$$
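Since the loss is binary, the identity in Lemma 4.5 follows from $\ell^2 = \ell$ after expanding the square in Definition 4.2. It can be checked numerically on a small hypothetical discrete model of $(W, Z_i)$; the probability tables below are made up for illustration only:

```python
def f_sharp_def(pz, pw_given_z, loss, lam):
    """Definition 4.2: E_{W,Z}[(ell(W,Z) - (1+lam) E_{W|Z}[ell(W,Z)])^2]."""
    total = 0.0
    for z, p_z in enumerate(pz):
        m_z = sum(pw_given_z[z][w] * loss[w][z] for w in range(len(loss)))
        total += p_z * sum(
            pw_given_z[z][w] * (loss[w][z] - (1 + lam) * m_z) ** 2
            for w in range(len(loss))
        )
    return total

def f_sharp_lemma(pz, pw_given_z, loss, lam):
    """Lemma 4.5: E[ell] - (1 - lam^2) E_Z[(E_{W|Z} ell)^2], for ell in {0,1}."""
    mean_loss = second = 0.0
    for z, p_z in enumerate(pz):
        m_z = sum(pw_given_z[z][w] * loss[w][z] for w in range(len(loss)))
        mean_loss += p_z * m_z
        second += p_z * m_z ** 2
    return mean_loss - (1 - lam ** 2) * second

# made-up conditional tables: 2 data values, 3 hypotheses, binary loss
pz = [0.4, 0.6]
pw_given_z = [[0.2, 0.5, 0.3], [0.6, 0.1, 0.3]]  # rows z, columns w
loss = [[0, 1], [1, 0], [1, 1]]                  # loss[w][z] in {0, 1}
for lam in (0.1, 0.5, 0.9):
    assert abs(f_sharp_def(pz, pw_given_z, loss, lam)
               - f_sharp_lemma(pz, pw_given_z, loss, lam)) < 1e-12
```

The two sides agree exactly for any $\lambda$, as the cross term $-2(1+\lambda)$ and the square term $(1+\lambda)^2$ combine with $(1-\lambda^2)$ to cancel.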
Let $F(\lambda) = \frac{1}{n}\sum_{i=1}^{n} F_i(\lambda)$. Similar to Lemma 4.1, Lemma 4.2 and Lemma 4.4, we have the following symmetry argument.
Lemma 4.6.
For any $C_1 > 0$, we have
$$\mathrm{Err} - C_1 F(\lambda) \le \frac{C_1 + 2}{n}\sum_{i=1}^{n}\mathbb{E}_{L_i^+,U_i}\Big[\tilde\varepsilon_i L_i^+ - \frac{C_1(1-\lambda^2)}{C_1+2}\,\hat\varepsilon_i\, h(U_i)\Big],$$
where $\tilde\varepsilon_i = \varepsilon_i - \frac{C_1}{C_1+2}$ and $\hat\varepsilon_i = \varepsilon_i - 1$ are the shifted Rademacher variables, and $h(U_i) = \mathbb{E}_{\tilde Z_i^+|U_i}\big[\mathbb{E}^2_{L_i^+|\tilde Z_i^+,U_i}[L_i^+]\big]$.
We are then ready to present the following bound.
Theorem 4.5.
Assume $\ell(\cdot,\cdot)\in\{0,1\}$ and $\lambda\in(0,1)$. Then, there exist $C_1, C_2 > 0$ such that
$$\mathrm{Err} \le C_1 F(\lambda) + \sum_{i=1}^{n}\frac{I(L_i^+;U_i)}{C_2 n}. \quad (6)$$
Similar to the variance-based bound, zero $\lambda$-sharpness is a weaker condition than the interpolating assumption. In particular, Eq. (6) can be tighter than Eq. (1) in Theorem 4.3 when the empirical risk is non-zero. Specifically, by Lemma 4.5 and Jensen's inequality, Eq. (6) can be rewritten as
$$L_\mu \le (1+C_1)L_n - C_1(1-\lambda^2)L_n^2 + \sum_{i=1}^{n}\frac{I(L_i^+;U_i)}{C_2 n}.$$
If $C_1, C_2$ are fixed, then the sharpness-based bound is always tighter than Eq. (1), and the gap is at least $C_1(1-\lambda^2)L_n^2$.
Figure 1: Comparison of bounds on the synthetic dataset. (a) Binary classification ($|\mathcal{Y}| = 2$) with a separable $\mu$ (i.e., the interpolating setting); notice that the variance bound nearly coincides with Err. (b) Binary classification with a non-separable $\mu$. (c) Ten-class classification ($|\mathcal{Y}| = 10$) with a separable $\mu$. (d) Ten-class classification with a non-separable $\mu$. The binary KL bound is removed in (d) since it is always $\ge 1$.
To conclude this section, we give a bound that combines the variance and the sharpness.
Corollary 4.1.
Assume $\ell(\cdot,\cdot)\in\{0,1\}$ and $\gamma,\lambda\in(0,1)$. Then there exist $C_1, C_2 > 0$ such that
$$\mathrm{Err} \le C_1\min\{V(\gamma), F(\lambda)\} + \sum_{i=1}^{n}\frac{I(L_i^+;U_i)}{n C_2}.$$
We remark that if $\mathcal{A}$ satisfies any of the following: (i) $L_n \to 0$; (ii) $V(\gamma) \to 0$ for some $\gamma\in(0,1)$; (iii) $F(\lambda) \to 0$ for some $\lambda\in(0,1)$, then we have
$$\mathrm{Err} \le \sum_{i=1}^{n}\frac{2 I(L_i^+;U_i)}{n\ln 2}.$$
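In practice the quantities $I(L_i^+; U_i)$ in these bounds involve only a discrete loss value and a Bernoulli selector, so they can be estimated with a simple plug-in estimator over repeated draws. A minimal sketch (the sample pairs below are hypothetical, not from the paper's experiments):

```python
from collections import Counter
from math import log

def plugin_mi(pairs):
    """Plug-in estimate (in nats) of I(X; Y) from samples of a discrete pair."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum(
        (c / n) * log((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )

# hypothetical draws of (L_i^+, U_i): the loss depends mildly on the selector
samples = [(0, 0)] * 45 + [(1, 0)] * 5 + [(0, 1)] * 30 + [(1, 1)] * 20
assert plugin_mi(samples) > 0.0                                 # dependence detected
assert abs(plugin_mi([(0, 0), (0, 1), (1, 0), (1, 1)])) < 1e-9  # independence
```

The estimate is a KL divergence between the empirical joint and the product of empirical marginals, hence always non-negative.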
Figure 2: Comparison of bounds on two real datasets, MNIST ("4 vs 9") and CIFAR10. (a) CNN on MNIST. (b) ResNet on CIFAR10. (c) SGLD (MNIST). (d) Zoomed-in view of (c).
5 Numerical Results
In this section, we empirically compare some CMI and MI bounds discussed in our paper. Our first experiment is based on a synthetic Gaussian dataset, on which a simple linear classifier (with a softmax output layer) is trained. The second experiment follows the same deep learning setting as (Harutyunyan et al., 2021; Hellström & Durisi, 2022a), where we train a 4-layer CNN on MNIST (LeCun et al., 2010) and fine-tune a ResNet-50 (He et al., 2016) (pretrained on ImageNet (Deng et al., 2009)) on CIFAR10 (Krizhevsky, 2009). In all of these experiments, we let the loss be the zero-one loss, namely $\ell(w,z) = \mathbb{1}\{f_w(x) \ne y\}$, and we apply empirical risk minimization (ERM) to find the hypothesis, namely $w = \arg\min_{w\in\mathcal{W}} L_S(w)$. Since such a loss is not differentiable, to enable gradient-based optimization methods such as SGD, we use the cross-entropy loss as a surrogate classification loss during training. Notice that Err is still defined with respect to the zero-one loss in our experiments.
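As a minimal illustration of this training/evaluation split (a sketch only, not the paper's actual implementation), the code below trains a one-dimensional logistic model with the cross-entropy surrogate by full-batch gradient descent and then reports the zero-one training error; the data distribution is made up:

```python
import math
import random

random.seed(0)
# hypothetical 1-D binary data: label y in {0, 1}, feature x ~ N(2y - 1, 1)
data = [(random.gauss(2 * y - 1, 1.0), y) for y in [0, 1] * 100]

w, b, lr = 0.0, 0.0, 0.1
for _ in range(200):  # full-batch gradient descent on the surrogate loss
    gw = gb = 0.0
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # sigmoid probability
        gw += (p - y) * x  # gradient of the cross-entropy surrogate
        gb += p - y
    w -= lr * gw / len(data)
    b -= lr * gb / len(data)

# Err in the experiments is still measured with the zero-one loss
zero_one = sum((w * x + b > 0) != (y == 1) for x, y in data) / len(data)
assert 0.0 <= zero_one < 0.5  # well below chance on this easy task
```

The optimizer only ever sees the smooth surrogate; the non-differentiable zero-one loss appears solely at evaluation time.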
Under these settings, we mainly compare the disintegrated ld-CMI bound in the first inequality of Theorem 3.1 (Disint.), the unconditional MI bound in Theorem 3.2 (Uncondi.), the weighted generalization error bound in Eq. (2) of Theorem 4.3 (Weighted), the variance bound in Theorem 4.4 (Variance), and the sharpness bound in Theorem 4.5 (Sharpness). Besides, we include the binary KL bound of Hellström & Durisi (2022a) as a baseline, which is, to the best of our knowledge, the tightest fast-rate CMI bound in the literature when $L_n$ is close (but not equal) to zero. In addition, we note that the difference between the variance bound and the sharpness bound is negligible at the current scale of the figures, so for each figure we only report one of them. The comparison between the variance bound and the sharpness bound, and further comparisons of other bounds mentioned in this paper (such as interpolating bounds and single-loss based square-root bounds), are given in Appendix E.
5.1 Linear Classifier
We first use a simple linear classifier to carry out the Gaussian data classification task (see Appendix E.1 for more details of data generation and training). There are at least two major benefits of using such a synthetic dataset. On the one hand, the ground-truth distribution $\mu$ is known, so we can draw unlimited supersamples, allowing repetition of the experiments so as to obtain an accurate estimate of the desired quantity (e.g., for each $n$, we repeat the experiment 5000 times, each with a random $(\tilde Z, U)$, and report the average). On the other hand, the separability of the different classes is adjustable, allowing for control of the task difficulty. Specifically, we consider both the zero training loss case (i.e., a separable $\mu$) and the high training loss case (i.e., a non-separable $\mu$).
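A sketch of the corresponding sampling step, with a hypothetical `sep` parameter standing in for the class-mean separation knob (the actual data generation procedure is specified in Appendix E.1):

```python
import random

def draw_instance(sep, rng):
    """Draw (x, y) from a two-Gaussian mixture; `sep` (a hypothetical knob)
    controls the class-mean distance and hence the separability of mu."""
    y = rng.randint(0, 1)
    x = rng.gauss((2 * y - 1) * sep / 2, 1.0)
    return x, y

def draw_supersample(n, sep, rng):
    """Draw an n-by-2 supersample Z~ and the selector sequence U."""
    z = [(draw_instance(sep, rng), draw_instance(sep, rng)) for _ in range(n)]
    u = [rng.randint(0, 1) for _ in range(n)]
    return z, u

# since mu is known, (Z~, U) can be redrawn as often as needed, and the
# experiment repeated to estimate the desired expectations accurately
rng = random.Random(0)
runs = [draw_supersample(50, 4.0, rng) for _ in range(100)]
assert len(runs) == 100 and all(len(z) == 50 for z, _ in runs)
```

Large `sep` gives a nearly separable $\mu$ (the interpolating regime), while `sep` near zero yields the high-training-loss regime.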
In the binary classification tasks (i.e., $|\mathcal{Y}| = 2$), the evaluations of Err and the bounds are given in Figure 1(a) and Figure 1(b). When $\mu$ is separable (Figure 1(a)), the algorithm can always interpolate the training sample. In this case, the fast-rate bounds are tighter than the square-root bounds, and the variance bound (or the sharpness bound) is the tightest. Moreover, notice that here the disintegrated CMI bound is tighter than the unconditional MI bound. For a more challenging classification task (Figure 1(b)), $L_n$ is no longer zero, and the square-root bounds become tighter than the binary KL bound. Indeed, Hellström & Durisi (2022a) show that when the empirical risk is large, the square-root bound is tighter than their fast-rate bounds. In contrast, our variance bound is even slightly tighter than the square-root bound of Theorem 3.2 in Figure 1(b). Additionally, notice that unlike the realizable case (the one with separable $\mu$), the unconditional MI bound is now tighter than the disintegrated CMI bound.
We also conduct experiments on the ten-class classification task (i.e., $|\mathcal{Y}| = 10$), and the results are shown in Figure 1(c) and Figure 1(d). In the realizable case (Figure 1(c)), the results are similar to binary classification, except that the binary KL bound is tighter than all the other bounds when $n = 10$, which is the only case where we observe the binary KL bound outperforming Theorem 4.4 and Theorem 4.5. In addition, it is worth mentioning that Eq. (2) decays much faster than the square-root bounds in Figure 1(c) (and also in Figure 1(b)). For the non-separable case in Figure 1(d), only the unconditional MI bound of Theorem 3.2 is non-vacuous for all values of $n$. While the binary KL bound is removed in this case since it is always vacuous, our sharpness bound is competitive with the square-root bound when $n \ge 50$.
5.2 Neural Networks
To compare information-theoretic generalization bounds for modern deep neural networks, we follow the same experiment settings as (Harutyunyan et al., 2021; Hellström & Durisi, 2022a). Specifically, we train a 4-layer CNN model on a binary MNIST dataset ("4 vs 9") and also fine-tune a pretrained ResNet-50 on CIFAR10. Unlike the previous synthetic dataset case, here we can only repeat the experiments (with different $\tilde Z$ and $U$) a limited number of times due to the high computational cost. Thus, we report both the average values and their standard deviations. Notice that our code is largely the same as the code provided by Hellström & Durisi (2022a), which is in turn based on the code at https://github.com/hrayrhar/f-CMI. More experiment details can be found in Appendix E.1.
Observations in the binary MNIST experiment (Figure 2(a)) are close to the realizable binary classification case in Figure 1(a) (both near the interpolating regime). For example, the fast-rate bounds are tighter than the square-root bound. Notably, our sharpness bound (or variance bound) significantly improves upon the binary KL bound in both the MNIST experiment (Figure 2(a)) and the CIFAR10 experiment (Figure 2(b)), while Eq. (2) is slightly weaker than the binary KL bound. Furthermore, we also compare the bounds when the CNN model is trained by an SGLD algorithm (Raginsky et al., 2017), a variant of SGD, on the binary MNIST dataset. In this case, we add the weight-based MI bound for SGLD in Negrea et al. (2019, Eq. 6) as a baseline. Figure 2(c) suggests that both our sharpness bound and Eq. (2) improve upon the binary KL bound. Notably, Harutyunyan et al. (2021) and Hellström & Durisi (2022a) observe that the f-CMI bound and the e-CMI bound are worse than the SGLD bound of Negrea et al. (2019) at the beginning of training. As shown in Figure 2(d), although our sharpness bound is still looser than the SGLD bound of Negrea et al. (2019) before the fifth epoch, it significantly shrinks the gap with the SGLD bound during early training.
6 Limitations and Other Related Literature
Limitations
More recently, the limitations of information-theoretic bounds in explaining the generalization properties of stochastic convex optimization (SCO) problems have been investigated by Haghifam et al. (2023). The authors demonstrate that almost all existing information-theoretic bounds, except for the chained MI/CMI bounds, fail to vanish in at least one of their counterexamples, even though the true generalization error vanishes. Unfortunately, neither our loss-difference MI/CMI bounds nor our single-loss MI bounds can overcome the limitations revealed by the counterexample constructed in (Haghifam et al., 2023, Theorem 17). These limitations shed light on certain inherent properties of mutual information measures, which may not be easily overcome solely by introducing new information measures.
In Hellström & Durisi (2022a), the authors provide an e-CMI generalization bound for a generic convex function of the training loss and test loss. Although our analysis, using either loss-difference CMI/MI bounds or single-loss MI bounds, may not be directly applicable to general convex comparison functions between the training loss and testing loss, one potential alternative is to consider the square of the loss difference, for which similar techniques can be employed to derive generalization bounds.
Furthermore, it is important to note that all our new information-theoretic generalization bounds are derived under the assumption of independent and identically distributed (i.i.d.) training instances. Exploring the possibility of relaxing this assumption represents a promising avenue for future research.
Other Related Work
Information-theoretic generalization bounds have been explored for some specific algorithms. For example, weight-based information-theoretic bounds have been successfully applied to characterize the generalization properties of SGLD (Pensia et al., 2018; Bu et al., 2019; Negrea et al., 2019; Haghifam et al., 2020; Rodríguez-Gálvez et al., 2021; Wang et al., 2021), and more recently, such bounds have also been used to analyze either vanilla SGD (Neu et al., 2021; Wang & Mao, 2022a) or the stochastic differential equation (SDE) approximation of SGD (Wang & Mao, 2022b). Unlike the bounds in our paper and in (Harutyunyan et al., 2021; Hellström & Durisi, 2022a), which treat the learning algorithm as a black box, these weight-based MI or CMI bounds for SGD and its variants are usually further upper-bounded by quantities along the trajectories of the algorithms (e.g., gradient incoherence (Negrea et al., 2019)). This then points to a future direction: can loss-based or prediction-based information-theoretic bounds similarly exploit the trajectory information of gradient-based algorithms?
It is also noteworthy that Haghifam et al. (2022) and Rammal et al. (2022) recently and concurrently proposed a variant of the initial CMI framework (Steinke & Zakynthinou, 2020), the "leave-one-out" (LOO) CMI setting, where the supersample is an $(n+1)$-vector instead of an $n \times 2$ matrix. While our development in this paper is restricted to the latter, it is tempting to compare, or connect, the two.
In addition to the supervised learning setting, information-theoretic bounds have found applicability in various other learning scenarios, showcasing their versatility. These scenarios include meta-learning (Hellström & Durisi, 2022b), semi-supervised learning (He et al., 2022), transfer learning (Wu et al., 2020), domain adaptation (Wang & Mao, 2023), and so on. It is reasonable to expect that our findings can be effectively utilized in these diverse learning settings as well.
7 Concluding Remarks
In this work, we obtain several novel and easy-to-measure information-theoretic generalization bounds. These bounds are shown, either theoretically or empirically, to be tighter than previous results in the same supersample setting of Steinke & Zakynthinou (2020). In our development, we also discuss other viewpoints on generalization in the current supersample construction, including explaining generalization via the rate of reliable communication over a memoryless channel and via correlation with the Rademacher sequence. These new insights may help design new learning algorithms or discover novel algorithm-dependent complexity measures.
Acknowledgements
This work is supported partly by an NSERC Discovery grant and a National Research Council of Canada (NRC) Collaborative R&D grant (AI4D-CORE-07). Ziqiao Wang is also supported in part by the NSERC CREATE program through the Interdisciplinary Math and Artificial Intelligence (INTER-MATH-AI) project. The authors would like to thank the anonymous reviewers for their careful reading and valuable suggestions.
References
Asadi et al. (2018) Asadi, A., Abbe, E., and Verdú, S. Chaining mutual information and tightening generalization bounds. Advances in Neural Information Processing Systems, 31, 2018.
Bartlett & Mendelson (2002) Bartlett, P. L. and Mendelson, S. Rademacher and gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3(Nov):463–482, 2002.
Bu et al. (2019) Bu, Y., Zou, S., and Veeravalli, V. V. Tightening mutual information based bounds on generalization error. In 2019 IEEE International Symposium on Information Theory (ISIT), pp. 587–591. IEEE, 2019.
Catoni (2007) Catoni, O. Pac-bayesian supervised classification: the thermodynamics of statistical learning. Vol. 56. Lecture Notes - Monograph Series. Institute of Mathematical Statistics, 2007.
Clerico et al. (2022) Clerico, E., Shidani, A., Deligiannidis, G., and Doucet, A. Chained generalisation bounds. In Proceedings of Thirty Fifth Conference on Learning Theory, volume 178 of Proceedings of Machine Learning Research, pp. 4212–4257. PMLR, 02–05 Jul 2022.
Cover & Thomas (2006) Cover, T. M. and Thomas, J. A. Elements of Information Theory (Wiley Series in Telecommunications and Signal Processing). Wiley-Interscience, USA, 2006. ISBN 0471241954.
Cédric (2008) Cédric, V. Optimal Transport: Old and New (Grundlehren der mathematischen Wissenschaften, 338). Springer, 2008.
Deng et al. (2009) Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255, 2009.
Goldfeld & Greenewald (2021) Goldfeld, Z. and Greenewald, K. Sliced mutual information: A scalable measure of statistical dependence. In Advances in Neural Information Processing Systems, 2021.
Goldfeld et al. (2022) Goldfeld, Z., Greenewald, K., Nuradha, T., and Reeves, G. $k$-sliced mutual information: A quantitative study of scalability with dimension. In Advances in Neural Information Processing Systems, 2022.
Grunwald et al. (2021) Grunwald, P., Steinke, T., and Zakynthinou, L. Pac-bayes, mac-bayes and conditional mutual information: Fast rate bounds that handle general vc classes. In Conference on Learning Theory. PMLR, 2021.
Hafez-Kolahi et al. (2020) Hafez-Kolahi, H., Golgooni, Z., Kasaei, S., and Soleymani, M. Conditioning and processing: Techniques to improve information-theoretic generalization bounds. Advances in Neural Information Processing Systems, 33:16457–16467, 2020.
Haghifam et al. (2020) Haghifam, M., Negrea, J., Khisti, A., Roy, D. M., and Dziugaite, G. K. Sharpened generalization bounds based on conditional mutual information and an application to noisy, iterative algorithms. Advances in Neural Information Processing Systems, 2020.
Haghifam et al. (2021) Haghifam, M., Dziugaite, G. K., Moran, S., and Roy, D. Towards a unified information-theoretic framework for generalization. Advances in Neural Information Processing Systems, 34:26370–26381, 2021.
Haghifam et al. (2022) Haghifam, M., Moran, S., Roy, D. M., and Dziugaite, G. K. Understanding generalization via leave-one-out conditional mutual information. In 2022 IEEE International Symposium on Information Theory (ISIT), pp. 2487–2492. IEEE, 2022.
Haghifam et al. (2023) Haghifam, M., Rodríguez-Gálvez, B., Thobaben, R., Skoglund, M., Roy, D. M., and Dziugaite, G. K. Limitations of information-theoretic generalization bounds for gradient descent methods in stochastic convex optimization. In International Conference on Algorithmic Learning Theory, pp. 663–706. PMLR, 2023.
Harutyunyan et al. (2021) Harutyunyan, H., Raginsky, M., Steeg, G. V., and Galstyan, A. Information-theoretic generalization bounds for black-box learning algorithms. In Advances in Neural Information Processing Systems, 2021.
He et al. (2022) He, H., Yan, H., and Tan, V. Y. Information-theoretic characterization of the generalization error for iterative semi-supervised learning. Journal of Machine Learning Research, 23:1–52, 2022.
He et al. (2016) He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.
Hellström & Durisi (2020) Hellström, F. and Durisi, G. Generalization bounds via information density and conditional information density. IEEE Journal on Selected Areas in Information Theory, 1(3):824–839, 2020.
Hellström & Durisi (2021) Hellström, F. and Durisi, G. Fast-rate loss bounds via conditional information measures with applications to neural networks. In 2021 IEEE International Symposium on Information Theory (ISIT), pp. 952–957. IEEE, 2021.
Hellström & Durisi (2022a) Hellström, F. and Durisi, G. A new family of generalization bounds using samplewise evaluated CMI. In Advances in Neural Information Processing Systems, 2022a.
Hellström & Durisi (2022b) Hellström, F. and Durisi, G. Evaluated CMI bounds for meta learning: Tightness and expressiveness. In Advances in Neural Information Processing Systems, 2022b.
Hoeffding (1963) Hoeffding, W. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58(301):13–30, 1963.
Jastrzębski et al. (2017) Jastrzębski, S., Kenton, Z., Arpit, D., Ballas, N., Fischer, A., Bengio, Y., and Storkey, A. Three factors influencing minima in sgd. arXiv preprint arXiv:1711.04623, 2017.
Kakade et al. (2008) Kakade, S. M., Sridharan, K., and Tewari, A. On the complexity of linear prediction: Risk bounds, margin bounds, and regularization. Advances in Neural Information Processing Systems, 21, 2008.
Krizhevsky (2009) Krizhevsky, A. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.
LeCun et al. (2010) LeCun, Y., Cortes, C., and Burges, C. Mnist handwritten digit database. ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist, 2, 2010.
Negrea et al. (2019) Negrea, J., Haghifam, M., Dziugaite, G. K., Khisti, A., and Roy, D. M. Information-theoretic generalization bounds for sgld via data-dependent estimates. Advances in Neural Information Processing Systems, 2019.
Neu et al. (2021) Neu, G., Dziugaite, G. K., Haghifam, M., and Roy, D. M. Information-theoretic generalization bounds for stochastic gradient descent. In Conference on Learning Theory. PMLR, 2021.
Pedregosa et al. (2011) Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., and Duchesnay, E. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.
Pensia et al. (2018) Pensia, A., Jog, V., and Loh, P.-L. Generalization error bounds for noisy, iterative algorithms. In 2018 IEEE International Symposium on Information Theory (ISIT). IEEE, 2018.
Polyanskiy & Wu (2019) Polyanskiy, Y. and Wu, Y. Lecture notes on information theory. Lecture Notes for 6.441 (MIT), ECE 563 (UIUC), STAT 364 (Yale), 2019.
Raginsky et al. (2017) Raginsky, M., Rakhlin, A., and Telgarsky, M. Non-convex learning via stochastic gradient langevin dynamics: a nonasymptotic analysis. In Conference on Learning Theory, pp. 1674–1703. PMLR, 2017.
Rammal et al. (2022) Rammal, M. R., Achille, A., Golatkar, A., Diggavi, S., and Soatto, S. On leave-one-out conditional mutual information for generalization. In Advances in Neural Information Processing Systems, 2022.
Rodríguez-Gálvez et al. (2021) Rodríguez-Gálvez, B., Bassi, G., Thobaben, R., and Skoglund, M. On random subset generalization error bounds and the stochastic gradient langevin dynamics algorithm. In 2020 IEEE Information Theory Workshop (ITW), pp. 1–5. IEEE, 2021.
Rodríguez Gálvez et al. (2021) Rodríguez Gálvez, B., Bassi, G., Thobaben, R., and Skoglund, M. Tighter expected generalization error bounds via wasserstein distance. Advances in Neural Information Processing Systems, 34, 2021.
Russo & Zou (2016) Russo, D. and Zou, J. Controlling bias in adaptive data analysis using information theory. In Artificial Intelligence and Statistics. PMLR, 2016.
Russo & Zou (2019) Russo, D. and Zou, J. How much does your data exploration overfit? controlling bias via information usage. IEEE Transactions on Information Theory, 66(1):302–323, 2019.
Seldin et al. (2012) Seldin, Y., Laviolette, F., Cesa-Bianchi, N., Shawe-Taylor, J., and Auer, P. Pac-bayesian inequalities for martingales. IEEE Transactions on Information Theory, 58(12):7086–7093, 2012.
Shannon (1948) Shannon, C. E. A mathematical theory of communication. The Bell System Technical Journal, 27(3):379–423, 1948. doi: 10.1002/j.1538-7305.1948.tb01338.x.
Steinke & Zakynthinou (2020) Steinke, T. and Zakynthinou, L. Reasoning about generalization via conditional mutual information. In Conference on Learning Theory. PMLR, 2020.
Tolstikhin & Seldin (2013) Tolstikhin, I. O. and Seldin, Y. Pac-bayes-empirical-bernstein inequality. Advances in Neural Information Processing Systems, 26, 2013.
Vapnik (1998) Vapnik, V. Statistical learning theory. Wiley, 1998. ISBN 978-0-471-03003-4.
Wang et al. (2021) Wang, H., Huang, Y., Gao, R., and Calmon, F. Analyzing the generalization capability of sgld using properties of gaussian channels. Advances in Neural Information Processing Systems, 34:24222–24234, 2021.
Wang & Mao (2022a) Wang, Z. and Mao, Y. On the generalization of models trained with SGD: Information-theoretic bounds and implications. In International Conference on Learning Representations, 2022a.
Wang & Mao (2022b) Wang, Z. and Mao, Y. Two facets of sde under an information-theoretic lens: Generalization of sgd via training trajectories and via terminal states. arXiv preprint arXiv:2211.10691, 2022b.
Wang & Mao (2023) Wang, Z. and Mao, Y. Information-theoretic analysis of unsupervised domain adaptation. In International Conference on Learning Representations, 2023.
Wu et al. (2020) Wu, X., Manton, J. H., Aickelin, U., and Zhu, J. Information-theoretic analysis for transfer learning. In 2020 IEEE International Symposium on Information Theory (ISIT), pp. 2819–2824. IEEE, 2020.
Xu & Raginsky (2017) Xu, A. and Raginsky, M. Information-theoretic analysis of generalization capability of learning algorithms. Advances in Neural Information Processing Systems, 2017.
Yang et al. (2019) Yang, J., Sun, S., and Roy, D. M. Fast-rate pac-bayes generalization bounds via shifted rademacher processes. Advances in Neural Information Processing Systems, 32, 2019.
Yeung (1991) Yeung, R. W. A new outlook on shannon's information measures. IEEE Transactions on Information Theory, 37(3):466–474, 1991.
Zhang et al. (2017) Zhang, C., Bengio, S., Hardt, M., Recht, B., and Vinyals, O. Understanding deep learning requires rethinking generalization. In International Conference on Learning Representations, 2017.
Zhang et al. (2021) Zhang, C., Bengio, S., Hardt, M., Recht, B., and Vinyals, O. Understanding deep learning (still) requires rethinking generalization. Communications of the ACM, 64(3):107–115, 2021.
Zhivotovskiy & Hanneke (2018) Zhivotovskiy, N. and Hanneke, S. Localization of vc classes: Beyond local rademacher complexities. Theoretical Computer Science, 742:27–49, 2018.
Zhou et al. (2022a) Zhou, R., Tian, C., and Liu, T. Individually conditional individual mutual information bound on generalization error. IEEE Transactions on Information Theory, 68(5):3304–3316, 2022a.
Zhou et al. (2022b) Zhou, R., Tian, C., and Liu, T. Stochastic chaining and strengthened information-theoretic generalization bounds.
In 2022 IEEE International Symposium on Information Theory (ISIT). IEEE, 2022b.
Appendix A Some Useful Definitions and Lemmas
Definition A.1 (Wasserstein Distance).
Let $d(\cdot,\cdot)$ be a metric and let $P$ and $Q$ be probability measures on $\mathcal{X}$. Denote by $\Gamma(P,Q)$ the set of all couplings of $P$ and $Q$ (i.e., the set of all joint distributions on $\mathcal{X}\times\mathcal{X}$ whose marginals are $P$ and $Q$). The Wasserstein distance of order one between $P$ and $Q$ is defined as
$$\mathbb{W}(P,Q) \triangleq \inf_{\gamma\in\Gamma(P,Q)} \int_{\mathcal{X}\times\mathcal{X}} d(x,x')\,\mathrm{d}\gamma(x,x').$$
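For one-dimensional distributions, such as the loss differences $\Delta L_i \in [-1,1]$ appearing later in Theorem 3.5, the order-one Wasserstein distance with $d(x,x') = |x-x'|$ reduces to the integral of the absolute difference of the two CDFs. A small sketch for empirical distributions:

```python
def w1_from_samples(xs, ys):
    """Order-1 Wasserstein distance between two empirical 1-D distributions
    under d(x, x') = |x - x'|, via the CDF-difference formula."""
    pts = sorted(set(xs) | set(ys))
    w1 = 0.0
    for a, b in zip(pts, pts[1:]):
        fx = sum(x <= a for x in xs) / len(xs)  # empirical CDF of xs at a
        fy = sum(y <= a for y in ys) / len(ys)  # empirical CDF of ys at a
        w1 += abs(fx - fy) * (b - a)            # CDFs are constant on [a, b)
    return w1

# e.g. point masses at 0 and 1 are at W1 distance exactly 1
assert w1_from_samples([0.0, 0.0], [1.0, 1.0]) == 1.0
assert w1_from_samples([0.0, 1.0], [0.0, 1.0]) == 0.0
```

This is only an illustrative computation for empirical measures; the definition above covers general metric spaces via couplings.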
Definitions A.2-A.5 are used in the context of the chaining method, e.g., Theorem 3.4, Corollary B.1 and Theorem C.1.
The following is a technical assumption.
Definition A.2 (Separable Process).
The random process $\{X_t\}_{t\in T}$ is called separable if there is a countable set $T_0 \subseteq T$ s.t. $X_t \in \lim_{s\to t,\, s\in T_0} X_s$ for all $t\in T$ a.s., where $x \in \lim_{s\to t,\, s\in T_0} x_s$ means that there is a sequence $(s_n)$ in $T_0$ s.t. $s_n \to t$ and $x_{s_n} \to x$.
Definition A.3 (Sub-Gaussian Process).
The random process $\{X_t\}_{t\in T}$ on the metric space $(T,d)$ is called subgaussian if $\mathbb{E}[X_t] = 0$ for all $t\in T$ and $\mathbb{E}[e^{\lambda(X_t - X_s)}] \le e^{\frac{1}{2}\lambda^2 d^2(t,s)}$ for all $t,s\in T$ and $\lambda \ge 0$.
Definition A.4 (Stochastic Chain (Zhou et al., 2022b)).
Let $(X_{\mathcal{T}}, T)$ be a random process and random variable pair, where $T$ is a random variable on the index set $\mathcal{T}$. A sequence of random variables $\{T_k\}_{k=k_0}^{\infty}$ (with each distributed on $\mathcal{T}$) is called a stochastic chain of the pair $(X_{\mathcal{T}}, T)$ if 1) $\lim_{k\to\infty}\mathbb{E}[X_{T_k}] = \mathbb{E}[X_T]$, 2) $\mathbb{E}[X_{T_{k_0}}] = 0$, and 3) $\{X_t\}_{t\in\mathcal{T}} - T - T_k - T_{k-1}$ is a Markov chain for every $k > k_0$.
Definition A.5 (Increasing Sequence of 𝜖 -Partition).
A partition $\mathcal{P} = \{A_1, A_2, \dots, A_m\}$ of the set $T$ is called an $\epsilon$-partition of the metric space $(T,d)$ if for all $i = 1, 2, \dots, m$, $A_i$ can be contained in a ball of radius $\epsilon$. A sequence of partitions $\{\mathcal{P}_k\}_{k=m}^{\infty}$ of a set $T$ is called an increasing sequence if for any $k \ge m$ and each $A \in \mathcal{P}_{k+1}$, there exists $B \in \mathcal{P}_k$ s.t. $A \subseteq B$.
The following lemmas are the foundations of most of the proofs in this paper.
Lemma A.1 (Donsker-Varadhan (DV) variational representation of KL divergence (Polyanskiy & Wu, 2019, Theorem 3.5)).
Let $Q, P$ be probability measures on $\Theta$. For any bounded measurable function $f:\Theta\to\mathbb{R}$, we have
$$D_{\mathrm{KL}}(Q\,\|\,P) = \sup_{f}\ \mathbb{E}_{\theta\sim Q}[f(\theta)] - \ln\mathbb{E}_{\theta\sim P}[\exp f(\theta)].$$
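The DV representation says that any fixed test function $f$ yields a lower bound on the KL divergence, with the supremum attained at $f = \ln(\mathrm{d}Q/\mathrm{d}P)$. A quick numerical check on hypothetical discrete distributions:

```python
from math import exp, log

def kl(q, p):
    """KL(Q || P) for discrete distributions given as probability lists."""
    return sum(qi * log(qi / pi) for qi, pi in zip(q, p) if qi > 0)

def dv_lower_bound(q, p, f):
    """E_Q[f] - ln E_P[exp f]: lower-bounds KL(Q || P) for any fixed f."""
    eq_f = sum(qi * fi for qi, fi in zip(q, f))
    return eq_f - log(sum(pi * exp(fi) for pi, fi in zip(p, f)))

q, p = [0.7, 0.2, 0.1], [0.3, 0.3, 0.4]
# an arbitrary test function is dominated by the divergence ...
assert dv_lower_bound(q, p, [1.0, -0.5, 0.2]) <= kl(q, p) + 1e-12
# ... and f = ln(dQ/dP) attains the supremum
f_star = [log(qi / pi) for qi, pi in zip(q, p)]
assert abs(dv_lower_bound(q, p, f_star) - kl(q, p)) < 1e-12
```

The proofs below use exactly this mechanism: choosing a particular $f$ turns the MI (a KL divergence) into an upper bound on an expectation of interest.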
Lemma A.2 (Hoeffding’s Lemma (Hoeffding, 1963)).
Let $X \in [a,b]$ be a bounded random variable with mean $\mu$. Then, for all $t\in\mathbb{R}$, we have $\mathbb{E}[e^{tX}] \le e^{t\mu + \frac{t^2(b-a)^2}{8}}$.
Lemma A.3 (Kantorovich-Rubinstein (KR) duality of Wasserstein distance (Cédric, 2008)).
For any two distributions $P$ and $Q$, we have
$$\mathbb{W}(P,Q) = \sup_{f\in 1\text{-}\mathrm{Lip}(d)} \int_{\mathcal{X}} f\,\mathrm{d}P - \int_{\mathcal{X}} f\,\mathrm{d}Q,$$
where the supremum is taken over all $1$-Lipschitz functions with respect to the metric $d$, i.e., $|f(x) - f(x')| \le d(x,x')$ for any $x,x'\in\mathcal{X}$.
The following result is known from previous work (Xu & Raginsky, 2017).
Lemma A.4 (Xu & Raginsky (2017, Lemma 1)).
If $g(X', Y')$ is $\sigma$-subgaussian (a random variable $X$ is $\sigma$-subgaussian if for any $t$, $\ln\mathbb{E}\exp(t(X - \mathbb{E}X)) \le t^2\sigma^2/2$) under $P_{X',Y'} = P_X \otimes P_Y$, then
$$\big|\mathbb{E}_{X,Y}[g(X,Y)] - \mathbb{E}_{X',Y'}[g(X',Y')]\big| \le \sqrt{2\sigma^2 I(X;Y)}.$$
Appendix B Omitted Proofs and Additional Results in Section 3
B.1 Proof of Theorem 3.1
The following proof shows that the proof of the e-CMI bound in Hellström & Durisi (2022a) can be adapted to the loss-difference MI bound, where we simply replace the distribution $P_{L_i^+, L_i^-}$ by $P_{\Delta L_i}$.
Proof.
Let $U_i'$ be an independent copy of $U_i$ s.t. $U_i' \sim \mathrm{Bern}(1/2)$ and $U_i' \perp\!\!\!\perp \Delta L_i \mid \tilde Z$. Recalling Lemma A.1,
$$I(\Delta L_i; U_i \mid \tilde Z = \tilde z) = D_{\mathrm{KL}}\big(P_{\Delta L_i, U_i|\tilde Z=\tilde z}\,\big\|\,P_{\Delta L_i|\tilde Z=\tilde z}\otimes P_{U_i'}\big) \ge \sup_{t\in\mathbb{R}}\ \mathbb{E}_{\Delta L_i, U_i|\tilde z}\big[t\,g(\Delta L_i, U_i, \tilde z)\big] - \ln\mathbb{E}_{\Delta L_i, U_i'|\tilde z}\big[e^{t\,g(\Delta L_i, U_i', \tilde z)}\big].$$
Recall that $w = \mathcal{A}(\tilde z_u)$; we now let
$$g(\Delta l_i, u_i, \tilde z) = (-1)^{u_i}\Delta l_i = (-1)^{u_i}\big(\ell(\mathcal{A}(\tilde z_u), \tilde z_i^-) - \ell(\mathcal{A}(\tilde z_u), \tilde z_i^+)\big).$$
Then,
$$I(\Delta L_i; U_i \mid \tilde Z = \tilde z) \ge \sup_{t}\ \mathbb{E}_{\Delta L_i, U_i|\tilde z}\big[t(-1)^{U_i}\Delta L_i\big] - \ln\mathbb{E}_{\Delta L_i, U_i'|\tilde z}\big[e^{t(-1)^{U_i'}\Delta L_i}\big] = \sup_{t}\ \mathbb{E}_{\Delta L_i, U_i|\tilde z}\big[t(-1)^{U_i}\Delta L_i\big] - \ln\mathbb{E}_{\Delta L_i|\tilde z}\Big[\mathbb{E}_{U_i'}\big[e^{t(-1)^{U_i'}\Delta L_i}\,\big|\,\Delta L_i = \Delta l_i\big]\Big], \quad (7)$$
where the last equality is by the conditional independence.
Since $\mathbb{E}_{U_i'}[t(-1)^{U_i'}\Delta l_i] = 0$ and $(-1)^{U_i'}$ is bounded in $[-1,1]$, by Lemma A.2 we have
$$\mathbb{E}_{U_i'}\big[e^{t(-1)^{U_i'}\Delta l_i}\big] \le e^{\frac{t^2\Delta l_i^2}{2}} \le e^{\frac{t^2}{2}},$$
where the second inequality is by the boundedness of the loss function, i.e., $\Delta l_i \in [-1,1]$.
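This MGF step can be checked directly: for $U_i' \sim \mathrm{Bern}(1/2)$, the left-hand side equals $\cosh(t\,\Delta l_i)$, which Hoeffding's lemma caps by $e^{t^2\Delta l_i^2/2}$, itself at most $e^{t^2/2}$ when $|\Delta l_i| \le 1$:

```python
from math import exp

# E_{U'}[exp(t * (-1)^{U'} * dl)] = cosh(t * dl) for U' ~ Bern(1/2);
# Hoeffding's lemma gives cosh(x) <= exp(x^2 / 2), and |dl| <= 1 gives
# the final exp(t^2 / 2) cap.
for t in (-3.0, -0.7, 0.0, 0.5, 2.0):
    for dl in (-1.0, -0.3, 0.0, 0.4, 1.0):
        mgf = 0.5 * exp(t * dl) + 0.5 * exp(-t * dl)  # = cosh(t * dl)
        assert mgf <= exp(t ** 2 * dl ** 2 / 2) + 1e-12
        assert exp(t ** 2 * dl ** 2 / 2) <= exp(t ** 2 / 2) + 1e-12
```

The grid of `t` and `dl` values is arbitrary; the inequality holds for all real `t` and all `dl` in $[-1,1]$.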
Plugging the inequality above into Eq. (7), we have
$$I(\Delta L_i; U_i \mid \tilde Z = \tilde z) \ge \sup_{t}\ \mathbb{E}_{\Delta L_i, U_i|\tilde z}\big[t(-1)^{U_i}\Delta L_i\big] - \frac{t^2}{2}.$$
Then, considering the cases $t > 0$ and $t < 0$ ($t = 0$ is trivial), by the AM-GM inequality (i.e., the arithmetic mean is greater than or equal to the geometric mean), the following is straightforward:
$$\big|\mathbb{E}_{\Delta L_i, U_i|\tilde z}[(-1)^{U_i}\Delta L_i]\big| \le \sqrt{2 I(\Delta L_i; U_i \mid \tilde Z = \tilde z)}.$$
Notice that
$$|\mathrm{Err}| = \big|\mathbb{E}_{S,W}[L_\mu(W) - L_S(W)]\big| = \big|\mathbb{E}_{\tilde Z, U, W}[L_{\tilde Z\setminus\tilde Z_U}(W) - L_{\tilde Z_U}(W)]\big| \quad (8)$$
$$\le \mathbb{E}_{\tilde Z}\big|\mathbb{E}_{U,W|\tilde Z}[L_{\tilde Z_{\bar U}}(W) - L_{\tilde Z_U}(W)]\big| \quad (9)$$
$$\le \frac{1}{n}\sum_{i=1}^{n}\mathbb{E}_{\tilde Z}\big|\mathbb{E}_{U_i, W|\tilde Z}\big[(-1)^{U_i}(\ell(W,\tilde Z_i^-) - \ell(W,\tilde Z_i^+))\big]\big| = \frac{1}{n}\sum_{i=1}^{n}\mathbb{E}_{\tilde Z}\big|\mathbb{E}_{U_i, \Delta L_i|\tilde Z}\big[(-1)^{U_i}\Delta L_i\big]\big|,$$
where the two inequalities follow by applying Jensen's inequality to the absolute value function.
Hence, putting everything together, we have
$$|\mathrm{Err}| \le \frac{1}{n}\sum_{i=1}^{n}\mathbb{E}_{\tilde Z}\sqrt{2 I^{\tilde Z}(\Delta L_i; U_i)} \le \frac{1}{n}\sum_{i=1}^{n}\sqrt{2 I(\Delta L_i; U_i \mid \tilde Z)},$$
where the second inequality follows by applying Jensen's inequality to the square-root function.
This completes the proof. ∎
B.2 Proof of Theorem 3.2
By revisiting the proof of Theorem 3.1, particularly Eqs. (8)-(9), we notice that if we do not move the expectation over $\tilde Z$ outside of the absolute value, we have a chance to get rid of the expectation over $\tilde Z$ by taking the expectation over $\Delta L_i$.
Proof.
By the definition of the expected generalization error, we have
$|\mathrm{Err}| = \big|\mathbb{E}_{S,W}[L_\mu(W) - L_S(W)]\big| = \Big|\mathbb{E}_{\tilde{Z},U,W}\big[L_{\tilde{Z}\setminus\tilde{Z}_U}(W) - L_{\tilde{Z}_U}(W)\big]\Big| = \Big|\frac{1}{n}\sum_{i=1}^n \mathbb{E}_{\tilde{Z},U_i,W}\big[(-1)^{U_i}\big(\ell(W,\tilde{Z}_i^-) - \ell(W,\tilde{Z}_i^+)\big)\big]\Big| \quad (10)$
$\le \frac{1}{n}\sum_{i=1}^n \big|\mathbb{E}_{\Delta L_i, U_i}[(-1)^{U_i}\Delta L_i]\big|. \quad (11)$
We know that $(-1)^{U_i'}\Delta L_i$ is bounded in $[-1,1]$, so it is a $1$-subgaussian random variable. Then, recalling Lemma A.4 and letting $g(X,Y) = (-1)^{U_i}\Delta L_i$, we have
$\big|\mathbb{E}_{\Delta L_i, U_i}[(-1)^{U_i}\Delta L_i] - \mathbb{E}_{\Delta L_i, U_i'}[(-1)^{U_i'}\Delta L_i]\big| \le \sqrt{2\, I(\Delta L_i; U_i)}.$
Since $\mathbb{E}_{\Delta L_i, U_i'}[(-1)^{U_i'}\Delta L_i] = 0$, plugging the inequality above into Eq. (11), we have
$|\mathrm{Err}| \le \frac{1}{n}\sum_{i=1}^n \big|\mathbb{E}_{\Delta L_i, U_i}[(-1)^{U_i}\Delta L_i]\big| \le \frac{1}{n}\sum_{i=1}^n \sqrt{2\, I(\Delta L_i; U_i)}.$
This concludes the proof. ∎
B.3 Proof of Theorem 3.5 Proof.
Recall Eq. (11); we can also obtain
$|\mathrm{Err}| \le \frac{1}{n}\sum_{i=1}^n \big|\mathbb{E}_{\Delta L_i, U_i}[(-1)^{U_i}\Delta L_i]\big| = \frac{1}{n}\sum_{i=1}^n \big|\mathbb{E}_{\Delta L_i, U_i}[(-1)^{U_i}\Delta L_i] - \mathbb{E}_{\Delta L_i', U_i}[(-1)^{U_i}\Delta L_i']\big|,$
where $\Delta L_i'$ is an independent copy of $\Delta L_i$ (i.e. $\Delta L_i' \sim P_{\Delta L_i}$ and $\Delta L_i' \perp\!\!\!\perp U_i$) and the second equality holds since $\mathbb{E}_{\Delta L_i', U_i}[(-1)^{U_i}\Delta L_i'] = 0$.
Then, by Jensen's inequality, we move the expectation over $U_i$ and the average outside the absolute value:
$|\mathrm{Err}| \le \frac{1}{n}\sum_{i=1}^n \mathbb{E}_{U_i}\Big[\big|\mathbb{E}_{\Delta L_i|U_i}[(-1)^{U_i}\Delta L_i] - \mathbb{E}_{\Delta L_i'}[(-1)^{U_i}\Delta L_i']\big| \,\Big|\, U_i = u_i\Big].$
Notice that for any fixed $u_i$, we have
$\mathbb{E}_{\Delta L_i|U_i=u_i}[(-1)^{u_i}\Delta L_i] = \int_{-1}^{1} (-1)^{u_i}\Delta\ell_i \, dP_{\Delta L_i|U_i=u_i}(\Delta\ell_i), \qquad \mathbb{E}_{\Delta L_i'}[(-1)^{u_i}\Delta L_i'] = \int_{-1}^{1} (-1)^{u_i}\Delta\ell_i' \, dP_{\Delta L_i}(\Delta\ell_i').$
Also, since $f(x) = x$ is a $1$-Lipschitz function, $\big|(-1)^{u_i}\Delta L_i - (-1)^{u_i}\Delta L_i'\big| \le |\Delta L_i - \Delta L_i'|$ holds trivially.
Recalling the KR duality of the Wasserstein distance (Lemma A.3), we have
$|\mathrm{Err}| \le \frac{1}{n}\sum_{i=1}^n \mathbb{E}_{U_i}\Big[\big|\mathbb{E}_{\Delta L_i|U_i}[(-1)^{U_i}\Delta L_i] - \mathbb{E}_{\Delta L_i'}[(-1)^{U_i}\Delta L_i']\big| \,\Big|\, U_i = u_i\Big] \le \frac{1}{n}\sum_{i=1}^n \mathbb{E}_{U_i}\big[\mathbb{W}(P_{\Delta L_i|U_i}, P_{\Delta L_i})\big].$
This concludes the proof. ∎
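The KR-duality step above can be sanity-checked numerically: for a $1$-Lipschitz test function such as $x \mapsto (-1)^{u}x$, the gap of means is dominated by the order-1 Wasserstein distance, which on the real line equals the integrated CDF gap. A minimal sketch with illustrative discrete distributions (not data from the paper):

```python
import numpy as np

# Discrete check of the KR-duality step: for the 1-Lipschitz map
# x -> (-1)^u x, |E_P[f] - E_Q[f]| <= W1(P, Q).  The distributions below
# are an arbitrary illustration, supported on a common grid in [-1, 1].
support = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
P = np.array([0.1, 0.2, 0.4, 0.2, 0.1])   # plays the role of P_{dL | U = u}
Q = np.array([0.4, 0.1, 0.2, 0.1, 0.2])   # plays the role of the marginal P_{dL}

# W1 on the real line via CDFs: integrate |F_P - F_Q| between grid points.
cdf_gap = np.abs(np.cumsum(P) - np.cumsum(Q))[:-1]
w1 = float(np.sum(cdf_gap * np.diff(support)))

for u in (0, 1):
    f = ((-1.0) ** u) * support           # the 1-Lipschitz test function
    assert abs(float(np.dot(f, P) - np.dot(f, Q))) <= w1 + 1e-12
print("W1 =", w1)
```

The same check works for any other 1-Lipschitz function; the Wasserstein bound is tight exactly when the optimal Kantorovich–Rubinstein witness is used.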
B.4 Proof of Theorem 3.3 Proof.
The channel capacity can be obtained from Lemma D.1 by letting $\epsilon_i = 0$ and converting bits to nats (i.e. replacing the base-2 logarithm by the natural logarithm via $\ln x = \ln 2 \cdot \log_2 x$).
Furthermore, the value of $1 - \alpha_i$ is the probability that the interpolating learning algorithm $\mathcal{A}$ makes an error (i.e. $\ell(W, Z_i') = 1$) on a testing instance, or equivalently,
$\mathbb{E}_{W,Z_i'}\big[\mathbb{1}_{f_W(X_i') \ne Y_i'}\big] = \mathbb{E}_{U_i,L_i}\big[L_{i,\bar{U}_i}\big] = \frac{\mathbb{E}_{L_i^-|U_i=0}[L_i^-] + \mathbb{E}_{L_i^+|U_i=1}[L_i^+]}{2} = \frac{P(\Delta L_i = 1 \mid U_i = 0) + P(\Delta L_i = -1 \mid U_i = 1)}{2} = 1 - \alpha_i.$
Thus, combining the equality above with $C = I(U_i; \Delta L_i) = (1-\alpha_i)\cdot\ln 2$, we have
$|\mathrm{Err}| = \mathbb{E}_W[L_\mu(W)] = \frac{1}{n}\sum_{i=1}^n \mathbb{E}_{W,Z_i'}\big[\mathbb{1}_{f_W(X_i') \ne Y_i'}\big] = \frac{1}{n}\sum_{i=1}^n (1-\alpha_i) = \frac{1}{n\ln 2}\sum_{i=1}^n I(U_i; \Delta L_i).$
This completes the proof. ∎
B.5 Proof of Theorem 3.4 Proof.
Let $E^i_{\Delta L_i} = \mathrm{Err}_i(\Delta L_i) = (-1)^{U_i}\Delta L_i$; then for any integers $k_1 > k_0$, we have
$E^i_{\Delta L_i} = E^i_{\Delta L_{i,k_0}} + \sum_{k=k_0+1}^{k_1}\big(E^i_{\Delta L_{i,k}} - E^i_{\Delta L_{i,k-1}}\big) + E^i_{\Delta L_i} - E^i_{\Delta L_{i,k_1}}.$
By the definition of the stochastic chain (Definition A.4), we know that $\mathbb{E}_{\Delta L_{i,k_0}}\big[E^i_{\Delta L_{i,k_0}}\big] = 0$ and $\lim_{k_1\to\infty} E^i_{\Delta L_{i,k_1}} = E^i_{\Delta L_i}$. Therefore, letting $k_1\to\infty$ and taking the expectation over $(U_i, \Delta L_i) \sim P_{U_i,\Delta L_i}$ on both sides of the equation above, we have
$\mathbb{E}_{U_i,\Delta L_i}\big[E^i_{\Delta L_i}\big] = \sum_{k=k_0+1}^{\infty} \mathbb{E}_{U_i,\Delta L_{i,k},\Delta L_{i,k-1}}\big[E^i_{\Delta L_{i,k}} - E^i_{\Delta L_{i,k-1}}\big]. \quad (12)$
Let $U_i'$ be an independent copy of $U_i$ and recall Lemma A.1; we have
$\mathbb{E}_{\Delta L_{i,k},\Delta L_{i,k-1}}\Big[D_{\mathrm{KL}}\big(P_{U_i|\Delta L_{i,k},\Delta L_{i,k-1}} \,\|\, P_{U_i'}\big)\Big]$
$\ge \mathbb{E}_{\Delta L_{i,k},\Delta L_{i,k-1}}\Big[\sup_{t>0}\, t\,\mathbb{E}_{U_i|\Delta L_{i,k},\Delta L_{i,k-1}}\big[E^i_{\Delta L_{i,k}} - E^i_{\Delta L_{i,k-1}}\big] - \ln\mathbb{E}_{U_i'}\big[e^{t(E^i_{\Delta L_{i,k}} - E^i_{\Delta L_{i,k-1}})}\big]\Big]$
$\ge \sup_{t>0}\, t\,\mathbb{E}_{U_i,\Delta L_{i,k},\Delta L_{i,k-1}}\big[E^i_{\Delta L_{i,k}} - E^i_{\Delta L_{i,k-1}}\big] - \mathbb{E}_{\Delta L_{i,k},\Delta L_{i,k-1}}\ln\mathbb{E}_{U_i'}\big[e^{t(-1)^{U_i'}(\Delta L_{i,k} - \Delta L_{i,k-1})}\big],$
where the second inequality is by applying Jensen's inequality to the supremum.
Notice that the LHS above equals $I(\Delta L_{i,k}, \Delta L_{i,k-1}; U_i)$. Since $(-1)^{U_i'}$ is bounded in $[-1,1]$, by Lemma A.2 we have
$I(\Delta L_{i,k}, \Delta L_{i,k-1}; U_i) \ge \sup_{t>0}\, t\,\mathbb{E}_{U_i,\Delta L_{i,k},\Delta L_{i,k-1}}\big[E^i_{\Delta L_{i,k}} - E^i_{\Delta L_{i,k-1}}\big] - \mathbb{E}_{\Delta L_{i,k},\Delta L_{i,k-1}}\ln e^{\frac{t^2(\Delta L_{i,k} - \Delta L_{i,k-1})^2}{2}}.$
Thus, letting $d(\Delta L_{i,k}, \Delta L_{i,k-1}) = |\Delta L_{i,k} - \Delta L_{i,k-1}|$, we have
$\mathbb{E}_{U_i,\Delta L_{i,k},\Delta L_{i,k-1}}\big[E^i_{\Delta L_{i,k}} - E^i_{\Delta L_{i,k-1}}\big] \le \sqrt{2\,\mathbb{E}_{\Delta L_{i,k},\Delta L_{i,k-1}}\big[d^2(\Delta L_{i,k}, \Delta L_{i,k-1})\big]\, I(\Delta L_{i,k}, \Delta L_{i,k-1}; U_i)}.$
Plugging the inequality above into Eq. (12) and averaging over $i$, we have
$\mathrm{Err} = \frac{1}{n}\sum_{i=1}^n \mathbb{E}_{U_i,\Delta L_i}\big[E^i_{\Delta L_i}\big] \le \frac{1}{n}\sum_{i=1}^n \sum_{k=k_0+1}^{\infty} \sqrt{2\,\mathbb{E}_{\Delta L_{i,k},\Delta L_{i,k-1}}\big[d^2(\Delta L_{i,k}, \Delta L_{i,k-1})\big]\, I(\Delta L_{i,k}, \Delta L_{i,k-1}; U_i)}.$
From the third point of Definition A.4, we know that $U_i - \Delta L_i - \Delta L_{i,k} - \Delta L_{i,k-1}$ is a Markov chain, so $I(\Delta L_{i,k}, \Delta L_{i,k-1}; U_i) = I(\Delta L_{i,k}; U_i) + I(\Delta L_{i,k-1}; U_i \mid \Delta L_{i,k}) = I(\Delta L_{i,k}; U_i)$. This gives us the final form of the bound:
$\mathrm{Err} \le \frac{1}{n}\sum_{i=1}^n \sum_{k=k_0+1}^{\infty} \sqrt{2\,\mathbb{E}_{\Delta L_{i,k},\Delta L_{i,k-1}}\big[d^2(\Delta L_{i,k}, \Delta L_{i,k-1})\big]\, I(\Delta L_{i,k}; U_i)}.$
This concludes the proof. ∎
B.6 Additional Result: Chained MI Bound for Bounded Loss Corollary B.1.
Let $2^{-k_0} \ge \mathrm{diam}(\Gamma)$ and let $\{\mathcal{P}_k\}_{k=k_0}^{\infty}$ be an increasing sequence of partitions of $\Gamma$, where for each $k \ge k_0$, $\mathcal{P}_k$ is a $2^{-k}$-partition of $(\Gamma, d)$. Let $\Delta L_{i,k}$ be the center of the covering ball of the cell of $\mathcal{P}_k$ to which $\Delta L_i$ belongs; then
$\mathrm{Err} \le \frac{3}{n}\sum_{i=1}^n \sum_{k=k_0}^{\infty} 2^{-k}\sqrt{2\, I(\Delta L_{i,k}; U_i)}.$
Proof.
By the triangle inequality, we have $d(\Delta L_{i,k}, \Delta L_{i,k-1}) \le d(\Delta L_{i,k}, \Delta L_i) + d(\Delta L_i, \Delta L_{i,k-1})$.
Since $\mathcal{P}_k$ is a $2^{-k}$-partition, $d(\Delta L_{i,k}, \Delta L_i) \le 2^{-k}$, so $d(\Delta L_{i,k}, \Delta L_i) + d(\Delta L_i, \Delta L_{i,k-1}) \le 2^{-k} + 2^{-(k-1)} = 3 \times 2^{-k}$. Plugging this into Theorem 3.4, we have
$\mathrm{Err} \le \frac{1}{n}\sum_{i=1}^n \sum_{k=k_0+1}^{\infty} \sqrt{2\,\mathbb{E}_{\Delta L_{i,k},\Delta L_{i,k-1}}\big[d^2(\Delta L_{i,k}, \Delta L_{i,k-1})\big]\, I(\Delta L_{i,k}; U_i)} \le \frac{1}{n}\sum_{i=1}^n \sum_{k=k_0+1}^{\infty} 3 \times 2^{-k}\sqrt{2\, I(\Delta L_{i,k}; U_i)}.$
This completes the proof. ∎
Appendix C Omitted Proofs and Additional Results in Section 4 C.1 Proof of Lemma 4.1 Proof.
By the definition of Err , we can decompose it into two terms,
$\mathrm{Err} = \frac{1}{n}\sum_{i=1}^n \mathbb{E}_{L_i^+, L_i^-, U_i}\big[L_{i,\bar{U}_i} - L_{i,U_i}\big] = \frac{1}{n}\sum_{i=1}^n \mathbb{E}_{L_i^+, L_i^-, U_i}\big[(-1)^{U_i}(L_i^- - L_i^+)\big] = \frac{1}{n}\sum_{i=1}^n \Big[\mathbb{E}_{L_i^-, U_i}\big[(-1)^{U_i}L_i^-\big] + \mathbb{E}_{L_i^+, U_i}\big[-(-1)^{U_i}L_i^+\big]\Big] = \frac{1}{n}\sum_{i=1}^n \Big[\mathbb{E}_{L_i^-, U_i}\big[(-1)^{U_i}L_i^-\big] + \mathbb{E}_{L_i^+, \bar{U}_i}\big[(-1)^{\bar{U}_i}L_i^+\big]\Big],$
where the last equality is by $-(-1)^{U_i}L_i^+ = (-1)^{\bar{U}_i}L_i^+$. We now show that the following holds:
$\mathbb{E}_{L_i^-, U_i}\big[(-1)^{U_i}L_i^-\big] = \mathbb{E}_{L_i^+, \bar{U}_i}\big[(-1)^{\bar{U}_i}L_i^+\big].$
Recall that $\tilde{Z}$ and $U$ are drawn i.i.d. from $\mu^{2n}$ and the Bernoulli distribution, respectively, and $\tilde{Z} \perp\!\!\!\perp U$. A learning algorithm may in general depend on the order of training instances (i.e. for $i \ne j$, even if two training instances satisfy $z_i = z_j$, $P_{W|z_i}$ may differ from $P_{W|z_j}$), but it is invariant to the row index of the supersample $\tilde{Z}$. The distributions $P_{L_i^-, U_i}$ and $P_{L_i^+, U_i}$ therefore enjoy a symmetry, namely $P_{L_i^-|U_i=1} = P_{L_i^+|U_i=0}$ and $P_{L_i^-|U_i=0} = P_{L_i^+|U_i=1}$. Equivalently, the distribution of the training loss (or testing loss) of the $i$th training (or testing) instance is invariant to $U_i$. Hence $P_{L_i^-} = P_{L_i^+}$; we say $L_i^-$ and $L_i^+$ are identically but not independently distributed. Then,
$\mathbb{E}_{L_i^-|U_i=0}[L_i^-] = \int_0^1 \ell_i^- \, dP_{L_i^-|U_i=0}(\ell_i^-) = \int_0^1 \ell_i^+ \, dP_{L_i^+|U_i=1}(\ell_i^+) = \mathbb{E}_{L_i^+|U_i=1}[L_i^+],$
$\mathbb{E}_{L_i^-|U_i=1}[-L_i^-] = \int_0^1 -\ell_i^- \, dP_{L_i^-|U_i=1}(\ell_i^-) = \int_0^1 -\ell_i^+ \, dP_{L_i^+|U_i=0}(\ell_i^+) = \mathbb{E}_{L_i^+|U_i=0}[-L_i^+].$
These give us
$\mathbb{E}_{L_i^-, U_i}\big[(-1)^{U_i}L_i^-\big] = \frac{\mathbb{E}_{L_i^-|U_i=0}[L_i^-] + \mathbb{E}_{L_i^-|U_i=1}[-L_i^-]}{2} = \frac{\mathbb{E}_{L_i^+|U_i=1}[L_i^+] + \mathbb{E}_{L_i^+|U_i=0}[-L_i^+]}{2} = \mathbb{E}_{L_i^+, \bar{U}_i}\big[(-1)^{\bar{U}_i}L_i^+\big].$
Therefore,
$\mathrm{Err} = \frac{1}{n}\sum_{i=1}^n \Big[\mathbb{E}_{L_i^-, U_i}\big[(-1)^{U_i}L_i^-\big] + \mathbb{E}_{L_i^+, \bar{U}_i}\big[(-1)^{\bar{U}_i}L_i^+\big]\Big] = \frac{2}{n}\sum_{i=1}^n \mathbb{E}_{L_i^-, U_i}\big[(-1)^{U_i}L_i^-\big] = \frac{2}{n}\sum_{i=1}^n \mathbb{E}_{L_i^+, \bar{U}_i}\big[(-1)^{\bar{U}_i}L_i^+\big].$
Notice that both $(-1)^{U_i}$ and $(-1)^{\bar{U}_i}$ are Rademacher variables. This concludes the proof. ∎
C.2 Proof of Theorem 4.1 Proof.
Notice that $2(-1)^{U_i'}L_i^+$ is bounded in $[-2,2]$, so it is a subgaussian random variable with variance proxy $\sigma = 2$. Let the measurable function $g(L_i^+, U_i)$ in Lemma A.4 be $2(-1)^{U_i}L_i^+$; then $g(L_i^+, U_i)$ is $2$-subgaussian under $P_{U_i}P_{L_i^+}$, and we have
$\big|2\mathbb{E}_{L_i^+, U_i}[(-1)^{U_i}L_i^+] - 2\mathbb{E}_{L_i^+, U_i'}[(-1)^{U_i'}L_i^+]\big| \le 2\sqrt{2\, I(L_i^+; U_i)}.$
Since $\mathbb{E}_{L_i^+, U_i'}[(-1)^{U_i'}L_i^+] = \frac{\mathbb{E}_{L_i^+}[L_i^+] - \mathbb{E}_{L_i^+}[L_i^+]}{2} = 0$, then
$\big|2\mathbb{E}_{L_i^+, U_i}[(-1)^{U_i}L_i^+]\big| \le 2\sqrt{2\, I(L_i^+; U_i)}.$
Recall Lemma 4.1:
$\mathrm{Err} = \frac{2}{n}\sum_{i=1}^n \mathbb{E}_{L_i^+, \bar{U}_i}\big[(-1)^{\bar{U}_i}L_i^+\big].$
Notice that $\bar{U}_i$ and $U_i$ are related by a one-to-one mapping, so using either of them gives the same mutual information. Thus, by applying Jensen's inequality to the absolute value, we have
$|\mathrm{Err}| \le \frac{1}{n}\sum_{i=1}^n \big|2\mathbb{E}_{L_i^+, U_i}[(-1)^{U_i}L_i^+]\big| \le \frac{2}{n}\sum_{i=1}^n \sqrt{2\, I(L_i^+; U_i)}.$
In addition, to obtain the second inequality in the theorem, we first invoke the independence between $\tilde{Z}$ and $U_i$: $I(L_i^+; U_i) \le I(L_i^+; U_i) + I(\tilde{Z}; U_i \mid L_i^+) = I(L_i^+, \tilde{Z}; U_i) = I(L_i^+; U_i \mid \tilde{Z})$, and then use the DPI, $I(L_i^+; U_i \mid \tilde{Z}) \le I(f_W(X); U_i \mid \tilde{Z})$; we have
$|\mathrm{Err}| \le \frac{2}{n}\sum_{i=1}^n \sqrt{2\, I(L_i^+; U_i)} \le \frac{2}{n}\sum_{i=1}^n \sqrt{2\, I(f_W(X_i^+); U_i \mid \tilde{Z})}.$
This completes the proof. ∎
C.3 Proof of Theorem 4.2 Proof.
Notice that
$\mathbb{E}_{W,Z_i'}\big[\mathbb{1}_{f_W(X_i') \ne Y_i'}\big] = \mathbb{E}_{U_i, L_i}\big[L_{i,\bar{U}_i}\big] = \frac{\mathbb{E}_{L_i^-|U_i=0}[L_i^-] + \mathbb{E}_{L_i^+|U_i=1}[L_i^+]}{2} = \mathbb{E}_{L_i^+|U_i=1}[L_i^+] = P(L_i^+ = 1 \mid U_i = 1) = 1 - q_i.$
Hence, $L_\mu = \sum_{i=1}^n \frac{1-q_i}{n}$.
For each $i$, we have $I(L_i^+; U_i) = H(L_i^+) - H(L_i^+ \mid U_i) = H\big(\frac{1-q_i}{2}\big) - \frac{1}{2}H(1-q_i) \le H\big(\frac{1-q_i}{2}\big)$. Since the entropy function $H(\cdot)$ is concave, we have $\frac{1}{n}\sum_{i=1}^n I(L_i^+; U_i) \le \frac{1}{n}\sum_{i=1}^n H\big(\frac{1-q_i}{2}\big) \le H\big(\frac{L_\mu}{2}\big)$. ∎
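The identity $I(L_i^+; U_i) = H(\frac{1-q_i}{2}) - \frac{1}{2}H(1-q_i)$ used above can be verified directly from the joint distribution of $(U_i, L_i^+)$ in the interpolating setting, where $L_i^+ = 0$ whenever $U_i = 0$. A small sketch with an illustrative value of $q_i$:

```python
import numpy as np

def h2(p):
    """Binary entropy in bits; H(0) = H(1) = 0."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

q = 0.7                      # illustrative value of q_i, not from the paper
# Interpolating setting: L+ = 0 when U = 0 (training point, zero loss);
# P(L+ = 1 | U = 1) = 1 - q.  Joint distribution over (U, L+):
joint = np.array([[0.5,     0.0],              # row U = 0
                  [0.5 * q, 0.5 * (1 - q)]])   # row U = 1
p_u = joint.sum(axis=1)
p_l = joint.sum(axis=0)

# Mutual information in bits, computed directly from the joint.
mi_bits = sum(joint[u, l] * np.log2(joint[u, l] / (p_u[u] * p_l[l]))
              for u in range(2) for l in range(2) if joint[u, l] > 0)

closed_form = h2((1 - q) / 2) - 0.5 * h2(1 - q)
print(mi_bits, closed_form)
```

The concavity step of the proof corresponds to the further relaxation `mi_bits <= h2((1 - q) / 2)`.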
C.4 Additional Result: Chained MI Bound Based on Single-Loss
When the loss is not discrete, or even unbounded, let $\xi_i(\ell_i^+) \triangleq \varepsilon_i \ell_i^+$ be a random process and let $\mathcal{L}$ be the domain of $L_i^+$. Similar to Theorem 3.4, we can also obtain the corresponding chained bound of Theorem 4.1.
Theorem C.1.
For each $i \in [n]$, assume $\{L_{i,k}^+\}_{k=k_0}^{\infty}$ is a stochastic chain of $\big(\{\xi_i(\ell_i^+)\}_{\ell_i^+ \in \mathcal{L}}, L_i^+\big)$; then
$\mathrm{Err} \le \frac{2}{n}\sum_{i=1}^n \sum_{k=k_0}^{\infty} \sqrt{2\,\mathbb{E}\big[d^2(L_{i,k}^+, L_{i,k-1}^+)\big]\, I(L_{i,k}^+; U_i)},$
where the expectation on the RHS is taken over $(L_{i,k}^+, L_{i,k-1}^+)$.
This theorem can be obtained by following the same development with the proof in Section B.5.
C.5 Proof of Lemma 4.2 Proof.
A key step is the second equality of the following
$\mathrm{Err}_{C_1} = \frac{1}{n}\sum_{i=1}^n \mathbb{E}_{L_i^-, L_i^+, U_i}\big[L_{i,\bar{U}_i} - (1+C_1)L_{i,U_i}\big] = \frac{1}{n}\sum_{i=1}^n \mathbb{E}_{L_i^-, L_i^+, U_i}\Big[\Big(1 + \frac{C_1}{2}\Big)\big(L_{i,\bar{U}_i} - L_{i,U_i}\big) - \frac{C_1}{2}L_{i,\bar{U}_i} - \frac{C_1}{2}L_{i,U_i}\Big] = \frac{2+C_1}{2n}\sum_{i=1}^n \Big[\mathbb{E}_{L_i^-, U_i}\Big[(-1)^{U_i}L_i^- - \frac{C_1}{C_1+2}L_i^-\Big] + \mathbb{E}_{L_i^+, U_i}\Big[-(-1)^{U_i}L_i^+ - \frac{C_1}{C_1+2}L_i^+\Big]\Big].$
Recall that $P_{L_i^-} = P_{L_i^+}$; hence $\mathbb{E}_{L_i^-}\big[\frac{C_1}{C_1+2}L_i^-\big] = \mathbb{E}_{L_i^+}\big[\frac{C_1}{C_1+2}L_i^+\big]$. Also, noting that $\mathbb{E}_{L_i^-|U_i=0}[L_i^-] = \mathbb{E}_{L_i^+|U_i=1}[L_i^+]$ and $\mathbb{E}_{L_i^-|U_i=1}[-L_i^-] = \mathbb{E}_{L_i^+|U_i=0}[-L_i^+]$, we have
$\mathrm{Err}_{C_1} = \frac{2+C_1}{n}\sum_{i=1}^n \mathbb{E}_{L_i^+, \bar{U}_i}\Big[(-1)^{\bar{U}_i}L_i^+ - \frac{C_1}{C_1+2}L_i^+\Big] = \frac{2+C_1}{n}\sum_{i=1}^n \mathbb{E}_{L_i^+, \tilde{\varepsilon}_i}\big[\tilde{\varepsilon}_i L_i^+\big], \quad (13)$
where $\tilde{\varepsilon}_i = \varepsilon_i - \frac{C_1}{C_1+2}$ and $\varepsilon_i \sim \mathrm{Unif}\{-1,+1\}$ is the Rademacher variable. In this case, $\tilde{\varepsilon}_i$ is called the shifted Rademacher variable, and its mean is $-\frac{C_1}{C_1+2}$. ∎
C.6 Proof of Theorem 4.3 Proof.
Since $\tilde{\varepsilon}_i$ is obtained from $U_i$ by a bijection, the two can be interchanged in the mutual information. Recall Lemma A.1 and let the measurable function $g$ be $t(C_1+2)\tilde{\varepsilon}_i L_i^+$; we have
$I(L_i^+; U_i) = I(L_i^+; \tilde{\varepsilon}_i) = D_{\mathrm{KL}}\big(P_{L_i^+, \tilde{\varepsilon}_i} \,\|\, P_{L_i^+}P_{\tilde{\varepsilon}_i'}\big) \quad (14)$
$\ge \sup_{t>0}\, \mathbb{E}_{L_i^+, \tilde{\varepsilon}_i}\big[t(C_1+2)\tilde{\varepsilon}_i L_i^+\big] - \ln\mathbb{E}_{L_i^+, \tilde{\varepsilon}_i'}\big[e^{t(C_1+2)\tilde{\varepsilon}_i' L_i^+}\big]. \quad (15)$
We hope to have
$\mathbb{E}_{L_i^+, \tilde{\varepsilon}_i'}\big[e^{t(C_1+2)\tilde{\varepsilon}_i' L_i^+}\big] \le 1. \quad (16)$
Since $\tilde{\varepsilon}_i'$ is independent of $L_i^+$, and $P\big(\tilde{\varepsilon}_i = \frac{2}{C_1+2}\big) = P\big(\tilde{\varepsilon}_i = -\frac{2(C_1+1)}{C_1+2}\big) = \frac{1}{2}$, then
$\mathbb{E}_{L_i^+, \tilde{\varepsilon}_i'}\big[e^{t(C_1+2)\tilde{\varepsilon}_i' L_i^+}\big] = \mathbb{E}_{L_i^+}\Big[\frac{e^{-2t(C_1+1)L_i^+} + e^{2tL_i^+}}{2}\Big].$
Notice that $e^{-2t(C_1+1)L_i^+} + e^{2tL_i^+}$ is the sum of two convex functions and hence convex, so its maximum over a bounded domain is attained at an endpoint. Recall that $L_i^+ \in [0,1]$; we consider two cases: (i) when $L_i^+ = 0$, we have $e^{-2t(C_1+1)L_i^+} + e^{2tL_i^+} = 2$; (ii) when $L_i^+ = 1$, we need $e^{-2t(C_1+1)} + e^{2t} \le 2$ for Eq. (16) to hold. Note that this inequality implies $t \le \frac{\ln 2}{2}$.
Replacing $t$ by $C_2$ and letting $(C_1, C_2)$ take values in the domain $\{C_1, C_2 \mid C_1, C_2 > 0,\ e^{-2C_2(C_1+1)} + e^{2C_2} \le 2\}$, Eq. (16) holds. Under this condition, rearranging the terms in Eq. (15), we have
$(C_1+2)\,\mathbb{E}_{L_i^+, \tilde{\varepsilon}_i}\big[\tilde{\varepsilon}_i L_i^+\big] \le \frac{I(L_i^+; U_i)}{C_2}.$
Plugging the inequality above into Eq. (13), we have
$\mathrm{Err}_{C_1} = L_\mu - (1+C_1)L_n = \frac{2+C_1}{n}\sum_{i=1}^n \mathbb{E}_{L_i^+, \tilde{\varepsilon}_i}\big[\tilde{\varepsilon}_i L_i^+\big] \le \sum_{i=1}^n \frac{I(L_i^+; U_i)}{C_2 n}.$
Thus, the following inequality can be obtained:
$L_\mu \le \min_{C_1, C_2 > 0,\ e^{2C_2} + e^{-2C_2(C_1+1)} \le 2}\ (1+C_1)L_n + \sum_{i=1}^n \frac{I(L_i^+; U_i)}{C_2 n}. \quad (17)$
We can also optimize the parameters $C_1, C_2$ by relaxing the condition $e^{-2C_2(C_1+1)} + e^{2C_2} \le 2$. Invoking $e^x \ge x+1$ and $e^{-x} \le \frac{1}{1+x}$ for $x > -1$, and $e^x \le \frac{1}{1-x}$ for $x < 1$, it is sufficient to have
$\frac{1}{1 + 2C_2(C_1+1)} + \frac{1}{1 - 2C_2} \le 2, \quad \text{with } 0 < C_2 < \frac{1}{2}. \quad (18)$
Solving Eq. (18) gives $0 < C_2 \le \frac{C_1}{4(C_1+1)}$. Since $\frac{C_1}{4(C_1+1)} \le \frac{1}{4} < \frac{1}{2}$, Eq. (18) holds when $C_2 \in \big(0, \frac{C_1}{4(C_1+1)}\big]$. Notice that $\frac{C_1}{4(C_1+1)}$ is also smaller than $\frac{\ln 2}{2}$.
Therefore, we obtain
$L_\mu \le \min_{C_1, C_2 > 0,\ e^{2C_2} + e^{-2C_2(C_1+1)} \le 2}\ (1+C_1)L_n + \sum_{i=1}^n \frac{I(L_i^+; U_i)}{C_2 n} \le \min_{C_1 > 0,\ 0 < C_2 \le \frac{C_1}{4(C_1+1)}}\ (1+C_1)L_n + \sum_{i=1}^n \frac{I(L_i^+; U_i)}{C_2 n} = L_n + \sum_{i=1}^n \frac{4 I(L_i^+; U_i)}{n} + 4\sqrt{\sum_{i=1}^n \frac{L_n I(L_i^+; U_i)}{n}},$
where the last equality is achieved when $C_1 = 2\sqrt{\sum_{i=1}^n \frac{I(L_i^+; U_i)}{n L_n}}$ and $C_2 = \frac{C_1}{4(C_1+1)}$.
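The closed-form optimization above is easy to check numerically: with $A = \frac{1}{n}\sum_i I(L_i^+;U_i)$ and $C_2 = \frac{C_1}{4(C_1+1)}$, the objective $(1+C_1)L_n + A/C_2$ is minimized at $C_1 = 2\sqrt{A/L_n}$ with value $L_n + 4A + 4\sqrt{A L_n}$. A sketch with illustrative numbers (not from the paper):

```python
import numpy as np

# Check of the closed-form optimum above; L_n and A are illustrative.
L_n, A = 0.2, 0.05

def objective(c1):
    """(1 + C1) L_n + A / C2 with C2 = C1 / (4 (C1 + 1)); works on arrays."""
    c2 = c1 / (4 * (c1 + 1))
    return (1 + c1) * L_n + A / c2

c1_star = 2 * np.sqrt(A / L_n)
closed_min = L_n + 4 * A + 4 * np.sqrt(A * L_n)

grid = np.linspace(1e-3, 50, 20001)
numeric_min = float(objective(grid).min())

# The relaxed constraint e^{2C2} + e^{-2C2(C1+1)} <= 2 also holds at the optimum.
c2_star = c1_star / (4 * (c1_star + 1))
constraint = np.exp(2 * c2_star) + np.exp(-2 * c2_star * (c1_star + 1))
print(numeric_min, closed_min, constraint)
```

The grid minimum agrees with the closed form up to grid resolution, confirming the AM–GM step.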
For the second part of the theorem, if $\mathcal{A}$ is an interpolating algorithm, then $L_n = 0$, in which case we can let $C_1$ be arbitrarily large.
Recall that we require $e^{-2C_2(C_1+1)} + e^{2C_2} \le 2$; this is satisfied by letting $C_2 = \frac{\ln 2}{2}$ and $C_1 \to \infty$. Thus, the interpolating single-loss MI bound is
$L_\mu \le \sum_{i=1}^n \frac{2 I(L_i^+; U_i)}{n \ln 2}.$
This completes the proof. ∎
C.7 Proof of Lemma 4.3 Proof.
By the definition of $\gamma$-variance, and noticing that $L_S(W) = \frac{1}{n}\sum_{i=1}^n \ell(W, Z_i)$, we have
$V(\gamma) = \mathbb{E}_{W,S}\Big[\frac{1}{n}\sum_{i=1}^n \big(\ell(W,Z_i) - (1+\gamma)L_S(W)\big)^2\Big] = \mathbb{E}_{W,S}\Big[\frac{1}{n}\sum_{i=1}^n \big(\ell^2(W,Z_i) - 2(1+\gamma)\ell(W,Z_i)L_S(W) + (1+\gamma)^2 L_S^2(W)\big)\Big] = \mathbb{E}_{W,S}\Big[\frac{1}{n}\sum_{i=1}^n \ell^2(W,Z_i)\Big] - (1-\gamma^2)\,\mathbb{E}_{W,S}\big[L_S^2(W)\big] = L_n - (1-\gamma^2)\,\mathbb{E}_{W,S}\big[L_S^2(W)\big],$
where the last equality is due to the fact that the loss is the zero-one loss (i.e. $\ell^2(\cdot,\cdot) = \ell(\cdot,\cdot)$). ∎
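Since the identity $V(\gamma) = L_n - (1-\gamma^2)\,\mathbb{E}[L_S^2(W)]$ holds pointwise for zero-one losses, it can be confirmed exactly by simulation. A sketch with simulated Bernoulli losses (the joint of $(W, S)$ is an arbitrary illustration, not a real learning algorithm):

```python
import numpy as np

# Simulation check of V(gamma) = L_n - (1 - gamma^2) E[L_S^2] for the
# zero-one loss (so ell^2 = ell).  losses[r, i] is the 0/1 loss of the
# i-th training point in the r-th simulated (W, S) draw.
rng = np.random.default_rng(0)
gamma, n, runs = 0.5, 8, 20000
losses = rng.binomial(1, rng.uniform(0.1, 0.9, size=(runs, n))).astype(float)
L_S = losses.mean(axis=1)                   # empirical risk per draw

V = np.mean(np.mean((losses - (1 + gamma) * L_S[:, None]) ** 2, axis=1))
L_n = losses.mean()
rhs = L_n - (1 - gamma ** 2) * np.mean(L_S ** 2)
print(V, rhs)   # the identity is exact per draw, not just on average
```

Because the identity holds per draw, the two sides agree up to floating-point rounding regardless of the number of simulated runs.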
C.8 Proof of Lemma 4.4 Proof.
By Lemma 4.3 and $\gamma \in (0,1)$, we notice that
$\mathrm{Err} - C_1 V(\gamma) = L_\mu - L_n - C_1 L_n + C_1(1-\gamma^2)\,\mathbb{E}_{W,S}\big[L_S^2(W)\big] \le L_\mu - (1+C_1)L_n + C_1(1-\gamma^2)\,\mathbb{E}_{W,S}\big[L_S(W)\big] \quad (19)$
$= L_\mu - (1 + C_1\gamma^2)L_n, \quad (20)$
where the inequality is because $L_S(W) \in [0,1]$ (i.e. $L_S^2(W) \le L_S(W)$).
Noting that in Eq. (20), $\mathrm{Err} - C_1 V(\gamma)$ is upper bounded by a weighted generalization error, we can directly apply Lemma 4.2 with $C_1\gamma^2$ as the trade-off coefficient (i.e. replacing $C_1$ in Lemma 4.2 by $C_1\gamma^2$), which gives us
$\mathrm{Err} - C_1 V(\gamma) \le \frac{2 + C_1\gamma^2}{n}\sum_{i=1}^n \mathbb{E}_{L_i^+, \tilde{\varepsilon}_i}\big[\tilde{\varepsilon}_i L_i^+\big],$
where $\tilde{\varepsilon}_i = \varepsilon_i - \frac{C_1\gamma^2}{C_1\gamma^2+2}$. ∎
C.9 Proof of Theorem 4.4 Proof.
The RHS of Eq. (20) in the proof of Lemma 4.4 has already been bounded in Theorem 4.3 by regarding $C_1\gamma^2$ as the weight parameter $C_1$ in Theorem 4.3. Then, there exist $C_1, C_2 > 0$ s.t.
$\mathrm{Err} - C_1 V(\gamma) \le L_\mu - (1 + C_1\gamma^2)L_n \le \sum_{i=1}^n \frac{I(L_i^+; U_i)}{n C_2}.$
Furthermore, from the proof of Theorem 4.3, we note that the following is valid:
$\mathrm{Err} \le \min_{C_1, C_2 > 0,\ e^{2C_2} + e^{-2C_2(C_1\gamma^2+1)} \le 2}\ C_1 V(\gamma) + \sum_{i=1}^n \frac{I(L_i^+; U_i)}{n C_2}.$
Notice that the original optimization space of the variance-based bound should be larger than $\{C_1, C_2 \mid C_1, C_2 > 0,\ e^{2C_2} + e^{-2C_2(C_1\gamma^2+1)} \le 2\}$, because in Eq. (19) we upper bound the quantity of primary interest, $\mathrm{Err} - C_1 V(\gamma)$, by $L_\mu - (1 + C_1\gamma^2)L_n$, which restricts the original optimization space.
This completes the proof. ∎
C.10 Proof of Lemma 4.5 Proof.
By the definition of 𝜆 -sharpness, we notice that
$F_i(\lambda) = \mathbb{E}_{W,Z_i}\big[\ell(W,Z_i) - (1+\lambda)\mathbb{E}_{W|Z_i}\ell(W,Z_i)\big]^2 = \mathbb{E}_{Z_i}\Big[\mathbb{E}_{W|Z_i}\big[\ell(W,Z_i)^2\big] - 2(1+\lambda)\mathbb{E}^2_{W|Z_i}\ell(W,Z_i) + (1+\lambda)^2\mathbb{E}^2_{W|Z_i}\ell(W,Z_i)\Big] = \mathbb{E}_{W,Z_i}\big[\ell(W,Z_i)\big] - (1-\lambda^2)\,\mathbb{E}_{Z_i}\big[\mathbb{E}^2_{W|Z_i}\ell(W,Z_i)\big],$
where the last equality is due to the fact that the loss is the zero-one loss. ∎
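The $\lambda$-sharpness identity can be checked by computing both sides in closed form over a small discrete $Z_i$; the distribution and conditional means below are illustrative:

```python
import numpy as np

# Check of F_i(lambda) = E[ell] - (1 - lambda^2) E_Z[(E_{W|Z} ell)^2]
# for the zero-one loss.  Z_i ranges over three discrete values;
# m[z] = E_{W|Z_i=z} ell(W, Z_i) is an arbitrary illustration.
lam = 0.3
p_z = np.array([0.2, 0.5, 0.3])            # distribution of Z_i
m = np.array([0.1, 0.6, 0.9])              # conditional mean loss given Z_i = z

# LHS: E_{W,Z}[(ell - (1+lam) m(Z))^2] with ell | Z ~ Bernoulli(m(Z)):
# E[(ell - c)^2 | Z] = m (1 - c)^2 + (1 - m) c^2 with c = (1 + lam) m.
c = (1 + lam) * m
lhs = float(np.sum(p_z * (m * (1 - c) ** 2 + (1 - m) * c ** 2)))

rhs = float(np.sum(p_z * m) - (1 - lam ** 2) * np.sum(p_z * m ** 2))
print(lhs, rhs)
```

Both sides coincide exactly because $\ell^2 = \ell$ for zero-one losses, mirroring the last step of the proof.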
C.11 Proof of Lemma 4.6 Proof.
By Lemma 4.5, we have
$\mathrm{Err} - \frac{C_1}{n}\sum_{i=1}^n F_i(\lambda) = L_\mu - (1+C_1)L_n + (1-\lambda^2)\frac{C_1}{n}\sum_{i=1}^n \mathbb{E}_{Z_i}\big[\mathbb{E}^2_{W|Z_i}\ell(W,Z_i)\big] = \frac{1}{n}\sum_{i=1}^n \Big[\mathbb{E}_{U_i, L_i}\big[L_{i,\bar{U}_i} - (1+C_1)L_{i,U_i}\big] + (1-\lambda^2)C_1\,\mathbb{E}_{\tilde{Z}_i, U_i}\big[\mathbb{E}^2_{L_{i,U_i}|\tilde{Z}_i, U_i}L_{i,U_i}\big]\Big].$
Let $\Lambda(\tilde{Z}_i, U_i) = \mathbb{E}^2_{L_{i,U_i}|\tilde{Z}_i, U_i}L_{i,U_i}$ and $\Lambda(\tilde{Z}_i, \bar{U}_i) = \mathbb{E}^2_{L_{i,\bar{U}_i}|\tilde{Z}_i, \bar{U}_i}L_{i,\bar{U}_i}$. Let $\Lambda(\tilde{Z}_i^+) = \mathbb{E}^2_{L_i^+|\tilde{Z}_i^+, U_i}L_i^+$ and $\Lambda(\tilde{Z}_i^-) = \mathbb{E}^2_{L_i^-|\tilde{Z}_i^-, U_i}L_i^-$. A key observation is the following:
$\mathbb{E}_{U_i, L_i}\big[L_{i,\bar{U}_i} - (1+C_1)L_{i,U_i}\big] + (1-\lambda^2)C_1\,\mathbb{E}_{\tilde{Z}_i, U_i}\big[\Lambda(\tilde{Z}_i, U_i)\big]$
$= \Big(1 + \frac{C_1}{2}\Big)\mathbb{E}_{U_i, L_i}\big[L_{i,\bar{U}_i} - L_{i,U_i}\big] - \frac{C_1}{2}(1-\lambda^2)\,\mathbb{E}_{\tilde{Z}_i, U_i}\big[\Lambda(\tilde{Z}_i, \bar{U}_i) - \Lambda(\tilde{Z}_i, U_i)\big] - \frac{C_1}{2}\Big[\mathbb{E}_{U_i, L_i}\big[L_{i,\bar{U}_i}\big] - (1-\lambda^2)\,\mathbb{E}_{\tilde{Z}_i, U_i}\big[\Lambda(\tilde{Z}_i, \bar{U}_i)\big]\Big] - \frac{C_1}{2}\Big[\mathbb{E}_{U_i, L_i}\big[L_{i,U_i}\big] - (1-\lambda^2)\,\mathbb{E}_{\tilde{Z}_i, U_i}\big[\Lambda(\tilde{Z}_i, U_i)\big]\Big]$
$= \mathbb{E}_{L_i^-, U_i}\Big[(-1)^{U_i}\frac{C_1+2}{2}L_i^- - (-1)^{U_i}\frac{C_1(1-\lambda^2)}{2}\mathbb{E}_{\tilde{Z}_i^-|U_i}\big[\Lambda(\tilde{Z}_i^-)\big] - \frac{C_1}{2}L_i^- + \frac{C_1(1-\lambda^2)}{2}\mathbb{E}_{\tilde{Z}_i^-|U_i}\big[\Lambda(\tilde{Z}_i^-)\big]\Big] + \mathbb{E}_{L_i^+, U_i}\Big[-(-1)^{U_i}\frac{C_1+2}{2}L_i^+ + (-1)^{U_i}\frac{C_1(1-\lambda^2)}{2}\mathbb{E}_{\tilde{Z}_i^+|U_i}\big[\Lambda(\tilde{Z}_i^+)\big] - \frac{C_1}{2}L_i^+ + \frac{C_1(1-\lambda^2)}{2}\mathbb{E}_{\tilde{Z}_i^+|U_i}\big[\Lambda(\tilde{Z}_i^+)\big]\Big]$
$= (C_1+2)\,\mathbb{E}_{L_i^+, U_i}\Big[\Big(\varepsilon_i - \frac{C_1}{C_1+2}\Big)L_i^+ - \frac{C_1(1-\lambda^2)}{C_1+2}(\varepsilon_i - 1)\,\mathbb{E}_{\tilde{Z}_i^+|U_i}\big[\Lambda(\tilde{Z}_i^+)\big]\Big],$
where $\varepsilon_i$ is the Rademacher variable.
Thus,
$\mathrm{Err} - \frac{C_1}{n}\sum_{i=1}^n F_i(\lambda) = \frac{C_1+2}{n}\sum_{i=1}^n \mathbb{E}_{L_i^+, U_i}\Big[\Big(\varepsilon_i - \frac{C_1}{C_1+2}\Big)L_i^+ - \frac{C_1(1-\lambda^2)}{C_1+2}(\varepsilon_i - 1)\,\mathbb{E}_{\tilde{Z}_i^+|U_i}\big[\mathbb{E}^2_{L_i^+|\tilde{Z}_i^+, U_i}L_i^+\big]\Big]. \quad (21)$
This completes the proof. ∎
C.12 Proof of Theorem 4.5 Proof.
Recall Eq. (21):
$\mathrm{Err} - \frac{C_1}{n}\sum_{i=1}^n F_i(\lambda) = \frac{C_1+2}{n}\sum_{i=1}^n \mathbb{E}_{L_i^+, U_i}\Big[\Big(\varepsilon_i - \frac{C_1}{C_1+2}\Big)L_i^+ - \frac{C_1(1-\lambda^2)}{C_1+2}(\varepsilon_i - 1)\,\mathbb{E}_{\tilde{Z}_i^+|U_i}\big[\mathbb{E}^2_{L_i^+|\tilde{Z}_i^+, U_i}L_i^+\big]\Big].$
Notice that we cannot directly apply Lemma A.1 from here, since the RHS contains a quadratic term, namely $\mathbb{E}_{\tilde{Z}_i^+|U_i}\big[\mathbb{E}^2_{L_i^+|\tilde{Z}_i^+, U_i}L_i^+\big]$.
Inspired by Yang et al. (2019), we now assume that there exists a random variable $R_i$ s.t.
$(C_1+2)\,\mathbb{E}_{L_i^+, U_i}\Big[\Big(\varepsilon_i - \frac{C_1}{C_1+2}\Big)L_i^+ - \frac{C_1(1-\lambda^2)}{C_1+2}(\varepsilon_i - 1)\,\mathbb{E}_{\tilde{Z}_i^+|U_i}\big[\mathbb{E}^2_{L_i^+|\tilde{Z}_i^+, U_i}L_i^+\big]\Big]$
$\le (C_1+2)\,\mathbb{E}_{L_i^+, U_i}\Big[\Big(\varepsilon_i - \frac{C_1}{C_1+2}\Big)L_i^+ - \frac{C_1(1-\lambda^2)}{C_1+2}(\varepsilon_i - 1)\,\mathbb{E}_{\tilde{Z}_i^+|U_i}\big[R_i\,\mathbb{E}_{L_i^+|\tilde{Z}_i^+, U_i}L_i^+\big]\Big]$
$= (C_1+2)\,\mathbb{E}_{L_i^+, U_i}\Big[\Big(\varepsilon_i - \frac{C_1}{C_1+2}\Big)L_i^+ - \frac{C_1(1-\lambda^2)}{C_1+2}(\varepsilon_i - 1)\,\mathbb{E}_{L_i^+|U_i}\big[R_i L_i^+\big]\Big]$
$= (C_1+2)\,\mathbb{E}_{L_i^+, U_i}\Big[\Big(\Big(\varepsilon_i - \frac{C_1}{C_1+2}\Big) - \frac{C_1(1-\lambda^2)}{C_1+2}(\varepsilon_i - 1)R_i\Big)L_i^+\Big]. \quad (22)$
Such an $R_i$ should satisfy $R_i \ge \sup_{\tilde{z}_i^+}\mathbb{E}_{L_i^+|\tilde{Z}_i^+ = \tilde{z}_i^+, U_i = u_i}L_i^+$ for any fixed $u_i$, and the randomness of $R_i$ is controlled by $U_i$, i.e. $R_i$ is a function of $U_i$. A simple choice is to let $R_i = 1$ (so $R_i$ always exists); another choice could be letting $R_i = \mathbb{E}_{L_i^{+'}|U_i \sim Q_i}[L_i^{+'}]$ whenever it satisfies the condition, where $Q_i$ is some distribution of $L_i^+$, and $L_i^{+'}$ is independent of $L_i^+$ and $\tilde{Z}_i^+$ given $U_i$.
Recall that the shifted Rademacher variable $\tilde{\varepsilon}_i = \varepsilon_i - \frac{C_1}{C_1+2}$, and let another shifted Rademacher variable $\hat{\varepsilon}_i = \varepsilon_i - 1$. Then we are ready to invoke Lemma A.1:
$I(L_i^+; U_i) \ge \sup_{t>0}\, t\,\mathbb{E}_{L_i^+, U_i}\Big[\big((C_1+2)\tilde{\varepsilon}_i - C_1(1-\lambda^2)\hat{\varepsilon}_i R_i\big)L_i^+\Big] - \ln\mathbb{E}_{L_i^+, U_i'}\Big[e^{t((C_1+2)\tilde{\varepsilon}_i' - C_1(1-\lambda^2)\hat{\varepsilon}_i' R_i')L_i^+}\Big]. \quad (23)$
Similar to the proof of Theorem 4.3, we hope the following holds:
$\mathbb{E}_{L_i^+}\mathbb{E}_{U_i'}\Big[e^{t((C_1+2)\tilde{\varepsilon}_i' - C_1(1-\lambda^2)\hat{\varepsilon}_i' R_i')L_i^+}\Big] \le 1. \quad (24)$
By the independence, we have
$\mathbb{E}_{L_i^+}\mathbb{E}_{U_i'}\Big[e^{t((C_1+2)\tilde{\varepsilon}_i' - C_1(1-\lambda^2)\hat{\varepsilon}_i' R_i')L_i^+}\Big] = \mathbb{E}_{L_i^+}\Big[\frac{e^{2t(C_1(1-\lambda^2)\tilde{r}_i' - C_1 - 1)L_i^+} + e^{2tL_i^+}}{2}\Big],$
where $\tilde{r}_i' \in [0,1]$ is the value (or the realization) of $R_i'$ when $U_i' = 0$ (or $\varepsilon_i' = -(-1)^{U_i'} = -1$), e.g., $\tilde{r}_i' = \mathbb{E}_{L_i^{+'}|U_i'=0}[L_i^{+'}]$.
Since $L_i^+$ can only be $0$ or $1$, we consider the two cases.
(i) When $L_i^+ = 0$, we have $\frac{e^{2t(C_1(1-\lambda^2)\tilde{r}_i' - C_1 - 1)L_i^+} + e^{2tL_i^+}}{2} = 1$; hence the value of $R_i'$ has no effect on the moment generating function.
(ii) When $L_i^+ = 1$, we have the formula $\frac{e^{2t(C_1(1-\lambda^2)\tilde{r}_i' - C_1 - 1)} + e^{2t}}{2}$. Notably, only when $\varepsilon_i' = -1$ (or $U_i' = 0$) and $L_i^+ = 1$ does the value of $R_i'$, viz. $\tilde{r}_i'$, affect the moment generating function. Since $\tilde{r}_i' \in [0,1]$, it is sufficient to upper bound $R_i'$ by the random variable $\frac{1 - \varepsilon_i'}{2} = -\frac{\hat{\varepsilon}_i'}{2}$. Thus,
$\mathbb{E}_{L_i^+}\mathbb{E}_{U_i'}\Big[e^{t((C_1+2)\tilde{\varepsilon}_i' - C_1(1-\lambda^2)\hat{\varepsilon}_i' R_i')L_i^+}\Big] \le \mathbb{E}_{L_i^+}\mathbb{E}_{U_i'}\Big[e^{t((C_1+2)\tilde{\varepsilon}_i' + \frac{C_1}{2}(1-\lambda^2)\hat{\varepsilon}_i'^2)L_i^+}\Big]. \quad (25)$
By the moment generating function of the Bernoulli random variable $L_i^+$, we have
$\mathbb{E}_{U_i'}\mathbb{E}_{L_i^+}\Big[e^{t((C_1+2)\tilde{\varepsilon}_i' + \frac{C_1}{2}(1-\lambda^2)\hat{\varepsilon}_i'^2)L_i^+}\Big] = \mathbb{E}_{U_i'}\Big[1 - \mathbb{E}_{L_i^+}[L_i^+] + \mathbb{E}_{L_i^+}[L_i^+]\,e^{t((C_1+2)\tilde{\varepsilon}_i' + \frac{C_1}{2}(1-\lambda^2)\hat{\varepsilon}_i'^2)}\Big] = 1 - \mathbb{E}_{L_i^+}[L_i^+] + \mathbb{E}_{L_i^+}[L_i^+]\,\frac{e^{-2t(C_1\lambda^2+1)} + e^{2t}}{2}.$
Since $0 \le \mathbb{E}_{L_i^+}[L_i^+] \le 1$, we only need to require that
$\frac{e^{-2t(C_1\lambda^2+1)} + e^{2t}}{2} \le 1.$
Replacing $t$ by $C_2$ and putting everything together (Eqs. (21)–(24)), we have
$\mathrm{Err} \le \frac{C_1}{n}\sum_{i=1}^n F_i(\lambda) + \sum_{i=1}^n \frac{I(L_i^+; U_i)}{n C_2}.$
This completes the proof. ∎
C.13 Proof of Corollary 4.1 Proof.
According to the proofs in Section C.9 and Section C.12, we know that the sufficient conditions for Eq. (5) and Eq. (6) to hold are $e^{2C_2} + e^{-2C_2(C_1\gamma^2+1)} \le 2$ and $e^{2C_2} + e^{-2C_2(C_1\lambda^2+1)} \le 2$, respectively. Given that both $\gamma, \lambda \in (0,1)$, there must exist $C_1, C_2$ such that both Eq. (5) and Eq. (6) hold. Taking the minimum of these two inequalities then gives the desired result. ∎
Additionally, in any of the following cases: (i) $L_n \to 0$; (ii) $V(\gamma) \to 0$ for some $\gamma \in (0,1)$; (iii) $F(\lambda) \to 0$ for some $\lambda \in (0,1)$, we can let $C_1 \to \infty$ and $C_2 = \frac{\ln 2}{2}$; then $\mathrm{Err} \le \sum_{i=1}^n \frac{2 I(L_i^+; U_i)}{n \ln 2}$. This justifies the remark after Corollary 4.1.
Appendix D Some Background on Channel Capacity of Binary Channel
In this section, we follow the notational conventions of Cover & Thomas (2006), where the logarithm has base $2$ (i.e. $\log_2$). In addition, for a binary random variable, the entropy function $H(\cdot)$ reduces to the binary entropy function: if the random variable $X$ takes the values $0$ and $1$ with $P(X=1) = p$ and $P(X=0) = 1-p$, then $H(X) = H(p) = -p\log_2 p - (1-p)\log_2(1-p)$. The channel capacity of a channel with input $X$ and output $Y$ is defined as $C \triangleq \max_{P_X} I(X;Y)$.
D.1 Channel Capacity of Binary Symmetric Channel (BSC)
In a general case, the channel capacity of Figure 1(left) can be computed as in the following lemma.
Lemma D.1.
When $X \sim P_{U_i}$, the channel capacity of the channel in Figure 1(left) is achieved, and $C = (1-\alpha)\big(1 - H\big(\frac{1-\alpha-\epsilon}{1-\alpha}, \frac{\epsilon}{1-\alpha}\big)\big)$.
We note that this lemma is an exercise problem in Cover & Thomas (2006, Problem 7.13).
Proof.
Let $P(X=0) = \pi$ and $P(X=1) = 1-\pi$; then $I(X;Y) = H(Y) - H(Y|X) = H(Y) - H(1-\epsilon-\alpha, \alpha, \epsilon)$. It is easy to see that $H(Y) = H\big(\pi(1-\alpha-\epsilon) + \epsilon(1-\pi),\ \alpha,\ \pi\epsilon + (1-\pi)(1-\epsilon-\alpha)\big)$. Let $A = \pi(1-\alpha-\epsilon) + \epsilon(1-\pi)$ and $B = \pi\epsilon + (1-\pi)(1-\epsilon-\alpha)$. Notice that $A + B = 1-\alpha$; then
$H(Y) = -\Big[(A+B)\log_2(1-\alpha) + \alpha\log_2\alpha + A\log_2\frac{A}{1-\alpha} + B\log_2\frac{B}{1-\alpha}\Big] = H(\alpha) + (1-\alpha)H\Big(\frac{A}{1-\alpha}, \frac{B}{1-\alpha}\Big) \le H(\alpha) + 1 - \alpha.$
To achieve the channel capacity (i.e. for the inequality above to hold with equality), we need $A = B$, which indicates that $\pi = \frac{1}{2}$.
Thus,
$C = H(\alpha) + 1 - \alpha - H(1-\epsilon-\alpha, \alpha, \epsilon) = H(\alpha) + 1 - \alpha - \Big(H(\alpha) + (1-\alpha)H\Big(\frac{1-\epsilon-\alpha}{1-\alpha}, \frac{\epsilon}{1-\alpha}\Big)\Big) = (1-\alpha)\Big(1 - H\Big(\frac{1-\alpha-\epsilon}{1-\alpha}, \frac{\epsilon}{1-\alpha}\Big)\Big),$
which completes the proof. ∎
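Lemma D.1 can be verified numerically by maximizing $I(X;Y)$ over the input distribution on a grid and comparing with the closed form; the channel parameters below are illustrative:

```python
import numpy as np

def h(probs):
    """Entropy in bits of a discrete distribution given as a vector."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Numeric check of Lemma D.1: the channel output has P(erasure) = alpha and
# crossover epsilon; compare grid maximization of I(X;Y) over pi = P(X=0)
# with (1 - alpha)(1 - H((1-alpha-eps)/(1-alpha), eps/(1-alpha))).
alpha, eps = 0.2, 0.05                     # illustrative channel parameters

def mi(pi):
    A = pi * (1 - alpha - eps) + eps * (1 - pi)
    B = pi * eps + (1 - pi) * (1 - eps - alpha)
    return h([A, alpha, B]) - h([1 - eps - alpha, alpha, eps])

grid = np.linspace(1e-4, 1 - 1e-4, 4001)
cap_numeric = max(mi(pi) for pi in grid)
cap_closed = (1 - alpha) * (1 - h([(1 - alpha - eps) / (1 - alpha),
                                   eps / (1 - alpha)]))
print(cap_numeric, cap_closed)
```

As the proof shows, the capacity is achieved at the uniform input `pi = 0.5`, where `mi(0.5)` matches the closed form exactly.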
D.2 Channel Capacity of Binary Asymmetric Channel (BAC)
The channel capacity of the BAC channel in Figure 1(right) is given below.
Lemma D.2.
The channel capacity of the BAC in Figure 1(right) is $C = \log_2(1+\beta) - \frac{1-q}{1-p-q}H(p) + \frac{p}{1-p-q}H(q)$, where $\beta = 2^{\frac{H(p)-H(q)}{1-p-q}}$, and the capacity is achieved when $P(U_i = 1) = \frac{1 - q(1+\beta)}{(1-p-q)(1+\beta)}$. Further, if $p = 0$ (i.e. the Z-channel), $C = \log_2\big(1 + 2^{-\frac{H(q)}{1-q}}\big)$, and for small $q$ the capacity can be approximated by $C \approx 1 - \frac{1}{2}H(q)$.
Remark D.1.
Notice that in this case, letting $X \sim \mathrm{Bern}(1/2)$ (i.e. $X = U_i$) does not achieve the channel capacity. Thus, in the interpolating setting, besides Theorem 4.2, we have another upper bound for $I(L_i^+; U_i)$, namely $\frac{1}{n}\sum_{i=1}^n I(L_i^+; U_i) \le \frac{1}{n}\sum_{i=1}^n \ln\big(1 + 2^{-\frac{H(1-q_i)}{1-q_i}}\big)$. If we further let $q_i$ be the same for each $i$ (which indeed should be true), then $I(L_i^+; U_i) \le \ln\big(1 + 2^{-\frac{H(L_\mu)}{L_\mu}}\big)$.
Proof.
Let $P(X=1) = \pi$; then $I(X;Y) = H\big(\pi(1-p-q) + q\big) - \pi\big(H(p) - H(q)\big) - H(q)$. Setting $\frac{dI(X;Y)}{d\pi} = (1-p-q)\log_2\big(\frac{1}{\pi(1-p-q)+q} - 1\big) - H(p) + H(q) = 0$, we obtain the optimal $\pi^* = \frac{1 - q(1+\beta)}{(1-p-q)(1+\beta)}$, where $\beta = 2^{\frac{H(p)-H(q)}{1-p-q}}$. Plugging $\pi = \pi^*$ into the formula of $I(X;Y)$, we have $I(X;Y) = \log_2(1+\beta) - \frac{1-q}{1-p-q}H(p) + \frac{p}{1-p-q}H(q)$. ∎
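Lemma D.2 admits the same numerical check: the closed-form $\pi^*$ and capacity should agree with a grid maximization of $I(X;Y)$. A sketch with illustrative $(p, q)$:

```python
import numpy as np

def h2(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

# Numeric check of Lemma D.2 for the binary asymmetric channel with
# crossover probabilities p (from input 1) and q (from input 0); the
# values are illustrative.
p, q = 0.1, 0.2

def mi(pi):   # pi = P(X = 1)
    return h2(pi * (1 - p - q) + q) - pi * (h2(p) - h2(q)) - h2(q)

beta = 2 ** ((h2(p) - h2(q)) / (1 - p - q))
pi_star = (1 - q * (1 + beta)) / ((1 - p - q) * (1 + beta))
cap_closed = (np.log2(1 + beta) - (1 - q) / (1 - p - q) * h2(p)
              + p / (1 - p - q) * h2(q))

grid = np.linspace(1e-4, 1 - 1e-4, 4001)
cap_numeric = max(mi(pi) for pi in grid)
print(pi_star, cap_numeric, cap_closed)
```

Note that `pi_star` differs from $1/2$, which is exactly the point made in Remark D.1.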
Appendix E Experimental Details and Additional Results E.1 Experiment Setup
In our linear classifier experiment, we generate synthetic Gaussian data using the widely used Python package scikit-learn (Pedregosa et al., 2011). We draw each dimension (or feature) of $X$ independently from a Gaussian distribution and let all features be informative for the class label $Y$. Specifically, we set the data dimension of $X$ to $5$ and create the classes as points normally distributed (with standard deviation $1$) about the vertices of a $5$-dimensional hypercube, whose side length can be manually controlled. In addition, we use full-batch gradient descent with a fixed learning rate of $0.01$ to train the linear classifier. We train for a total of $500$ epochs and employ early stopping when the training error reaches a sufficiently low threshold (e.g., $< 0.5\%$). To ensure robustness and statistical significance, we draw $50$ different supersamples for each experiment. Within each supersample, we further generate $100$ different mask random variables, resulting in a total of $5{,}000$ runs for each experimental setting. This setup enables us to compare both the unconditional MI bounds and the disintegrated MI bounds. Additionally, if the unconditional MI bound is the sole evaluated objective, one has the option to completely restart the training process $5{,}000$ times.
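The synthetic-data setup described above matches scikit-learn's `make_classification`, which places normally distributed class clusters (unit standard deviation) on the vertices of a hypercube whose side length is controlled by `class_sep`. The parameter values below are a plausible sketch, not the paper's exact configuration:

```python
import numpy as np
from sklearn.datasets import make_classification

# Sketch of the synthetic Gaussian data generation described above
# (parameter values are illustrative): 5 informative features, classes
# centered on the vertices of a 5-dimensional hypercube.
X, y = make_classification(
    n_samples=2000,
    n_features=5,
    n_informative=5,        # all features informative for the label
    n_redundant=0,
    n_repeated=0,
    n_classes=2,
    n_clusters_per_class=1,
    class_sep=1.0,          # "side length" knob: larger => more separable
    flip_y=0.0,             # set > 0 to make the task non-separable
    random_state=0,
)
print(X.shape, y.shape)
```

Varying `class_sep` (and `flip_y`) is one way to interpolate between the realizable and non-separable regimes compared in Figures 4 and 5.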
In the neural network experiments, we follow the same setup as (Harutyunyan et al., 2021; Hellström & Durisi, 2022a). Specifically, we draw $k_1$ samples of $\tilde{Z}$ and $k_2$ samples of $U$ for each given $\tilde{z}$. For the CNN on the binary MNIST dataset, we set $k_1 = 5$ and $k_2 = 30$. The 4-layer CNN model is trained using the Adam optimizer with a learning rate of $0.001$ and a momentum coefficient of $\beta_1 = 0.9$. The training process spans $200$ epochs, with a batch size of $128$. For ResNet-50 on CIFAR10, we set $k_1 = 2$ and $k_2 = 40$. The ResNet model is trained using stochastic gradient descent (SGD) with a learning rate of $0.01$ and a momentum coefficient of $0.9$ for a total of $40$ epochs. The batch size for this experiment is set to $64$. In the SGLD experiment, we once again train a 4-layer CNN on the binary MNIST dataset. The batch size is set to $100$, and the training lasts for $40$ epochs. The initial learning rate is $0.01$ and decays by a factor of $0.9$ every $100$ iterations. Letting $t$ be the iteration index, the inverse temperature of SGLD is given by $\min\{4000, \max\{100, 10e^{t/100}\}\}$. We set the training sample size to $n = 4000$, with $k_1 = 5$ and $k_2 = 30$. We save checkpoints every $4$ epochs. All these experiments are conducted on NVIDIA Tesla V100 GPUs with 32 GB of memory. For more comprehensive details, including model architectures, we refer the reader to (Harutyunyan et al., 2021; Hellström & Durisi, 2022a).
Estimating the $\gamma$-variance and $\lambda$-sharpness in the CMI setting is straightforward. For example, to estimate sharpness, for each fixed $\tilde{z}$ we store the training losses $L_i^+$ when $U_i = 0$ (corresponding to $\tilde{z}^+$) and the training losses $L_i^-$ when $U_i = 1$ (corresponding to $\tilde{z}^-$) under different weight configurations $W$. By doing so, we collect the data needed to compute the second term of the equation in Lemma 4.5.
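A minimal sketch of the estimation procedure just described (array shapes and variable names are hypothetical): given stored per-instance zero-one losses under many weight draws for a fixed supersample and mask, a plug-in estimate of the sharpness quantity in Lemma 4.5 is:

```python
import numpy as np

# Hypothetical sketch: losses[r, i] is the zero-one loss of the i-th
# training instance under the r-th stored weight configuration W, for a
# fixed supersample z~ and mask u.  Here the losses are simulated.
rng = np.random.default_rng(0)
n_runs, n = 100, 50
losses = rng.binomial(1, 0.2, size=(n_runs, n)).astype(float)
lam = 0.3

cond_mean = losses.mean(axis=0)   # estimate of E_{W|Z_i}[ell(W, Z_i)] per instance
# Plug-in estimate of F(lambda) = E[ell] - (1 - lambda^2) E_Z[(E_{W|Z} ell)^2],
# averaged over instances (cf. Lemma 4.5 with the zero-one loss).
F_hat = losses.mean() - (1 - lam ** 2) * np.mean(cond_mean ** 2)
print(F_hat)
```

With zero-one losses, `F_hat` is guaranteed nonnegative since each conditional mean lies in $[0,1]$.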
Figure 4: Comparison of the square-root bounds on the synthetic dataset; panels: (a) $|\mathcal{Y}| = 2$ (Realizable), (b) $|\mathcal{Y}| = 2$ (Non-Separable), (c) $|\mathcal{Y}| = 10$ (Realizable), (d) $|\mathcal{Y}| = 10$ (Non-Separable). Here f-CMI, e-CMI and se-CMI refer to the disintegrated $f$-CMI bound (Harutyunyan et al., 2021), the unconditional e-CMI bound and the single-loss square-root bound in Theorem 4.1, respectively.
E.2 Additional Numerical Results: Comparison of Square-Root Bounds
We conduct a comparison of square-root bounds on the synthetic dataset, where we also include the disintegrated version of the 𝑓 -CMI bound proposed by Harutyunyan et al. (2021), an improved unconditional e-CMI bound (obtained by replacing 𝐼 ( 𝐿 𝑖 ; 𝑈 𝑖 | 𝑍 ~ ) with 𝐼 ( 𝐿 𝑖 ; 𝑈 𝑖 ) ), and the single-loss square-root bound presented in Theorem 4.1. The results are illustrated in Figure 4. Consistent with the observations in the main text, we find that the disintegrated bounds are tighter than the unconditional MI bounds when the training loss approaches zero, but looser than the unconditional MI bounds when the training loss is large. This suggests that while, according to the DPI, the unconditional e-CMI bound or ld-MI bound should be tighter than the 𝑓 -CMI bound, in some cases, the disintegrated version of the 𝑓 -CMI bound may be tighter than the unconditional e-CMI bound or ld-MI bound. For non-separable 𝜇 , the 𝑓 -CMI bound becomes looser as the number of classes increases, which provides justification for the remarks made after Theorem 3.1. In fact, it can be even worse than the single-loss square-root bound in Theorem 4.1, which includes an undesired constant of 2.
Figure 5: Comparison of three fast-rate bounds on the synthetic dataset; panels: (a) $|\mathcal{Y}| = 2$ (Small $L_n$), (b) $|\mathcal{Y}| = 2$ (Large $L_n$), (c) $|\mathcal{Y}| = 10$ (Small $L_n$), (d) $|\mathcal{Y}| = 10$ (Large $L_n$). Here Fast-Rate refers to the fast-rate bound of Eq. (1) in Theorem 4.3.
E.3 Additional Numerical Results: Comparison of Fast-Rate Bounds
We conduct a comparison of fast-rate bounds, including Eq. (1) in Theorem 4.3, the variance bound in Theorem 4.4, and the sharpness bound in Theorem 4.5, on the synthetic dataset with fixed values of $C_1$ and $C_2$. As mentioned in the main text, if $L_n \to 0$, both $V(\gamma)$ and $F(\lambda)$ become zero, and the three bounds coincide. However, when $L_n \ne 0$, the variance bound and the sharpness bound are always sharper than Eq. (1), as discussed earlier. In Figure 5, we compare these bounds with fixed values of $C_1 = 3$ and $C_2 = 0.3$. Figures 5(a) and 5(c) demonstrate that when $L_n$ is small, the gap between the variance bound and Eq. (1) is small, indicating that the loss variance in this case is also small; the sharpness bound, however, clearly outperforms the other two. Furthermore, in Figures 5(b) and 5(d), when $L_n$ is large, both the sharpness bound and the variance bound significantly improve upon Eq. (1). Notably, only the sharpness bound remains non-vacuous in Figure 5(d).
Generated on Thu Jul 13 18:36:59 2023 by LATExml