Title: Sample Complexity of Probability Divergences under Group Symmetry
URL Source: https://arxiv.org/html/2302.01915
License: CC BY 4.0 arXiv:2302.01915v3 [math.ST] 22 Nov 2024

Sample Complexity of Probability Divergences under Group Symmetry

Ziyu Chen (ziyuchen@umass.edu), Markos A. Katsoulakis (markos@umass.edu), Luc Rey-Bellet (luc@umass.edu), Wei Zhu (weizhu@umass.edu)
Department of Mathematics and Statistics, University of Massachusetts Amherst, Amherst, MA 01003, USA

Abstract
We rigorously quantify the improvement in the sample complexity of variational divergence estimation for group-invariant distributions. In the cases of the Wasserstein-1 metric and the Lipschitz-regularized $\alpha$-divergences, the reduction in sample complexity is proportional to the group size if the group is finite. Beyond the published version at ICML 2023, our proof also covers infinite groups, such as compact Lie groups: there the convergence rate can be further improved and depends on the intrinsic dimension of the fundamental domain, characterized by the scaling of its covering number. Our approach differs from that of [Tahmasebi & Jegelka, ICML 2024], and our work also applies to asymmetric divergences, such as the Lipschitz-regularized $\alpha$-divergences. For the maximum mean discrepancy (MMD), the improvement in sample complexity is more nuanced, as it depends not only on the group size but also on the choice of kernel. Numerical simulations verify our theories.
1 Introduction
Probability divergences provide means to measure the discrepancy between two probability distributions. They have broad applications in a variety of inference tasks, such as independence testing [64, 37], independent component analysis [34], and generative modeling [28, 50, 1, 33, 60, 49].
A key task within the above applications is the computation and estimation of the divergences from finite data, which is known to be a difficult problem [51, 25]. Empirical estimators based on the variational representations for the probability divergences are generally favored and widely used due to their scalability to both the data size and the ambient space dimension [3, 8, 47, 48, 54, 57, 7, 10, 58, 30, 32, 31, 27].
Figure 1:The distribution of the whole-slide prostate cancer images (LYSTO data set [17]) is rotation-invariant, i.e., an image and its rotated copies are equiprobable.
Empirical computation of the probability divergences and theoretical analysis on their sample complexity are typically studied without any a priori structural assumption on the probability measures. Many distributions in real life, however, are known to have intrinsic structures, such as group symmetry. For example, the distribution of the medical images collected without preferred orientation should be rotation-invariant, i.e., an image is supposed to have the same likelihood as its rotated copies; see Figure 1. Such structural information could be leveraged to improve the accuracy and/or sample-efficiency for divergence estimation.
Indeed, the recent work by Birrell et al. [9] shows that one can develop an improved variational representation for divergences between group-invariant distributions. The key idea is to reduce the test function space in the variational formula to its subset of group-invariant functions, which effectively acts as an unbiased regularization. When used in a generative adversarial network (GAN) for group-invariant distribution learning, Birrell et al. [9] empirically show that divergence estimation/optimization based on their proposed variational representation under group symmetry leads to significantly improved sample generation, especially in the small data regime.
The purpose of this work is to rigorously quantify the performance gain of divergence estimation under group symmetry. More specifically, we analyze the reduction in sample complexity of divergence estimation in terms of the convergence of the empirical estimation. We focus, in particular, on three types of probability divergences: the Wasserstein-1 metric, the maximum mean discrepancy (MMD), and the family of Lipschitz-regularized $\alpha$-divergences; see Section 3.1 for the exact definitions. Our main results show that the reduction in sample complexity is proportional to the group size if the group is finite. When the group is infinite, the convergence rate can be further improved and depends on the intrinsic dimension of the fundamental domain, characterized by the scaling of its covering number defined in Definition 3; see Theorem 1 and Footnote 1 for the Wasserstein-1 metric, and Theorem 3 and Theorem 4 for the Lipschitz-regularized $\alpha$-divergences, respectively. In the case of MMD, the reduction in sample complexity due to the group invariance is more nuanced and depends on the properties of the kernel; see Theorem 5. As a byproduct, we also establish the consistency and sample complexity of the Lipschitz-regularized $\alpha$-divergences without group symmetry, which, to the best of our knowledge, is missing in the previous literature.
The rest of the paper is organized as follows. In Section 2, we review previous work related to the empirical estimation of probability divergences and group-invariant distributions. We introduce the background and the motivation in Section 3. Theoretical results and numerical examples are provided in Section 4. We conclude our work in Section 5. Proofs and detailed statements of the theorems in Section 4 are provided in Section 6.
2 Related work
Empirical estimation of probability divergences. Probability divergences have been widely used, including in generative adversarial networks (GANs) [1, 28, 50, 9, 33], uncertainty quantification [16, 22], independence determination through mutual information estimation [3], bounding risk in probably approximately correct (PAC) learning [14, 44, 55], statistical mechanics and interacting particles [38], large deviations [21], and parameter estimation [13].
A growing body of literature has been dedicated to the empirical estimation of divergences from finite data. Earlier works based on density estimation are known to work best in low dimensions [35, 52]. Recent research has shown that statistical estimators based on the variational representations of probability divergences scale better with dimension; such studies include the KL-divergence [3], the more general $f$-divergences [8, 47, 48, 54, 57], Rényi divergences [7, 10], integral probability metrics (IPMs) [58, 30, 32, 31], and Sinkhorn divergences [27]. Such estimators are typically constructed to compare an arbitrary pair of probability measures without any a priori structural assumption, and are hence sub-optimal in estimating divergences between distributions with known structures, such as group symmetry.
Group-invariant distributions. Recent development in group-equivariant machine learning [18, 19, 63] has sparked a flurry of research in neural generative models for group-invariant distributions. Most of the works focus only on the guaranteed generation, through, e.g., an equivariant normalizing-flow, of the group-invariant distributions [5, 11, 26, 39, 43, 53]; the divergence computation between the generated distribution and the ground-truth target, a crucial step in the optimization pipeline, however, does not leverage their group-invariant structure. Equivariant GANs for group-invariant distribution learning have also been proposed by modifying the inner loop of discriminator update through either data-augmentation [65] or constrained optimization within a subspace of group-invariant discriminators [20]; the theoretical justification of such procedures, as well as the resulting performance gain, have been explained by Birrell et al. [9] as an improved estimation of variational divergences under group symmetry via an unbiased regularization. The exact quantification of the improvement, in terms of reduction in sample complexity, is however still missing; this is the main focus of this work.
3 Background and motivation

3.1 Variational divergences and probability metrics
Let $\mathcal{X}$ be a measurable space, and $\mathcal{P}(\mathcal{X})$ be the set of probability measures on $\mathcal{X}$. A map $D: \mathcal{P}(\mathcal{X}) \times \mathcal{P}(\mathcal{X}) \to [0, \infty]$ is called a divergence on $\mathcal{P}(\mathcal{X})$ if

$$D(P, Q) = 0 \iff P = Q, \quad \forall P, Q \in \mathcal{P}(\mathcal{X}), \tag{1}$$

hence providing a notion of "distance" between probability measures. Many probability divergences of interest can be formulated using a variational representation

$$D(P, Q) = \sup_{\gamma \in \Gamma} H(\gamma; P, Q), \tag{2}$$

where $\Gamma \subset \mathcal{M}(\mathcal{X})$ is a space of test functions, $\mathcal{M}(\mathcal{X})$ is the set of measurable functions on $\mathcal{X}$, and $H: \mathcal{M}(\mathcal{X}) \times \mathcal{P}(\mathcal{X}) \times \mathcal{P}(\mathcal{X}) \to [-\infty, \infty]$ is some objective functional. Through suitable choices of $H(\gamma; P, Q)$ and $\Gamma$, formula (2) includes many divergences and probability metrics. Below we list two specific classes of examples.
(a) $\Gamma$-Integral Probability Metrics ($\Gamma$-IPMs). Given $\Gamma \subset \mathcal{M}_b(\mathcal{X})$, the space of bounded measurable functions on $\mathcal{X}$, the $\Gamma$-IPM between $P$ and $Q$ is defined as

$$D^\Gamma(P, Q) \triangleq \sup_{\gamma \in \Gamma} \{ E_P[\gamma] - E_Q[\gamma] \}. \tag{3}$$

Some prominent examples of the $\Gamma$-IPMs include the Wasserstein-1 metric, the total variation metric, the Dudley metric, and the maximum mean discrepancy (MMD) [46, 58]. Our work, in particular, focuses on the following two specific IPMs.
- The Wasserstein-1 metric, $W(P, Q) \triangleq D^{\mathrm{Lip}_L(\mathcal{X})}(P, Q)$, i.e.,

$$W(P, Q) \triangleq \sup_{\gamma \in \mathrm{Lip}_L(\mathcal{X})} \{ E_P[\gamma] - E_Q[\gamma] \}, \tag{4}$$

where $\mathrm{Lip}_L(\mathcal{X})$ is the space of $L$-Lipschitz functions on $\mathcal{X}$. We note that the normalizing factor $L^{-1}$ has been omitted from the formula.
- The maximum mean discrepancy, $\mathrm{MMD}(P, Q) \triangleq D^{B_{\mathcal{H}}}(P, Q)$, i.e.,

$$\mathrm{MMD}(P, Q) \triangleq \sup_{\gamma \in B_{\mathcal{H}}} \{ E_P[\gamma] - E_Q[\gamma] \}, \tag{5}$$

where $B_{\mathcal{H}}$ is the unit ball of some reproducing kernel Hilbert space (RKHS) $\mathcal{H}$ on $\mathcal{X}$.
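Both IPMs above admit simple plug-in estimates in special cases. The following is a minimal sketch (ours, not code from the paper): for equal-size one-dimensional samples, the Lipschitz dual of the Wasserstein-1 metric (with $L = 1$) reduces to a mean of sorted-sample differences, while the MMD with a Gaussian kernel (an assumed kernel choice for illustration) is an average of kernel evaluations:

```python
import numpy as np

def wasserstein1_1d(x, y):
    """Empirical W1 between equal-size 1-D samples: the optimal coupling
    is monotone, so the Lipschitz dual reduces to sorted differences."""
    return np.mean(np.abs(np.sort(x) - np.sort(y)))

def mmd_gaussian(x, y, sigma=1.0):
    """Biased (V-statistic) MMD estimate with a Gaussian RBF kernel;
    x, y are (n, d) and (m, d) sample arrays."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    val = k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()
    return np.sqrt(max(val, 0.0))

rng = np.random.default_rng(0)
w1 = wasserstein1_1d(rng.uniform(0, 1, 2000), rng.uniform(0.5, 1.5, 2000))
mmd_same = mmd_gaussian(rng.normal(size=(500, 2)), rng.normal(size=(500, 2)))
mmd_diff = mmd_gaussian(rng.normal(size=(500, 2)),
                        rng.normal(2.0, 1.0, size=(500, 2)))
print(w1)                  # close to the true shift 0.5
print(mmd_same, mmd_diff)  # same-distribution value is much smaller
```

The general case, in contrast, requires optimizing over the test-function class, which is exactly where the sample-complexity questions studied in this paper arise.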
(b) $(f, \Gamma)$-divergences. Let $f: [0, \infty) \to \mathbb{R}$ be convex and lower semi-continuous, with $f(1) = 0$ and $f$ strictly convex at $x = 1$. Given $\Gamma \subset \mathcal{M}_b(\mathcal{X})$ that is closed under the shift transformations $\gamma \mapsto \gamma + c$, $c \in \mathbb{R}$, the $(f, \Gamma)$-divergence introduced by Birrell et al. [6] is defined as

$$D_f^\Gamma(P \| Q) = \sup_{\gamma \in \Gamma} \{ E_P[\gamma] - E_Q[f^*(\gamma)] \}, \tag{6}$$

where $f^*$ denotes the Legendre transform of $f$. Formula (6) includes, as a special case when $\Gamma = \mathcal{M}_b(\mathcal{X})$, the widely known class of $f$-divergences, with notable examples such as the Kullback-Leibler (KL) divergence [42], the total variation distance, the Jensen-Shannon divergence, the $\chi^2$-divergence, the Hellinger distance, and more generally the family of $\alpha$-divergences [50]. Of particular interest to us is the class of the Lipschitz-regularized $\alpha$-divergences,

$$D_{f_\alpha}^\Gamma(P \| Q), \quad \Gamma = \mathrm{Lip}_L(\mathcal{X}), \quad f_\alpha(x) = \frac{x^\alpha - 1}{\alpha(\alpha - 1)}, \tag{7}$$

where $\alpha > 0$, $\alpha \neq 1$, is a positive parameter.
An important observation that will be useful in one of our results, Theorem 3, is that $D_{f_\alpha}^\Gamma$ admits an equivalent representation, which writes

$$D_{f_\alpha}^\Gamma(P \| Q) = \sup_{\gamma \in \Gamma,\ c \in \mathbb{R}} \{ E_P[\gamma + c] - E_Q[f_\alpha^*(\gamma + c)] \}, \tag{8}$$

due to the invariance of $\Gamma = \mathrm{Lip}_L(\mathcal{X})$ under the shift map $\gamma \mapsto \gamma + c$ for $c \in \mathbb{R}$.
3.2 Empirical estimation of variational divergences
Given i.i.d. samples $X = \{x_1, x_2, \cdots, x_n\}$ and $Y = \{y_1, y_2, \cdots, y_m\}$, respectively, from two unknown probability measures $P, Q \in \mathcal{P}(\mathcal{X})$, in applications such as two-sample testing [4, 30, 31, 15] and independence testing [32, 31, 64, 37] it is often of interest to estimate the divergence between $P$ and $Q$ [58, 7, 47, 48]. For variational divergences $D^\Gamma(P, Q)$ and $D_f^\Gamma(P \| Q)$ in the form of (3) and (6), their empirical estimators can naturally be given by

$$D^\Gamma(P_n, Q_m) = \sup_{\gamma \in \Gamma} \Big\{ \frac{1}{n} \sum_{i=1}^n \gamma(x_i) - \frac{1}{m} \sum_{i=1}^m \gamma(y_i) \Big\}, \tag{9}$$

$$D_f^\Gamma(P_n \| Q_m) = \sup_{\gamma \in \Gamma} \Big\{ \frac{1}{n} \sum_{i=1}^n \gamma(x_i) - \frac{1}{m} \sum_{i=1}^m f^*(\gamma(y_i)) \Big\}, \tag{10}$$

where $P_n = \frac{1}{n} \sum_{i=1}^n \delta_{x_i}$ and $Q_m = \frac{1}{m} \sum_{i=1}^m \delta_{y_i}$ represent the empirical distributions of $P$ and $Q$, respectively.
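In practice the suprema in (9) and (10) are taken over a parametrized family such as a neural network. A minimal sketch of estimator (9), with the sup restricted to a small finite family of 1-Lipschitz linear test functions (our toy choice for illustration, not the paper's setup); any such restriction can only underestimate the supremum, so this yields a lower bound on the $\Gamma$-IPM:

```python
import numpy as np

def ipm_estimate(x, y, n_dirs=64, seed=0):
    """Estimator (9) with the sup over the finite family of 1-Lipschitz
    linear test functions gamma_w(z) = <w, z>, ||w||_2 = 1; restricting
    the family lowers the sup, so this is a lower bound on the IPM."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(n_dirs, x.shape[1]))
    w /= np.linalg.norm(w, axis=1, keepdims=True)
    return float(np.max((x @ w.T).mean(axis=0) - (y @ w.T).mean(axis=0)))

rng = np.random.default_rng(1)
x = rng.normal(size=(1000, 2))
y = rng.normal(size=(1000, 2)) + np.array([1.0, 0.0])  # unit mean shift
val = ipm_estimate(x, y)
print(val)  # near 1: the best direction aligns with the mean shift
```

For linear test functions the supremum picks out the direction of the largest mean discrepancy; richer families (as in the paper's Lipschitz class) detect differences beyond first moments.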
The consistency and sample complexity of the empirical estimators $W(P_n, Q_m)$ and $\mathrm{MMD}(P_n, Q_m)$ in the form of (9) for, respectively, the Wasserstein-1 metric (4) and the MMD (5) between two general distributions $P, Q \in \mathcal{P}(\mathcal{X})$ have been well studied [58, 31]. However, for probability measures with special structures, such as group symmetry, one can potentially obtain a divergence estimator with substantially improved sample complexity, as empirically observed by Birrell et al. [9]. We provide, in the following section, a brief review of group-invariant distributions and the improved variational representations for probability divergences under group symmetry, which serves as a motivation and foundation for our theoretical analysis in Section 4.
3.3 Variational divergences under group symmetry
A group is a set $\Sigma$ equipped with a group product satisfying the axioms of associativity, identity, and invertibility. Given a group $\Sigma$ and a set $\mathcal{X}$, a map $\theta: \Sigma \times \mathcal{X} \to \mathcal{X}$ is called a group action on $\mathcal{X}$ if $\theta_\sigma \triangleq \theta(\sigma, \cdot): \mathcal{X} \to \mathcal{X}$ is an automorphism on $\mathcal{X}$ for all $\sigma \in \Sigma$, and $\theta_{\sigma_2} \circ \theta_{\sigma_1} = \theta_{\sigma_2 \sigma_1}$, $\forall \sigma_1, \sigma_2 \in \Sigma$. By convention, we will abbreviate $\theta(\sigma, x)$ as $\sigma x$ throughout the paper.

A function $\gamma: \mathcal{X} \to \mathbb{R}$ is called $\Sigma$-invariant if $\gamma \circ \theta_\sigma = \gamma$, $\forall \sigma \in \Sigma$. Let $\Gamma$ be a set of measurable functions $\gamma: \mathcal{X} \to \mathbb{R}$; its subset, $\Gamma_\Sigma$, of $\Sigma$-invariant functions is defined as

$$\Gamma_\Sigma \triangleq \{ \gamma \in \Gamma : \gamma \circ \theta_\sigma = \gamma, \ \forall \sigma \in \Sigma \}. \tag{11}$$

On the other hand, a probability measure $P \in \mathcal{P}(\mathcal{X})$ is called $\Sigma$-invariant if $P = (\theta_\sigma)_\# P$, $\forall \sigma \in \Sigma$, where $(\theta_\sigma)_\# P \triangleq P \circ (\theta_\sigma)^{-1}$ is the push-forward measure of $P$ under $\theta_\sigma$. We denote the set of all $\Sigma$-invariant distributions on $\mathcal{X}$ as $\mathcal{P}_\Sigma(\mathcal{X}) \triangleq \{ P \in \mathcal{P}(\mathcal{X}) : P \text{ is } \Sigma\text{-invariant} \}$.
Finally, for a compact Hausdorff topological group $\Sigma$ [23], we define two symmetrization operators, $S_\Sigma: \mathcal{M}_b(\mathcal{X}) \to \mathcal{M}_b(\mathcal{X})$ and $Q_\Sigma: \mathcal{P}(\mathcal{X}) \to \mathcal{P}(\mathcal{X})$, on functions and probability measures, respectively, as follows:

$$S_\Sigma[\gamma](x) \triangleq \int_\Sigma \gamma(\sigma x)\, \mu_\Sigma(\mathrm{d}\sigma), \quad \forall \gamma \in \mathcal{M}_b(\mathcal{X}), \tag{12}$$

$$E_{Q_\Sigma[P]}[\gamma] \triangleq E_P\big[S_\Sigma[\gamma]\big], \quad \forall P \in \mathcal{P}(\mathcal{X}), \ \forall \gamma \in \mathcal{M}_b(\mathcal{X}), \tag{13}$$

where $\mu_\Sigma$ is the unique Haar probability measure on $\Sigma$. The operators $S_\Sigma[\gamma]$ and $Q_\Sigma[P]$ can be intuitively understood, respectively, as "averaging" the function $\gamma$ or "spreading" the probability mass of $P$ across the group orbits in $\mathcal{X}$; one can easily verify that they are projection operators onto the corresponding invariant subsets $\Gamma_\Sigma \subset \Gamma$ and $\mathcal{P}_\Sigma(\mathcal{X}) \subset \mathcal{P}(\mathcal{X})$ [9].
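For a finite group, the Haar integral in (12) is a plain average over group elements. A small sketch (ours) for the rotation group $C_8$ acting on $\mathbb{R}^2$, checking that the symmetrized function $S_\Sigma[\gamma]$ is indeed $\Sigma$-invariant:

```python
import numpy as np

def rotate(x, t):
    """Rotate 2-D points by angle t."""
    R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    return x @ R.T

def symmetrize(gamma, x, k):
    """S_Sigma[gamma](x) of Eq. (12) for the finite group C_k: the Haar
    integral becomes a uniform average over the k rotations."""
    return np.mean([gamma(rotate(x, 2 * np.pi * j / k)) for j in range(k)], axis=0)

gamma = lambda z: z[..., 0]          # a non-invariant test function
x = np.array([[1.0, 0.0]])
g = symmetrize(gamma, x, 8)
g_rot = symmetrize(gamma, rotate(x, 2 * np.pi / 8), 8)
print(g, g_rot)  # identical values: the symmetrized function is C_8-invariant
```

Applying `symmetrize` twice changes nothing, which is the projection property mentioned above.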
The main result by Birrell et al. [9], which we summarize in Result 1, is that for $\Sigma$-invariant distributions, the function space $\Gamma$ in the variational formulae (3) and (6) can be reduced to its invariant subset $\Gamma_\Sigma \subset \Gamma$.

Result 1 (paraphrased from [9]).

If $S_\Sigma[\Gamma] \subset \Gamma$ and $P, Q \in \mathcal{P}(\mathcal{X})$, then

$$D^\Gamma(Q_\Sigma[P], Q_\Sigma[Q]) = D^{\Gamma_\Sigma}(P, Q), \tag{14}$$

$$D_f^\Gamma(Q_\Sigma[P] \| Q_\Sigma[Q]) = D_f^{\Gamma_\Sigma}(P \| Q), \tag{15}$$

where $D^\Gamma(P, Q)$ and $D_f^\Gamma(P \| Q)$ are given by (3) and (6). In particular, if $P, Q \in \mathcal{P}_\Sigma(\mathcal{X})$ are $\Sigma$-invariant, then

$$D^\Gamma(P, Q) = D^{\Gamma_\Sigma}(P, Q), \qquad D_f^\Gamma(P \| Q) = D_f^{\Gamma_\Sigma}(P \| Q).$$
Result 1 motivates a potentially more sample-efficient way to estimate the divergences $D^\Gamma(P, Q)$ and $D_f^\Gamma(P \| Q)$ between $\Sigma$-invariant distributions $P, Q \in \mathcal{P}(\mathcal{X})$ using

$$D^{\Gamma_\Sigma}(P_n, Q_m) = \sup_{\gamma \in \Gamma_\Sigma} \Big\{ \frac{1}{n} \sum_{i=1}^n \gamma(x_i) - \frac{1}{m} \sum_{i=1}^m \gamma(y_i) \Big\}, \tag{16}$$

$$D_f^{\Gamma_\Sigma}(P_n \| Q_m) = \sup_{\gamma \in \Gamma_\Sigma} \Big\{ \frac{1}{n} \sum_{i=1}^n \gamma(x_i) - \frac{1}{m} \sum_{i=1}^m f^*(\gamma(y_i)) \Big\}. \tag{17}$$

Compared to Eq. (9) and (10), the estimators (16) and (17) have the benefit of optimizing over a reduced space $\Gamma_\Sigma \subset \Gamma$ of test functions, effectively acting as an unbiased regularization; their efficacy has been empirically observed by Birrell et al. [9] in neural generation of group-invariant distributions with substantially improved data-efficiency. However, a theoretical understanding of the performance gain is still lacking.
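The practical content of (16) is that the test-function search is confined to $\Gamma_\Sigma$. The toy sketch below (our illustration with hand-picked function dictionaries, not the paper's neural parametrization) estimates the IPM for two samples of the same rotation-invariant distribution, once over generic Lipschitz directions and once over rotation-invariant functions; the true value $D(P, Q) = 0$ in both cases:

```python
import numpy as np

rng = np.random.default_rng(3)

def estimate(x, y, feats):
    """Empirical IPM over a finite dictionary: sup of E_Pn[f] - E_Qn[f]."""
    return max(f(x).mean() - f(y).mean() for f in feats)

def sample_disk(n):
    """Uniform distribution on the unit disk (rotation-invariant)."""
    r = np.sqrt(rng.uniform(0, 1, n))
    t = rng.uniform(0, 2 * np.pi, n)
    return np.stack([r * np.cos(t), r * np.sin(t)], axis=1)

x, y = sample_disk(200), sample_disk(200)

# generic 1-Lipschitz test functions: linear projections
dirs = [(lambda w: (lambda z: z @ w))(np.array([np.cos(a), np.sin(a)]))
        for a in np.linspace(0, 2 * np.pi, 32, endpoint=False)]
# rotation-invariant 1-Lipschitz test functions: clipped radius
invariant = [(lambda c: (lambda z: np.minimum(np.linalg.norm(z, axis=1), c)))(c)
             for c in np.linspace(0.1, 1.0, 32)]

g_generic, g_inv = estimate(x, y, dirs), estimate(x, y, invariant)
print(g_generic, g_inv)
# both should be near 0 since P = Q; the invariant class fluctuates less
```

The smaller fluctuation of the invariant estimate is exactly the sample-complexity gain quantified in Section 4.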
The purpose of this work is to rigorously quantify the improvement in sample complexity of the divergence estimations (16) and (17) for group-invariant distributions. To contextualize the idea, we will focus our analysis on three specific types of probability divergences, the Wasserstein-1 metric (4), the MMD (5), and the Lipschitz-regularized $\alpha$-divergence (6)(7) between $\Sigma$-invariant $P, Q \in \mathcal{P}_\Sigma(\mathcal{X})$:

$$W(P, Q) = W_\Sigma(P, Q) \approx W_\Sigma(P_n, Q_m), \tag{18}$$

$$\mathrm{MMD}(P, Q) = \mathrm{MMD}_\Sigma(P, Q) \approx \mathrm{MMD}_\Sigma(P_n, Q_m), \tag{19}$$

$$D_{f_\alpha}^\Gamma(P \| Q) = D_{f_\alpha}^{\Gamma_\Sigma}(P \| Q) \approx D_{f_\alpha}^{\Gamma_\Sigma}(P_n \| Q_m), \tag{20}$$

where

$$W_\Sigma(P, Q) \triangleq D^{[\mathrm{Lip}_L(\mathcal{X})]_\Sigma}(P, Q), \tag{21}$$

$$\mathrm{MMD}_\Sigma(P, Q) \triangleq D^{[B_{\mathcal{H}}]_\Sigma}(P, Q), \tag{22}$$

and the definition of $D_{f_\alpha}^{\Gamma_\Sigma}(P \| Q)$ is given by Equations (6), (7) and (11).
3.4 Further notations and assumptions
For the rest of the paper, we assume the measurable space $\mathcal{X} \subset \mathbb{R}^D$ is a bounded subset of $\mathbb{R}^D$ equipped with the Euclidean metric $\|\cdot\|_2$. The group is denoted by $\Sigma$. The Haar measure $\mu_\Sigma$ is thus a uniform probability measure over $\Sigma$, and the symmetrization $S_\Sigma[\gamma]$ [Eq. (12)] is an average of $\gamma$ over the group orbit. We next introduce the concept of a fundamental domain in the following definition.
Definition 1.

A subset $\mathcal{X}_0 \subset \mathcal{X}$ is called a fundamental domain of $\mathcal{X}$ under the $\Sigma$-action if for each $x \in \mathcal{X}$, there exists a unique $x_0 \in \mathcal{X}_0$ such that $x = \sigma x_0$ for some $\sigma \in \Sigma$.
Figure 2: The unit disk $\mathcal{X} \subset \mathbb{R}^2$ with the action of the (discrete) rotation groups $\Sigma = C_k$, $k = 1, 4, 16, 64$. The fundamental domain $\mathcal{X}_0$ for each $C_k$ is filled with yellow color.
Figure 2 displays an example where $\mathcal{X}$ is the unit disk in $\mathbb{R}^2$, and $\Sigma = C_k$, $k = 1, 4, 16, 64$, are the discrete rotation groups acting on $\mathcal{X}$; the fundamental domain $\mathcal{X}_0$ for each $\Sigma = C_k$ is filled with yellow color. We note that the choice of the fundamental domain $\mathcal{X}_0$ is not unique. We will slightly abuse the notation $\mathcal{X} = \Sigma \times \mathcal{X}_0$ to denote that $\mathcal{X}_0$ is a fundamental domain of $\mathcal{X}$ under the $\Sigma$-action. We define $\pi_0: \mathcal{X} \to \mathcal{X}_0$ by

$$\pi_0(x) \triangleq y \in \mathcal{X}_0, \quad \text{if } y = \sigma x \text{ for some } \sigma \in \Sigma, \tag{23}$$
i.e., $\pi_0$ maps $x \in \mathcal{X}$ to its unique orbit representative in $\mathcal{X}_0$. In addition, we denote by $P_{\mathcal{X}_0} \in \mathcal{P}(\mathcal{X}_0)$ the distribution on the fundamental domain $\mathcal{X}_0$ induced by a $\Sigma$-invariant distribution $P \in \mathcal{P}_\Sigma(\mathcal{X})$ on $\mathcal{X}$; that is,

$$\mathrm{d}P_{\mathcal{X}_0}(x) = \int_{\{y \in \mathcal{X} :\, y = \sigma x,\ \sigma \in \Sigma\}} \mathrm{d}P(y), \quad x \in \mathcal{X}_0. \tag{24}$$
The diameter of $\mathcal{X} \subset \mathbb{R}^D$ is defined as

$$\mathrm{diam}(\mathcal{X}) = \sup_{x, y \in \mathcal{X}} \|x - y\|_2. \tag{25}$$
Finally, part of our results in Section 4 relies heavily on the concept of covering numbers which we define below.
Definition 2 (Covering number).

Let $(\mathcal{X}, \rho)$ be a metric space. A subset $C \subset \mathcal{X}$ is called a $\delta$-cover of $\mathcal{X}$ if for any $x \in \mathcal{X}$ there is an $s \in C$ such that $\rho(s, x) \le \delta$. The $\delta$-covering number of $\mathcal{X}$ is defined as

$$\mathcal{N}(\mathcal{X}, \delta, \rho) := \min \{ |C| : C \text{ is a } \delta\text{-cover of } \mathcal{X} \}.$$

When $\rho(x, y) = \|x - y\|_2$ is the Euclidean metric in $\mathbb{R}^D$, we abbreviate $\mathcal{N}(\mathcal{X}, \delta, \rho)$ as $\mathcal{N}(\mathcal{X}, \delta)$.
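For a finite point cloud, a $\delta$-cover within a constant factor of the optimum can be built greedily; a small sketch (ours), whose counts grow like $\delta^{-2}$ on a planar set, consistent with the covering-number scaling that Definition 3 below turns into a dimension:

```python
import numpy as np

def covering_number_greedy(points, delta):
    """Size of a greedily built delta-cover of a finite point cloud;
    this upper-bounds the true covering number N(points, delta)."""
    centers = []
    remaining = points
    while len(remaining):
        c = remaining[0]
        centers.append(c)
        # discard every point already covered by the new center
        remaining = remaining[np.linalg.norm(remaining - c, axis=1) > delta]
    return len(centers)

# points on a grid in [0,1]^2: the count should scale roughly like delta**-2
g = np.linspace(0, 1, 40)
cloud = np.stack(np.meshgrid(g, g), -1).reshape(-1, 2)
n_coarse = covering_number_greedy(cloud, 0.2)
n_fine = covering_number_greedy(cloud, 0.1)
print(n_coarse, n_fine)  # the finer cover needs more balls
```

Finding the exact minimum cover is computationally hard; the greedy count suffices to read off the scaling exponent.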
Following the notion of capacity dimension from [36], we define the intrinsic dimension of a set as follows.

Definition 3.

The intrinsic dimension of $\mathcal{X} \subset \mathbb{R}^D$, denoted by $\dim(\mathcal{X})$, is defined as

$$\dim(\mathcal{X}) \triangleq -\lim_{\delta \to 0^+} \frac{\ln \mathcal{N}(\mathcal{X}, \delta)}{\ln \delta}. \tag{26}$$

For example, if $\mathcal{X} \subset \mathbb{R}^D$ contains an open set of $\mathbb{R}^D$, then $\dim(\mathcal{X}) = D$; if $\mathcal{X}$ is a $d$-dimensional submanifold of $\mathbb{R}^D$, then $\dim(\mathcal{X}) = d$.
4 Sample complexity under group invariance
In this section, we outline our main results for the sample complexity of divergence estimation under group invariance. In particular, we focus on three cases: the Wasserstein-1 metric (18), the MMD (19) and the $(f_\alpha, \Gamma)$-divergence (20). While the convergence rate in the bounds for the Wasserstein-1 metric and the $(f_\alpha, \Gamma)$-divergence depends on the dimension of the ambient space, that for the MMD case does not. In all the numerical experiments, for simplicity, we choose $X = \{x_1, x_2, \cdots, x_n\}$ and $Y = \{y_1, y_2, \cdots, y_m\}$ to be sampled from the same $\Sigma$-invariant distribution $P = Q$ for easy visualization and a clear benchmark.
4.1 Wasserstein-1 metric, $W(P, Q)$

In this section, we set $\Gamma = \mathrm{Lip}_L(\mathcal{X})$, the set of $L$-Lipschitz functions on $\mathcal{X}$; see Eq. (4). We further assume that the $\Sigma$-action on $\mathcal{X}$ is 1-Lipschitz, i.e., $\|\sigma x - \sigma y\|_2 \le \|x - y\|_2$, $\forall \sigma \in \Sigma$, $\forall x, y \in \mathcal{X}$, so that $S_\Sigma[\Gamma] \subset \Gamma$ (see Lemma 1 for a proof). Due to Result 1, we have $W(P, Q) = W_\Sigma(P, Q)$ for $\Sigma$-invariant probability measures $P, Q \in \mathcal{P}_\Sigma(\mathcal{X})$.
To convey the main message, we provide a summary of our result in Theorem 1 for the sample complexity under group invariance for the Wasserstein-1 metric. The detailed statement and the technical assumptions of the theorem, as well as its proof, are deferred to Section 6.1. Readers are referred to Section 3 for the notation.
Theorem 1 (Finite groups).

Let $\mathcal{X} = \Sigma \times \mathcal{X}_0$ be a bounded subset of $\mathbb{R}^D$ equipped with the Euclidean distance, $|\Sigma| < \infty$ and $\dim(\mathcal{X}) = d$. Suppose $P, Q \in \mathcal{P}_\Sigma(\mathcal{X})$ are $\Sigma$-invariant distributions on $\mathcal{X}$. If the numbers $n, m$ of samples drawn from $P$ and $Q$ are sufficiently large, then we have with high probability:

- when $d \ge 2$, for any $\epsilon > 0$,

$$|W(P, Q) - W_\Sigma(P_n, Q_m)| \le C \left[ \left( \frac{1}{|\Sigma| n} \right)^{\frac{1}{d + \epsilon}} + \left( \frac{1}{|\Sigma| m} \right)^{\frac{1}{d + \epsilon}} \right], \tag{27}$$

where $C > 0$ depends only on $d$, $\epsilon$ and $\mathcal{X}$, and is independent of $n$ and $m$;

- when $d = 1$,

$$|W(P, Q) - W_\Sigma(P_n, Q_m)| \le C \cdot \mathrm{diam}(\mathcal{X}_0) \left( \frac{1}{\sqrt{n}} + \frac{1}{\sqrt{m}} \right), \tag{28}$$

where $C > 0$ is an absolute constant independent of $\mathcal{X}$, $\mathcal{X}_0$, $n$ and $m$.
When $\Sigma$ is infinite and $\dim(\mathcal{X}_0) = d^*$, we have the following result with improved convergence rates.

Theorem 2 (Infinite groups).

Let $\mathcal{X} = \Sigma \times \mathcal{X}_0$ be a bounded subset of $\mathbb{R}^D$ equipped with the Euclidean distance and $\dim(\mathcal{X}_0) = d^*$. Suppose $P, Q \in \mathcal{P}_\Sigma(\mathcal{X})$ are $\Sigma$-invariant distributions on $\mathcal{X}$. Then we have with high probability:

- when $d^* \ge 2$, for any $\epsilon > 0$,

$$|W(P, Q) - W_\Sigma(P_n, Q_m)| \le C \left[ \left( \frac{\mathrm{vol}(\mathcal{X}_0)}{n} \right)^{\frac{1}{d^* + \epsilon}} + \left( \frac{\mathrm{vol}(\mathcal{X}_0)}{m} \right)^{\frac{1}{d^* + \epsilon}} \right], \tag{29}$$

where $C > 0$ depends only on $d^*$, $\epsilon$ and $\mathcal{X}$, and is independent of $n$ and $m$, and $\mathrm{vol}(\mathcal{X}_0)$ is the volume of $\mathcal{X}_0$;

- when $d^* = 1$,

$$|W(P, Q) - W_\Sigma(P_n, Q_m)| \le C \cdot \mathrm{diam}(\mathcal{X}_0) \left( \frac{1}{\sqrt{n}} + \frac{1}{\sqrt{m}} \right), \tag{30}$$

where $C > 0$ is an absolute constant independent of $\mathcal{X}$, $\mathcal{X}_0$, $n$ and $m$.

Furthermore, if $\mathcal{X}$ is a $d$-dimensional connected submanifold, and $\Sigma$ is a compact Lie group acting locally smoothly on $\mathcal{X}$, then $d^* = d - \dim(\Sigma)$, where $\dim(\Sigma)$ is the dimension of a principal orbit (i.e., the maximal dimension among all orbits), by Theorem IV 3.8 in [12]. This recovers the bound derived in [59].
Sketch of the proof. Using the group invariance and the map $\pi_0$ defined in (23), we can transform the i.i.d. samples on $\mathcal{X}$ to i.i.d. samples on $\mathcal{X}_0$, which are effectively sampled from $P_{\mathcal{X}_0}$ and $Q_{\mathcal{X}_0}$ [cf. Eq. (24)]. Hence the supremum, after applying the triangle inequality to the error in (27), can be taken over $L$-Lipschitz functions defined on the fundamental domain $\mathcal{X}_0$, i.e., $\mathrm{Lip}_L(\mathcal{X}_0)$, instead of over the original space $\mathrm{Lip}_L(\mathcal{X})$. We further demonstrate in Lemma 2 that the supremum can be taken over an even smaller function space

$$\mathcal{F}_0 = \{ \gamma \in \mathrm{Lip}_L(\mathcal{X}_0) : \|\gamma\|_\infty \le M \} \subset \mathrm{Lip}_L(\mathcal{X}_0), \tag{31}$$

with some uniformly bounded $L^\infty$-norm $M$, due to the translation-invariance of $\gamma$ in definition (4). Using Dudley's entropy integral [2], the error can be bounded in terms of the metric entropy of $\mathcal{F}_0$:

$$\inf_{\alpha > 0} \left\{ 8\alpha + \frac{24}{\sqrt{n}} \int_\alpha^M \sqrt{\ln \mathcal{N}(\mathcal{F}_0, \delta, \|\cdot\|_\infty)}\, \mathrm{d}\delta \right\}. \tag{32}$$

For $d \ge 2$, we establish the relations between the metric entropy $\ln \mathcal{N}(\mathcal{F}_0, \delta, \|\cdot\|_\infty)$ of $\mathcal{F}_0$ and the covering numbers of $\mathcal{X}_0$ and $\mathcal{X}$ via Lemma 4 and Lemma 5:

$$\ln \mathcal{N}(\mathcal{F}_0, \delta, \|\cdot\|_\infty) \le \mathcal{N}\left( \mathcal{X}_0, \frac{c_2 \delta}{L} \right) \ln \left( \frac{c_1 M}{\delta} \right), \tag{33}$$

$$\frac{\mathcal{N}(\mathcal{X}_0, \delta)}{\mathcal{N}(\mathcal{X}, \delta)} \le \frac{1}{|\Sigma|}, \quad \text{for small enough } \delta, \tag{34}$$

which yields the factor involving the group size $|\Sigma|$ in Eq. (27) when $\Sigma$ is finite. The dominant term of the bound, arising from the singularity of the entropy integral at $\alpha = 0$, is shown in Eq. (27). For $d = 1$, the entropy integral is not singular at the origin, and we bound the covering number of $\mathcal{F}_0$ by $\mathrm{diam}(\mathcal{X}_0)$ instead. The probability bound follows from an application of McDiarmid's inequality. For Footnote 1, the result follows directly from Equation (33) together with the condition $\mathcal{N}(\mathcal{X}_0, \delta) \lesssim \delta^{-d^*}$ implied by Definition 3, without resorting to Equation (34).
Remark 1.
Even though we present in Theorem 1 only the dominant terms showing the rate of convergence for the estimator, our result for sample complexity is actually non-asymptotic. See Theorem 6 in Section 6.1 for a complete description of the result.
Remark 2.
When $|\Sigma| = 1$, i.e., no group symmetry is leveraged in the divergence estimation, our result reduces to the case considered in, e.g., [58], for general distributions $P, Q \in \mathcal{P}_\Sigma(\mathcal{X}) = \mathcal{P}(\mathcal{X})$.
Remark 3.
In the case $d \ge 2$, the $\epsilon > 0$ in Theorem 1 means the rate can be arbitrarily close to $-\frac{1}{d}$. If we further assume that $\mathcal{X}_0$ is connected, then the bound can be improved to $|W(P, Q) - W_\Sigma(P_n, Q_m)| \le C \big[ \big( \frac{1}{|\Sigma| n} \big)^{\frac{1}{2}} \ln n + \big( \frac{1}{|\Sigma| m} \big)^{\frac{1}{2}} \ln m \big]$ for $d = 2$, and $|W(P, Q) - W_\Sigma(P_n, Q_m)| \le C \big[ \big( \frac{1}{|\Sigma| n} \big)^{\frac{1}{d}} + \big( \frac{1}{|\Sigma| m} \big)^{\frac{1}{d}} \big]$ for $d \ge 3$, without the dependence on $\epsilon$, which matches the rate in [24]. See Remark 9 after Lemma 4. A similar improvement holds for Footnote 1 in terms of $d^*$.
Remark 4.
The factor $\mathrm{diam}(\mathcal{X}_0)$ in the case $d = 1$ is not necessarily directly related to the group size $|\Sigma|$. We refer to Example 1 below and its explanation in Remark 10 for cases where we can achieve a factor of $|\Sigma|^{-1}$ in the convergence bound.
Example 1.
Let $\mathcal{X} = [0, 1)$ and $\Gamma = \mathrm{Lip}_L([0, 1))$, i.e., $d = 1$. We consider the $\Sigma$-actions on $\mathcal{X}$ generated by the translation $x \mapsto (x + \frac{1}{k}) \bmod 1$, where $k = 1, 4, 16, 64, 256$, so that $|\Sigma| = k$ is the group size. We draw samples $x_i \sim P = Q \in \mathcal{P}_\Sigma(\mathcal{X})$ on $\mathcal{X}$ in the following way: $x_i = k^{-1} u_i^{1/3} + v_i$, where $u_i$ are i.i.d. uniformly distributed random variables on $[0, 1)$ and $v_i$ take values over $\{0, \frac{1}{k}, \ldots, \frac{k-1}{k}\}$ with equal probabilities. One can easily verify that $P = Q$ are indeed $\Sigma$-invariant. The numerical results for the empirical estimation of $W(P, Q) = 0$ using $W_\Sigma(P_n, Q_m)$ with different group sizes $|\Sigma| = k$, $k = 1, 4, 16, 64, 256$, are shown in the left panel of Figure 3. One can clearly observe a significant improvement of the estimator as the group size $|\Sigma|$ increases. Furthermore, the right panel of Figure 3 displays the ratios between adjacent curves, all of which converge to 4, which is the ratio between consecutive group sizes. This matches our calculation in Remark 10; see also Remark 4.
Example 2.
We let $\mathcal{X} = \mathbb{R}^2$, i.e., $d = 2$. The probability distributions $P = Q$ are the mixture of 8 Gaussians centered at $(\cos(\frac{2\pi j}{8}), \sin(\frac{2\pi j}{8}))$, $j = 0, 1, \ldots, 7$, with the same covariance. The distribution has $C_8$-rotation symmetry, but we pretend that it has only $C_1$, $C_2$ and $C_4$ symmetry; that is, the $\Sigma$ used in the empirical estimation $W_\Sigma(P_n, Q_m)$ does not reflect the entire invariance structure. Even though in this case the domain $\mathcal{X}$ is unbounded, which is beyond our theoretical assumptions, we can still see in Figure 4 that as we increase the group size $|\Sigma|$ in the computation of $W_\Sigma(P_n, Q_m)$, fewer samples are needed to reach the same accuracy level in the approximation. The ratios between adjacent curves in this case are slightly above the predicted value $\sqrt{2} \approx 1.414$ according to our theory (see Remark 3), suggesting that the complexity bound could be further improved. For instance, in [58], a logarithmic correction term can be revealed for $d = 2$ after a more thorough analysis.
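The $C_k$-symmetrized estimation in Example 2 can be mimicked by orbit augmentation, the empirical counterpart of $Q_\Sigma[P_n]$. A sketch (ours, with an assumed covariance $0.1^2 I$ for the mixture components) showing that even the partial group $C_4$ removes the leading sampling error in the first moment:

```python
import numpy as np

rng = np.random.default_rng(5)

def sample_mixture(n, sigma=0.1):
    """Mixture of 8 Gaussians centred on the unit circle (C_8-invariant);
    the covariance sigma**2 * I is an assumed choice for illustration."""
    j = rng.integers(0, 8, n)
    centers = np.stack([np.cos(2 * np.pi * j / 8),
                        np.sin(2 * np.pi * j / 8)], axis=1)
    return centers + sigma * rng.normal(size=(n, 2))

def orbit_augment(x, k):
    """Empirical counterpart of Q_Sigma[P_n]: replace each sample by its
    full C_k orbit (group-averaged data augmentation)."""
    out = []
    for j in range(k):
        t = 2 * np.pi * j / k
        R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
        out.append(x @ R.T)
    return np.concatenate(out)

x = sample_mixture(400)
aug = orbit_augment(x, 4)
print(np.linalg.norm(x.mean(axis=0)), np.linalg.norm(aug.mean(axis=0)))
# the C_4-averaged sample recovers the exact first moment (0, 0)
```

This is consistent with the observation above: even a subgroup of the full symmetry group already improves the empirical estimate.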
Figure 3: Left: the Wasserstein-1 distance with different group sizes on $[0, 1)$, averaged over 10 replicas. Right: the ratio of the average Wasserstein-1 distance between different group sizes: $|\Sigma| = 1$ over $|\Sigma| = 4$ (blue), $|\Sigma| = 4$ over $|\Sigma| = 16$ (red), $|\Sigma| = 16$ over $|\Sigma| = 64$ (orange), $|\Sigma| = 64$ over $|\Sigma| = 256$ (purple). The black horizontal dashed line refers to the ratio equal to 4, which is the value theoretically predicted in Theorem 1 for $d = 1$. See Example 1 and Remark 10 for details.
Figure 4: Left: the Wasserstein-1 distance assuming different group sizes in $\mathbb{R}^2$, averaged over 10 replicas. Right: the ratio of the average Wasserstein-1 distance between different group sizes: $|\Sigma| = 1$ over $|\Sigma| = 2$ (blue), $|\Sigma| = 2$ over $|\Sigma| = 4$ (red). The black horizontal dashed line refers to the ratio equal to $\sqrt{2}$, which is the value theoretically predicted in Theorem 1 for $d = 2$. The ratios are slightly above the reference line, suggesting that the complexity bound could be further improved. See Example 2 and Remark 3 for details.

4.2 Lipschitz-regularized $\alpha$-divergence, $D_{f_\alpha}^\Gamma(P \| Q)$
The space $\Gamma$ in this section is always set to $\Gamma = \mathrm{Lip}_L(\mathcal{X})$; see Eq. (7). We only consider $\alpha > 1$, as the case $0 < \alpha < 1$ can be derived in a similar manner. For $\alpha > 1$, the Legendre transform of $f_\alpha$, which is defined in (7), is

$$f_\alpha^*(y) = \left( \alpha^{-1} (\alpha - 1)^{\frac{\alpha}{\alpha - 1}}\, y^{\frac{\alpha}{\alpha - 1}} + \frac{1}{\alpha(\alpha - 1)} \right) \mathbb{1}_{y \ge 0}.$$
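For completeness, the expression above follows from a one-line computation (our derivation, consistent with definition (7)); for $y > 0$,

```latex
% f_alpha^*(y) = sup_{x >= 0} { x y - f_alpha(x) }; the first-order
% condition y = x^{alpha-1}/(alpha-1) gives the maximizer
%   x^* = ((alpha - 1) y)^{1/(alpha - 1)}.
\begin{aligned}
f_\alpha^*(y) &= x^* y - \frac{(x^*)^\alpha - 1}{\alpha(\alpha-1)} \\
 &= (\alpha-1)^{\frac{1}{\alpha-1}}\, y^{\frac{\alpha}{\alpha-1}}
    - \frac{(\alpha-1)^{\frac{\alpha}{\alpha-1}}}{\alpha(\alpha-1)}\,
      y^{\frac{\alpha}{\alpha-1}}
    + \frac{1}{\alpha(\alpha-1)} \\
 &= \alpha^{-1}(\alpha-1)^{\frac{\alpha}{\alpha-1}}\, y^{\frac{\alpha}{\alpha-1}}
    + \frac{1}{\alpha(\alpha-1)} .
\end{aligned}
```

The constant term $\frac{1}{\alpha(\alpha-1)}$ comes from the $-1$ in the numerator of $f_\alpha$.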
We provide a theorem for the sample complexity of the $(f_\alpha, \Gamma)$-divergence under group invariance, whose detailed statement and proof can be found in Section 6.2. We note that this is a new sample complexity result for the $(f_\alpha, \Gamma)$-divergence even without the group structure, which has been missing in the literature.
Theorem 3 (Finite groups).

Let $\mathcal{X} = \Sigma \times \mathcal{X}_0$ be a subset of $\mathbb{R}^D$ equipped with the Euclidean distance, $|\Sigma| < \infty$ and $\dim(\mathcal{X}) = d$. Let $f_\alpha(x) = \frac{x^\alpha - 1}{\alpha(\alpha - 1)}$, $\alpha > 1$, and $\Gamma = \mathrm{Lip}_L(\mathcal{X})$. Suppose $P$ and $Q$ are $\Sigma$-invariant distributions on $\mathcal{X}$. If the numbers of samples $n, m$ drawn from $P$ and $Q$ are sufficiently large, we have with high probability:

- when $d \ge 2$, for any $\epsilon > 0$,

$$|D_{f_\alpha}^\Gamma(P \| Q) - D_{f_\alpha}^{\Gamma_\Sigma}(P_n \| Q_m)| \le C_1 \left( \frac{1}{|\Sigma| n} \right)^{\frac{1}{d + \epsilon}} + C_2 \left( \frac{1}{|\Sigma| m} \right)^{\frac{1}{d + \epsilon}}, \tag{35}$$

where $C_1$ depends only on $d$, $\epsilon$ and $\mathcal{X}$; $C_2$ depends only on $d$, $\epsilon$, $\mathcal{X}$ and $\alpha$; both $C_1$ and $C_2$ are independent of $n$ and $m$;

- when $d = 1$,

$$|D_{f_\alpha}^\Gamma(P \| Q) - D_{f_\alpha}^{\Gamma_\Sigma}(P_n \| Q_m)| \le \mathrm{diam}(\mathcal{X}_0) \left( \frac{C_1}{\sqrt{n}} + \frac{C_2}{\sqrt{m}} \right), \tag{36}$$

where $C_1$ and $C_2$ are independent of $\mathcal{X}_0$, $n$ and $m$; $C_2$ depends on $\alpha$.
When $\Sigma$ is infinite and $\dim(\mathcal{X}_0) = d^*$ for some $d^* < d$, we have the following result with improved convergence rates.
Theorem 4 (Infinite groups).
Let $\mathcal{X} = \Sigma \times \mathcal{X}_0$ be a subset of $\mathbb{R}^D$ equipped with the Euclidean distance and $\dim(\mathcal{X}_0) = d^*$. Let $f_\alpha(x) = \frac{x^\alpha - 1}{\alpha(\alpha - 1)}$, $\alpha > 1$, and $\Gamma = \mathrm{Lip}_L(\mathcal{X})$. Suppose $P$ and $Q$ are $\Sigma$-invariant distributions on $\mathcal{X}$. If the numbers of samples $m$, $n$ drawn from $P$ and $Q$ are sufficiently large, then with high probability:

- when $d \ge 2$, for any $s > 0$,

$$ \big| D^{\Gamma}_{f_\alpha}(P \| Q) - D^{\Gamma_\Sigma}_{f_\alpha}(P_m \| Q_n) \big| \le C_1 \Big( \frac{\mathrm{vol}(\mathcal{X}_0)}{m} \Big)^{\frac{1}{d^* + s}} + C_2 \Big( \frac{\mathrm{vol}(\mathcal{X}_0)}{n} \Big)^{\frac{1}{d^* + s}}, \qquad (37) $$

where $C_1$ depends only on $d$, $s$ and $\mathcal{X}$; $C_2$ depends only on $d$, $s$, $\mathcal{X}$ and $\alpha$; both $C_1$ and $C_2$ are independent of $m$ and $n$;

- when $d = 1$,

$$ \big| D^{\Gamma}_{f_\alpha}(P \| Q) - D^{\Gamma_\Sigma}_{f_\alpha}(P_m \| Q_n) \big| \le \mathrm{diam}(\mathcal{X}_0) \Big( \frac{C_1}{\sqrt{m}} + \frac{C_2}{\sqrt{n}} \Big), \qquad (38) $$

where $C_1$ and $C_2$ are independent of $\mathcal{X}_0$, $m$ and $n$; $C_2$ depends on $\alpha$.
Sketch of the proof. The idea is similar to the proof of Theorem 1. The only difference is that we need to treat the $f_\alpha^*(\gamma)$ term separately, since it is not translation-invariant in $\gamma$. Using the equivalent form (8), we obtain a different Lipschitz constant associated with $f_\alpha^*$, as well as a different $L^\infty$ bound $M$ from that in Eq. (31), by Lemma 7. This results in the $\alpha$-dependence of $C_2$.
Remark 5 (Compact Lie groups).
Similar to Theorem 1 and Footnote 1, $s$ in the bound can be removed in Theorem 3 and Theorem 4 when $\mathcal{X}_0$ is connected. Furthermore, if $\mathcal{X}$ is a $d$-dimensional connected submanifold, and $\Sigma$ is a compact Lie group acting locally smoothly on $\mathcal{X}$, then $d^* = d - \dim(\Sigma)$, where $\dim(\Sigma)$ is the dimension of a principal orbit (i.e., the maximal dimension among all orbits), by Theorem IV.3.8 in [12].
4.3 Maximum mean discrepancy, $\mathrm{MMD}(P, Q)$

Though one could utilize results on the covering number of the unit ball of a reproducing kernel Hilbert space, e.g. [66, 41], to derive sample complexity bounds that depend on the dimension $d$, we provide a dimension-independent bound as in [31] without the use of covering numbers. In the MMD case, we let $B_{\mathcal{H}}$ represent the unit ball in some reproducing kernel Hilbert space (RKHS) $\mathcal{H}$ on $\mathcal{X}$; see Eq. (5). In addition, we make the following assumptions on the kernel $k(x, y)$.
Assumption 1.
The kernel $k(x, y)$ for $\mathcal{H}$ satisfies:

- $k(x, y) \ge 0$ and $k(g(x), g(y)) = k(x, y)$ for all $g \in \Sigma$ and $x, y \in \mathcal{X}$;
- letting $K := \max_{x, y \in \mathcal{X}} k(x, y)$, we have $k(x, y) = K$ if and only if $x = y$;
- there exists $q_{\Sigma, k} \in (0, 1)$ such that for any $g \in \Sigma$ that is not the identity element and any $x \in \mathcal{X}_0$, we have $k(gx, x) \le q_{\Sigma, k}\, K$.
Intuitively, the third condition in Assumption 1 requires uniform decay of the kernel along the group orbits. See Remark 7 and Example 3 for more details and a related example.

From Lemma C.1 in [9], we know $S_\Sigma[\Gamma] \subset \Gamma$ by the first assumption. Below is an abbreviated statement of the sample complexity result for the MMD; the detailed statement and proof can be found in Section 6.3.
Theorem 5.
Let $\mathcal{X} = \Sigma \times \mathcal{X}_0$ and $|\Sigma| < \infty$. Let $\mathcal{H}$ be an RKHS on $\mathcal{X}$ whose kernel satisfies Assumption 1. Suppose $P$ and $Q$ are $\Sigma$-invariant distributions on $\mathcal{X}$. Then for $m$, $n$ sufficiently large, we have with high probability,

$$ \big| \mathrm{MMD}(P, Q) - \mathrm{MMD}^{\Sigma}(P_m, Q_n) \big| < O\Big( C_{\Sigma, k} \Big( \frac{1}{\sqrt{m}} + \frac{1}{\sqrt{n}} \Big) \Big), \qquad (39) $$

where $C_{\Sigma, k} = \sqrt{\frac{1 + q_{\Sigma, k}(|\Sigma| - 1)}{|\Sigma|}}$, and $q_{\Sigma, k}$ is the constant in Assumption 1.
Sketch of the proof. Based on Result 1, we use the equality $\mathrm{MMD}^\Sigma(P_m, Q_n) = \mathrm{MMD}(S_\Sigma[P_m], S_\Sigma[Q_n])$ to expand the divergence over all the orbit elements. The error bound is controlled in terms of the Rademacher average, whose supremum is attained at a known witness function due to the structure of the RKHS, using Lemma 8. Since the Rademacher average is estimated without covering numbers, the rate is independent of the dimension $d$. We then use the decay of the kernel to obtain the bound.
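As a numerical companion to Theorem 5, the symmetrized estimator $\mathrm{MMD}^\Sigma(P_m, Q_n) = \mathrm{MMD}(S_\Sigma[P_m], S_\Sigma[Q_n])$ can be sketched in a few lines with a Gaussian kernel and a cyclic rotation group (a minimal illustration, not the authors' experimental code; the distributions, group, and sample sizes here are assumptions):

```python
import numpy as np

def gaussian_kernel(X, Y, sigma):
    # k_sigma(x, y) = exp(-||x - y||^2 / (2 sigma^2)); returns the Gram matrix
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd_biased(X, Y, sigma):
    # Biased (V-statistic) estimate of MMD between empirical measures
    kxx = gaussian_kernel(X, X, sigma).mean()
    kyy = gaussian_kernel(Y, Y, sigma).mean()
    kxy = gaussian_kernel(X, Y, sigma).mean()
    return float(np.sqrt(max(kxx + kyy - 2.0 * kxy, 0.0)))

def symmetrize(X, n_rot):
    # S_Sigma[P_m]: average the empirical measure over the cyclic rotation group
    thetas = 2.0 * np.pi * np.arange(n_rot) / n_rot
    rots = np.stack([np.stack([np.cos(t) * X[:, 0] - np.sin(t) * X[:, 1],
                               np.sin(t) * X[:, 0] + np.cos(t) * X[:, 1]], axis=1)
                     for t in thetas])
    return rots.reshape(-1, 2)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))   # rotation-invariant distribution P
Y = rng.normal(size=(200, 2))   # same distribution, independent samples
plain = mmd_biased(X, Y, sigma=1.0)
sym = mmd_biased(symmetrize(X, 8), symmetrize(Y, 8), sigma=1.0)
print(plain, sym)  # symmetrization typically shrinks the estimation error
```

Here the true $\mathrm{MMD}(P, Q)$ is zero, so both estimates measure pure estimation error; averaging over the orbit usually brings the symmetrized estimate closer to zero.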
Remark 6.
When $|\Sigma| = 1$, the proof reduces to that in [58].
Remark 7.
Unlike the cases of the Wasserstein metric and the Lipschitz-regularized $\alpha$-divergence in Theorem 1 and Theorem 3, the improvement in sample complexity under group symmetry for the MMD (measured by $C_{\Sigma, k}$ in Theorem 5) depends not only on the group size $|\Sigma|$ but also on the kernel $k(x, y)$. For a fixed $\mathcal{X}$ and kernel $k(x, y)$, simply increasing the group size $|\Sigma|$ does not necessarily lead to a reduced sample complexity beyond a certain threshold; see the first four subfigures in Figure 5. However, we show in Example 3 below that, by adaptively picking a suitable kernel $k$ depending on the group size $|\Sigma|$, one can obtain an improvement in sample complexity of $C_{\Sigma, k} \approx 1/\sqrt{|\Sigma|}$ for arbitrarily large $|\Sigma|$.
Figure 5: MMD simulations with Gaussian kernels $k_\sigma(x, y) = e^{-\frac{\|x - y\|_2^2}{2\sigma^2}}$. From left to right, top to bottom: $\sigma = \frac{2\pi}{1 \times 6}$, $\sigma = \frac{2\pi}{4 \times 6}$, $\sigma = \frac{2\pi}{16 \times 6}$, $\sigma = \frac{2\pi}{64 \times 6}$, $\sigma = \frac{2\pi}{6 |\Sigma|}$. The first four subfigures (top two rows) show that a Gaussian kernel with a fixed bandwidth $\sigma > 0$ satisfies the third condition in Assumption 1 up to a group size of $|\Sigma| = N$, $N = 1, 4, 16, 64$, and thus an improvement in sample complexity of order $C_{\Sigma, k} \approx |\Sigma|^{-1/2}$ persists until $|\Sigma| = N$; when $|\Sigma| > N$, no further reduction in sample complexity is observed. The last subfigure demonstrates that with an adaptive bandwidth $\sigma$ inversely scaled with $|\Sigma|$, continued improvement of the sample complexity can be achieved as the group size $|\Sigma|$ increases. See Example 3 for the details and explanations.

Example 3.
Let $\mathcal{X} = \{(r \cos\theta, r \sin\theta) \in \mathbb{R}^2 : r \in [0, 1], \theta \in [0, 2\pi)\}$ be the unit disk centered at the origin, and let $k_\sigma(x, y) = e^{-\frac{\|x - y\|_2^2}{2\sigma^2}}$, $x, y \in \mathcal{X}$, be the Gaussian kernel. Consider the group actions generated by a rotation (about the origin) by $\frac{2\pi}{N}$, $N = 1, 4, 16, 64, 256$, so that $|\Sigma| = N$ is the group size. The fundamental domain under the $\Sigma$-action is $\mathcal{X}_0 = [0, 1] \times [0, \frac{2\pi}{N})$ in polar coordinates (see Figure 2 for a visual illustration). We draw samples $x_i \sim P \in \mathcal{P}_\Sigma(\mathcal{X})$ in the following way:

$$ x_i = r_i \Big( \cos\Big[ \frac{2\pi}{N} u_i^{1/3} + 2\pi c_i \Big],\ \sin\Big[ \frac{2\pi}{N} u_i^{1/3} + 2\pi c_i \Big] \Big), $$
where $r_i$ and $u_i$ are i.i.d. uniformly distributed random variables on $[0, 1)$, and $c_i$ takes values in $\{0, \frac{1}{N}, \dots, \frac{N-1}{N}\}$ with equal probabilities. We select the kernel bandwidth $\sigma > 0$ in different ways:

- Fixed $\sigma$ with changing group size $|\Sigma|$. We intuitively follow the "three-sigma rule" in the angular direction to pick different $\sigma$: since the angle of each sector is $\frac{2\pi}{N}$, we select $\sigma = \frac{2\pi}{6N}$, $N = 1, 4, 16, 64$. A smaller bandwidth $\sigma$ corresponds to faster decay of the kernel $k_\sigma(x, y)$, so that for a fixed bandwidth $\sigma = \frac{2\pi}{6N}$, the third condition in Assumption 1 is satisfied with a small $q$ for any group $\Sigma$ with $|\Sigma| \le N$, i.e., $C_{\Sigma, k} \approx |\Sigma|^{-1/2}$. On the other hand, it is difficult to observe any improvement from further increasing the group size beyond $|\Sigma| > N$, since the third condition in Assumption 1 is then not satisfied with any uniformly small $q$. See the top two rows in Figure 5 for the results for $\sigma = \frac{2\pi}{6N}$, $N = 1, 4, 16, 64$. Notice that the sample complexity improvement stops right at $|\Sigma| = N$, matching our theoretical result, Theorem 5.

- $\sigma$ inversely scaled with $|\Sigma|$, i.e., $\sigma = \frac{2\pi}{6|\Sigma|}$. Unlike the fixed $\sigma$ discussed previously, with these adaptive selections of kernels we observe continued improvement of the sample complexity as the group size $|\Sigma|$ increases; see the last row of Figure 5. This numerical result is explained by the third condition in Assumption 1: in order to continuously benefit from the increasing group size $|\Sigma|$, we need faster decay of the kernel $k_\sigma$ (i.e., smaller $\sigma$), so that $q_{\Sigma, k_\sigma}$ is uniformly small for all $|\Sigma|$.
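The sampling scheme of Example 3 can be sketched as follows (a minimal illustration under the reconstruction above; the angular-offset convention $2\pi c_i$ is an assumption of this sketch):

```python
import numpy as np

def sample_invariant_disk(m, N, rng):
    """Draw m samples from a Sigma-invariant distribution on the unit disk,
    where Sigma is the cyclic group of rotations by 2*pi/N."""
    r = rng.uniform(0.0, 1.0, size=m)        # radius, uniform on [0, 1)
    u = rng.uniform(0.0, 1.0, size=m)        # within-sector angular variable
    c = rng.integers(0, N, size=m) / N       # uniform over {0, 1/N, ..., (N-1)/N}
    theta = (2.0 * np.pi / N) * u ** (1.0 / 3.0) + 2.0 * np.pi * c
    return np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)

rng = np.random.default_rng(0)
X = sample_invariant_disk(10_000, N=4, rng=rng)
# Rotating the sample by 2*pi/N should leave the distribution unchanged:
t = 2.0 * np.pi / 4
R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
Xr = X @ R.T
# Compare a rotation-sensitive statistic before and after rotation
print(np.mean(X[:, 0] * X[:, 1]), np.mean(Xr[:, 0] * Xr[:, 1]))
```

By construction, each sample lands in a uniformly chosen rotated copy of the fundamental sector, which is exactly what $\Sigma$-invariance of $P$ requires.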
Remark 8.
The bound provided in Theorem 5 for the MMD case is almost sharp in the sense that, by a direct calculation, one can obtain

$$ \frac{E_X\, \mathrm{MMD}^\Sigma(P, P_m)^2}{E_X\, \mathrm{MMD}(P, P_m)^2} \approx C_{\Sigma, k}^2, $$

if the Gaussian kernel bandwidth $\sigma \approx \frac{2 q_{\Sigma, k}\, \pi}{|\Sigma|}$.
5 Conclusion and future work

We provide a rigorous analysis quantifying the reduction in sample complexity of variational divergence estimation between group-invariant distributions. We obtain a reduction in the error bound by a power of the group size when the group is finite. For the Wasserstein-1 metric and the Lipschitz-regularized $\alpha$-divergence, the exponent on the group size depends on the intrinsic dimension of the support, characterized by the covering-number rate; for the MMD, the reduction is instead independent of the ambient dimension, as in [30, 32].

This work also motivates some possible future directions. For the Wasserstein-1 metric in $\mathbb{R}^2$, one could potentially derive a sharper bound in terms of the group size. For the MMD with Gaussian kernels, it is worth investigating how to choose the bandwidth to make as much use of the group structure as possible. Further applications of the theory to machine learning, such as neural generative models or neural estimation of divergences under symmetry, are also expected.
6 Proofs

In this section, we provide detailed statements of the theorems introduced in Section 4 as well as their proofs.

6.1 Wasserstein-1 metric

Assumption 2.
Let $\mathcal{X} = \Sigma \times \mathcal{X}_0 \subset \mathbb{R}^D$. Assume that there exists some $\delta_0 > 0$ such that

1) $\|g(x) - g'(x')\|_2 \ge 2\delta_0$ for all $x, x' \in \mathcal{X}_0$ and $g \ne g' \in \Sigma$; and

2) $\|g(x) - g(x')\|_2 \ge \|x - x'\|_2$ for all $x, x' \in \mathcal{X}_0$ and $g \in \Sigma$,

where $\|\cdot\|_2$ is the Euclidean norm on $\mathbb{R}^D$.

Example 1 provides a simple example where this assumption holds.
Theorem 6.
Let $\mathcal{X} = \Sigma \times \mathcal{X}_0$ be a subset of $\mathbb{R}^D$ satisfying the conditions in Assumption 2. Assume that $\mathcal{N}(\mathcal{X}, \delta) \lesssim \delta^{-d}$ for sufficiently small $\delta$. Suppose $P$ and $Q$ are $\Sigma$-invariant probability measures on $\mathcal{X}$.

1) If $d \ge 2$, then for any $s > 0$, $\rho > 0$ and $m$, $n$ sufficiently large, we have with probability at least $1 - \rho$,

$$ |W(P, Q) - W^\Sigma(P_m, Q_n)| \le \Big( 8 + \frac{24}{\frac{d+s}{2} - 1} \Big) \Big[ \Big( \frac{9 D_{\mathcal{X}, L}^2}{|\Sigma|\, m} \Big)^{\frac{1}{d+s}} + \Big( \frac{9 D_{\mathcal{X}, L}^2}{|\Sigma|\, n} \Big)^{\frac{1}{d+s}} \Big] + \bar{D}_{\mathcal{X}_0, L} \Big( \frac{24}{\sqrt{m}} + \frac{24}{\sqrt{n}} \Big) + L \cdot \mathrm{diam}(\mathcal{X}_0) \sqrt{\frac{2(m+n)}{mn} \ln \frac{1}{\rho}}, $$

where $D_{\mathcal{X}, L}$ depends only on $\mathcal{X}$ and $L$; $\bar{D}_{\mathcal{X}_0, L}$ depends only on $\mathcal{X}_0$ and $L$, and is increasing in $\mathcal{X}_0$, i.e., $\bar{D}_{A_1, L} \le \bar{D}_{A_2, L}$ for $A_1 \subset A_2$.

2) If $d = 1$, then for any $\rho > 0$ and $m$, $n$ sufficiently large, we have with probability at least $1 - \rho$,

$$ |W(P, Q) - W^\Sigma(P_m, Q_n)| \le c\, L \cdot \mathrm{diam}(\mathcal{X}_0) \Big( \frac{1}{\sqrt{m}} + \frac{1}{\sqrt{n}} \Big) + L \cdot \mathrm{diam}(\mathcal{X}_0) \sqrt{\frac{2(m+n)}{mn} \ln \frac{1}{\rho}}, $$

where $c > 0$ is an absolute constant independent of $\mathcal{X}$ and $\mathcal{X}_0$.
Before proving this theorem, we first establish the following lemmas.

Lemma 1.
Suppose the $\Sigma$-actions on $\mathcal{X}$ are $1$-Lipschitz, i.e., $\|gx - gy\|_2 \le \|x - y\|_2$ for any $x, y \in \mathcal{X}$ and $g \in \Sigma$. Then we have $S_\Sigma[\Gamma] \subset \Gamma$, where $\Gamma = \mathrm{Lip}_L(\mathcal{X})$.
Proof.
For any $x, y \in \mathcal{X}$ and $f \in \Gamma$, we have

$$ |S_\Sigma(f)(x) - S_\Sigma(f)(y)| = \Big| \frac{1}{|\Sigma|} \sum_{g \in \Sigma} f(gx) - \frac{1}{|\Sigma|} \sum_{g \in \Sigma} f(gy) \Big| \le \frac{1}{|\Sigma|} \sum_{g \in \Sigma} |f(gx) - f(gy)| \le \frac{1}{|\Sigma|} \sum_{g \in \Sigma} L \|gx - gy\|_2 \le \frac{1}{|\Sigma|} \sum_{g \in \Sigma} L \|x - y\|_2 = L \|x - y\|_2. $$

Therefore, we have $S_\Sigma(f) \in \Gamma$. ∎
Lemma 2.
For any $\gamma \in \mathrm{Lip}_L(\mathcal{X}_0)$, there exists $c \in \mathbb{R}$ such that $\|\gamma + c\|_\infty \le L \cdot \mathrm{diam}(\mathcal{X}_0)$.

Proof.
Suppose $\gamma \in \mathrm{Lip}_L(\mathcal{X}_0)$ and $\|\gamma\|_\infty > L \cdot \mathrm{diam}(\mathcal{X}_0)$; without loss of generality, we may assume $\sup_{x \in \mathcal{X}_0} \gamma(x) > L \cdot \mathrm{diam}(\mathcal{X}_0)$. Since $\gamma$ is $L$-Lipschitz on $\mathcal{X}_0$, we have $\sup_{x \in \mathcal{X}_0} \gamma(x) - \inf_{x \in \mathcal{X}_0} \gamma(x) \le L \cdot \mathrm{diam}(\mathcal{X}_0)$, so that

$$ \inf_{x \in \mathcal{X}_0} \gamma(x) \ge \sup_{x \in \mathcal{X}_0} \gamma(x) - L \cdot \mathrm{diam}(\mathcal{X}_0) > 0. $$

Hence we can select $c = -\frac{1}{2} \inf_{x \in \mathcal{X}_0} \gamma(x)$, so that $\|\gamma + c\|_\infty < \|\gamma\|_\infty$; repeating this shift until the sup-norm condition is met yields the claim. ∎
We provide a variant of Dudley's entropy integral, together with its proof, for completeness.

Lemma 3.
Suppose $\mathcal{F}$ is a family of functions mapping the metric space $(\mathcal{X}, \tau)$ to $[-M, M]$ for some $M > 0$. Also assume that $0 \in \mathcal{F}$ and $\mathcal{F} = -\mathcal{F}$. Let $\sigma = \{\sigma_1, \dots, \sigma_n\}$ be a set of independent random variables taking values in $\{-1, 1\}$ with equal probabilities, and let $x_1, x_2, \dots, x_n \in \mathcal{X}$. Then we have

$$ E_\sigma \sup_{f \in \mathcal{F}} \Big| \frac{1}{n} \sum_{i=1}^n \sigma_i f(x_i) \Big| \le \inf_{\alpha > 0} \Big\{ 4\alpha + \frac{12}{\sqrt{n}} \int_\alpha^M \sqrt{\ln \mathcal{N}(\mathcal{F}, \delta, \|\cdot\|_\infty)}\, \mathrm{d}\delta \Big\}. $$
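The finite-class (Massart) bound that drives each level of the chaining argument below can be sanity-checked by Monte Carlo on a small synthetic class; this is an illustrative sketch only, and all function values in it are assumptions:

```python
import numpy as np

# Monte Carlo estimate of the Rademacher average E_sigma sup_{f in F} |1/n sum_i sigma_i f(x_i)|
# for a finite class F, compared against the Massart finite-class bound
# sqrt(2 ln(2|F|)) * max_f ||(f(x_1),...,f(x_n))||_2 / n
# (the factor 2|F| accounts for the absolute value, i.e., the class F union -F).
rng = np.random.default_rng(0)
n, num_funcs = 200, 16
F = rng.uniform(-1.0, 1.0, size=(num_funcs, n))  # row j holds (f_j(x_1), ..., f_j(x_n))

num_mc = 2000
sigmas = rng.choice([-1.0, 1.0], size=(num_mc, n))
# sup over the class of |(1/n) sum_i sigma_i f(x_i)|, averaged over sigma draws
rademacher = float(np.abs(sigmas @ F.T / n).max(axis=1).mean())

massart = float(np.sqrt(2.0 * np.log(2 * num_funcs)) * np.linalg.norm(F, axis=1).max() / n)
print(rademacher, massart)
```

The Monte Carlo value sits below the Massart bound, as the chaining proof requires at every scale $\delta_j$.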
The proof of Lemma 3 is standard, using dyadic chaining; see, e.g., the proof of Lemma A.5 in [2].
Proof.
Let $N$ be an arbitrary positive integer and $\delta_j = M 2^{-(j-1)}$, $j = 1, \dots, N$. Let $T_j$ be the cover achieving $\mathcal{N}(\mathcal{F}, \delta_j, \|\cdot\|_\infty)$ and denote $|T_j| = \mathcal{N}(\mathcal{F}, \delta_j, \|\cdot\|_\infty)$. For any $f \in \mathcal{F}$, let $\hat{f}_j(f) \in T_j$ be such that $\|f - \hat{f}_j(f)\|_\infty \le \delta_j$. We have

$$\begin{aligned} E_\sigma \sup_{f \in \mathcal{F}} \Big| \frac{1}{n} \sum_{i=1}^n \sigma_i f(x_i) \Big| \le\ & E_\sigma \sup_{f \in \mathcal{F}} \Big| \frac{1}{n} \sum_{i=1}^n \sigma_i \big( f(x_i) - \hat{f}_N(f)(x_i) \big) \Big| + \sum_{j=1}^{N-1} E_\sigma \sup_{f \in \mathcal{F}} \Big| \frac{1}{n} \sum_{i=1}^n \sigma_i \big( \hat{f}_{j+1}(f)(x_i) - \hat{f}_j(f)(x_i) \big) \Big| \\ & + E_\sigma \sup_{f \in \mathcal{F}} \Big| \frac{1}{n} \sum_{i=1}^n \sigma_i \hat{f}_1(f)(x_i) \Big|. \end{aligned}$$

The first term on the right-hand side is bounded by $\delta_N$. Note that we can choose $T_1 = \{0\}$, so that $\hat{f}_1(f)$ is the zero function. For each $j$, let $V_j = \{\hat{f}_{j+1}(f) - \hat{f}_j(f) : f \in \mathcal{F}\}$. We have $|V_j| \le |T_{j+1}| |T_j| \le |T_{j+1}|^2$. Then we have

$$ \sum_{j=1}^{N-1} E_\sigma \sup_{f \in \mathcal{F}} \Big| \frac{1}{n} \sum_{i=1}^n \sigma_i \big( \hat{f}_{j+1}(f)(x_i) - \hat{f}_j(f)(x_i) \big) \Big| = \sum_{j=1}^{N-1} E_\sigma \sup_{w \in V_j} \Big| \frac{1}{n} \sum_{i=1}^n \sigma_i w(x_i) \Big|. $$

In addition, we have

$$ \sup_{w \in V_j} \sqrt{\sum_{i=1}^n w(x_i)^2} = \sup_{f \in \mathcal{F}} \sqrt{\sum_{i=1}^n \big( \hat{f}_{j+1}(f)(x_i) - \hat{f}_j(f)(x_i) \big)^2} \le \sup_{f \in \mathcal{F}} \sqrt{\sum_{i=1}^n \big( \hat{f}_{j+1}(f)(x_i) - f(x_i) \big)^2} + \sup_{f \in \mathcal{F}} \sqrt{\sum_{i=1}^n \big( f(x_i) - \hat{f}_j(f)(x_i) \big)^2} \le \sqrt{n\, \delta_{j+1}^2} + \sqrt{n\, \delta_j^2} = \sqrt{n}\, (\delta_{j+1} + \delta_j) = 3 \sqrt{n}\, \delta_{j+1}. $$

By the Massart finite class lemma (see, e.g., [45]), we have

$$ E_\sigma \sup_{w \in V_j} \Big| \frac{1}{n} \sum_{i=1}^n \sigma_i w(x_i) \Big| \le \frac{3 \sqrt{n}\, \delta_{j+1} \sqrt{2 \ln |V_j|}}{n} \le \frac{6\, \delta_{j+1} \sqrt{\ln |T_{j+1}|}}{\sqrt{n}}. $$

Therefore,

$$ E_\sigma \sup_{f \in \mathcal{F}} \Big| \frac{1}{n} \sum_{i=1}^n \sigma_i f(x_i) \Big| \le \delta_N + \frac{6}{\sqrt{n}} \sum_{j=1}^{N-1} \delta_{j+1} \sqrt{\ln \mathcal{N}(\mathcal{F}, \delta_{j+1}, \|\cdot\|_\infty)} = \delta_N + \frac{12}{\sqrt{n}} \sum_{j=1}^{N-1} (\delta_{j+1} - \delta_{j+2}) \sqrt{\ln \mathcal{N}(\mathcal{F}, \delta_{j+1}, \|\cdot\|_\infty)} \le \delta_N + \frac{12}{\sqrt{n}} \int_{\delta_{N+1}}^M \sqrt{\ln \mathcal{N}(\mathcal{F}, \delta, \|\cdot\|_\infty)}\, \mathrm{d}\delta. $$

Finally, select any $\alpha \in (0, M)$ and let $N$ be the largest integer with $\delta_{N+1} > \alpha$ (implying $\delta_{N+2} \le \alpha$ and $\delta_N = 4 \delta_{N+2} \le 4\alpha$), so that

$$ \delta_N + \frac{12}{\sqrt{n}} \int_{\delta_{N+1}}^M \sqrt{\ln \mathcal{N}(\mathcal{F}, \delta, \|\cdot\|_\infty)}\, \mathrm{d}\delta \le 4\alpha + \frac{12}{\sqrt{n}} \int_\alpha^M \sqrt{\ln \mathcal{N}(\mathcal{F}, \delta, \|\cdot\|_\infty)}\, \mathrm{d}\delta. $$

∎
We can easily extend Lemma 6 in [29] to the following lemma by meshing the range $[-M, M]$ rather than $[0, 1]$.

Lemma 4.
Let $\mathcal{F}$ be the family of $L$-Lipschitz functions mapping the metric space $(\mathcal{X}, \|\cdot\|_2)$ to $[-M, M]$ for some $M > 0$. Then we have

$$ \mathcal{N}(\mathcal{F}, \delta, \|\cdot\|_\infty) \le \Big( \frac{c_1 M}{\delta} \Big)^{\mathcal{N}(\mathcal{X}, \frac{c_2 \delta}{L})}, $$

where $c_1 \ge 1$ and $c_2 \le 1$ are absolute constants not depending on $\mathcal{X}$, $M$, and $\delta$.

Remark 9.
If $\mathcal{X}$ is connected, then the bound can be improved to $\mathcal{N}(\mathcal{F}, \delta, \|\cdot\|_\infty) \le c^{\mathcal{N}(\mathcal{X}, \frac{c_2 \delta}{L})}$ for an absolute constant $c$, by the result in [40].
Lemma 5 (Theorem 3 in [56]).
Assume that $\mathcal{X} = \Sigma \times \mathcal{X}_0$. If for some $\delta > 0$ we have 1) $\|g(x) - g'(x')\|_2 \ge 2\delta$ for all $x, x' \in \mathcal{X}_0$ and $g \ne g' \in \Sigma$; and 2) $\|g(x) - g(x')\|_2 \ge \|x - x'\|_2$ for all $x, x' \in \mathcal{X}_0$ and $g \in \Sigma$, then we have

$$ \frac{\mathcal{N}(\mathcal{X}_0, \delta)}{\mathcal{N}(\mathcal{X}, \delta)} \le \frac{1}{|\Sigma|}. $$
In addition, we provide the following lemma on the scaling of covering numbers.

Lemma 6.
Let $\mathcal{X}$ be a subset of $\mathbb{R}^D$ with $\mathcal{N}(\mathcal{X}, \delta) \lesssim \delta^{-d}$, and let $\bar{\delta} > 0$. Then there exists a constant $C_{d, \bar\delta}$ depending on $d$ and $\bar\delta$ such that for $\delta \in (0, 1)$ we have

$$ \mathcal{N}(\mathcal{X}, \delta) \le C_{d, \bar\delta} \cdot \frac{\mathcal{N}(\mathcal{X}, \bar\delta)}{\delta^d}. $$

Proof.
Let $n := \mathcal{N}(\mathcal{X}, \bar\delta)$. Then $\mathcal{X}$ can be covered by $n$ balls of radius $\bar\delta$. From Proposition 4.2.12 in [62], each ball of radius $\bar\delta$ can be covered by $\frac{(\bar\delta + \delta/2)^d}{(\delta/2)^d}$ balls of radius $\delta$. This implies that $\mathcal{X}$ can be covered by $n \cdot \frac{(\bar\delta + \delta/2)^d}{(\delta/2)^d}$ balls of radius $\delta$, so that $\mathcal{N}^{\mathrm{ext}}(\mathcal{X}, \delta) \le n \cdot \frac{(\bar\delta + \delta/2)^d}{(\delta/2)^d}$, where $\mathcal{N}^{\mathrm{ext}}(\mathcal{X}, \delta)$ is the exterior covering number of $\mathcal{X}$ with radius $\delta$. Therefore, $\mathcal{N}(\mathcal{X}, \delta) \le \mathcal{N}^{\mathrm{ext}}(\mathcal{X}, \delta/2) \le n \cdot \frac{(\bar\delta + \delta/4)^d}{(\delta/4)^d} = n \Big( \frac{4\bar\delta}{\delta} + 1 \Big)^d \le n \cdot \frac{(4\bar\delta + 1)^d}{\delta^d}$. ∎
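The covering-number scalings behind Lemmas 5 and 6 can be illustrated numerically with a greedy $\delta$-net on a point-cloud discretization (the discretization and the rotation group of size 4 are assumptions of this sketch; a greedy net is only a proxy for the true covering number):

```python
import numpy as np

def greedy_net_size(points, delta):
    """Greedy delta-net: repeatedly pick an uncovered point and discard
    everything within distance delta. The count upper-bounds the covering
    number of the point cloud at radius delta."""
    remaining = points
    count = 0
    while len(remaining) > 0:
        center = remaining[0]
        dist = np.linalg.norm(remaining - center, axis=1)
        remaining = remaining[dist > delta]
        count += 1
    return count

rng = np.random.default_rng(0)
# Dense point cloud on the unit disk (proxy for the set X)
pts = rng.uniform(-1, 1, size=(60_000, 2))
disk = pts[np.linalg.norm(pts, axis=1) <= 1.0]
# Fundamental domain X_0 for the rotation group of size 4: one quadrant sector
angles = np.arctan2(disk[:, 1], disk[:, 0]) % (2 * np.pi)
sector = disk[angles < np.pi / 2]

delta = 0.1
n_full = greedy_net_size(disk, delta)
n_sector = greedy_net_size(sector, delta)
print(n_full, n_sector, n_sector / n_full)  # ratio roughly 1/|Sigma| = 0.25
```

Up to boundary effects, the sector needs about a quarter as many $\delta$-balls as the full disk, which is the mechanism by which $|\Sigma|$ enters the bounds.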
Proof of Theorem 6.

$$\begin{aligned} |W(P, Q) - W^\Sigma(P_m, Q_n)| &= \Big| \sup_{\gamma \in \Gamma_\Sigma} \{ E_P[\gamma] - E_Q[\gamma] \} - \sup_{\gamma \in \Gamma_\Sigma} \{ E_{P_m}[\gamma] - E_{Q_n}[\gamma] \} \Big| \\ &\le \sup_{\gamma \in \Gamma_\Sigma} \Big| E_P[\gamma] - \frac{1}{m} \sum_{i=1}^m \gamma(x_i) - \Big( E_Q[\gamma] - \frac{1}{n} \sum_{i=1}^n \gamma(y_i) \Big) \Big| \\ &= \sup_{\gamma \in \Gamma_\Sigma} \Big| E_P[\gamma] - \frac{1}{m} \sum_{i=1}^m \gamma(p_0(x_i)) - \Big( E_Q[\gamma] - \frac{1}{n} \sum_{i=1}^n \gamma(p_0(y_i)) \Big) \Big| \\ &\stackrel{(i)}{\le} \sup_{\gamma \in \mathrm{Lip}_L(\mathcal{X}_0)} \Big| E_{P_{\mathcal{X}_0}}[\gamma] - \frac{1}{m} \sum_{i=1}^m \gamma(p_0(x_i)) - \Big( E_{Q_{\mathcal{X}_0}}[\gamma] - \frac{1}{n} \sum_{i=1}^n \gamma(p_0(y_i)) \Big) \Big| \qquad (40) \\ &:= g(x_1, \dots, x_m, y_1, \dots, y_n), \end{aligned}$$

where $p_0 : \mathcal{X} \to \mathcal{X}_0$ maps each point to its orbit representative in the fundamental domain, and inequality $(i)$ is due to the fact that $E_P[\gamma] = E_{P_{\mathcal{X}_0}}[\gamma|_{\mathcal{X}_0}]$ and $E_Q[\gamma] = E_{Q_{\mathcal{X}_0}}[\gamma|_{\mathcal{X}_0}]$, since $P$ and $Q$ are both $\Sigma$-invariant and $\gamma \in \Gamma_\Sigma$, together with the fact that if $\gamma \in \Gamma_\Sigma$, then $\gamma|_{\mathcal{X}_0} \in \mathrm{Lip}_L(\mathcal{X}_0)$, where $\gamma|_{\mathcal{X}_0}$ is the restriction of $\gamma$ to $\mathcal{X}_0$.

Note that the quantity inside the absolute value in (40) does not change if we replace $\gamma$ by $\gamma + c$, and we still have $\gamma + c \in \mathrm{Lip}_L(\mathcal{X}_0)$ for any $c \in \mathbb{R}$. Therefore, by Lemma 2, the supremum in (40) can be taken over $\gamma \in \mathrm{Lip}_L(\mathcal{X}_0)$ with $\|\gamma\|_\infty \le L \cdot \mathrm{diam}(\mathcal{X}_0)$. The denominator in the exponent when applying McDiarmid's inequality is thus equal to

$$ m \Big( \frac{2 L \cdot \mathrm{diam}(\mathcal{X}_0)}{m} \Big)^2 + n \Big( \frac{2 L \cdot \mathrm{diam}(\mathcal{X}_0)}{n} \Big)^2 = 4 L^2\, \mathrm{diam}(\mathcal{X}_0)^2\, \frac{m + n}{m n}. \qquad (41) $$
Denote by $X' = \{x'_1, x'_2, \dots, x'_m\}$ and $Y' = \{y'_1, y'_2, \dots, y'_n\}$ i.i.d. samples drawn from $P_{\mathcal{X}_0}$ and $Q_{\mathcal{X}_0}$. Also note that $p_0(x_1), \dots, p_0(x_m)$ and $p_0(y_1), \dots, p_0(y_n)$ can be viewed as i.i.d. samples on $\mathcal{X}_0$ drawn from $P_{\mathcal{X}_0}$ and $Q_{\mathcal{X}_0}$, respectively, so that the expectation

$$ E_{X, Y}\, g(x_1, x_2, \dots, x_m, y_1, y_2, \dots, y_n) = E_{X, Y} \sup_{\gamma \in \mathrm{Lip}_L(\mathcal{X}_0)} \Big| E_{P_{\mathcal{X}_0}}[\gamma] - \frac{1}{m} \sum_{i=1}^m \gamma(p_0(x_i)) - \Big( E_{Q_{\mathcal{X}_0}}[\gamma] - \frac{1}{n} \sum_{i=1}^n \gamma(p_0(y_i)) \Big) \Big| $$

can be replaced by the equivalent quantity

$$ E_{X, Y} \sup_{\gamma \in \mathrm{Lip}_L(\mathcal{X}_0)} \Big| E_{P_{\mathcal{X}_0}}[\gamma] - \frac{1}{m} \sum_{i=1}^m \gamma(x_i) - \Big( E_{Q_{\mathcal{X}_0}}[\gamma] - \frac{1}{n} \sum_{i=1}^n \gamma(y_i) \Big) \Big|, $$

where $X = \{x_1, x_2, \dots, x_m\}$ and $Y = \{y_1, y_2, \dots, y_n\}$ are i.i.d. samples on $\mathcal{X}_0$ drawn from $P_{\mathcal{X}_0}$ and $Q_{\mathcal{X}_0}$, respectively. Then we have
$$\begin{aligned} & E_{X, Y} \sup_{\gamma \in \mathrm{Lip}_L(\mathcal{X}_0)} \Big| E_{P_{\mathcal{X}_0}}[\gamma] - \frac{1}{m} \sum_{i=1}^m \gamma(x_i) - \Big( E_{Q_{\mathcal{X}_0}}[\gamma] - \frac{1}{n} \sum_{i=1}^n \gamma(y_i) \Big) \Big| \\ &= E_{X, Y} \sup_{\gamma \in \mathrm{Lip}_L(\mathcal{X}_0)} \Big| E_{X'} \Big( \frac{1}{m} \sum_{i=1}^m \gamma(x'_i) \Big) - \frac{1}{m} \sum_{i=1}^m \gamma(x_i) - E_{Y'} \Big( \frac{1}{n} \sum_{i=1}^n \gamma(y'_i) \Big) + \frac{1}{n} \sum_{i=1}^n \gamma(y_i) \Big| \\ &\le E_{X, Y, X', Y'} \sup_{\gamma \in \mathrm{Lip}_L(\mathcal{X}_0)} \Big| \frac{1}{m} \sum_{i=1}^m \gamma(x'_i) - \frac{1}{m} \sum_{i=1}^m \gamma(x_i) - \frac{1}{n} \sum_{i=1}^n \gamma(y'_i) + \frac{1}{n} \sum_{i=1}^n \gamma(y_i) \Big| \\ &= E_{X, Y, X', Y', \sigma, \sigma'} \sup_{\gamma \in \mathrm{Lip}_L(\mathcal{X}_0)} \Big| \frac{1}{m} \sum_{i=1}^m \sigma_i \big( \gamma(x'_i) - \gamma(x_i) \big) - \frac{1}{n} \sum_{i=1}^n \sigma'_i \big( \gamma(y'_i) - \gamma(y_i) \big) \Big| \\ &\le E_{X, X', \sigma} \sup_{\gamma \in \mathrm{Lip}_L(\mathcal{X}_0)} \Big| \frac{1}{m} \sum_{i=1}^m \sigma_i \big( \gamma(x'_i) - \gamma(x_i) \big) \Big| + E_{Y, Y', \sigma'} \sup_{\gamma \in \mathrm{Lip}_L(\mathcal{X}_0)} \Big| \frac{1}{n} \sum_{i=1}^n \sigma'_i \big( \gamma(y'_i) - \gamma(y_i) \big) \Big| \\ &\le \inf_{\alpha > 0} \Big\{ 8\alpha + \frac{24}{\sqrt{m}} \int_\alpha^M \sqrt{\ln \mathcal{N}(\mathcal{F}_0, \delta, \|\cdot\|_\infty)}\, \mathrm{d}\delta \Big\} + \inf_{\alpha > 0} \Big\{ 8\alpha + \frac{24}{\sqrt{n}} \int_\alpha^M \sqrt{\ln \mathcal{N}(\mathcal{F}_0, \delta, \|\cdot\|_\infty)}\, \mathrm{d}\delta \Big\}, \end{aligned}$$

where $\mathcal{F}_0 = \{\gamma \in \mathrm{Lip}_L(\mathcal{X}_0) : \|\gamma\|_\infty \le M\}$ and $M = L \cdot \mathrm{diam}(\mathcal{X}_0)$ by Lemma 2.
For $d \ge 2$, from Lemma 4 we have $\ln \mathcal{N}(\mathcal{F}_0, \delta, \|\cdot\|_\infty) \le \mathcal{N}(\mathcal{X}_0, \frac{c_2 \delta}{L}) \ln(\frac{c_1 M}{\delta})$. We fix a $\bar\delta > 0$ such that $\mathcal{N}(\mathcal{X}, \frac{c_2 \bar\delta}{L}) = 1$, and select $\delta^*$ such that $\frac{c_2 \delta^*}{L} \le 1$ and $\frac{c_2 \delta^*}{L} \le \delta_0$; that is, $\delta^* \le \min\big( \frac{L}{c_2}, \frac{L \delta_0}{c_2} \big)$, so that by Lemmas 5 and 6 we have

$$ \mathcal{N}\Big( \mathcal{X}_0, \frac{c_2 \delta}{L} \Big) \ln\Big( \frac{c_1 M}{\delta} \Big) \le \frac{\mathcal{N}(\mathcal{X}, \frac{c_2 \delta}{L})}{|\Sigma|} \ln\Big( \frac{c_1 M}{\delta} \Big) \le \frac{C_{d, \bar\delta}\, L^d}{|\Sigma|\, c_2^d\, \delta^d} \ln\Big( \frac{c_1 M}{\delta} \Big) $$

when $\delta < \delta^*$. Therefore, for sufficiently small $\alpha$, we have

$$ \int_\alpha^M \sqrt{\ln \mathcal{N}(\mathcal{F}_0, \delta, \|\cdot\|_\infty)}\, \mathrm{d}\delta = \int_\alpha^{\delta^*} \sqrt{\ln \mathcal{N}(\mathcal{F}_0, \delta, \|\cdot\|_\infty)}\, \mathrm{d}\delta + \int_{\delta^*}^M \sqrt{\ln \mathcal{N}(\mathcal{F}_0, \delta, \|\cdot\|_\infty)}\, \mathrm{d}\delta \le \int_\alpha^{\delta^*} \sqrt{\frac{C_{d, \bar\delta}\, L^d}{|\Sigma|\, c_2^d\, \delta^d} \ln\Big( \frac{c_1 M}{\delta} \Big)}\, \mathrm{d}\delta + \int_{\delta^*}^M \sqrt{\ln \mathcal{N}(\mathcal{F}_0, \delta, \|\cdot\|_\infty)}\, \mathrm{d}\delta. \qquad (42) $$

For any $s > 0$, we can choose $\delta^*$ to be sufficiently small so that $\ln(\frac{c_1 M}{\delta}) \le \frac{1}{\delta^s}$ when $\delta \le \delta^*$. Therefore, if we let $D_{\mathcal{X}, L} = \sqrt{\frac{C_{d, \bar\delta}\, L^d}{c_2^d}}$, we will have
$$ \int_\alpha^{\delta^*} \sqrt{\frac{C_{d, \bar\delta}\, L^d}{|\Sigma|\, c_2^d\, \delta^d} \ln\Big( \frac{c_1 M}{\delta} \Big)}\, \mathrm{d}\delta \le \frac{D_{\mathcal{X}, L}}{\sqrt{|\Sigma|}} \int_\alpha^{\delta^*} \delta^{-\frac{d+s}{2}}\, \mathrm{d}\delta \le \frac{D_{\mathcal{X}, L}}{\sqrt{|\Sigma|}} \int_\alpha^{\infty} \delta^{-\frac{d+s}{2}}\, \mathrm{d}\delta = \frac{D_{\mathcal{X}, L}}{\sqrt{|\Sigma|}} \cdot \frac{\alpha^{1 - \frac{d+s}{2}}}{\frac{d+s}{2} - 1}. $$

Notice that the second integral in (42) is bounded, while the first diverges as $\alpha$ tends to zero, so we can optimize the majorizing terms

$$ 8\alpha + \frac{24}{\sqrt{m}} \cdot \frac{D_{\mathcal{X}, L}}{\sqrt{|\Sigma|}} \cdot \frac{\alpha^{1 - \frac{d+s}{2}}}{\frac{d+s}{2} - 1} $$

with respect to $\alpha$, which yields

$$ \alpha = \Big( \frac{9}{m} \Big)^{\frac{1}{d+s}} \Big( \frac{D_{\mathcal{X}, L}^2}{|\Sigma|} \Big)^{\frac{1}{d+s}}, $$

so that

$$ \inf_{\alpha > 0} \Big\{ 8\alpha + \frac{24}{\sqrt{m}} \int_\alpha^M \sqrt{\ln \mathcal{N}(\mathcal{F}_0, \delta, \|\cdot\|_\infty)}\, \mathrm{d}\delta \Big\} \le 8 \Big( \frac{9 D_{\mathcal{X}, L}^2}{|\Sigma|\, m} \Big)^{\frac{1}{d+s}} + \frac{24}{\frac{d+s}{2} - 1} \Big( \frac{9 D_{\mathcal{X}, L}^2}{|\Sigma|\, m} \Big)^{\frac{1}{d+s}} + \frac{24}{\sqrt{m}} \int_{\delta^*}^M \sqrt{\ln \mathcal{N}(\mathcal{F}_0, \delta, \|\cdot\|_\infty)}\, \mathrm{d}\delta. $$

Therefore, for sufficiently large $m$ and $n$, we have

$$ E_{X, Y} \sup_{\gamma \in \mathrm{Lip}_L(\mathcal{X}_0)} \Big| E_{P_{\mathcal{X}_0}}[\gamma] - \frac{1}{m} \sum_{i=1}^m \gamma(x_i) - \Big( E_{Q_{\mathcal{X}_0}}[\gamma] - \frac{1}{n} \sum_{i=1}^n \gamma(y_i) \Big) \Big| \le \Big( 8 + \frac{24}{\frac{d+s}{2} - 1} \Big) \Big[ \Big( \frac{9 D_{\mathcal{X}, L}^2}{|\Sigma|\, m} \Big)^{\frac{1}{d+s}} + \Big( \frac{9 D_{\mathcal{X}, L}^2}{|\Sigma|\, n} \Big)^{\frac{1}{d+s}} \Big] + \Big( \frac{24}{\sqrt{m}} + \frac{24}{\sqrt{n}} \Big) \int_{\delta^*}^M \sqrt{\ln \mathcal{N}(\mathcal{F}_0, \delta, \|\cdot\|_\infty)}\, \mathrm{d}\delta. $$
For $d = 1$, the first integral in (42) does not have a singularity at $\alpha = 0$. On the other hand, the covering number $\mathcal{N}(\mathcal{F}_0, \delta, \|\cdot\|_\infty)$ is bounded by the covering number obtained by extending the domain to an interval containing $\mathcal{X}_0$. Replacing the interval $[0, 1]$ by an interval of length $\mathrm{diam}(\mathcal{X}_0)$ in Lemma 5.16 of [61], there exists a constant $c > 0$ such that

$$ \mathcal{N}(\mathcal{F}_0, \delta, \|\cdot\|_\infty) \le e^{\frac{c L \cdot \mathrm{diam}(\mathcal{X}_0)}{\delta}} \quad \text{for } \delta < M = L \cdot \mathrm{diam}(\mathcal{X}_0). $$

Therefore, we have

$$ 8\alpha + \frac{24}{\sqrt{m}} \int_\alpha^M \sqrt{\ln \mathcal{N}(\mathcal{F}_0, \delta, \|\cdot\|_\infty)}\, \mathrm{d}\delta \le 8\alpha + \frac{24}{\sqrt{m}} \int_\alpha^M \sqrt{\frac{c L \cdot \mathrm{diam}(\mathcal{X}_0)}{\delta}}\, \mathrm{d}\delta, $$

whose minimum is achieved at $\alpha = \frac{9 c L \cdot \mathrm{diam}(\mathcal{X}_0)}{m}$. This implies that

$$ \inf_{\alpha > 0} \Big\{ 8\alpha + \frac{24}{\sqrt{m}} \int_\alpha^M \sqrt{\ln \mathcal{N}(\mathcal{F}_0, \delta, \|\cdot\|_\infty)}\, \mathrm{d}\delta \Big\} \le \frac{72 c L \cdot \mathrm{diam}(\mathcal{X}_0)}{m} + 48 L\, \mathrm{diam}(\mathcal{X}_0) \sqrt{\frac{c}{m}} - \frac{144 c L \cdot \mathrm{diam}(\mathcal{X}_0)}{m} = 48 L\, \mathrm{diam}(\mathcal{X}_0) \sqrt{\frac{c}{m}} - \frac{72 c L \cdot \mathrm{diam}(\mathcal{X}_0)}{m}. $$

Hence, we have

$$ E_{X, Y} \sup_{\gamma \in \mathrm{Lip}_L(\mathcal{X}_0)} \Big| E_{P_{\mathcal{X}_0}}[\gamma] - \frac{1}{m} \sum_{i=1}^m \gamma(x_i) - \Big( E_{Q_{\mathcal{X}_0}}[\gamma] - \frac{1}{n} \sum_{i=1}^n \gamma(y_i) \Big) \Big| \le 48 L\, \mathrm{diam}(\mathcal{X}_0) \sqrt{\frac{c}{m}} - \frac{72 c L\, \mathrm{diam}(\mathcal{X}_0)}{m} + 48 L\, \mathrm{diam}(\mathcal{X}_0) \sqrt{\frac{c}{n}} - \frac{72 c L\, \mathrm{diam}(\mathcal{X}_0)}{n}. $$

Finally, by a simple change of variable for the probability provided in (41), we prove the theorem. ∎
Remark 10.
Though we do not directly observe the effect of the group invariance in the case $d = 1$ in Theorem 6, the upper bound can be improved in some special cases. Here we analyze Example 1. Replacing the interval $[0, 1]$ by $\mathcal{X}_0 = [0, \frac{1}{|\Sigma|})$ in Lemma 5.16 of [61], there exists a constant $c > 0$ such that

$$ \mathcal{N}(\mathcal{F}_0, \delta, \|\cdot\|_\infty) \le e^{\frac{c L}{|\Sigma|\, \delta}} \quad \text{for } \delta < M = L \cdot \mathrm{diam}(\mathcal{X}_0). $$

Therefore, we have

$$ 8\alpha + \frac{24}{\sqrt{m}} \int_\alpha^M \sqrt{\ln \mathcal{N}(\mathcal{F}_0, \delta, \|\cdot\|_\infty)}\, \mathrm{d}\delta \le 8\alpha + \frac{24}{\sqrt{m}} \int_\alpha^M \sqrt{\frac{c L}{|\Sigma|\, \delta}}\, \mathrm{d}\delta, $$

whose minimum is achieved at $\alpha = \frac{9 c L}{m |\Sigma|}$. This implies that

$$ \inf_{\alpha > 0} \Big\{ 8\alpha + \frac{24}{\sqrt{m}} \int_\alpha^M \sqrt{\ln \mathcal{N}(\mathcal{F}_0, \delta, \|\cdot\|_\infty)}\, \mathrm{d}\delta \Big\} \le \frac{72 c L}{|\Sigma|\, m} + \frac{48 L \sqrt{c}}{|\Sigma| \sqrt{m}} - \frac{144 c L}{|\Sigma|\, m} = \frac{48 L \sqrt{c}}{|\Sigma| \sqrt{m}} - \frac{72 c L}{|\Sigma|\, m}. $$

Hence, we have

$$ E_{X, Y} \sup_{\gamma \in \mathrm{Lip}_L(\mathcal{X}_0)} \Big| E_{P_{\mathcal{X}_0}}[\gamma] - \frac{1}{m} \sum_{i=1}^m \gamma(x_i) - \Big( E_{Q_{\mathcal{X}_0}}[\gamma] - \frac{1}{n} \sum_{i=1}^n \gamma(y_i) \Big) \Big| \le \frac{48 L \sqrt{c}}{|\Sigma| \sqrt{m}} - \frac{72 c L}{|\Sigma|\, m} + \frac{48 L \sqrt{c}}{|\Sigma| \sqrt{n}} - \frac{72 c L}{|\Sigma|\, n}. $$

This matches the numerical result in Figure 3, where the ratio curves are around 4, since our group sizes are $|\Sigma| = 1, 4, 16, 64, 256$, increasing by a factor of 4.
6.2 $(f_\alpha, \Gamma)$-divergence

We assume Assumption 2 also holds in this case.

Theorem 7.
Let $\mathcal{X} = \Sigma \times \mathcal{X}_0$ be a subset of $\mathbb{R}^D$ equipped with the Euclidean distance, $f(x) = f_\alpha(x) = \frac{x^\alpha - 1}{\alpha(\alpha - 1)}$, $\alpha > 1$, and $\Gamma = \mathrm{Lip}_L(\mathcal{X})$. Assume that $\mathcal{N}(\mathcal{X}, \delta) \lesssim \delta^{-d}$ for sufficiently small $\delta$. Suppose $P$ and $Q$ are $\Sigma$-invariant distributions on $\mathcal{X}$. We have:

1) if $d \ge 2$, then for any $s > 0$, $\rho > 0$ and $m$, $n$ sufficiently large, with probability at least $1 - \rho$,

$$ \big| D^{\Gamma}_{f_\alpha}(P \| Q) - D^{\Gamma_\Sigma}_{f_\alpha}(P_m \| Q_n) \big| \le \Big( 8 + \frac{24}{\frac{d+s}{2} - 1} \Big) \Big[ \Big( \frac{9 D_{\mathcal{X}, L}^2}{|\Sigma|\, m} \Big)^{\frac{1}{d+s}} + \Big( \frac{9 D'^{\,2}_{\mathcal{X}, L}}{|\Sigma|\, n} \Big)^{\frac{1}{d+s}} \Big] + \frac{24 \bar{D}_{\mathcal{X}_0, L}}{\sqrt{m}} + \frac{24 \bar{D}'_{\mathcal{X}_0, L}}{\sqrt{n}} + \sqrt{2 \Big( \frac{M_0^2}{m} + \frac{M_1^2}{n} \Big) \ln \frac{1}{\rho}}, $$

where $D_{\mathcal{X}, L}$ depends only on $\mathcal{X}$ and $L$, and $D'_{\mathcal{X}, L}$ depends only on $\mathcal{X}$, $L$ and $\alpha$; $\bar{D}_{\mathcal{X}_0, L}$ depends only on $\mathcal{X}_0$ and $L$, and $\bar{D}'_{\mathcal{X}_0, L}$ depends only on $\mathcal{X}_0$, $L$ and $\alpha$; both are increasing in $\mathcal{X}_0$; $M_0$ and $M_1$ depend only on $\mathcal{X}$, $L$ and $\alpha$;

2) if $d = 1$, then for any $\rho > 0$ and $m$, $n$ sufficiently large, with probability at least $1 - \rho$,

$$ \big| D^{\Gamma}_{f_\alpha}(P \| Q) - D^{\Gamma_\Sigma}_{f_\alpha}(P_m \| Q_n) \big| \le 48 L\, \mathrm{diam}(\mathcal{X}_0) \sqrt{\frac{c}{m}} - \frac{72 c L\, \mathrm{diam}(\mathcal{X}_0)}{m} + 48 L'\, \mathrm{diam}(\mathcal{X}_0) \sqrt{\frac{c}{n}} - \frac{72 c L'\, \mathrm{diam}(\mathcal{X}_0)}{n} + \sqrt{2 \Big( \frac{M_0^2}{m} + \frac{M_1^2}{n} \Big) \ln \frac{1}{\rho}}, $$

where $c > 0$ is an absolute constant independent of $\mathcal{X}_0$; $L'$ depends only on $\mathcal{X}$, $L$ and $\alpha$; $M_0$ and $M_1$ depend only on $\mathcal{X}$, $L$ and $\alpha$.
Before proving this theorem, we first provide the following lemma.

Lemma 7.
$D^{\Gamma}_{f_\alpha}(P \| Q) = D^{\mathcal{F}}_{f_\alpha}(P \| Q)$, where

$$ \mathcal{F} = \{ \gamma \in \mathrm{Lip}_L(\mathcal{X}) : \|\gamma\|_\infty \le (\alpha - 1)^{-1} + L \cdot \mathrm{diam}(\mathcal{X}) \}, $$

and $P$ and $Q$ are probability distributions on $\mathcal{X}$ that are not necessarily $\Sigma$-invariant.

Proof.
For any fixed $\gamma \in \Gamma$, let $h(c) = E_P[\gamma + c] - E_Q[f_\alpha^*(\gamma + c)]$. We know that $\sup_{x \in \mathcal{X}} \gamma(x) - \inf_{x \in \mathcal{X}} \gamma(x) \le L \cdot \mathrm{diam}(\mathcal{X})$, and interchanging the integration with differentiation is allowed by the dominated convergence theorem: $h'(c) = 1 - E_Q[f_\alpha^{*\prime}(\gamma + c)]$, where

$$ f_\alpha^{*\prime}(y) = (\alpha - 1)^{\frac{1}{\alpha - 1}}\, y^{\frac{1}{\alpha - 1}}\, \mathbf{1}_{y \ge 0}. $$

If $\inf_{x \in \mathcal{X}} \gamma(x) > (\alpha - 1)^{-1}$, then $h'(0) < 0$. So there exists some $c_0 < 0$ such that $E_P[\gamma + c_0] - E_Q[f_\alpha^*(\gamma + c_0)] = h(c_0) > h(0) = E_P[\gamma] - E_Q[f_\alpha^*(\gamma)]$. This indicates that the supremum in $D^\Gamma_f(P \| Q)$ is attained only if $\sup_{x \in \mathcal{X}} \gamma(x) \le (\alpha - 1)^{-1} + L \cdot \mathrm{diam}(\mathcal{X})$. On the other hand, if $\sup_{x \in \mathcal{X}} \gamma(x) < 0$, then there exists $c_0 > 0$ with $\sup_{x \in \mathcal{X}} \gamma(x) + c_0 < 0$ such that $E_P[\gamma + c_0] - E_Q[f_\alpha^*(\gamma + c_0)] = E_P[\gamma] + c_0 > E_P[\gamma] = E_P[\gamma] - E_Q[f_\alpha^*(\gamma)]$. This indicates that the supremum in $D^\Gamma_f(P \| Q)$ is attained only if $\inf_{x \in \mathcal{X}} \gamma(x) \ge -L \cdot \mathrm{diam}(\mathcal{X})$. Therefore, the supremum in $D^\Gamma_f(P \| Q)$ is attained only if $\|\gamma\|_\infty \le (\alpha - 1)^{-1} + L \cdot \mathrm{diam}(\mathcal{X})$. ∎
Proof of Theorem 7.
Similar to the beginning of the proof of Theorem 6, we have by Lemma 7 that
| π· π πΌ Ξ ( π β₯ π ) β π· π πΌ Ξ Ξ£ ( π π , π π ) |
| sup πΎ β Ξ Ξ£
β₯ πΎ β₯ β β€ π 0 { πΈ π β’ [ πΎ ] β πΈ π β’ [ π πΌ β β’ ( πΎ ) ] } β sup πΎ β Ξ Ξ£
β₯ πΎ β₯ β β€ π 0 { πΈ π π β’ [ πΎ ] β πΈ π π β’ [ π πΌ β β’ ( πΎ ) ] } |
β€ sup πΎ β Ξ Ξ£
β₯ πΎ β₯ β β€ π 0 | πΈ π β’ [ πΎ ] β 1 π β’ β π
1 π πΎ β’ ( π₯ π ) β ( πΈ π β’ [ π πΌ β β’ ( πΎ ) ] β 1 π β’ β π
1 π π πΌ β β’ ( πΎ β’ ( π¦ π ) ) ) |
sup πΎ β Ξ Ξ£
β₯ πΎ β₯ β β€ π 0 | πΈ π β’ [ πΎ ] β 1 π β’ β π
1 π πΎ β’ ( π 0 β’ ( π₯ π ) ) β ( πΈ π β’ [ π πΌ β β’ ( πΎ ) ] β 1 π β’ β π
1 π π πΌ β β’ ( πΎ β’ ( π 0 β’ ( π¦ π ) ) ) ) |
β€ sup πΎ β Lip πΏ β’ ( π³ 0 )
β₯ πΎ β₯ β β€ π 0 | πΈ π π³ 0 β’ [ πΎ ] β 1 π β’ β π
1 π πΎ β’ ( π 0 β’ ( π₯ π ) ) β ( πΈ π π³ 0 β’ [ π πΌ β β’ ( πΎ ) ] β 1 π β’ β π
1 π π πΌ β β’ ( πΎ β’ ( π 0 β’ ( π¦ π ) ) ) ) |
:= π β’ ( π₯ 1 , β¦ , π₯ π , π¦ 1 , β¦ , π¦ π ) ,
where $\theta_0$ is the same map as defined in (23). The denominator in the exponent when applying McDiarmid's inequality is thus equal to
$$
n\Big(\frac{2M_0}{n}\Big)^2+n\Big(\frac{2M_1}{n}\Big)^2=\frac{4M_0^2}{n}+\frac{4M_1^2}{n},
$$
where $M_0=(\alpha-1)^{-1}+L\cdot\mathrm{diam}(\mathcal{X})$ and $M_1=f_\alpha^*(M_0)$, since for any $\gamma$ with $\|\gamma\|_\infty\le M_0$ we have $\|f_\alpha^*\circ\gamma\|_\infty\le M_1$. Denote by $X'=\{x_1',x_2',\dots,x_n'\}$ and $Y'=\{y_1',y_2',\dots,y_n'\}$ i.i.d. samples drawn from $P_{\mathcal{X}_0}$ and $Q_{\mathcal{X}_0}$, respectively. Note also that $\theta_0(x_1),\dots,\theta_0(x_n)$ and $\theta_0(y_1),\dots,\theta_0(y_n)$ can be viewed as i.i.d. samples on $\mathcal{X}_0$ drawn from $P_{\mathcal{X}_0}$ and $Q_{\mathcal{X}_0}$, respectively, so that the expectation
$$
E_{X,Y}\,g(x_1,\dots,x_n,y_1,\dots,y_n)=E_{X,Y}\sup_{\substack{\gamma\in\mathrm{Lip}_L(\mathcal{X}_0)\\ \|\gamma\|_{\infty}\le M_0}}\bigg|E_{P_{\mathcal{X}_0}}[\gamma]-\frac{1}{n}\sum_{i=1}^{n}\gamma(\theta_0(x_i))-\Big(E_{Q_{\mathcal{X}_0}}[f_\alpha^*(\gamma)]-\frac{1}{n}\sum_{i=1}^{n}f_\alpha^*(\gamma(\theta_0(y_i)))\Big)\bigg|
$$
can be replaced by the equivalent quantity
$$
E_{X,Y}\sup_{\substack{\gamma\in\mathrm{Lip}_L(\mathcal{X}_0)\\ \|\gamma\|_{\infty}\le M_0}}\bigg|E_{P_{\mathcal{X}_0}}[\gamma]-\frac{1}{n}\sum_{i=1}^{n}\gamma(x_i)-\Big(E_{Q_{\mathcal{X}_0}}[f_\alpha^*(\gamma)]-\frac{1}{n}\sum_{i=1}^{n}f_\alpha^*(\gamma(y_i))\Big)\bigg|,
$$
where $X=\{x_1,x_2,\dots,x_n\}$ and $Y=\{y_1,y_2,\dots,y_n\}$ are i.i.d. samples on $\mathcal{X}_0$ drawn from $P_{\mathcal{X}_0}$ and $Q_{\mathcal{X}_0}$, respectively. Then we have
$$
\begin{aligned}
&E_{X,Y}\sup_{\substack{\gamma\in\mathrm{Lip}_L(\mathcal{X}_0)\\ \|\gamma\|_{\infty}\le M_0}}\bigg|E_{P_{\mathcal{X}_0}}[\gamma]-\frac{1}{n}\sum_{i=1}^{n}\gamma(x_i)-\Big(E_{Q_{\mathcal{X}_0}}[f_\alpha^*(\gamma)]-\frac{1}{n}\sum_{i=1}^{n}f_\alpha^*(\gamma(y_i))\Big)\bigg|\\
&=E_{X,Y}\sup_{\substack{\gamma\in\mathrm{Lip}_L(\mathcal{X}_0)\\ \|\gamma\|_{\infty}\le M_0}}\bigg|E_{X'}\Big(\frac{1}{n}\sum_{i=1}^{n}\gamma(x_i')\Big)-\frac{1}{n}\sum_{i=1}^{n}\gamma(x_i)-E_{Y'}\Big(\frac{1}{n}\sum_{i=1}^{n}f_\alpha^*(\gamma(y_i'))\Big)+\frac{1}{n}\sum_{i=1}^{n}f_\alpha^*(\gamma(y_i))\bigg|\\
&\le E_{X,Y,X',Y'}\sup_{\substack{\gamma\in\mathrm{Lip}_L(\mathcal{X}_0)\\ \|\gamma\|_{\infty}\le M_0}}\bigg|\frac{1}{n}\sum_{i=1}^{n}\gamma(x_i')-\frac{1}{n}\sum_{i=1}^{n}\gamma(x_i)-\frac{1}{n}\sum_{i=1}^{n}f_\alpha^*(\gamma(y_i'))+\frac{1}{n}\sum_{i=1}^{n}f_\alpha^*(\gamma(y_i))\bigg|\\
&=E_{X,Y,X',Y',\sigma,\sigma'}\sup_{\substack{\gamma\in\mathrm{Lip}_L(\mathcal{X}_0)\\ \|\gamma\|_{\infty}\le M_0}}\bigg|\frac{1}{n}\sum_{i=1}^{n}\sigma_i\big(\gamma(x_i')-\gamma(x_i)\big)-\frac{1}{n}\sum_{i=1}^{n}\sigma_i'\big(f_\alpha^*(\gamma(y_i'))-f_\alpha^*(\gamma(y_i))\big)\bigg|\\
&\le E_{X,X',\sigma}\sup_{\substack{\gamma\in\mathrm{Lip}_L(\mathcal{X}_0)\\ \|\gamma\|_{\infty}\le M_0}}\bigg|\frac{1}{n}\sum_{i=1}^{n}\sigma_i\big(\gamma(x_i')-\gamma(x_i)\big)\bigg|+E_{Y,Y',\sigma'}\sup_{\substack{\gamma\in\mathrm{Lip}_L(\mathcal{X}_0)\\ \|\gamma\|_{\infty}\le M_0}}\bigg|\frac{1}{n}\sum_{i=1}^{n}\sigma_i'\big(f_\alpha^*(\gamma(y_i'))-f_\alpha^*(\gamma(y_i))\big)\bigg|\\
&\le \inf_{\alpha>0}\bigg\{8\alpha+\frac{24}{\sqrt{n}}\int_{\alpha}^{M_0}\sqrt{\ln\mathcal{N}(\mathcal{F}_0,\delta,\|\cdot\|_\infty)}\,\mathrm{d}\delta\bigg\}+\inf_{\alpha>0}\bigg\{8\alpha+\frac{24}{\sqrt{n}}\int_{\alpha}^{M_1}\sqrt{\ln\mathcal{N}(\mathcal{F}_1,\delta,\|\cdot\|_\infty)}\,\mathrm{d}\delta\bigg\},
\end{aligned}
$$
where $\mathcal{F}_0=\{\gamma\in\mathrm{Lip}_L(\mathcal{X}_0):\|\gamma\|_\infty\le M_0\}$ and $\mathcal{F}_1=\{\gamma\in\mathrm{Lip}_{L'}(\mathcal{X}_0):\|\gamma\|_\infty\le M_1\}$, since for any $\gamma\in\mathcal{F}_0$ we have $\|f_\alpha^*\circ\gamma\|_\infty\le M_1$ and $\big|\tfrac{\mathrm{d}}{\mathrm{d}y}f_\alpha^*(y)\big|\le(\alpha-1)^{\frac{1}{\alpha-1}}M_0^{\frac{1}{\alpha-1}}$ for $|y|\le M_0$, so that $f_\alpha^*\circ\gamma$ is $L'$-Lipschitz, where $M_1=f_\alpha^*(M_0)$ and $L'=L(\alpha-1)^{\frac{1}{\alpha-1}}M_0^{\frac{1}{\alpha-1}}$. The rest of the proof follows from the proof of Theorem 6. ∎
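To see how the entropy-integral (Dudley) bound above behaves numerically, here is a minimal sketch of ours (not from the paper) that evaluates $\inf_{\alpha>0}\{8\alpha+\frac{24}{\sqrt n}\int_\alpha^{M_0}\sqrt{\ln\mathcal N(\mathcal F_0,\delta,\|\cdot\|_\infty)}\,\mathrm d\delta\}$ under the classical covering-number estimate $\ln\mathcal N(\mathcal F_0,\delta)\asymp C\,(L\,\mathrm{diam}(\mathcal X_0)/\delta)^d$ for a ball of $L$-Lipschitz functions on a $d$-dimensional domain; the constant `C` and the name `dudley_bound` are placeholders we introduce for illustration.

```python
import numpy as np

def dudley_bound(n, d, L=1.0, diam=1.0, M0=1.0, C=1.0, grid=200):
    """Numerically evaluate inf_{a>0} 8a + (24/sqrt(n)) * int_a^{M0} sqrt(ln N(delta)) d(delta),
    assuming the Lipschitz-ball covering estimate ln N(delta) ~ C * (L*diam/delta)**d."""
    best = np.inf
    for a in np.linspace(1e-3, M0, grid):
        deltas = np.linspace(a, M0, grid)
        entropy = np.sqrt(C * (L * diam / deltas) ** d)
        # trapezoidal rule for the entropy integral on [a, M0]
        integral = np.sum(0.5 * (entropy[1:] + entropy[:-1]) * np.diff(deltas))
        best = min(best, 8 * a + (24 / np.sqrt(n)) * integral)
    return best
```

Shrinking `diam` (as happens when a larger group leaves a smaller fundamental domain $\mathcal{X}_0$) visibly lowers the bound, mirroring the sample-complexity gain discussed in the paper.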
6.3 MMD
We assume the kernel $k(x,y)$ satisfies Assumption 1. Furthermore, let $\phi(x)$ be the evaluation functional at $x$ in $\mathcal{H}$: $\langle\phi(x),\phi(y)\rangle_{\mathcal{H}}=k(x,y)$ for all $x,y\in\mathcal{X}$.
Theorem 8.
Let $\mathcal{X}=\Sigma\times\mathcal{X}_0$ and let $\mathcal{H}$ be an RKHS on $\mathcal{X}$ whose kernel satisfies Assumption 1. Suppose $P$ and $Q$ are $\Sigma$-invariant distributions on $\mathcal{X}$. Then for $m,n$ sufficiently large and any $\delta>0$, we have with probability at least $1-\delta$,
$$
|\mathrm{MMD}(P,Q)-\mathrm{MMD}_\Sigma(P_n,Q_m)|<2K^{\frac12}\big[1+\kappa(|\Sigma|-1)\big]^{\frac12}\Big(\frac{1}{\sqrt{|\Sigma|n}}+\frac{1}{\sqrt{|\Sigma|m}}\Big)+\sqrt{\frac{2K\big(1+\kappa(|\Sigma|-1)\big)\ln(1/\delta)}{|\Sigma|}\Big(\frac{1}{n}+\frac{1}{m}\Big)},
$$
where $K$ and $\kappa$ are the constants in Assumption 1.
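As a sanity check on the $|\Sigma|$-dependence, the following Monte Carlo sketch (our illustration, not part of the proof) compares the plain and group-symmetrized biased MMD estimates for the sign-flip group $\Sigma=\{\pm1\}$ acting on $\mathbb{R}$, with a Gaussian kernel (so $K=1$) and $P=Q=N(0,1)$, which is $\Sigma$-invariant; since $\mathrm{MMD}(P,Q)=0$, each estimate is itself the estimation error. The helper names are ours.

```python
import numpy as np

def mmd_biased(X, Y, bw=1.0):
    """Biased MMD estimate ||mu_X - mu_Y||_H for a Gaussian kernel."""
    k = lambda A, B: np.exp(-(A[:, None] - B[None, :]) ** 2 / (2 * bw**2))
    sq = k(X, X).mean() - 2 * k(X, Y).mean() + k(Y, Y).mean()
    return np.sqrt(max(sq, 0.0))  # clip tiny negative float noise

def orbit(X):
    """Symmetrized empirical measure S_Sigma[P_n]: each sample contributes
    its whole orbit under Sigma = {+1, -1}, with weight 1/(n*|Sigma|)."""
    return np.concatenate([X, -X])

rng = np.random.default_rng(0)
n, reps = 200, 50
errs_plain, errs_sym = [], []
for _ in range(reps):
    X, Y = rng.normal(size=n), rng.normal(size=n)     # P = Q = N(0,1)
    errs_plain.append(mmd_biased(X, Y))               # |MMD(P_n, Q_n) - MMD(P, Q)|
    errs_sym.append(mmd_biased(orbit(X), orbit(Y)))   # |MMD_Sigma(P_n, Q_n) - MMD(P, Q)|
print(np.mean(errs_plain), np.mean(errs_sym))
```

Because the Gaussian kernel is far from orbit-orthogonal near the origin ($\kappa$ close to $1$ there), the observed gain is smaller than the ideal $\sqrt{|\Sigma|}$ factor, matching the "more nuanced" MMD behavior noted in the abstract.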
Before proving the theorem, we provide the following lemma.
Lemma 8.
Suppose the kernel of an RKHS satisfies Assumption 1, and $\sigma=\{\sigma_1,\dots,\sigma_n\}$ is a set of independent random variables, each taking the values $\{-1,1\}$ with equal probability. Then we have
$$
E_\sigma\sup_{\|\gamma\|_{\mathcal{H}}\le1}\bigg|\frac{1}{n|\Sigma|}\sum_{i=1}^{n}\sigma_i\sum_{\theta\in\Sigma}\gamma(\theta x_i)\bigg|\le\frac{K^{\frac12}\big[1+\kappa(|\Sigma|-1)\big]^{\frac12}}{\sqrt{|\Sigma|n}}.
$$
Proof.
Since the witness function attaining the supremum is explicit, we can write
$$
\begin{aligned}
E_\sigma\sup_{\|\gamma\|_{\mathcal{H}}\le1}\bigg|\frac{1}{n|\Sigma|}\sum_{i=1}^{n}\sigma_i\sum_{\theta\in\Sigma}\gamma(\theta x_i)\bigg|
&=E_\sigma\bigg\|\frac{1}{n|\Sigma|}\sum_{i=1}^{n}\sigma_i\sum_{\theta\in\Sigma}\phi(\theta x_i)\bigg\|_{\mathcal{H}}\\
&=\frac{1}{n|\Sigma|}\,E_\sigma\bigg[\sum_{i,i'=1}^{n}\sigma_i\sigma_{i'}\sum_{\theta,\theta'\in\Sigma}k(\theta x_i,\theta' x_{i'})\bigg]^{\frac12}\\
&\le\frac{1}{n|\Sigma|}\bigg[E_\sigma\sum_{i,i'=1}^{n}\sigma_i\sigma_{i'}\sum_{\theta,\theta'\in\Sigma}k(\theta x_i,\theta' x_{i'})\bigg]^{\frac12}\\
&=\frac{1}{n|\Sigma|}\bigg[E_\sigma\sum_{i=1}^{n}(\sigma_i)^2\sum_{\theta,\theta'\in\Sigma}k(\theta x_i,\theta' x_i)\bigg]^{\frac12}\\
&\le\frac{1}{n|\Sigma|}\Big[n\big(|\Sigma|K+\kappa(|\Sigma|^2-|\Sigma|)K\big)\Big]^{\frac12}\\
&=\frac{K^{\frac12}\big[1+\kappa(|\Sigma|-1)\big]^{\frac12}}{\sqrt{|\Sigma|n}},
\end{aligned}
$$
where the first inequality follows from Jensen's inequality. ∎
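The inequality of Lemma 8 can also be checked numerically. The sketch below is ours: it uses a Gaussian kernel on $\mathbb{R}$ with the sign-flip group (so $K=1$) and takes an empirical $\kappa$ as the largest cross-orbit kernel value over the sample; it Monte Carlo-estimates the left-hand side via the Gram identity $\big\|\sum_i\sigma_i\sum_\theta\phi(\theta x_i)\big\|_{\mathcal{H}}^2=\sum_{i,i'}\sigma_i\sigma_{i'}\sum_{\theta,\theta'}k(\theta x_i,\theta' x_{i'})$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, S, bw = 50, 2, 1.0            # sample size, |Sigma|, kernel bandwidth
x = rng.normal(size=n)
k = lambda a, b: np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * bw**2))

# G[i, i'] = sum over theta, theta' in Sigma = {+1, -1} of k(theta*x_i, theta'*x_{i'})
orbits = [x, -x]
G = sum(k(a, b) for a in orbits for b in orbits)   # positive semidefinite Gram matrix

# Monte Carlo estimate of the left-hand side of Lemma 8 over Rademacher draws
vals = []
for _ in range(2000):
    s = rng.choice([-1.0, 1.0], size=n)
    vals.append(np.sqrt(s @ G @ s) / (n * S))
lhs = np.mean(vals)

# Right-hand side with K = 1 (Gaussian kernel) and an empirical kappa bound:
# the largest cross-orbit value k(x_i, -x_i) = exp(-(2 x_i)^2 / (2 bw^2))
kappa = np.max(np.exp(-(2 * x) ** 2 / (2 * bw**2)))
rhs = np.sqrt((1 + kappa * (S - 1)) / (S * n))
print(lhs, rhs)
```

The Monte Carlo mean should fall below the bound, with slack coming from Jensen's inequality and from bounding each orbit term by its maximum.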
Proof of Theorem 8.
The proof below is a generalization of the proof of Theorem 7 in [31], which does not need the notion of covering numbers thanks to the structure of the RKHS. We have
$$
\begin{aligned}
&|\mathrm{MMD}(P,Q)-\mathrm{MMD}_\Sigma(P_n,Q_m)|\\
&=|\mathrm{MMD}(P,Q)-\mathrm{MMD}(S_\Sigma[P_n],S_\Sigma[Q_m])|\\
&=\bigg|\sup_{\|\gamma\|_{\mathcal{H}}\le1}\big\{E_P[\gamma]-E_Q[\gamma]\big\}-\sup_{\|\gamma\|_{\mathcal{H}}\le1}\big\{E_{S_\Sigma[P_n]}[\gamma]-E_{S_\Sigma[Q_m]}[\gamma]\big\}\bigg|\\
&=\bigg|\sup_{\|\gamma\|_{\mathcal{H}}\le1}\big\{E_P[\gamma]-E_Q[\gamma]\big\}-\sup_{\|\gamma\|_{\mathcal{H}}\le1}\bigg\{\frac{1}{n|\Sigma|}\sum_{i=1}^{n}\sum_{\theta\in\Sigma}\gamma(\theta x_i)-\frac{1}{m|\Sigma|}\sum_{j=1}^{m}\sum_{\theta\in\Sigma}\gamma(\theta y_j)\bigg\}\bigg|\\
&\le\sup_{\|\gamma\|_{\mathcal{H}}\le1}\bigg|E_P[\gamma]-E_Q[\gamma]-\frac{1}{n|\Sigma|}\sum_{i=1}^{n}\sum_{\theta\in\Sigma}\gamma(\theta x_i)+\frac{1}{m|\Sigma|}\sum_{j=1}^{m}\sum_{\theta\in\Sigma}\gamma(\theta y_j)\bigg|\\
&=: g(x_1,x_2,\dots,x_n,y_1,y_2,\dots,y_m).
\end{aligned}
$$
Now we estimate the upper bound on the change of $g$ if one of the $x_i$'s is replaced:
$$
\begin{aligned}
&|g(x_1,\dots,x_i,\dots,y_1,\dots,y_m)-g(x_1,\dots,\tilde{x}_i,\dots,y_1,\dots,y_m)|\\
&\le\sup_{\|\gamma\|_{\mathcal{H}}\le1}\bigg|\frac{1}{n|\Sigma|}\sum_{\theta\in\Sigma}\big(\gamma(\theta x_i)-\gamma(\theta\tilde{x}_i)\big)\bigg|\\
&=\frac{1}{n|\Sigma|}\bigg\|\sum_{\theta\in\Sigma}\big(\phi(\theta x_i)-\phi(\theta\tilde{x}_i)\big)\bigg\|_{\mathcal{H}}\\
&\le\frac{1}{n|\Sigma|}\bigg(\bigg\|\sum_{\theta\in\Sigma}\phi(\theta x_i)\bigg\|_{\mathcal{H}}+\bigg\|\sum_{\theta\in\Sigma}\phi(\theta\tilde{x}_i)\bigg\|_{\mathcal{H}}\bigg).
\end{aligned}
\tag{43}
$$
To bound $\big\|\sum_{\theta\in\Sigma}\phi(\theta x_i)\big\|_{\mathcal{H}}$, we have
$$
\bigg\|\sum_{\theta\in\Sigma}\phi(\theta x_i)\bigg\|_{\mathcal{H}}=\bigg[\sum_{\theta\in\Sigma}k(\theta x_i,\theta x_i)+\sum_{\theta\neq\theta'}k(\theta x_i,\theta' x_i)\bigg]^{\frac12}\le\Big[|\Sigma|\,K+(|\Sigma|^2-|\Sigma|)\,\kappa K\Big]^{\frac12}.
$$
The upper bound on the change of $g$ when one of the $y_j$'s is replaced can be derived in the same way. To apply McDiarmid's inequality, the denominator in the exponent is thus
$$
n\cdot\frac{4\big[|\Sigma|K+(|\Sigma|^2-|\Sigma|)\kappa K\big]}{n^2|\Sigma|^2}+m\cdot\frac{4\big[|\Sigma|K+(|\Sigma|^2-|\Sigma|)\kappa K\big]}{m^2|\Sigma|^2}\le4K\Big(\frac{1}{n}+\frac{1}{m}\Big)\cdot\frac{1+\kappa(|\Sigma|-1)}{|\Sigma|}.
$$
Moreover, we can extend inequality (16) in [31] to take the group invariance into account. Denote by $X'=\{x_1',x_2',\dots,x_n'\}$ and $Y'=\{y_1',y_2',\dots,y_m'\}$ i.i.d. samples drawn from $P$ and $Q$, and let $\sigma=\{\sigma_1,\dots,\sigma_n\}$ and $\sigma'=\{\sigma_1',\dots,\sigma_m'\}$ be sets of independent random variables, each taking the values $\{-1,1\}$ with equal probability. We have
$$
\begin{aligned}
&E_{X,Y}\,g(x_1,x_2,\dots,x_n,y_1,y_2,\dots,y_m)\\
&=E_{X,Y}\sup_{\|\gamma\|_{\mathcal{H}}\le1}\bigg|E_P[\gamma]-E_Q[\gamma]-\frac{1}{n|\Sigma|}\sum_{i=1}^{n}\sum_{\theta\in\Sigma}\gamma(\theta x_i)+\frac{1}{m|\Sigma|}\sum_{j=1}^{m}\sum_{\theta\in\Sigma}\gamma(\theta y_j)\bigg|\\
&=E_{X,Y}\sup_{\|\gamma\|_{\mathcal{H}}\le1}\bigg|E_{X'}\bigg(\frac{1}{n|\Sigma|}\sum_{i=1}^{n}\sum_{\theta\in\Sigma}\gamma(\theta x_i')\bigg)-E_{Y'}\bigg(\frac{1}{m|\Sigma|}\sum_{j=1}^{m}\sum_{\theta\in\Sigma}\gamma(\theta y_j')\bigg)\\
&\qquad\qquad\qquad-\frac{1}{n|\Sigma|}\sum_{i=1}^{n}\sum_{\theta\in\Sigma}\gamma(\theta x_i)+\frac{1}{m|\Sigma|}\sum_{j=1}^{m}\sum_{\theta\in\Sigma}\gamma(\theta y_j)\bigg|\\
&\le E_{X,Y,X',Y'}\sup_{\|\gamma\|_{\mathcal{H}}\le1}\bigg|\frac{1}{n|\Sigma|}\sum_{i=1}^{n}\sum_{\theta\in\Sigma}\big(\gamma(\theta x_i')-\gamma(\theta x_i)\big)-\frac{1}{m|\Sigma|}\sum_{j=1}^{m}\sum_{\theta\in\Sigma}\big(\gamma(\theta y_j')-\gamma(\theta y_j)\big)\bigg|\\
&=E_{X,Y,X',Y',\sigma,\sigma'}\sup_{\|\gamma\|_{\mathcal{H}}\le1}\bigg|\frac{1}{n|\Sigma|}\sum_{i=1}^{n}\sigma_i\sum_{\theta\in\Sigma}\big(\gamma(\theta x_i')-\gamma(\theta x_i)\big)-\frac{1}{m|\Sigma|}\sum_{j=1}^{m}\sigma_j'\sum_{\theta\in\Sigma}\big(\gamma(\theta y_j')-\gamma(\theta y_j)\big)\bigg|\\
&\le E_{X,X',\sigma}\sup_{\|\gamma\|_{\mathcal{H}}\le1}\bigg|\frac{1}{n|\Sigma|}\sum_{i=1}^{n}\sigma_i\sum_{\theta\in\Sigma}\big(\gamma(\theta x_i')-\gamma(\theta x_i)\big)\bigg|+E_{Y,Y',\sigma'}\sup_{\|\gamma\|_{\mathcal{H}}\le1}\bigg|\frac{1}{m|\Sigma|}\sum_{j=1}^{m}\sigma_j'\sum_{\theta\in\Sigma}\big(\gamma(\theta y_j')-\gamma(\theta y_j)\big)\bigg|\\
&\le2\bigg[\frac{K^{\frac12}\big[1+\kappa(|\Sigma|-1)\big]^{\frac12}}{\sqrt{|\Sigma|n}}+\frac{K^{\frac12}\big[1+\kappa(|\Sigma|-1)\big]^{\frac12}}{\sqrt{|\Sigma|m}}\bigg],
\end{aligned}
$$
where the last inequality is due to Lemma 8. Therefore, by McDiarmid's theorem, we have
$$
\mathbb{P}\bigg(|\mathrm{MMD}(P,Q)-\mathrm{MMD}_\Sigma(P_n,Q_m)|-2K^{\frac12}\big[1+\kappa(|\Sigma|-1)\big]^{\frac12}\Big(\frac{1}{\sqrt{|\Sigma|n}}+\frac{1}{\sqrt{|\Sigma|m}}\Big)\ge\epsilon\bigg)\le\exp\bigg(-\frac{\epsilon^2mn|\Sigma|}{2K(m+n)\big(1+\kappa(|\Sigma|-1)\big)}\bigg).
$$
By a change of variable, we have with probability at least $1-\delta$,
$$
|\mathrm{MMD}(P,Q)-\mathrm{MMD}_\Sigma(P_n,Q_m)|<2K^{\frac12}\big[1+\kappa(|\Sigma|-1)\big]^{\frac12}\Big(\frac{1}{\sqrt{|\Sigma|n}}+\frac{1}{\sqrt{|\Sigma|m}}\Big)+\sqrt{\frac{2K\big(1+\kappa(|\Sigma|-1)\big)\ln(1/\delta)}{|\Sigma|}\Big(\frac{1}{n}+\frac{1}{m}\Big)}.
$$
∎
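For reference, the final bound can be packaged as a small helper (the function name is ours) that makes the $|\Sigma|$-scaling explicit: with $\kappa=0$ the bound shrinks by exactly $\sqrt{|\Sigma|}$ relative to the no-symmetry case, while $\kappa=1$ erases the gain entirely.

```python
import numpy as np

def mmd_deviation_bound(n, m, group_size, delta, K=1.0, kappa=0.0):
    """High-probability bound of Theorem 8 on |MMD(P,Q) - MMD_Sigma(P_n, Q_m)|.
    K and kappa are the kernel constants of Assumption 1; group_size = |Sigma|."""
    c = 1.0 + kappa * (group_size - 1)
    # Rademacher (symmetrization) term
    rademacher = 2 * np.sqrt(K * c) * (1 / np.sqrt(group_size * n)
                                       + 1 / np.sqrt(group_size * m))
    # McDiarmid concentration term after the change of variable
    mcdiarmid = np.sqrt(2 * K * c * np.log(1 / delta) / group_size * (1 / n + 1 / m))
    return rademacher + mcdiarmid
```

For instance, `mmd_deviation_bound(1000, 1000, 8, 0.05, kappa=0.0)` is smaller than the `group_size=1` bound by the full factor $\sqrt{8}$, illustrating the kernel-dependent nature of the improvement.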
Acknowledgements
The research of M.K. and L.R.-B. was partially supported by the Air Force Office of Scientific Research (AFOSR) under the grant FA9550-21-1-0354. The research of M.K. and L.R.-B. was partially supported by the National Science Foundation (NSF) under the grants DMS-2008970 and TRIPODS CISE-1934846. The research of Z.C. and W.Z. was partially supported by NSF under the grants DMS-2052525 and DMS-2140982. We thank Yulong Lu for insightful discussions.
References
[1] M. Arjovsky, S. Chintala, and L. Bottou, Wasserstein generative adversarial networks, in International Conference on Machine Learning, PMLR, 2017, pp. 214–223.
[2] P. L. Bartlett, D. J. Foster, and M. J. Telgarsky, Spectrally-normalized margin bounds for neural networks, Advances in Neural Information Processing Systems, 30 (2017).
[3] M. I. Belghazi, A. Baratin, S. Rajeshwar, S. Ozair, Y. Bengio, A. Courville, and D. Hjelm, Mutual information neural estimation, in Proceedings of the 35th International Conference on Machine Learning, PMLR, 2018, pp. 531–540.
[4] P. J. Bickel, A distribution free version of the Smirnov two sample test in the p-variate case, The Annals of Mathematical Statistics, 40 (1969), pp. 1–23.
[5] M. Biloš and S. Günnemann, Scalable normalizing flows for permutation invariant densities, in International Conference on Machine Learning, PMLR, 2021, pp. 957–967.
[6] J. Birrell, P. Dupuis, M. A. Katsoulakis, Y. Pantazis, and L. Rey-Bellet, (f, Γ)-divergences: Interpolating between f-divergences and integral probability metrics, Journal of Machine Learning Research, 23 (2022), pp. 1–70.
[7] J. Birrell, P. Dupuis, M. A. Katsoulakis, L. Rey-Bellet, and J. Wang, Variational representations and neural network estimation of Rényi divergences, SIAM Journal on Mathematics of Data Science, 3 (2021), pp. 1093–1116.
[8] J. Birrell, M. A. Katsoulakis, and Y. Pantazis, Optimizing variational representations of divergences and accelerating their statistical estimation, IEEE Transactions on Information Theory, (2022).
[9] J. Birrell, M. A. Katsoulakis, L. Rey-Bellet, and W. Zhu, Structure-preserving GANs, Proceedings of the 39th International Conference on Machine Learning, PMLR 162, (2022), pp. 1982–2020.
[10] J. Birrell, Y. Pantazis, P. Dupuis, M. A. Katsoulakis, and L. Rey-Bellet, Function-space regularized Rényi divergences, arXiv preprint arXiv:2210.04974, (2022).
[11] D. Boyda, G. Kanwar, S. Racanière, D. J. Rezende, M. S. Albergo, K. Cranmer, D. C. Hackett, and P. E. Shanahan, Sampling using SU(N) gauge equivariant flows, Physical Review D, 103 (2021), p. 074504.
[12] G. E. Bredon, Introduction to compact transformation groups, Academic Press, 1972.
[13] M. Broniatowski and A. Keziou, Parametric estimation and tests through divergences and the duality technique, Journal of Multivariate Analysis, 100 (2009), pp. 16–36.
[14] O. Catoni, PAC-Bayesian supervised classification: the thermodynamics of statistical learning, Lecture Notes–Monograph Series, 2008.
[15] X. Cheng and Y. Xie, Kernel two-sample tests for manifold data, arXiv preprint arXiv:2105.03425, (2021).
[16] K. Chowdhary and P. Dupuis, Distinguishing and integrating aleatoric and epistemic variation in uncertainty quantification, ESAIM: Mathematical Modelling and Numerical Analysis, 47 (2013), pp. 635–662.
[17] F. Ciompi, Y. Jiao, and J. van der Laak, Lymphocyte assessment hackathon (LYSTO), Oct. 2019.
[18] T. Cohen and M. Welling, Group equivariant convolutional networks, in Proceedings of The 33rd International Conference on Machine Learning, PMLR, 2016, pp. 2990–2999.
[19] T. S. Cohen, M. Geiger, and M. Weiler, A general theory of equivariant CNNs on homogeneous spaces, in Advances in Neural Information Processing Systems, vol. 32, Curran Associates, Inc., 2019.
[20] N. Dey, A. Chen, and S. Ghafurian, Group equivariant generative adversarial networks, in International Conference on Learning Representations, 2021.
[21] P. Dupuis and R. S. Ellis, A weak convergence approach to the theory of large deviations, vol. 902, John Wiley & Sons, 2011.
[22] P. Dupuis, M. A. Katsoulakis, Y. Pantazis, and P. Plechac, Path-space information bounds for uncertainty quantification and sensitivity analysis of stochastic dynamics, SIAM/ASA Journal on Uncertainty Quantification, 4 (2016), pp. 80–111.
[23] G. B. Folland, Real analysis: modern techniques and their applications, vol. 40, John Wiley & Sons, 1999.
[24] N. Fournier and A. Guillin, On the rate of convergence in Wasserstein distance of the empirical measure, Probability Theory and Related Fields, 162 (2015), pp. 707–738.
[25] S. Gao, G. Ver Steeg, and A. Galstyan, Efficient estimation of mutual information for strongly dependent variables, in Artificial Intelligence and Statistics, PMLR, 2015, pp. 277–286.
[26] V. Garcia Satorras, E. Hoogeboom, F. Fuchs, I. Posner, and M. Welling, E(n) equivariant normalizing flows, Advances in Neural Information Processing Systems, 34 (2021).
[27] A. Genevay, L. Chizat, F. Bach, M. Cuturi, and G. Peyré, Sample complexity of Sinkhorn divergences, in The 22nd International Conference on Artificial Intelligence and Statistics, PMLR, 2019, pp. 1574–1583.
[28] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, Generative adversarial nets, Advances in Neural Information Processing Systems, 27 (2014).
[29] L.-A. Gottlieb, A. Kontorovich, and R. Krauthgamer, Efficient regression in metric spaces via approximate Lipschitz extension, IEEE Transactions on Information Theory, 63 (2017), pp. 4838–4849.
[30] A. Gretton, K. Borgwardt, M. Rasch, B. Schölkopf, and A. Smola, A kernel method for the two-sample-problem, Advances in Neural Information Processing Systems, 19 (2006).
[31] A. Gretton, K. M. Borgwardt, M. J. Rasch, B. Schölkopf, and A. Smola, A kernel two-sample test, The Journal of Machine Learning Research, 13 (2012), pp. 723–773.
[32] A. Gretton, K. Fukumizu, C. Teo, L. Song, B. Schölkopf, and A. Smola, A kernel statistical test of independence, Advances in Neural Information Processing Systems, 20 (2007).
[33] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. C. Courville, Improved training of Wasserstein GANs, in Advances in Neural Information Processing Systems, vol. 30, Curran Associates, Inc., 2017.
[34] A. Hyvarinen, J. Karhunen, and E. Oja, Independent component analysis, Studies in Informatics and Control, 11 (2002), pp. 205–207.
[35] K. Kandasamy, A. Krishnamurthy, B. Poczos, L. Wasserman, et al., Nonparametric von Mises estimators for entropies, divergences and mutual informations, Advances in Neural Information Processing Systems, 28 (2015).
[36] B. Kégl, Intrinsic dimension estimation using packing numbers, Advances in Neural Information Processing Systems, 15 (2002).
[37] J. B. Kinney and G. S. Atwal, Equitability, mutual information, and the maximal information coefficient, Proceedings of the National Academy of Sciences, 111 (2014), pp. 3354–3359.
[38] C. Kipnis and C. Landim, Scaling limits of interacting particle systems, Springer-Verlag, 1999.
[39] J. Köhler, L. Klein, and F. Noé, Equivariant flows: sampling configurations for multi-body systems with symmetric energies, arXiv preprint arXiv:1910.00753, (2019).
[40] A. N. Kolmogorov, ε-entropy and ε-capacity of sets in function spaces, American Mathematical Society Translations, 17 (1961), pp. 277–364.
[41] T. Kühn, Covering numbers of Gaussian reproducing kernel Hilbert spaces, Journal of Complexity, 27 (2011), pp. 489–499.
[42] S. Kullback and R. A. Leibler, On information and sufficiency, The Annals of Mathematical Statistics, 22 (1951), pp. 79–86.
[43] J. Liu, A. Kumar, J. Ba, J. Kiros, and K. Swersky, Graph normalizing flows, Advances in Neural Information Processing Systems, 32 (2019).
[44] D. A. McAllester, PAC-Bayesian model averaging, in Proceedings of the Twelfth Annual Conference on Computational Learning Theory, COLT '99, Association for Computing Machinery, 1999, pp. 164–170.
[45] M. Mohri, A. Rostamizadeh, and A. Talwalkar, Foundations of machine learning, MIT Press, 2018.
[46] A. Müller, Integral probability metrics and their generating classes of functions, Advances in Applied Probability, 29 (1997), pp. 429–443.
[47] X. Nguyen, M. J. Wainwright, and M. I. Jordan, Nonparametric estimation of the likelihood ratio and divergence functionals, in 2007 IEEE International Symposium on Information Theory, IEEE, 2007, pp. 2016–2020.
[48] X. Nguyen, M. J. Wainwright, and M. I. Jordan, Estimating divergence functionals and the likelihood ratio by convex risk minimization, IEEE Transactions on Information Theory, 56 (2010), pp. 5847–5861.
[49] S. Nietert, Z. Goldfeld, and K. Kato, Smooth p-Wasserstein distance: structure, empirical approximation, and statistical applications, in International Conference on Machine Learning, PMLR, 2021, pp. 8172–8183.
[50] S. Nowozin, B. Cseke, and R. Tomioka, f-GAN: Training generative neural samplers using variational divergence minimization, Advances in Neural Information Processing Systems, 29 (2016).
[51] L. Paninski, Estimation of entropy and mutual information, Neural Computation, 15 (2003), pp. 1191–1253.
[52] B. Póczos, L. Xiong, and J. Schneider, Nonparametric divergence estimation with applications to machine learning on distributions, in Proceedings of the Twenty-Seventh Conference on Uncertainty in Artificial Intelligence, 2011, pp. 599–608.
[53] D. J. Rezende, S. Racanière, I. Higgins, and P. Toth, Equivariant Hamiltonian flows, arXiv preprint arXiv:1909.13739, (2019).
[54] A. Ruderman, M. D. Reid, D. García-García, and J. Petterson, Tighter variational representations of f-divergences via restriction to probability measures, in Proceedings of the 29th International Conference on Machine Learning, ICML '12, Omnipress, 2012, pp. 1155–1162.
[55] J. Shawe-Taylor and R. C. Williamson, A PAC analysis of a Bayesian estimator, in Proceedings of the Tenth Annual Conference on Computational Learning Theory, COLT '97, Association for Computing Machinery, 1997, pp. 2–9.
[56] J. Sokolic, R. Giryes, G. Sapiro, and M. Rodrigues, Generalization error of invariant classifiers, in Artificial Intelligence and Statistics, PMLR, 2017, pp. 1094–1103.
[57] S. Sreekumar and Z. Goldfeld, Neural estimation of statistical divergences, Journal of Machine Learning Research, 23 (2022), pp. 1–75.
[58] B. K. Sriperumbudur, K. Fukumizu, A. Gretton, B. Schölkopf, and G. R. Lanckriet, On the empirical estimation of integral probability metrics, Electronic Journal of Statistics, 6 (2012), pp. 1550–1599.
[59] B. Tahmasebi and S. Jegelka, Sample complexity bounds for estimating probability divergences under invariances, in Forty-first International Conference on Machine Learning, 2024.
[60] I. Tolstikhin, O. Bousquet, S. Gelly, and B. Schoelkopf, Wasserstein auto-encoders, in International Conference on Learning Representations, 2018.
[61] R. Van Handel, Probability in high dimension, tech. rep., Princeton University, 2014.
[62] R. Vershynin, High-dimensional probability: An introduction with applications in data science, vol. 47, Cambridge University Press, 2018.
[63] M. Weiler and G. Cesa, General E(2)-equivariant steerable CNNs, in Advances in Neural Information Processing Systems, vol. 32, Curran Associates, Inc., 2019.
[64] Q. Zhang, S. Filippi, A. Gretton, and D. Sejdinovic, Large-scale kernel methods for independence testing, Statistics and Computing, 28 (2018), pp. 113–130.
[65] S. Zhao, Z. Liu, J. Lin, J.-Y. Zhu, and S. Han, Differentiable augmentation for data-efficient GAN training, Advances in Neural Information Processing Systems, 33 (2020), pp. 7559–7570.
[66] D.-X. Zhou, The covering number in learning theory, Journal of Complexity, 18 (2002), pp. 739–767.